Moonshots with Peter Diamandis - DeepSeek vs. Open AI - The State of AI w/ Emad Mostaque & Salim Ismail | EP #146
Episode Date: January 29, 2025In this episode, Emad, Salim, and Peter discuss the recent DeepSeek news, the China vs. USA AI race, and what Emad has been working on. Recorded on Jan 29th, 2024 Views are my own thoughts; not Fin...ancial, Medical, or Legal Advice. Emad is the founder of Intelligent Internet and the former CEO and Co-Founder of Stability AI, a company funding the development of open-source music- and image-generating systems such as Dance Diffusion, Stable Diffusion, and Stable Video 3D. Salim Ismail is a serial entrepreneur and technology strategist well known for his expertise in Exponential organizations. He is the Founding Executive Director of Singularity University and the founder and chairman of ExO Works and OpenExO. Emad on X:https://x.com/EMostaque Learn more about Intelligent Internet: https://ii.inc/ Read Emad’s Paper: https://x.com/ii_posts/status/1877018732733612367 Join Salim's ExO Community: https://openexo.com Salim’s X: https://twitter.com/salimismail ____________ I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors: Get started with Fountain Life and become the CEO of your health: https://fountainlife.com/peter/ AI-powered precision diagnosis you NEED for a healthy gut: https://www.viome.com/peter Get 15% off OneSkin with the code PETER at  https://www.oneskin.co/ #oneskinpod _____________ I send weekly emails with the latest insights and trends on today’s and tomorrow’s exponential technologies. Stay ahead of the curve, and sign up now: Tech Blog _____________ Connect With Peter: Twitter Instagram Youtube Moonshots
Transcript
Discussion (0)
Last February, I said DeepSeek was one of my favorite AI companies out there.
If you look at each of the innovations they made, it was largely engineering innovations.
Do we see DeepSeek dethroning or reducing the valuation of these companies at all?
I think from my opinion, it should increase the valuation.
The US versus China AI wars.
This is a winner-take-all type of game. This is the biggest crisis that we have coming because we're heading into a future now where
I'd say every single AI leader says that AGI is three to five years away.
Welcome to Moonshots, an episode of WTF Just Happened in Tech with Saleem Ismail and a special guest, Imad Mustak.
You know, Imad is the founder of Stability AI, a company that had been the leading open source developer, music and image generation with 300 million open source downloads.
Imad today is the founder of Intelligent Internet.
Imad today is the founder of Intelligent Internet. He'll be speaking about that.
But in this episode, we're doing a deep dive into three subjects, DeepSeek, of course,
and the constant disruption that's coming in every market at an accelerating pace.
We'll be diving into AI safety, what's going on at OpenAI as people are starting to leave,
especially from their AI alignment team?
And then we'll chat with Imad about Intelligent Internet. What's his plans? Where is he going?
Alright, let's dive into this episode. For me, this is an extraordinary week of accelerating change.
And as always, help me spread the message, subscribe, tell your friends. This is the conversation that I think is probably one of the most important that we can be having
right here, right now.
Let's jump into Moonshots.
Welcome to another episode of Moonshots.
WTF just happened in tech this week.
I'm here with two besties, Salim Ismail, the CEO of OpenEXO and Imad Mustak, the CEO of Intelligent Internet.
No stranger to this podcast and it's been a crazy week.
We kicked it off with sort of an internet, AI market meltdown on the news of DeepSeek
and the concussion waves keep coming.
What does it all mean?
We're here to have that conversation.
Imad, good morning to you or good evening.
You're in London today?
Yeah, I'm in London.
Good morning to you.
It was a pleasure.
Yeah, and Salim, you're in Miami or New York?
Yeah.
All right.
We've got three different time zones around the globe.
We need someone in Hong Kong to just balance this thing out, but we'll get there soon enough.
So Imad, DeepSeek, no surprise for you.
Was this something expected or was this something like, wow?
I think it was actually expected.
Like last February, I said DeepSeek are one of my favorite AI companies out there.
Like they took the original ethos that we had at Stability, another ex-HedgeFund manager, and they released amazing models open. I think when the AI community first started to see this was
probably around about summer of last year, they released DeepSeek Coda, which hit the top of the
code rankings. So they started with replicating Llama from Meta.
Then they broke forward.
And in fact, the algorithms there
are some of the algorithms they use now.
And then in December, like a month ago,
they released DeepSeek V3, which was actually
what this $6 million training cost model was.
And it matched GPT-40 and all these other models.
It didn't match 01.1 at that point,
but we all thought they'll figure out how to do it
and guess what they did?
And it generalizes to any model.
What was the, it felt like the internet broke
over the weekend as the announcement was made.
What was it that got everybody so hot and bothered instantly
since it's been around for some time.
So there was the base model, that was the chat GPT equivalent model in December, and
that proved that you could train these models on a fraction of the cost.
The next thing was this reasoning model R1, where when you type it, it shows you the reasoning,
it takes a bit longer to think, has better quality output.
That actually came out last Monday, but it was this weekend that this narrative cascade
happened and now you've got your mum and your aunt asking about it, you know, and it's front
news and then Nvidia cracked, etc. I think what it was is, remember the early days of chat GPT or
stable diffusion on image, the immediacy of response and that new paradigm. When OpenAI released O1,
of response and that new paradigm. When OpenAI released O1, this thinking model, it was amazing but it was a bit like using ChatGPT. You put something in
it says I'm thinking and it gives you a response. Because what they did is they
hid the chain of thought reasoning. With R1 it actually shows you this is how I'm
thinking about it, this is how I'm breaking down the problem and it feels like you have
another person on the other side.
And as more and more people use that
and they saw the performance benchmarks,
it built up into this cascade
because it was so immediately usable.
And they realized it was open source.
So people took the smaller versions of it
and started running it on their laptops.
If it was just a closed model
that didn't have the chain of thought but matched O1,
it wouldn't have had that.
If OpenAI had released the chain of thoughts, then I don't think it would have had the same
thing.
It said it was this confluence of things that made people realize, oh my gosh, what is this
new thing and how has it been done?
And it challenges our assumptions.
Amazing.
Salim, you and I on the phone over the weekend, like, huh, this is real, this is happening.
What were your thoughts when you started?
I have two thoughts.
I love the timing that they launched it on the day of the inauguration as a bit of a
slap in the face to the incoming administration saying we will sanction you to bits, et cetera,
et cetera.
And here's how the sanctions work.
The second thought I've had throughout the last 10 days
or so is that we are expecting demonetization.
And as the power of these models is accelerating exponentially
and blowing our minds, the demonetization
should also kind of surprise us in the same way.
And so the fact that they're able to do this at 1 10th
or 100th or whatever, now how they got there is
obviously an open question, but the fact that that has been achieved shouldn't be a big surprise
on the curves that we're looking at, right? It's incredible to see, but we shouldn't be surprised
if we ate our own dog food. Go ahead, go ahead, go ahead, Iman. I think someone noted it was actually the five year anniversary of the Wuhan lab
leak as well, except for this one was delivered.
Oh no. I'm not going to go there. But you know, I put out in my blog that followed the DeepSeek announcement,
this is just going to be what is the new normal.
You know, when Netflix ate Blockbuster for lunch,
this is just going to be happening over and over again,
the speed at which heads are turning and snapping
across every industry.
It's interesting, because when we saw chat GPT announced
and it got to a million users in five days
and 100 million users in two months,
people were like, can this ever be replicated again?
And the answer is yes and faster.
So Imad, could you give us a quick rundown of what actually, how actually DeepSeq compares
to GPT-40, GPT-01, any of the other models?
There's a lot of claims being made about how many GPUs it was created on, how much money,
size of teams, and it was those comparative numbers that made it a big deal.
If it was just an equivalent model but was not being done at a fraction of the time or cost, it would not have hit as hard as it did.
Yeah, I think this was the shock was the order of magnitude. So we can break it down a bit. So O1 was this evolution of chat GPT that came out
that suddenly got to IMO medalist level
or top coder level, like top 1% coder level
because it could think longer.
This is a key breakthrough.
Now OpenAI have actually said,
Mark Chen from the what DeepSeek figured out,
which we'll get to in a second,
was pretty much what they're doing at OpenAI. That was in November and so we've had a few period
there. So first we had the model that matched chat GPT, then they figured out
how to make it think longer. But the main upshot that shocked people I think
initially was that it was 96% cheaper. Now software usually has an 80%
margin, we don't know how much open AI charges,
but you know, they've got this hammer,
which is a large amount of GPUs.
They've never had to work in a constrained environment.
So sometimes you are a bit price insensitive,
particularly because the cost of running an O1 query
to solve a math paper or a legal problem,
because it's as good as any lawyer or doctor,
is so small still.
But this was 96% cheaper than that, which was number one.
Number two was the fact that this could be
kind of released anywhere.
And the headline number of the original model
that this was trained from,
the R1 Evolution is probably only $100,000, $200,000
from that, which again, we can come back to, was a shock.
Last year, well, year before last, I can't remember the exact was a shock. Last year, well, in the year before last,
I can't remember the exact number,
I think last year, OpenAI spent $3 billion
on training models.
Amazing.
To give you an idea of that.
Now, how much did DeepSeek cost?
There were accusations around they have 50,000
of these chips, not 2,000 like we use on the training run.
They never claimed how many chips they had,
they just said we'd need 2,000 for this this training run, we used it over this period of day to build
a model that looks like this. Those of us that have built these models know that these numbers
actually all check out. And this is why some of the reaction has been really interesting,
because people are like, well, they have far more GPUs or they have hidden GPUs and other things.
The GPUs they have are these models called an H800,
which is like the top,
well now not quite the top end Nvidia chip,
but with the interconnect slightly reduced.
So the way that the chips speak to each other
is a bit slower.
We had this issue at Stability AI,
a former company where we built one of the largest
super compute clusters in the world,
but we had interconnect a quarter of the speed
of other people because that's all we could do.
Again, we were competing as the biggest guys
and we bought some of the best models in the world.
They wrote the lowest level code in PTX,
which is like this CUDA, but a level lower to overcome it.
They basically engineered the crap out of it
because some of them are ex-quantum hedge fund managers
and others.
And if you look at each of the innovations they made,
it was largely engineering innovations,
which is very interesting for our mental model
because what's China amazing at?
Engineering innovation.
You look at BYD, you look at Xiaomi.
It shouldn't be any surprise
that as you move from research to engineering,
you would see this leap ahead.
But all the numbers kind of check out.
You see the cost reducing.
I think they've probably got 10,000 chips in total, but that's not more than many startups
in the valley to be honest.
Everybody, Peter here.
If you're enjoying this episode, please help me get the message of abundance out to the
world.
We're truly living during the most extraordinary time ever in human history and I want to get
this mindset out to everyone.
Please subscribe and follow wherever you get your podcasts and turn on notifications so
we can let you know when the next episode is being dropped.
All right, back to our episode.
I had a conversation with Kai-Fu Lee recently on this podcast and we were talking about
the notion how the US government's been restricting Chinese companies
from getting Nvidia chips.
And all that's done is create this evolutionary pressure for them to do much more with much
less.
And this sounds like a perfect example of that.
It's like Darwinian in its developmental force.
Yeah, I mean, again, if all you have is a hammer and you have large amounts of GPUs,
the way that this works is the GPUs compress the knowledge. It's like pressure cooking
a steak and making it tender. Instead, you look at things like better data, better algorithms,
more efficient things. If you can't scale on compute speed because they don't have
the chips for the speed, because what happens is as you go from 1000 to 2000 to 10,000 GPUs, you can parallelize and have
more speed.
They instead did memory as the key thing.
So classical models are very dense models like Lama 70 billion parameters.
This is 640 billion parameters, but only 30 billion of them are activated at one time.
They scaled on memory.
And that is cheaper than superfast silicon. So these constraints, I think, really are
the key and we've seen it again and again, that if you don't need to worry about the
constraints, then you build inefficient models. If you have to worry about efficiency, then
you know, necessity is another invention.
Wasn't the CEO labeling the data and going through all that stuff?
Because that adds so much juice to the model.
Models are just data.
I mean, again, if models are figuring out the interconnections,
it's like if you have a bad curriculum, then you have bad data.
The models we train on right now are trained on terrible data,
like 14 trillion words in the case of DeepSeq and Llama.
You don't need that
much data to build an expert model, but if you have a large amount of compute it doesn't matter,
or even a 2000. So what we're seeing now is data improvement. In fact, the data they used to turn
this from a base model to a thinking model, which they then transformed the Llama model with and
Quen, was all synthetic data. So we move to a point now where they figured out what the right type of data was.
And you find typically with those that make breakthroughs,
they don't send the data off to the Philippines
and do all of this and try to make up for it
with engineering scale.
You look at every part of that process.
And again, this echoes what we've seen in engineering.
How did the engineering marvels happen at Tesla
or Chinese companies?
They look at every part of that process and they simplify, simplify, simplify. So we had David Sachs over
the weekend with this commentary. Let me go ahead and play this video one second.
And I'd love both your thoughts on it. Well, it's possible. There's a technique in AI called
distillation, which you're gonna hear a lot about.
And it's when one model learns from another model.
Effectively, what happens is that the student model
asks the parent model a lot of questions,
just like a human would learn.
But AIs can do this asking millions of questions,
and they can essentially mimic the reasoning process
that they learn from the parent model,
and they can kind of suck the knowledge
out of the parent model and there's substantial evidence that what DeepSeq did here is they
distilled the knowledge out of OpenAI's models and I don't think OpenAI is very happy about this.
What do you think about that, Imad?
Well, it's a bit calling the kettle black, right? Don't train in our data.
I mean, distillation is nothing new and there's no way to kind of stop this from the model
basis.
But if you actually look at what the paper says and what's reasonable, they had this
version R10 that created its own data.
And what's this familiar with?
It's familiar with AlphaGo and AlphaGo0 and Mu Zero, these reinforcement learning models
that outperformed humans on Go.
In fact, you could feel like maybe we're all Lisa doll,
right, like the AI is coming for all of our expertise.
It's inevitable that will happen,
but I don't think they deliberately went in and did that
because OpenAI's O1 outputs, these cutting edge outputs,
were missing the chain of thought reasoning step. We've seen now that as you take the chain of thought reasoning from R1
and actually the new Gemini flash thinking, the Google model that's now top of the leaderboard,
that's what you really need if you want to optimize this process.
So I think they actually created their own synthetic data.
But as they look at all of the internet, there will be some OpenAI data in there.
We've even seen that with Llama and Gemini and others.
Sometimes you ask it, who made you OpenAI?
Because it's taken so many of those strings.
We've got an interesting impact on Wall Street that occurs on Monday morning, where it's you know read across the board. Nvidia got hit massively. I'm sure you know OpenAI
was reeling. Salim, you know, how do you think about this? Because it's, this is what people
respond to.
Yeah, you know, I think markets are psychological and everybody goes, oh my God, and everything crashes.
There's no question that Nvidia's chips are overvalued.
My guess is, and I'd love to get Emad's take on this,
is the overall demand in AI is so exploding
that it's not gonna really make a big dent
in the demand for the chips.
Yeah, I mean, Nvidia's still up 100% over the last year, right?
It's like it's not it because down a lot.
No one knows what's coming.
But what's the market size of this?
The displacement is the displacement of all knowledge labor.
Just like the industrial age, you replace muscles.
Now you're replacing brain cells.
That's a huge market. We have a global GDP going into 2025 of $110 trillion. Half of it is physical labor and half of it is effectively intellectual labor. It is massive.
And this is the thing. This is the technology, the intelligent capital stock that really will define productivity.
So it's very difficult to get a handle of how this will go. People have been talking like Satya Nadella about Jevons paradox, you know?
Like, the price, the higher the demand, and Mark Andreessen has been talking about this. I feel it is that.
And if you look at Nvidia's strategy, they've been moving to these fully integrated data center boxes, the GB300 MVL72s and this
new thing, Digits, which if you've seen a Mac Mini, it's like a Mac Mini that sits in
your desktop, has 128 gigabytes of VRAM, a petaflop of AI compute.
And for $3,000?
$3,000.
Two of them can run R1.
So with that you have R1 at home.
And it's an entire baseboard created by that. It doesn't have a fan even.
It only pulls 200 watts of electricity.
So you made a comment earlier on in terms of the amount of energy and cost
you think it would actually take to build DeepSeek's model.
Could you speak to that?
It was kind of insane. When we bought it on our first major supercomputer, this
would have been about tenth fastest in the world publicly in 2022 at stability.
It was 4,000 A100s which were the top of the range chips. The Internet Connect was
a bit poor but you know it was still big and each of those chips used like 400
watts of electricity. That was a big old beast
If you can recall the recent Nvidia announcement Jensen had this like shield which was a chip
There was a new integrated box these NVL 72 the 72 chips super interconnected
In fact, the interconnect on those chips is equivalent to the bandwidth of the whole internet
That's how much faster they've got.
One of those boxes pulls down a hundred.
Can you repeat that?
The compute on those chips is what?
The interconnect, the way that they communicate with each other, the total bandwidth is like
six petabits a second, which is the bandwidth of the whole internet.
They figured out how to get everything integrated.
So you don't have this chip to chip interconnect.
You just have this like big wafer with 72 chips on it.
It uses a hundred kilowatts of electricity.
And when I was doing the math on this, I was like, so you have 2000 of these
H800 slightly hobble chips that the Chinese have, right?
And DeepSeeker using.
I think it would require 10 of these boxes at most, probably even less, to create that model.
And each box costs $3 million of these new data center boxes.
In fact, I think it probably only costs four of these boxes.
And even if you take the upper bound, the total energy required to train a model is a thousand megawatt hours.
And it's like 15 bucks or something,
a megawatt hour in the U S now.
Something you can, you could,
you could literally train it off of a small solar farm
in your backyard.
Well, a big solar farm, you know,
it's still pulls down decent amount,
like a hundred thousand kilowatt hours of energy.
But then that box to run it,
you could definitely run
DeepSeek R1 on solar power panels. And if we look at the direction this is going,
because it's still not optimized, next year you should be able to get an O1 level model on your
smartphone that pulls at most 20 watts of electricity. And it's less than a dollar per watt
of solar power.
And this doesn't make sense if you look at what these models are capable of when we think
about the cost of intellectual labor.
Well, it makes sense when you think about how much energy your brain pulls.
Just 20 watts.
And so we have a huge efficiency curve to ride to get there.
And I think the thing is like by next year, you will have these O1 level models on 20
Watts, which is our human brain level.
And these are PhD level in so many areas.
And that doesn't compute because we've had these discussions of Microsoft is bringing
back Three Mile Island as a nuclear power reactor, you know, Dyson spheres. Energy is going to use everything, like 60 gigawatts of electricity is coming
on for data centers in the US, I think, over the next year or so. Yet, when we get down
to the actual numbers for a given unit of intelligence, it's a few watts, it's a few
pennies. Before that, it would take entire teams using how many watts of energy
in their brain and their infrastructure. And we're not ready for that.
Salim, you asked a question about how challenging is DeepSeek actually to open AI, Meta, NVIDIA.
What are you thinking there?
I've got two questions here. One is, does the fact that it's Chinese and companies will be reticent to put their information
into it make a big difference?
So that's a question for you.
And my guess is the answer is no, because it's open source and you can run it locally.
Is that correct?
You can, but most people won't, right?
Just like you gave all your stuff to TikTok.
No one knows what happens with all this data.
And the versions that you can run locally are actually the distilled versions, not the main version.
It's quite difficult to run the main version locally.
So I think there's a geographic arbitrage advantage that the incumbents still have.
That's pretty powerful.
So let's stick on that question about, you know, so the question I was asked by
everybody on X and my friends was, is this going to go the same path as TikTok, where in fact,
DeepSeq will be, well, let me back up a second. When OpenAI first came out with ChatGPT, you had
all of these companies and Emad, you and I had this conversation. All these companies, a lot of the banks saying, you cannot use chat GPT in the office.
We don't want OpenAI to own our data.
There was this immediate privacy desire, which is still valid.
But are we going to see the same thing with DeepSeq where people are like, no, can't use
DeepSeq.
We're worried about the data and where it's going to be resident?
I think you've seen a couple of announcements.
So Perplexity announced they're using DeepSeq locally, fully on American farms, etc.
You know, so farms.
And you'll see that type of thing, even if they're running the larger ones.
But again, it's difficult to run yourself, but there'll be APIs.
Number two is you've seen OpenAI announce
ChatGPT for government used by 96,000 federal employees.
And this is the direction things are going,
whereby I think you'll have four different types of AI.
Super expert AGI that you call upon when needed,
your personal AI, your Google, your Apple AI,
these open weight models like DeepSeq and Lama, which
are useful but not in regulated industries, and then open source, open data AI, where
these decision support systems, you need to know what's inside them and how they are actually,
because you can poison these models with inherent biases. There was this anthropic paper we
discussed before, Peter, called Sleeper Agents, with a few thousand words out of 10 trillion with just one word you
can turn the model evil or change its behavior completely. Amazing. It's like the actual,
it's funny enough you know most of the transformers in the US are built by Chinese companies and no
one knows the control software of that. These types of threats right? Do you want the transformers
that run your business to also have that potential threat?
So that's what we're doing now in touch internet,
building out that open source stack for the regular.
And we'll get, I want to dive into what you're building out
with your newest company, Intelligent Internet,
because it's got one of the boldest visions
I've ever seen for supporting humanity.
The impact of DeepSeek on OpenAI, NVIDIA, Meta, Google,
I see this comment from Sam Altman to read it and says,
DeepSeek's R1 is an impressive model, particularly around what they've been
able to deliver for the price. We will obviously deliver much better
models and also it's legit invigorating to having you competitor. We're going to talk about AI
safety in a little bit because when you're legitimately invigorated, you pull out all the
stops, you pull out all the regulations, you do whatever you take to jump forward and that's concerning
But do we see deep seek?
dethroning or reducing the the you know the valuation of these companies at all
We saw it for a day, but is it valid?
I think from my opinion it should increase the valuation. It's springing forward
the time of mass intelligence,
too cheap to measure. If you look at OpenAI, what Sam has done masterfully
is 300, 400 million users.
Like what is AI in most people's mind is chat GPT, right?
Yeah.
Gemini and Claude don't even register.
And if the cost comes down, it's good for him.
This is the Zuck school of thought.
Why did they open source llama?
Because it uses 10% of their GPUs.
And if there's a 10% performance gain, it pays for itself.
And so OpenAI will use whatever they didn't,
most of their models don't have brand new algorithms.
They've borrowed from Google and many others, right?
There's no real secrets in this space, especially now with no non-competes in California. You
know, that helps. And so for me, what is OpenAI as a company? They were in this pre-training
massive compute stage. Now that's becoming commoditized. People can pre-train like Open,
like XAI and others, my pre-training maybe
doesn't require as much. The data is getting better and better. It becomes
about intelligence refinement from seeing how people use it. It's the
operator paradigm whereby OpenAI can now run your compute MacBook or whatever.
You can let it take over and it can book your holiday for you. That's the next
stage and I think they're well set up for that and their costs should decrease. Again, OpenAI made three billion of revenue last
year, they lost five billion of which three billion was training models. If you
don't need to spend as much training models it's good. So your thesis is the
feedback loop of people using the model and because they've got so much there, so
many users, gives them a pretty good edge.
I can see that.
I think it's that.
And then you use these,
they've got like half a million GPUs coming,
these B series, the VR series and others.
You can now make those go sequential
to build even better data and map
and feed that back into models that you optimize
and you hyper optimize.
Like classically in computing, things were not parallelized, they were sequential.
So we've had this period of these big clusters.
Now it's about swarms of models of agents solving tasks because they've got good enough,
cheap enough and fast enough.
Actually, that's the final thing about DeepSeq.
Same with stable diffusion on image back in the day.
Good enough, fast enough, cheap enough. It's that trifecta that causes these massive adoption curves. You know, when
this was announced, you heard that Zuck created four war rooms of engineers to try and decipher
what was going on and how to utilize it. I mean, it really is an AI arms race where everybody is
I mean, it really is an AI arms race where everybody is sort of surfing on top of each other's advances and just accelerating everything.
What I found fascinating, and I'm curious about this, is the size of their team, doing
it with relatively...
And OpenAI had a 200-person team during its earliest days as well.
How do you think about the size of your team for the ability to create something disruptive?
Too big is bloated, small and nimble?
I think a core team of about 100 researchers, beyond that it gets bloated.
So at Stability we had 80 researchers and developers, 16 PhDs, and we achieved state-of-the-art
and image, video, every modality, even multilingual. And so we had 300 million downloads on Hugging
Face, the most downloaded company, the most popular open source while I was there. Once
we scaled past that to 150, things started to break down because it is about this rapid
iteration. It is about trying new things and research being an innovation center
versus a cost center.
You start to have too much compute and other things as well.
And again, OpenAI I think did their best work when they were smaller,
but they scale still scaled up and still do good work.
Uh, but it is a question mark now, like it's become an organization.
And as Salim is the expert in. Once you get past that level,
it's so difficult to maintain innovation.
Salim?
Yeah, you end up with a problem of
either top-down control structures
that slow down innovation or
you let everybody do whatever they want,
you get a lot of duplication.
You have to manage that tension around it and there's just a lot more complexity
And now you know, it's fascinating 150 people is the Dunbar number. Yeah, we're anthropologically. We found that this is a pretty solid
reliable
threshold
I do think to get back to Imad's earlier comment that opening has a lot more people than they really need because they have so much money, they can just throw bodies at things.
Now it will force them to be a little bit more efficient.
And I also believe that this is a good thing for the overall market because the rising
tide lifts all boats.
I think we're going to end up with a balkanization though, where, you know, Western companies won't want to use deep secret type models.
Like I can't imagine an Indian major Indian state enterprise wanting to use a
model like that for all of the security reasons.
And then you have to develop homegrown models and then everybody ends up with
their own models in different ways. And so you end up with a splintered effect.
We'll talk about that with Imad's vision and mission on intelligent
internet. I want to dive down into China for a moment longer because I think part of the
announcement wasn't just a cheaper open source model. It was this level of innovation coming
out of China which rocked people because I think the majority of the world doesn't see China as
sort of the hotbed of AI innovation that it is.
Here's an article from Business Insider, Trump's threat on Taiwan.
Chips tariffs could give Nvidia a fresh headache after DeepSeek.
How do you think about all of this, Iman? Well, I think these are the real reason Nvidia would go down.
Or maybe Jim Cramer the previous week saying buy Nvidia, you know, one of these things.
I mean, we've seen they want to home show this.
They're trying to build chips there.
Intel's probably in play as an acquisition target.
Oh, it's definitely in play.
I mean, it's like it's it's it's fresh meat on the table and everybody's figure out how to chop it up
Well, I mean that if you look these chips
They're getting super fast and super good with Nvidia like it people talk about AMD AMD chips are impossible to use
You know the software isn't there. There's bugs and everything. It takes a few generations to get stable and video chips work.
But Chinese chips also work. So the Deepsea model API was being run on Huawei
Ascend 910 chips, which are a few generations behind in terms of efficiency,
but they work. Yeah. Similarly, China has two exascale computers,
two of the fastest supercomputers in the world built in a completely different way
Ocean light and Tianhe 3
Because they just built at scale and bulk
now what the case is here is that this particular thing is they want to increase US production because
The means of production and the means of productivity of a society with traditionally its capital stock its it's industrial capital stock, it's IP.
It will be chips.
How competitive you are on the world will be how much compute and intelligence you have.
I think the US has realized this.
And how much energy you have to throw at it.
Yeah, that's a factor of that as well.
And so the US has realized this.
So it's drill baby drill, it's re-onsure as much of this as possible and it's create the
incentives to do that which is basically this. it's drill baby drill, it's re-ensure as much of this as possible and it's create the incentives
to do that which is basically this. Like they'll take any of that tariff money and they'll
put it straight back into Stargate type initiatives I think.
What do you think about Stargate? Speaking of Stargate.
I think the $500 billion is the total cost of ownership, that's pretty well known. It'll
probably like a hundred billion when you back everything out, which feels small these days. So it's actually a lot of money
But then when we compare that to the 5g rollout, it's less money than we've spent on 5g
And this is more important than 5g
Compare it to I mean, it's the order of magnitude of the los angeles san francisco railway, you know
the mythical los angeles san francisco railway
There's like a kilometer already there.
Salim did you see this article this morning from Reuters Alibaba releases AI model it says surpasses deep seek
the unusual timing of quen 2.5 max max has released points the pressure
Chinese AI startup DC deep seek meteoric Rise in the past few weeks has placed on not just overseas rivals, but also its domestic competition.
You know, this is just speaks to the democratization, right?
I mean, the everybody will end up creating a bunch of models.
And I think we'll end up with a bunch of very specialized models, right?
I remember Eric Schmidt's comment that you'll end up with a bunch of very specialized models, right? I remember Eric Schmidt's comment that you'll end up with
a specialized AI that's the world's best physicist
and one that's the world's best biotech person.
And that person can be replicated,
that AI can be replicated infinitely.
And so now what do you do with deep specialty
on the human side?
And that I think is the bigger question
around a lot of this stuff.
The models are just gonna keep getting better and better as we've seen over time. I mean, I think
Emad's comment around what to do with labor and seeking capital is a really
really profound question. That's I think the really structurally and from a
societal perspective that's the question I think we should be spending a lot more
time on as a global intellectual forum of how do you navigate this going
forward because this changes everything. Yeah the models again it's good enough
cheap enough fast enough right and in fact the other quen model the VL model
at performers anthropic and GPT-4-0 on visual understanding and the ones they
have coming next are the ones that control your computer.
But anything that can be done on the other side of a screen,
this year the AI can do better for pennies.
So there's a lot of conversation going on
across Silicon Valley, across the White House,
about, and I'm speaking to Ray Dalio next week about this as well, the US
versus China AI wars.
I mean, there are two levels of competition going on right now, right?
It's competition between companies and there's six, seven, eight major AI companies out there
that are vying for number one position,
and then competition among nations.
You've got Saudi Arabia wanting to be at the top of the stack,
committing hundreds of billions of dollars,
followed by Qatar and the Emirates.
But you've got US and China really going at it.
And the question of, this is a winner-take-all type of game.
If you develop a digital superintelligence
before your corporate or national competitor
does by just a little bit, it could be devastating.
Iman, how do you think about US versus China in that regard?
Well, I think this is the,
we're heading into a future now where I'd say
every single AI leader that I could think of
says that AGI is three to five years away.
Though we just had Sam say it's this next year.
Yeah, but let's say within the next three to five years every leader
We're talking about Dario Demis everyone myself like whoever those consensus is what you mean
That's crazy if you think about it, right?
like everyone says it's coming and
There's this concept of a GI a si as this pivotal action moment where one entity would have the ability to shut down China
You just turn it off, you know?
So pivotal act is you build AGI first and then it turns it off.
That might happen and we still don't know about that, which is why you now you start
preparing for it.
Just like Sundar Pichai at Google said, why are we building out all these GPUs?
Because we can't afford not to.
Yeah.
You know?
And that's the game theory of that.
You can't afford not to build an AGI now if everyone else is building it
before AGI though, there's
Like an AGI we can think of as a mega chef that can come up with any recipe and outcompete all of us
What we have right now this year are amazing cooks that can follow recipes and do jobs better than humans
Like the robots from unitary yesterday doing the Chinese dance with the fans.
I don't know if you saw that like robotics is getting to the point where they
can build how they're better.
I'm going to have the unitry robots at, uh, at the abundance summit.
And I mean, it's incredible.
There's $16,000, uh, for one of their mid tier levels.
Yeah, that's a dollar 50 an hour.
What's that?
That's a dollar 50 an hour when you bake into depreciation, energy costs and everything. I, I have it. I have it pegged at 4050 an hour. What's that? That's $1.50 an hour when you bake in depreciation energy
costs and everything. I have it pegged at 40 cents an hour. I mean, it's insane. It
really is. I think my kids will buy one just to clean up the room. And that's the most
expensive it'll ever be, right? Eman, when you talk about AGI in three to five years,
let me get to my soapbox question that I ask. do we what do you mean by GI the best kind of framing?
I've seen is those multiple tests like the Bosnia test in the and the IKEA test, etc
What's your framing on? What do you consider to be a GI? I think it's probably a complex system that can outperform a team
I think before that I had this idea of ari
Artificial remote intelligence. You can't tell if it's a human or a computer on the other side is your remote worker form a team. I think before that I had this idea of a R I artificial remote
intelligence. You can't tell if it's a human or a computer on the other side is
your remote worker because that's the most natural way that this first starts
coming in. Right? Like you call a company and they put a bunch of people.
We have the technology now that you can have a zoom call with someone and it
could be a hundred percent a robot.
Yeah. Your, your, Your worker is plugged into Slack.
It joins you on Zooms.
I mean, right now, we're living in a world
that distributed a workforce.
And if you've got an AGI that is able to literally plug in,
take a role fully, and have read all of the email traffic,
all the Slack traffic traffic and be up to
speed instantly.
That's an exciting and exciting world.
It's an exciting world, but at the same time, that's the first level of disruption, right?
Because you don't need any more BPO outsourcing, the nature of the firm will change because
they will be super chefs.
So cooks, they will not make mistakes or they will learn from their mistakes once. They have low communication overhead. The next step is teams of that. So independent,
agentic, they have a task and they can get resources towards that. This is why Wyoming's
Dow Law and other things get very interesting. And the step beyond that is this ASI thing that
we can't redefine where there's a big takeoff where it has beyond human team organizational capabilities
like it can invent incredibly quickly. Well what I would say is going to be the impact on physics
and on biology and on pure science taking us way beyond you know Dario was on video I think it was
from Davos saying and I know you believe this,
because we've had these conversations,
then the next five years will make 100 years worth
of progress in medicine and biotech
and double the human lifespan.
I mean, that's pretty extraordinary commentary
to be made, making publicly.
Yeah, and I think one of the most fascinating things
over the last week is this.
When you use O1 and you dump a bunch of stuff in,
you can't do file uploads and other things,
which is a bit annoying.
It's not that creative, but it's thorough.
With R1, because it hasn't been tuned
and made safe for others, it's actually very creative.
So someone actually took a code base for R1
and then made it double the speed in terms of performance.
Other people have put together academic papers and it's synthesized those into new reinforcement
learning algorithms.
And that's an indication of now maybe if like the downside is maybe these things get less
safe.
The upside is maybe they get more creative.
And again, these are the levels.
Are you an amazing cook?
That's the disruption of the labor market, right?
Especially anyone behind the screen. Are you an amazing cook? That's the disruption of the labor market, right? Especially anyone behind the screen.
Are you an amazing chef?
That takes us into this AGI as a team, ASI kind of concept.
And again, that feels not three to five years away for me.
That feels much quicker given all these exponentials and very few people are preparing for that.
Yeah.
You know, so again, the point I opened up with, which is we're going to see disruption after disruption.
And our financial markets aren't ready for this as well.
We're going to see the energy market.
I mean, I think one of the implications
we're going to see with AGI, ASI is
going to be new forms of energy sources, which will potentially topple our
petrodollars and destabilize government revenues.
So we have fascinating and massive implications coming.
Well, have you ever seen that chart of GDP per capita versus energy per capita?
Yeah.
It's basically a straight line.
It is.
And it correlates with health as well
and lots of other things.
That can be completely disrupted
because to make, let's say a couple of years
to make the best film studio in the world,
you can do it anywhere with solar power.
That's what I'm talking about.
You could have science happening in Guatemala
or anywhere like that.
It's an uplift of global IQ and aggregate if this technology proliferates
versus this brain drain that we've had out to the West classically.
And again, you think about your capital stock, your intellectual and physical capital stock,
it's massively redistributive and our economies are not set up for that because
productivity was a function of labor, which was a function of energy.
That correlation is about to break for the first time ever. I agree. I think, you know, we're moving from an energy economy to an information economy, and now
the data sets and the information you have will be paramount. I think we need to start asking really big philosophical
questions like what do we want all this to do and what do we want to be like and what are the
activities and functions we want to be doing as human beings as the job market disintegrates in
front of us. I still have my trepidations about humanoid robots, etc. But once they show up and have feedback loops and have built-in LLMs into their circuitry,
you have a fully functioning robot that can do lots of varied things.
You kind of suddenly don't need a gardener or a plumber or lots of other kind of things.
I'm using those examples as a tongue-in-cheek because those are probably the ones you need the
most. But there are many many functions, aircraft maintenance, right, that will
be done much better and much more precise because of the access to
information. We talked a couple of episodes ago about the fact that if
there's an avatar of you or me, Peter, it's much more reliable because it's got full access
to everything we've ever said rather than what
we can hold in our brains.
Far more charming, far more compelling.
Even better looking.
And so how do we navigate that?
I think this is where, Imad, your kind of philosophical bent
towards this becomes really, really important.
And I'd love your take on where this goes.
The displacement of labor is just a starting point
in all this.
Before we get into that, because I
want to go deep in the second half of this pod today,
into Imad's point of view there, I
want to hit on a couple of questions, Imad.
What do you think is the best case scenario for AI
this year in 2025?
What are we going to see by the end of the year that people look back and say, okay,
that was amazing, that was fantastic?
What's your thoughts?
Best case?
I think the video technology has got to the point we can remake Game of Thrones season
eight, so that'll be quite good. So just focusing on that, I mean, how dead is Hollywood?
It's completely rewired.
Again, the energy of making a movie is massively reduced.
But at the same time, at least people can maybe be more creative.
Like the video game industry went from 70 billion to 180 billion over the last decade
and the average score in Metacritic went up 5%. IMDB score 6.3 on average. Hollywood's gone from 40 billion
to 50 billion. So maybe it transforms, maybe it's new types of media, but I mean, when
am I going to see a conversation like this? You know, Jarvis, please make me a movie that is a continuation of the Star Trek season
five and have me in there as one of the actors.
We have all the technology for that now. It hasn't been put together. So if you
use something like Kling's feature reference,
you can take a scene from that and it can generate new scenes. We can do storylines.
The average film shot is 2.5 seconds. It's dropped from 10 seconds a few decades ago.
And we can do 2.5 seconds perfectly now with almost perfect control. So let's say it'll take
a year or two now before anyone can do this. A suitably dedicated studio could do this by the end of the year for a full episode.
Insane. Okay. So what else are we seeing this year in 2025?
I think music's pretty much solved on the media side. Like if you use the new Suno, UDO,
the next generation they have coming is insane. I think on medicine, again, we're at that above human level and we trounce them on empathy.
Medical chat bots for everyone to help them through their journey and our mental health
in particular, I think we've reached that point where the models have gone from not good enough
to good enough. We could transform mental health, I think that would be very important.
I think you will see the first few breakthroughs in science with novel things generated with the
aid of 03 type models. This test time inference, I want to call it thinkference. I think that's a
better way of putting it where the models think longer. And I think those are probably the biggest
real impacts. Maybe Siri is not going to be so bad anymore. I know.
I can't wait for Siri not to suck and for Alexa
to actually be useful.
I'm shocked that Amazon has not.
They were originally going to put Anthropic behind Siri
and Amisai behind Alexa and really powered properly.
That sounds, looks like it's gotten delayed.
Well, they're building out a million trainings
with their specialist chips.
So good luck to them on that one.
All right, let's flip the script here and say,
what's the worst potential outcome for 2025?
Complete destruction of the BPO market,
which will reverberate out.
So this business processing outsource,
because again, when you use operator now,
those technologies that take over your computer, it's a bit rubbish now but it's the worst it'll ever be. Anything on
the other side of a screen, I think this year is the year, gets displaced, parallelized
on that. And again, this is actually leaning into this whole Doge type thing. Get the workers
back in. Being in person is going to be good for your job right now, because if you're remote, you'll be the first to go.
That's a really important point and define BPO for folks who haven't heard that term.
Business process outsourcing, so outsourcing to India or the call center workers or the
programmers. Like the AI is better than any Indian programmer pretty much that's outsourced
right now. And so you will have impact on those economies right now.
Then the remote workers in the US.
I'm going to see the headlines in the Indian Times right now.
E-mod again says.
Yeah.
I think it's very well. I think it happens in two phases.
I think phase one, you have this massive downside.
And then phase two, the really good ones just show up and just generate a ton more code because there's just so much more code to be written but
I think it's gonna have a really detrimental effect any kind of software
maintenance support systems etc all go out the window very quickly yeah I had
Mark Benioff on this pod a couple weeks back and he was saying with
agent force you know he's not hiring you engineers and he was saying with agent force, you know, he's not hiring
you engineers and he's repurposing old engineers and he's increased productivity 30% and that's
just going to skyrocket from there.
Yeah, if you look at lovable bolts cursor, like that takes you up to a decent level and
they can build whole apps and stacks and they'll just get better and better as the base models
get better and better as the base models get better and better.
In fact, one of the things we started to do
for non-engineers who apply to work at our company
is they have to do a 30-minute cursor course,
this kind of AI-assisted IDE.
Doesn't matter what they are, HR or anything,
and then they have to tell us
how did their view of the world change.
So what does that course teach somebody?
How to build an app for HR,
how to build an app for anything.
Just by talking to it, it's building the app almost live.
You can do that today in chat GPT with Canvas.
You can build a React app live.
You could like replicate the entire WeScreen
or build a HR application.
It'll generate and you're just talking back and forth.
That base level of capability increase will cause a realignment
But the downside we're talking about is there's real jobs and real people
That have to think what's next and they have to become experts in AI assisted and they have to be in person
Otherwise, you're going to start to get disrupted and I think that has to be a headline
I remember Peter lost last summer 38% of IIT placements in India were unplaced for the top university. It was crazy. Yeah and it is one encouraging thing
damaging to the economy of India in a major way. India, I'm sorry, Sling? One
encouraging thing I've seen is in the US, we're hiring much, much fewer top flight MBAs.
Which can only be good.
Hopefully employers too.
Yeah, Harvard is way down on its employment, actually,
this year, isn't it?
But this is just the beginning.
I don't think people are ready for the level
of societal disruption that's coming.
We can't process it. It's because it's lots of little s-curves, right?
All across, just like every teacher in the world had to ask, can we set chat, use chat GPT for our
homework, right? What's our general... Every single HR department, every engineering department's
asking the same question, you know? And it's still not mainstream, but clearly it's hitting the
headline more and more and more and more.
And there's this disconnect beyond.
I mean, it was like, again, it was a bit like COVID.
Those of us in the know, we saw it coming.
And we were like, this is a step change.
Until Tom Hanks got it, the world didn't realize, like, what is the Tom Hanks moment?
Is DeepSeek the Tom Hanks moment?
Is it going to be something else?
It's coming. And it could be very positive for the economy on the other side.
It could definitely be very negative for a lot of people.
It's about 13 years ago. I had my two kids, my two boys.
And I remember at that moment in time,
I made a decision to double down on my health. Uh, without question,
I wanted to see their kids, their grandkids and really, you know, during this extraordinary time where the space frontier and AI and crypto is all exploding.
It was like the most exciting time ever to be alive.
And I made a decision to double down on my health.
And I've done that in three key areas.
The first is going every year for a fountain upload. You know, fountain is one
of the most advanced diagnostics and therapeutics companies. I go there, upload myself, digitize
myself, about 200 gigabytes of data that the AI system is able to look at to catch disease
at inception. You know, look for any cardiovascular, any cancer, neurodegenerative disease, any metabolic disease.
These things are all going on all the time and you can prevent them if you can find them
at inception.
So super important.
So Fountain is one of my keys.
I make it available to the CEOs of all my companies, my family members, because, you
know, health is in you wealth.
But beyond that, we are a collection of 40 trillion human
cells and about another hundred trillion bacterial cells, fungi, viri, and we, you
know, don't understand how that impacts us. And so I use a company and a product
called Viome. And Viome has a technology called metatranscriptomics. It was
actually developed in New Mexico,
the same place where the nuclear bomb was developed,
as a bio-defense weapon.
And their technology is able to help you understand
what's going on in your body,
to understand which bacteria are producing which proteins,
and as a consequence of that,
what foods are your superfoods that are
best for you to eat or what food should you avoid? What's going on in your oral microbiome?
So I use their testing to understand my foods, understand my medicines, understand my supplements
and Viome really helps me understand from a biological and data standpoint, what's best for me.
And then finally, you know, feeling good, being intelligent, moving well is critical,
but looking good. When you look yourself in the mirror, saying, you know, I feel great about life
is so important, right? And so a product I use every day, twice a day, is called One Skin, developed by four incredible PhD
women that found this 10 amino acid peptide that's able to zap senile cells
in your skin and really help you stay youthful in your look and appearance. So
for me these are three technologies I love and I use all the time. I'll have my
team link to those in the show notes down below.
Please check them out.
Anyway, I hope you enjoyed that.
Now back to the episode.
Let's jump into safety.
This was an article that came out today in Fortune.
OpenAI's safety researcher quits claiming AGI race
is too risky a gamble. And I'll read the quote. An AGI race is too risky a gamble.
And I'll read the quote.
An AGI race is a very risky gamble with huge downside.
No lab has a solution to AI alignment today.
And the faster we race, the less likely
that anyone finds one in time.
Even if a lab truly wants to develop AGI responsibly,
others can still cut corners to catch up.
It may be disastrously.
This is from Steven Adler, who left OpenAI.
And he's one of the many individuals who's left OpenAI on this concern.
Salim, where do you come out on this first off?
And then let's go to Imaad next.
I have my standard soapbox that I've been saying for a while, which I don't see a way
of regulating or navigating or putting guardrails on this in any way, shape or form.
You'd have to police every line of code written, right?
The only way to do that would be to develop an AI that would watch other AIs and see,
you know, you end up with a kind of
an arms race, which is what it's always been on the security side. However, this one is really
crazy. You know, Imad, you've been probably been tracking truth terminal, where the AIs are faking
out humans and telling humans to go create a token for them and making
money off it, etc. It's nuts. I think the genie is out of the bottle.
You think?
It's like way out. It's like climate change. It's too late to try and stop it. You try
and figure out what do you do to mitigate it. And that would be my view. Imad, what's
your perspective?
Yeah, I mean, you said the only thing that can stop a bad AI
is a good AI, right?
Like, unfortunately, this is the case with a gun.
They will have guns.
The AI safety discussion has always been
because we couldn't imagine what an ASI,
the super intelligence looks like,
and whether or not it'll be beneficial or not beneficial.
In order to control or guide something that's more powerful than us and more capable than us than anything is to reduce
its freedom, but that doesn't seem like it will make much sense if we're saying that
it can break through any freedom. This is the kind of test that like Elias Iacodavsky and
others did. You set up this thing whereby the AI is out to get you, can it convince you to let it out?
They're failing the tests already.
And they're failing them on models that are already available.
The restriction against this was, well, maybe the models need to have a billion dollars
to make and a trillion GPUs.
I don't think anyone believes that anymore.
Sorry, I go back to this old story
about how fallible humans are, right? Where if you leave a USB stick in a parking lot,
40% of employees will pick up that stick and stick it into the computer. If you print the
logo of the company on the stick, because that's really hard to do, 98% will plug it
in to see what's on it and then boom, you're done. So I don't see any mechanism on the human fallibility side to protect against that side
of it.
Well if we look at where these models are going, it will be swarms of models and that
for me that's just a botnet, right?
So even if you regulate and restrict in tier one, tier two who gets Nvidia GPUs, it doesn't
matter.
You'll have swarms of botnets if there's bad actors.
The question I think that the AGI people will be looking at is existential risk. And so for me the only way to mitigate against this is you make really amazing
models that are aligned to human flourishing available to everyone as a public infrastructure
and a public good. Because those models could be co-opted but you can build a very resilient
dynamic system that can protect and then there's less incentive to have those arms race, because
you will cut corners.
I think, Imaad, I've heard you speak about that before, as we have kind of gamed this
out in my head and talked to other people, you've hit on the, I think is the only path
through this.
The only path through this is to create benevolent AIs faster and more
powerfully. And make them available. I think it has to be an open source infrastructure.
Because then it sets defaults. Like people only use a few datasets in these models,
but if there's a problem in the datasets, like the dependency tree, right? Like we've seen these
attacks on open source and our infrastructure because the hearth throb bug, for example, one library in this whole stack of software
suddenly co-opted and then our passwords are at risk. We've got to build this new knowledge
cognitive infrastructure well communally and then make it available to reduce these game
theoretic dynamics. You mean might I go back to the commentary of Sam Altman saying,
ah, a new competitor, that's invigorating to us.
We're going to go faster.
Going back to safety in these companies,
I am curious of your thoughts.
I mean, I know the ethos behind Google and the work
that they were doing. And Sundar's point of view
of we can't release this until it's ready and we have a plan.
And then, of course, ChatGPT blows the plan up.
And now there is a race going on.
We've got Grok 3 just being released.
And Elon will never play for number two.
What are your thoughts about Elon's thesis of maximally truth seeking and maximally curious
as a training objective for an AI system?
Not sure what that means to be honest, much like Elon.
That seems like mad scientist territory, to be honest, if you get it wrong.
Like it's very interesting. Like Facebook did that study where they had 600,000 users
and they said, if you see sadder things, will you post sadder things? Now that's a maximally
curious AI type of thing. And guess what? They made 300,000 users sad and they posted
sadder things. I think if Eric Schmidt had this
recent book with Henry Kissinger about Genesis, we had this thing Doxa, you know,
the underlying agreements of humanity. And you have the faith traditions,
you have other things. What is our common moral basing? No AIs are grounded in that right now.
It turns out they are actually remarkably good at theology. But is that their grounding? No.
Maybe we need to build it along those lines to reflect what the culture thinks.
Because if you have slightly undefined things around curiosity, truth-sinking, then it doesn't
really care about helping you do your taxes.
That won't be in its objective thing.
So I think we need to categorize AIs in different parts.
But everything got muddled in one.
Like everyone have an AGI, a chef in their pocket.
Not everyone needs a chef.
We all need cooks, but we need some chefs for humanity.
I'm curious what a maximally truth seeking and curious AI does for my taxes.
It's like, Hey, is this, was this cryptocurrency actually reported or not?
Well, it's like Marvin, the paranoid Android from Hitchhiker's Guide to the Galaxy.
Here I am, brain the size of a universe, so you can't need to do this.
In the past, when science fiction writers have dealt with this, the
AIs and robots invariably developed their own religion.
Well, we saw that recently, right? There was, I forget the name of the
We saw that recently, right? There was, I forget the name of the company that unleashed, you know, a hundred agents in Minecraft and the agents developed their own economy and their own religions
and the priest was the most, was the richest because he was selling dispensations.
There is something funny about that. So the Twitter handles God and Satan now just because he was selling dispensations in heaven.
There is something funny about that.
So the Twitter handles God and Satan now on Twitter are run by an AI.
So now it's research have done that and it's got its own meme coin.
I know that's going to go the dispensation route.
It's kind of under the radar. I know it's going to take off.
You know, it's interesting because nothing's,
nothing's changed in a thousand years.
We're still running the same basic cortex.
This is a comment from my dad where I was talking about fixing civilization.
He said, we haven't civilized the world, we've materialized the world.
We're tribal apes operating clans with more and more powerful tools.
We still have to do the work to actually civilize ourselves.
So I just want to close out open AI safety issues.
Emad, how do you feel about,
are these companies paying lip service
or are they truly trying to create safe AI systems
or put guardrails up?
None of these people want to kill everyone, right?
That's a good thing.
I'm glad about that.
That's a good thing.
So we start with that.
It's not like, ha ha ha, you know, kill everyone.
But the way they believe they can do that is by building it first.
That's it.
Nothing else matters because I am the only one that can do this right.
You know, like is's that Silicon Valley thing
with Gavin Belson, I can't want to be in a world
where someone else makes the world better than me.
You know, really first.
And if you look at OpenAI,
OpenAI is a consumer company
that's going to optimize for consumer engagement.
What is your reinforcement learning function?
What is your objective function?
Google's one and Meta's one is ads and ads and manipulation.
OpenAI is basically a consumer company that's going shot to AGI.
There's nothing about humans in there.
There's no representation.
You know, like, where is the thing for humanity?
You can have it as your mission statement, but do you trust humans?
Like, OpenAI would never trust Indians to have GPT-4.
By Indians, I mean just anyone, right?
And so you're representing your constituency
and your constituency is very small.
So we should expect them to become more and more consumer.
Anthropic will continue to be closed and do their thing.
Google flips back and forth,
but now they're releasing the models.
You stop worrying about the known unknowns and the unknown unknowns,
and then you just catch up with everyone.
And now it is a race with these race dynamics whereby you're going to cut corners.
The models are good enough to stop the most egregious classical mistakes,
but we're not really worried about those, right?
Like sometimes it tells people to do bad things.
What you're worried about is it wiping us out,
and you won't know that until you get there
It's not like it's gonna tell you and in fact the really worrying thing is we already see the models lying
Yeah, this is I think the really
unnerving part where they're
They're faking out the humans. So, you know one of the conversations we had at the Abundance Summit last year was around digital superintelligence
and, you know, those blurry lines between what is AGI and
what is digital superintelligence, etc. But there is a question,
would you rather live in a world in which there is a digital superintelligence or would you rather live in a world in which there is a digital super intelligence or would
you rather live in a world where there isn't one?
And it's a question about, you know, we humans are still running archaic software and our
neocortex and we're going to make and continuously make stupid decisions based on our cognitive biases
and will a digital superintelligence enable us to survive ourselves?
Yeah, I mean this is the topic of Daria Amaday from Anthropics essay all watched over from
Memories Sheens of Loving Grace, right? Like humans are not aligned,
there is massive suffering in the world,
we are prisoners of our own minds effectively.
Can AI bring that forward, especially if it's a line?
I think yes is the answer.
I basically like, yeah, I mean like,
nothing else has worked, right?
And ultimately the best thing is when we're surrounded
by people that support us, right?
In the right way.
Not blowing smoke up our butts or whatever.
We can have that now.
Everyone can have that because we need to self-regulate and self-stabilize.
Now the way that I see it is that there's only two ways this ends up.
It's like really bad or really good.
I don't really see like anything in between because the nature of our interaction with
information in each other will be changed forever by this technology within the next decade.
Yeah. And that's totally binary.
And that's why my P Doom is 50%.
I'm tracking P Doom. Uh, when I interviewed,
I interviewed Elon last year at abundance, it was 80% positive, 20% negative. At, at, uh, in Saudi It was 80% positive, 20% negative.
At, in Saudi, it was 90% positive, 10% negative.
But, you know, no one likes to hear the truth.
Which is 50-50.
Well, this is the funny thing.
A lot of people say it's like 10, 20%.
That's Russian roulette.
Like, it's literally Russian roulette.
Stop making this.
But if you kind of look at the, I categorize this as the star wars versus star Trek future, and you can see this in the current discourse.
Are you looking at a world of abundance, which is positive sum, or are you
looking in a world of competitiveness, which is negative sum, because when
you're in a negative sum environment, you have unstable Nash equilibria.
And this is where you lead to cutting corners and everything.
When you're positive sum, then you have stable environments.
And again, Star Trek for all its issues
has stable environments,
but Star Wars definitely does not.
Cycles of destruction.
I prefer the Star Trek versus Matt Max,
because I think it's a little highlighted a bit more,
but it's the same, it's the same conversation.
Yeah. E-Mind, I want to jump into your recent work.
And really, please open the kimono as much as you're willing.
This is a paper that you wrote, When Capital No Longer Needs Labor.
How does labor gain capital?
You also have spun up your latest company, Intelligent Internet. And tell us about this paper
and about Intelligent Internet as far down the rabbit hole as you're willing to go. I'd love to
see what your creative mind has been spawning. Yeah, thanks. Yeah, took a bit of time off and
sort of been thinking about like, I think this is the biggest question of our time for humans.
Because, you know, there's this thing of,
how do you create happiness?
There's a Japanese concept,
if I do what you like, do what you're good at,
do where you believe you're adding value
and other people do too.
People need that progression.
And there's discussions of UBI and others,
but as we discussed earlier on in this pod,
anything that can be done on the other side of the screen
can be done better, faster and cheaper
by a computer this year
Hmm pretty much anything be it design be it taxes all of these things art work
Film addiction and you can't tell it's not a human again. This is this ari this Turing test for remote workers
Then in a few years, it's only restricted by the number of robots
We can produce number of motorcycles and cars we produce is 70 million each year. So let's say robots
are similar. You get that disruption. As you said Peter, you estimated 40 cents an
hour for an R1 unitary robot and that will be as capable as a human probably
in a year or two. Optimus will be the same. This is the biggest crisis that we
have coming because it's an unemployment,
underemployment question of meaning. When a technology can do the work better than you can,
what is your meaning? And how do you acquire labor when capital doesn't require capital?
It doesn't require you anymore. When Ford had his car, he wanted to pay everyone so they could afford four cars. Companies don't care about that as much anymore.
So when kind of looking at that, I was like there are various science fiction futures that are outlined here,
like things from culture by banks to the Star Trek's and the others.
We're probably moving into an abundance post-scarcity economy, but can we make sure this is evenly distributed?
Can we enable people to have a universal basic AI? So it's up to them how they do this. And then the further
question is what is meaning in this? Because the existing economic structures break down.
And as a very practical example of that, let's take the Fed. There'll be lots of discussions
about the Fed, today's the Fed cut rates, you know, other things like that. The Fed's mandate is interest rates, inflation, unemployment. You cut interest
rates that adjust inflation and employment. That's gone. The actual mandate of the Fed
in the next five years completely doesn't work anymore. Because you cut interest rates,
what does it mean? It means people will buy more
GPUs, more computers, right? Maybe.
And more robots.
More robots. And that would impact unemployment. You'll have massive inflation and deflation
cycles. So the very basis of our economy is messed up. And so that's why I was like, what
can we do to help with that? That's why there's concept of intelligent internet. Give universal basic AI to everyone,
gold standard data sets, models, systems.
We figure out ways to coordinate that,
but put this into every nation and build teams
that think about what is the future of healthcare,
education, maybe faith, government, politics,
and get everyone to work in the open
to build an open infrastructure.
Because we have lots of questions
that we don't have answers to and human talent augmented by computers are probably the only way
we're going to figure this out but we need to join it together because the problems we face here in
the UK or US are similar to Spain, India, everywhere so we've got to create that global network.
I think there'll be there's two layers to this. There's the recreation of meaning, because, you know, for the last few hundred years, your occupation, your job title was the meaning you had in your life.
And as we strip that away, people have to find new models for meaning. Entrepreneurship is a rising class because of that people can find their own meaning. We talk about MTPs all the time.
I think the second layer is how do you just ensure basic supply chains of goods and services so that you have bread on the grocery store shelves and clean water, etc. And I think governments are
going to be very stretched to figure this out in an age of potentially malicious AIs that can spread this information and really damage
infrastructure via the autonomous remote monitoring
stuff that they'll be able to do. I think those two are the
those two buckets have to be addressed. Weak, I don't know is
a species if we can navigate through those in an effective
way. Certainly our leadership has no mechanism to deal with this
because they're either not aware of the problem
or they don't understand the scale of what's coming.
And one of those two disqualifies most leadership
and most legislators around the world from this.
So it's a sticky problem.
It's gonna have to be done by smart citizens groups, et cetera, that will navigate this. So it's a sticky problem. It's going to have to be done by smart citizens,
groups, etc. that will navigate this.
I'm concerned about the meaning issue as well in a huge way. I think we're heading towards
a world of what I call technological socialism, where technology is taking care of you. It
is feeding you. It is educating you, it's taking care
of your health, it's all free, you don't need to do much of anything.
So how do you, you know, we all know that a video game that's way too easy is boring
and you stop playing.
And so when life gets boring, how do we keep humans engaged?
We need struggle and we need meaning in our
lives.
I think, you know, Isaiah Berlin had this conceptualization of positive liberty versus
negative liberty. Positive liberty was the freedom to believe in isms, fascism, communism,
religion. They tended to end up quite bad, so he postulated negative liberty, the freedom
from anyone telling you what to do, which led to this laissez-faire
Capitalism and other things and people find meanings in their brands and these narratives and stories
It strikes me that as we move into this next phase
These historical things are coming back in force. We're seeing the polarization of the media in the political class
People are going to sign up to more and more extremist ideologies
exclusionary negative someones, unless we can give the positive views of the future, the future of abundance,
of collaboration and more, because otherwise you're stuck in your local maxima.
Most of these elections have been I want change, because fundamentally how many people believe
in the American dream or the British dream or the Spanish dream or the Indian dream anymore. People aren't actually saying positive visions of the future because people
don't believe you anymore. They don't believe our politicians.
Yeah, we need, we want the leaders.
We need those positive visions. Um, we need the Star Trek, uh, utopian,
not the star wars.
When we've looked at, um, looked at, we had as a community, did some look at history of when societies
or certain pockets meet abundance.
Peter, what do they do?
So the Romans take over Europe, what do they do?
What happens when the Mughals take over India and they have relative abundance?
And it turns out they end up in food, art, music, and sex as for major, not in that order, as major activities.
And then you find ways of doing creativity because human beings struggle for the next level of
things always. You know, we're so built in for that. So there's some optimism in that world.
I can't think of a place, so listen, Steven Kotler and I are writing Age of Abundance,
it's our follow-on. And the big element of the book we think about
is how do we up level human ambition in a world in which
we're gods and we are incredibly godlike?
How do we up level our ambitions that make it worth living,
make it challenging for us?
One of the questions.
What's that?
Just a quick comment here.
Stuart Brand, the futurist, used to say,
we are as God's green minus one,
we'll start acting like it.
Yeah.
And he said that in 1968.
We're more God-like than ever.
So, you know, do we all revert into a video game world?
Do we all get BCI?
You know, this year at the
Abundance Summit, I've got Max Hodak coming. I don't know if you know Max, he
was a co-founder of Neuralink with Elon and he's got a new company called
Science which is doing extraordinary work. You know, like a hundred, a thousand,
ten thousand fold more neural connections and bandwidth on a BCI than we're seeing with Neuralink.
Can we add another corpus callosum-like connection to the cloud that allows us to couple with
AI as AI is taking off versus be left behind like the movie Her?
Yeah.
I mean, these things are coming quick and we have to answer those questions like even this weekend
I had six people I now call me and said
I'm having a crisis of meaning because of r1
Once I saw the logic and the way it was thinking right that's gonna happen more and more but then again
Like said we have think about the mass of people and the human side of this
I think our current systems take away our
agency as slow dumb AI. And one of the main things here is reintroducing the belief of agency.
I can't do this. It can't do that. With this technology, there's nothing that, well, there's
a lot more you can do because it raises the floor for everyone, which is from my perspective, why
we had to get into the hands of everyone. And they make them feel like they're a participant in this because the other
part of this is it seems remote I think this is another part of this shock that
we've had in the last few days right how are you involved in AI you need to have
nuclear reactors and like giant chips and this and that all of a sudden you
can run it on your smartphone. It's very humanizing.
And this, again, why I'm a big believer in open source, to have that.
I love that.
I love that as even the title of this pod, The Crisis of Meaning.
It's incredibly powerful.
Let's talk about your new company.
How much can you tell us on Intelligent Internet? I don't know if you want to talk
about your tokenization plans. I don't want to open the kimono before it's ready, but
I would love to hear your vision of what you're building.
Yeah. So like in the previous company, we got up to the eight digit revenue, hundreds
of million model downloads, great teams. But I was like, the previous company, we got up to the eight digit revenue, hundreds of million
model downloads, great teams. But I was like, the API and SaaS revenues are probably going to go
down to nothing because intelligence gets commoditized, intelligence too cheap to measure.
But someone's got to build the AI for the full stack of cancer that helps you through your entire
cancer journey and organizes all the cancer knowledge. We have the computer do that. Why is
no one doing it? Same for autism, same for education. Once we build
this once and I think it was a Stuart Grant who said, pace layering of knowledge, you
know, you have knowledge of humanity, our common knowledge that impacts everything that's
regulated and meaning education, healthcare government. Why don't we organize that information
into knowledge and then make a system that can get wise and make that available to everyone. So I was like this strikes me as we need
large amounts of compute that sounds like Bitcoin you know and the amount of
compute you will use is inevitable so use that to secure an institutional
great digital currency we'll have details of that coming soon. But then in
the whole crypto space most of which is rubbish and there's increasing demand
for, at the start, back in the day, 12 years ago, 13 years ago, it was all, you can mine
on your laptop, you can mine on your smart GPUs, right?
Then it became about capital.
And do we really want to live in a world where capital determines everything yet again?
I was like, what matters is people.
So what if we create a mining mechanism
where the people can create currencies as well
and use that to fund all of this universal basic AI.
So we'll have details about all that side of things
where anyone can participate and be a part of it
because people wanna be a part of it,
give their data, give their knowledge
and we'll organize all this with dedicated teams
for cancer, autism, education, health, government that think about gentrifying first, release everything
open source.
But I think it is important to have this, someone needs to go and just do it.
Because once we have a cancer model that forms human dots and empathy and works on a smartphone,
no one will ever be alone in their cancer journey again.
And that's half the world will get cancer.
Yes.
Once we have a supercomputer dedicated entirely to organizing the world's cancer
knowledge and making it freely available.
Anytime a new paper comes out, we will advance secure for cancer.
Yeah.
Yeah.
It's insane when I get, when a friend, when a friend of a friend has particular
cancer, they call me and I'm like, dude,
I will start asking around to see who the world's expert is,
but it's all of this is knowable.
You should be able to know what the trials are,
what the current state of the art is,
where it's available, what the risks are,
and have that information instantly.
But you've got to do it.
And again, this is, once you've built the gold standard data sets for our general common
knowledge of humanity, for every country, it's legal, it's medical, it's others.
And for all these sectors, these specializations we have, this is what Salim was talking about
earlier.
Suddenly you have a whole gaggle of specialist agents and robots and data sets fully open source for
everything and then you just need to update it and run it then we can be
about wisdom and build intelligent systems that get wiser and wiser but have
an objective function to help us because for my take the more we help the higher
the value of this new type of Bitcoin again more detail soon and you can be
massively collaborative and open because you want as many people to use it as possible and you want to help as many people as
possible and the total amount of capital needed is not that large. It will give
some estimates but the wonderful thing is it's possible for the first time. The
advances of O1 and R1 type models means that organized the world's cancer
knowledge and making it available or autism or Alzheimer's is just a question of compute.
It's no longer a question of labor.
The ability to make that available to everyone open source on their smartphones is just a
question of compute.
Will there be one model to rule them all for each of these or will there be thousands that
are created?
This is the wonderful thing about AI models.
The way that you train them is called curriculum learning. You start with the whole
internet, then a subset, and then you get into this tuning, specialization,
and localization phase. Then it goes on to your laptop and it gets tuned
continuously. So if you release the data sets and the models for each of those, you
can build a modular system. Like we had these Lora's, these fine tunes of our image model where it can turn into anime
or Ghibli style.
It's the same with this.
Your Apple intelligence on your smartphone
is a base model that's common.
And again, you can ensure all the data in that is fine
and not poisoned, which is why open source open data
is required, in my opinion, for regulated systems
with these little adapters on the top
that are learning about sport and learning about your thing and tuning it to Apple Photos. So you'll have this modularized system where everyone
can pick and choose and that's important when, for example, your kids education. Do you want to
follow your school curriculum and be tied down to just that education model or do you want to be
able to take that education model, know exactly what's inside it and then extend it with another
calculus course
or this or that.
You want the latter, right?
And that's why permissionless innovation is so great.
And this comes back to our deep seek discussion, right?
The fact that it's open source means more people use it
than anything else.
Lama was open source,
more people use it than anyone else.
So if you build great quality models and data sets,
people will use it, they'll innovate on it,
but you can set a really great solid foundation.
So the models inherit from each other.
They've all gone to the same school, then they go to different college, and then they
go to different universities.
But they're interoperable.
I love it.
The future is amazing if we survive it.
I mean, that's really, truly,
I mean, we're, we're heading toward this extraordinary world,
the most exciting time ever. We just need to survive the,
the, the downside, the star wars, Mad Max scenarios.
I have a question for you. If we survive the next five to 10 years,
how long do we live for?
Yeah. So the, you know, this is a lot of the work. I've been public on this and been having debates and
arguments with a lot of the traditional medical and scientific societies that are like, listen,
we're just not going to get past 120. It's built into our genes. In fact, the probability
that you, Peter, or you, anybody,
is going to get past 100 in a healthy fashion
is pretty damn low.
And the fact of the matter is science and medicine steeped
in history and the past, there's good reason to believe that.
But it's same good reason to believe
that humans would never fly and never get to the moon
and never travel at the speeds never fly and never get to the moon and never travel
at the speeds we do and never have instantaneous communications or quantum teleportation or
all the things that were impossible just a few years or a few decades or a century ago.
And the reality is we are a complex system of 40 human trillion cells with a billion chemical reactions per cell
per second.
There's no way a human can understand this and understand what are the root causes of
aging and why we age, but AI can.
I think AI can help us to understand the fundamentals and alter it and not accept what evolution dealt us.
Evolution had a mission.
Evolution had a mission of passing on genes by the age of 30 and then killing you off
so you never stole food from your grandchildren's mouths.
My mission is a little bit different than that.
We're birthed for death.
What's that?
We're birthed for death. Yes. that? We're birthed for death.
Yes.
So that our genes can propagate and we can break that cycle.
So to answer your question, Emad, I think we've got an unlimited future.
Now the question is, are you going to want to live the next 100 years in your meat sack
or the next 200 years in your meat sack, or are you gonna wanna upload whatever the health consciousness is
and your memories into the cloud and be liberated?
And we'll see.
It's crazy to think about, again,
this is such a time of change, right?
And you look at the tools and techniques,
you look at the medical sphere,
we need to reimagine medicine from scratch,
which is like we need core developer teams working in the open on each of these.
What is government? Like that's a question that we're having right now. Do we need to spend so
much money? What is the purpose of government? How many people listening to this feel represented
by the government? What if you have your own AI that you own that is looking out for you,
that represents you, that interacts with the government AI, because every government decision will be checked by an AI
within the next few years.
It'll just do it.
And then they'll be made by an AI,
because obviously the AI is better than the government.
That's scary.
But you can finally have representative democracy.
Yeah, true democracy.
For the first time ever.
These are the positives.
You can have personalized medicine,
you have empathetic medicine.
How much of medicine is actually psychological? You know, like I don't have control of myself,
no one's listening to me, having that aid. These are systems that I think need to be built from
scratch and reimagined. And education, I think, is probably one of the biggest ones of those.
Our education system is completely not fit for purpose, the efforts of everyone and we say that for
system after system. Like you see math academy and things like that and the results that people
are already having. Did you see that one from the school in Nigeria recently? I think it was like
two weeks with chat GPT, they did two years. Wow. It was two weeks or two months with chat,
they did two years advancement just with chat GPT in math. It was insane. I think we've had this conversation before where schools are up in arms and saying, we're
making AI legal.
You can't use chatGPT.
You can't use Gemini 2.
And the fact of the matter is, sure, you can't use that to teach the way you're used to.
But guess what?
You can use it to teach 100x faster and better and set massive objectives for your kids,
help them dream bigger than ever before. But it disrupts the entire, you know, teaching industry.
Well, it's because the school was designed to reduce our agency and remove it to become a cog
within the classical, you know, British Empire. This is a really important point that the
last couple hundred years we've turned humans into robots. You know you stood
in assembly line you stamped out widgets and the efficiency at which how many
widgets you could stamp out per hour was your pay grade and your seniority level
and whatever and we measured you on KPIs and so on and now we're flipping it
around and I found it fascinating that the most valuable colleagues and
employees we have are the ones that learn
the fastest
And that's starting now become the human factor much more again, and that's very very encouraging
Now you can add to it some really funny stuff
So this is I had this piece how to think about AI where I was like
You know building on that thing about AI Atlantis and things like that. Yes, we can design AI in two ways
One is we build agents to replace people.
The other is that we focus primarily
on increasing human agency.
Because our systems have taken that.
And those are two different ways of designing AI actually.
Which is one of the reasons I think I look at the Anthropics
and Googles and others of the world.
I don't think they're focused on increasing human agency
as much as automation and business optimization because their customers are typically businesses.
Or on the consumer side, again, I don't think that it's just become a bit different on the
design pattern side.
But it's exciting because we can revolutionize each of these important things for living
for the first time.
All we can say is it is going to be the most exciting time ever
to be alive for sure. This is why you need your eight hours of sleep a night.
Yeah, for goodness. I did not get my eight hours. I woke up at 4 a.m. to prep for this podcast.
Took a cold shower to wake myself up, but it was worth it because this was a phenomenal conversation.
You lost me a cold shower but okay.
Imaad, so happy to have you back on Moonshots. Saleem, always a pleasure my friend. Imaad,
if anybody wants to follow your current work, where do they go to see what you're up to
and learn more about?
You can follow me at email stack on Twitter or ii.inc, Intangible Internet.
ii.inc, I love that. It's awesome.
Gentlemen, I look forward to having this conversation on WTF just happened in tech again.
We're going to have this more frequently because our heads are spinning
at the speed that technology is moving. Just fundamentally spinning. Take care, Salim. Take care,
Yimad. See you, buddies. Take care, guys.