Limitless Podcast - This Week In AI: Why SpaceX's $1.5 Trillion IPO is Undervalued
Episode Date: December 12, 2025

SpaceX's upcoming $1.5 trillion IPO is undervalued. Why? Four words. Data centers in space. We analyze Elon Musk's vision for using Starlink satellites to enhance AI capabilities, the U.S.-China GPU rivalry, and Google's new Project Aura eyewear. We also highlight Nous Research's impressive AI model breakthrough. Tune in for insights on these groundbreaking developments shaping the future of technology!

------

🌌 LIMITLESS HQ: LISTEN & FOLLOW HERE ⬇️
https://limitless.bankless.com/
https://x.com/LimitlessFT

------

TIMESTAMPS
0:00 SpaceX's $1.5 Trillion IPO
1:02 Starlink's AI Data Centers
3:32 SpaceX's Unique Position
7:14 The Monopoly of SpaceX
8:08 The Risk and Reward
8:41 Science Fiction Becomes Reality
13:36 Communication in Space
16:43 AI Data Center in Space
17:20 Proof of Concept
19:36 GPU Wars: USA vs China
25:18 Google's New Glasses
29:18 Open Source Breakthrough
31:08 Conclusion: Future of AI

------

RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213

------

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
Transcript
On paper, SpaceX's $1.5 trillion IPO would be the biggest in history, raising about $30 billion,
but that's not the crazy part.
The crazy part is that even at that price, we still believe it's undervalued.
In fact, grossly undervalued because this all stems from one realization.
It's turning Starlink into AI data centers in space, and the winner who deploys resources
the fastest gets to reshape civilization.
It isn't just SpaceX stock finally available to the public.
This is Elon using Wall Street's money
to build an orbital GPU swarm with these new 150-kilowatt satellites, and Starship will launch
basically a small city's worth of AI power into orbit every single flight. We're going to talk
about all of this because eventually this leads to lunar factories producing AI satellites.
And if he pulls this off, it's going to be pretty amazing. So in this episode, we're going to
break down why this IPO exists only because of AI. This was not the case prior to this week,
how the physics actually favored data centers in space and why $1.5 trillion might be a rounding error
as SpaceX wins because, I mean, in this game, Ejaaz, it's kind of simple.
Whoever can deploy compute the fastest, they win the AI race, and they probably get to rewrite
the future of civilization.
Yeah, I mean, before we get into it, I want to walk everyone through the kind of series of events
that kind of led to this groundbreaking bit of news.
So it all started off around Wednesday this week, where two news sources, Bloomberg and
The Information, leaked news that SpaceX was potentially going to IPO sometime in
2026, but we didn't quite know the valuation at that point. And then it broke that it was a $1.5 trillion
valuation. And the reason why this is such big news is it would technically be the biggest IPO ever.
And then people were like, okay, but Elon hasn't confirmed it. Well, yesterday, Elon officially
confirmed it. He responded to a great article from Eric Berger, where Eric kind of predicted why
he thinks SpaceX is definitely going to IPO soon, and Elon's response all but confirms it.
If SpaceX IPOs, Josh, at $1.5 trillion, his net worth would effectively double, which would put him just under, not exactly at, but just under $1 trillion overnight,
once the IPO goes live, because he owns 42% of SpaceX. So listen, there's a bunch of big numbers here.
In order to successfully IPO at this valuation, he needs to raise $30 billion.
He needs to, you know, technically deliver on a lot of stuff that we're going to get
into in a second. But, you know, massive, massive vision. So the question I ask myself is,
why now and what is he doing to justify this? What is he telling the people that he's raising
$30 billion from to convince them to give him that kind of money? And it comes to one thing,
Josh, AI data centers in space. Now, you and I have had a very tumultuous relationship with this.
We've come a long way.
We've come a very long way.
And by a long way, in the last four weeks,
when this kind of like entire trend started developing,
at least on social media.
And the concept here is if you're able to put GPUs or AI data centers in space
and harness the energy of the sun,
you effectively have infinite power.
And in this game of creating the best AI to reach AGI level,
you need as much compute as you can get your hands on.
And the fact is, Earth's resources are scarce.
So Elon's whole pitch behind
IPOing SpaceX is he believes he can control and build data centers in space using his Starlink
satellite network. Josh, he thinks he can create a constellation of Starlink satellites that can
speak to each other via laser beams, right? So they can communicate and share, compute and data
between themselves, harness solar energy from the sun and beam that down to Earth to create
AGI. This sounds like something out of a sci-fi novel, Josh. It seems like you're obviously
bullish? Like, how feasible is this? Like, what are we looking at here? Yeah, it's funny because, like,
for the longest time, Elon was strictly no, no way we were ever going to go public.
But the second that they realized that Starlink satellites can be architected as this distributed
network of data centers, it went from no way to we kind of have to. Because, like, a lot of this
AI race, it kind of comes down to deploying assets just quicker than your competitors.
And we've seen this with xAI in the past, where they did not exist, what, two years ago?
And now they are at the forefront with everybody else. And it's because they were able to deploy
their resources faster than everyone else. They were the fastest to get to this large,
coherent cluster of Nvidia GPUs. And as a result, they have some of the best models in the
world. So this is clearly a race to get to resources. SpaceX has the ability to do so. xAI has
the models to do so. But SpaceX doesn't quite have the money. So their projected revenue was,
I think it's about $24 billion in the near term, which, for a fun fact, is roughly equivalent to
NASA's annual budget. But what they realize is, wait a second, if we're going to compete on a global
scale and building AI data centers, we need far more money than that. And that is the reasoning
why they're going to public markets raising the $30 billion. And again, at $1.5 trillion,
that's about $30 billion they're going to raise in addition. So I guess the question,
I started by saying this is fairly undervalued because the stakes are so high. We'll explain the
reasoning. But I'm curious, what your take is on the valuation, the $1.5 trillion number as it sits
today. Okay. Assuming this is economically and physically possible, Josh, it's a no-brainer. And it's for one
simple reason, which Andrew actually outlines pretty well in this tweet that I'm showing here,
to win the AI race, you need the most compute, data, and energy. That last one, if you can get
infinite energy from the sun, you've already won, right? And the only reason why people haven't
considered this before is because it hasn't been possible. Elon will effectively own the highway
and the tolling system into space. SpaceX is like literally a decade ahead of the next nearest competitor.
So if he can create a monopoly on this and actually make this viable, he wins. Your point around
like him needing the money to apply resources to this, we all know that Elon started SpaceX to get to
Mars, right? He wants human civilization on Mars. And he didn't really have a good justification as to why he
would IPO SpaceX, aside from, you know, maybe getting extra cash to help fuel that vision.
Now he realizes that he needs to kind of create this constellation of satellites to pay
for the voyages and the expensive missions to get to Mars. So it's a means to an end is the way
that I'm kind of thinking about it. And so now when I connect the dots, Josh, is $1.5 trillion
valuation undervalued or overvalued? I think it's undervalued assuming he can get the
data centers out in space and beam it down
back to Earth, because it's not just going to be him and Grok and xAI that benefit from this.
It's going to be all the other frontier AI labs that are going to want to or need to rather
use his infrastructure. And that's my simple thesis. Yeah, you can ask the question, well,
one, like, is this a monopoly? Yes, this is a monopoly. The next closest company, Blue Origin, is five to ten years
behind. Is this a monopoly on a market that is meaningfully large? Yes.
AI as well as low Earth orbit satellites, as well as energy, are about some of the biggest
markets on planet Earth. So the total addressable market size is gigantic. And the third question is,
is there an operator capable of actually doing the impossible and making the impossible a reality?
And the answer to all three of the questions is yes. So there is a unique monopoly in an industry
that is larger than any industry that exists today, run by the best operator on planet Earth.
And the convergence of those three things, if it works, in the world where we do
actually get AI data centers in space and we're building artificial general intelligence out there,
we're harnessing a lot of the power of the sun. That is a civilizational-scale shift. And the amount of
wealth generated from that will make $1.5 trillion feel like a drop in the bucket. So maybe today,
$1.5 trillion feels large. But in the scale of things, that's a very small number. Typically, we speak
about companies going from zero to one. And that's where those exponential valuations come from.
With this IPO and with Elon's kind of successful Starship launches and satellite launches,
we're somewhere between zero and one. We're at 0.5. So the risk has been reduced because
he's reduced the cost of going to space. He's already launched satellites successfully. So now,
like the exponential returns seem really, really more feasible. So I think he's going to raise that
$30 billion in a matter of, you know, months, if that, right? The other thing is, like, he's
literally turning science fiction into reality here. We talk about the concept of the Kardashev scale
Type II civilization, which is, you know, humans being able to harness the energy of their local
star, that is, the sun. And we've suddenly gone from "there's no way he's
going to pull that off" to "ah, you know what, he actually might." And so $1.5 trillion seems
kind of very undervalued if you have that kind of mindset. But again, people listening to the show,
Josh, are going to be like, well, I don't think the numbers make sense. You can't get rid of
heat in space. It's a vacuum. How are you going to be able to launch satellites? They're
going to need to be square kilometers wide. Josh, have you got any of the numbers that can
help break this down? Let's run the numbers. So we talked about a good bit of the logistics and the
cooling in a previous episode earlier this or last week. So go listen to that if you're interested in that.
The numbers that we're going to talk about today are the amount of scale they're actually able to
get into outer space and what that looks like. So Elon recently posted on X their ability to get
150 kilowatts per satellite into outer space. So for some reference numbers, the average US home uses
about 10,500 kilowatt-hours per year, which is about 1.2 kilowatts of continuous power. So if an AI satellite
can generate around 150 kilowatts of electrical power, like Elon is saying, 150 kilowatts divided by 1.2
is about 125 homes worth of continuous power. So one of these AI satellites has roughly the power
budget of 120 to 130 American homes running all the time. Now, how much can you do per launch? Because
a launch consists of many of these satellites. So SpaceX's own site says Starship is designed to carry
up to 150 metric tons to low Earth orbit. So how many is that? That's about 50 to 75 of these satellites
per flight. So if you do that math, 60 satellites times 150 kilowatts of energy is 9 megawatts
of power in orbit from one Starship, which is equivalent to a small town in New England
with every single launch that they make into outer space. That is 7,500 homes, that is 19,000
people's home usage. So every fully loaded Starship launch is like dropping a small city's worth
of electric households dedicated just to AI into orbit. And that is an outrageously large
amount of energy and compute that they're putting into outer space, especially relative to what
people are trying to do back here on Earth.
Like, Ejaaz, if you remember a Blackwell rack, which is Nvidia's newest cutting-edge hardware,
it uses 120 kilowatts per rack.
And each one of these satellites surpasses a Blackwell rack.
So it's like this gargantuan amount of energy and compute that's being sent to orbit every single launch.
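To make the arithmetic above easy to check, here's the same back-of-envelope math in a few lines. All inputs are the figures quoted in the episode (150 kW per satellite, 10,500 kWh per year for an average US home, roughly 60 satellites per Starship), not official SpaceX specs:

```python
# Figures as quoted on the show; treat as rough estimates, not specs.
HOURS_PER_YEAR = 24 * 365          # 8,760 hours
HOME_KWH_PER_YEAR = 10_500         # average annual US household usage
SAT_KW = 150                       # quoted power budget per AI satellite
SATS_PER_LAUNCH = 60               # midpoint of the 50-75 estimate

home_kw = HOME_KWH_PER_YEAR / HOURS_PER_YEAR        # ~1.2 kW continuous
homes_per_sat = SAT_KW / home_kw                    # ~125 homes per satellite
launch_mw = SATS_PER_LAUNCH * SAT_KW / 1_000        # 9 MW per Starship launch
homes_per_launch = SATS_PER_LAUNCH * homes_per_sat  # ~7,500 homes

print(round(home_kw, 2), round(homes_per_sat), launch_mw, round(homes_per_launch))
```

Running it gives roughly 1.2 kW per home, about 125 homes per satellite, and 9 MW (around 7,500 homes) per fully loaded launch, matching the numbers discussed above.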
Yeah, he's tapped into the infinite energy glitch.
And it's become a reality.
And if he can harness all of that, he basically wins.
The other thing I was thinking about, Josh, is to your point around,
SpaceX or Starlink specifically being a monopoly on this.
That's effectively the way that I see it.
Like pretty much every single usable satellite out there
is a Starlink affiliated satellite.
In fact, we've got this really cool graphic
that we want to show everyone here.
Oh yeah, this is sick.
Okay, if you're looking at this,
if you squint, it might look like a bacteriophage, right?
But no, this is a simulation of all live satellites
that are orbiting Earth right now.
And if you notice my cursor, whenever I touch pretty much any satellite that's kind of bumbling around here, you'll see Starlink repeated over and over and over and over again.
So he has the market monopoly.
I think something like 90% of satellites out there are Starlink based or have been launched from SpaceX infrastructure and launches.
So he has the market monopoly.
And the reason why that is so important is the capex cost of launching this out into space and making AI data centers in space feasible.
It actually becomes affordable to do this, right? Effectively, if you imagine having just empty
data center slots bumbling around in space and all you need to do is fit in the latest GPU
and launch it up there for cheaper than the cost of the weight of the GPU, Elon will be able
to do that via SpaceX. So it's a crazy monopoly. I do not see anyone coming even close. Maybe it's
Amazon or rather Jeff Bezos's blue origin, but that's still a decade behind, right? So as far as I see it,
Elon's got this in the bag.
Yeah.
And I mean, because of the rapid reusability of Starship, you're able to send just tons of mass to orbit.
Elon, he posted that he's expecting one megaton per year.
So, I mean, that includes something like 100 gigawatts.
It's on the order of, like, a dozen large power plants.
So this map that we're looking at right now, full of all these satellites, is going to get much more full.
Another interesting thing about this map is the way that they communicate with each other, Ejaaz.
A lot of the communication done on Earth is done through fiber optic cables when it relates to
AI because it's the fastest way to move information as close to the speed of light as possible.
The cool thing about space is it's a vacuum with no resistance or friction.
So when these satellites are communicating with each other, even over a distance, the only
limitation is actually the speed of light.
And the bandwidth is huge because there is no friction in communication because they are
simply sending photons to each other.
They're sending and receiving photons.
So the unlock that that enables: where Earth is two-dimensional and you can only build so much here,
you now move to three dimensions, and you're restrained only by the speed of light in terms of photons moving back and forth
between each other. This gets really sci-fi-esque very quickly, but also you could start to see how
effective this is. And what an incredible business. Oh my God, I am stoked for the CEO. Are you an
investor at $1.5 trillion? Yeah. Yeah, of course I am. Yeah. Hell yeah. Dude, it is the TAM,
the total addressable market, is literally infinite space. Yeah, well, it does not exist.
It doesn't exist.
Yeah, exactly.
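The speed-of-light point above is easy to quantify. As a rough sketch, compare a laser link through vacuum, which runs at full light speed, against optical fiber, which slows the signal by its refractive index. The 1,000 km hop distance here is purely illustrative, not a Starlink spec:

```python
# Vacuum vs fiber latency sketch. The hop distance is hypothetical.
C_KM_S = 299_792.458   # speed of light in vacuum, km/s
FIBER_INDEX = 1.47     # typical refractive index of optical fiber

def one_way_latency_ms(distance_km, refractive_index=1.0):
    """One-way signal latency in milliseconds at c / refractive_index."""
    return distance_km / (C_KM_S / refractive_index) * 1000

hop_km = 1_000  # hypothetical distance between two satellites
print(f"vacuum laser link: {one_way_latency_ms(hop_km):.2f} ms")
print(f"optical fiber:     {one_way_latency_ms(hop_km, FIBER_INDEX):.2f} ms")
```

For the same 1,000 km, the vacuum link comes in around 3.3 ms one way versus roughly 4.9 ms through fiber, which is the advantage the hosts are pointing at.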
Earth will eventually become energy constrained.
If you look at like the energy bills, Josh, of the local town surrounding data centers that
are in Abilene, Texas, it's gone through the freaking roof, dude.
And this is going to be something that we constantly come at odds with because, you know,
who needs it more?
What's more important?
Humans that are kind of local and conducting their own kind of business or creating
absolute artificial superintelligence for the world to use.
I know some people might, you know, argue the latter.
So the point is, we need to find energy elsewhere.
And the most obvious one is the thing that stares at us in a blue sky every single day.
So to round this off, Josh, and for the listeners and watchers of this show, if you want to imagine what the end game of this looks like, it's this graphic that we're showing you on this screen right now, except it's Starlink satellites all over the sun.
And it's just absorbing the energy and transmitting that down to build
artificial general intelligence on Earth. It's a crazy, crazy vision. Things are getting really
sci-fi really quick. Oh, Josh, I had one more. I had one more thing. Just a little nugget,
a little Easter egg. Okay, if SpaceX is one of your favorite AI companies, give me another one.
Just give me your, maybe your next three or four. Like, what's one company that comes to mind?
I want you to say Google, by the way. So like, you can cut this and say it now. Well, clearly Google is one
of them. We are huge fanboys. We love Google. Why do you ask that question? I was hoping you were going to say that.
They are an official percentage owner in SpaceX to the tune of around 8%. That means they invested
$900 million in SpaceX about six years ago. Josh, that is currently worth $111.11 billion
if SpaceX IPOs at $1.5 trillion. When you got it, you got it, man. And Google just got the juice.
Listen, I'm a SpaceX shareholder too. My friend John and I, we worked so hard back in 2020
to find an SPV of an SPV of an SPV to get shares in. And it is now up like 50.
I'm so excited to finally have some liquidity. Not that it matters. I fully intend to add more,
but like, man, good time to be an optimist, good time to be an investor in the future because
it is getting so weird and so wild. But if you are new here, every Friday we release an episode
about kind of a roundup of the world in AI. There's more stuff that happened.
And in this second half of the episode, we're getting into that, starting with what feels like a natural extension of this, Ejaaz, right?
Where we officially have the first AI data center in space that actually worked.
Yep.
So this tweet that you're seeing is from Philip Johnston. Philip is kind of like the guy that kicked off this trend, I would argue.
He is the CEO and founder of StarCloud, which is a startup incubated by Y Combinator with a very bold vision of putting AI data centers in space.
The one difference between Philip and Elon is he's actually put a data center in space
and he launched it through one of Elon's Falcon rockets last month.
And they put an Nvidia H100 in space.
It's currently out there running inference and training models.
And it was able to do two really cool things, Josh.
Number one, it trained a version, an open source version of ChatGPT, from scratch on all of Shakespeare's works,
just for fun, to see if it could
figure it out. And secondly, it took an open source version of Google's AI model called
Gemma and is currently, right as we speak, running inference on that model in outer space
and back to Earth. So you can send a prompt to the Gemma model out there and get a response
in a couple of seconds, which is just insane to say. Now, people listening to this might be like,
well, an Nvidia H100, that's an old GPU, who cares? And it's just one of them. Like,
this isn't the vision that you just pitched me of a satellite network eclipsing the sun.
Baby steps, guys, this is baby steps.
The fact that this GPU can survive out there
dispels the rumors that, you know,
oh, you can't radiate the heat
or radiation is going to kill the GPU.
It's out there.
It's been running for two and a half weeks now.
So this is a very feasible thing.
Josh, are you excited about this?
I'm excited about it.
It's a great proof of concept.
It shows like, okay, shielding works.
Okay, heat dissipation works.
What I really love the most about this
is the models that they run.
Here on the post that you're showing
they trained the nanoGPT model from Andrej Karpathy on those Shakespeare works. So it's cool,
like, I mean, we're the number one fanboys of Andrej here. He created this small little nanoGPT,
and now they've created a model in space. And there's two models running there. There's one that
they actually trained, and then there's one that they had pre-built, that is returning tokens.
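For readers curious what "training a char-level model on Shakespeare" means, here is a drastically simplified stand-in for the nanoGPT idea: a character-bigram sampler, not a transformer, and not StarCloud's actual code. It "trains" by counting which character follows which in a tiny snippet, then samples new text from those counts:

```python
# Character-bigram toy "language model" -- a caricature of char-level
# training, NOT nanoGPT itself and NOT StarCloud's code.
import random
from collections import Counter, defaultdict

corpus = "to be, or not to be, that is the question"

# "Training": count which character follows which.
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def sample(seed_char="t", length=20, rng_seed=0):
    rng = random.Random(rng_seed)  # seeded for reproducibility
    out = seed_char
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # dead end: no observed successor
        chars, weights = zip(*followers.items())
        out += rng.choices(chars, weights=weights)[0]
    return out

print(sample())  # gibberish with Shakespeare-ish character statistics
```

A real char-level GPT replaces the bigram counts with a transformer predicting the next character, but the train-on-text, sample-from-the-model loop is the same shape.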
This is awesome. It is a proof of concept, and it shows that, I mean, granted, I think people have now
realize that this is possible, but they're actively proving it in reality. And for that, it's
awesome. StarCloud seems to be doing cool stuff. I'm very happy with the company, very excited for the
future. Yeah, but enough talk about GPUs in space, Josh. Let's, let's kind of like touch.
You mean we're going to ground this conversation? I am grounding the conversation. I am touching
wood. I don't have any grass near me right now, but I'm back on Earth officially.
There is a GPU war going on between our two favorite foes, Josh, the USA and China. Now,
they've had a very tumultuous relationship, mainly because
Nvidia has been restricted from selling their GPUs to China, which, by the way,
makes up a huge percentage of their revenue every year.
So the fact that Trump said no to Jensen was a big deal.
Until this week, when he gave the thumbs up to sell not just any Nvidia GPU,
but their second older generation GPU, the H200, to China.
And it comes with a few stipulations, Josh, which is 25% of the revenue that Nvidia makes from these sales to China needs to go to the US government.
So he's paying a pretty hefty tax on this.
And so people were pretty excited.
Nvidia stock jumped up on the news, except on the same trading session in that same day, it retraced because China put out a commentary officially saying, we don't want your GPUs.
And in fact, we're going to restrict the number of H200 GPUs that our companies buy from you because they can just use Chinese GPUs, right?
So that was the response from China saying, like, listen, we don't need it.
So it was a very volatile day in terms of trading the Nvidia stock, except that then the Chinese companies themselves spoke up and said, hey, we're DeepSeek over here.
And we kind of need Nvidia's GPUs because
we can't train frontier-level intelligence
unless we have those GPUs.
So there's been this big back and forth.
China in response to that has been like,
okay, let's hold like a council session internally.
Let me talk to some of the companies here
and figure out how many H200s you guys actually need.
And I have a few thoughts about this, Josh,
but what's your general reaction before we get into it?
It's funny.
So the story isn't the fact that we made them available to China.
It's that China said we don't want them.
In fact, we already have them.
We have stolen them.
We have smuggled them in.
So that is a generous offer of you, but hey, we've already been training models on your hardware for a long time.
So thanks, but no thanks.
And that comes to the surprise of probably no one.
Everyone had an idea that a lot of these were getting sent to Singapore and then routed into China.
And I suspect what that means is these new DeepSeek models, which have been hyper-efficient in the past, will just become even more efficient going forward.
And we've always talked about this constraint that they've had where they've been resource limited,
therefore they've had to be resourceful on the software stack.
In a world where they are resourceful on the software stack, but also have a competitive
hardware stack, that feels like a supercharged DeepSeek.
And I feel like whatever model they released next is going to be a seriously heavy hitter.
Yeah, I mean, it's a few things.
There's the performance competition that you just explained, but there's also a strategic political
one. And the strategic political one has two sides. Okay. So if I'm China, I don't want my Chinese
companies hooked on American GPUs. I want them to use Chinese-made GPUs. Why? Because if the
Chinese-made GPUs are good enough to create frontier-level intelligence, we now have less of a
dependency on the US. In fact, it's probably the most important dependency that China has on the US right
now, stretching across from raw materials to compute, to energy, to whatever it might be, right? So
they want to rely on Chinese-made GPUs, not the US-made GPUs. On the US side of things, they're thinking
of this specific strategy, which is, if we can keep selling China older generation Nvidia GPUs,
their intelligence will always lag ours, and we will always be the leader. The second way to
look at that is also, Nvidia has a really sticky software moat. So typically anyone that runs
NVIDIA GPUs isn't just hooked to them because they're using the hardware.
It's because they rely on the software stack called CUDA, C-U-D-A.
I think it stands for Compute Unified Device Architecture, right?
The more and more they use Nvidia GPUs, the more and more they rely on it, Josh.
So it's kind of like a drug for them.
So this is strategic from the US.
They're like, sell them the GPUs, just take a 25% cut.
Like we're releasing Rubin, which is NVIDIA's latest GPUs coming out in 2026.
Just keep them hooked on it.
Keep them behind.
Let's keep them in check.
Well, it seems like that's what they're doing.
So that's the update on the China front, which is interesting.
But there's another update on the hardware front, Ejaaz,
so I'm very excited to talk about.
And that is Google's new glasses.
Now, I am old enough to remember the Google Glass version one,
which came out probably over a decade ago now.
And at the time, it was viewed as the nerdy Silicon Valley kind of like freak headwear device.
And it never went anywhere.
it didn't work very well. But all these years later, it appears as if Google has announced that they're getting back in the hardware game, back in the eyewear game, and releasing these glasses. So Ejaaz, please fill me in. Tell me why these are not going to suffer the same fate as Google Glass 1.0, or Meta's Ray-Bans, which are still not on your face, and for good reason. So it's a very difficult thing to build good eyewear, and no one's done it. Why is Google going to do it differently, if they even can?
So in terms of like specific details, we don't have much information beyond this demo video that we're showing on screen here.
They've termed the project Project Aura, which is basically going to be their next-gen version of Google Glass.
And my initial reaction from this video is it looks very similar to the Apple Vision Pro experience, coupled with a few different features that Meta's Ray-Bans kind of introduced in their demo a few months
earlier. My take is, okay, number one, it's going to be better than Google Glass. I would
hope so, right? From the design of the glasses, it looks more casual and applicable to a wider range
of people. It looks a little chunky on her head now that I'm actually looking at the person that's
using this. I was going to say, they look beefy. Those are some chunky glasses. That's definitely
way too big, which kind of implies that, you know, assuming this is a prototype,
they've got a chunky kind of physical buildout, like the computer running it.
I'm super curious what the sensors look like here, how they've been able to optimize for it.
Aesthetically, it doesn't look like the best, but functionally, it looks pretty cool.
Again, like that Apple Vision Pro experience, you can kind of like have several desktop screens across.
You can access all your favorite apps.
The number one thing that I'm excited about this, Josh, is Google has built out a range of different apps and services that I use on a daily basis.
So if I can somehow kind of combine that in with my visual experience on a daily basis,
it automatically becomes useful, right?
Apple kind of took a step in this direction going from the Apple iPhone to the Apple Watch, right?
People didn't have to pull out their phone.
They could just kind of like look at their wrist.
Google can now have the benefit of like kind of taking over your vision doing this.
So automatically through that, I think it's going to be useful.
In terms of viability, I'm going to base it heavily on the Meta Ray-Bans' reception,
which was absolutely terrible.
I think Google has the benefit of being able to scale hardware manufacturing way better
or have more experience doing that way more than meta can.
So that's pretty bullish.
But it's going to come down to the execution.
And I'm going to reserve my right to judge for now.
Yeah, I'll go on record as being the biggest hater for this product.
I think it's going to be terrible.
But the thing that I love is that Google's making it.
They're working on it.
And I'm excited for version three of whatever this is.
So whatever they make this year or next year, in 2026,
this demo, that's not really appealing. Whatever the version two is, maybe the battery shrinks,
maybe the glasses shrink, still not that great. Version three, normally version three of
these things start to get good, so maybe 2028 will be somewhere in the reasonably viable
glasses world. But it's cool. I'm glad Google's in the game because this is very clearly an
important form factor in the future of the way we interface with AI and computers. And there's
another company now working on manufacturing them. So for that, it is a win for everybody.
And the final item on the docket comes from the open source world.
Nous Research, which is a frontier open source intelligence lab,
had a really big breakthrough.
And by big, I mean a really small big breakthrough.
They launched a 30 billion parameter model, Josh,
which compared to frontier intelligence models right now,
which range from, I think, 700 billion to one trillion parameters.
This is tiny.
Except it did a very big thing.
It scored an 87 out of 120 on the notorious Putnam mathematics competition.
For those of you who aren't aware of this competition, it is incredibly hard even for the smartest humans in the world.
And in fact, comparing its results to the human scores of last year, it would have placed second, scoring perfect marks on eight problems, which is just crazy.
So the fact that this tiny model packs such a crazy punch is nuts.
Now, for those of you thinking, oh, well, it's just a mathematics nerdy model.
Who the hell cares?
Bear in mind that Alibaba spent, I think, upwards of $15 billion collectively in training their latest Qwen 3 model, which is a pretty amazing open source model.
It scored 27 out of 120 on the same test.
So the reason why this excites me is for a few reasons. Number one, Nous Research open sourced this entire thing. So if you're listening to this and you're curious to kind of test this out yourself, you can do it. And a 30 billion parameter model is something that you can feasibly run on consumer hardware at home. That's number one. Number two, they did this in a distributed fashion. So typically when you're training a model, you want to build heavy data centers and run a bunch of reinforcement learning, reasoning, stuff like that to create the model. They didn't use any of that. They used a distributed network of computers
to be able to create this.
So that's really cool.
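To make the distributed idea concrete, here is a toy sketch of data-parallel training: each simulated worker computes a gradient on its own shard of the data, and the workers average their gradients before every update. This is purely an illustration of the concept, not Nous Research's actual training stack, which is far more sophisticated.

```python
import random

# Toy model: fit y = w * x with squared loss, using simulated
# distributed workers. Each worker computes a gradient on its
# local data shard; gradients are averaged ("all-reduce") before
# every update. Illustrative only, not Nous's actual system.

def local_gradient(w, shard):
    """Mean gradient of (w*x - y)^2 over one worker's shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def distributed_train(shards, steps=200, lr=0.01):
    w = 0.0
    for _ in range(steps):
        grads = [local_gradient(w, s) for s in shards]  # each worker, in parallel
        w -= lr * sum(grads) / len(grads)               # averaged update
    return w

random.seed(0)
data = [(x, 3.0 * x + random.gauss(0, 0.1)) for x in [i / 10 for i in range(1, 41)]]
shards = [data[i::4] for i in range(4)]                 # 4 simulated workers
w = distributed_train(shards)
print(round(w, 2))  # converges close to the true slope of 3.0
```

With equal-sized shards, the averaged gradient is identical to the gradient on the full dataset, which is why the distributed run matches single-machine training. The hard engineering is everything this sketch skips: unreliable networks, stragglers, and compressing what gets communicated.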
And number three,
something that they pioneered, Josh,
is something called an agent harness.
Basically, the way that this model became so smart
is in the post-training phase
where they spun up a number of AI agents
which reasoned with each other
in a competitive tournament bracket-style thing
and the one with the best solution won
and submitted their answer.
And that's how they reached this frontier-level intelligence.
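The tournament idea can be sketched in a few lines. In this toy version the "agents" are just candidate answers and the "judge" is a scoring function; the real harness pits reasoning agents against each other, but the single-elimination selection mechanism is the same.

```python
# Minimal sketch of a tournament-style "agent harness":
# candidates compete in a single-elimination bracket, a judge
# picks the better of each pair, and the last one standing is
# submitted. Illustrative only; names and structure are assumed.

def run_bracket(candidates, judge):
    """Single-elimination tournament; judge(a, b) returns the winner."""
    round_ = list(candidates)
    while len(round_) > 1:
        next_round = []
        for i in range(0, len(round_) - 1, 2):
            next_round.append(judge(round_[i], round_[i + 1]))
        if len(round_) % 2:            # odd candidate gets a bye
            next_round.append(round_[-1])
        round_ = next_round
    return round_[0]

# Toy task: pick the candidate closest to the true answer 42.
candidates = [37, 41, 50, 44, 39, 43]
judge = lambda a, b: a if abs(a - 42) <= abs(b - 42) else b
print(run_bracket(candidates, judge))  # → 41
```

The appeal of the bracket is that the judge only ever makes pairwise comparisons, which is a much easier call than scoring every candidate on an absolute scale.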
The reason why this is so important
is if you assume that compute is the only thing you need
to train frontier level intelligence,
this experiment disproves that.
This experiment proves to you
that you can achieve that same level of intelligence
with much less compute.
It's awesome they're getting these distilled models
to be so powerful.
And this is kind of a trend that projects
towards a loose bear case for AI,
which is if you can continue to distill these down
into these hyper-optimized models
that can eventually run on your phone,
does the AI at the edge overpower the AI in the data center?
And does that mean that the value of these large data centers goes down?
I don't think so.
But this is a proof of concept, directionally proving, or at least giving us some data on, what that actually means and how that looks.
This is cool.
It's a fun project.
And the fact that they're doing it in a decentralized way seems interesting.
That wraps up the end of today's episode.
It was a big one.
If you can't tell, Josh and I are slight bulls on SpaceX and Elon's companies.
You didn't show off your hoodie. Show off your hoodie.
Oh, I didn't. I got my Nous Research hoodie.
Yeah. So clearly you see where both of us stand here.
Yes. Space maximalist. Yes. Other other maximalist.
Open source. Distributed. Open source. Distributed. Distributed training, sick.
I'm a fan of both. I'm a fan of both.
Oh, yeah.
And one thing's become clear to me as we wrap up this episode is that the future of AI is dependent
on a number of different
protocol layers, Josh.
I don't think SpaceX wins
without the help of Tesla,
without the help of Starlink,
without the help of X-AI.
They all kind of feed into each other
and I'm excited to see
how this industry builds out.
You can bet your ass Limitless is going to be
the channel that breaks all this news.
I just want us to take
a little bit of a victory lap, Josh.
We have called out
two very important trends months earlier
that have become super important.
One of them being AI data centers in space,
the other one being the Google Bull case.
Am I missing any?
Well, another one, we had Blake Scholl on the podcast,
who builds supersonic jets,
and he just announced a supersonic jet engine generator
for AI data centers.
So we've been early and we've been right.
I mean, directionally right.
We first started off as haters,
but haters of the AI space.
But we've changed our mind
and we've been covering it along the way.
So listen, we might not always be right,
but we're always early,
and you'll always be up to date
on everything that matters in this world
of AI and frontier technology.
Yes.
So if you're listening to this and you aren't subscribed,
and that is 80% of you, by the way, please subscribe.
It helps us out so much wherever you're listening to it.
It could be on Spotify, Apple, YouTube, whatever.
If you're on YouTube actually hit the notifications button
because that also alerts you of the latest alpha.
We drop three to four episodes a week.
It's awesome.
And the final call to action is we have a banger of a newsletter
which drops every Friday.
And from next week, it drops twice, every Wednesday
and Friday: one with an AI investment thesis on a company that Josh and I are super bullish about,
and the other highlighting the top five bits of news in AI and frontier tech of that week.
So you don't want to miss it.
If you want to keep up to date, subscribe to the Limitless ecosystem.
We will see you on the next one.
