Limitless Podcast - This Week in AI: OpenAI Changes Everything, Neo Robot, Nvidia To $5T, Grokipedia
Episode Date: October 30, 2025

🌌 LIMITLESS HQ: LISTEN & FOLLOW HERE ⬇️
https://limitless.bankless.com/
https://x.com/LimitlessFT

------

NVIDIA hits a historic $5 trillion all-time high, and OpenAI is restructuring... Artificial General Intelligence (AGI) within a decade. We explore the implications of OpenAI's transition to a public benefit corporation and its partnership with Microsoft. Also featured is the 1X Neo humanoid robot for home use, Grokipedia, an AI-driven encyclopedia by Elon Musk's Grok, and the innovative x402 web standard for microtransactions. We conclude with insights on tech stock trends and express our gratitude to listeners.

------

TIMESTAMPS
0:00 Intro
0:47 OpenAI's New Vision
4:53 Microsoft's Strategic Moves
8:53 Navigating AI Investments
10:17 Introducing Humanoid Robotics
18:13 Grokipedia Launches
25:37 x402
30:45 Space Data Centers
37:46 Closing Thoughts

------

RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213

------

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
Transcript
Welcome back to the weekly AI Roundup.
It's been another huge week.
NVIDIA became the first company to cross $5 trillion in market value,
closely followed behind by Apple and Microsoft,
which have crossed $4 trillion in market cap,
some big numbers being thrown around.
OpenAI had a pretty controversial week.
They restructured their entire company,
but they also revealed their secret plans for AGI over the next 10 years.
Elon Musk decided to release an AI version of Wikipedia called Grokipedia,
powered by xAI's flagship model Grok. We'll get into that.
And finally, there is a new humanoid robot available to you for the cheap price of $499 per month.
Josh, there's a lot to get through today.
Starting off with all the OpenAI news, there's two main things to discuss here.
They made two big announcements.
One was their vision for the next 10 years on achieving AGI, and the second thing was on the restructuring.
Starting with the AGI side of things, Sam Altman came on
a livestream with his chief scientist, Jakub, to describe what the vision of OpenAI is. And I think
this is in response to a lot of people saying, Sam, like, do you have your best intentions at heart?
You keep saying that, but then your actions are kind of producing products that might seem
kind of consumery and not really big thinking. Are you still focused on AGI? So this is kind of like
clearing the plate here. And there's three main takeaways that I got from this that I found interesting.
Number one, Sam has extended his deadline to achieving AGI.
He says he's going to achieve it within 10 years, and it's going to be a slower transition than initially expected.
He describes it as achieving AGI gradually over a number of different years.
This is interesting to me because we're finally achieving a realistic timeline, in my opinion, versus some kind of like hype headline where we're just kind of like thinking it's going to arrive tomorrow.
Number two, he's started giving solid milestones on this.
He says in 2026, the deep learning that OpenAI is conducting internally will result in an AGI-level science model, which will perform at the caliber of an AI research intern.
And then he says two years later in 2028, they'll have an AI researcher level model.
So you go from intern-level to researcher-grade performance, which is super cool.
And the final point that really interested me was around compute.
Josh, they've spent or committed around $1.4 trillion on data centers right now,
which is roughly the equivalent of 30 gigawatts.
It's a huge amount of compute.
But what shocked me the most was he came out and said that his eventual plan
is to have a factory built every week that produces a gigawatt of power.
That's about $20 billion of compute per week.
All really huge numbers.
It kind of took my breath away.
Josh, what's your take?
OpenAI is just doing what everyone kind of knows they were always going to do.
They very quickly realized they can't be a not-for-profit.
They are now becoming a for-profit organization.
In fact, it's a public benefit corporation.
So it's a PBC, not an LLC or a C corp or anything,
which means the intention is to be kind of aligned with public interest,
but basically it just blows the cap off of everything.
And this is a really big deal.
You mentioned to me actually earlier,
and this was news to me, that Sam Altman expressed interest in going public.
I think there's a big opportunity here to talk about how much money can be made by OpenAI going public and by
kind of restructuring this. So the restructuring deal is interesting. Are we ready to talk about that? Can we go
into this like crazy chaotic restructuring that's happening? Okay. This is fascinating to me. So on screen
there is a disastrous diagram with a bunch of arrows. A lot of it is kind of hand-waving, not important.
The things that matter: Microsoft owns a large percentage of the nonprofit. The nonprofit quickly realized they
needed to become a for-profit. They had a capped upside in the previous version, but now they do
not. So Microsoft was limited previously to a 100x return on their investment. Now there is no
limit to the return on the investment. And this is interesting
for a few things, Ejaaz, because one, I am very resentful of OpenAI because they are very clearly
the fastest growing AI company in the world. And I have no way to get exposure to them. That
doesn't feel like it's aligned with the public. Sam Altman is taking all the money for himself. He's
throwing it at GPUs, and I get no upside.
That doesn't feel right.
This is OpenAI, you're a public benefit company.
Why are you not benefiting me as a member of the public?
Second to this is that Microsoft now seems to be the best way to get exposure,
because there is no way of getting access to OpenAI until the IPO.
Microsoft is now the biggest stakeholder in the OpenAI company at 27%,
versus the OpenAI nonprofit, which is actually at 26%.
So the largest form of exposure is through Microsoft.
They have uncapped profits, and they are reporting earnings, I think, this week.
So Microsoft might be the best way to get exposure to Open AI currently.
But OpenAI as a whole has kind of disappointed me through this whole process because
it's all been very hand-wavy and not really aligned with, I think, the public intention that
they stated, just because they've always been this big for-profit entity disguised as a nonprofit.
Yeah, I completely agree with a lot of the points you made.
And to just kind of give the listeners a bit of context here, OpenAI was founded in 2015 as a nonprofit.
It was set up as a nonprofit org with this ambitious vision of we are going to achieve AGI and we are going to provide it to the people.
So everyone should have access to this super powerful technology.
Then four years later in 2019, they decided, we kind of want to make a bit of money from this and we need to raise a lot of capital.
So in 2019, they set up a for-profit structure, but it was owned by the nonprofit.
So they could still get away with saying, hey, we're a nonprofit.
But it was capped, as you said, Josh, to any investor for 100x, just 100x returns.
So a lot of feedback they've been getting and a lot of pressure they've been getting from future investors or current investors recently is,
hey, we want to make more money on this deal.
Like, it's unfair that we're investing billions in you.
Microsoft invested $13 billion.
And we want, like, more upside than 100x.
So Sam went to the drawing board and was like, okay, we can create a new structure called a public benefit corporation, which as you said now has uncapped gains. It's pretty crazy. So like now you can just make as much money as you can on a normal private company that you were investing in. And the biggest point around this is on this diagram here, you'll see if you zoom in, if you squint, Microsoft, minority owner, they are now no longer a minority owner. Like you said, they are the majority owner, which is just crazy.
But I'll tell you a few other reasons why Microsoft is a huge winner from all of this.
It's not just money.
It's not just the fact that their $13 billion stake is now worth $135 billion.
What a crazy return.
But they still maintain exclusive access and rights to all of OpenAI's research IP,
which means that any kind of new model that they release,
Microsoft basically has first say and exclusive access to provide it to their partners,
to their customers and any clients and enterprise clients that they want to bring in in the future.
Secondly, and this was a genius move, Josh, they set a standard. They said, listen, OpenAI,
we will release all our exclusive access and IP rights only once you've achieved AGI.
But of course, the obvious question is, what the hell is AGI? How do you determine what AGI is?
There's so much discussion around what the hell this thing is. So they said, we'll leave it to an
expert panel of third-party analysts. So it's a panel which is going to decide and determine
OpenAI's fate going forward. So the winner of all of this, this restructuring, this OpenAI
vision announcement is actually Microsoft. And it's just crazy to see. Okay. So how do you navigate
this as a participant who wants to get wealthy off of AI? There is a few ways. I think for people who
observe public markets, you'll know the fact that Apple is sitting at an all-time high, Microsoft is sitting
at an all-time high, Google too.
Basically, across the board, everyone is sitting at all-time highs, which I believe is high signal.
Clearly, things are working. Clearly, a lot is being spent. Clearly, there is some sort of a bubble,
but the end is not quite near. The big question that I have is, like, come on, how do I get access
to OpenAI? If you're not an accredited investor, if you can't figure out how to
get private shares, the answer right now is just Microsoft. Nobody in the world owns more shares,
not even OpenAI themselves, than Microsoft does. And Microsoft, like you said, Ejaaz, they have access
to their IP. They get access to a revenue share. They get access
to actual equity within the company. And if you believe in the success of OpenAI at a scale that we do,
which is many, many trillions of dollars of market cap, 27% of that is nothing to scoff at. And I mean,
Microsoft is a $4 trillion company. In the case that OpenAI makes it to that level,
that is another nearly trillion dollars in Microsoft's pocket, which is a really big deal.
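That back-of-the-envelope math can be sanity-checked in a couple of lines; the future OpenAI valuation below is a hypothetical assumption for illustration, not a figure from the episode:

```python
# Back-of-the-envelope check of the stake math discussed above.
# The future OpenAI valuation is a hypothetical assumption, not a
# reported figure; only the stake percentages come from the episode.
MICROSOFT_STAKE = 0.27   # Microsoft's share of OpenAI, as discussed
NONPROFIT_STAKE = 0.26   # the OpenAI nonprofit's share, as discussed

hypothetical_valuation = 3.7e12  # assume OpenAI one day reaches $3.7T

microsoft_upside = MICROSOFT_STAKE * hypothetical_valuation
print(f"${microsoft_upside / 1e12:.2f}T")  # roughly a trillion dollars
```
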
So it is unfortunate that it gets diluted out through having to buy the same company
that makes Microsoft Office, which is just, like, an abomination, in my personal opinion at least.
But I think that is the way to kind of navigate this going forward until we do get the eventual IPO from the OpenAI team, like Sam Altman discussed.
So that's kind of how I'm thinking about navigating it: I want access,
Sam's being a little closed off with that,
but Microsoft is probably your path to success there.
That's so interesting.
I read it in kind of a different way, Josh, which is the gloves are now finally off.
OpenAI doesn't have the constraint of having to operate as a non-profit organization;
it's quite clearly a for-profit.
The foundation owns equity stake in this.
So I think Sam will probably move quite quickly on an IPO
because he needs so much capital
if he wants to get to a gigawatt factory per week, like he says.
That's probably going to be the biggest IPO ever.
I think right now, OpenAI and SpaceX
are kind of neck and neck for most valuable private company in the world.
And when that company goes public, my God,
the amount of volume that is going to be traded on day one,
I'll be buying the top.
Shout out to their early employees who got stock in OpenAI. My gosh, you are about to buy very, very, very, very luxe houses. But I guess that's it for OpenAI. This is a really fun topic that I've been dying to talk about, which is humanoid robotics. I love humanoid robotics. And the first one has hit the market, or at least so they say, or I'm not entirely sure where this stands. Do you just want to introduce everyone to the 1X Neo humanoid home robot? What the hell is it? What does it do? How does it work? Yes. So if you remember, on a previous episode last
week, we spoke about the new Figure humanoid robot which can enter your home. It could do a bunch of
things, but it can also do a bunch of other things like manufacturing, service jobs and all that kind of stuff.
This is the latest humanoid robot to come out of a company called OneX. And my first impressions of it, Josh, can I, can I be honest with you? Can I be honest? I'd love for you to be nothing but honest.
It looks like a grown-up Teletubby. I don't mean that in a demeaning way. Let me skip ahead to show you a clip of
this thing. It's wearing this kind of soft suit. It's got a very pleasant-looking
face. It's got these kind of glow-up ears. It looks like an adult alien Teletubby.
But it's comforting to be around. And in terms of like what it can do, it's this humanoid robot that can
live in your home and it can do all the things you would kind of expect a houseware robot to do.
It can clean your dishes. It can put away the groceries. It can lift things. I think it has like a
lifting capacity of 55 pounds, which is pretty crazy for something that weighs
60 pounds. I don't know how those mechanics work. I'm not a physicist, but that is pretty insane to
see. And this video is like a really high production thing, Josh. I don't know how much they invest
in this, but if the robot that we eventually get in our homes in 2026 for the people that are
ordering this for 500 bucks or buying it outright for $20,000, which is a hefty price tag,
if it gets anywhere near something like this, I think it's going to completely change the game.
I would personally get something like this in my home. Josh, have you put down the
the payment on this? Like, what are you doing here? Uh, no, not at all. Again, here's the
tale of two worlds, where I am a techno-optimist who is rooted in reality. And I have been
around long enough. I have seen enough of these videos that look very beautiful and lovely and
very welcoming to have a humanoid robot. But you just, how many people have humanoid robots
in their house? A total of zero. It just doesn't exist. No one shipped them to market. A few things
that are important to know about this robot. It is, like you mentioned, $20,000 or $500 a month. And it does
the following. It is able to do basic tasks like laundry, possibly the dishes, possibly vacuuming,
but we're not entirely sure. And the problem with that is that if it can't do the task,
you have to use this thing, I believe they call it expert mode. Now, expert mode is the giant red flag
waving in the sky for me because expert mode is the highest signal that this is not ready to enter
the market. Expert mode means if it's not able to do the thing you want, you can schedule a time with
the Neo team and someone from the headquarters or wherever they're based will get into this little
VR suit and manually walk the robot through your home to do the task for you. So instead of having
a maid come to your house, you get some tech engineer putting on some goggles and walking
through your personal space doing the task that you want done, that you've scheduled ahead of time
in order to make this happen. And I believe that is part of what that $500 a month is:
you're actually paying hours of labor for humans to do the emulated version inside
of this humanoid robot. So that to me is a really big red flag. The second red flag that I want
to surface before I pass back to you, Ejaaz, is the fact that this is a pre-order. You're not
buying a robot. You're buying the rights to a future robot. And there are no guarantees that this
robot will ever exist at scale, nor that it will exist to the extent that it's shown in this video.
And it's one of those things, and we've seen this with Figure: it's like, show me,
don't tell me. And I'm seeing a lot of these really lovely demos, but until this thing is
actually produced and until this thing gets shipped and until it arrives at people's houses,
I am going to be sitting here a little skeptical because a lot of the companies like, I'll use
Tesla as an example, they have a proven track record of building really badass manufacturing,
hardcore engineering stuff. These startups don't really have any track record of manufacturing hard
things. And a humanoid robot is an incredibly difficult thing to manufacture. So when they say
pre-orders and delivery in 2026. Well, I assume the best case is like December 31st,
2026. I would be shocked if they delivered these at scale before then. But I'm wondering if you have,
if you have a different take on that. Sadly, no. I am not going to pay the equivalent of half a month's
rent, well, maybe not in New York, but in some other state, to get some third party that is wearing
a mocap suit and acting like a shitty version of Avatar, the movie, to navigate my robot
and clean my clothes. I would rather spend that money to get some actual house help to come in
and help me out with all the things that we're seeing on this video so far. Okay, I have one simple
rule. If you are going to advertise a next-generation humanoid robot as autonomous,
it should be autonomous. Don't sell me something that needs to be teleoperated. It makes
sense in the case of something like self-driving cars. Waymo I know has a teleoperation thing.
And that makes sense because there's a lot of safety and stuff involved.
But for something that is consumer-friendly, for something that is meant to make my life a hell
of a lot easier in my most private space ever at home, it should be fully autonomous.
And on top of that, the home is the most private place, right, Josh?
Like I have all my valuables here.
I don't want some dude in a third world country, sorry, navigating around my house and picking
stuff up and putting it down.
What if they break it?
What if they do something wrong?
Who's liable for that?
Do I get insurance with this thing?
I have no idea.
And then tacking on a $20,000 price tag to buy this thing outright.
If you are buying this outright for $20,000, I'm going to go ahead and say that you are kind of an idiot at that point.
Like I do not advise doing this.
And then the second point you made, Josh, I think is the more prescient point, which is this thing hasn't been built yet.
This hasn't been produced at scale.
This thing isn't going to come into your home.
It's not even guaranteed to come into your home until some time in 2026.
By that time, you're going to have other robot companies, like Unitree, like Figure, come out with their actual robots, which are being built and scaled.
I have no idea where 1X is as a manufacturing company.
You've mentioned several times actually on our show, Josh, that one of the bullish cases on Tesla's Optimus robot is the fact that Elon is such a manufacturing nerd.
He's so good at scaling.
And I just don't see this or want to trust this with a robot setup.
I don't want to call people dumb for spending $20,000.
Because if I had a disposable $20,000, I would certainly be an early adopter.
Maybe we differ on that.
One of the weird edge case questions that I have is like, okay, you pay $500 a month.
Then what?
Like, what if I don't want it anymore?
Is there like a repo man that comes?
Does the robot walk out of my house?
How does that work?
The teleoperation thing, it's a serious problem.
Granted, you can schedule it.
But I imagine myself like, oh, you walk out of the shower and here is like some dude from like
wherever he's from.
He's just like, say, like, hi, I'm just cleaning your windows.
It's this really weird, like, intimate distraction.
And I think the open-endedness of that expert mode relative to the set amounts of chores that the robot can do leaves a lot to be desired or just a lot of unknowns.
Where if I do spend all this money, I don't actually know what it's capable of doing.
They say it could do a few things, but there's a lot of edge cases.
Everyone's houses are different.
Just a little uncertain on this robot.
But again, happy people pushing this forward.
I think this is great progress.
I love the vision.
If the vision in the video becomes a reality, that is a life that I really am
excited about. That is a reality that is awesome. So, like, shout out to the 1X team for trying.
They're pushing the ball forward. I am not going to knock it too hard because I just admire their
effort in making this a reality. And like everything else, we'll see. Ship the first robot. Let's
see how it performs in homes. Let's see how you're actually able to roll this out. And then we'll
see how it goes. But this is, I guess that's it for the robot stuff. Now we get into
cyberspace, the world of Grok. Oh, hell yeah. And Grokipedia has launched this week.
So why is this a big deal? This is a huge deal for me because
I grew up on Wikipedia and I used it to help me research and write a bunch of essays in high school and in college.
I still used it up until a few days ago, where it would help me clarify something, like learning about some archaic thing which I wouldn't be able to find anywhere else.
It is the internet's knowledge resource.
Millions of people use it every single day.
But there's one issue, which is I don't know whether this source of truth is actually the truth.
And it's been something that we've been increasingly faced with in society as things like social media envelop the world.
Like everyone's making TikToks. Everyone's learning their news from TikTok, from tweets.
And so it becomes really murky in terms of like figuring out what that truth is.
So Elon came out a few months ago, and it's crazy to say that this happened a few months ago.
And he said, I think AI will be the best source of truth.
And so what I'm going to do as part of building out Grok, which is xAI's flagship AI model,
is create an AI version of Wikipedia.
So he built that, and he aptly called it Grokipedia.
So what we're seeing in front of us right now is a demo of Grokipedia.
It looks like a simple ChatGPT prompt bar,
except that you can search for everything and anything that you typically would in Wikipedia.
And when you do so, you get this really neat little concise summary and intro on whatever topic.
One we're seeing on the screen is the Tesla Cyber Truck.
and then it goes on to kind of dig into its early life,
similar to what you'd see on Wikipedia,
how it originated,
and it cites sources, Josh,
throughout the entire thing.
That to me is the most important part
because typically in Wikipedia,
they do this thing which is kind of hidden
behind closed doors,
where they restrict the number of sources
you can use as legitimate sources for your articles.
And if you look at that list
and if you compare it to the blacklist,
which is the very, very long list of sources which you can't quote,
it starts limiting what can be factual and what can be opinionated.
And it starts skewing a lot of people's opinions as to what is the truth or what is not.
Grokipedia and Grok, on the other hand, are completely unbiased.
It pulls sources from anywhere and everywhere,
and it does its own analysis to validate whether the source is legit or not.
It does this in real time and it processes hundreds of millions of articles,
sources, primary resources, secondary resources, every single day. So I'm super excited about this
because it's going to be my new research tool going forward in terms of fact checking.
The purpose of Grok is to seek truth. That's kind of their end goal. That's their stated goal.
The same way that the stated goal of other companies like OpenAI was to have open-source AI,
Grok's is just to seek truth. And I think with Grok 5, they're really making an attempt to lean into
that, because a lot of these traditional models are trained on the data of the internet,
which is highly opinionated and highly biased. And what the Grok team,
the xAI team in particular, is doing is figuring out ways to filter through the noise
and find as close to source truth as they can. And we've kind of seen this work on the X
platform with community notes, how, generally speaking, they are the closest thing we have to source
truth, because it takes all these opposing opinions and kind of aggregates them and comes to a
conclusion. What Grokipedia is showing us is community notes at scale. And it's kind of,
it's the training set that is going to be used on the next Grok model, which is Grok 5,
but they're kind of offering it to the public as a public good. And not only is it a public good,
it's an interactive public good. So if you go to Grokipedia, you can actually suggest a new page
to be generated, because they're all AI-generated. And Grok will take it upon itself to
filter your request and see if it's worthy or not. So if Grokipedia deems your request
worthy enough, it will actually go and do all the work to generate a new
Grokipedia page. And I think that kind of collaborative nature is fun not only for Grok to get better
training data, but for the public to kind of get their own way. And what's funny, Ejaaz, is I've seen a lot of
larger influencers on X posting their Grokipedia page with a thank-you letter, because for so long,
their public perception has been skewed in a way that wasn't necessarily true. And Grokipedia allows
that to change, where now there is a source of truth about the person. And when you see slander
by someone online, you can check, you can reference check this, and it at least is trying its best
to be truthful. So that to me is why Grokipedia is this exciting, new, important revolution
in technology that we have as a public good. Thank you to the X team. Yeah. Yeah. Well, Josh,
one of those people that posted a thank you about Grokipedia was Larry Sanger,
one of the founders of Wikipedia. And he goes in this tweet: okay, I finished reading my first
long article on Grokipedia, the one titled Larry Sanger. That's him. That's his name.
And he goes on to describe that for a version one for an AI-based Wikipedia, it's actually
really good. And it got a lot of things correct. Most importantly, the major things, which are the
most controversial things about his biography. And he goes on citing a few examples. He does
have a few criticisms, the one being that it gets a few minor mistakes that kind of roll in to become a
big problem throughout the article. But again, he says, hats off to you, Elon at the end.
For a V-1, this is amazing. I'm looking forward to seeing 1.0 and 2.0 going forwards.
I think what a lot of people ended up doing when this product released, Josh, was to test a few
controversial topics. Now, I don't want us to be politically affiliated at all on this podcast,
but just to give a few examples, there was a side-by-side comparison of RFK Jr., which is
a big political figure in the US.
And you notice that, like, one is very explicitly biased and opinionated versus
the other, Grok being the more unbiased version.
You've got a side-by-side comparison of Donald Trump.
And we had a kind of unbiased adjudicator here in this experiment, Josh,
where they gave it to Gemini 2.5.
I don't know why Gemini 2.5 is the unbiased AI model of choice, but apparently it is.
And it ended up deciding that Grok was in fact the more unbiased model.
But there's also a bunch of really cool features that you can do in this as well, Josh.
You mentioned that you can, like, submit to Grok an article that you want to get generated and have it instantly done.
You can also fix mistakes in Grokipedia as well.
You can kind of highlight a request, or highlight a thing that you think is wrong, submit it to Grok,
and it analyzes it in the back end.
The coolest part about this is you're not relying on a bunch of like specific moderators,
human moderators that go to bed at night that are awake at different times to get back to you.
This is just done by an AI all in real time.
If there's something that you don't understand, you can also just highlight it, similar to how you would on your iPhone, and click search.
And it brings you the Grokipedia page of that thing.
So overall, I think this is a net improvement on what Wikipedia was.
Do I think it's the best thing?
No, but I think it's going to get a lot better over time.
And I'm super excited about this.
This thing is open source.
Anyone can access it.
Super cool. Bullish on Grokipedia. And it's actually not the only open-source public good that we are getting in this week's episode. There's a second one that goes by the very weird, convoluted name of x402. And I want to just briefly outline x402, give you, like, the 10-second little elevator pitch. You can imagine x402 turns the web into a giant vending machine. So you just, like, picture the internet as a vending machine. You can walk up to it and you could try to, like, grab something off of the shelf, whether it be a file or a video or a song.
And if it costs money, the website will say, 402, pay this tiny amount.
And you could pay it instantly.
It's a very tiny amount.
Like we're talking fractions of pennies.
You could pay instantly using crypto stable coins like USDC.
And then boom, you get the thing.
And the amazing perk of this is you don't need an account.
You don't need a password.
You just pay for the thing you want and you walk away.
Just like a vending machine.
You want to watch a new video.
Oh, it's like, give me 10 cents, or one-tenth of a penny, and you get access to
this video and you can take it off the wall.
And x402, it's just this open standard, kind of like HTTP.
In fact, they use part of the HTTP protocol for this, which allows the internet to implement
this across the board.
So anyone who wants to integrate it can integrate it.
You can kind of imagine it.
I think a lot of people are familiar with Linux, how it's just open source.
If you want to integrate it, you can.
That's what x402 is.
And it's this really awesome thing that allows AI agents in particular to start engaging with
the internet without needing an account, without needing a password, without needing to prove
that they are humans. So Ejaaz, you have this big explainer up. Do you want to add anything to that
explanation? No, I think you did a great job. What I will add is, I love the vending machine
example, but just to emphasize, this is a new web standard. So you don't have to go through
a middleman like Stripe or PayPal. This is something that you can spin up and integrate into your
website or app today, right now. And what's cool about this is it unlocks a bunch of really cool use cases.
In the previous world, before X402, you would need to set up a PayPal account.
You would need to set up a checkout system or enable subscriptions via your own API.
So if someone wanted to get access to your product, they would query your API and you could offer them one of two options:
here's all the information in the product that you want for free,
or you need to subscribe to my thing, which is kind of like, think of a newsletter or a news media corporation in today's world.
You've got to pay a subscription to get access to the Financial Times, for example.
Now you can go to the same app and website and just pay per use, which is the coolest part for me.
Like, Josh, do you know the number of times that I've gone to read an article on tech or whatever,
and I've just been paywalled?
I would love to be able to pay like 50 cents or whatever the amount is one time and get access to it instantaneously.
A protocol like this allows you to do it and any human can get access to it.
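That pay-per-use flow can be sketched in a few lines. This is a minimal illustration using only the Python standard library; the "X-Payment" header name and the quote fields are assumptions for the sketch, not the exact x402 wire format.

```python
# A minimal sketch of an x402-style pay-per-request flow. The "X-Payment"
# header and the quote fields are illustrative assumptions, not the spec.

def handle_request(path, headers, paid_receipts):
    """Return (status, body). Unpaid requests get HTTP 402 plus a
    machine-readable price quote; requests with a known receipt get content."""
    receipt = headers.get("X-Payment")  # assumed header name
    if receipt in paid_receipts:
        return 200, f"full article at {path}"
    # 402 Payment Required has been reserved in HTTP for decades;
    # x402 puts it to work by attaching payment instructions to it.
    quote = {"amountUsd": 0.50, "payTo": "example-address"}  # hypothetical
    return 402, quote

# First request: no payment attached, so the server quotes a price.
status, body = handle_request("/articles/42", {}, paid_receipts=set())
print(status)  # 402

# Second request: the client paid and retries with its receipt.
status, body = handle_request("/articles/42", {"X-Payment": "r1"}, {"r1"})
print(status)  # 200
```

No account, no password: the receipt itself is the credential, which is what makes the vending-machine analogy work.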
But the second part is something that you mentioned, Josh, AI agents.
I'm firmly of the belief that the future way of digital commerce is not going to be via humans or facilitated by humans.
It's all going to be done digitally through agents.
You need a standard like this, an open protocol that facilitates payments in a matter of microseconds, to exist for that world to exist.
So this is a major, major evolution.
People probably won't be as excited about this until they start using the product themselves.
They probably won't have any idea that it's happening in the back end.
But as a former and current crypto nerd, this is super cool.
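The agent-with-a-wallet idea can be sketched end to end. Here `fetch` is a stand-in for a real x402 client hitting a paywalled resource, and the receipt handling is assumed for illustration; prices are integer cents to avoid float drift.

```python
# Sketch of an AI agent auto-settling x402-style 402 responses from a
# fixed budget. fetch() and the receipt are stand-ins, not a real client.

def fetch(url, receipt=None):
    """Hypothetical paywalled resource: 1 cent per request."""
    if receipt is not None:
        return 200, "article text"
    return 402, {"amount_cents": 1}

def agent_get(url, wallet):
    """Fetch url; if quoted a price within budget, pay and retry once."""
    status, payload = fetch(url)
    if status == 402:
        price = payload["amount_cents"]
        if price > wallet["budget_cents"]:
            raise RuntimeError("quoted price exceeds the agent's budget")
        wallet["budget_cents"] -= price  # settle the micropayment
        status, payload = fetch(url, receipt="paid")
    return payload

wallet = {"budget_cents": 10_000}  # a $100 budget for the agent
print(agent_get("https://example.com/article", wallet))  # article text
print(wallet["budget_cents"])  # 9999
```

The key design point is that the 402 response is machine-readable, so the agent never needs a login, a CAPTCHA, or a stored card on file.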
Yeah, well, here's why people should be excited about it because initially it sounds horrific.
I'm like, wait, I don't want to pay to read an article or to watch a video.
And it seems like a terrible, terrible, terrible user experience.
But the alternative has actually been much worse.
It is the advertising model that gets in the way.
It's the subscription model that costs $30 a month just to read a couple of articles.
And what this does is it kind of offloads a lot of the financial burden
of a company onto an open protocol. So if you wanted to go read
that article, Ejaaz, you wouldn't need to pay $20 a month to the New York Times. You could just pay a
couple cents for that specific article and get a chance to get the content that you want and move on
with it. And I imagine that the net of this probably ends up less than what it would
be in the traditional sense of things. So that hurts a lot of traditional companies, where if you are
making a killing off of $20 a month for people to read five articles, this is probably going to
be a little scary. But for the rest of the open internet that wants to lean into this, that wants to
have the AI agents do things on our behalf, if I could give my agent a wallet with $100 in it and say,
here's your budget for the next couple of months, go get me all of the information I need,
track down all the sources you need and aggregate that for me. That is like a huge, huge unlock.
And most importantly, you just don't need an account or anything. It just lowers the friction to use
the internet. So this by all means, great progress, big fan of X402 and just excited to see what
comes from it. Normally when we get these protocols, the amount of innovation that happens as a
second order effect of it is just massive. So this has a real chance at really shaping the way
the internet looks and how we engage with it over the next couple of years. Okay, going on to our
final topic and arguably the most controversial on the docket this week, we want to put data centers out
in space. This is something we mentioned last week where a company called StarCloud is planning on
building an AI compute data center and launching it into space. Apparently, this is going to make
it a hell of a lot more efficient. Apparently, beaming compute down to Earth is way more
environmentally friendly because it's out in space. There's nothing around it. It can just radiate heat,
right? But Josh, I know you have a lot of strong opinions on this. I have this tweet lined up to give an
update as to what exactly they're doing and how they're achieving it. And I have to say,
and by the way, this tweet thread was produced by an AI model, Perplexity. So shout out to them.
They give a very compelling case, Josh. There's a few things. They talk about the fact that these
satellites are probably, this hardware is probably not going to be as heavy as people are
claiming it to be. So it actually makes it efficient or cost-effective enough to launch
some of these things into space. It's going to be the first ever AI model
launched, trained, and fine-tuned out in space.
That's another one.
And also they talk about the biggest criticism that they've faced,
which is around how these data centers in space are going to shed heat.
There's no air up there to carry it away.
Space is a vacuum.
And they talk about this neat little convection layer that they're going to kind of
cover the entire data center with.
I don't know if that's true.
That's kind of like the theory around it.
But most important,
the economics are pretty insane. On Earth, they claim that it's going to cost you around $167 million
to set up the same kind of thing that they're trying to do in space, but in orbit, it's only going to
cost them $8.2 million. Josh, to any kind of businessman looking at these figures, that is an
investment-worthy company. Tell me why I'm wrong. This is outrageous. I can't even start talking about
this. See, this is proof that you can't really trust AI agents. So the cost
structure here is pretty outrageous. So, $167 million versus $8.2 million. Clearly there's
something wrong there because the cost per kilogram to orbit at the moment is $2,000 to $10,000 per
kilogram. Five gigawatts is an astronomical amount of infrastructure, particularly
because in order to cool a five-gigawatt data center in space, where there is no atmosphere,
it requires something like 16 square kilometers of radiators for heat dissipation.
And if you want even a 40-megawatt data center up there, that is a gigantic structure
built on technology that is not really proven.
There are a lot of things, like shielding, that are really difficult because there is a lot more
solar flare radiation.
If a single bit gets flipped in any of these training runs, it's very difficult to recover.
You have to do the whole thing over again.
So radiation is a problem. The cooling is a problem. And on cost, I think they must be assuming the cost per kilogram to orbit is like Starship final form, when it gets down to maybe $10 to $30 per kilogram. The reality is that it costs several orders of magnitude more to do that today. And what this isn't taking into account either is the decreasing cost of electricity over time. Since the early 1900s industrial era in the United States, we have not really needed to produce that much more energy. Our energy growth
has been kind of gradual, but it's never gone up the exponential curve in the same way technology did.
We are very much at the exponential curve right now, where that is changing.
And I think to assume that energy costs are going to remain this high is fairly outrageous,
because clearly the goal is to get energy costs as close to zero as possible.
There is no change to this trend.
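The radiator and launch-cost pushback can be sanity-checked with two back-of-envelope formulas. The temperature, emissivity, and dollars-per-kilogram figures below are round assumed values for illustration, not StarCloud's actual numbers.

```python
# Back-of-envelope checks on orbital data center claims. All inputs are
# assumed round numbers, not StarCloud's actual figures.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w, temp_k=300.0, emissivity=0.9, sides=2):
    """Area needed to reject waste heat purely by radiation (the only
    option in vacuum): P = sides * emissivity * SIGMA * A * T^4."""
    return power_w / (sides * emissivity * SIGMA * temp_k**4)

def launch_cost_usd(mass_kg, usd_per_kg=3_000.0):
    """Launch cost at an assumed present-day price per kilogram to low
    Earth orbit (today's range is roughly $2,000-$10,000/kg, far from
    Starship's aspirational $10-$30/kg)."""
    return mass_kg * usd_per_kg

# Gigawatt-scale power really does imply square kilometers of radiators:
print(radiator_area_m2(5e9) / 1e6)  # ~6 km^2 at 300 K

# And launch mass dominates cost today: 100 tonnes at $3,000/kg is $300M.
print(launch_cost_usd(100_000))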
And this whole thing kind of seems outrageous.
Now, granted, I love the idea that they are experimenting, that they want to try to make these fun
outer space orbits, but it just seems kind of hand-wavy.
Like, one of their bragging points is, oh, this is a hundred times more compute power than the largest
GPU that's ever been in space. And that's because we don't ship GPUs to space. This is just
the first big one we've ever shipped, and it's a single one. And real data centers now need
millions of these coherently operating together. So to me, it's like, it's a fun science experiment,
but I wouldn't take this very seriously because there are lots of long time scales and technology
breakthroughs required in order to make that number realistic. All right. Okay. Listen, Josh,
all very valid points,
but I have one more comment to make.
You criticized the fact that this company is suggesting
to build out, what is it,
a 16-square-kilometer satellite
or data infrastructure to actually build this.
That's right.
Okay, to cool this, right?
Isn't the whole point of space that there's a lot of space?
Why can't they just launch whatever the hell they want?
We have infinite space in this thing.
Why can't they just do that?
It turns out space isn't super infinite,
particularly in low Earth orbit.
I imagine there's going to be a large battle
for low Earth orbit.
Why low Earth orbit?
Well, it's low.
It's close to Earth.
And because of a thing called latency,
we want these things,
we want to be able to communicate
with these things faster.
And unfortunately, the speed of light
is a hard cap,
and we haven't figured out how to make things travel
any faster than that.
So if your data center is further away,
it takes much longer to communicate back with Earth.
And that becomes a problem.
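The speed-of-light point is easy to make concrete. The altitudes below are the usual reference values for each orbit, assuming a straight vertical path in vacuum; real links add routing and processing delay on top.

```python
# Best-case round-trip light delay to an orbiting data center.

C_M_PER_S = 299_792_458.0  # speed of light in vacuum

def round_trip_ms(altitude_km):
    """Time for a signal to reach the satellite and come back, in ms."""
    return 2 * altitude_km * 1_000 / C_M_PER_S * 1_000

print(round_trip_ms(550))     # low Earth orbit (Starlink shell): ~3.7 ms
print(round_trip_ms(35_786))  # geostationary orbit: ~239 ms
```

The two-orders-of-magnitude gap between those numbers is why everyone fights over low Earth orbit rather than parking hardware further out.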
What I'd prefer is,
instead of putting these giant
16-square-kilometer structures
into that precious space up there,
well, wouldn't it be cool
to just throw a lot more
of these Starlink V3 satellites down there
so they can beam direct to cell service back to us
I think there's a lot more practical utility
of that low Earth orbit versus
throwing these gigantic things
that require a lot of space
If you saw, we talked about the SpaceX launch
last week. Those satellites are very small.
They don't take up a lot of space, but they're very high impact.
These, on the other hand, are gigantic.
And yeah, it doesn't strike me as a very exciting or lucrative way of using our limited real estate up there.
Yeah, I appreciate the boldness of the idea.
This team did come out of Y Combinator, so I'm presuming it's quite fitting to the batch and cohort they're graduating from. Y Combinator always launches pretty loony ideas, which sometimes end up becoming world-changing.
But we are yet to see whether StarCloud will prove their
thesis of launching Jensen Huang's
Nvidia GPUs into space.
Speaking of Jensen Huang,
$5 trillion market cap,
it is an absolute crazy week.
It is earnings week this week, guys.
Stock market is at all-time highs.
We mentioned Microsoft and Apple,
Apple super randomly, coming out of nowhere.
I thought AI had passed them by.
I thought they had completely skipped the entire phase.
Somehow they surpassed a $4 trillion market cap.
Maybe they're underpriced.
We'll talk about more of that, potentially next week.
But that is all that we have on our docket today.
I just want to say from me personally,
thank you so much for the support all of you guys have shown us.
So far we had like one of our most viewed videos ever last week
and we're having an amazing week this time.
All the comments and feedback that you guys are giving, all the likes,
all the subscriptions,
all the ratings that you're giving us on RSS, Spotify, Apple Music,
wherever you're listening to this, are hugely, hugely appreciated.
If you haven't done any of that already, that's okay.
Please do it now.
It takes a few seconds.
Josh, do you have anything else that you want to share?
Oh, I have to remind you.
We've had a big week in the world of YouTube, which is awesome.
There's been a lot of new people joining and subscribing.
Just a reminder that there is an audio version of this available on Spotify where you can also watch the video, as well as just audio only on RSS feeds, basically anywhere where you get podcasts.
Spotify is my favorite way to watch the show.
Sometimes I'll rewatch our episodes, and I'll do it on Spotify.
And it's nice because it's a pocket player that does audio, but it also has video built right in.
So if I'm on the treadmill and I want to watch, I could just watch it on my phone.
Or if I just lock my phone,
because unless you have YouTube Premium,
you can't lock your phone there and still listen,
on Spotify it still plays in the background.
So Spotify's my preferred way of watching.
But yeah,
just thank you all so much for the support
and for sharing with your friends
and just being here for every episode
as we travel along the journey
and reach all-time highs across the board,
particularly in stocks this week.
So big earning season coming up.
I'm sure we'll get into some of that next week,
but that's been another great week on Limitless.
And thank you all for watching as always.
We'll see you guys next week.
