Limitless Podcast - THIS WEEK IN AI: Google TurboQuant, OpenAI Ends Sora, SpaceX IPO
Episode Date: March 27, 2026

Time to dive into the impact of Google's TurboQuant algorithm on memory stocks, enhancing AI performance while shaking market valuations. We analyze OpenAI's challenges with Sora and its strategic pivot towards AGI Deployment. We also discuss a potential $2 trillion SpaceX IPO and Apple's restrictive App Store updates affecting AI innovation. Finally, we touch on Meta's ambitious market cap target amidst layoffs and Josh's surprising new role as an AI music producer.

------
🌌 LIMITLESS HQ ⬇️
NEWSLETTER: https://limitlessft.substack.com/
FOLLOW ON X: https://x.com/LimitlessFT
SPOTIFY: https://open.spotify.com/show/5oV29YUL8AzzwXkxEXlRMQ
APPLE: https://podcasts.apple.com/us/podcast/limitless-podcast/id1813210890
RSS FEED: https://limitlessft.substack.com/
------
TIMESTAMPS
0:00 AI Market Shock
1:14 Google's TurboQuant Breakthrough
4:40 Memory Market Reactions
6:35 OpenAI's Sora Shutdown
13:49 OpenAI's Focus Shift
14:58 SpaceX IPO Rumors
17:52 Apple vs AI Vibe Coding
24:59 Google and Apple's Strategic Moves
27:13 Meta's Ambitious Goals
28:19 Unveiling Josh's Music Secret
34:17 Future of AI Music
------
RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213
------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
Transcript
Google might have just popped the AI bubble.
They released a new algorithm yesterday,
which shrinks the requirement of memory needed to run an AI model,
which isn't great news for the top memory stocks,
which had billions of dollars of their market cap wiped out over the last 24 hours.
We're looking at SanDisk, down almost 7% today.
Micron is down 18% over the past five days,
the majority of that yesterday.
And SK Hynix, another major memory provider, is down 6% today.
It's absolutely decimated the market.
We're going to get into this story and so much more on this episode.
Yeah, and some important context to know before we get into this is the reason why these memory stocks are so high is because memory has been the single largest constraint in the world of AI.
If you have ever tried to build a PC or buy a PC in the last 12 to 18 months, chances are it was almost twice as expensive as it would have been 18 months earlier.
That's because of AI.
It's what happened when crypto stole all the GPUs.
AI is taking all of the memory.
and memory is the limiting factor.
Memory is what serves inference whenever you ask ChatGPT or Anthropic a question.
When it serves you the answer, that uses inference, and that uses memory.
So memory has been the single crux.
It's the reason why companies haven't been able to scale as fast as they want,
because the costs have been out of control.
This algorithm changes things a little bit.
This changes the math here, just a smidge,
but enough to scare the market entirely, right?
Yeah, it changes it in a huge way.
And the wildest part is the Google research team made this public and available for everyone to read the blueprint of.
So it's called TurboQuant, and they're describing it as a new compression algorithm that reduces LLM key-value (KV) cache memory by six times and delivers up to an 8x speed-up.
So let me translate what that means.
When you type or speak to an AI model, it has a short-term memory.
Think about like humans, they kind of hold things in their head for a bit, and then they go to bed.
and then they commit some of those memories for long-term memory,
and some of those they kind of discard.
LLMs work in a similar way,
and it's called the cache memory.
Now, the issue with LLMs is that cache memory,
as you talk to an AI more and more,
gets really quite big and bloated,
which means that AIs are very expensive to run.
That's why we spend so much money on GPUs.
That's why these data centers cost billions of dollars to build out.
It's part of the cost every time you inference a model.
Now, what Google did was they released an algorithm which said,
"Hey, yeah, you know that thing that bloated up your AI model? We can cut that down pretty massively, by six times actually, and save you a ton of money." So the world is going crazy over this new research, because if you could scale this up to Claude or ChatGPT, you may not need as many GPUs as we originally thought to serve frontier AI models to everyone.

Yeah, it's a really big deal. Like you mentioned, every time you message a chatbot, the model stores your entire conversation in that KV cache. And on a large model, like a 70-billion-parameter model, if you
run a long conversation, that cache will eat up to like 40 gigabytes of space on your GPU, which is
more than the actual model itself. So that's why this is such a huge issue. And the big breakthrough
here is that Google can do this without any loss in quality. If anyone's a Silicon Valley fan,
the TV show, the crown jewel of the entire show was this middle out compression algorithm
that was completely lossless: you could just compress the data without losing any of the quality.
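That 40-gigabyte figure mentioned earlier is easy to sanity-check with back-of-envelope arithmetic. Here's a sketch, assuming a Llama-70B-style configuration (80 layers, grouped-query attention with 8 KV heads, head dimension 128, fp16 values); the exact numbers vary by model, and the 6x factor is the paper's claim, not something derived here:

```python
# Back-of-envelope KV-cache size for an assumed 70B-class model config:
# 80 layers, 8 KV heads (grouped-query attention), head_dim 128, fp16 (2 bytes).
def kv_cache_bytes(seq_len, layers=80, kv_heads=8, head_dim=128, bytes_per_val=2):
    # The leading 2x is because both keys AND values are cached per token.
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_val

ctx = 128_000  # a long conversation, in tokens
fp16_gb = kv_cache_bytes(ctx) / 1e9
compressed_gb = fp16_gb / 6  # applying the claimed 6x compression
print(f"fp16 KV cache: {fp16_gb:.1f} GB, after 6x compression: {compressed_gb:.1f} GB")
```

At a 128,000-token context this comes out to roughly 42 GB uncompressed, which matches the ballpark quoted above, and about 7 GB after a 6x compression.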
And Google's announcing that they've actually done this. They've actually solved this problem. And
that's why the memory market is freaking out. It's like, oh my God, wait, if we can get six
times the compression, eight times faster on the same GPUs, surely there will be less demand
for memory. Well, despite these stocks being down massively, I think the market is kind of overreacting.
This is a good thing for AI in general, and it will lead to more consumption of AI. It's great
that the AI models become cheaper to inference and that it frees up space for us to compute
other AI prompts and queries. Jevons paradox is going to occur here. In the
same way that people were kind of crapping on GPUs and saying these things are too expensive
and aren't going to get cheaper. The same thing is going to happen with memory here. It's not going to
result in less demand. It's going to result in even more demand. The demand for AI products right now
is insatiable, and I don't think this changes anytime soon. The other part is I think that what Google
has created here is something that's going to take time to scale out. This is not going to happen
tomorrow. This is a research paper which, as you and I were actually joking before we started
recording, was released 11 months ago. But because the paper was so complex and had so many
algorithms and mathematical formulas, it didn't actually surface to the public until yesterday,
and everyone was like, wow.
So it takes time.
In those 11 months, we haven't scaled it out to major models just yet.
This only applies to small to medium models, but I can imagine this scaling to frontier models
at some point, which will be a win-win for both sides.
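For anyone curious what "compressing the KV cache" means mechanically: the family of techniques TurboQuant belongs to is quantization, storing each cached value in fewer bits and dequantizing on the fly when attention needs it. TurboQuant's actual algorithm is far more sophisticated than this; the following is only a minimal illustrative sketch of that core idea, with all names and numbers chosen for illustration:

```python
import random

# Minimal sketch of KV-cache quantization (NOT TurboQuant itself): store
# cached values as 4-bit integers plus one floating-point scale factor,
# and dequantize on the fly when the attention computation needs them.

def quantize_int4(values):
    """Symmetric quantization of floats to 4-bit integers in [-7, 7]."""
    scale = max(abs(v) for v in values) / 7.0
    quantized = [max(-7, min(7, round(v / scale))) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    return [q * scale for q in quantized]

random.seed(0)
kv_slice = [random.gauss(0.0, 1.0) for _ in range(1024)]  # stand-in cache block
q, scale = quantize_int4(kv_slice)
recovered = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(kv_slice, recovered))
# Rounding error is bounded by half of one quantization step (scale / 2):
print(f"step size: {scale:.3f}, max reconstruction error: {max_err:.3f}")
```

Going from 16-bit floats to 4-bit integers is a 4x storage saving on its own; schemes like TurboQuant combine quantization with other tricks to reach the claimed 6x with negligible quality loss.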
I mean, yeah, there's no reason why you wouldn't be focused solely on this, right?
It's like, if you can serve six times the inference on the same amount of GPUs or TPUs or whatever
accelerator you're using, that's a huge breakthrough for any company. And it also extends past the
cloud. It moves to local inference as well. It's like if you're running a Mac mini or you're
even on your iPhone, the amount of intelligence you can run on a model that is six times more
efficient, eight times faster, and loses nothing in quality, is a really big deal. And I think
Google may be moving towards this. What I see this trend happening with Google is, I mean,
they have the TPU. They're building their own stack. They have the partnership with
Apple to get into all of the iPhones. They have their like kind of Android division. They're,
they're publishing their research publicly to disrupt the market as they go. It's a really
noteworthy strategy from Google. And it seems like, as always, they're leading on the thought
leader front, right? They've always been the top tier research. We'll see how they can implement
this into actual products. But it seems like the market's wrong. I can't imagine a world in
which there is less memory demand as a result of this. Like, as you said, there will only be more.
If we get six times more intelligence, we will gobble that up quickly.
So the market seems like it's wrong. Maybe it's just an excuse to dump, because the stocks have
gone up so much over the last 12 months. But I wouldn't be too worried about this if you are an
owner of any sort of memory. It's actually very bullish for Google stock, in my opinion. I actually
bought a bunch more when this came out because I was like, oh, this is going to result in massive
efficiencies for AI. Another thing is, like, lest we forget, Google has been like the pioneer of a lot
of these different AI trends. Like, one of the earliest LLM breakthroughs came from a bunch of Google and
ex-Google researchers. TPUs, they've been working on for over a decade now, so they saw the
GPU trend like way, way, way in advance. And now they're doing the same here. My question to you is,
why do you think they publicize this for everyone to see? Surely this, like, I saw a few comments
about this as well. Like, surely they should have patented this, because this is a pretty sick algorithm
that now, like, any company can use. Yeah, I don't think the patenting thing really works anywhere at all,
because a lot of this stuff is just a couple lines of code. It's like Andre wrote auto research and
it's 600 lines of code. It's like someone will take it, someone will use it. The race is far too
competitive; there's far too much of a monetary incentive to do this. And you could think of
it almost like Google's DeepSeek moment, where DeepSeek created this unbelievable research,
dropped the bomb, got all the publicity, but then the rest of the market kind of rushed to
tackle it. This is Google dropping their DeepSeek moment. And I think it's noteworthy. It's really
impressive for Google. Google has been kind of at the forefront of all of this stuff and they're
proving that their research team really are super impressive and have the ability to continue
to lead the frontier forward.
So speaking of companies that are at the frontier, one of those companies is falling behind
this week, Josh.
We got to do a little victory lap, slam dunk.
I'm not really sure how to frame this, but this is about SORA?
Yeah, let's start with the victory lap, I guess.
So Sora, for those of you who don't know, is one of OpenAI's winning consumer apps.
It is their AI video generation app
that functions very similarly to TikTok.
You kind of scroll through reels,
but all the videos are generated.
But the coolest part is you can feature yourself in them.
You can feature your friends or any kind of IP or characters.
This week, they announced that they were sunsetting the entire project.
Sora, which six months ago wasn't a thing, is now dead.
Some fun stats from Sora's very short but mighty reign:
Within the first day, they achieved over 5 million downloads,
which is just insane.
I don't think any other app
has done that before.
They also hit number one
and stayed at number one
of the Apple App Store
and I think the Android Play App Store
as well
for a number of different weeks
but it wasn't without any kind of grimace;
the public was split.
Hollywood is basically,
how do I say,
dancing on its grave.
"OpenAI is shutting down
its AI video slop-making platform,
Sora."
And if you look at the comments
from different people,
it's just like, did we just win?
People are like saying
like this is such a big W.
So people outside of the AI sphere are generally happy, but I don't know, Josh.
but I don't know, Josh.
How do you feel about this?
Are you happy?
Yeah, well, this was just a failed experiment, in the way that Meta's was.
I forget, I don't even remember Meta's experiment that they did.
What was it?
Vibes?
Vibes.
Right, Vibes, where they tried to do the same thing.
It was TikTok but for AI.
And OpenAI tried TikTok-but-for-AI again,
but through the Sora platform.
And I am, I mean, we said this was stupid.
I haven't opened the app since the week it launched.
I think there's a harsh reality here that OpenAI is learning, one they perhaps didn't understand,
just due to the nature of them being a new company. Sam, really, this is his first large company
that he's run.
But, I mean, it seems like a very predictable outcome when you just look at what standard
consumer behavior is and you kind of compare it to other platforms.
Because when you think about a platform like Meta, right, it has 2 billion users or
something like that, particularly on Instagram. How many of those 2 billion people
actually want to create content?
Maybe 10 million, maybe 0.5% of the people, would actually be serious creators. And the reality is that
almost every single human just wants to scroll and zone out instead of spending energy like
conceptualizing and editing these complicated things. And the novelty factor of it was very fun.
But people don't want this and it's not that good. And I think that novelty wears off. It's like we
created those videos that show us and you in them. And it was really cool for like two hours.
But creating a dedicated application where you have to go and download it and you have to use it
outside of the ChatGPT/OpenAI universe,
is very annoying.
Had they rolled this up into the same app
that everyone's using every day with ChatGPT,
maybe that's a different story.
But it's, I don't know,
it's tough to kind of reason why they would have gone through
with this in the first place the way they did.
Let me try and argue the other side.
I think what Sora did really well
was they made it really easy
if you were super lazy,
but you kind of had an idea for a piece of content,
to make that content.
Just write it in words;
it takes a couple seconds,
maybe a minute max.
And then a couple minutes later, you have a fully-fledged video. Now, obviously, Sora V1 wasn't that
great. Sora V2 started to improve and become much better. Sora V3 was way better, but there was
still some AI cringiness around it. I do think this improves with a bunch of other AI
models that still exist to do this. CapCut actually yesterday released their own version, just as Sora
was sunsetted, that uses Seedance 2, which is a Chinese AI model. And my God, the content that
creators can now create just through words looks insane there. And then you have
Grok Imagine on the xAI side, under Elon Musk, and they're pouring so many millions and
millions of dollars in to try and improve that model. So there is something there. I just think that a lot
of the reason behind OpenAI doing this is, one, they're constrained on cash. They've spent a lot
of money on GPUs and Stargate, and they had to sunset that. And now they're figuring out that they
need to focus all their cash and resources on building the best LLM to beat Anthropic, and the best
coding AI to beat Anthropic, who is quite frankly eating their lunch right now.
But there is another bit of news, which I didn't realize immediately,
but then I noticed Disney.
OpenAI had signed a $1 billion deal with Disney through Sora.
And the idea here was Disney would invest $1 billion in OpenAI,
and OpenAI would also get access to Disney's IP and characters
and make them super famous on Sora via the AI-generated video medium.
It sounds like Disney didn't even know.
I've seen multiple tweets like this,
where on Friday they were signing a deal with OpenAI,
and then on Wednesday of this week,
when they decided to announce the news
that they were shutting down Sora,
the Disney team had no idea.
It's tough.
I think this is a good lesson.
There is this quote from Jony Ive that I love,
that I think would be nice for Sam Altman to hear,
from when he was asked about what Steve Jobs taught him about focus.
And he says,
this sounds really simplistic,
but it shocks me how few people actually practice this.
One of the things that Steve would say to me,
because he was worried that I wouldn't focus, was:
He would say, "How many things have you said no to?" And I would tell him, "I said no to this, and I said no to that." And he would create these sacrificial things that he was saying no to. But focus means saying no to something that, with every bone in your body, you think is a phenomenal idea, and you wake up thinking about it, but because you are focusing on something else, you can't work on that one thing.

And OpenAI does not have that focus. OpenAI is creating these sacrificial things. They are not deeply focused on one thing. And that's why there are four different apps you have to download to use all of their tools. There are just all of these spread attempts at creating virality: manufacturing virality through the TikTokification of it, through the Studio Ghibli image generation stuff. None of it is just building their core thing.

And I think when you look at Anthropic and when you look at OpenAI, the difference between the two is that one has focus and one does not.
When you look at OpenAI, you see five different product SKUs. They're all kind of all over the
place with different intentions. When you look at Anthropic, they have the best coding model in the world.
And that's it. Everyone can converge on that one point and therefore their velocity is so fast.
That's why we just filmed an episode yesterday,
which everyone should go watch, about the eight updates they published in eight weeks that basically
entirely replaced OpenClaw.
They have the focus.
And I'm hoping that OpenAI can now kind of shift this focus into this singular place and start
to move towards this new organization that they are calling AGI deployment.
This is crazy.
They're preparing for AGI already.
Yeah.
So the team is basically getting reshuffled under a product organization within OpenAI
called AGI deployment.
And this kind of harks back to a wider strategy that
OpenAI has assumed for the last couple of months now, which is to build the best LLM,
build the best coding AI model, and also the best world model, which is actually where the
Sora team researchers are moving to. They're going to focus on building a world model,
which basically helps AI models see the world and understand the physics of the world.
The advantage of this is you get to better understand how humans perceive and interact with the world,
which is something that LLMs can't do. Think of an LLM as an AI model that sits in the library and
reads all the books, but doesn't actually experience the world for itself.
World models actually help you understand the world.
That's what the Sora team is going to focus on right now.
And that's super exciting for one major reason, which is robotics.
Robotics is going to be a huge thing.
OpenAI teased robotics in one of their major announcements a couple of weeks ago.
So we know that they're going to focus heavily on that.
And I do think that's a smart move.
I'm bullish on OpenAI after all of this.
But just to recount, this isn't the only major pivot they've made over the last week.
Over the last couple of weeks, they've done quite a few things.
So they've sunsetted Sora.
They've announced that their Stargate project, which was like their major project to scale
GPUs across the globe, is now canceled one year later due to funding issues. And Altman had said,
like, you know, "I'm never going to release ads on the platform." He ends up doing that. In-app shopping
completely failed and flopped, and now they've got to redesign that entire thing, but it's been put on
pause for now while they focus on all this other stuff. All of this to say, I think OpenAI is going to
IPO soon, Josh. And I don't know what you think the odds are for this, but we should probably
bet on that. Well, if only we had a marketplace for it. And thanks to our friends at Polymarket, we do: when will OpenAI
IPO? And the answer is kind of surprising: there's only a 36% chance that they'll even
IPO this year. And I think earlier in the year, this would have been a little bit different,
but it appears as if there's been some trouble in paradise for Anthropic. Maybe that is
moving over to OpenAI. The reality that we're starting to see now is SpaceX might be the big one,
and that might be it, which is a little disappointing because I was hoping this would be the year
of all of the IPOs, but according to the market, 36% chance. And if you think it's happening
this June, it's 4% chance. So that's absolutely not happening.
SpaceX is going to be the one that will be taking that crown.
Polymarket, thank you very much for sponsoring this part of the episode.
And maybe we should just go right into SpaceX and their IPO, which now is rumored to be filing next week.
Confidentially, so this will not be public, but they will do their confidential S-1 filing later this week or early next week, which would point them to a public listing around the first couple weeks of June.
This is a really big deal for the world because, I mean, this is likely to be now over a
two-trillion-dollar IPO. That's the biggest IPO by far. This is going to be the largest thing to
enter the market. It's going to be larger than the GDP of some countries, and it will just
go live on the New York Stock Exchange or the NASDAQ sometime in a few months from now. We're getting
close. What a sequence of events to get us to this point. We had the rumors of SpaceX and xAI
merging, and then it happened. Now we have rumors of SpaceX and Tesla merging, but first I think we're
going to get the SpaceX IPO. Rumors also say that they're going to be raising
$75 billion in this listing, which is just absolutely insane and probably the largest
raise that's ever been done on an American public stock market. The other thing is,
this comes off the back of Elon announcing some pretty ambitious projects for SpaceX itself.
They're going to be launching a TerraFab, which is a gigantic AI chip factory. We made an episode
about this, so definitely go check that out. They're going to be launching 80% of those chips
into space via SpaceX. So it's going to become this whole monopoly of, like, AI in space.
I can't wait to see mass drivers on the moon. Definitely go check out that episode from earlier this
week. But yeah, it's going to be one of the biggest IPOs, Josh. Are you buying it? Absolutely. I cannot
miss it. SpaceX is currently the only private company I own. I'm ready to buy more. I think
it's going to be the most valuable company in the world, rolled in with Tesla. And I could not be
more bullish, more optimistic, more excited for the future with these companies. If Elon's predictions
are right, we'll be seeing Nvidia at $10 trillion. And SpaceX and Tesla will be right there with them.
The companies are going to grow quick. I think the SpaceX IPO is very much the starting gun.
Now, there is another gun that's been fired, and it has not been a good one, from Apple.
Not a good one, because they have been banning the applications of everyone who wants to get into the App Store.
Now, there's been a recent issue with the App Store.
The fact that anybody can make apps now means that a lot more people are submitting apps.
So for the larger companies and the smaller companies alike, if you have an app in the App Store,
it's taking a lot longer to get your app approved, whether it be for just a normal update or to release an actual app.
And now it appears that Apple has stopped allowing
updates for popular vibe-coded applications. Yeah, they've basically put an
unofficial halt on anyone launching AI vibe-coded apps. And I'm confused about this for a few different
reasons. Number one, if you look at a chart of app launches on the App Store over the last year,
before vibe coding became a thing, it had completely plateaued. No one was making apps anymore,
and that was becoming an issue for the Apple app developer ecosystem.
They're actually kind of trying to encourage developers to build more things.
Then AI vibe coding came along, and suddenly they had a flood of different vibe-coded apps.
Now granted, 90% of them were crap, but 10% of them were pretty good.
We got the Replits of the world, which helped other people kind of build apps just by typing words or talking to an AI on their phone.
And Replit as a company is worth, I think, $9 billion as of last week.
They did another massive raise.
So we aren't talking about a small fish here;
Replit is a pretty big fish. Apple just cut them off at the head over the last seven days
by saying that they are not going to encourage AI vibe coding anymore. And it just signals how
monopolies like Apple behave, monopolies which have a chokehold on the app ecosystem and take a massive
30% cut in fees. We saw this happen in the crypto world as well, where people wanted to allow
crypto transactions. Apple said, we'll allow it for a bit, and then decided: we don't want this. They're now doing it
with vibe coding. I don't get the issue for them unless there's like major security exploits.
I just don't think this is a good move in general.
This topic actually makes me pretty sad as an Apple fanboy,
not because of what they're doing to the developers,
which is messed up and I don't like.
But the idea that Apple is going to exist as the king as it stands now
is just an impossibility in this new world.
When you think about how easy it is to generate one of these apps
and how easy it is just to sideload them onto your phone,
like anyone with a TestFlight account can vibe-code an app
that looks like whatever you've ever wanted an iPhone app to look like.
You can't put it on the App Store, but you can send it directly to your iPhone,
and you could have your own version of your dream app,
and it can do whatever functionality you want,
and you're one prompt away from updating it and editing it and changing it to do whatever you want.
And the fact is, it's so accessible now,
and Apple is clearly not leaning into adopting this new paradigm shift.
So there's this clash that's happening,
and we're seeing it here with these vibe-coded apps.
They're not the good guy anymore.
They're not the person who is helping developers do what they want,
empowering them for this new paradigm of engineering.
They are the bottleneck. They are, like, the hammer that is stopping people from innovating. And that to me
makes me really sad. And that has always throughout history been a losing formula. So I hope they turn it
around. We just got a WWDC announcement, which is happening in the next month, I think. I forgot the
dates. But that's going to be their time where they're going to showcase all of the new AI stuff.
That's when they initially announced Apple Intelligence. It was the biggest flop of all time. They're going to
try to do it again. Hopefully they'll come out with some new policies to address this. And hopefully
they'll have a chance to turn the ship around, but that is their last chance.
So something tells me, Josh, correct me if you think I'm wrong, that this may not be
an outlandish move by Apple. It might be a strategic one, because we also got some other news
that broke this week about their deal with Google. The deal where they pay Google $1 billion
and get access to Google's Gemini AI models, which we originally thought was going to be
some kind of licensing right, is actually Apple getting full access
to Google's Gemini model weights, which means that they can fine-tune and build on the model in any way that they want.
They get full access to a model that Google spent hundreds of millions of dollars and years training, just for a billion dollars a year.
I don't know what's going on here.
And it just means that Apple's got an absolute steal of a model, a frontier LLM from Google, and they can now run with it and build their own AI apps, which is presumably what I think they're going to do.
This is a huge win for Apple.
And we have to ask the question, why is Google doing this?
And I think it kind of pairs with that first topic that we spoke about, which was that research paper
where they're kind of democratizing intelligence as best they can with TurboQuant, right?
It's like they're increasing the efficiency.
They have their own custom TPUs to drive the price per token down.
And when we consider what the most existential threat in the world of AI is, it's that edge inference
and local inference get really good, so that you don't need to generate tokens from these mega-cloud
providers.
And I think the reality is that it's becoming more and more true because we have now these
TurboQuant-style models that are going to be six times more efficient. We have Apple, who is taking
Gemini models, which are leading edge, and distilling them down into something that runs on your iPhone.
And the majority of the people don't need world-class intelligence. They don't need the bleeding
edge stuff that is solving new physics and math. They just want it to solve the day-to-day stuff.
And this is possibly a hedge against that for Google:
they're creating the problem by throwing out papers like TurboQuant,
but then they also have access to the solution, which is Apple.
I mean, when we think about where the most local inference is held,
it's just on the entire Apple network, on your laptops, on your iPhones,
you can run these unbelievable models,
and now Apple has the ability to do that with Gemini.
And I think it's strategic, it makes sense.
They're collecting a paycheck, they have a great relationship with Apple,
they've been doing it with the browser forever,
and this is just a natural extension of that.
Apple has all the distribution, right? They have 2.5 to 3 billion live Apple devices active in the world right now.
And the Mag 7 stocks, too: when you look at which of the Mag 7 stocks have been suffering the most, Apple has suffered the least. It's just out of the race. It's right there, unaffected by all of it.
Yeah, they stayed the number two market cap in the entire world without touching a single GPU expenditure, without a single dollar of AI capex. Just an insane strategy from them.
They hold a very rich and important moat,
but looking at Google as well, they're smart
because they know that Apple won't win any of this AI stuff
unless they get access to a major frontier LLM.
Google's bet here is compute plus data equals the best model in the world.
Apple does not have the compute.
They do not have the model.
Google can supply that for them.
And something tells me that, whereas Google might not be getting paid as much by Apple,
they're going to exchange or broker a deal
where they get access to some
of Apple's user data, which they can then use to train a better model,
and it becomes this symbiotic thing.
Now, if the future is actually locally run,
edge compute models that run on your laptop that are frontier,
then Google's shot themselves in the foot,
maybe shot themselves in both feet at this point,
but it's a bet they're making.
And I think, I don't know, I think Google might have this one, if I'm being honest.
Yeah, I'm rooting for them.
I'm also rooting for Meta now,
who has this, like, unbelievable pay package that I just saw:
a $9 trillion valuation is what they need to hit to exercise all these pay packages.
Like, okay, Elon, okay Tesla board.
I've seen this before.
What's the deal with this milestone package here?
Because a $9 trillion valuation for Meta, that seems like a lot of money.
Okay.
So two things have happened over the last week which are at extreme odds with each other at Meta.
Number one, they are closing down their metaverse division and rumored to be
laying off up to 20% of the company. Now, these employees range from low-level product employees,
engineering employees, all the way up to senior director roles. But this week, news also broke
that they have offered very attractive comp packages to their
top executives, to the tune of $800 million for one person, their CFO. But it's under one
premise and condition, which is: over the next five years, Meta needs to hit
a $9 trillion market cap.
That's trillion, with a T.
Just for context,
no company in the history of the world
has ever hit that market cap.
In fact, the biggest and largest
and most important company
in the world right now, Nvidia,
is currently sitting at $4.5 trillion.
So we're talking about a 2x from that.
And for Meta, who is currently sitting in,
I don't know where.
Are they still in the Mag 7 at this point?
If so, I think they're like six or seven.
They've got a long way to go.
But it just, it's interesting seeing the incentive
design that Zuck has set
up. He spent, I think, upwards of $30 billion hiring 200 people this year and hasn't released a
single AI model. Llama, their open source model, is dead in the water. All the apps that they've
released have been crap. So Meta's kind of making a big bet here. And I don't know if they're
incentivizing the right people, if I'm being honest. I'm rooting for them. I hope it works. I mean,
in five years, there will be a $9 trillion company, possibly multiple of them. So there is a world in
which Meta can achieve this. The problem is that they haven't actually done anything to make progress
towards that recently. Like they've spent all this money, they've hired all this talent, they haven't
released anything compelling. They haven't actually proven that the money that they're spending
is working. In fact, the counterargument is true, where Meta, the company,
just crushed the division that was responsible for renaming the company.
So clearly there has been a series of big swings that haven't worked, and we've yet to actually
see a big swing that has worked. Everything's failed. Like you said, the
open source models have failed. The pivot to the Metaverse has failed. Everything besides the core
product of Facebook, which is their algorithm and their home feed and their social media platforms,
hasn't worked out. Even the hardware sucks. So they have a lot to figure out if they want to make
this work, but I'm rooting for them. Zuck's an amazing CEO. I am like very hopeful that he can figure
this out. And this incentive structure is the right way to do it. You want to incentivize the people
to bring the value. I hope they can bring the value. Now, for the last story
of this episode, I want to introduce the listeners and watchers of this show to a secret,
a secret within Limitless that none of you have ever known and that has never been publicized before.
And this secret is about our co-host, Josh.
Josh is a part-time music producer, and he releases absolutely banging tunes for the world to hear.
Yes, and this part-time skill actually originated about two weeks ago with the advent of Google Lyria 3,
and the fact that now I can make music with a single prompt, maybe a sentence or two.
So that's what I did today.
I said, generate me a three-minute song about the Limitless Podcast and how it's the absolute
best number one podcast in the world.
There's nobody better.
The AI is scared of our show because it's so good.
Make it sound like an R&B hip-hop type vibe.
And we've spoken about Lyria 3 before on the show.
Lyria 3 is the music generation software from Google, where you can prompt it:
you say what you want the song to be about, you say if you want lyrics or not
and what you want the lyrics to sound like, or you just let it run wild and choose everything for you.
So that was all the instruction I gave, and it generated me a song named No Logic for the Soul,
which is very elegant. I love the artwork. It has like this cloud and these roots, and it's very good.
But the song is good. So I want to play you a little clip here of this song. I'm not sure if you've
had a chance to really enjoy the full thing. Neither of us have. So we're going to experience it
for the first time together, but this is our new original, No Logic for the Soul. Enjoy.
St. Emigera in the ghost in the wire.
Ooh.
We are the signal that the mushees desire.
Code is corrupted when the sheen's desire.
Code is corrupted when the truth gets loud.
It's crazy how good it is.
Limitless, we're the peak of a climb.
Limitless moving outside of time.
No, I'm shocked.
This is very good.
Oh, this is good.
Bye.
The horns, too, is crazy.
Wow.
The neural network is taking a break to me.
That has to be a new outro song, Josh.
It's unbelievable how good this music is.
So good.
The lyrics, not only do they make sense, but they rhyme.
They have the right timing and cadence.
The chorus is great.
They had a full horn section in there.
I mean, it's amazing, right?
Because you think about the previous generation and how they enjoyed music.
We had like the Rolling Stones, the Beatles,
the Grateful Dead. These were
like generation-defining artists and musicians.
And the reason they were so powerful is because there was
convergence around them. You go on the radio, it's playing
their music. You talk to your friends. They have their CDs and their vinyl
records. You go on the street, this is what's being played
because this is what's available. And it was great, but it was
really the best that was available. And everyone could kind of converge on that
fact. The idea that you can now generate good music,
not great music, but good music, good enough for people who don't
really care to seek out great music, implies that we might never have anything like
that again. There might never be another Grateful Dead, another Rolling Stones, another generational
defining artist because it's so accessible. If you look at the top charts on Spotify, it's the top
trending tracks on TikTok. And there's this direct correlation between the two. And it's stripped out a lot
of the humanness of artistry. It's a whole different paradigm. And listening to this really, like, hits hard.
I'm like, okay, yeah, like this, it's actually over.
The fact that I can do this with a two-sentence prompt.
I did this in 30 seconds.
How long is the track, Josh?
And the track is three minutes long.
It's two minutes and 57 seconds.
That's a regular song.
That's a song that could go viral and, like, take over the Spotify charts.
And like if we did some prompt engineering, if we really refined the prompt from two sentences to, like, a proper setup where there's many, many paragraphs, many details.
You kind of outline the lyrics and you outline the cadence and what you want.
You can get unbelievably detailed with this.
and it sounds good.
That's a good song.
So it's tough.
I mean, this is one of those bittersweet moments
where you're like, holy shit, this is amazing.
But holy shit, this is pretty powerful stuff.
Now, I know there are a bunch of audiophiles listening to this
thinking, God, Josh and Ejaaz just have no taste.
May I just remind you, this is the worst this model will ever be.
It's the worst.
It's only going to get better.
And there's already been a bunch of AI soundtracks
that have gone completely viral.
In fact, last week, Spotify had to shut down an AI music band,
which they didn't realize at first was an AI music band.
They just didn't like that it was AI,
but it was absolutely pulling in a bunch of streams,
gathering like 300,000 streams over the last month.
Just insane. So these things aren't going away.
Another thing I want to point out is
OpenAI just sunsetted Sora,
which was the AI video generation
version of TikTok.
And now we have Google Lyria, which is kind of doing a similar thing for music.
So I don't think artistry
and that type of medium is going away.
There is an insatiable amount of demand. It helps music producers themselves kind of express the type
of sound or song that they have in their head. So I think as a medium of translation, these AI
tools are really valuable. But to your point, Josh, we are going to end up in a world where maybe
everything sounds kind of beige and consistent. But I'm going to take the optimistic side of that,
which is I think it's going to force humans and AI alike to be even more creative and come up
with better soundtracks. I'm excited. And to your point around The Grateful Dead, you can now,
I don't know, technically license their IP and create a never-ending album of their music. So
maybe that's an angle as well. Deadheads are not liking this at all. Oh man, this is going to
disturb a lot of people. But it's here. And like you said, it's the worst it will ever be. It's up only
from here. The quality of all of these tools will get better. As Meta goes to $9 trillion,
as Google disrupts entire industries. It's all
happening so fast. That is another week fully covered in the books. If you've made it this far
in this episode and you've listened to the three previous episodes from this week, you are now
fully caught up. There's nothing you need to know that you don't already know. You can go enjoy
your weekend, go touch some grass, go hang out with a friend, do some analog stuff maybe.
That's kind of cool because I'm sure everyone's just been in the trenches on their devices,
watching the war zone all week. But yeah, thank you guys so much for watching as always.
If you enjoy this episode, don't forget to share it with your friends who might also enjoy it.
Subscribe to our newsletter, which is awesome.
It comes out twice a week.
We publish on X all the time pretty prolifically now.
We're getting lots of action.
Here's our handles that you can find on screen right now.
Ejaaz, any final parting thoughts before we head off for the weekend?
Yeah, I want a few of you to generate your own songs and jingles for both Josh and I, individually and together.
Let's see if you can come up with the best limitless intro or outro song and maybe we'll feature it on a future episode.
That would actually be cool.
and send it to us on X because you can embed it in the post and we could just save it from there.
So if you generate something, tag us on X and we will listen to it, and maybe we'll find a new intro or outro track. That would be kind of cool.
Or a theme song.
I love a theme song.
We'd be like superheroes.
We could have a theme song before we get on camera.
That'd be sick.
Well, anyways, I know this was a long one.
So if you made it through, I mean, 40 minutes of this show, thank you.
You're a real one.
Much appreciated.
As always, have an amazing weekend and we'll see you guys soon.
