Limitless Podcast - AI NEWS: Meta's $25B Fumble | Anthropic $200B Valuation | OpenAI Acquires Statsig
Episode Date: September 4, 2025

In this episode, we dive into the AI industry's game of thrones, highlighting Meta's $25 billion investment in Superintelligence Labs, now facing mass executive departures. We explore Sam Altman's $1.1 billion acquisition at OpenAI, Anthropic's soaring valuation to $200 billion, and Apple's strategic recruitment amidst the chaos. We also discuss Elon Musk's XAI and recent code theft allegations, plus OpenAI's acquisition of Statsig to enhance its offerings.

------

🌌 LIMITLESS HQ: LISTEN & FOLLOW HERE ⬇️
https://limitless.bankless.com/
https://x.com/LimitlessFT

------

TIMESTAMPS
0:00 AI Talent Wars Return
0:43 Meta's Talent Exit
2:13 Executive Departures Raise Concerns
5:06 The Value of Overpaying
6:23 Confidence in Meta's Future
7:20 Apple Poaches from Meta
8:27 XAI's Talent Issues
10:07 Fragility of Intellectual Property
13:39 OpenAI Acquires Statsig
15:42 Evaluating AI Model Effectiveness
18:22 Anthropic's Massive Valuation
20:35 Are We In a Bubble?
23:03 Grok Code's Rise to Prominence
27:38 Apple Enters the AI Arena
34:32 Upcoming Events in AI Tech

------

RESOURCES
Josh: https://x.com/Josh_Kale
Ejaaz: https://x.com/cryptopunk7213

------

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
Transcript
Okay, the AI talent wars are back, and Meta is bleeding after spending $25 billion to acquire the best AI researchers ever.
They just lost eight of them in the last week.
And Josh, Sam Altman is laughing and fighting back.
He just spent $1.1 billion to acquire a new startup and some of the best AI researchers.
And Anthropic quietly raised another casual $13 billion, which now values them at $200 billion.
And I heard some whispers on the grapevine that Apple might be making a comeback in AI.
Folks, we have got a jam-packed week of news updates and we're going to get right into it.
But let's start with Meta Superintelligence Labs, which, if you remember, was the new superhuman AI core unit that Zuckerberg set up. He actually spent a lot of time poaching some of the best people from OpenAI and Anthropic, and invested around $15 billion in this startup called Scale AI, to form this kind of like Avengers of AI.
And the hope was that Meta, who has consistently been dragging behind on AI models,
will jump and leapfrog to the top with a new AI model that is better than ChatGPT and whatever
you might say.
But within two weeks, Josh, of them forming this team, eight executives, you heard that
correctly, eight executives have left with these guys highlighted over here being some of the main ones.
And if you were thinking, maybe these are just some casual engineers or people that
got caught in kind of like the org shift from the old Meta AI team to the new one,
you'd be wrong.
These are the executives, some of the leaders that were appointed to this new team.
So it doesn't signal the strongest amount of confidence.
I remember when we were talking about Zuck taking this aggressive move in the AI talent wars
three weeks ago.
We were very confident that Zuck is making the right move here.
He's done kind of similar moves in the past when he acquired WhatsApp,
when he acquired the Instagram team.
And we kind of thought this was going to be the same type of move as well.
But this doesn't inspire confidence to me, at least.
Josh, do you maybe have another take or do you feel the same?
Yeah, this seems like things are getting messy.
I mean, when you move this quick, it's funny, back in the day, Mark Zuckerberg
and Facebook's ethos was "move fast and break things."
That was the theme of their company.
That was their motto.
And that's exactly what they're doing now.
I know that wasn't always the plan, but that seems to be what they're doing now.
Actually, if you can go back to that list, Ejaaz.
I just want to kind of walk through a few of those members.
It's important to note that some of these people have been around for a very long time.
So one of them worked with Meta for more than eight years; others spent five years at Meta, eight years at Meta; and a lot have been there since 2024.
So a lot of these people have been there for a long period of time.
These are the executives that have been around that have been building with the company, presumably very happy with the company, who are now leaving as this new order comes in.
So I'm not sure we could view this as totally bearish because this very much feels like an instance where the old guard
is just kind of like not really interested in this new mission as they pivot as a company
as they bring in all this new talent. What I would be concerned about is if the new people
who are recently hired weren't doing well and wanting to leave because that's where I would start
throwing red flags. I'm like, okay, the old people make sense, they've been around. They've probably
secured more than enough money to take care of themselves and their families and they're ready
to move on and let this new like young blood come into the company. Well, that's actually what's
happening, Josh. If you look here, one of the senior vice presidents
of Scale AI, which was the company whose talent Zuck basically spent $15 billion to acquire,
left, I think less than a month ago, right?
And a few other executives.
That's a red flag.
So, lieutenants, even.
It's a huge red flag.
And I was kind of thinking about like, why this might be the case.
Like, you know, you've just signed $100 to $300 million bonus packages.
You're set to probably make a couple billion for staying at Meta, and at least
give it more than two weeks, right?
And then I was thinking, well, maybe they just kind of took the 100 million sign-on bonus and then quit.
But Chi Hao Wu, who is a former AI and ML specialist with Meta,
he was kind of like interviewed about why this might be happening.
And he said, speaking generally and not for myself,
a lot of people in the AI team maybe feel things are too dynamic.
There were a lot of organizational changes.
And in fact, my manager was changed several times.
So we might just be seeing kind of like the effects of churn happening throughout the team and the organizational movement, and maybe things settle down
over the next couple of months. I don't know. It seems like this is probably too early to judge
the success of these decisions. I mean, he spent a lot of money. I assume it's going to ruffle
a lot of feathers internally as well as externally. And they're just kind of in a period where
they're figuring out this new structure. Because there's a whole new leadership hierarchy, right?
I mean, they hired Alexandr Wang here and they brought him over, and it seems like it's going well.
But apparently it's not, they may have overpaid.
I mean, which might be like a gross generalization,
because, like, clearly they overpaid,
but how much did they overpay?
I guess is the question I have for you, Ejaaz.
I just want to say, when Zuck was originally announced
to be spending this amount of money on Scale AI and Alexandr Wang specifically,
there was a massive question mark above that, right?
Now, we've seen a lot of crazy stuff happen in AI investment.
We've seen billions being spent on compute and video chips and all that kind of stuff.
And this was kind of like the next step in that directional trend, right?
is it worth spending $15 billion on an AI data company?
And more so, is it worth spending that much on one person?
And people thought, well, Zuck's been right before.
Maybe he'll be right again.
But the criticism that we're seeing now, that some of these key executives from that exact
company are leaving, is causing a lot of worry.
And Alexandr Wang is kind of like caught in the crossfire between all of this happening.
You'll see like one of the points being made on this tweet over here is that, you know,
Surge and Mercor, which is a reference to Scale AI's competitors,
are doing as good, if not better, a job than Scale AI.
You know, they do pretty much the same thing.
And they're valued at only a fraction of what Scale AI is.
So the question now becomes, well, maybe has Zuck overpaid?
I'm kind of thinking, yes, right now.
But again, it's too early to call.
We have to see what they actually produce.
I say give it a couple of months.
Okay, this is what I was going to ask.
Do you still feel confident, or do you not feel confident,
in Meta's ability, in Zuck's ability to turn this around to become a real competitor in the
AI space, to the likes of Anthropic, OpenAI, XAI, the whole crew?
Okay. I'll tell you what my gut is telling me.
Short term, yes, but long term, I'm undecided.
And the only reason behind that is Zuck hasn't really shown his proficiency in infrastructure
before. He's a consumer guy, you know?
He's made some of the best consumer apps, consumer experiences that people have ever seen.
And I'm confident in him translating that towards AI apps.
I just don't know whether he can kind of like operate that at the foundational level.
And AI models very much operate at that layer.
Yeah, I was excited to disagree with you, but I actually totally agree.
I trust Zuck's judgment, but again, he hasn't done anything like this before.
He's never rolled out actual infrastructure.
He's never rolled out tools at this scale.
So I guess the answer is we will wait and see.
But that's not all the news for this week.
What do we have next?
Well, I kind of just wanted to rain on your parade a little bit, Josh, because I know you're such an Apple fan. But despite Meta bleeding so much, despite them losing so much money on this talent acquisition, they still have, you know, enough money to poach
Apple's best in terms of AI. It broke this week that Zuck poached Apple's robotics chief for
Meta's Robotics Studio. I just thought that in the midst of all of this happening,
Apple still somehow bleeds,
even though they are not at the frontier of AI yet.
And especially because Tim Cook is probably the only other guy
that could compete with Zuck at the consumer AI app level.
So I don't know, I just thought it was pretty funny.
Yeah, the robotics chief implies that there is a robotics segment of Apple
that I think the whole world is just grossly unaware of.
So I guess we don't know how important this will be.
Maybe they're not even going to build.
I mean, they were supposed to make a car.
They were going to make a TV.
They had all these things in R&D,
who knows how much that really means,
but that's not the only company losing talent this week.
No, it seems like there's been a bit of a snowball effect
for the rest of the AI industry.
Elon Musk, you know, owns and started XAI,
which is another frontier AI model competitor.
Previously been like the darling child on this show, actually.
I think you and I will agree that we both love Musk.
We love what he's doing with XAI.
This startup is less than two years old,
and they already caught up to the frontier.
Grok 4 is performing, you know,
crazy feats, but even they are leaking talent. One of the main stories is they're suing this
guy called Xuechen Li, I hope I pronounced that correctly, for allegedly stealing the entire
Grok 4 code base and then moving to OpenAI for a job. This quickly moved from
allegedly to confirmed. Elon Musk actually confirmed this in a tweet, I was trying to find
it but I couldn't, saying there's proof in the logs that he downloaded the entire
code base and is moving to OpenAI. So obviously this is a massive copyright infringement and
concern for Musk and XAI at large. And so he's trying to sue him and prevent him from joining
OpenAI. And just to add kind of like a little bit more spice to this whole affair,
this employee in question also cashed out $7 million worth of XAI shares during the move. So just an
insane turnaround of events. And again, like just a reminder here, like these guys are being traded around
like some of the best sports superstars ever, right? We're talking about $100 to $300 million contracts,
and they're cashing out millions as they're doing so. This might actually be the quickest way
that you could make money in any kind of industry, not just tech alone. I just found this
insane. Yeah, it's pretty amazing that he was able to cash out $7 million in vested stock after just one
year of being at the company and then move over to OpenAI. But this is an interesting idea that I've
been thinking about a lot, which is just the fragility of these companies and the IP that
exist within these companies. Because a lot of the breakthroughs that happen exist on a code base.
And perhaps the breakthrough is as simple as like 20 lines of code. And it's just a small iterative
thing that changes how a transformer works. And those 20 lines of code that can optimize and
save billions of dollars of GPU training spend, or it can optimize and create a model that
blows all of the others out of the water. It's literally 20 lines of code. And if
someone is able to get into a company, figure out what those 20 lines are, extract them and
make them public or give them to another company, then you've essentially devalued billions
of dollars of value from a company by taking 20 lines. And what we saw here is not just 20 lines,
but the entire code base. So I think this is to the detriment of a lot of companies because
it's going to be very challenging to battle on the frontier of actual AI efficacy versus
building applications and value for the users on top. It's as fragile as a couple of lines
differentiating between the best models in the world versus just the average company.
And I wonder what OpenAI is going to do with it.
I was just thinking, like, how valuable are your employees and how much kind of restriction
can you place on them or should you place on them?
I remember us doing an episode like four months ago, maybe five months ago, Josh, kind of shitting
on Sam Altman's approach to his employees.
We were like, he's being too constrictive.
He's not allowing them to sell shares.
These are the stars of the show.
and he's placing this kind of like gardening leave ban on them.
So if anyone leaves OpenAI,
they basically can't join any major frontier lab for a couple of years.
And I remember he eventually overturned it
because he faced a lot of pressure in the media.
But we were kind of very bearish against Sam Altman and OpenAI.
Turns out he might actually have been right.
Because as you said, like, whether these guys download the code base or not,
they have it in their heads.
They know what kind of like parameters they're using,
how they're training the model, how much compute they're using.
And so these individual employees become the most valuable assets of these companies.
And if you aren't placing restrictions on it, it becomes way too easy to poach and then suddenly
become the Frontier Lab.
And, you know, we've seen Meta poach a lot of OpenAI employees.
We also know that X has poached a few OpenAI employees in the past, not to the extent that
Meta has, and that Anthropic has done the same, and back and forth and vice versa.
So, you know, this becomes kind of like the main thing.
And X isn't immune to this, right?
You know, I've got it up on our screen here, and yet another employee announced that
they were transitioning and moving on.
So I'm just seeing this general trend of these AI researchers being the stars of the show.
They know their value.
I think some of them, this might be a hot take, are just cashing out on money because they
probably see some kind of AI bubble forming.
But the other side of me also tells me that they kind of want to work on something that
is meaningful and purposeful for them.
We saw that with Zuck offering, I think, what was it, $3.5 billion to poach the
founder of Thinking Labs, or Thinking Machines rather,
which is Mira Murati, the ex-CTO of OpenAI, who is setting up her own AI thing.
She rejected it, right? She was like, no, I kind of want to work on this vision and purpose.
So I see those two conflicting views, but the rapidness of what I'm seeing here, of employees
kind of moving from company A to company B after two weeks, just kind of sounds like a cash grab
to me. But maybe that's just because of my experience in crypto. Maybe I'm too cynical.
Yeah. Well, I mean, the money was flowing around not only with employees, but also with
other companies too. OpenAI was in the news for another reason this week, which is their acquisition
of Statsig. For people who don't know Statsig, I just learned this week, because I didn't.
Ejaaz, please correct me if I'm wrong, but I believe it's kind of like an A/B testing
company, which led me down a lot of rabbit holes. So basically, models that integrate
Statsig are able to kind of serve two results, and then people can choose the better result,
and you kind of iteratively get better through this system. Now, I was thinking, like, why would
they want this, right? Why do they want this A/B testing? Well, one is just to make the
model better. One is just to improve outputs. But also, I mean, benchmarks are getting a little
stale, right? Benchmarks don't work that well. People are kind of maxing out on them. So perhaps
they kind of create their own new benchmarking system where they can compare not just their model
results to themselves, but also to other AIs. So I'm curious, Ejaaz, what do you think?
What was the reasoning behind purchasing Statsig? It's a huge amount of money. And also,
they did that to pretty much acquire one person, by the way. And we can dig into that,
into Vijay, who is now the new CTO of OpenAI Applications, in a second. But the reason I think
they spent that amount of money is, well, number one, to your point, Josh, benchmarks are terrible.
I don't care what anyone says. These AI benchmarks were maybe like good a year ago. Now they suck.
Now every frontier AI model lab is tweaking or fine-tuning the new models that they release
just so that it passes an exam. And I'm not really interested in that. I want to know how effective
it is in real life.
And actually, to use the example of Meta, who we were just talking about, outside of AI,
every new social media feature that they've launched on Instagram, on WhatsApp or whatever,
Josh, they're A/B testing it across 10 different countries or communities every single moment.
I've actually been privy to this, right, where I've got a new feature on my phone and my girlfriend
doesn't have it, right?
So they're constantly testing things to see what works and what doesn't and how to, like,
properly launch a feature.
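The A/B loop described here can be sketched in a few lines. This is a toy illustration, not Statsig's or Meta's actual implementation; the hashing scheme and metric are assumptions:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user into a variant by hashing user + experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def conversion_rates(events):
    """events: list of (variant, converted) pairs -> conversion rate per variant."""
    totals, wins = {}, {}
    for variant, converted in events:
        totals[variant] = totals.get(variant, 0) + 1
        wins[variant] = wins.get(variant, 0) + int(converted)
    return {v: wins[v] / totals[v] for v in totals}

# Same user always lands in the same bucket, so their experience stays stable.
assert assign_variant("user-42", "new-feed") == assign_variant("user-42", "new-feed")

events = [("A", True), ("A", False), ("B", True), ("B", True)]
print(conversion_rates(events))  # {'A': 0.5, 'B': 1.0}
```

The deterministic hash is the important trick: no per-user state to store, and the same person never flips between variants mid-experiment.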
This is incredibly hard to do in a landscape,
or tech sector, that's never been tested before, right? And AI moves so quickly. So what to tweak,
when to tweak, and who to pitch it to are all questions that no one has any
idea how to answer. And Statsig basically formalizes all of that. And they make a ton
of money doing so. One thing that I found interesting is, $1.1 billion was used to buy this guy
called Vijay, right? And you might be wondering, well, who the hell is Vijay? He was a tenured employee
at Meta before he started Statsig.
In fact, he was doing this exact job that Statsig does at Meta for a decade.
And he was working alongside a little someone called Fidji Simo.
If that name sounds familiar, that's because Fidji was appointed as the CEO of OpenAI Applications,
not OpenAI, that's still Sam Altman,
but she's the head of basically all the new consumer applications and enterprise applications
that OpenAI is going to launch over the next couple of months.
So it's a pretty big role.
And they worked alongside with this guy called Vijay,
AB testing all the applications that she did at Meta.
And so she was probably thinking,
hey, Sam, we're kind of reaching the point
where we're going to launch a bunch of really cool things at OpenAI.
I need a guy like Vijay here to do this.
And he was like, okay, name your price.
Should we just acquire his company and his tooling?
Is that worth it?
And she probably said yes.
And that's pretty much what has happened.
And then you might be thinking,
well, okay, Ejaz, you've just said your thesis,
but is there any proof behind this?
Like, who were the customers of this?
Well, they had pretty big customers.
One of their biggest actually being Anthropic,
who I don't know how much they were paying them per year,
but it was for this thing called telemetry services
where basically Claude Code,
which is Anthropic's main product and what's made them super famous,
is plugged in to Statsig to A/B test a bunch of, like,
coding features that they're launching.
And so you could think that Statsig itself
has acquired quite a hugely valuable data reserve
of all of Claude Code's users and behaviors.
Now, it says here in this screenshot
that apparently the data is encrypted.
I don't know how true that's going to be.
And you know what this reminds you of, Josh,
just to kind of like finalize my point here?
Do you remember when Meta invested $15 billion in Scale AI?
We just spoke about it.
Do you remember who their biggest customers were?
It was Google.
It was Open AI.
Yeah.
It's all so incestuous.
And they pulled their business immediately.
Yeah, it's crazy.
Yeah, it is crazy.
What I understand is the data itself is encrypted, but that encryption only goes down to a certain level.
And you still get a lot of the top level data that is valuable.
It just lacks the personal data.
So they still, they do collect a lot of data for this.
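The shape of what Josh describes, keeping aggregate-level telemetry while dropping the personal layer, can be sketched as a toy. Statsig's real pipeline isn't public; the field names and hashing scheme here are hypothetical:

```python
import hashlib

# Hypothetical list of fields treated as personal data.
PERSONAL_FIELDS = {"email", "prompt_text", "ip_address"}

def scrub_event(event: dict) -> dict:
    """Drop personal fields and pseudonymize the user id; keep aggregate metrics."""
    clean = {k: v for k, v in event.items() if k not in PERSONAL_FIELDS}
    if "user_id" in clean:
        clean["user_id"] = hashlib.sha256(str(clean["user_id"]).encode()).hexdigest()[:12]
    return clean

event = {
    "user_id": "u-123",
    "email": "dev@example.com",        # personal: dropped
    "feature": "code-autocomplete",    # aggregate-level: kept
    "latency_ms": 840,
    "accepted": True,
}
clean = scrub_event(event)
print(clean["feature"], clean["latency_ms"])  # code-autocomplete 840
```

The point of the sketch: the behavioral data (which feature, how fast, whether it was accepted) survives intact, which is exactly why it remains valuable even with the personal layer removed.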
I love that OpenAI now is assigning sub-CEOs to the CEO and building out actual executive
and C-suite teams around just creating applications and products. Because, I mean, like we mentioned
a little bit earlier, these models probably become commoditized at some point where, like,
everyone squeezes out the same efficiencies. They're all pushing towards the same frontier.
Eventually, the competition is going to happen on the app layer. And OpenAI, I mean, they are
probably ahead in this by a large margin where they actually have an entire C-suite dedicated
specifically to creating these applications, creating a better user experience. And I think
that's part of the reason why, I mean, versus the rest of the world, they have the most users,
but also why we keep coming back, why I keep going back: their applications just rock.
And if they're going to stack a hardware device on top of that and they're going to
keep rolling out these cool new apps, I mean, sign me up. I'm bullish on OpenAI.
But again, this is not the only crazy news for this week. We have another big announcement.
Yeah, so Anthropic casually raised $13.1 billion, I think, Josh, this week,
in a Series F, which values them at, drum roll...
$200 billion.
This is a private company.
Sorry,
I'm going to stop talking for a second.
What are we doing here?
183 billion dollars.
Look at this tweet.
This is fascinating.
So from May of 2023 to September of 2025,
which is two years and change,
they went from a $4 billion valuation
to a $183 billion valuation.
That is a multiple.
Oh, my God.
What, 20x in two years?
I can't do math.
I can't do math.
No, 20x would be 80.
That'd be 40x, 50x.
Tune in on Josh and Ejaaz being one-shotted by ChatGPT, and we can't do basic mental math.
We're going to go with roughly 45x.
That's my guess.
In two years, in 24 months, that is outrageous levels of growth.
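For the record, the multiple the hosts are reaching for is just the ratio of the two valuations:

```python
start_valuation = 4e9    # May 2023, in dollars
end_valuation = 183e9    # September 2025

multiple = end_valuation / start_valuation
print(round(multiple, 1))  # 45.8 -- so "roughly 45x" is right; 20x would only be $80B
```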
And I mean, it leads everyone, I think, to ask themselves, how much further can this really go?
Like, where does this train stop?
Can they really do another...
Okay, I'm just going to ask you flat out.
Are we in a bubble?
I think we are in some instances and in some instances we're not.
So this very much feels unsustainable.
We cannot...
Anthropic will not grow.
Cannot keep doubling from here.
I mean, two more doubles and you're at a trillion dollars, basically.
But AI is the most formative tech of our generation.
by a pretty large margin.
Yeah.
And it's not going anywhere
and it's going to keep providing lots of value.
So where everything is valued at now
is probably very high for the current time.
Yeah.
But where it will be at relative to now
and even just five years from now
feels very undervalued.
So it's probably a matter of the rate
at which we're accelerating,
which probably is too high.
It probably is some sort of a bubble.
But I'm not sure the bubble necessarily needs to burst.
It is just really this,
this pivotal, transformative thing that takes decades to play out, and we're still in decade
number one. So it doesn't, it feels like there is, there's overvaluation, but the long-term
effects of this are just up only, I think. That's my take, at least. Do you have any
different takes? I do have different dates. I agree with you on the long term, but I think we are in
a pretty big bubble. I don't think we're anywhere near it popping. I think things are
going to get a lot crazier. I think we're going to reach trillions, like double-digit
trillion-dollar market caps. NVIDIA is well on its way. I think that's like a 2x,
maybe less than a 2x, from here. But I was also looking at, like, the fundamentals of this
Anthropic raise, Josh. January this year, they were making $1 billion annual recurring revenue.
Now, simply eight months later, they're making $5 billion of annual recurring revenue.
And I'm looking at this valuation, and I used my trusty calculator: that is a 36.5x
multiple on their ARR, right? Which I don't know if that's a lot or if that's overvalued.
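Running the calculator math on the figures quoted, a $183 billion valuation against $5 billion of ARR:

```python
valuation = 183e9
arr = 5e9  # annual recurring revenue

multiple = valuation / arr
print(round(multiple, 1))  # 36.6 -- in line with the ~36.5x quoted on the show
```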
I would say it probably, I would guess it probably is. But like you said, AI is the darling child.
Literally the Mag 7, so the top seven companies in the S&P 500, are purely responsible for keeping
the S&P 500 positive this year, and it just broke all-time highs. So, you know, on one side,
you could argue, ah, it's massively inflated, but on the other side, you could say, well,
if it wasn't for that, the economy would be dying right now. So, you know, it remains to be seen,
but I'm going to call bubble at this point. I don't know when it's going to pop, but I'll take
the bubble side. All right. Bubble it is. So on the topic of bubbles and charts that go up
and to the right, we have another one that is brought to us today by Grok, our good old friends at
XAI, which just hit the number one model on OpenRouter: Grok Code. This is a brand new model that they just released. And my
understanding is it is incredibly lightweight, very fast, and very cheap. And as a result, what we're
seeing here is 96.5 billion tokens generated by the model, making up 53% relative to others,
and taking the number one slot on OpenRouter. Now, for those who don't know about OpenRouter,
we actually had the CEO on a few weeks ago. And the way OpenRouter works: developers come,
they plug into the open router infrastructure,
and then the actual router decides where to route traffic through
based on the needs of the developer.
So what we're seeing here is developers are requesting a model from OpenRouter,
and OpenRouter is deciding over and over and over again
that the best model to serve the users is Grok Code.
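OpenRouter's actual routing logic isn't public, but the idea described, picking a model per request based on the developer's constraints, can be sketched as a toy. All model names, prices, and speeds below are illustrative, not real quotes:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    price_per_mtok: float  # illustrative dollars per million tokens
    speed_tok_s: int       # illustrative throughput, tokens per second

# A made-up catalog standing in for the models listed on a router.
CATALOG = [
    Model("grok-code", price_per_mtok=0.2, speed_tok_s=250),
    Model("big-frontier-model", price_per_mtok=3.0, speed_tok_s=60),
    Model("budget-model", price_per_mtok=0.1, speed_tok_s=40),
]

def route(max_price: float, min_speed: int) -> Model:
    """Pick the cheapest model that satisfies the developer's constraints."""
    candidates = [m for m in CATALOG
                  if m.price_per_mtok <= max_price and m.speed_tok_s >= min_speed]
    if not candidates:
        raise ValueError("no model satisfies the constraints")
    return min(candidates, key=lambda m: m.price_per_mtok)

# A cost-sensitive coding workload: cheap *and* fast wins the traffic.
print(route(max_price=1.0, min_speed=100).name)  # grok-code
```

This is the dynamic the hosts describe: a model that is merely "pretty good" but very cheap and very fast can win the routed traffic over stronger, pricier models.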
Now, Ejaaz, do you have an idea why this is,
what makes this so special,
why it's gone from nothing to a huge deal?
I remember when Alex, the CEO and founder of OpenRouter, came on to our show,
when we asked him this exact question,
we were like, you know,
why do some of these models that get listed
get used so aggressively?
He said that he has the largest community
of AI developer nerds
that anyone's ever seen.
So if you want to test your product
or your new model,
specifically the coding element,
you kind of want to be on open router.
You kind of want to access that community.
And so I think this is a positive signal,
be it small and kind of concentrated.
And I think we're actually going to see
the kind of wider
effects of how good this AI coding model is in the next couple of months, when regular developers
that are working in completely different or maybe even adjacent tech sectors start using it
for work that isn't AI specific, if that makes sense. And you mentioned, Josh, that, you know,
it was 96 billion tokens. They actually crushed that record, I think it was yesterday or two days
ago, where they set a new all-time record of 138 billion tokens in a single day.
That is insane.
That metric has literally not been achieved by Claude Code's Sonnet model, which was previously
the kind of darling child.
That wasn't even achieved on Codex, which is OpenAI's coding darling.
So there's a lot of demand for this thing.
And I remember when they released Grok 4, Josh.
Grok 4 alone, its coding function was amazing.
And it beat Claude's Sonnet.
So the fact that they have released another coding model so soon, so quickly after, indicates
that this is a niche within AI that is progressing potentially faster than the actual LLMs themselves
and something to keep an eye on. My bet is we're going to see a software coding agent actually
outperform senior to mid-level engineers much sooner than we anticipate. It's amazing to see
their name higher than Claude Sonnet, Gemini 2.5 Flash, Gemini 2.0, I mean, all of the leading models,
again, after only existing for two years and change. And I'm going to sound like a broken record,
but XAI's rate of acceleration is unbelievably fast. The rate that they're shipping, these models,
but also the rate that they're shipping models that matter, I think no one would argue that this
is an actual better model than Claude Sonnet 4, which is kind of like the leading edge of code generation,
but they're optimizing for things that developers want. They're optimizing for cost, for efficiency,
and for just pretty good, not great.
And I'm sure the great will come,
and I'm sure Claude will then work on the pretty fast and pretty cheap.
But what they're doing is they're optimizing for the things that clearly developers want the most
instead of going for the benchmarks.
So I think while Claude and Gemini,
and a lot of these models we mentioned, are optimizing for benchmarks,
This is probably optimizing for raw utility,
and that's what we're seeing coming out of these token numbers being at an all-time high,
138 billion in a day.
That's outrageous.
Crazy, crazy.
And then for our last news of the day, Apple, we have something about Apple and it's positive.
It is not something negative.
I promise you it's actually cool.
It's exciting.
Apple is doing something interesting in the world of AI.
And maybe this is a teaser for next week.
Next week they have their iPhone release event where they're announcing all the new hardware.
We will be covering that.
But before that, they have dropped two new models.
Ejaaz, tell us what's going on here.
Apple is now in the news for AI.
Okay.
I'm glad that I can say something positive about Apple in AI,
because it has been excruciatingly difficult
to do that in the past.
They've done a few things here
that puts a massive grin on my face, Josh.
Number one, they've released two, not one,
but two AI models.
And they've open sourced both of them.
You know I'm a massive open source fan.
I believe that this technology should be for everyone
and everyone should be able to modify it
to their own tailored niche and use case
or whatever that might be.
It's a kind of giving back
that a lot of these tech monopolies
don't usually do, and I'm happy to see this kind of take place. But then the next question is,
okay, well, what do these models actually do? Is it just another LLM? Is it going to be worse than
ChatGPT? Well, they actually did something pretty novel here. There were these two models,
the first one being something called FastVLM, which stands for vision language model. It basically
is an AI model that combines vision, which is like looking at images, with the ability to understand
text, or communicate what it's seeing via text.
So if you look at this demo here, you'll see it's a video of some kind of like Apple thing
happening.
There's like a race car happening.
There's robots falling over.
And the ability for an AI model to kind of like see what's on its screen and accurately
depict what's going on and communicate that is actually a very underrated skill.
The reason being, it's easy to kind of film a video and feed that to an AI model. It's
very, very hard to get that AI model to understand what the hell's going on in that video.
That's why we've seen a slower rate of progression with AI video models or AI image models.
It's taken a bit of time versus LLMs, which are getting kind of like to PhD level status here, right?
But it seems like Apple has been kind of working behind the scenes on a model that does this in a super refined manner.
And these words that it's generated, you can see them on the screen.
If I can actually just blow it up, it's doing this in real time, which is incredibly hard to do.
You know, it's saying, okay, there's a race car. Tim Cook has headphones on and he's speaking
into the mic, talking to his employees about this race car. And the point
is, these words can now be fed back into a model or an LLM to make it super refined and much
smarter. So why is this important? Why does this matter? Because I think this technology
has existed before. Like, models can do this. Models can see videos. They understand them. This is,
at this point, kind of a trivial thing for an AI model to do. I think what's interesting with Apple
and the reason why it's important and the reason why Apple's model is different is because of how
efficient and lightweight the model is.
When you think of Apple, they have their iPhones.
They need models that can run on a mobile phone.
You cannot run any of these models that we've talked about today on anything less than a hardware rack that would take up, like, the size of an office, a gigantic computer.
These models, they can run on your phone.
And it says in this post here, the models are up to 85 times faster and 3.4 times smaller than previous work, enabling real-time vision language
models. So this is cool. This is interesting because it's applicable to us. This will be a model
that can run on an iPhone. This can be a model that delivers on the promise of Apple Intelligence,
that can be very lightweight, and you could use it in your day-to-day life. So an example that I really
like is that, let's say you're at a restaurant and you have this gigantic menu with tons of
stuff on it. And all you have is your phone. And maybe the menu isn't immediately accessible online.
Well, you could just point your phone at the menu. It'll read all of the items. And you could ask
it, hey, what's the healthiest item based on my current diet? Or what's a gluten-free option for the
person at the table who doesn't eat gluten? And it can kind of be this enhanced system similar to
what we imagine the glasses will be like, but currently in the form of a phone. So I think in terms of
usability, this is much more practical than just about any other model we've covered today.
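That caption-then-reason flow, where a small on-device vision model turns pixels into text and a bigger LLM reasons over that text, can be sketched roughly like this. All function names and outputs below are made-up stand-ins to illustrate the shape of the pipeline, not Apple's actual API:

```python
# Illustrative sketch of a caption-then-reason pipeline, assuming a small
# on-device vision-language model (VLM) and a larger language model (LLM).
# Both models are stubbed out; a real system would run inference here.

def on_device_vlm_caption(image_path: str) -> str:
    """Stub for a lightweight VLM that turns an image into a text description."""
    # A real model would run on the photo; we return a canned caption instead.
    return "A menu listing: grilled salmon, pasta alfredo, quinoa salad (gluten-free)."

def llm_answer(question: str, context: str) -> str:
    """Stub for a larger LLM that reasons over the VLM's text output."""
    if "gluten-free" in question.lower():
        # A naive keyword scan over the caption stands in for real reasoning.
        items = [item.strip() for item in context.split(":")[1].split(",")]
        return next(i for i in items if "gluten-free" in i)
    return "I'm not sure."

caption = on_device_vlm_caption("menu_photo.jpg")  # hypothetical photo of the menu
answer = llm_answer("What's a gluten-free option?", caption)
print(answer)
```

The key design point is that only compact text, not raw video, crosses the boundary between the cheap local model and the expensive reasoning model, which is what makes the on-phone use case plausible.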
So I have an additional kind of thought about this, which is Apple is probably, you know,
one of the largest, if not the largest,
holders of photo and video data.
iCloud, right?
Everyone stores everything on iCloud.
And I wonder if there is an angle here
where Apple is like,
hey, we've got all this data.
we're not just going to feed this to a Google
or an OpenAI to train their own models,
but we can train our own models
or use that data to negotiate a specific
partnership with Google where we get to use the AI model, but exclusively using this
data, and that means Apple gets a stake in this. I'm kind of hypothesizing here,
because rumors have been going around that they might be partnering with Google's Gemini
AI model or OpenAI's ChatGPT, because I think they're a bit behind on creating
their own foundational model. So that's kind of a moonshot idea, but we'll see if it maybe
plays out. I don't know. You've seen this kind of two-pronged approach where they'll
have the local compute and then they will have the cloud compute. And it's becoming more and more
clear that they'll probably have to offload that cloud compute. The high computation stuff that takes
a lot of time to think that requires a big model, it's probably not going to be theirs. They'll
probably, I mean, they've been deferring to OpenAI. Maybe they'll do Google. They'll outsource that.
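That hybrid setup, routing light requests to an on-device model and heavy ones to an outsourced cloud model, is a common pattern. A rough sketch of the dispatch logic might look like this (the word-count heuristic, threshold, and model names are purely illustrative, not anything Apple has described):

```python
# Illustrative sketch of hybrid local/cloud inference routing.
# The heuristic, budget, and model names are made up for the example.

def estimate_cost(prompt: str) -> int:
    """Crude proxy for how much compute a request needs: its word count."""
    return len(prompt.split())

def route(prompt: str, local_budget: int = 20) -> str:
    """Send cheap prompts to the on-device model, expensive ones to the cloud."""
    if estimate_cost(prompt) <= local_budget:
        return "local-small-model"   # fast, private, runs on the phone
    return "cloud-frontier-model"    # slower, but handles heavy reasoning

print(route("What's on this menu?"))       # short prompt stays on-device
print(route(" ".join(["word"] * 50)))      # long prompt goes to the cloud
```

A real router would weigh privacy, latency, and task type rather than prompt length, but the split itself is the point: the phone handles the everyday stuff, and only the hard problems pay the cloud round-trip.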
But the actual local models that run on the phone, well, they can get pretty good. And they
can still manage to deliver a lot of the Apple Intelligence promises that they offered us at WWDC last year,
using things like these new open source models. So it's very much a step in the right
direction. It is not a seal of approval, as if we are suddenly pro-Apple in terms of their AI capabilities,
but it is definitely a step in the right direction that we like to see. They're trying. They're doing
something. I mean, the bar is literally on the floor. Well, it is Techtember, as you said, Josh.
That it is. This is traditionally the month where all the big dogs, all the big tech companies come out
with their latest releases.
This is across hardware and software.
And I'm excited for a new category of hardware around robotics.
We saw Meta kind of poaching robotics executives from Apple and stuff like that.
So I think it's going to be a really exciting month.
I'm hoping, no, I am praying that Apple comes out with a flagship AI model that crushes OpenAI.
But I think I'm being a little too optimistic.
I'm hoping Meta Superintelligence Labs stops bleeding their executive team and actually puts out something that I would want to use, along with some consumer applications.
And I hope that the burgeoning startups, in the name of OpenAI and maybe even Anthropic, that are so lowly valued at $200 billion to $500 billion, come out with a suite of applications that can go head to head with some of the GOATs, Meta and Apple.
That remains to be seen.
Josh, I have nothing else to say.
I just want to fill people in on our schedule for this month because it's packed.
Next week, we have Apple.
Apple, big event.
This is the iPhone event.
This is where they're releasing a lot of new hardware.
The hope is that not only will we get hardware, but we will get supplemental AI software.
So a lot remains to be seen.
A lot will be unveiled next week.
The following week, we have another exciting event, which is Meta Connect,
I believe is what it's called, where Meta is going to release their hardware products,
most certainly paired with a lot of software and AI offerings.
So we will have all of the coverage of that.
And then a few weeks after that, we have OpenAI's Dev Day, which I'm sure they will
announce even more features.
And then a couple weeks after that, we have Microsoft Ignite, which is Microsoft's big
annual conference.
So it is going to be like a very crazy next couple of weeks and months.
So buckle up.
This was the news up to today for this week.
There's going to be a lot more.
We will be back for more episodes later this
week, next week, the following week. There's a lot to look forward to in this space. And we will
be right here by your side going through it all, the highs and the lows, the ups and the downs. As always,
if you enjoyed the episode, please share it with your friends. Don't forget to like, subscribe,
all of the good things. And we will be back again soon for another episode. Thank you so much for
watching.
