Limitless Podcast - THIS WEEK IN AI: Qwen 3.6 is Scary Good, OpenAI's Record-Shattering Fundraise, Oracle Layoffs
Episode Date: April 3, 2026

We explore China's advancements in AI with Qwen's new models outperforming leading competitors. We discuss OpenAI's historic $122 billion fundraising round and investor shifts towards rivals like Anthropic. Plus, we cover Oracle's layoffs, Meta's AI glasses, and a revolutionary electromagnetic AI chip design. Finally, we highlight Valar Atomics' valuation surge after a $450 million raise for their modular nuclear reactors.
------
🌌 LIMITLESS HQ ⬇️
NEWSLETTER: https://limitlessft.substack.com/
FOLLOW ON X: https://x.com/LimitlessFT
SPOTIFY: https://open.spotify.com/show/5oV29YUL8AzzwXkxEXlRMQ
APPLE: https://podcasts.apple.com/us/podcast/limitless-podcast/id1813210890
RSS FEED: https://limitlessft.substack.com/
------
TIMESTAMPS
0:00 Intro
3:33 Model Innovations
4:21 Understanding Vibecoding
7:06 Zhipu's Competitive Edge
9:00 Google's Gemma 4.0 Launch
10:14 OpenAI's $122 Billion Round
15:59 Anthropic's New Features
18:48 Microsoft's Shift to Claude
22:04 Electromagnetic AI Models
24:21 Oracle's Layoffs
26:51 Meta Glasses
29:22 Valar Atomics' Raise
------
RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213
------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
Transcript
China has officially leapfrogged the US in AI models.
Qwen released two models, Qwen 3.6 Plus and Qwen 3.5 Omni, which have absolutely crushed Claude Opus 4.6 and Google's Gemini.
Qwen 3.6 can basically do everything Claude Opus 4.6 can do, but two to three times faster.
It's also dramatically cheaper. It's free.
And it's also closed source, marking the first time that a Chinese AI model is not open source.
this is a signal that China may have finally beaten the U.S.
And we should be concerned about that.
We're going to dig into that.
In other news, OpenAI has also raised the largest private round ever.
$122 billion.
We're going to get into all of this and more.
But let's start with Qwen.
I think I need to reel it in a little bit.
I'm not sure Qwen is crushing the United States just yet.
We still have frontier models.
So, like, do not panic yet.
But there are signs pointing towards a future in which China continues to gain pretty quickly on the U.S.
So one of the most noteworthy things you mentioned is the fact that this model is closed source.
This is new.
This is different.
We always said, when we would discuss Chinese models, that the biggest tell would be when they started closed-sourcing these models.
And clearly, they're closed-sourcing this model.
Also, this model is available for free, for a limited time, but totally free.
And it's at Opus 4.6, or close to Opus 4.6-level, intelligence, right?
We're looking at these benchmarks here.
It looks like it beats it in some things.
It's a little less proficient in others.
And again, these are Chinese benchmarks,
so we're not entirely sure how accurate this is.
I suspect it is inaccurate.
I suspect it's actually largely a distillation of Opus 4.6 and other frontier models.
But the results do speak for themselves.
And they are very impressive, right?
Like particularly the speed in which they're able to generate high quality tokens.
It's really, it's a pretty powerful model.
Yeah.
So it's about three times faster than Opus and much
faster than Gemini. It also has a 1 million token context window, which now matches the American
frontier models. That's typically harder to pull off, and it helps give your AI model
understanding and memory of what you're talking to it about. And the reason why I would say
the benchmarks are probably more accurate than previous benchmark releases is they've been kind
of honest about it. So if you look on the left here at Terminal Bench 2.0, which is a well-regarded benchmark
for coding in particular, it surpasses and beats Opus 4.6.
But if you look at software engineering as a whole, they admit that Claude Opus 4.6 is still better than them.
So I don't know.
I think this is a real signal.
This, coupled with the fact that they've closed-sourced it, indicates that China might have the edge here.
Now, for those of you who are confused why open source and closed source is a signal that China is getting better,
the idea with Chinese AI models was: they build an amazing model and they open-source it, so that it decreases the value, pricing, and stock equity of the frontier American labs,
who are typically private and don't open source their models.
It also gives an opportunity for anyone and everyone to get access to frontier intelligence
and build an amazing AI product.
The fact that they are closed sourcing it signals to me two things.
One, they've figured out a way to leapfrog America,
maybe not in this particular model,
but in the next model that they release,
and they don't want that proprietary information going out.
And two, I don't think they need to rely on American models much
in terms of copying the types of designs or research that they do.
So I think they've finally caught up.
It might be a Hail Mary.
I might be incorrect, but I think this is another DeepSeek moment.
Yeah, they're doing really well.
And this is only one of two models released this week that were very impressive.
The second one being the Qwen 3.5 Omni model, which does video-and-audio vibe coding.
So this is multimodal vibe coding.
Can you explain to me what's happening here?
Okay.
Instead of explaining the concept, let me just describe what's happening on the screen.
For those of you who are listening, I am staring at what looks like a two-year-old's drawing.
It's of a website, but it's just a bunch of boxes that someone has sketched out. And it's a
video of him explaining what each of the boxes should be. He's saying, hey, there should be a video
of someone explaining my product. And then on the right here, this particular box should have
some text explaining how my product works. And then the box in the bottom left should be the
pricing and allow people to kind of purchase my product. But literally, what we're looking at right now
is a marker-pen sketch of boxes. This does not exist yet. This is on paper. This is
a physical thing. And now what you see is him feeding this into this new model called Qwen 3.5
Omni, and it understands everything he's saying. It understands the vision that he has through his
sketch, and it codes up a fresh website from scratch in seconds. It's called audio-visual vibe
coding, and this is the first instance of this ever happening. I will say, perhaps the models
aren't always better, but the demos are always pretty strong. And I think the demos really carry a lot
of weight when it comes to evaluating models, because if you are given a clear use case that a model is
particularly good at, like this, I mean, that's a really fun selling point and a really compelling
selling point. There's one other demo from this model that I thought was pretty impressive, right?
Yeah. So it's an Omni model, which means it's not just visual, but it's also audio. It contains
other types of media. And this particular demo addresses an issue people have when they speak
with their AI model: sometimes when they're agreeing with the AI model or they say something,
the model thinks it's getting interrupted
and stops giving you the information that you want.
It becomes really annoying.
They've developed this type of model
where you can interrupt it
and it knows when you're actually trying to interrupt it
and when it can just continue.
So I'll show you a little clip over here.
Could you give me a brief overview of this file, please?
This document primarily discusses the Qwen 3 series of models.
Its most prominent feature is the ability
to autonomously decide
whether to respond quickly or engage
in deep reasoning.
So there's a few things
I want to point out here.
Number one, he's...
He's got to explain why this is important, because that seems so lame.
It's like, obviously it's not answering.
Okay, this is actually really cool.
Okay, so there's a few things
going on here.
Number one,
he is scrolling through a research paper
on his laptop,
and he's holding his phone
which contains this model.
And he's got the camera feed on.
So he's showing
the pages that he's scrolling through.
This model is reading
this entire paper
as he scrolls through,
so much faster than a human
can. It's digesting and understanding what it's reading, and then it's summarizing all the key
points to him. But as he's listening to it speak about what it's seeing on the screen, he's saying,
oh, I see. Oh, that's interesting. Now, typically, when you're having a conversation with another person
and you make those same comments, the person doesn't stop talking and withhold the information
that you wanted to hear. But this model is smart. It understands that he's not trying to interrupt it,
and it just continues flowing and speaking smoothly,
which I think is a breakthrough.
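The behavior the hosts describe is, at its core, a classification problem: is this utterance a backchannel (keep talking) or a genuine interruption (stop)? Qwen hasn't published how 3.5 Omni makes that call, so the following is only a deliberately naive heuristic sketch of the idea; the phrase list and thresholds are invented for illustration.

```python
# Toy sketch of backchannel vs. interruption detection. Qwen has not
# published how 3.5 Omni actually decides this; the phrase list and
# word-count threshold below are invented for illustration only.
import re

BACKCHANNELS = {
    "yeah", "right", "okay", "uh huh", "mm hmm", "got it",
    "oh i see", "i see", "interesting", "oh thats interesting",
}

def should_pause(utterance: str) -> bool:
    """Return True if the listener is genuinely interrupting the model,
    False if the utterance is just an acknowledgement and the model
    should keep speaking."""
    # Normalize: lowercase, keep only letters, spaces, and question marks.
    text = re.sub(r"[^a-z ?]", "", utterance.lower()).strip()
    if text.replace("?", "").strip() in BACKCHANNELS:
        return False          # pure acknowledgement: keep talking
    if "?" in text:
        return True           # a question is always a real interruption
    return len(text.split()) > 3  # longer fragments count as interruptions
```

In the real model this decision is presumably learned end to end from the audio rather than rule-based; the point is just that "don't stop for backchannels" is a separable, testable behavior.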
It's super cool.
Yeah, very impressive model.
And this isn't the only news out of China.
Also, there's another team.
Okay, this team, I'm trying to figure out how to pronounce this.
Say the name, Josh.
So it's spelled Zhipu, but I'm aware that it's not actually pronounced Zee-poo.
It's pronounced more like Jee-poo.
Jee-poo.
So, yeah, China is now good enough at building models that we're learning
their language, because we have to reference it that frequently. So that's probably a high signal.
If we're having Chinese lessons on the show from Ejaaz, then China's getting pretty good.
What makes this new Zhipu model release so impressive? So GLM, which is their flagship model,
has kind of been similar to Qwen in terms of capabilities, reasoning, coding. And when we looked
at the Qwen models just now, we saw it go from a simple sketch, a handwritten sketch,
to a fully functioning web app,
coded in real time in a couple of seconds.
And you might think,
that's just one AI lab.
They just got lucky, who cares?
We can crush them in the future.
What you're seeing on your screen right now
is another entirely different AI lab
which released their own model this week
and it does the exact same thing,
sketch to code.
So the point is whether these Chinese labs
are working together or not,
whether they're training on the same hardware or they're not,
they're able to achieve the same frontier research
and breakthroughs around the same time,
which is the reason why I think this isn't just a one-off.
I think China's collectively caught up with the U.S.
and is posing a significant threat.
Yeah, it's very impressive what they're doing.
And this comes off the back of a lot of drama, I guess.
I was hearing rumors that some of the developers from Manus,
the company that was acquired by Facebook,
were not allowed to leave the country.
And this is happening with a few other AI labs.
I think Qwen may be one of them, or DeepSeek,
where the developers have kind of
become government assets, if you will,
in a way that is a little unnerving,
but progress is getting good.
So it's at the point now where, I mean, we're still ahead.
I'm telling you right now,
they just do not have Capybara.
Because if they did, they'd probably be doing some serious damage.
But they are making some serious progress.
And also, another company making serious progress
is our good pals Google,
who, just as we started recording this, released
Gemma 4.0.
This is a purpose-built model for advanced
reasoning and agentic workflows, on the hardware that you own. The thing that Gemma is optimized
for is performance per bit. It's like, how much intelligence can you get per bit of
information? And according to this chart, it's very, very high. Yeah. So if we look at this chart,
it's showing that Gemma 4, which is the recent model, comes in two versions: a 31 billion
parameter model and a 26 billion parameter model. So that is significantly small when you consider that
Capybara, the next Anthropic model that you just mentioned, is probably orders of
magnitude larger, at five to ten trillion parameters. So it's a much smaller model, but it keeps
up with some of the bigger models, specifically the Chinese AI models that we were just talking
about. So if you look on the right over here, we've got GLM 5, Kimi K2.5, and Qwen 3.5,
which are each around 400 billion to a trillion parameters large. So the amount of intelligence
per parameter, if density is not the right term,
is really high with Gemma 4,
and it's cool to see American labs releasing open source models.
Yeah, so shout out to Google.
We have another shout out this week for another American company
that goes by the name of OpenAI.
You may have heard of them.
They just closed the largest funding round in history.
No one's ever raised more money than OpenAI just did.
At $122 billion in committed capital
and an $852 billion post-money valuation.
It's important to note that this capital has not actually been deposited into a bank account.
This is committed capital.
Now, there's some highlights here.
Amazon is putting in $50 billion,
which is ironic because OpenAI signed
a $100 billion AWS deal.
Nvidia is putting in $30 billion,
which is ironic because OpenAI runs on Nvidia GPUs.
SoftBank is putting in $30 billion,
which is ironic because they're building Stargate together.
So there's a lot of this circular-economics
element to this. I mean, $35 billion of Amazon's investment is just contingent on OpenAI
achieving AGI or going public. So this leads us to the natural conclusion. Well,
this is their final hurrah before they go public. This is the last private round that we will
ever see. This is the largest of all time before their blockbuster IPO sometime possibly later this
year. Yeah, exactly. And I don't know if the public is buying it. Bloomberg
released a report this week which claims that OpenAI shares have fallen out of favor on the
secondary market. And can you guess which company, Josh, they've fallen in favor of? Surely it's Anthropic.
Ding, ding, ding, ding, ding. The pretty cool new kid on the block. Yeah, although Anthropic and the
core team have had a nightmare on security this week, with two very significant leaks, the public
doesn't care. They still want Anthropic shares and equity, and they're selling, or willing to
sell, OpenAI secondary shares in favor of it. But there is still some cool news from the
raise, Josh. Did you know that $3 billion of the $122 billion raised was available for individual
investors, not retail investors, but individual investors? They were able to get
exposure through their banks. Now, I don't know which banks, and I don't know what the vetting
process was like, but the fact is OpenAI did not know who these individual investors were,
yet they offered up $3 billion of access.
I think you needed to be an accredited investor, though.
So it kind of sucked in that respect.
But they threw the retail audience who didn't get access to the $3 billion a bone
by announcing, alongside this latest fundraising round, that Ark Invest,
which has several ETFs that give you exposure to private AI companies,
is going to add a bunch of OpenAI equity to those funds,
which I actually don't think is a good bit of news, given the VC drama on X from last
week. Can we talk about that private fundraising round? Wait, so who was this available to? Because I never heard of
this. Yeah. I had no idea it was even possible for money to be raised from individuals who are now on the
cap table. That's pretty cool. I would have loved to have participated in that. I think the Ark news
here is interesting, because the day that word came out that secondary shares are having a little bit
of trouble being liquid, Ark decided to add those secondary shares into a public venture fund.
Like, oh, we've had enough of this now. We're done with this toy. Now the
public retail market can get access to it. And that's kind of what happened. Perhaps the timing is
coincidental, but it's certainly a bit ironic that three of the new Ark Invest ETFs included these
OpenAI shares on the back of the most recent fundraising round and the rumor that it is becoming increasingly
difficult to offload these shares. So the market, I mean, I don't want to say it's getting
frothy. It's just growing remarkably fast. And it is showing some signs of trouble
in paradise, to some extent.
I mean, again, all this is pretty warranted and justified.
But it is important to note that these are commitments.
Again, they did not close this money.
These are commitments.
And we've seen what happens with commitments.
OpenAI has committed to buying a whole bunch of RAM that they're no longer buying.
And the price of memory over the last couple of weeks has not collapsed, but it has returned to
very high levels instead of unfathomably high levels.
So there's a lot of weight being carried by that word, commitment.
But nonetheless, congratulations, OpenAI.
$122 billion.
It's more than anyone's ever raised in history.
And it's a really big deal.
A lot of the capital was raised from the likes of Nvidia and Amazon,
and presumably all that money is going to go back to them in the form of compute.
OpenAI president Greg Brockman went on an interview, I think yesterday,
and basically said, we're focusing all our resources on compute.
Compute and scaling laws are definitely intact.
And he says, quote,
if I could get my hands on all of the compute in the world and purchase it, I would do it, because it's become a positive revenue cost center.
So OpenAI thoroughly believes that the more compute they have, the more revenue, and that trend isn't going to break anytime soon.
So we'll see if that pays off.
This is a pretty big bet.
Foot on the gas up only, man.
I'm still bullish.
I'm still very bullish.
I'm also bullish on Claude, which had a new update this week in terms of computer use, right?
Now it's like pretty much capable of doing anything on your computer for you.
This is OpenClaw, Claude Edition.
And it seems to be working pretty well.
I was actually playing around with it.
It's really cool.
It's good.
It's good.
Okay, so it seems to be every single week, heck, every single day, Anthropic releases a new feature.
And they've done that for the last, I believe, 18 days straight, which is just an insane feat for any frontier startup or company.
But the point is, these features collectively have automated a very important thing: the entire software engineering role.
And I'm not dramatizing this.
Previously, we had Claude Code,
the most popular Claude product, arguably,
earning billions of dollars for them every single month.
But it just generated the code.
You would then need to launch the app
that it had built, test the app out yourself.
You would need to run a bunch of tests,
give feedback to Claude,
and it would still require a human in the loop.
This update, computer use,
completely removes you from the loop.
And I actually summarized it in a tweet here
where it basically says,
Claude writes the code for you,
then it opens up the app that it coded,
then it clicks through the entire app
and finds the bugs that it itself created,
does the security testing and everything,
then it fixes the bug and improves the app.
Now, if you consider the fact
that Anthropic also released a remote control feature
where you can basically text Claude from your phone
and you don't even need to be anywhere near your terminal,
you start to see how you can just message it
and come back a few hours later
to a fully-fledged app
that has been iterated on and tested a bunch of times with real human feedback or agentic feedback,
and you just have an award-winning product in a couple of hours or maybe even a couple of weeks.
It's just insane. It's cool.
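That workflow, Claude writes the code, runs the app, finds its own bugs, fixes them, is a generate-and-verify loop. As a hedged sketch of just the control flow (the `model` and `run_tests` callables here are stand-ins I've invented, not Anthropic's actual API), it looks roughly like this:

```python
# Toy generate-test-fix loop, sketching the computer-use workflow described
# above. `model` stands in for an LLM call and `run_tests` for launching and
# exercising the app; both are assumptions, not Anthropic's real interfaces.
from typing import Callable, Optional

def agent_loop(
    model: Callable[[str], str],
    task: str,
    run_tests: Callable[[str], Optional[str]],
    max_iters: int = 5,
) -> str:
    """Ask the model for code, test it, and feed failures back until green."""
    prompt = task
    code = model(prompt)
    for _ in range(max_iters):
        failure = run_tests(code)  # None means every check passed
        if failure is None:
            return code
        # Close the loop: hand the failure back to the model, no human needed.
        prompt = f"{task}\nYour last attempt failed with: {failure}\nFix it."
        code = model(prompt)
    raise RuntimeError("agent could not produce passing code")
```

The remote-control feature the hosts mention just means this loop keeps running unattended while you message it from your phone; the human only re-enters when the loop exits.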
The agent-ification of everything has been a prominent theme of everything that has been released
over the last few weeks. And I mean, we have new news from Manus as well, where they just
rolled out remote phone control for your desktop. So if you are running an instance of Manus,
you can just text your computer on the go from your smartphone. And it
will, again, take over your computer and do all the computer-use things that you'd expect from a product
like OpenClaw, except they're rolling it out themselves. So there's a lot of this agent-ification.
I think it's great. This is the transition into the operating system, the AI-first OS type of
progress. And it's really exciting to see. Now, on this agenda here, we have something that
says Anthropic leak 2. Is there another Anthropic leak? What else has happened? We've been
talking about Anthropic leaks all week. No, I just wanted to point out that it all looks really
rosy with Anthropic, but they've just come off a nightmare of a week. Two leaks in one week.
The second one being the largest, which I'm showing on screen: the entire source code of Claude Code
was openly available. And I think by the end of the day it had 90,000 forks,
which means it's living on at least 90,000 devices, which is just insane to think about.
And if you want to understand how crazy this is and also the secret features and products that
they're about to release that were revealed in the code, we created an entire episode about
this. Definitely go check that out and watch it. We reveal all the details there. But the point
is, Anthropic is kind of succeeding and winning in a lot of different ways, but they
might be running too fast for themselves. They're building everything. I think 100% of the
features that they build now are built using Claude Code. So it might also leave them open to exposure and
weaknesses such as this. I love it. Move fast and break things, man. Just get stuff out the door.
If it doesn't work, you'll fix it in hindsight. I mean, sure, this was a big L, but then they
automated the approval process. I saw Boris, the person who created Claude Code, actually address
this publicly on X, and he said it was a single person, it was an honest mistake. They've built
in safeguards so that it can no longer happen, and they're just going to keep moving on.
And I really respect the velocity and the transparency that they've been doing as they just get
all the stuff out the door. Velocity is such a big thing. There's a company with absolutely zero
velocity, perhaps even negative velocity, that we're going to talk about next, which is Microsoft,
who has launched a product using Claude. And when I first read this headline, I was a
little confused, because if my memory serves me correctly, Microsoft is the largest
single shareholder in OpenAI, by an incredibly large margin, at 31%. And that gives them
exclusive access to all of the OpenAI IP. They have internal access to all of the models. So please
explain to me why Microsoft, aside from them not making anything valuable in the world of AI,
why are they choosing to use Claude as their provider of choice instead of ChatGPT? I
have a simple answer for this. It may sound biased, probably because I am a little biased. I think
Claude is just better for certain use cases. I'm not going to say all use cases, but for certain
things. This isn't the first product that they've launched that is powered by Claude. There is one
product that they released two weeks ago called Microsoft Cowork. Does that name sound familiar to
you, Josh? Cowork? Yeah, I've used Cowork in Claude for the last couple of months, actually.
There you go. So they released an entire blog post talking about how they have managed to
automate a bunch of desktop tasks. So basically, Microsoft's feature takes over your desktop and can do a
bunch of things, like operating your browser, opening up files, a bunch of cool stuff like that.
And right at the end of the blog post, in tiny little print, it said, oh, by the way, this is powered
by Anthropic's Claude. And now today, or this week, they announced two new products.
It's called Microsoft Critique and Microsoft Council. It's a deep research feature, which basically
is now state of the art in terms of research. You can ask Microsoft's feature, Critique or Council,
a question, and it does a ton of research. The quality of the answers and outputs
is very, very impressive. You can get access to it via Copilot. But what's interesting is
it's powered by two models, Claude and ChatGPT. And the way that it works is arguably the most
interesting part. They spin up a bunch of instances of Claude and a bunch of instances of ChatGPT
and get them to talk to each other. One produces the research, one reviews
the research, and by the end of the back-and-forth process, you end up with an amazingly
polished research article or paper. And I actually think this is cool, because it shows that
running multiple instances of AI models and agents is actually a better way to build a better
product. You know, my favorite Microsoft news of the week? It just happened on the spaceship,
the Artemis mission. They were live-streaming the comms, and they're using Microsoft
Outlook, and all of the instances crashed. And they couldn't figure out how to get them to work or to
open, and they were trying to remote into them, and the software was falling apart. And I think that's
a testament to where Microsoft is right now in the world of AI. It's just clunky, very slow-moving,
and very corporate, I guess, would be a good way of describing them. Now, the next topic we're going to
talk about, my BS meter is going crazy. I'm like, there's no way this is right. It seems like they're
using a lot of big words. The two words that this company has smashed together are electromagnetic
superintelligence. Generally, when people speak like this, it's veiled in a lot of
complexity that might not be true. Can you explain what on Earth is going on with this
electromagnetism story? Yes. Okay. So this company, or startup, rather, has come out of stealth
claiming to have built a new type of AI model. It is called an electromagnetic AI model. And what it does
is you can describe the type of electromagnetic behavior that you want and it will generate
the physical shapes and structures, a high fidelity design,
that will produce those electromagnetic signals.
Now, you might be asking, who the hell cares?
Why would I care about this particular model?
Well, if you are building antennas,
if you are scaling 5G and 6G,
but most importantly, if you're designing AI chips in particular,
which require microscopic attention to the detailed shapes
that allow these transistors to talk to each other,
you need help from an AI model that can describe all of that.
Now, previous models, and they tested this with Claude Opus 4.6
and GPT-5.4, can't get to the precise design, because they don't actually understand how the physics
of these AI chips and transistors works. Now, this company is claiming that their model does exactly
that, and they had a demo chip which they portrayed in their video or opening announcement demo.
Now, I don't know whether this chip works, but if it does, it might be able to automate the most
important and valuable company in the world right now, Nvidia, who spends all their money
at TSMC designing all these next-gen
GPUs and chips, the work that gives them
a valuation of $4.5 trillion.
Yeah, okay, so I guess it's one of those things
where we'll see. I imagine all of the frontier labs
know that this exists.
Reading the comments section, it appears as if this is
not new research, they've just actually acted on it
and are trying to test it. So one of the things that we
will keep on the watch list of things to follow
along and see how they progress.
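To make "inverse design" concrete: a forward model maps a physical structure to its electromagnetic behavior, and the design problem runs that map backwards. The startup claims a generative model for arbitrary high-fidelity structures; as a heavily simplified sketch of the same idea, here is single-parameter inverse design, inverting the textbook half-wave dipole relation f = c / (2L) by numerical search. The dipole formula is standard physics; everything else is my illustration, not the company's method.

```python
# Hedged toy illustration of inverse design: choose a physical dimension
# that produces a desired electromagnetic behavior. The real product is
# generative over complex shapes; this only inverts the ideal half-wave
# dipole relation f = c / (2L) with a bisection search.

C = 299_792_458.0  # speed of light, m/s

def dipole_resonance_hz(length_m: float) -> float:
    """Forward model: resonant frequency of an ideal half-wave dipole."""
    return C / (2.0 * length_m)

def design_dipole(target_hz: float, lo: float = 0.001, hi: float = 100.0) -> float:
    """Inverse design by bisection: find the antenna length (in meters)
    whose resonance matches target_hz. Frequency falls as length grows,
    so a too-high frequency means the candidate is too short."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if dipole_resonance_hz(mid) > target_hz:
            lo = mid   # too short -> frequency too high -> lengthen
        else:
            hi = mid
    return (lo + hi) / 2.0
```

The hard part the startup claims to solve is that real chip and antenna geometries have no closed-form forward model like this one; their pitch is a learned model accurate enough to invert.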
The next item on our agenda list, and stick with us,
we have three more topics to go,
is that Oracle apparently fired 30,000 employees. What's going on here? That's exactly what it says.
30,000 employees, or between 20,000 and 30,000 employees, received an email two days ago in the morning,
basically saying, sorry, you have been laid off. Now, there's a bunch of speculation as to why this is happening.
Maybe it's something to do with the $300 billion partnership deal that they signed with OpenAI and a bunch of other semiconductor companies.
Maybe it's something to do with that. Maybe they overstepped. Maybe it's something to do
with Stargate's Abilene data center getting shut down
because they weren't able to finance it.
Oracle and Open AI were the main partners in that
along with SoftBank.
I don't know.
How does this look to you, Josh?
Is this a warning signal?
Is the bubble popping?
No.
Every company is grossly overinflated
in terms of how many employees it has.
Like, during COVID in particular,
how many companies were hiring like tens of thousands of people per year?
So many.
And for no reason.
And I think during that time,
when it became okay to do remote work and to switch away from in-person stuff,
it was just very easy to hire all these new people. Interest rates were zero. It was free money.
It was like, why wouldn't we double our workforce and get more productivity? But now the
reality is setting in, and this isn't even a result of AI,
that there's just not a need for that amount of workforce in order to do these things. People are just
getting laid off. And that's kind of what's happening here: there's just absolutely no need to
keep people on the payroll who aren't being super productive. We saw that with
what company laid everyone off? Jack Dorsey's Square. Yeah, Jack Dorsey, Square. They laid off people, and the stock went up
30%. Sorry, they laid off 40% of people, right? And then the stock went up 30%.
I think the stock went up 40%, even, in a day. It was crazy. It was this unbelievable thing.
And I think the market is starting to reward efficiency, right? It's obvious you don't need this
bloated head count. A lot of people are not really
working. They're just getting by on their laptop. A lot of them work remotely. This is just a
testament of this recalibration that's happening. And I don't think you should really credit AI with
too much of this because a lot of these effects started happening before AI was even good at writing
code. Like, AI just got good at writing code like three months ago. It hasn't been that long.
And companies haven't even really implemented this in a meaningful way at scale. So I wouldn't worry too
much about this. Um, can I pull up the Oracle chart for you, just to see
what we've got? I mean, it's not
major, but it's up. It's up on the week.
And this is after like a month of downtrend.
So the market's obviously reading that as positive.
Nice. Good. Well, that's a W.
Another possible W, though I don't know if it's a W or not,
is the Meta glasses news. You know I'm the biggest hater of these glasses.
I'm actually becoming the biggest Meta hater ever.
Because, like, Meta, why am I calling you Meta when you just canceled your metaverse division?
But we have new glasses news. What's going on with these glasses?
Okay, okay. Listen, let's not hype this up too much.
It is an optically friendly pair of their AI glasses.
So typically if you had eyesight issues
and you required prescription glasses,
you couldn't use this.
Well, guess what, guys, now you can.
But what excites me the most
is some of the software updates
included with it,
which involve nutrition tracking, Josh.
Now, you know me, I go to the gym quite a bit,
and I'm trying to figure out my calories
so I can lose a few pounds before the summer.
Okay, like, I'm just being honest.
These glasses supposedly can see the things that you're drinking, that you're eating, that you're cooking,
and measure the amount of calories in the portion in your bowl or that you're making,
and they can tell you when you've been eating too much or when you need to stop.
Now, I don't know if the tech is actually accurate.
I remember there was a company which just got sold to, what's that fitness app, MyFitnessPal?
It got sold to them.
It was a vibe-coded app where you basically show your food to a camera, and it estimates
the calories. It got sold for upwards of $10 billion, I believe, or something crazy like that,
or it was acquired for somewhere between $2 and $10 billion. This is proof that maybe something
like this could be cool in the future. I don't know if I want to wear meta glasses, though,
to get this feature. I would rather just use my phone. I don't believe them. I don't believe
a word that they say. I think the last time they came out and revealed these Meta Ray-Ban
glasses, they went on stage and actually had to demo it in real time in front of everyone.
Every single demo failed. It was a nightmare. And I just can't trust a company that continues to
fail over and over at all these promises that they share. It's like our metaverse promise,
our glasses promise, all these promises are just unfulfilled. And where's the AI? They've spent billions
of dollars, and there is none. So I'm remaining the biggest Meta hater. I am a Zuckerberg lover, though.
I hope he can figure it out. He's an incredible CEO and operator. He's running a company that he
founded, and I am so bullish on it. But man, I just will not be participating in these glasses.
Well, something is working. They said on their last quarterly earnings, they're on track to sell
20 to 30 million units of these this year. I don't know who the hell is buying that, but they have
some kind of a market. And I do believe that glasses will be one of, not the, but one of the main form
factors for AI. So we will see. But on our last news story: Valar Atomics raised a crazy round.
What's happening here? Yes. This is huge. If you made it this far in the episode, chances are you
are a real one. You are an OG. You have been listening to the show. You love this show. And if you've
been around long enough, you were here when we interviewed the CEO of Valar Atomics, Isaiah Taylor.
Since then, the valuation of Valar Atomics has gone up, I think, 27x. And it has come to the point
where they have just concluded their $450 million fundraise at a $2 billion valuation.
Valar Atomics, for those who are not familiar, builds modular nuclear reactors. What you're
seeing on screen is a little image of what this reactor looks like. It's just this tiny little thing
you could put next to a data center and power it using clean nuclear energy in this
very beautiful package. They were the first private company ever to successfully generate
nuclear energy. They actually used a radioactive isotope and generated energy from it. And things
seem to be going very well. So for those of you who haven't seen that episode, it is still timely.
It is still relevant with Isaiah Taylor. You can find it linked in the description below. But
congratulations to the team at Valar, who are building some badass technology and getting rewarded for it.
And I think it's just a nice way to wrap up the week here is with the big win for the people building the frontier of technology and what's possible.
Would it not have been nice, Josh, if we could have invested, or even helped our audience and our listeners invest, back then?
27X?
Yeah, it's pretty good.
Yeah.
According to Claude, at least, the limitless portfolio of when guests have come on versus where they're at now has been pretty good.
We've hit every single one.
All of the companies have done incredibly well.
And yeah, it's really exciting.
Isaiah's a great dude, really ambitious, really smart, really talented, along with the entire
rest of the team.
So a nice win to round up our week with.
And I think that concludes everything for the week.
We've covered a lot of ground this week.
There is so much stuff that happened, as it always does, right?
We are on the frontiers of the hottest industry in the world.
But if you've listened to all of the episodes, if you made it this far through the end
of our final episode, you're caught up.
You can go touch grass this weekend.
You're done.
You can just like wipe your hands clean.
You know everything that's going on.
Share this episode with your friends if you enjoyed it. And yeah, thanks so much for joining us
for another amazing week. I will just say, if you are listening to this and you are a startup
founder roughly valued between $10 million and maybe $100 million and you're going into your
next round, hey, maybe reach out to us. Maybe we'll platform you. You see the effect that we have.
We're also looking for additional sponsors as well, so if this sounds like your shtick, please
reach out. We would love to hear from you.
Awesome. Yeah, well, on that note, thank you guys so much for watching, and we'll catch you guys
next week. See you guys.
