Limitless Podcast - This Week in AI: Anduril's Makes Call of Duty Real Life | ChatGPT Goes Erotic | AI Cures Cancer?
Episode Date: October 17, 2025

This week: Anduril unveiled Eagle Eye, a real-life Call of Duty-style combat headset that gives soldiers x-ray vision, drone control, and 360-degree awareness, turning reality into a video game. OpenAI made waves by allowing adult content in ChatGPT while simultaneously announcing plans to design its own AI chips with Broadcom, signaling full vertical integration. Meanwhile, Google's AI discovered a potential breakthrough in curing cancer by identifying a protein link that boosts immune detection of tumors. To top it off, Google's new Veo 3.1 model now generates stunning 30-second AI videos, marking another giant leap in creative AI.

------
🌌 SORA CODES: DM US A SCREENSHOT ⬇️
https://x.com/LimitlessFT
------
TIMESTAMPS
00:00 Intro to EagleEye
03:55 It Can See Through Walls?
09:26 OpenAI Erotica
16:51 An... Interesting Sora Cameo
18:20 OpenAI's Chip Domination
26:10 Google's Cancer Curing Update
33:25 Veo 3.1 Is Amazing
36:19 Sora Code Giveaway!
------
FOLLOW US
Josh: https://x.com/Josh_Kale
Ejaaz: https://x.com/cryptopunk7213
------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
Transcript
I've been playing Call of Duty probably for the last two decades, and it's one of my favorite games.
I love it. And for that entire time, video game developers have been spending all of their time and resources,
trying to make these video games more immersive, feel much more like real life.
For the first time, we've taken all those learnings and we've actually applied it to real life,
where now it's the opposite way around: reality looks like a video game.
And that's what Anduril has released this week with the Eagle Eye.
We're going to talk all about that.
And we're also going to mention a few other topics that I think are interesting this week,
starting with Google, who is now curing cancer using AI models. It's pretty incredible stuff.
Open AI is getting into erotica while also making their own chips vertically integrated into
their systems to probably just get to like a couple billion users and take over the world.
So it is a pretty eventful week. I want to start with Eagle Eye because that is the most exciting
to me. Ejaaz, can you walk us through like what Eagle Eye is? I'm looking at this picture on
screen of a war fighter that looks kind of almost like a funny anime thing because it has like
ears on its head. But like what's going on with Eagle Eye and Anduril's new product?
So for those of you who are just listening, I pulled up the announcement tweet from Anduril
of this new next generation warfare helmet. And it looks like something straight out of the future,
Josh. As you said, for any of you who have played Call of Duty before,
especially with some of the newer versions of the game, you are able to see behind walls.
You are able to neutralize enemies from afar. You are able to control drones.
from your single helmet.
You can see through walls, all that kind of stuff.
That is now real.
That is now reality.
I'm going to show you a quick video of this happening right here.
What you're watching right now on your screen is not a game, even though it looks like one.
Even though it looks like you know, you've got a little radar in the bottom left.
You now have x-ray visual or night vision where you can see kind of like a supposed
enemy from afar.
This is just insane and it looks like a game.
And I think to kind of like expand on your earlier point, Josh,
video games and reality are now kind of one and the same.
And I think it's super important to, you know,
not only move warfare into like the modern age,
but to also prove that games maybe had it right to begin with.
Maybe it's just a better UI and now reality is now adopting that.
Yeah, I think that's one of my favorite parts of this is that,
and we often see this with sci-fi,
where it predicts the future in some form. Video games have kind of predicted the future in some
form too. And in a way, they've been building this user interface for two decades based on user
feedback. Granted, it's been in a video game, but it's really interesting to see Meta taking that
feedback, taking this interface, and actually applying it to the real world. And what's so cool is
you actually get these abilities because of this new cutting edge technology from Anduril that was
actually kind of happening with Meta. So if you remember, Ejaaz, Palmer Luckey was the founder of
Oculus, which was the VR headset company. They got sold to Meta. He left. But recently, they came
together in a partnership to build this Eagle Eye technology.
And some of the cool stuff that's in here, they have like all these different sensors
that allow you to see through walls.
So you have, I think, like thermal sensors and range sensors.
And they have all these really amazing features.
One that we saw earlier was this thing that they call ghosting.
So when a target goes behind cover, like when an enemy is walking behind cover, the system
can actually show like a skeletal outline or this like bounding box around the ghost target
because it has the ability to see through surfaces.
So in video games, I mean, we call this wall hacks.
You get wall hacks in real life.
But it's like this really unbelievably sophisticated headset that I would imagine if it was
available for consumers would probably be like the best headset on the market by far.
The technology that they packed into this is like pretty remarkable.
And the fact that it's all up to military grade is like they did a really amazing job on this.
I just want to go over some crazy things that this helmet can do, Josh.
So you mentioned a drone, right, that you can use to either help you see behind walls or see from afar,
but also like you can get the drone to do things for you.
So in this first feature that I'm showing on my screen here, you have a soldier that's wearing this helmet
and he has kind of like a radar in the bottom left hand of his screen, which is telling him, you know,
making sure that he's going in the right direction.
But on the right, you can see that there are notifications coming up from something called ghost.
That is the name of the drone that his helmet is connected to.
And what you can't see here is that the soldier or the troop that's wearing this helmet is signaling to the drone to neutralize an enemy from afar.
This is all, like, if you can see the bottom right hand of the screen right now, that's it.
The target is neutralized.
Literally something out of a game.
So I think like the drone maybe shot a missile down or whatever that might be, but super crazy to see this happen in real life or even being capable in real life.
The second really cool feature is you now have eyes in the back of your head, Josh.
you can see 360 vision at any time so that you can't be taken by surprise.
And whilst it may seem like something like super simple,
I can only imagine this changes the way that war is carried out in the first place.
Yeah, and we have a really cool demo to show, right?
Can we show like the longer demo?
Because this, when I saw it, it was really amazing.
Because what we're seeing here is exactly what the soldier sees.
You could see them, they're flipping through different like infrared scanners,
thermal scanners that are just built right into the headset
device. You can see when they walk behind the wall. You can see the skeletons and you could see the
enemy target getting taken down. And I mean, what's amazing is, I mean, at the beginning of the last
video even, you see this rearview mirror where, like when you're looking forward, it looks like you're in a
car looking in the rearview mirror. It's just really designed to be optimal, I think, to get out of
your way and be predictive. And this is something that's enabled only by AI because a lot of these
things are predictive. Like, okay, a drone sees someone. It's predicting that you want to take
that person out, it's predicting that it's an enemy. A lot of AI integration into this headset
is really what makes it so special and predicting what you want to do, predicting what you want
the system to do. I think that's one of the most interesting parts. And also just the form factor
of it. Like, this thing looks so freaking cool, Ejaaz. If it wasn't like a war machine, I would
really want one at home just to have because it looks cool. Clearly, the quality is great,
the resolution is great. But also the integrations are really cool. The fact that it can sync up with a
drone that is flying in the air.
I mean, like, can we just take a look at this for a second, Josh?
Like, okay, so like, I've zoomed into the image here.
This, I see bunny ears.
I was going to point it out.
I was going to be like, it looks cool, but whoa, why are there ears on this?
It looks kind of ridiculous.
Okay, but you've got three of probably like the most powerful lenses, which I'm guessing
gives you access to the infrared display or the night vision or whatever that might be.
You've then got, like, what are in the bunny ears here?
Like, am I going crazy? Are these, like, sensors that can kind of like detect your surroundings?
Oh, I'm guessing it allows you to see out of the back of your head. That's the thing that gives you the
rearview vision. Is that right? I'm not sure. We've got to get the design specs so we can see.
Hang on a second, Josh. We're missing the most important thing. The coolest, nerdiest thing.
Yes. Tell me about this. Okay. I'm not going to attempt to translate this post,
but I'm going to translate what the image is showing. So what we're seeing, if you scrolled on just a little bit,
Oh, there we go.
If you scroll down just a little bit, what they did is they kind of took a page out of Apple's Vision Pro Playbook, where the battery of these things is very heavy.
So what they did is they just removed it out of the system and they turned it into an armor plate.
And that plate actually holds all the energy, the battery.
And that plate is made of this really cool new thing called a ceramic solid state battery, I believe is what it's called.
Ceramic solid state battery is interesting because it doesn't blow up.
it's a battery that uses like the solid ceramic electrolyte instead of a liquid gel electrolyte.
So traditionally batteries are wrapped up in this like really tight coil.
And in the case that they get penetrated, they actually overheat and explode.
And when you are taking on bullet fire, you do not want your battery to overheat and explode.
So they use this really cool novel technology that allows the armor plate to double as an armor plate, but also a battery,
that when penetrated doesn't actually explode.
So it's this really beautifully elegant system, really built for
military use cases. And what's cool about this system is, the whole facial
front is covered. The previous ones that we saw, they just have glasses at the interface. This one
has a full facial cover. So it doesn't look like there are holes for eyes. There's just cameras.
So I'm wondering if this is like a totally augmented headset variant of Eagle Eye where you just
place on this headset and all you're seeing are screens. It's really like, this thing is badass.
I know it's like military, so we're not going to get too many details and secrets, but like, man,
We'd love to get a demo of that bad boy.
I'm not even going to try and attempt to relate to what it's like to be in a warfare scenario.
But if I saw some guy wearing this helmet coming towards me, I would be pretty nervous.
I mean, he looks like some kind of like robot.
And we're probably not too far off from that, right?
We start off with humans and robots.
And now eventually robots will wear whatever this bulletproof battery helmet thing,
contraption is.
But Josh, there were equal strides made
by other top companies in the AI space this week.
Most notably from Sam Altman at OpenAI,
he didn't announce any cutting edge warfare type weaponry.
He didn't announce that he's cured cancer,
but what he did announce was that ChatGPT is going to become unrestricted
and allow erotica,
which means that you, me, and anyone else that has access to ChatGPT
can engage in adult content.
I have mixed opinions on this.
What you're seeing on the screen right now is his message announcing this,
which actually kind of came as like an afterthought.
I'm highlighting the last sentence where he says,
in December as we roll out age-gating more fully,
and as part of our "treat adult users like adults" principle,
we will allow even more like erotica for verified adults.
Now, there's been a mixture of reactions about this news,
one stronger than the other.
The main one being this is going to be terrible for human society.
It's just trying to suck attention and capture people's eyeballs, eventually to feed them adverts so that they never click away.
And this is just a classic, you know, Facebook trying to get all your attention, what every other social media companies tried to do for the last couple of decades.
So overall, very bearish opinions.
Josh, do you have a different take on this or do you see something that I currently don't?
It's funny.
We were talking to look before the show.
and we were talking about this idea that once you reach a certain level of power and domination in the AI space,
you start to change your values and you could see the values shifting in real time.
And I mean, one of the examples we were using is like Dario and Anthropic,
they're kind of losing the AI race in a way where they have a great coding model,
but they don't really, they're not at the leading edge of really anything else.
And Dario is going on stage and he's talking about how, well, AI is like kind of dangerous and we got to be more careful
and open source AI actually isn't a good thing
and kind of contradicting a lot of the popular thought
of what really is true.
And then in the case of Sam,
well, kind of the opposite is true,
where he was very high on his moral horse
until they got 800 million weekly active users,
until they started generating a ton of revenue.
Now it's kind of like, hmm, maybe we will roll out
some little more edgy features.
And this comes after a time
where he was very critical of Elon and XAI.
releasing the Ani feature, like the companions within the applications. And I think we see this a lot
with open source models too, where everyone wants an open source model and everyone's supportive of
open source until you get a leading frontier model. And then suddenly all the doors slam shut and you
are like holding that super tight to your chest because it is so valuable and you become emperor
of the AI world. And that holds a lot of value. So what we're seeing is this kind of like
slow degradation of the principles that they've held themselves to for so
long. And this could be an overreaction because Sam did have a follow-up post kind of like trying
to temper expectations around this. But it's interesting to see them moving into this space
when I very much thought they would be strongly against it. I have a slightly different take
to yours, Josh. I think that Sam is just trying to make money. And I think he originally sold the
vision of open AI around this concept of AGI, artificial general intelligence. This is this form of
intelligence that can find the cure to cancer, that can help take us to space, and a number of
other things that humans haven't been capable of doing for the last bunch of decades,
right? And I think when he sold this vision, he used it to raise a bunch of money. Why did you want to
raise a bunch of money? Well, he needs to buy more compute, buy more GPUs, scale out data centers.
These are all really costly things to do. But obviously, that takes time. That's going to take like a decade
to like get to the amount of energy that we need to train this superintelligence. So in the
meantime, how are you making money?
Is it going to be from ChatGPT subscriptions?
Because they just released new figures this week.
Although they have 800 million weekly active users,
only 4% of those users actually pay for ChatGPT.
And of those 4%, I think only around 10% pay for the pro version.
So you basically have very minimal paying users of the 800 million active users,
and most of them are paying 20 bucks a month.
That's not going to cover your cost.
I think it's something like what we were talking about earlier, one in every $3 spent.
Was it three dollars?
I believe the number was that one in every $3 spent was revenue.
So for every $3 spent, they earn $1 in revenue.
And the average user is spending $27 per month.
Exactly.
So Open AI is running a loss.
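As a back-of-envelope sketch, those figures can be combined like this. It's a toy calculation using only the rough numbers quoted in the conversation (800 million weekly users, roughly 4% paying, about 10% of payers on a $200 Pro tier, and $1 of revenue for every $3 spent), not official OpenAI disclosures:

```python
# Back-of-envelope sketch of the ChatGPT unit economics discussed above.
# All inputs are the rough figures quoted in the conversation, not
# official OpenAI disclosures.

weekly_active_users = 800_000_000

payers = weekly_active_users * 4 // 100   # ~4% of users pay at all
pro_users = payers * 10 // 100            # ~10% of payers are on Pro ($200/mo)
plus_users = payers - pro_users           # the rest pay ~$20/mo

monthly_revenue = plus_users * 20 + pro_users * 200

# "One in every $3 spent is revenue" => spend is roughly 3x revenue.
monthly_spend = monthly_revenue * 3
monthly_loss = monthly_spend - monthly_revenue

print(f"paying users:         {payers:,}")
print(f"subscription revenue: ${monthly_revenue / 1e9:.2f}B per month")
print(f"implied spend:        ${monthly_spend / 1e9:.2f}B per month")
print(f"implied loss:         ${monthly_loss / 1e9:.2f}B per month")
```

On those assumptions, subscriptions bring in on the order of $1.2 billion a month against roughly $3.6 billion of spend, which is the "running a loss" picture being described. The real mix of tiers, enterprise deals, and API revenue is of course messier.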
And I think he wants to close the gap between the amount that they're spending to, you know,
fuel all of these AI searches that everyone's making versus, you know, trying to build the
grand vision of AGI. So I just think this is a ploy for Sam, and also like XAI, Elon,
like he released Grok avatars, which kind of do a similar soft, explicit type of
chat. I think it's just to make money, to engage more people. My third view is I think this is
a ploy to try and get as much data on the individual as they can, Josh. I think that if they can
engage people in some kind of erotica, that individual will feel more forthcoming about personal
information that they wouldn't give to any other web application. Yeah, I think, you know,
I would love to be a fly on the wall in the conversations that were had around that decision.
Because, I mean, when you think about it, essentially what they're trying to do is give these language
models infinite access to our limbic system. It's like this feels like a civilizational decision,
like a billion people plus are going to be affected by this. And it's not necessarily a product one.
It's like personalized erotica. It basically feels like a Trojan horse into
emotional stimulation. And like we already had the visual desire stuff that which is on
Instagram. And then we have like this parasocial relationship with streamers. But this adds
this very immersive desire that like reacts and learns and remembers and adapts to all your
preferences. So, like, what's funny, I think 30% was the number of books that are sold that are
romance novels. And in a world in which you can hyper-customize that to all of your desires and
fixations on a, like, regular basis in real time is, it's a big deal. So there's large implications
for this outside of just making money, I think. But I do agree in the sense that this was very much
a profit-driven decision. Sam has spoken about this briefly where he's like, yeah, we actually just need to
make money. And also that he is now no longer fully against an advertisement-based model inside of
ChatGPT. So there's a lot of changes happening, a lot of shifting of the goalposts and Overton
windows here. And I'm looking at another picture on your screen. Why do we have this up here?
Well, we were discussing this before we started recording this episode.
This is from an engineer that has worked at Open AI for a while now, who's responsible for
their SORA product. And we've mentioned that a bunch of times on this show; you should watch our
episode that is dedicated to it, came out last week. But he basically put out a tweet that was now
deleted where it's captioned, Bonnie Blue, who is an adult content creator, made a public cameo
on SORA. Go generate funny videos. And if you don't know what the cameo feature is on their SORA app,
it allows you to basically create an AI generated video with your friends, with that person or whoever
is tagged, whoever's allowing you to make a cameo with them. And so it kind of broached this uncomfortable
topic of discussion where it's like, should anyone, including like, kids, be able to do this? It's not age-gated,
right? So their app isn't age-gated. Should they be allowed to create cameos with these adult
content creators? And obviously, the answer is like, no. So, you know, he deleted that tweet,
and presumably you're not allowed to even create some kind of explicit content. So it calls into
question the genuineness that Sam and Open AI as a company has towards, you know, keep on repeating
their vision of like, hey, I want to build AGI and cure cancer and all that kind of
stuff, but then they're releasing products like this, which kind of like hijack your
limbic system, as you said, Josh. So my mind's kind of in this like purgatory state where I don't
know whether to believe some. I want to believe them. I want to believe their genuine intention,
but their actions aren't really speaking for them. Yeah. Well, I guess we can believe one intention
is that they are dead set on building AGI because we have more news this week that they are
vertically integrating their chips into their system. Open AI is making their own chips.
Can you just explain the deal? This was a big deal. Broadcom stock went up a ton. There's a lot going on here.
Yeah. So if you've listened to the show at all, you know that Nvidia is the top dog when it comes to designing and building chips. These are the hardware that you need to train next generation AI models. And OpenAI announced this week that they're going to create their own. And they've partnered with this company called Broadcom to deploy 10 gigawatts worth of chips designed by OpenAI. That is a lot of energy.
So building out their own hardware, which is a very ambitious thing to do.
Josh, you mentioned that this is their first step towards vertically integrating.
And I want to jump into that.
But before we do, I just want to highlight a few things that happened from this announcement.
Number one, classic.
This is the rule.
This is a law of physics at this point.
If you announce a partnership with OpenAI, your stock will pump a minimum of 20% on the
announcement. Broadcom's chart,
I don't know if you, for those of you who can't see this,
is a literal straight green lightsaber
up. Like, it is
up. Like, it jumped like,
what was this? My God, 30 bucks in a matter of seconds.
That's equivalent to $200 billion.
And the deal was worth significantly less than that.
So it's like infinite money glitch. Announce
the deal, print a giant candle, make more
than the deal is worth. Yeah.
And the deal is therefore free.
That is just insane. So the key
details from this is it's a multi-year strategic collaboration for custom AI chip development.
OpenAI is going to be designing its own accelerators. That's what they refer to GPUs as.
And it's built entirely using Broadcom's Ethernet framework to help scale this all out.
So Broadcom has all the architecture and kind of like instruments to allow Open AI to scale
this chip manufacturing to the extent that they want to. And it's 10 gigawatts of power.
One thing that I found really interesting about this, Josh, and they spoke about it a bit on
their kind of like announcement episode, because OpenAI has their own podcast where they talk about
these kinds of things, is that Greg, the president of OpenAI, said OpenAI's models themselves
helped design this chip. That I found super cool. Like they're actually using AI to help them design
the chip. But then you might be asking like why, why design their own chip? Like, why not just
use Nvidia's? The argument that they gave on this episode is that whilst Nvidia's GPUs are great,
they're great for general purpose stuff, but there are some niche use cases
that OpenAI's models really specialize in that those chips don't really serve. It becomes really
inefficient. So if Open AI is able to design their own chips, it increases efficiency,
and it really helps maximize the use of their chips. Which brings me to like the overarching point
here, which is for anyone who's building frontier AI models, your singular goal should be to
reduce the cost of intelligence per watt. It's no use creating superintelligence
if only like five people who are
extremely wealthy can use it.
You need everyone to be able to access this stuff
so that they can go and build cool stuff.
They can go and build cool stuff.
They can go and rebuild this world
into the future that we want it to be.
We're not going to be able to do that
if it's gate kept, right?
There are a number of features on Open AI right now
that a lot of people can't get access to
because you need a pro subscription, right?
You need to pay 200 bucks a month.
That just doesn't work at scale.
And Greg makes the point that in order to get there,
like let's say that there's 10 billion people
on Earth, you need 10 billion GPUs. You need a single GPU per person to be dedicated to that
person. It's going to take us, like, it's impossible to get that right now at this scale. So this is an
attempt for OpenAI to get that. Yeah, cost per watt is the benchmark here. We're trying to just lower
that price as much as possible. And vertical integration is how you do that. There's two points I want to
make here. One is that this is an incremental 10 gigawatts of compute, meaning this is stacked on top of
all of the other deals that they've been doing.
So you have like,
I forget all the numbers,
but you have like 10 from Nvidia,
10 from AMD,
10 from Oracle.
Like,
they're just kind of packing this all on
and they're,
they're going to attempt at least to build
this unbelievably large super cluster.
Oh, here's the totals. 10 gigawatts was Nvidia, 10 gigawatts Broadcom, 6 for AMD. And 26 gigawatts in total until now.
That is just like this gigantic number.
So one, like, okay, let's get going. Like we haven't even hit a full gigawatt running right now.
Granted, Open AI has two gigawatts of total compute, but not a single gigawatt in a single coherent cluster.
The second thing is the power of vertical integration.
This is obviously a no-brainer, and they probably stand to benefit the most from this decision
and this deal more so than any other deal that they're doing.
And there's a few different examples we can reference here.
One of them that I love the most that we always talk about is Apple, and Apple making the decision
to move from Intel to the M-series chip, their own in-house chip.
And that resulted in not only cost per unit of compute going down significantly, because it was much more energy efficient, but also the
amount of compute that they were able to integrate into the machine doubled, tripled, quadrupled
year over a year every single year, because you're able to optimize that chip for your specific
hardware and software stack. With Nvidia chips, they're amazing. They're the best in class, but they
are not entirely optimized, and there's a lot of loss happening from electron in to intelligence
out. And with the OpenAI chip design, they get the opportunity to custom build this from
the silicon all the way up, or even from the transistor, all the way up through to the final
output token. And there is so much efficiency to be unlocked along the way that I imagine
if they actually do get 10 gigawatts of compute on their own chips, running their own software
and hardware stack, that will be more efficient, more effective than all of the other
GPUs that they have coherently training combined. It's a really big deal. We saw it again with
Tesla who vertically integrates the entire supply chain. It's just in order to win this race,
you need to get, just like you said, the cost per watt down. And that is the single biggest way
to do it. So very bullish on this decision. It's fun to see them announce this after announcing everyone
else's deal. Like they're just building with everybody and everyone's stocks are going up. So I guess my
challenge to you, Ejaaz, and the listeners as well, is please send your suggestions for the
next roulette that we're going to hit next week. It's kind of like, which dart can you
throw at the right stock that OpenAI is going to partner with to double in price and double your
profits? That's the game that I want to start playing. But yeah, this is amazing news. This is super
exciting. Okay. So if I had to throw a dart right now, Josh, it wouldn't be OpenAI. It wouldn't be
XAI, though I love both of those companies. It would be Google. And let me kind of like take you through
that. So one, you just mentioned vertical integration is super important, right? And like, you know,
it could unlock capabilities that you've never seen before. Google's kind of been doing that at the
GPU level, right? Like, they've trained all their AI models on something that they call TPUs, right? So it's all
in-house. They haven't been relying on Nvidia. And, you know, Google is a huge company and their models are
at the frontier. They're super, super cool. And what they've been able to do with this vertical integration
is find new novel ways of training their model.
So less energy, less compute,
but still the same standard of models
that could compete with the top dogs, right,
that are trained on an Nvidia chip.
So they've proven that,
but that's not what I want to talk to you
about Google and why I want to throw a dart in that direction.
There was some really cool news that broke this week, Josh,
which is Google's science department,
their AI science department,
released a new model called Cell2Sentence.
It's a 27 billion parameter model
which is trained on a decent amount of scientific data.
I say decent amount because it was only one billion tokens,
which if you and I know anything about this,
is not a lot.
You've got most models trained on like, you know, hundreds of billions of tokens, right?
So one billion worth of tokens, but it did something really cool, Josh.
Google's new AI science model, cell to sentence,
found a new potential cure for cancer.
This is how it works.
They identified a compound, a drug,
called silmitasertib. I definitely just butchered that pronunciation, but that's the name of the drug.
And what it does is it helps the body, your body, your immune system detect something called cold
tumors. Cold tumors are basically cancerous tumors that go undetected by the body's
immune system, which therefore allows the cancer to spread more. That's generally how cancer works.
But here's the catch. Scientists already knew about this compound. They knew that it, you know,
can help identify these cold tumors.
But what they didn't realize
and what Google's new AI model realized
was that it boosts the production
of a second protein called MHC-I,
which basically acts as like a flag
or a hand waving to your immune system
saying, hey, this is cancer, you need to kill it.
And the reason why this is so important
is because, well, one, humans hadn't even thought about this.
There were no scientific studies on this.
No one had made that connection.
And so the scientists were like, okay, this is a cool hypothesis.
Let me test it with real cancer cells.
And Josh, guess what happened?
It improved.
Yeah, it improved the body's ability to kill that cancer cell.
Can you imagine if there was a way to kind of apply this to a person or patient that actually suffers from cancer?
You could have their immune system just attack the cancer itself.
I just thought that this was super cool and more resemblance of AGI.
Like, meanwhile, you have like OpenAI building out, you know, adult content creators for ChatGPT.
And then you have Google here that are quietly, you know, vertically integrating their entire stack,
training their own AI models and finding the cure to cancer, maybe.
Yeah, the new science thing is really exciting on the path to AGI, because there's a
world in which, by our old benchmarks, this would have very clearly counted as AGI.
Creating new science would have qualified as AGI.
And this very much feels like it is creating some sort of new science.
I want to try to synthesize what you just said into a shorter little, like, because I've been trying to
to understand it myself. So I think I have a description. You need to let me know if this is right or wrong.
But basically, they created this thing called C2S-Scale. You can think of it like ChatGPT,
but instead of learning human language, it learns the language of cells. So it reads how genes and
proteins talk to each other inside your body. Then they asked it a question: how can we make these cold
tumors that you mentioned, the ones the immune system can't see, turn hot so your body can actually
attack them? And then what they did, which is super cool, is it ran virtual tests on 4,000 drugs
in two different settings, one with and one without immune signals, and found one that worked
only when those immune cues were already there, which is perfect because it boosts the immune
system's response only when it's needed. So they tested this in a lab, and it worked amazingly: a 50%
increase in this immune visibility. And that is the breakthrough, this remarkable
achievement that we gained a 50% increase from a model. How small did you say it was again?
This is like a microscopic model.
27 billion parameters.
We're at like multi-trillion parameter count models.
And with 27 billion, we were able to get a 50% increase in something that's critical
to saving a lot of people's lives.
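The dual-context screen described above, scoring each drug with and without immune signals and keeping only the ones that work when immune cues are present, can be sketched in a few lines. Everything here (the scoring function, the drug names, and the threshold) is a made-up illustration of the idea, not Google's actual pipeline:

```python
# Hypothetical sketch of a dual-context virtual drug screen.
# The model, drugs, and scores below are invented for illustration.

def dual_context_screen(drugs, predict_mhc1_boost, threshold=0.5):
    """Keep drugs that boost MHC-I only when immune signals are present."""
    hits = []
    for drug in drugs:
        with_signal = predict_mhc1_boost(drug, immune_context=True)
        without_signal = predict_mhc1_boost(drug, immune_context=False)
        # A conditional amplifier: strong effect with immune cues,
        # little to no effect without them.
        if with_signal >= threshold and without_signal < threshold:
            hits.append(drug)
    return hits

# Toy scoring table standing in for the 27B-parameter model's predictions.
scores = {
    ("drug_a", True): 0.8, ("drug_a", False): 0.1,  # conditional hit
    ("drug_b", True): 0.9, ("drug_b", False): 0.9,  # works everywhere
    ("drug_c", True): 0.2, ("drug_c", False): 0.1,  # no effect
}

def toy_predictor(drug, immune_context):
    return scores[(drug, immune_context)]

print(dual_context_screen(["drug_a", "drug_b", "drug_c"], toy_predictor))
# ['drug_a']
```

The point of the two settings is exactly what Josh says: you want the drug that flips a cold tumor hot only when immune cues are already there, not one that fires everywhere.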
So this is like unbelievable new tech.
Of course, coming out of Google, they are the rock stars when it comes to developing new science.
And just like, hey, shout out to the team.
That's awesome.
Yeah.
I found this tweet pretty funny.
This is great.
Packy McCormick goes, everyone else: behold, an AI. Google DeepMind: protein folding, weather prediction, new materials, and now
an AI that can make its own cancer discoveries. He's making the point, obviously, that Google's
AI models extend well beyond the chatbot arena and they're actually making real impact in the
world in this case, in the case of science and in the case of new discoveries. Just super cool and
inspiring all round. I just wanted to zoom out quickly and make a wider point around Google.
That's, again, why I'm throwing a dart for Google next week using your prompt, Josh,
which is that Google is slowly developing this trend of building AI that can self-iterate,
that can self-improve.
And there's a distinction to make here.
When you look at the models that OpenAI is creating, that xAI is creating, they're so
intelligent because they're shoving a bunch of compute that way.
They're spending a gargantuan amount of money to train these new models
on datasets and parameters that they design, right?
We just mentioned that, you know, 27 billion parameter model
is quite small compared to the trillion parameter models
that these next-gen models are being trained on.
But for Google, they're taking an alternative route here,
which they're repeatedly highlighting,
which is: we'll create a smaller model,
but we'll make the model able to learn by itself
from its own mistakes, from its own logic,
and apply the lessons it learns from reasoning through other problems
to new problems that it faces.
So instead of relying on some kind of data that you provided it, training it to become a genius at that one thing, it just learns by itself,
similar to how a human does. And I just thought that was worth highlighting. Google has
always taken an alternative approach. And they had a really bumpy start, Josh. I remember when
they released their first model. What was it called again? It was, oh man. Do you remember the name?
I don't remember the name. Damn. Okay, whatever. It was called something ridiculous. And that defined an
era where you could type in, hey, show me a picture of a tree and it would show you a picture
of a door. Like, it was that inaccurate. Fast forward to now, and they have Veo 3, a text-to-video
model. They have one of the best chatbot models ever. And now they have a model that's
creating new scientific discoveries, just an amazing 180 on their entire effort here.
It was called... Bard. It was Bard. That's what the old model was called. Google Bard. I mean,
it's funny. They're kind of, in a way, they're not being as loud about it, but they're doing what
Open AI just announced this week, which is the whole vertical integration thing. Like they have
their TPUs, which are their tensor processing units. So they're built just for AI math. They're
not like a GPU that's good for general purpose. These are built specifically for AI workloads. They have all
the resources in the world. They have all of the greatest minds and developers and people working on
this. So they've really, they've turned it around. And they're focusing on interesting things,
which I really admire. Like seeing all these science breakthroughs every week, it's really cool.
And they're seemingly the only ones that are doing this currently. So yeah.
Big ups to Google. Is that everything? Have we covered all these this week? We're already here,
so we might as well finish it off. There's one more update, really cool one from Google.
I mentioned Veo 3, which is their next-gen text-to-video model. What you're seeing on the screen
is all AI generated, despite it looking like a Hollywood movie production. This is all generated
by AI. It is not real. They released their latest version, Veo 3.1, and I'm going to briefly go over
some of the hottest features that you can do with this new model.
Number one: one of the big bits of feedback that they got from users of Veo 3
was that the videos that were generated were just so short.
It was like 10 seconds long.
Well, now you can generate up to 30 seconds of continuous video.
If you don't think that that's cool, it is extremely cool
because this stuff used to be really, really expensive,
and somehow they got the cost of generation really, really minimal, which is great.
So you can have much longer continuous forms of video.
Number two, Josh, you can provide it with three reference images.
So it could be a picture of my mom.
I could give it a picture of a pet that I don't have.
And I could say, hey, create a fun little scenario where we're sitting in the living room and I'm petting my dog.
And it would do that with those images as a reference.
The third thing that is really cool is you can take one scene and extend it into a second scene.
So let's say you generated 30 seconds of a clip and you were like,
you know what? I wish I could see what happens next. You can just put it into Veo 3.1 and it can extend
that. So theoretically, I saw a tweet about this, if you spend around, I think
it's like 5,000 bucks, you can create a feature-length film, compared to
the millions that you normally would spend. That's pretty amazing. And I think it was Wander.
There was a travel company that actually had their AI ad published and it looks great. I think one of
the biggest things that we're seeing breakthroughs with recently is this like,
continuity, the storyboards, which actually Sora just rolled out 12 hours ago, FYI.
Storyboards are available to web and Pro users. And with Sora, users can now generate
videos up to 15 seconds long, and Pro users can generate up to 25 seconds long. So Veo 3.1
and Sora now both have these abilities to create this continuity throughout the videos.
And yeah, we're seeing the example that you just pulled up here, which is a really well-done
commercial that actually went live on the Away page, that they
seemingly spent a lot of money on, the Away company at least, not the people who produced it,
because it was all generated by AI. And it just looks increasingly better and better.
You get the character continuity. You get the design style continuity throughout the whole thing.
And again, this is the worst it's ever going to look and it's pretty damn good.
So, yeah, good news on the video front. Very bullish, slightly nervous. This is very powerful,
very capable. But nonetheless, it is happening and it is happening very, very quickly.
That is it for this week's AI Roundup, folks. We hope you enjoyed it. Our goal with Limitless,
as we've said right from the start, is to bring you the cutting-edge hottest news, in as much
depth and with as much insightful commentary as we can give it. That is Josh's and my goal, and to interview
the coolest people as well that gives us a lot of energy. Keep giving the feedback that you guys are
giving us. We mentioned this on a previous episode, but a special treat for those of you who have
listened this far, our friends at OpenAI blessed us with around 300, I say 300, it was 500
yesterday, because 200 of you have already taken them, Sora 2 codes. So if you want a
Sora code, we just ask for some humble requests. Please like, please subscribe on whatever
platform you're on, and give us a rating. That could be a three-star rating. That could be a
five-star rating. I would more likely give you a code if it is a five-star rating. Let us know
some feedback and DM us. Send us some proof and you'll have a code
on us. You can find us on X, or all popular social media platforms. And thank you for listening.
We will see you on the next one. See you next week. Been a good week of AI.
