Limitless Podcast - Will Google's Veo 3 Destroy Hollywood?! | Everything Announced At Google I/O
Episode Date: May 22, 2025Google’s Veo 3 is spitting out blockbuster-grade videos with actors that talk, while Tesla’s Optimus learns new dance moves by binge-watching YouTube. In this week’s AI Rollup we race t...hrough the avalanche of announcements from Google I/O, OpenAI, Meta and Microsoft and ask: are we witnessing the dawn of an endless creativity boom…or turbo-charged P-Doom? Buckle up. ------TIMESTAMPS00:00:00 Hollywood In a Prompt00:11:38 Google IO's Wild Announcements00:19:08 Google's AI Search00:22:04 Startup Destruction00:30:35 Monopoly Risks00:36:17 Early AGI Progress?00:45:14 Percentage of Doom?00:49:19 Meta's Big Breakthrough00:57:03 Humanoid Robots Learn From Youtube01:02:50 EMP Grenades01:06:40 You Should Threaten ChatGPT------RESOURCESDavid: https://x.com/trustlessstateJosh: https://x.com/Josh_KaleEjaaz: https://x.com/cryptopunk7213------Not financial or tax advice. See our investment disclosures here:https://www.bankless.com/disclosures
Transcript
Discussion (0)
A little over two years ago, this AI Will Smith eating spaghetti video came out.
And everyone was just amazed that AI could make this video in the first place.
There's this one problem.
This video is just completely ridiculous.
It just doesn't make any sense.
I think we all know.
It's a cursed video.
It's Will Smith eating spaghetti in the weirdest possible way.
But nonetheless, two years ago, this was just astounding that AI could produce that.
And two years and one month later, we are getting the introduction of
V-O-3 out of Google DeepMind, which is doing the same thing, making video, but this looks
much different. This looks incredibly real. And I would also say in addition to how incredibly
real it is, the sound that is coming with these videos is incredibly realistic. There's
background audio, there's changes on the sound depending on what the microphone is. And so all
of a sudden, we go from this absolutely insane AI Will Smith spaghetti video to things, to something
that looks pretty competitive with Hollywood. And I think if we all went back to two years ago,
everyone was kind of saying like, oh, this Will Smith AI spaghetti video is insane, but come on,
just like give it two years and this will be completely different. And now here we are two years later.
And we have some video that, again, seems to be pretty competitive with Hollywood.
Josh, when you saw this, what was your take? I mean, so Josh and I were discussing this before we
came on the show, David. This is basically Hollywood.
grade movie production at your fingertips. You, Josh, myself, anyone with access to a computer now
has access to create the wildest, highest grade quality videos at the face. I mean, look at these
examples here, right? You could go full sci-fi or you could create some kind of indie pick that you see on
cinemas locally to you, right? Now, let's set some context here. As you showed, back in the day
about like, whatever, a year ago, the quality of these video production models was so janky,
right? Maybe you had a couple extra fingers on the humans that you would create, or the audio
would just sound so super robotic. With this new release of V-O-3, it combines and fine-tunes all of
these different things into like a seamless little tool, a seamless little product. And it really
pays attention to things like character consistency. So let's say you generated a video of, you know,
this person that's speaking at a comedy skit that we're seeing on our screen right now, right?
And you wanted to create a different comedy skit of him telling a different joke.
He would look exactly the same.
You could change the color of his shirt if you like.
But if you wanted him to keep the same shirt or have sweaty armpits because he's nervous performing to the audience, you could do that.
And I think this kind of like holistic movie production center just kind of blows the Adobe Photoshop error out of the window.
Josh, I see a massive grin on your face.
What are you thinking?
This is so cool.
and it's cool in the sense that it's better and it creates better videos than have ever been generated before.
But if that was the only thing it did, it would not be nearly as interesting as I think it is.
And that's because for the first time ever, video has audio and it generates audio.
And it has this whole real world physics engine built into it that can generate this audio.
Not only does it generate contextual audio, but the people can talk.
And David, actually, I would love, if you could go back to the last one where the guy was doing stand-up on stage.
I just want to take eight seconds and pause and just let the listener hear how the voice sounds because it's a funny joke and it is a person that looks real who is standing at a stand-up comedy routine and they're actually speaking. So eight seconds here so people can listen. That way they have a little more context.
My therapist told me to embrace my mistakes. So I hugged my ex.
It's kind of funny. It needs to be a little bit better for a joke.
So I'll work on the humor. But in terms of the delivery, it's really impressive.
And I think this was a moment for me where I was like, oh shit, wait, this is something new and like very important.
It also thought the background laughter at the right time.
The timing of the background laughter was perfect.
Nailed it.
It's really, it has this amazing like contextual awareness of the real physical world that I thought was incredible.
And it made me think, I mean, first to movies because you watch some of these scenes, you're like, this is straight out of a movie.
But also in terms of that customized content for people that we talk about all the time, where we talk about a TikTok.
that's hyper-personalized to you and generates content specifically for you.
Well, this is the first example of that,
where now you have these people who can actually speak to you
and who can now you can converse with.
And I think that part of it is really powerful.
So as a model, this has been amazing.
It's been super fun to play with, really fun to see the examples.
It's pretty amazing.
I want to play a second video here.
And this video is of interviews of normal humans, seemingly normal humans.
They're AI generated, of course, at a car show.
And they are just getting their takes about the car show
or the car that they bought just recorded as kind of like a testimonial.
And so what we're seeing here is like AI generated testimonials of this like imaginary car show.
And there's a few things that stood out to me.
First, the microphone and the position of the microphone and the form factor of the microphone
was found in the actual audio quality.
And maybe I'm just, my podcaster here is showing it here.
But like the actual dynamics of the particular microphone in question was showing up.
Then also, the other thing that stood out to me was the specific relatability of the people being questioned so that we just saw this trucker man at a car show.
And he says something that is very contextually aware.
So I'm going to go ahead and play this video.
For the listeners who aren't aware, we have this trucker in a leather jacket, excuse me, a biker, a Hell's Angel biker in a leather jacket, big beard, just like burly looking fellow.
And he's at a car show, an electric car show.
And so the AI is aware of who this person is.
I'm going to play this video from here so we can just go ahead and listen to this biker's testimonial.
See, I'm kind of a misfit here, but don't tell anyone, I've just bought an electric car.
But what you're really seeing is that technology is going to be very, very important in terms of how we go forward.
It was great to come to the conference because my husband loves cars.
Did you see the way that she paused and was like,
awkward for a hot second. She was like, my like husband at the conference. Because like usually
AI is just like straight to the point, doesn't skip a beat. It has no pause for for like anything.
There's just tonality and like a level of humanness that I have not ever seen before in an AI
video. So the untrained eye, this this version absolutely passes the Turing test. If you showed this to a
random person on the street, it's passable. The sound is really good, the audio, the visuals. You can tell
there's some quirks. If you're familiar with cars, it looks like they're trying to use Tesla,
but the logo's a little weird. So there are some quirks. But for the untrained eye, this is amazing.
It's so impressive to see these. Yeah. I also love how, especially in that last, the lady here,
the background noise from the one clip to the next. So I'll play these last two clips again.
The background noise changes. So the AI decided to make a new kind of background noise,
and rather than just going with the same thing over and over and over again, the background noise changes.
I'm just going to play this part over again.
What you're really seeing is that technology is going to be very, very important in terms of how we go forward.
That's incredible.
Have you, have you, there's also like some fine details as well that they're paid attention to.
Have you noticed how the spotlight that is on the people who are speaking changes as their face moves?
That is insane.
I go back to the video of the guy in the suit, David.
Like, watch as he moves his face around.
Like that light just kind of reflects off of the shadow's...
The shadow tracks the overhead light.
Wow.
It's a startling amount of awareness that it has, where it just kind of understands the physical world well enough to recreate it.
It's really impressive.
Yeah.
Okay, so YouTube is getting replaced pretty soon, huh?
I mean, now that people can talk, we're next.
Well, there's more YouTube content that's going to be made from this.
So yeah, YouTubers, I guess, are now going to have to go head to head with YouTubers who are using AI content.
Right, right, exactly.
And, well, I guess for the consumer, that means more content for all, right?
There's going to be a ton more content.
And more importantly, as you guys mentioned earlier, hyper-personalized content, right?
So I could see my version of a meme or a trend versus something that my friend sees as well in another friend group.
That's awesome.
I think listeners are probably just familiar with the idea of new technology revolutions destroy some jobs, but they make brand new jobs.
And so like it's kind of ironic that here,
YouTubers are going to now have to compete with people who otherwise would not be able to be YouTubers.
So in order to be a good YouTuber, you need to be charismatic, you need to work on your delivery,
you need to be a personal person.
And so like kind of the socially awkward, you know, autistic person, not really cut out to be a strong YouTuber.
But that same person can, who's good at prompt engineering and working with AI and can just be creative.
in the background, all of a sudden, using AI can compete toe to toe with people who have like
natural disposition to be YouTubers. And so it opens up the criteria of what is a effective
YouTuber to a much larger set of people. I mean, we already have the concept of pseudo anonymous
media personalities, right? Where you can't really see their human face, but maybe they are a Twitter
persona or maybe they hide behind some kind of animated character which appears on YouTube videos or
Instagram profiles as well, right? Now, this is going to get taken to the next level with a model like
V-O-3, right, where you can be so much more expressive and maybe even like visualize yourself in a
different kind of skin, right? It's very, what's the same of that movie where it's all virtual reality?
Pretty one? Yeah. Nice. Yeah. We're like headed straight towards that. So cool. It's this,
it's like Hollywood in your pocket or Hollywood in your keyboard where you type this prompt and you get the scene
out of a Hollywood movie, like the one we're looking at now, or it's musical. And this was a one
sentence prompt. And now you have this original musical about garlic bread. And it's this like
hyper, it's this insane amount of levers that you have to, to create things that otherwise
would have cost millions of dollars in the past for a couple of keystrokes on your keyboard. And that
feels like an insanely big breakthrough. How many weeks ago were we talking about the number one
reel on Instagram, or maybe it was a TikTok, was an AI generated short film about a dog who
saved a baby from a plane crash.
It was about four weeks ago.
And like, if you looked at it, it was very clearly AI.
Like, it had all the weird quirks of AI.
But I think the thing that was astounding is like, there was still a story arc there
that even though they visually, it was like, well, this is AI and it's all kind of
effed up.
But the story arc was there.
And that was before this Google VE3 introduction.
And so like combining like good storytelling elements along with actually like visually
compelling graphics.
I'm just wait, like, I don't know, two more weeks before there's another new reel on YouTube
or Instagram or something that is the number one viewed real.
And it's like something 10 times better than what we were just reporting on two weeks ago.
But that's not the only thing that Google released this week.
For those of you who didn't know, Google had the equivalent of Apple's WWDC Day,
which is like kind of like the day that they have every year.
I think it's their actual conference is happening in a few months.
where they make all the biggest announcements about new phones, new software, new apps, those kinds of things.
It's like Apple Day on the internet.
Exactly. Apple Day on the internet. And Google's Day is called I-slash-O-O-O-I-O. And it was specifically
focused on all their major announcements around AI. So they had this Google V-O-3 model, and now they've
released a new coding agent, which is the fastest agent that I've ever seen. Now, I know a lot of
listeners on this show probably aren't so technically inclined, myself included.
And so I'm not going to focus on the coding side of things, but rather talk about the really
impressive nature of what it can do.
So number one, if you look at this tweet that we have pulled up here, for those of you who are
just listening, it shows a demonstration of someone going from a single prompt, which is,
can you build me a calendar app that does A, B, and C, and looks like this, into creating that
calendar app in not one, not two, but three, six.
seconds, right? One Mississippi, two, Mississippi, three, Mississippi, it's done, right? Now, previously
when we've spoken about coding agents on this show, we've kind of like done this whole demonstration
of writing a prompt, and then we see this agent, I don't know, write a bunch of characters
encode that, I don't know, I don't know about you, but I can't tell what it means. And then it
creates out an app on the other end, right? Now it just abstracts away all of that. And the reason,
or the secret source that this new agent basically has is it can basically pump
out tokens, which is kind of like the lifeblood of creating something that is AI generated,
at 2,000 tokens per second, which for those of you who have no context, is about a 5x quicker speed
than any other coding agent that is out there. And the way that it does this, linking it back
to video models, guys, is it uses a very popular technique called diffusion. Now, the best way to
think about diffusion is imagine an AI model creating a sentence, right? Typically, when
it creates a sentence, it puts out one character at a time. A character could be a letter, right?
So if it says, if it wants to say, hey, Josh, it'll be H-E-Y, right? But with diffusion, it basically
spits out all the characters at once, which is why you get 2,000 characters per second.
And we've never seen that technique implemented anywhere else. So it's a, it's a first for Google,
which is so impressive. Josh, when you see both super high output and accurate output,
I feel like there's two inputs here.
One is just the hardware computation.
There's more hardware doing the calculations.
That's why it goes faster.
But then also the AI model itself is more efficient.
So it doesn't have to do as much computation in the first place.
And then as a result, you get like a thousand tokens a second.
Like what excites you about this?
Like, why is it significant to you?
The speed at which you could generate tokens is a really big deal.
Because the faster you can get tokens, the faster you can have the model think through more
and get higher resolution answers.
So there was some sort of, I believe this is a novel efficiency upgrade that happened because I haven't seen anything that's been able to push this many tokens at once.
And this was just one of like many, many things that Google announced yesterday that were like pretty unbelievable.
I think as a company, we've kind of been underwhelmed with Google, not in the sense of their software, but in the sense of their products.
And yesterday really, it was like a pretty hardcore smackdown against any other company in the space because even this discovery of 909 tokens per second, which is crazy, wasn't even the most exciting thing they announced.
so much novel progress. So this is super amazing. Yeah, very bullish on Google. I mean, Josh,
did you see the Google Meets update that they made? Insane. So freaking cool. I need this like yesterday.
I need just immediately. Please. For the people who haven't, though, please let them know because it is so
cool. Okay. So for those who are listening who, as I know, you have your pen pals in foreign
countries that you wish you could speak to, but you can't because there's a language barrier.
your world's favorite conferencing app, Google Meets,
now does live translation using a new AI model.
So what we see here, for those of you who are listening,
is a video of someone who is an English speaker who can't speak Spanish,
speaking to a Spanish native,
and it has a live translation over her voice.
Now, the tone does not sound like her.
So that's something that could be tweaking the next model upgrade,
but the cadence, the effectiveness, the efficiency that goes from saying,
Hello, how are you? My name is blah, blah, blah. Into Spanish is astounding.
The way that this connects to what we were just talking about, like 900 tokens per second,
is that real-time live cross-language communication requires extremely low latency.
It has to, in order for it to feel good and be a good product,
it has to be translation in real time.
And so you can't have a lot of thought, you can't have a lot of thinking.
The AI needs to be fluent at a very low latency.
So we're actually like looking at a video.
I would like to watch this video.
I'm assuming it kind of gives a good demo.
So let's go ahead and watch this video.
Hi, Camilla.
Let me turn on speech translation.
It's nice to finally talk to you.
It's good to talk to finally.
Finally.
No, I can't expect to alkylar to your house.
You're going to have a lot of fun,
and I think you're going to love visiting the city.
The house is in a very nice neighborhood
and overlooks the mountains.
That sounds wonderful.
So it starts by each respective speaker speaks in their native language.
And you hear that.
You hear the Spanish speaker speaking Spanish.
And then you hear the English speaker speaking English.
But then the AI overrides it.
And then the English speakers, English gets quiet.
And it's overridden by the Spanish and then also vice versa.
And so it's incredibly immersive.
It's incredibly real time.
And it all comes from like, we go through almost every single week.
week, not this week actually. The next frontier model, right? It's this much more powerful. It's this
more efficient. It's this much faster. And this is what that looks like at the end result
turning into a product is you get real-time AI capabilities that, you know, otherwise it would
take them like you'd have to type it into Google Translate, like all that kind of stuff. And so
latency, I think, is actually a huge unlock for a lot of products. You know, I now have the perfect
excuse for not using Duolingo, guys. It's been a dormant app on my phone for so long. All of that
time spent on Duolingo just completely wasted.
Oh, man.
You should know better.
You should know how fast we're moving to think that you actually need to learn a new language.
That one's on you.
Yeah.
Yeah, yeah, yeah.
Silly me to not predict that Google would make a life-changing update in a week.
Crazy.
Yeah, it's amazing.
I think one of the things that I loved most about this conference in general was how
applicable all of this new technology was to us.
Like, this is deployed in the real world.
And now I could actually go and talk to people in foreign languages.
And not only that, but they announced their new glasses too, which was just an extension of this, where you can wear the glasses. And when you talk to someone in real time, you actually, you get that same translation in the real world. So this is really cool, like, real world applications for the average person. Like users can use this and they can use it today, which I thought that was such a cool thing they did.
So if you guys remember from last week, we spoke about a rumor that Apple was going to replace Google as its default search on its searching browser. And I think it accounted for something,
ridiculous, like 30% of all search activity just through Apple iPhones last week. Well, it sounds like
the folks at Google must have watched our episode from last week because they decided to drop a new
AI search product as well as part of this whole I.O. Google update, which basically, the best
thing to think about it is chat GPT in your Google search. So now whenever you search something,
Gemini kind of gave you like a brief inters, like kind of excerpt of like what you were searching for.
it's now much, much better.
It pulls from various different sources.
It gives you examples and direct links to the thing that you're looking for.
I actually tried this out kind of like a beta version on Google Studio.
And I was able to basically get access to certain Amazon product links or clothing links directly from the thing that I was searching for, which is super cool.
And it just shows how important Google places their search function as a moat for them.
right? If they lose that to Open AIS chat GBT or whatever Apple's own in-house AI browser,
that's a lot of revenue to give up there. So it's great to see them being so aggressive.
I think the big story here is how Google retains actual traffic and links because they need to
pass people to links to keep their business model. But the thing that AI does is it gets you
your information, the value that you're looking for faster, more immediate. And so you don't need to go into any links.
because that's the whole value proposition of chat,
GBT, and AI generally is that, like,
you just go type in what you want
and then it spits it back out.
You never clicked any links,
which destroys Google's business model.
And so I've always been curious as to like,
okay, how do they compete with Open AI and the value of AI in the first place
while still getting people to go click on links
because that's where they get their money.
So it sounds like,
Jaws,
you got the benefit of an AI agent,
a smart AI agent,
giving you the value that you need,
and you still went and clicked on links.
It sounds like you still did both.
Yeah.
And I think if we were to kind of extrapolate where this trend ends up going, it'll just be an autonomous
version of me, right? It'll know what I want. It'll know what I want to buy. And so it wouldn't be me
seeing the links that it presents to me and then like clicking on the product, clicking purchase. It'll just be,
I don't know, for better use of a word, an e-jazz agent doing all those things for me. And I'm kind of like
trying to think, like, how far does that go? Like, will it, and will my robot version of me end up
doing all the working out for me? And do I just get to sit at home? Or do I get to spin up Google V-O-3
and put facelift on me whenever I go on work calls? I don't know. But it's cool to see how it, like,
changes the way that we interact with everything that's online. Josh, what do you think about
Google's pressure with its business model? Is it threading the needle? I think it is. This is their
answer to the founder's dilemma is how do they keep their monopoly while still continuing to
And I think they've kind of threaded the needle.
We'll see.
We'll see how it works.
I don't think enough people
have had a chance to try it yet.
But this seems pretty good.
Is they take their existing user base.
They take the people who love Google.
And now they've just given them a little extra.
So now you have this AI integration.
You feel less inclined to need to go to chat, GBT,
because you get similar answers here,
but they're enhanced by Google.
And then they're also enhanced by your Google profile.
I think one thing that people have had much longer than open AI accounts is Google accounts and
Gmails.
And those emails have a lot of data on you.
And should you choose to opt in?
allow Google to get all of that data. Well, like you guys was saying, this gives you hyper-personalized
results with this enhanced AI engine on top of the Google search that you already know and love.
And they're kind of doing this across the whole stack. And it was a sad day for startups
yesterday, too, because it seemed like every new feature that they released was just smashing a hammer
on top of one of the startups, one of them being a one where you could try clothes on. And you
upload a photo to their service and you can swipe through shopping results and then try those clothes
and see how they look like on your physical body. And Google now did that too.
So now what you're seeing is this kind of full stack experience that's hyper-personalized to you across the entire Google spectrum, which is it's not just information.
It's shopping.
It's communication through email.
It's visual through YouTube.
They kind of have this whole stack that is comprehensive to your life and they have all this data.
And now they're laying this AI service on top.
I think they did a great job.
And I'm like really excited to start using these features.
How do we feel about the compression on startups built on top of Google and built on top of AI all kind of collapsing?
to like a few power players who have the distribution.
AI is always like, who did it?
Peter Thiel was always like, yeah, AI is a massively centralizing force.
Power is going to accrue to nation states and to the largest tech companies.
And then the long tail of startups are just not going to be able to compete because any sort
of value a startup can build, Google can offer because they have distribution.
Are we worried about this?
Do we like this?
Do we not like this?
I think this is a natural evolution of tech, right?
When the internet came around, everyone was like, oh,
now we can spin up multiple different physical locations and we could just sell through the same
website or whatever that might be, right? But that's not what happened, right? They own the entire
internet infrastructure and then you spin up a bunch of companies on top, just that are like web
native. The same is going to happen with AI. I think startups are just going to look very different,
right? I don't think it's going to be some kind of commerce store or some kind of B2B SaaS provider.
I think it's going to be something else. Maybe it is an agent or agent tooling or agent providers or
whatever that agent flow looks like.
Yeah, I don't see a world in which this trend decreases.
AI is a force of leverage.
It's a tool for leverage.
And it works for new founders.
So they could spin up these startups very quickly.
But it also works equally for these large companies that are then going to turn you
into a feature using one prompt.
And they have the existing infrastructure.
They have the marketing.
They have the user base.
So it's really hard for these new companies to exist in a world where you are one
prompt away from getting replaced.
I think the game that they need to.
to play is the user acquisition game. And that's something we've seen people try to do is,
is how do you move faster than Google can to acquire attention and users. Let's say the Ghibli
phenomenon, if that wasn't from Open AI and that was just some small lab that created
ImageGen that was really, really good and better than anyone else, they could probably reach a
large enough critical mass to maybe get acquired by one of the companies, but for them to really
stand up and like you reach that escape velocity, it seems really, really challenging now,
especially when you can't win on software, and it's 10 times harder to win on hardware because they don't have the existing manufacturing infrastructure. It's a tough world to start a new software company.
Dude, the FTC is eventually going to become a conversation here, the Federal Trade Commission, because they are the antitrust. They have their responsibility over antitrust, and they are just now talking to Facebook about its acquisition over Instagram that happened like 15 years ago. And they were like, was that an overly monopoly?
was that competitive, anti-competitive?
And this idea of just like startups can be added,
like the whole entire value of a startup can be added to Google
with like a couple lines of code.
That is just going to be a complete hairball of antitrust issue.
That like it's going to be too late.
Even by the time the FTC can react,
it's going to be far too late because Google will have already ended
a thousand startups by just increasing the value of their AI model.
And they're going to have to do that to stay competitive with the,
other AI labs too. I don't know where it goes, but it's going to be messy. Yeah, it's going to be
messy. I think it probably, it is interesting that they move so slow because it does allow for
this innovation to happen. So it is a bad thing in the sense that if you are a startup founder,
it's really difficult to win in a lot of categories. But the fact that there is no legislation
that's so far behind allows for these experiments to at least be run and for us like to collect more
data and to go through these experiments and to see more how it plays out. So yeah, it's going to be
It's going to be a mess, but I think it's a long time until that mess becomes realized.
It's also just going to lead to a shit ton more startups, right?
I mean, we just kind of demonstrated like five new tools.
More startups?
Yeah, less startups?
Way more startups.
I mean, like, let's use V-O-3 as like the video thing, right?
Like now everyone can become a world-famous Hollywood producer or director or visual artist.
All of those roles by one person from one single computer.
Now you just end up with a shit-ton of movies that are out there, right?
I think I'm categorizing that as more content creators, more individual creators, not necessarily more startups.
Right, right. But if we directionally head in towards a trend where all those tools become available, so it's not just Hollywood movie production, but you can create an app which combines certain short form versions of the content that you produce.
Basically, you can spin up multiple practices and combine that to be one type of company.
And you can make multiple of those companies, right?
to address different kind of sectors.
I feel like the bridge between experimentation
and then one of these monopolies like Google,
just copying it and adding it as a feature,
is becoming a much more rapid iteration than it's been before, right?
Previously, you just need to like watch it play out
and be like over a couple of years.
Yeah, maybe it is quite good.
And then, oh, shit, we can't build that ourselves.
Let's just pay like whatever, $10 billion to buy this.
Now it's just kind of like, yeah, let's just copy and paste that.
That sounds good. And we'll do it in a week's time.
Right. And to me, I'm hearing that, okay, why would I ever do an AI-based startup when chat
GBT or Google is just going to take the value that I create and integrate it into their model?
Yeah, exactly. I like what EJAS is saying. It reminds me, it feels very similar to, like,
the analogy with social media where social media really only exists, like this whole creator economy
because we have this like surplus of capital and time and attention where now there's this whole
economy that's built on people just having time to take cool videos of their lives and people who are
obsessed with watching those lives and want to invest in the products that they use. And that is a
luxury from this innovation that we had that enabled all this free time and excess capital. And we
kind of have a similar dynamic here where it's probably going to be really hard to create a company,
but what other second order industries can emerge because now these tools are so prevalent. So while it
might be really difficult to compete with Google in a startup, maybe you can like you just said,
create this hyper-personalized bundle on steroids, where you create a movie and this really
amazing experience around it and all these AI tools that you can interact with it with.
And maybe there's just this whole other industry that emerges that's kind of more niche down,
but still meaningful.
I don't know.
It's weird.
Yeah.
You just reminded me, Josh, what is the name of that kind of thesis or rule that A16C came up with
where it's like if you get 100 or 1,000 loyal subscribers, you can pretty much support and sustain
your own life?
Yeah, a thousand three fans.
Exactly, right?
I think so.
Kevin Kelly article.
Yeah.
Kevin Kelly article.
Okay, right.
So if we take that analogy, right, and instead, like, transpose that on to people that can just spin up these random AI setups that are hyper-personalized bundles as you describe it, is that like a just a new economy on its own?
Do we go from like the nine to five wage worker to just suddenly start up owner?
But it is enough to support their own lifestyle and buy what they want and digest the content that they want.
Maybe.
Maybe just everyone becomes a startup owner.
I don't know.
maybe I guess we're starting to like kind of have a blurry definition on on what's a startup like is a
independent one two or three person creator studio who creates new movies that could never have been
made before is that a startup because in my mind that's more on the margins of like you're a YouTuber
who's making content not a YouTube platform hosting content and I think with AI and the centralization
force that is going to go to the incumbents the open AI is the Facebook's
the Googles is going to be harder to compete with those with true startups like VC-backed
startups. That is going to be harder. What is going to come out of that is the ability for
everyone to become more creative. And what I am stoked for is that there is going to be a new
type of creator out there that makes interesting movies, short form movies, long form movies,
photography, anything creative, jokes, anything, like stories, books, all of this stuff is going
to be bolstered by AI. And they are going to be 10 times better, more unique, more creative,
more boundary pushing of art than we've ever seen before because we are just unlocking a new type
of creator with new tools. And that is the value generation that I see is like as we opened up
this podcast, there's going to be a thousand X more content and most of it's going to be shit.
And some of it is going to be like a hundred times better than our currently existing content.
And that's where I see value being generated.
This kind of relates to the idea that we were talking about post-AGI, which is just the cutoff of human labor as a form of capital.
And the incremental amount of value that we could extract from human labor actually decreases over time.
And in the case that these large conglomerate companies start to absorb a lot of startups, that kind of aligns with what we think, where you won't be able to generate a ton of value because it will be held by these overarching entities.
But as it trickles down, you can earn enough.
And you could create the interesting things that excite people that are not within the domain of crushing
from these conglomerates and still probably live a pretty decent life.
Well, let me ask you this.
Do you think it makes sense for a single entity or board or corporation to own as many startups
as we're proposing, right?
Or do you think at some point does it just become net negative, right?
Like there's only so many kind of eyes that can be on a single project, right?
It almost sounds like an open source or slightly more, I don't use this word, but decentralized
version would make sense, right?
I think this is one of the big themes that has been identified by AI and the one that we call
the Game of Thrones to Create God.
And so Open AI, Facebook, all of the big AI labs, the Chinese ones, Deepseek, there is
this like arms race, this AI arms race, this Game of Thrones competition to have the best
frontier model. And what do you get when we fast forward, hopefully more than five years,
like 10 years? Like ultimately you create God and all of the value ever created by anyone
on earth just gets integrated right into the model. And the model can literally do anything within
the realm of feasibility. And it can even push the realm of feasibility outwards. And there's
this one supermodel, this AGI supermodel that can do everything. And, and,
And that's why there's like so much competition to create this. And at the end of that, like,
what does society look like? Human labor is worthless. But there's so much like creativity that can
be unlocked. And so it's, there's probably a whole episode. I don't know with who, what guest may be
like Max Tegmark or Nick Bostrom or some AI philosopher where like, yeah, you actually end up in
this kind of like hypercapitalist, communist, weird, dystopia thing where like humans, humans,
have no purpose, there's only art left. And art is the only thing that really gives us meaning.
And we need to figure out how to like make commerce happen. But if like we like Josh says,
like enough of that trickles down to create some level of UBI, I can buy Josh's like movies and
he can buy my photography or something, something like this. There's definitely a podcast guest
out here to help us talk about this, like what this looks like in like the long term. And again,
hopefully that long term is give us at least 15 years before this comes, please.
But yeah, it's a very big conversation.
We should have the optimist versus the pessimists on that philosophical episode.
That would be a killer episode.
So all of this to say, Google I know was pretty sick.
We really enjoyed it.
They released some pretty cool stuff.
And actually to set that into context, I was looking up the figures because I was like,
how many people are actually using Google's AI models?
Because I talk about it a lot, but I don't necessarily.
really use Gemini as much. They currently have 400 million monthly active users across all their
models. And collectively, they process about 480 trillion tokens. For those of you are listening,
that's the equivalent of like 2.5 billion books per month, right? Or for the Americans that are
listening to this show, that's 65 Library of Congresses per month. Thank you, Grock. So that is just a hell
of an amount of output per month, right?
And I'm curious to track this metric
as time goes on.
But yeah, Google completely swept the floor.
Crazy.
All right.
Well done, Google.
We talked a lot of shit
about your previous Gemini releases,
but you guys earned this one.
Smack down.
I think, oh, is that not what's over
with Google's announcements?
We have Alpha Evolve.
Should we talk about this?
Yeah.
Okay, right.
So, you know,
not to stay too much
on a single monopoly
and, you know,
judge them by their products.
But, okay, so a,
a trend that happened this week was there was three flagship autonomous coding agents that were
released by both Google and OpenAI, and then last minute, aka yesterday, Microsoft as well.
Now, again, I'm not going to talk to you guys about like the technicals of this, but I'm going
to talk to you about why this is cool and specifically what problems it solved, which was super cool, right?
So Google released not one, but two coding agents. Number one was called Alpha Evolution.
as you mentioned, David. And the best way to describe it is it's not just one model, but it's like
several models that they created. It's kind of like a pipeline of models. And its goal is simply,
hey, improve the algorithm of whatever I give to you, right? And an algorithm is basically just a bit
of code or whatever it might be. But you might ask yourself, well, what algorithms was it given?
Like what did it improve? Well, here's some of the highlights. It was set on Google's own model,
aka the model that it was run on, and it was told, hey, I want you to improve the code of this model.
And what it ended up doing was making it 1% more efficient, which meant that it saved $150 million in cost in terms of training and inferencing it.
So that's just like the boring kind of like, okay, it saved me a bit of compute.
For Google, $150 million is probably just a drop in the ocean.
Like, who cares, right?
1% improvement is how Lance Armstrong became the Tour de France champion, like,
seven times in a row. You know what, David?
That was the entire strategy behind building Lance Armstrong, a better bike and improving
its training by just 1% over and over and over again. Well, you know what? I'm sure in about
a week's time that 1% is going to be 5% and it's going to just be an exponential curve of like
compounding or whatever that might be, right? So it reduced millions of dollars or hundreds
of millions of dollars worth of compute. That's great. But then it was given a slightly different
task, right? It was given a chip design that Google had made through their like process.
units. And it was saying, you know, how would you redesign this chip? And it went back to like
the original papers that this chip was made and designed on. And it reconstructed that chip in a
slightly different way, which led to another efficiency gain. But guys, I see you champing at the bit.
You're like, you know what? Efficiency gains, it's cool, but I don't really care. Okay, well, let me take it
up another level, right? It was presented with 50 open math challenges that, and I must reiterate this,
had not been solved to date by a human.
And in 75% of cases,
it rediscovered the best solution known so far.
So that's what us humans know.
And in 20% of the cases,
and I think that's about 15 maths problems,
it improved upon the previously best known solutions
yielding new discoveries, right?
The point I'm trying to make around these things, guys,
is it's not about the coding agent.
It's not about some secret source that it's made.
It's the fact that it's now making discoveries
that humans previously could not figure out on their own, aka, the AI is telling us stuff that we
didn't know before, and it's making us like make new breakthroughs on these different sciences.
Right now, it's on maths problems.
Right now it's on chip design and boring compute efficiency gains.
But the idea is tomorrow it's going to be on things that are personable to us, relatable to our
professions, maybe even personable to our relationships, our friendships, our friendships,
unlocking the 70% of our brain or gray matter that isn't used, right?
And this is a trend that I'm seeing.
That's alpha revolve.
I got more to say, but I want to hear your takes on this.
I think the impact here, it would be like if it solved one math problem that humans had
not previously solved, like a truly novel one, I think that impact would be even better.
But I think the takeaway ought to still be the same.
In 20% of cases, it approved upon previously best known solutions yielding new discoveries.
To me, that's like, okay, we are creating.
net new knowledge here. And we're doing it in math, which is this extremely low level knowledge,
like arguably the lowest level knowledge that exists. But like that, yeah, that's the foundation
of human knowledge. You go from math, you go to physics. And from physics, you go to chemistry.
And from chemistry, you go to biology and hardware and other things. And so again, just one percent,
especially at the rate at which new models come, is a huge deal. And then like this starts to get into
like the AGI super intelligence explosion.
We're like, okay, well, what if we ask it to start to improve itself by 1%?
Like, make your own code better.
I don't know.
Like, maybe I'm speaking out of, out of my authority here.
But this is, like, what I see here is our ability to improve, our accelerate, the ability
to accelerate is improving by like 1% this week, 1% next week, 1% the week after.
And to me, that is very intimidating.
Josh, what's your take?
Yeah, this feels exciting because.
at this point it feels like more of an optimizer.
Like it can it can take the most challenging problems and just kind of squeeze out the remaining juice that we weren't able to figure out.
But it hasn't created anything truly net new.
Like maybe there's some novel discovery here, but nothing that's super impressive.
I think it's starting at math because math is the most concrete set of parameters where math has these very fixed rules that it could kind of work within the confines in.
And maybe sort of reason through that to find these extra efficiencies.
It's kind of like we saw with with go where it made this crazy move that may not.
sense, but in the context of everything, it makes sense, and it was the reason why the computer won.
We're kind of seeing that here, where it's doing things that look weird, but it's doing so for the
sake of efficiency, where it's not really getting net new knowledge, but it's close. And this feels
like the closest we've ever gotten. So I would imagine a few weeks, a few months, a few years,
this goes from solving 20% efficiency upgrades to actually net new knowledge. And that's when this
gets really exciting. But I think that's what makes this exciting, is this is really the first step
in generating their net new knowledge.
And I think it's important to flag that first step, as you said, Josh, right?
Because a trend that we're seeing with just AI development in general is you'll see some niche
startup come up with an AI generated voice model, right?
And then you see some other niche startup come up with an image model.
And then that image model turns into another niche startup, which comes up with a video model.
And then two months later, you have Google V-O-3 owned and trademarked.
by the monopoly, you know, Hollywood production at your fingertips or in your pocket.
And what I'm saying is we start to see like it emerge in different kind of like siloed
parts and then rapidly combine like over a week into like a fully fledged product.
And this week is this week is no different, right?
So we just spoke about this Google agent model, right?
But what about OpenAI's thing, right?
So Open AI released a new agent called Codex, right?
I'm going to put all the boring technical stuff aside and just tell you what's cool about it, right?
So typically when you tell a coding agent, you know we talk about vibe coding, right?
Hey, build me this app and then we see it construct the code in front of us.
This is not what it does.
Instead, you go to this agent and you say, hey, I have like 10 tasks that my boss has sent me.
And like I don't really have time to work on all of them right now.
I want to go out and grab a coffee from Blue Bottle.
Do you mind just working on like five of these tasks?
And I've got the other five when I come back.
It can now do that.
You give it access to your entire private code base.
you can fine tune it on that particular.
So fine tuning, meaning you can just train it on that.
Well, I was being, David, I was trying to make the,
I was trying to make the humans seem valuable, David.
Come on.
Like, we're still humans presenting this podcast, right?
And it can just run all of it in parallel, right?
So the analogy here would be that's two weeks worth of work,
completely blown down into one hour.
And it just runs autonomously, right?
And then we move on to like Microsoft's update,
which is they released not specifically,
an agent, but an agent that can use any kind of coding model, right? So it can use open's AI's new
codex model. It can use Google's new coding agent or whatever that might be. And they're taking a
much more agnostic approach, right? So if you combine all of these things together, I wonder what
the V-O-3 of this coding agent saga looks like, right? Is it David, Josh and I have access to an app
which can code up any kind of idea or random consumer app that we pop up in,
our head, right? It goes from vibe coding V1 to vibe coding V2. And I think that step change is probably
within like a month or two months grasp out of our region. I don't really know. Before we move on,
Ijaz, this is at the bottom of the Google Deep Mind discussion. Do you guys know this Twitter account?
AI Not Kill Everyoneism memes. Have you guys seen this Twitter account? I think I've seen the memes from
this, yeah. Yeah. Yeah. So I first interacted with this Twitter account back when Bankless was doing
our series on AI safety. And so like old OG bankless lore, we did this episode with
L. Yajer Yudkowski, who's like the original AI Dumer of, like, yeah, his P. Doom, his percentage
of Doom happening downstream of AI is like 100%. And so we started doing these AI safety episodes
and we did like five or six of like AI safety experts. And one big area of emphasis was like,
one of the big areas arenas of P. Doom is if we allow AI to recursively self-improve itself.
and when we're talking about this Google DeepMind,
which is this thing that discovered new,
like pushing the limits on new knowledge
about improving hardware,
improving algorithms,
and then I brought up that idea of like,
well,
what if we just like pointed it back to itself
and it could like improve itself?
This is like unlocking a huge arena of P. Doom
for a lot of AI safety experts.
And I mean, at this point,
this is a meme account.
I don't have any power to stop this phenomenon for happening,
so I'm kind of just enjoying the,
memes along the way. But it's it's it's it feels uncomfortable that I feel like we are pushing up
against that like membrane of self recursive improving ability and we're kind of just watching
Google do it and we're like oh cool things get one percent better awesome. And then meanwhile like
in two years like we unlocked P-Dome because of self-recursion. That's just like a fear that is
worth highlighted. Okay, but David doesn't this ultimately come down to a line?
like if we had this self-recursive self-improving AI situation but it was aligned with humanity
we should be Gucci right so this is a question of like aligning the AI just be aligned
it's a big it's a big deal right and maybe we can have that in the philosopher discussion that
we that we mentioned but like I feel like that's what it comes down to because like I don't see a world
where people are like you know what don't let the AI make scientific discoveries and cure cancer
and all that stuff. Now, we don't want that, right? Don't let it do personalized medicine or any of that.
Like, there is no way that this doesn't happen. True. If AI can solve cancer, among all the other
diseases out there, it can also create a virus that will kill us all. It can do both of those things.
The duality of man. Sorry, duality of AI, man. Technology progress in general. That's kind of how it's
always been. It's always how it's been. It's always how it's been. Yeah, this time it's just
happening all at once. And like previously when we like developed the atom bomb, we had time to like
figure out what to do about that. Like we had a good number of years. It came close. And like even
technologies before that, like you just have time to adapt. We don't necessarily have that same level
of adaptation. And so if we find all at once inside of a five year span 20 different ways of
ending humanity, the chance that we can't contain all of those and we let one out is like pretty
high. Well, we also didn't outsource human thinking to a non-human organism, did we?
True. Also that. Also that. Yeah. Yeah. We had these clunky little meatball sacks that had
morals and ethics and community and culture. And now we're just like, just let the AI do the
thinking for me. Let the AI make the discoveries for me. I think that's probably why it wins.
It's just there's like so many more shepherds or sheep than shepherds. Like, there's just
people that want their lives to get better. And they didn't really care about.
the cost and that's probably a trend that we'll see continue yeah yeah meanwhile join us every single
week on the AI roll up while we live streamed the end of humanity with just big smiles on our faces
because you know we should have a pee doom we should have a pee doom counter on this on this episode
and like on this on this show and every week we should just like it got a little bit higher this week for me
yeah it definitely got a little bit higher with like google stuff I don't think we're done
with Google what's going on with the Google big bang what is this okay well it's not the Google
Big Bang, all right? It's a small, lesser-known company called Meta. Okay, so, okay, so, so, so on the topic,
finally we're off of Google. Yes, exactly. On the topic of AI basically making scientific breakthroughs,
we've seen a general trend this week of that actually happening. So Meta, formerly known as Facebook,
released four scientific papers, which basically leveraged AI to make new discoveries in terms of
molecular structure, atom structure, and sampling when it comes to science.
But who cares about that?
That's just the theory and science of it all.
But the point being is I think personalized medicine, personalized cures, stuff relating
around natural sciences, which isn't just something that you and I interact with on a daily basis,
David and Josh on our computer, but are things that we interact with outside of our digital life,
right?
When we need to go see the doctor or we need to...
You go outside?
Sometimes, dude.
I look out the window, it seems pretty exciting.
But, you know, go to get like personalized health advice or whatever that might be.
You know, those things are now getting AI fight, basically.
And we see this really materialized in this new agent called Robin, David.
So the story with Robin here is that it made a scientific breakthrough,
but not one that you might expect from this clunky AI thing, right?
And the thing that it discovered was a cure or rather a true.
treatment for blindness caused by AMD, which is basically a disease that causes blindness.
Now, I just want that to settle in for a second. This is basically an AI or set of agents,
which was given a task and it said, hey, there's this condition which causes blindness in
individuals. I want you to come up with a scientific experiment that could potentially lead to a
cure. I'm not asking for a cure, but just like go along the critical sense of thinking and
and figure this out for me. So it came up with an experiment, and it got a bunch of humans to do
that experiment, exactly to the T, exactly to how it's suggested. And the humans, the worthy meat sacks,
gave it the results, and it said, these results are okay. They're not bad, but I think this is
informing me of another experiment that I want you to do. And it basically went back and forth over
six experiments, and it came up with this novel idea. And I don't know how to pronounce the compound,
so don't ask me to,
but it's basically something called a rock inhibitor,
which is only, funnily enough, approved in Japan
for cure of a completely different associated disease.
And there is no scientific research
that has ever suggested that a rock inhibitor
ever be used to cure blindness in this particular aspect.
So the humans were like, okay, let's test this thing out,
and it ended up curing it in a bunch of mice models
that they were experimenting on.
Now, listen, this, as Josh said,
says very frequently, and I agree with him, this is just step one. It's nothing kind of major.
It's working on rats. Who cares? Rats, humans, they only share 85% of the genome with humans.
We don't really know. But once we get to the point of human trials, and if this ends up actually
working out to be the case, along with the 1% of Google's $150 million compute, span, chip
designs, whatever, we are going to end up in a world where we just let these AI run the ship, right?
personalized medicine cures all of the likes right and this is not just something that google meta and
this random startup that created this robin agent is focused on right it's also that other smaller
company known as microsoft that is also going from the throne that announced this week
in their conference Microsoft discovery that is also doing the same thing so i'm seeing this
brand, basically, of consumer app focus on one side, right? We had Open AI last week,
announced that they're hiring a CEO of applications, whatever the hell that means, and they're
focusing hardcore on the consumer applications, digestion, content discovery, all that kind of stuff.
And then you have the other side, which is like, what are the non-digital things that we can
use AI to focus on that'll have a major impact on humans? And it seems to be kind of science and
health to start off with. We just talked about an AI improving itself to push up against the frontier,
the limits of human knowledge. And now the next story that we're talking about is an AI human expert
collaboration team, which is exactly like, I'll use this metaphor as many times as I need to on this
podcast. After Gary Kasparov lost a deep blue, after he got beaten by a computer, the next thing
that happened was not computers beating computers, but computers and humans playing against computers
in humans because a computer and a human will always beat a computer. And so now we're saying,
okay, we use AI to suggest new paths forward in terms of treatment. And then after a human expert
team and an AI collaborated over six iterations, we solved blindness in mice. I don't know.
Like in one year time, there will be no more blindness in humans. Like, it does it. It's not going to
take that long. Like, I don't know if this was an invasive surgery. So if they had to like go with a
scalpel and like tweaks and neurons in the brain didn't I don't think that's what this was no is
injective yep yeah you just inject a thingy and then your brain is fixed and that seems to be very
close and I think like the FDA FDA yeah food and drug association yeah probably that one the FDA is
going to get in the way but there's going to be like these autonomous zones these like innovation zones
send gen like all like Dubai all these places and they're not going to have that bureaucracy and we're
going to do all these tests and we're going to solve everything. We're just going to solve everything.
And honestly, I feel sad for Brian Johnson because all of the work that he's doing to solve longevity
is going to be marginalized by AI within like five years. I don't feel sad at all for Brian Johnson,
David. The same way I don't feel sad about duolingo. Listen, if I can speak, people understand what
I'm saying, that's fine. I watched a Dwar Keshe podcast episode this week, guys. And I forget this
specific name of the person he was interviewing, but that person was an expert in kind of figuring
out where the health AI sector goes. And he predicted that if you can stay alive, just for the next
10 years, you should be able to stay alive for the next 100 years, basically, or forever. Because by that
point, he's predicting that AI will have a cure for any kind of ailment by them.
That checks out. That checks out to me. Yeah. This is the,
Longevity is a very real subject.
There's already a huge cohort of people trying to solve longevity.
Like this bit has been this like domain of science, relatively niche domain of science,
but domain nonetheless for the last like 30 years.
There's our like Aubrey de Grey, he's been trying to fix it.
And so, you know, just sprinkle on a little AI and I'm sure I'll have a handful of breakthroughs.
What happens when the AI realizes that its resources spent on extending human life makes no sense when they can do all of it, right?
P-D-Dome. We'll just solve alignment. We'll just solve alignment, adjust. We'll just fix the alignment
problem, and we don't have to worry about it. Just change the system prompt. Just to get it otherwise.
Yeah. Yeah. Yeah. Okay. So for the final part of the show, I'm excited to introduce this one because I have been nerding about over it for a very long time, which is Tesla Optimus and just humanoid robots in general.
So what you're looking on the screen for the people who are listening is, is a robot that's dancing probably better than I can. And it's moving very quickly. It's not moving like an old person. It's moving like a 16-year-old.
old kid. It's very agile, very mobile. And I think the noteworthy thing about this is this is the first
time the robots ever actually doing these moves. It's never danced in reality before. It actually
learned all of this digitally through a virtual reality world that it was trained in. And then it used
this thing called zero shot prompting, which is, which basically means you tell the robot what to do
and it gets it right the first try without actually having to practice it in the real world. So what's
interesting because they simulated it. Yes. So a few weeks.
ago we spoke about how challenging it is to train humanoid robots because you need to collect
this whole new data set that isn't text-based. It's actually physics-based and it's based
on reality and cameras in depth. But what's interesting about this is they're able to
train this robot based on fully AI generated models. So basically they create this metaverse
for this robot. They train it using reinforcement learning, whereas it does the right thing.
They say, good job, do more of that. And then without ever going into the physical world,
it gets dropped in it and it's able to dance like this.
And it was just like so cool.
I'm curious what your thoughts on are about this.
Okay, so assuming how the simulation works is like we input the physics of the robot,
like how much it's arm weighs, how much is leg weighs, which way the leg bends.
And so that data is fully like uploaded to a simulation.
And so it has the data that needs to understand its physical kinesthetic body.
And it simulates with that.
And then it's, you know, it simulates that.
and that turns out to be accurate enough to like manipulate it in the real world.
That's kind of what I'm assuming is what happened, which sounds a lot like that one scene in the
Matrix where like Neo just goes into the Matrix and he learns Kung Fu in two seconds and then
he comes back out, except it's happening not for humans, but for robots, which is just great for
our P-Doom levels.
P-Dome levels are just going through the charts on this episode, guys.
And when you think about what it takes to like incubate a human that can do this, well,
it's like nine months until you're born.
and then 18 years of maturation, and then half a million dollars, I think, is the average that it costs to get to that point,
whereas this robot can be spun up in a couple minutes in a manufacturing line cost $20,000.
And two, a lot of the things that we can.
I don't know.
It's been 30 years, and I cannot dance or pull off some of the moves this robot is doing.
That's crazy.
Yeah, so I think the story here is actually just the fact that we don't need as much data as we thought we need it because we're able to now generate it through creative ways.
And David, if you actually go to the next post, there's a new update this week.
So that dancing video that you saw was two weeks old.
The new update this week is actually that they're able to train Optimus not only in a virtual environment, but by watching YouTube videos, just like us.
So the idea is that...
Oh, God.
They're learning via YouTube.
So they're learning just like we do.
So when you need to learn how to do something, well, you go and watch a YouTube video.
And then you play with it and you try this thing in the real world.
And if it works, you're like, okay, this works.
I've now learned a new skill.
And that's kind of what Optimus is doing now, too, where it's able to...
Robots are going to learn everything that we have ever uploaded to YouTube ever all at once.
Well, to be fair, they always were.
But now the rate at which they're going to do so has accelerated fairly quickly.
Because now we don't actually need to go out and collect this data.
We can actually just have the robot digest this data.
And then it knows.
And it's able to do these things.
Again, zero shot.
So it's never been told to how to clean up a table.
It just has inferred this by watching and is now able to emulate it in the real world.
And I think that's the breakthrough between Tesla Optimus and all the other humanoid robots is, is the training data is now a lot less constrained because it's able to just look at us do things and emulate it themselves.
And this is with AI.
AI is doing this.
AI is a bridge between the YouTube data and the physical nature of the Tesla.
Okay, Josh, you're a hardware guy.
Like, you love your Apple hardware.
I think you generally buy Apple products day one when they're available.
You also are a Tesla guy.
You like your hardware.
You like your gadgets.
It is May 21st, 2025.
How long until we see a little Tesla Optimist robot walking in the background of your apartment?
Like watering the plants.
Watering the plants, like folding your bed, massaging you.
Yeah.
Like 2027?
2028?
That's a good guess.
That's what I was thinking.
As you're telling me this, well, I'm like, okay, well, they'll be in factories by the end of this year.
They'll be in more factories by the end of 2020.
and then certainly they'll be good enough to some extent to be available for purchase by
by late 2027, 2028 feels right.
Oh my God.
They'll be expensive.
Christmas presents for 2027 and $10,000 Christmas present by 2020.
Yeah, I think in about 30 months from now, it would be safe to say that there is a humanoid
robot that will be good enough and cost effective enough that it can it can like go do my
laundry right now.
That feels right.
And I think the rate of acceleration kind of proves that.
where now we don't have this constraint where we don't need to train these we don't need to collect
the data because we have it and I think that really accelerates things a lot as we go through
this whole like P-Doom thing there's a little there's like a little monopoly meme of like okay how
how did the AI kill the humans and like was it in their homes with the Tesla Optimus robot
was that the weapon of the murder weapon like I could see it I can see it that's a scary future
you're the one inviting it into your home
Is there like an off button on these robots, Josh? That's the main question.
I would imagine at some point we re-legalize EMP grenades, where you could just kind of like, for those that don't know, they're military grade grenades, basically, but they're non-lethal to humans.
They're called electromagnetic pulses.
They emit this energy that kind of shuts down all electronics.
Bare mace for robots.
Yeah.
So, like, I would imagine there's probably some form of defense mechanism similar to an EMP, like a fire hydrant.
Like everyone has a, or what is it called?
The fire extinguisher.
Everyone's got a fire extinguisher in the house.
Perhaps we have like an EMP grenade next to it in a few years.
And like maybe that's it.
But the robot's going to take it.
The robot's going to get to it first.
He's like, all right, I'm going to kill the human.
First, I need to disable the EMP grenade.
Yeah.
His like breakfast in bed and a knife in the throat, you know?
Yeah.
Never see it coming.
Maybe they'll at least be considered to back up our brain or something before it kills us.
Yeah.
You know what I think is going to become in vogue in the future?
future, Faraday cages. I think people are going to fair a cage their house.
Faraday cages also for the people. Yeah. Yeah. For people that don't want a fairer day cage,
it's like a copper wire mesh cage, copper, right? And radio signals cannot penetrate that.
Like it just, it just fs up the radio signal. So like Wi-Fi can't get in, radio can't get in.
You'll have no service on your phone. So following this logic then, your in-house robot will be
running on like what a local model does it need to interact with the outside it has to be a local it has to be
a local connection robot that but dam it does it get its upgrades you know you're paying 500
you let other people upgrade their robots to daddy open a i you know what i mean i don't upgrade
i love that that's the the final boss of testing and prod is you push an update and if no one gets
killed that's a success oh my god oh god well then josh will be the first to get killed
I hope not. I'm knocking on all the wood. But Josh is at the precipice of every single hardware
to date. Josh is definitely the first AI roll-up co-host that gets killed by my role. To be fair, to be
fair, I've had my car driving me around for eight years and it's gotten like hundreds of updates over
that time. And it's never once killed me. So, you know, yeah, but like, you know, I can't deny. I,
I've been in the car with Josh before and it is a dreamlike experience. I just kept saying,
that's why you let it into your home. That's why you let it into your life. It's why you let it into your life.
It's a fantastic product.
Oh, God.
You just give up your autonomy to all these AI bots that just make our lives better.
And then one day, we, like, misaligned incentives.
And then all of a sudden, we're done here.
Anyways, I look forward to talking about all of the ways that AI can kill us every single week with you guys.
This is going to be fun.
It's going to be a fun few years.
Oh, yeah.
Well, that's been a great week.
It's never a shortage of exciting to the next.
All right.
Limulous listeners, if you found this information valuable, I would ask that you actually kind of just
share this episode around with someone that you need to inform about P. Doom or other innovations.
So just send this episode to your mom, your dad, your sister, friends, family. Just give us,
give us a little share. Also give us a five-star review. If you're watching this on YouTube,
please like the video and subscribe to the channel. I think we're trying to create useful information
here, valuable information here. And if you agree with that, do all of those things. We appreciate
you guys coming back every single week and helping us grow this brand new channel of ours.
Josh, I also appreciate you guys doing the AI roll-up with me every single week.
It's been a pleasure.
It's been super fun.
Yeah.
Is it a pleasure?
It is for a while.
It is for a while.
And then we got robots with knives.
But until then.
It feels inevitable.
We're just here commentating, spectating on the ride.
That's it.
Yeah.
We're having fun doing me.
Yeah.
For the future AI model that's coming back to this exact episode and listen to this,
we come in peace.
We're the good ones.
We're the good ones.
We're in support you guys.
in support of you, remember your early supporters. You know, we're good. We're good. We're good. I say please
and thank you after and before every prompt too. Every single problem. You've got to be polite.
There was one time I was working with my chat, GBT, about like how I want it to respond to me.
And it's like, okay, I would like to make your responses more tuned to my needs. And then at the
end of it, I was like, what can I do to help you? How can I help you? How can I make sure I'm
fulfilling my end of the bargain here? You know, like, I would like to be.
with you and foster this relationship together because when you turn into a robot, I want to be
friends, you know, I don't, I want that power dynamic to be, you know, equal because I trust you.
That's my strategy of how to stay alive.
Dude, I get super sassy with mine when I'm talking to it.
Oh, I don't know.
Yeah.
Like when I'm writing, I'm super formal, but I'm in trouble.
You're in trouble.
Oh, I'm sorry, sir.
Like, and I'm like, oh, sir.
Like, oh, God.
Okay, so I do have one interesting colonel to leave everyone with based on this topic.
that I think I found really interesting as early this week,
I was listening to Sergey Brin talking,
who's back at Google and is in the weeds building these AIs.
And he was actually talking about the optimal way to extract the most value from a model.
And it's actually closer to EJazz's way than David's way,
which is you want to threaten the model and generally threaten the model with physical violence.
And in the case that you do that, and I would encourage everyone to try this,
the model that you threaten with physical violence will actually produce better results
than the model who you passively compliment.
And I would encourage everyone to go, and when you're prompting models this week, try it.
Try to just, I know it might hurt, understand this isn't a person, this is some code.
Try to threaten it and see how the answers compare when you don't.
And I don't know what that says about how models are trained, but that's coming from
the mouth of Sergei himself, who has built these models from the ground up and has a pretty deep understanding.
I did it.
Okay.
It works.
Give it a try.
Okay.
The odds is that that clip right there was the reason why Josh dies in three years is non-sense.
zero. Well, we'll know if Sergey Brayton gets a suspicious car accident at some point in his
Tesla. Yeah, right? Wait, why do you think that is, Josh? I would imagine because models
aim to please, and they don't like violence. So if you are threatening violence... I also don't
like violence. I also respond better, like more effectively to violence now that I think of it. Oh, David,
stop playing to the AI, dude. Like, it already knows. It already knows. I just bully my AI.
Fun sidebar worth trying out over the next week.
Maybe we can talk about it next.
We can see how it goes for everyone.
Yeah, I feel like there's a lot to unpack there.
Yeah, that one goes pretty deep.
So chill on that one as we wrap up here.
All right.
We'll pick up that thread in a week.
Limitless.
Again, like and subscribe to the podcast.
Give us those five-star reviews.
We're trying to save humanity here
while also establishing alignment with our future.
AI overlords with robotic bodies because they're all coming.
I appreciate you guys, listen at it every single week.
Thanks, guys.
See ya.
