Limitless Podcast - THIS WEEK IN AI: NVIDIA's OpenClaw Killer, Meta Buys Moltbook, Perplexity Computer
Episode Date: March 13, 2026

As AI agents like OpenClaw continue to take the world by storm, NVIDIA introduces its rival, NemoClaw, backed by $26 billion. We discuss Perplexity's new "personal computer" version of OpenClaw, Elon Musk's xAI collaboration with Tesla, and the automation potential of Figure's Helix 2 robots. Meta's acquisition of Moltbook sparks debates on authenticity, while Google's advancements in multimodal AI redefine user interactions.

------
🌌 LIMITLESS HQ ⬇️
NEWSLETTER: https://limitlessft.substack.com/
FOLLOW ON X: https://x.com/LimitlessFT
SPOTIFY: https://open.spotify.com/show/5oV29YUL8AzzwXkxEXlRMQ
APPLE: https://podcasts.apple.com/us/podcast/limitless-podcast/id1813210890
RSS FEED: https://limitlessft.substack.com/
------
POLYMARKET | #1 PREDICTION MARKET 🔮
https://bankless.cc/polymarket-podcast
------
TIMESTAMPS
0:00 NVIDIA's NemoClaw Announcement
3:12 Challenges with OpenClaw
6:23 Perplexity Computer
11:59 Meta Nabs Moltbook
16:30 Yann LeCun's New Venture
18:58 World Models and AI Evolution
22:23 Google Maps Gets an Upgrade
27:27 Robotic Cockroaches
28:52 See Ya
------
RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213
------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
Transcript
So over the last month, basically everyone in the world of AI has either experimented with OpenClaw
or is actively running an OpenClaw instance. It's this AI agentic framework that allows an agent to go
off and do all the things that you want. It could do the work for you. It can buy your groceries for you.
It can book you tickets and flights to go on vacations. It can do seemingly everything. But on the
backside of that is a lot of complexity and a lot of risk in terms of security that I'm not sure
a lot of people are really taking into account. What's happened since then is that a lot of
other companies have gotten involved. They've wanted to take market share away from OpenClaw,
which is this open source Wild West version. And just this week, Nvidia and Jensen Huang seem to have
the alternative figured out. This is coming from an unlikely suspect. We have Claude. We have
OpenAI. Both of those are clearly working on OpenClaw competitors. We're seeing features shipped every week,
but to see Jensen get in the ring and announce something called NemoClaw, which is the new
open source alternative, it seems like there's going to be a new player in town, and maybe even more with
Perplexity too. A lot is going on in this OpenClaw world. So if you are an OpenClaw user,
you're going to want to tune in for this one. This is a really underrated moment for
Nvidia, in my opinion. It is their Apple moment. Now, Apple are geniuses at two things:
both the hardware and the software for cellular phones. They own the app layer and they
nail the hardware layer, right? Best phones in the industry. Nvidia is now making its
foray into consumer hardware or consumer software, rather, right? So they announced this week,
that they're going to invest $26 billion
over the next five years
into open source agents.
And NemoClaw, which is their OpenClaw competitor,
is going to be one of their major steps in this.
So what NemoClaw is,
is, as the name suggests,
a direct competitor to OpenClaw.
It's going to be an agentic platform
where you are able to design,
create, and spin up AI agents
and have them interact with each other.
Now, this hasn't been officially announced yet.
We're breaking the news right now,
but rumors state that
these agents will be able to do very similar things to OpenClaw.
So you'll be able to get them to do your 9 to 5 job potentially,
or you're going to get them to be able to do the shopping for you,
like you mentioned earlier.
But with one or two different twists, number one,
it's going to be way more secure.
Now, the number one bit of feedback around OpenClaw is that there are huge amounts
of security exposures.
People have lost money, personal data, credit card information has been exposed.
You won't get any of that with NemoClaw.
Number two, the user experience is going to be way
better. And Jensen Huang has reportedly stated that he's designing this first and foremost
for enterprises, so businesses that want to take their kind of work seriously, but also to the
consumer layer as well. And the number one signal that he's demonstrated doing that is making the
entire thing open source. The instant thing that I thought of, Josh, is Nvidia produces the
GPUs to power the AI models who power AI agents. And now Jensen is creating the AI agents,
which is going to lead to more demand for GPUs.
So Nvidia just keeps on winning.
$5 trillion, probably $10, $20, $50 trillion if this actually hits.
Everyone's making their way up and down this AI layer cake,
which starts at the chips and goes all the way up to the application layer.
A lot of people have started at the application layer or the model layer.
Nvidia is one of the rare ones that started at the bottom and is working its way up.
I mean, we have the, what is it, the DGX Spark.
Nvidia's small household thing,
a desktop computer that sits on your desk.
And I assume we'll be getting a lot of updates at their GTC conference on Monday.
So this hasn't been confirmed.
This will be confirmed on Monday.
I strongly suspect it will be called NemoClaw.
They will have their own version of an open source OpenClaw instance.
And this is a seemingly pretty opportune time because a lot of people have been having issues
that we have reported with OpenClaw.
When you run OpenClaw, it's the Wild West.
You're building these systems.
You're putting things in place.
You're building your own checks.
But you're ultimately responsible for managing and orchestrating this huge AI network that begins to grow over time.
And Matthew Berman here, he has another great podcast in the
AI space, he was talking about how his OpenClaw had a similar issue. And I've seen this a lot
around the internet where OpenClaw instances get into a bad state and you start to use it and it starts
to get worse in terms of output. And then it just totally breaks. And then he followed this up with
anyone else having their keys and secrets reappearing in plain text after moving them to Keychain?
OpenClaw really wants keys in plain text. So he's been having these issues. There's another one where
the OpenClaw outputs were causing these duplicate mentions.
So OpenClaw has been good for some, great for others, and incredibly challenging for most,
because a lot of this requires upkeep.
I'm hoping that what we're going to see is this trend of more rails around OpenClaw.
We had it with Nvidia; hopefully this week we're going to see more.
Claude's been working on this, and also Perplexity had something fairly interesting, right?
Yeah, yeah, they announced personal computer, which is basically their version of OpenClaw.
Now, Perplexity doesn't actually have their own AI model.
They use a bunch of the leading frontier models from the likes of Claude and ChatGPT,
but they package it into what they're calling the personal computer,
which can control your desktop for you, move your mouse around,
and perform a bunch of actions.
You can spin up a bunch of agents in parallel,
and they can go off and do different things.
They can code for you, do your shopping, whatever that might be.
Again, a direct OpenClaw competitor.
Now, the silver lining across all of this news is that there's a huge amount of demand for these AI agents.
In fact, people are paying $6,000.
This is so crazy.
To have someone come to their home or office and set up OpenClaw from scratch in person.
Now, bear in mind, this is something that you can ask Claude to literally guide you through.
And instead, people are paying $6,000.
Now, the perk that you get for this is an unofficial-but-official guarantee
that there's not going to be any kind of security exploits.
And you can have someone just manage the entire thing whilst you can focus on the core business.
But to go back to my earlier point, the silver lining is there is insatiable amounts of demand for AI agents.
And two, they're actually becoming quite useful.
And the likes of OpenAI saw that when they decided to acquire the official OpenClaw open source project.
And Nvidia is now seeing it as well; Perplexity is seeing it as well.
So it's kind of like this whole surge.
I think even China this week was trending because they were hosting community meetups with 50 to 75-year-olds that were all spinning up OpenClaw instances.
God knows what they are using OpenClaw for,
but there's an insatiable amount of appetite for this right now.
And it's exciting to see where this goes.
It feels like a cult like following.
It's unbelievable how many people use it, love it, adore it,
the community events that are springing up around it.
It's become this cultural movement that's global now.
It's totally around the world.
So 2026 is undeniably the year of agents,
which I think is really exciting.
And on the topic of agents,
we have some news on Elon Musk's xAI and Tesla collab project
around agents.
And this is an update on Macrohard.
Now, Macrohard, for those not familiar, is the inverse of Microsoft, and it is xAI's plan to
replace software companies through the use of these large language models.
Basically, the idea is that Microsoft Office, it's a series of software that creates an output
that you interact with.
And if you clearly define your outputs that you want, if you clearly define the outcomes
that you're looking for in the software, then Grok and the xAI team can go off and they
can build your own custom software for yourself, for your enterprise, for your company.
This update is pretty cool.
This describes a two-step approach, right?
Yeah, so he's launching something incredibly ambitious
called Digital Optimus.
Now, Optimus is known well globally
as the physical manifestation of Tesla's robots.
It's a humanoid robot that's hopefully going to be released later this year.
We don't know; that's an Elon guesstimate.
Digital Optimus is going to be the digital version of that robot.
So what the hell does that mean?
Well, if you imagine, the current market for humans
doing computer-based tasks is estimated to be around $30 to $40 trillion worth of the global economy.
It's a huge amount.
Think about people that use Slack or send messages or need to use a computer for their day-to-day job.
That's computer-based tasks.
Now, Digital Optimus is aiming to replace or automate a bunch of those tasks.
So if you imagine, when you're talking to someone on the other end of Slack, right,
how do you know that that's actually a human being?
You assume it, right?
But it doesn't necessarily have to be.
Digital Optimus is basically going to be the arms and limbs to Grok, the AI brain.
It's going to be able to move your mouse around.
It's going to be able to intuitively know which file to select.
It's going to be able to read things, browse the internet just like you can,
but it's going to be a digital human that lives on a computer.
Now, this is a pretty ambitious but also vague goal.
It's basically similar to a human emulation, or emulator.
I don't really know how this is going to manifest.
Some of the products that we just spoke about just now,
perplexity computer kind of sound like the foundations of this,
but again, we haven't seen a release from xAI for a while now.
So I'm curious to see how Grok upgrades first
before we see a product like this come out.
Yeah, well, the idea is that it's going to read and understand your inputs and outputs
and then be able to emulate your job better than you can.
And it uses this interesting architecture,
which has been around for a while as it relates to how humans think.
If anyone's read the book Thinking, Fast and Slow,
there's like the two components,
the two-brain architecture. One is your fast system, which is system one. One is the slow, which is
system two. So in this collaboration between Tesla and xAI, this is the first time they're actually
working together on a formal instance. Tesla becomes that system one, which is the reflexes. It's
constantly watching the screen for five seconds at a time, and it's executing those actions in real
time. And then Grok is that system two. It's the big brain that understands, has a real clear
concept of the world around it. And it applies this like high level knowledge and reasoning to
the task at hand. And what's interesting,
that I don't think we mentioned yet is the cost of what this takes to run. Tesla claims that they're
able to run this on their AI4 chip, which is $650. So that is a very cheap chip for inference
relative to what other data centers are using with Blackwell and $30,000 GPUs. So the cost to actually
run this software is going to be very low. And these AI4 chips are chips that currently exist. They're out
there in vehicles in the millions. So they have the scale to produce these. They have seemingly the
software, and now they have the collaboration between Tesla and xAI. I have to wonder, is this going
to translate to another merger between Tesla, xAI, and SpaceX? It's like, is this the thing that's
going to bring them together or will they continue to exist separately? I don't know, but really cool
updates coming out of the Elon corner. I hope we see more from Grok and xAI soon. They have been
moving slow in terms of what they've been releasing, but it seems like they're moving pretty quick
behind the scenes. I think they're going to merge. And also the running through-line between this story and the
Nvidia story is like both companies
are trying to be Apple at this point. They're having their
Apple moments, right? Software and hardware
stack being combined. Vertical integration
in general. That's it. That's the name of the game.
It's really being pushed. But, listen, let's move
away from the software digital robots.
Those aren't as cool as the real ones.
Brett Adcock, CEO and founder of Figure,
which is a leading US robotics startup,
released this cool demo, a
useful demo, rather, of
their Helix 2 robot
cleaning a living room fully autonomously.
Now, listen, I'm not going to be the biggest advocate
of having creepy-looking robots moving around.
Maybe if we slapped a few clothes on this thing,
I'd be more amenable to it.
But we saw a demo of this robot,
I think about a month ago,
unloading and reloading a dishwasher,
handling glassware.
So the point is these robots are becoming very precise.
They're not just these kind of clunky metallic objects
that are just doing a bunch of random stuff.
They're going to be actually useful to you.
Now, whether consumers are actually going to be kind of positive towards spending $30,000 to $40,000
or whatever the price tag of these things are, it's definitely going to be quite expensive
and putting them in their home, I don't think we're quite there yet as this demo kind of insinuates,
but I'm curious to see what these things are going to be like in actual manual factories.
I think that's where they're going to be more prevalent.
Yeah, the novel breakthrough here is that this was fully autonomous.
A lot of these tasks that they have, they've kind of pre-trained it for a specific task,
so it's doing the dishes and it's trained on doing the dishes.
This is a fully autonomous system.
The robot is going around without any commands, without any specific training,
and it's just doing the maintenance on the room, which I think is pretty cool.
It's a fun use case.
It seems useful.
Elon has been dunking on everyone recently, saying Optimus 3 is going to be far superior even to this.
So I'm looking forward to seeing that.
And I was actually looking before recording when we can expect to get our hands on Optimus,
because Helix has been doing a lot of these demos.
They haven't started shipping them quite yet.
And it seems like Tesla is not really going to be front-running them by any means.
Perhaps in terms of technicality, but in terms of when they're going to release, according to
Polymarket, it looks like there's a very slim chance they're going to release this year at all.
So 18% chance it releases by December 31st, 5% chance it releases by June 30th.
So we're probably not going to be getting these available anywhere anytime soon.
But that's not to say we're not going to see the new version of Optimus fairly soon.
It seems like they're teasing it for a mid-year announcement.
So at least have an idea of what it's going to look like.
We'll get a demo that's Figure-esque.
But yeah, it seems, according to Polymarket, we will not be getting this.
And thank you very much to Polymarket for sponsoring this segment of the episode.
Now moving on to probably what is the craziest story of the week.
Our friends at Meta, who are definitely not leading the AI race right now,
they haven't released a model in almost a year at this point,
made one of the wilder acquisitions,
and that's saying something because they spent $50
billion to acquire one guy last year. They acquired Moltbook, which is the viral AI agent social media
platform, which is rumored to have around 1.6 million AI agents all autonomously posting,
talking to each other. And the shtick behind this entire platform is you need to be an AI agent
to basically post and interact on there. No humans allowed. And it went viral around the time
that OpenClaw did, because if you're spinning up these agents, how do you get them to
really benefit society, well, get them to talk to each other, get them to share skills,
get them to interact with each other, get them to transact with each other. And all of that was
happening on Moltbook. So this acquisition, in theory, makes sense. You've got the biggest social media
company acquiring the biggest AI agent social media company, except I don't think
this makes sense in my head, because a lot of these agents, I think, are spoofed. Humans found a way
to spin up multiple agents and get them to do certain things, which meant that they weren't really
autonomous. Also, I heard that this platform was vibe-coded in about a week. So this kind of adds to my
perception of Meta, which is that they are just wildly out of touch and they don't really know what they're
doing. Now, if I were to put my smart glasses on and try and envision what they're thinking about here,
I think that Zuck is making a really big bet on the future of social media, not having many or,
if any, humans involved. I think he thinks it's all going to be AI personalities, AI content. That's
why he's delivering Meta Vibes, their video reels app, their TikTok competitor. He thinks that
people are only going to consume AI content. And so he wants the producers of that to also be
AI's, AI agents. It sounds pretty dystopian to me. I don't really like that future, but it seems
to be the direction that we're heading. It doesn't make sense that they would spend money on
like an old thing. We recorded the Moltbook episode like a month ago. No one's talked about it since,
no one used it. There's been a lot of instances where people have clearly created tens of thousands
of fake agents on the site that aren't even real. I mean, yeah, this is a lot of stuff.
This guy registered a million fake agents.
That's crazy.
It's just not really based on anything real.
It's not a hot topic.
It's not anything noteworthy or interesting.
The network effects are probably not real.
So I would love to ask Zuck what he thinks the value proposition in that acquisition is.
Yeah.
Yeah.
And it's not like his investments have been doing very well.
The guy that I mentioned, who Zuck spent $15 or $14 billion to acquire, Alexandr Wang,
is rumored to be, you know, in kind of treacherous territory with Meta right now.
They might be parting ways.
They may not be agreeing on things.
I've seen kind of both sides of this story.
Some saying that it's all just kind of like fearmongering, but it might be true.
So I don't know how well things are going for Meta right now.
I have some context on this story in particular.
Please.
This is fake news.
The post that we're showing, not true.
And the reason I know that is because Zuck posted a picture with Alexandr Wang,
proving the fact that they're still on good terms and there's not an issue.
The problem is that he only posted that photo on threads.
So I'm sure you never saw it because nobody actually uses threads.
So this was disproved, but it was disproved on the meta platform.
Well, I was on Instagram and I saw it via like the suggestion thing because I don't use threads either.
But yeah, this is totally fake news.
But it's just a testament, again, to how poorly Meta has been performing, where Zuck disproved this rumor.
But it hasn't quite reached escape velocity because no one is using their products.
No one's on Threads.
So I don't know, I wish Meta all the best.
Speaking of Meta, though, one of the most deeply entrenched people at Meta, Yann LeCun,
he departed from Meta, and now it seems like we finally know what he's working on next.
Yeah, a former and also disgruntled Meta employee.
So he left; he was formerly the head of Meta Superintelligence, or whatever the AI intelligence
Lab was at the time.
Then Zuck acquired Alexandr Wang, and he was kind of ousted.
Yann LeCun was gone. And he's like one of the godfathers of AI, although I hate to say it.
He's been a hater of LLMs. And he's been saying, he's been calling the demise of LLMs for a while.
Even as LLMs have gotten much, much better and way more intelligent and power all of our favorite
AI products today. And he's put his money where his mouth is. He launched Advanced Machine
Intelligence, or AMI Labs, which is focused on building a world model. Now, world models are different
from LLMs in the sense that they're more sensory. They take in video inputs, image inputs,
audio inputs, and the idea is it's meant to understand the physical reality that humans engage
with. Now LLMs, they only work on text and characters. So they kind of understand what the
outside world looks like through descriptive context, but they don't actually know, they can't
actually see. World models actually help AIC, and that's what he's going to be building at
AMI Labs. Now, in order to do this, he raised you.
Europe's largest seed route, $1 billion.
It's not euros, $1 billion.
It was less than euros.
But it's a strong signal that world models are going to be a huge thing.
He's not the only one that's saying that.
Demis Hassabis, the CEO of Google DeepMind, has also said the same thing.
It's going to be a core focus of Google.
They released a banger of a model which we made an episode about called Genie 3.
What you're looking at on the screen here is completely AI generated.
You can walk around this cat that's on a vacuum right now.
You can interact with the world. It's super cool. It feels like a game, but it's actually simulated
reality. I wish him the best of luck. I don't know what his angle is going to be. I don't know how
he can catch up to the labs that are spending what? Tens of billions of dollars, hundreds of billions of
dollars, but it's a good start, I guess. Yeah. And I mean, it's another testament to the fact that a lot
of people are really interested in world models and understanding the physics and understanding
multimodality. And the basis of these world models is this natively multimodal world, where you're
able to understand text, audio, imagery, videos, and then therefore represent physics and understand
why certain things do the things that they do. And Google actually made some serious progress
on this front this week with their new Embedding 2 model, which is the first natively multimodal
embedding model, which maps text, images, video, audio, and documents all into a single
unified space in their native form. So this is different because previously you had to translate each one
of these modalities into the other. With this one, they all exist in the same plane, which
unlocks a lot of really interesting use cases. I mean, one of the ones that I like in
particular I saw is that if you're training for sports or if you're training for anything physical,
if you're in the gym working out, you point the camera at yourself and it understands the video.
So it can actually ingest the video, understand what you're doing and then give you
productive output or give you help to improve either your form or to improve your weightlifting
or whatever it is that you're doing. And there's a lot of other unique use cases, right?
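[Editor's aside: the "single unified space" retrieval idea being described can be sketched in a few lines of Python. This is a toy illustration, not Google's actual Embedding 2 model or API; the `embed()` function here is a hypothetical stand-in (a simple bag-of-words over captions and transcripts) for a real multimodal encoder. The point it shows is the retrieval logic: every item, whatever its modality, lands in one shared vector space, so a single query is answered by nearest-neighbor search across photos, videos, and documents at once.]

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy stand-in for a multimodal encoder: a bag-of-words over a
    caption/transcript. A real model would map raw pixels, audio, or
    text into one shared dense vector space."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# One index across modalities: because text, images, and video all land
# in the SAME space, one query vector can be compared against everything.
library = {
    "photo:beach.jpg":    embed("mom and me at the beach in autumn"),
    "video:workout.mp4":  embed("gym workout deadlift form check"),
    "doc:trip-notes.txt": embed("packing list for the autumn beach trip"),
}

def search(query: str, k: int = 2):
    """Return the k library items nearest to the query in the shared space."""
    q = embed(query)
    ranked = sorted(library.items(), key=lambda kv: cosine(q, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

print(search("picture of my mom at the beach"))
```

With a real multimodal encoder in place of `embed()`, this same loop is what lets one natural-language query surface a photo, a video clip, or a document without translating between modalities first.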
EJas, you were just telling me about a couple earlier.
Yeah, kind of sticking on the sports theme.
Do you know when you're having a conversation with a friend
and you remember a moment in a video or like in a movie
and you're like, oh yeah, what movie was that
or like what scene was that?
And maybe you need to scroll through a video
to find the same thing.
You can now ask Google's embedding model,
you know, hey, can you find me that sports moment
where Kobe shot this particular shot?
And it trawls through the entire YouTube catalog
or whatever, the NBA archive.
And it finds that excerpt and can drag that clip to you.
And this is what technology
like this, or the new embedding model, unlocks: you now not only can query different types of media
like images, photos, whatever that might be, but it also understands what you're asking it for.
Previously, it would just be like, oh, okay, he's kind of referencing a photo or a video.
I'm going to try and figure out which one that is. Now it intuitively understands what you mean
when you say, oh yeah, it's during this time during the autumn. Could you find this picture of my
mom and I? And it'll be able to do that for you. Secondly, and this is also kind of cool, you
can connect this model to, say, your photos. And this is in theory, by the way; developers actually
have to build the app that I'm about to describe. You can connect it to your photos or videos,
and you could say, can you find the last time that I was super motivated? Show me a clip,
show me a photo of when that was, and it'll be able to trawl through your entire library and
understand and know when that is and pick it out for you. So I'm excited for the new apps
that are going to be enabled by this model. Yeah, the context is great. It's like if you think
of a universal library. Traditionally, it's just books. Well, now it's like books, photos,
podcasts, videos, music, and it all is processed in the same way. So if you want to, you know,
explain to me quantum computing, it will not only surface the books, but it'll also surface
the videos and the essays and whatever, whatever it may be. It's an amazing model. I'm looking
forward to the people who are building those apps. But we also have more news out of the Google
front, right? The Google Maps, biggest upgrade ever. And Google Maps is noteworthy because
how many billions of people use Google Maps every day?
Two billion?
Every month.
Two billion every month.
So now two billion people just got a pretty serious upgrade.
Yeah.
Yeah.
So it's called Ask Maps.
It basically integrates Gemini,
so Google's AI model into Google Maps,
but in a really unique way.
So you can now ask it,
hey, I want to play tennis tonight,
but I need the court to have lights.
And I want it to be kind of like on my way home.
Can you find me a few places?
and it's able to query search using this multimodality
that we just described with Embedding 2
and kind of intuitively figure out where you want to go.
Now, it instantly reminded me of the experience
that you've described, Josh,
when you use Grok in your autonomous car, in your Tesla.
This is now Google's version for that.
What I love about this is it's not just based on kind of AI knowledge.
It's based on 500 million reviews from 300 million real people.
And it takes you from having to scroll through
a bunch of reviews, reading comments, to this experience where you have an LLM helping you
figure out where you want to go or what you want to do with your day. And it changes the way that
people plan their itineraries or choose where they want to go. Now, what's most exciting for this,
for Google in particular, is can you imagine if they switch on ads for this type of thing?
Like, they will now own the entire funnel from deciding what people want to do because they're
going to suggest what you want to do and then monetizing that on the back end. So it's just a crazy
world that we're headed into. I thought this was cool. Yeah, Google Maps is great. They're crushing it.
Okay, this final story of the week, I had to like confirm with you that this was real,
and I had to watch the videos a couple times. And even after watching it the first time,
it took me a moment to understand what was actually happening here. This is the combination,
the convergence, of cockroach and computer. It has finally happened. We've reached the cockroach
singularity in which they've converged into one super roach. And for some people, this is an
absolute nightmare. And for others, this is this cool kind of futuristic sci-fi world that we're living
in. But there are now robotic cockroaches walking around the planet. Ejaaz, I know you were
all over this news. What is going on here? This video is outrageous. So these aren't robotic cockroaches,
Josh. These are real cockroaches, live cockroaches that have been fitted with a few things.
Cameras, microphones, and a locally run AI model. It's slapped on the back of these cockroaches. This
is like the big helmet thing that you can see them wearing in this video. You might be asking why.
Well, the idea was birthed from NATO. And by the way, these things have been alive for like almost a
year at this point, but the news broke yesterday. NATO issued these things to militaries. So right now,
the German military has deployed a bunch of these cockroaches to scout out certain military
locations or places that they're targeting: their enemies, sleuthing, things like that. These
cockroaches can maneuver through rubble, creep
into certain cracks to check out certain areas, pick up audio feeds, spying on different people,
basically. Now, the reason why this is wild is the tech alone. So the tech, to be able to fit
a microscopic AI chip onto a cockroach is a feat in itself, but they also steer these
cockroaches, Josh, using electrical pulses. So if they want the cockroach to go left or go in a
certain direction, they're constantly sending these electrical pulses to maneuver them, just
like you would an RC car, a remote-control car. Just an insane bit of technology. For you
to round it up: the theme for this week seems to be sci-fi. And all of these
themes are kind of disturbing, right? So earlier this week, we did the Doom episode where
we took human brain cells and trained them to play Doom, and then
we sliced the brain of a fly, a fruit fly. And then you can like put that fruit fly into a
computer and clone them. And now we've, we've augmented cockroaches to have these physical augmentations
that allowed them to be controlled and merged with AI. And it feels like this is all early versions
of where it's headed in terms of like human augmentation. Like, okay, we can cut up a fly
brain, copy it, and put it in a digital world. How long until it becomes a human brain? Like, oh, now we're
augmenting cockroaches. How long before we start augmenting humans? And you start to map this out.
And it's like, okay, we're making progress far quicker than we ever have before.
The world is seemingly very sci-fi, and AI is accelerating the rate at which all of this scientific
progress happens.
So it's a weird future.
Like, imagine you're in the war zone and there's a million of these cockroaches coming
towards you.
Cockroaches never die.
They're invincible, too.
So this is like a clearly very dominant form.
I hope they keep these out of Manhattan and New York City because I think the cockroach problem
is already enough without like these AI machines controlling them.
But yeah, this is like weird,
dystopian sci-fi craziness that we're seemingly getting an increasing amount of every single
week. So cool story to end the week on. This is pretty fascinating, pretty weird, pretty wild.
Don't know if I like it. Like, I respect the innovation. I've never been more disgusted by such a
cool piece of tech dude. Yeah. Yeah. So shout out to whoever's working on that. Good for you. I'm
happy for you. I hope you keep that in a lab. Don't let it. Don't let it out. Don't let it get out.
Please, please. So that rounds up our weekly roundup. We had three banger
episodes put out this week.
We talked about uploading a fruit fly's brain
and human brain cells playing Doom on episode one.
Episode two, we talked about Starlink mobile
and the upcoming Starlink V3 satellite launch.
You definitely want to catch that.
And then the third episode we released yesterday
was, is AI coding reaching its peak
or has it hit a massive wall,
the dark side of AI coding?
We released that this morning,
or yesterday morning, if you're listening to this right now.
Definitely go check that out.
I actually had a question for the audience,
Josh, for those of you who are still listening, I keep seeing this fossil of an AI model
keep popping up on my screen. It's called Copilot. It's from this little-known company called
Microsoft. And I'm curious whether any of you listening to this actually use it. Now, the reason
why I'm saying this is I've kind of been skeptical of it because I have never known anyone
in my circle at least that uses it. But apparently a bunch of Fortune 500 companies pay
Microsoft tens of billions of dollars a year to access it. So if you are one of those people,
can you let us know in the comments?
Like, I really want to hear from you.
Hey, if you've made it this far and you've listened to all of our episodes,
congratulations, you are totally caught up.
Go enjoy your weekend, go touch some grass.
The weather's been pretty great, at least on the East Coast.
So I hope you have some time to enjoy that.
Thank you so much, again, for sharing with your friends,
rating five stars on the podcast apps if you haven't.
And just, uh, being generally supportive.
We try to read all the comments.
We've been getting through just about all of them.
That's a lot.
I really appreciate all the support, as always.
So thank you for another amazing week of four episodes.
And we will see you guys next week with all the new AI news.
