Limitless Podcast - Revealing Elon's Tesla AI Secrets: $TSLA Is Winning AI and Automation
Episode Date: October 29, 2025

Tesla is one of a kind in autonomy, focusing on its benchmark Autopilot and innovative neural networks that emulate human decision-making. Tesla processes vast real-world and synthetic data to... enhance adaptability in driving. They discuss the upcoming AI5 chip, set to transform Tesla's computing capabilities, and speculate on the economic implications of a future with humanoid robots and autonomous vehicles.

------

🌌 LIMITLESS HQ: LISTEN & FOLLOW HERE ⬇️
https://limitless.bankless.com/
https://x.com/LimitlessFT

------

TIMESTAMPS
0:00 Intro
0:53 Behind the Scenes of Tesla
2:46 The Secrets of Neural Networks
6:51 The Curse of Dimensionality
7:57 Data Curation and Edge Cases
13:13 World Modeling and Simulation
16:15 The Future of Humanoid Robots
18:04 The Financial Landscape of Tesla
21:48 The AI5 Chip Revolution
25:38 Closing Thoughts on Autonomy

------

RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213

------

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
Transcript
One of the most important technological shifts in the world, happening as we speak every day, is the rise of autonomy, particularly autonomous robots.
Robots can be many things. Robots can be humanoids. They can be cars. And today we're going to talk about both because there's one company that is at the frontier of both of those areas. And that's Tesla. Tesla has the most unbelievable set of autopilot software that I think exists in the world. I've been using it personally for eight years now. And it's been amazing to see how good it's gotten.
And Ejaaz, now, for the first time ever, we have the secrets.
The secret sauce that shows exactly how they've been able to get autonomy this powerful,
this impressive.
And there's now very clearly a world in which I can imagine waking up in the morning,
getting ready to go to work, stepping outside, and there's a cyber cab waiting for me
outside that would just take me wherever I want for a fraction of the cost that it takes
for a normal driver.
And I think this is an incredibly powerful unlock.
And to see a behind the scenes of this is awesome.
So the entire episode today is a behind the scenes of
the most impressive new frontier technology that exists.
I think what I'm most excited about today, Josh, is the fact that I've always thought Tesla
AI and robotics is so cool, but I just don't know how any of this works and they've refused
to tell us and finally they've spilt their secrets today. But to quickly paint some context
for the listeners here up until yesterday, we only thought of Tesla AI as something called
a neural network. That's their secret sauce. And a neural network
can be thought of as a software program that is designed to function like the human brain.
So it takes in information and it discovers patterns, trends, and it can also sometimes make
predictions. Now, this contrasts directly with some of Tesla's competitors, which do self-driving
and robotics in a very different way. They take more modular and sensor-driven approaches, right?
The reason why Tesla's neural network is so special is they have an end-to-end neural network,
which means that they feed a bunch of raw data from one side and out comes the output,
which is an action.
In this case of Tesla cars, it would be driving, steering, and acceleration.
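To make the end-to-end idea concrete, here's a minimal sketch of a policy that maps raw sensor values straight to steering and acceleration outputs. This is a hypothetical toy network written for illustration; the weights, layer sizes, and the four-value "sensor frame" are all made up and have nothing to do with Tesla's actual architecture.

```python
import math

def end_to_end_policy(raw_inputs, weights):
    """Toy end-to-end policy: raw sensor values in, two control
    outputs (steering, acceleration) out. Hypothetical sketch."""
    # One hidden layer with tanh activation, then two output heads.
    hidden = [math.tanh(sum(w * x for w, x in zip(row, raw_inputs)))
              for row in weights["hidden"]]
    steer = math.tanh(sum(w * h for w, h in zip(weights["steer"], hidden)))
    accel = math.tanh(sum(w * h for w, h in zip(weights["accel"], hidden)))
    return steer, accel

# Made-up weights and a fake 4-value sensor frame.
weights = {
    "hidden": [[0.5, -0.2, 0.1, 0.0], [0.0, 0.3, -0.1, 0.2]],
    "steer": [0.8, -0.4],
    "accel": [0.1, 0.9],
}
steer, accel = end_to_end_policy([0.2, -0.1, 0.5, 0.3], weights)
```

The point is only the shape of the system: no hand-written "if obstacle then brake" rules anywhere, just numbers in and controls out.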
And they took this approach for a few different reasons.
The most important being, it's really hard, Josh, to codify what human values are.
And what I mean by that is, let's say in this example that you're seeing on your screen right now, you are driving your car and there's a massive puddle on your lane. But you see that you could potentially drive into the oncoming lane to skirt around it. Now, for humans, it's really easy to do that, right? It's like, okay, maybe I should just go through it because there's no cars coming. But for a machine to do that, it requires a lot of effort. It's hard to hard code. So that's one special thing around the neural network. But Josh, I would.
want to jump into the secrets. Can you lead us with the first one? What you mentioned is really
important, the end-to-end stuff. And I want to walk through a little experiment. So when you kick a
soccer ball, I think this is an experience everyone's gone through, right? What do you do when you
kick a soccer ball? Yeah, I see the soccer ball coming towards me. I kind of
prepare my legs ready to kind of kick. I'm right-footed. So I'm kicking with my right foot.
And then I guess the rest is kind of intuitive, Josh. I just kind of run up to it and kick it.
Yeah, yeah, and I think that's exactly the point is when you kick a soccer ball, this is something a lot of people have experienced.
You're not actually thinking about all of the parts of kicking a soccer ball. You're not thinking of where it is on the ground, where your ankle is, where your knee is, where your leg is, the positioning, how hard you're going to kick it. It just feels very intuitive.
And with a lot of other car companies, they're hard coding these intuitions as code. So it does have to think about each section. It does have to calculate each section.
What's different about Tesla, and what we learned from this article, this is from Ashok, who is the person in charge
of Tesla AI, is that they use this thing called end-to-end neural networks.
And what does that mean in like a fun, simple way?
It's basically the intuition that you just described with kicking a soccer ball,
the AI model, the chip on a car, is able to emulate that.
So instead of making these minute decisions all the way through a fixed decision tree,
they're able to take a ton of data and use these things that we've learned over time,
which are gradients and weights, and basically move the gradients and weights throughout the
decision process to reach an end goal.
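The "moving the gradients and weights to reach an end goal" idea is gradient descent. Here's a minimal single-weight sketch, assuming a toy squared-error loss; this is the general training pattern, not Tesla's actual training loop.

```python
def train_step(w, x, target, lr=0.1):
    """One gradient-descent step for a single weight on the toy loss
    L = (w*x - target)**2 — the weight moves toward the end goal."""
    pred = w * x
    grad = 2 * (pred - target) * x   # dL/dw
    return w - lr * grad

# Start from zero knowledge and repeat the step many times.
w = 0.0
for _ in range(100):
    w = train_step(w, x=2.0, target=6.0)
# w converges toward 3.0, since 3.0 * 2.0 == 6.0
```

Real networks do exactly this, just with billions of weights at once and far richer losses.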
So if the end goal is to kick a soccer ball, there's a very clearly
stated end goal. And the neural network's job is to figure out the full sweep of gradients as it goes
across to get to that end goal. And it uses a bunch of this training data that they collect in
order to get there. So this is the remarkable technological breakthrough that they have. And they
have some really interesting examples here. So in the case of the ducks, like we're looking at an
example on the screen right now, there's ducks standing in the middle of the road. When you're coding an AI
system, when you're coding a car, you're not hard coding in, "if you see ducks, do this." What the car is
understanding intuitively is, okay, there's an obstacle here, and they are ducks.
They are not moving. The interesting thing is the example above is the car recognizes that the
ducks are actually moving across the road. So it knows to wait and then it could pass once they've
moved. But the second one, it notices they're just kind of chilling. The ducks aren't going anywhere.
And what does it do? It understands that intuitively and it is able to back up and then move around
them. And that's the difference in how Tesla does it versus some other companies is they're not
hard coding a series of fixed parameters. They are doing it all entirely through
these neural networks. If we move on to secret number one, Josh, it kind of explains how they're able
to achieve this at a pretty high level, right? So it's titled the curse of dimensionality.
And what it basically describes is you can imagine for a car to self-drive, it requires a ton of
data. I think Tesla, the average car has about seven cameras. It ingests a ton of audio data,
a ton of navigation GPS data, and kinematics, so it's tracking your speed.
And so all this data is roughly equivalent to two billion tokens.
And if you think about it, it needs to run through this end-to-end neural network that you just
described, Josh, and it needs to output pretty much two tokens, one token which determines
which way the car should steer, and the other token determining how fast that car
should be going at that point. Should it decelerate or should it accelerate? And you can imagine this is an
incredibly nuanced and complex process. And the way that the Tesla
neural engine or the neural network is designed
is it has really special data lanes
that process this data in a very nuanced way
to understand what exactly it needs to map onto
when it comes to steering and acceleration.
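The "two output tokens" framing can be sketched by discretizing the two continuous controls into bin indices. The bin count and the [-1, 1] normalization here are assumptions for illustration, not Tesla's published scheme.

```python
def to_control_tokens(steer, accel, n_bins=256):
    """Discretize two continuous controls into two output 'tokens'
    (bin indices). Hypothetical tokenization, assuming controls are
    normalized to the range [-1, 1]."""
    def bin_of(v):
        v = max(-1.0, min(1.0, v))                     # clamp
        return min(int((v + 1.0) / 2.0 * n_bins), n_bins - 1)
    return bin_of(steer), bin_of(accel)

print(to_control_tokens(0.0, 1.0))  # → (128, 255)
```

So however many billions of tokens go in, what comes out each step is just two small integers: a steering bin and a speed bin.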
Now, you might think that's pretty cool,
but Tesla's secret sauce when it comes to this particular component
is the driving data, right, Josh?
So they get access to all the camera data,
audio data, GPS data that I just mentioned
from their entire fleet of Tesla cars.
So the equivalent of data that they get every day is something crazy like 500 years worth
of driving data. Now you can imagine if it processes this amount of rich data and not all of that
data is important, right? It's kind of like the same kind of standard things. But over those years
of data, you get access to the one or two random nuanced incidents which feed in and improve the
collective intelligence of the entire Tesla fleet. So whether you're on the other side of the
world driving a Tesla or you're in the local neighborhood, you still benefit from the same types
of improvements. I want to talk a little bit about the scale because you mentioned 2 billion inputs,
and it's kind of difficult to comprehend what 2 billion actually means. And as a good example,
I want you to imagine your phone processing every TikTok that exists on the platform, every single
second in order to determine the next turn. That is 2 billion inputs. It is an astronomical amount
of data. You're basically, you take the whole TikTok catalog every second in order to make
every decision. And you distill that entire data set into two single points. And it's just,
it's a remarkable amount of compression and then a remarkable amount of precision to make the
right decision over and over and over again, and then adjusting calculations as things change.
The way that they do this, they're not doing this raw. They're not actually ingesting all this
data. They have this data curation process that they use in order to help them kind of figure out
what is important and what is just noise. And what they do, and we have a great example on
screen here is they pick the juiciest clips. It's like kind of curating like a viral playlist
and they use it to train AI on these weird scenarios. So we're seeing on the screen,
there's someone rolling through an intersection in a wheelchair. It's actually very funny to see,
and scary to see what types of things happen. This is crazy. Two cars crashing right in front of
you, driving on a snow blind street. There's kids that are running out in the middle of the road.
There's these tremendous amount of edge cases that are really difficult to understand. And because of the
500 years of driving data every single day that they ingest, they're able to analyze and to
kind of sift through, and they've come up with systems to curate the most viral clips,
not viral, but the clips with the biggest safety implications, the ones that are kind of the weird
edge cases. And then we have this example here. Do you want to walk through the chart that we're seeing?
Because it's really fascinating how the car can kind of see it before the human does.
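In spirit, the curation pass described above is a filter: throw away routine footage, keep the rare, safety-relevant clips. A minimal sketch, where the "score" field and the threshold are hypothetical stand-ins for whatever scoring Tesla actually uses:

```python
def curate_clips(clips, threshold=0.8):
    """Keep only clips whose 'interestingness' score (a hypothetical
    field) crosses a threshold, so rare edge cases survive and
    routine footage is dropped."""
    return [c for c in clips if c["score"] >= threshold]

# A made-up day of fleet footage: mostly boring, two edge cases.
fleet_day = [
    {"id": "highway_cruise", "score": 0.05},
    {"id": "ducks_in_road", "score": 0.93},
    {"id": "wheelchair_crossing", "score": 0.97},
    {"id": "empty_parking_lot", "score": 0.10},
]
kept = curate_clips(fleet_day)  # only the two edge cases remain
```

The hard part in practice is the scoring itself, but the shape of the pipeline is this simple: hundreds of years of footage in, a small curated training set out.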
Yeah. So what's interesting is when I first watched this clip and for those who are listening,
it is a car driving on a very rainy evening on the highway,
and a car in front of it kind of crashes out and goes and starts to spin
and kind of enter its own lane.
When I first watched this video, Josh,
I didn't even notice the car spinning out because it happens so far away.
And so what's effective about this particular video is,
given everything that you just described,
the Tesla self-driving software and machinery is able to detect
things that you as a human necessarily aren't able to. This graph specifically, Josh,
can you explain what I'm looking at here? Yeah, so this is the gradient. This is the weighted
decision tree in real time. So you can kind of see every single frame that it receives.
The chart moves. And then you could actually see the point in which it realizes there's a threat
and it adjusts very quickly. So what you're seeing here is the real time visual representation of
what the brain sees. And we're getting into this a little bit later where you can actually
communicate with this system. You could talk to it just like
it's a large language model.
It's pretty insane.
But I want to move on to the next section
because this is my favorite.
When I saw this, it just really blew my mind
on how they were able to basically emulate
real-world driving scenarios.
And I want to start this section
with an example that they showed.
If you don't mind scrolling down
and sharing the one of the fake screen.
So after these splats,
there's one a little bit later.
And basically it shows a driving further down even.
Sorry, the next section.
Then we'll go right back up.
Oh, sure, sure, sure.
This one.
Yeah.
Yeah, so this example that we're looking at on the screen, this looks like a standard
traditional driving setup.
So the car has, what is that, seven cameras, and each one of them ingests data.
The thing with this, Ejaaz, is what you're seeing on screen is not real.
That is a 100% virtual representation of this real world.
And it's unbelievable because it looks so good.
And as I'm watching this, I'm like, man, I hope GTA6 looks like this because the quality,
the fidelity of this artificially generated world
is indistinguishable from real life, the entire thing.
And the reason they're able to do this is by ingesting all this data.
So now that you've seen how impressive it gets,
this is kind of how they build it, so we can go back up to the Gaussian splatting examples.
And Gaussian splats are kind of a fancy way of saying,
as the car drives through, you could imagine the cameras as scanners.
So if you flipped a camera into a scanner,
it maps this 3D world and creates a world.
And then they're actually able to move around and navigate
the 3D world they create using just the cameras on your car.
And I want to reiterate that every Tesla you see on the road,
regardless of when it was made,
is capable of collecting this data and creating these 3D models
that you see on the screen.
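A Gaussian splat scene, at its simplest, is a cloud of 3D Gaussians that together approximate the geometry the cameras saw. Real Gaussian splatting uses anisotropic covariances, color, and differentiable rendering; this toy sketch only illustrates the representation, with made-up numbers: density is highest near a splat's center and falls off with distance.

```python
import math

def splat_density(point, splats):
    """Toy Gaussian-splat scene: a list of isotropic 3D Gaussians,
    each given as (center, scale, opacity). Density at a query point
    is the sum of their contributions."""
    total = 0.0
    for center, scale, opacity in splats:
        d2 = sum((p - c) ** 2 for p, c in zip(point, center))
        total += opacity * math.exp(-d2 / (2 * scale ** 2))
    return total

# Two made-up splats: one at the origin, a smaller one to the side.
scene = [((0.0, 0.0, 0.0), 1.0, 1.0), ((5.0, 0.0, 0.0), 0.5, 0.8)]
at_center = splat_density((0.0, 0.0, 0.0), scene)
off_center = splat_density((2.0, 0.0, 0.0), scene)
```

Fit millions of these to camera footage and you get a navigable 3D reconstruction, which is roughly what the examples on screen are showing.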
So the interesting thing here is that top bar is what the car sees.
The bottom bar is what the car has generated.
And what it's able to do as a result is kind of get a better understanding
of the world around it and make much better decisions that,
in turn, make it much safer than a human driver.
This just looks like a computer game, Josh.
like one of those massive MMORPGs that kind of generates the world as I navigate and move through it as I interact with different characters.
This is kind of that, but for self-driving specifically.
And why I think this is so cool.
And these are kind of like widely known as world simulators.
It's like an AI model that generates simulated realities.
Is that this data can be modified in so many different ways and so many different scenarios
to train the car for experiences or accidents that it hasn't even encountered just yet.
And this is really cool because I think one major constraint
that a lot of AI models and self-driving models come up against
is sometimes there's not enough data to account
for every single different type of scenario.
So a way to kind of address that is to create something known as synthetic data.
World Simulators is one step towards being able to do that super effectively
whilst bending this simulated reality to how the actual world works, right?
Physics is super important, but hard to translate into an AI model.
And so seeing something like this at scale for a product, a car,
that is used by humans all around the world is just so amazing to see.
And the answer to the question, well, why hasn't everybody done this?
It's because to generate these world models generally takes tens of seconds to do.
Tesla's figured out a way to do it in 0.2 seconds.
So it's a remarkable efficiency improvement that allows them to actually do this.
It's not like the rest of the world doesn't want to do this.
It's that technically speaking, it's just very, very difficult to do.
And the next example they shared was one of my favorite ones because it really just created,
it made it feel very familiar where you can actually talk to these models like they're a language model.
Yeah.
And the example above where you could just say, well, why are you not turning left?
And it will explain to you, well, there's a detour sign.
And why shouldn't you turn right?
Well, because the detour sign is pointing to the left.
And it really, you start to get a sense
the same way, Ejaaz, in our episode yesterday,
where you can see the behind the scenes
of how the model thinks when it trades.
You can now see the behind the scenes of the brain
and you could start to understand
how it works, why it works, how it's reasoning.
And the results from this is pretty fascinating.
Not only is it interpreting inputs
like where the lines on the road are,
but it's also able to read signs.
They have an example where you're able to see a human
who's like kind of giving you a high five,
like saying, wait one second, I'm about to pull out.
And then the car recognizes that and stops.
So there's these, like, unbelievable improvements that they have.
And this section I want to get into next is because they can reevaluate these new decision
trees on existing historical models.
So my car, I've had a few near collision experiences that have been a little scary,
but they've been narrowly avoided.
What they can do is they can actually take the exact camera inputs from the car
and emulate if the collision had actually happened.
And then they could run these new tests on it and see how the new models would compare
to the old models.
So in the case that you narrowly miss an accident,
well, you could test it on a new model and see if it does better.
And in the first example, it does.
And it actually moves away faster than the others.
The second example that they have here is that you can create artificial examples.
So you can take a car, remove it, place it into this virtual world,
but it looks like the real world.
It emulates a real world scenario.
And it just, as I'm looking at this,
Ejaaz, to your point, it all feels like a video game.
And it's a really high fidelity video game where they can take things from reality,
they can distort them, they could create fake realities.
And as I was scrolling through this post,
I started to lose track of what was real and what wasn't,
because it all looks so real to me.
And to the video game point,
which you might be able to share,
is that they actually allow you to play it
as if it was a video game.
You can drive through these virtual worlds
without actually needing a Tesla vehicle.
Yeah, so what I have here is the Tesla's neural world simulator
where you have someone that is in basically a driver's seat,
but it's one of those video gaming driving setups.
And they are driving through what looks like a pretty pleasant suburban neighborhood
on a sunny blue sky day.
And it looks really real, Josh.
It looks like something that would be recorded from Tesla's seven cameras,
except that none of it is real.
He is navigating through roads.
He's skirting around cars.
He's narrowly avoiding collisions.
And every single perspective and angle that you see from the three different cameras
on the screen here is completely.
completely and utterly simulated.
The most remarkable part is that all of this amazing stuff
that we've just talked about for the last 20 minutes,
it's actually cross-compatible with the next most important form of autonomy,
which is robots.
Now, everyone knows Tesla's making Optimus.
They've signaled plans to make hundreds of thousands of these by next year.
And the problem with training robots for a lot of other companies
is that they don't have the data, they don't have the neural models.
Well, all of the progress and all of the data that's been made previously through Tesla
is cross-compatible directly with the robot team and Optimus as a humanoid robot.
And that is one of the most impressive things, because as the program gets better through the
Autopilot AI stack, Optimus improves dramatically.
And what you're able to see is, like you mentioned, Ejaaz, the gold mine is the digital data,
because you just want more data to train.
Optimus gets better.
And that moves us on to the price of Tesla and the second order effects of Tesla.
Because now that we have humanoid robots that are learning quickly,
now that we have cars that are able to drive themselves,
well, there's two things.
One of them is being the chip that unifies the two.
The other is the second order effects of what happens
when this gets rolled out across the world.
And Ejaaz,
maybe you want to tee that up for us,
because this is a very bullish scenario that we're guiding towards.
Okay, so this is the most exciting part for me for this entire episode.
Because as you mentioned,
this data and these neural networks aren't just
super valuable for the Tesla cars. It's for the robots and pretty much any other kind of robotic
machine that they create in the future. And the beautiful thing about this is that it's self-recursive.
So whatever is learned from all the camera information and audio information that's pulled from
the cars can feed into the robots, which is like kind of what we're seeing in the demo on
our screen here with this Optimus robot navigating through what seems to be a manufacturing site,
right? This is incredibly bullish for Tesla, the stock, in my
opinion, because it takes it from, well, it's currently breaching or sitting under its all-time
high, right, Josh? What is that market cap right now? We're just under an all-time high, which
puts it right around $1.5 trillion. Okay, so $1.5 trillion in today's age seems pretty small.
You just had Microsoft and Apple today cross a $4 trillion market cap. If you compare that to Tesla,
and if you factor in the fact that these humanoid robots
are largely going to replace or
work in conjunction with a large
swathe of the human manual
labor force, that prices this at
at least a $10 trillion
company as this scales out.
Josh, I have a feeling you're probably
similarly bullish when it comes to this.
Obviously, I share your sentiment.
I have been maximally bullish on Tesla
for over a decade now.
It's about 12 years.
Did your dad buy your Tesla stock for you
at the start? You asked him to?
Yeah, I was too young to have my own brokerage
account.
So we were very early to some Tesla shares and continue to be maximally bullish on it.
And we're actually, I'm going to be recording a bull thesis episode about Tesla because I'm so bullish on it.
So if you're interested in that, let me know.
But I'm going to pull some notes from that to use here just to kind of outline the humanoid robotic opportunity.
Because Ejaaz, you said $10 trillion, which is an outrageous market cap considering
Nvidia is the largest company in the world sitting at $4 trillion.
So that's a long way to go.
And Nvidia is on top of the world.
But if you think of humanoids as labor, right, you have about four billion people
in the labor market. And this becomes a global trend. This is not just for the United States.
And if the average wage, which is what it is right now, is about $10,000 per year, that's a $40 trillion
market size. So the labor opportunity is $40 trillion, assuming we don't have any productivity
unlocks that generate brand new opportunities that generate more use cases for labor.
So that's just given the current state of the world today. So if one humanoid at
$5 an hour can replace two humans working at $25 an hour, the value per humanoid becomes $200,000
per robot, which is pretty high given that the costs are projected to be around $20,000 to $30,000
once it's all said and done. The U.S. labor market, there's 160 million people. So if just one percent
is substituted by humanoid robots, that is greater than $300 billion in value. That's a lot of
revenue. That is a tremendous amount of revenue. And then you get to a point where you're starting to offset
significant percentages of GDP. So in the 1950s, the U.S. manufacturing share of GDP, it was 30%.
Today, it sits at 10%. And if this goes further, we'll have a total reliance on foreign entities.
So there's all the incentives in the world to bring robots into the United States so we don't
continue this trend of decreasing our manufacturing capabilities. So there's a lot of tailwinds and a lot
of trends that all converge on the humanoid robot opportunity. It's just a matter of making these.
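The back-of-the-envelope sizing above can be checked with a few lines of arithmetic. All inputs are the episode's rough assumptions (four billion workers, a $10,000 average wage, $200,000 of value per humanoid), not forecasts:

```python
# Worked version of the episode's market-sizing assumptions.
global_workers = 4_000_000_000
avg_annual_wage = 10_000                          # USD per year
labor_market = global_workers * avg_annual_wage   # total labor spend

value_per_humanoid = 200_000        # USD, as quoted in the episode
us_workers = 160_000_000
substituted = us_workers // 100     # 1% of the US labor force
us_opportunity = substituted * value_per_humanoid

print(f"${labor_market / 1e12:.0f}T global labor market")
print(f"${us_opportunity / 1e9:.0f}B from 1% US substitution")
```

Under those assumptions the global labor market comes out at $40 trillion, and 1% substitution in the US alone is $320 billion, consistent with the "greater than $300 billion" figure quoted.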
And it's possible because of this new software stack and also because of this new chip,
which is the AI5 chip.
And the AI5 chip is the brand-new golden child of Tesla.
And it is going to be cross-compatible between both robots and cybercabs.
But maybe you want to walk us there exactly why this is interesting.
Yeah.
So the way I think about this is this is Tesla's bold attempt to replace the GPU.
And as we've spoken about many times on this show before,
Nvidia kind of rules the kingdom.
We mentioned that they are sitting at, or well above, a $4 trillion market cap.
They are the kings of the roost, and the reason why is because they provide the hardware that kind of fuels all these different things.
Now, what Tesla identified is, whilst all these GPUs that they've been using are really helpful, they're not
specifically designed to fit certain niche use cases when it comes to the range of different things
they're involved in, right? Cars, humanoid robots, and an array of different things. And now they've
released their AI5 chip, which is basically their brand new chip, which is going to be used
across all their different robots. So it's going to be used in cars, on humanoids, and the like.
And the coolest part about this, Josh, we were speaking about this before the show,
is it improves this whole GPU experience for them by 40 times. But can you help me unpack
as to why exactly? Is this like a sizing thing? Can they add more compute? How does this work?
Okay, so first thing, AI5 isn't out just yet. It's coming. They have completed the spec. Elon's been working on it. He said on the most recent earnings call that it has been his number one focus for weeks and weeks on end, which is a very high signal that it means a lot. So it is coming soon. They're working on tooling and they're working to roll this out, I assume in tandem with the Optimus robot that is probably coming next year. You mentioned it's 40 times better. Why is it 40 times better? And why do companies make their own chips? I think this is an important question because a lot of people don't know. It's like, well, Nvidia makes awesome GPUs.
Why would I go through all the R&D budgeting costs and pain in the ass to make my
own chip? And the answer is because vertical integration allows you to be hyper-customized in what you're
able to do. So what Tesla has done is they, it's funny, they do this with everything, but they kind of,
they looked at the chip through first principles. They looked at all the different modules that sit
on this chip. You could think one of them processes graphics, one of them processes images, one is
processing math. The reason why all these GPUs from other companies need to have all of these is because
they need to satisfy their customers. They need to be able to be diverse in the types of computing
they can do. In the narrow band of use cases that Tesla has, they're able to reconsider this
and optimize for it. So for example, there's this image signal processor that sits on the chip,
and it does exactly what it says. It processes image signals that come in. What Tesla has done is
they're not actually processing images. They're processing photons. And photons can be binary. They
can be expressed in code. So there's this big chip that sits on a larger chip. They're able to completely
remove that image processing chip because they said, actually, we don't need to look at images ever.
We're just doing photons and photons out, baby. And that unlocks X percent of this board to add
more compute power to the specific type of compute you need. So for the first time ever, you're
getting these chips that don't actually look like traditional chips. They're built very differently
because of the narrow band use case that's required. And that allows them to not only be much more
efficient in terms of compute per watt, but also cost per watt, and also the cross compatibility
across all these devices.
So a lot of companies have this.
Like, if you think of Apple,
they have the M-series chip
for the computers and the iPhones,
whereas NVIDIA has 12 different GPUs
for mobile devices,
for powerful general computers,
for data centers.
It's this really remarkable unlock
that we're going to start to see roll out next year,
one that enables both the Cybercab
and the humanoid robot.
There's an increasing trend of these new age AI tech companies
that, once they reach escape velocity for a bunch of consumer and enterprise-facing products, start to
vertically integrate, part of which includes creating their own custom-designed GPUs and chips.
The most recent example I can think of, aside from Tesla, is OpenAI, who announced a partnership
with Broadcom, where they're going to be developing their own custom GPUs to fuel certain
niche use cases that their future GPT models, GPT-6 and beyond, will utilize.
They haven't quite revealed what those chips are going to be facilitating exactly.
But what we do know is that they're using the AI model itself to help them design this chip.
So this thing around AI5 is the most Elon thing ever because we've seen what he's done when he's taken a hammer to data centers.
And we're seeing now what he's done by creating probably the most valuable resource going forwards for tech companies at the GPU layer.
So I don't know.
I'm excited about this, Josh.
It makes me unfathomably bullish.
My earlier $10 trillion estimate is probably too conservative after what we've just discussed.
Well, with Elon's new pay package, there is a direct incentive alignment.
One thing on the Broadcom partnership with OpenAI, the difference there is that Broadcom is a separate company,
and Tesla is a single entity.
So Open AI doesn't really have the resources in order to create their own chips in house.
And I think that's a really big difference because when there is that,
physical gap between different companies when you're designing these chips, it makes it a little
bit more difficult to do that really hardcore, like, cost cutting vertical integration that Tesla
has. Tesla's doing this. They're making their own chip in-house. They're designing it in-house.
OpenAI is outsourcing that responsibility. And that's where you'll maybe start to see a discrepancy.
So I am hopeful that they will do great, but I still suspect Tesla will do better. And Tesla also
has manufacturing prowess. So, yeah, I think if we walk away with anything from this episode is that
Both of us share the sentiment that we are unfathomably bullish for an assortment of reasons.
And this is just one of them.
The Tesla bull case episode will be coming soon, I promise.
And there's a lot more to the company.
But this is autonomy.
This is autopilot.
This is the secrets of Tesla finally unveiled for the world.
And I imagine the rest of the world, they've probably been trying to emulate this.
It's not really much of a secret anymore.
But they'll have a very difficult time in doing so.
I think that wraps it up for today's episode.
We hope you enjoyed this breakdown.
We are unfathomably excited and bullish, as I've said multiple times, about Tesla,
but are you? Let us know in the comments. Are we crazy? Is the vision that we're engaging in
around Tesla completely insane? Are robots not really a thing in your opinion? Let us know in the
comments. We're also going to be releasing one more episode this week, which is going to be the AI
weekly roundup, which we're going to cover all the hottest topics. There's some crazy stuff
that has happened this week. And if there's anything else that we've missed or that you want to hear about,
let us know in the comments.
DM us.
We're always available.
And we will see you in the next one.
Thanks for watching.
See you guys.
