Limitless Podcast - THIS WEEK IN AI: Google's New "Vibe Designer", The Rivian Uber Disaster, OpenAI's Shopping Spree
Episode Date: March 20, 2026

Google just dropped two products that sent shockwaves through the tech world. Stitch, their new AI design tool, wiped billions off Figma's market cap overnight. Their new coding stack in Google AI Studio is going straight at Anthropic's Claude Code. Meanwhile, Uber just bet $1.25 billion on Rivian robotaxis, but the math doesn't add up. OpenAI acquired Astral and PromptFu as they pivot hard toward enterprise and coding. Cursor launched their own in-house model with some very creative chart work. And MiniMax M2.7 might be the first AI model that meaningfully trained itself. We break down all of it plus Anthropic's massive 81,000-person survey on what people actually want from AI.
------
🌌 LIMITLESS HQ ⬇️
NEWSLETTER: https://limitlessft.substack.com/
FOLLOW ON X: https://x.com/LimitlessFT
SPOTIFY: https://open.spotify.com/show/5oV29YUL8AzzwXkxEXlRMQ
APPLE: https://podcasts.apple.com/us/podcast/limitless-podcast/id1813210890
RSS FEED: https://limitlessft.substack.com/
------
TIMESTAMPS
00:00 Google Stitch
06:30 The Rivian Uber Debacle
12:19 OpenAI's Shopping Spree
17:20 An Unbelievable Chart
19:49 Cursor Releases Their Own Model?!
23:05 Minimax 2.7 Is Scary Good
25:18 The Anthropic Report
------
RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213
------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
Transcript
Google's been on absolute fire this week.
They released two new AI products, one called Stitch,
which replaces every single bit of work that a graphic designer needs to do in seconds.
It also wiped out $2 billion from Figma's market cap.
It's that strong.
They also released a second vibe coding product,
which competes directly with Anthropic's Claude Code.
And dare I say, it might even be better in many cases.
All of that and much more in the news this week.
So let's hop right in.
This is a really cool product, Josh.
I've been seeing a bunch of different examples.
For example, this.
Someone drew on a notepad,
a really dodgy sketch of the website that he wanted to build.
And this Google Stitch product basically generated a full production website in seconds.
And it looks really cool.
What you're seeing here on the right is the finished product.
It's amazing.
Google Stitch is this unbelievable product,
which they describe as your vibe design partner.
It's basically an open canvas that is capable of
accepting any sort of input, like napkin diagrams or images or prompts that you either speak to it
or write to it, and curating that and turning it into an actual website, an actual visual
that's usable. So a lot of people, they're building landing pages, they're building color palettes,
all these powerful assets for their company. And this is the one-stop shop for that. So you mentioned
Figma, which is currently the most popular design application in the world.
They had a really tough time yesterday, dropping 9%, I think, in one day. They're now down 80% all time.
And what I think this is, this is kind of reflective of two things, right? It's that, well, one,
SaaS companies in general are not doing too well. The multiples, because people had
been able to project revenue so far out and now no longer can, have compressed quite a bit.
So now you can't price these things at 40 times earnings. But also, Google is one feature away from really
crushing your company. And I used Stitch last night.
I was playing around with it quite a bit.
And what we're doing is right now we're actually looking for sponsors at Limitless.
And one of the ways that we can kind of onboard people and get them excited about the show
is to create a landing page that shows all of the documentation, all of our statistics.
And all I did was I fed it our stats. And it created this beautiful landing page full of
all of our numbers, all of the diagrams, all the visuals. And then you could click a single
button, that's the prototype button, and it'll actually generate a clickable prototype of this
website. So you don't need to program in any buttons. You don't need to program anything. You just
say a prompt, feed it whatever data you have, click the prototype button, and you're on your way.
And it's incredibly powerful for people like me who don't really know how to use Figma that well.
I'm not a designer. I'm not this incredibly artistic person. I just kind of know what I want.
And as someone who would have otherwise needed to use Figma to design something like this,
I can go to Google, I speak into my mic, I tell it what I want, and it builds it for me. And it's
unbelievably powerful. So the stock market now is pricing this in. These
poor people at Figma, man. They had an acquisition offer prior to this, right? Like, these Figma employees are
really getting wrecked. Yeah, they're getting wrecked. I can't emphasize enough how valuable of a
skill Figma is or was to graphic designers. They would actually be merited and hired based on how well
they could use a product like Figma because it's used for graphic design of a lot of web UI, app UI,
and it's kind of tough to use. Like you mentioned, you haven't used Figma, and neither have I. This single feature
update from Google has now made it super simplistic. You just write in natural language and it
generates this. Heck, you don't even just write. You can just draw on a napkin and presto you have
a new website that's fully coded. But it's not just Google that can release an AI feature and
decimate an entire stock market. We've seen this before. We've got Anthropic with Claude Code and
OpenAI with Codex, which released coding AI agents. And that decimated software engineering jobs and
a lot of the computing market. But then both of them released a security review tool, which
decimated the cybersecurity stocks. And then I remember the week before that, Anthropic released a
legal plug-in which decimated the law firm stock. So the point is, we're in this weird era where a
single product feature release, which, by the way, is built by an AI model so rapidly. So they're
releasing these plugins every other week or every single week at this point decimates an entire
industry. And so the chart that you're looking at here on Figma is probably going to be something
that we see as a more common occurrence. But Google didn't stop there. They also released a
Claude Code competitor. They're going for Anthropic's jugular as well. It's a new vibe coding stack
in Google AI Studio, which basically leverages their coding agent called Antigravity. And it's a cool new
stack for a few different reasons. Number one, you don't need to spin up a command line interface or
know how to use a coding terminal. You can just access it on your browser, super simplistic,
super quick, and start coding up whatever you want. Number two, you can connect it to any kind of database.
I saw a cool example of someone connecting it to his Fitbit, with live
data feeding into whatever vibe-coded product he has within his Google terminal, which is
awesome. And the third thing is you can spin up multiple agents to work on multiple things
at the same time, which is a huge advantage that Codex and Anthropic's Claude Code had.
Now you can do that with Google, which is great, because they were kind of lagging behind on the
coding front, but now they're finally stepping up. So it's awesome to see it.
Yeah, Google is really on fire across all platforms. And I think this is a trend that we've seen
is the loud ones are Anthropic, the loud ones are ChatGPT and OpenAI. But the reality is that Google's doing
very well. And there was this take that I saw from Austin Albright, who I loved. And he said,
honestly, Google sucks at marketing its AI products. Gemini and family are dramatically underhyped for how good
the models are. And this is so true. I feel like Google doesn't really get the talking points that
it deserves because it's just not really good at selling the product. When you picks anyone off the side of the
road and you ask them what AI they use, I would guarantee it's either going to be chat GPT or
or a clot. For 90% of the people, probably more than that. And it's because there's just a marketing and
awareness problem. Google has never been great at selling their products to the people, but they've
always been great at executing on actually useful products. So I have been playing around with all of
these, except for the new design studio. I need to go into the Google AI studio and try to do some vibe
coding. But really great release from them. You showed the Figma chart, which was down 80% since IPO.
Yeah. This is not the only chart that's down 80% since IPO that we need to talk about.
Well, it's funny. You just spoke about a company that has a really good product but is terrible at marketing.
Now we're about to talk about a company that's really good at marketing, but has a pretty terrible product.
What's going on here, Josh? Listen, I don't want to bully anybody. But we got to start spitting some facts here about Rivian.
Just this morning, Rivian and Uber announced a partnership. They're working together. Uber is spending $1.25 billion, deploying it into Rivian in exchange for 50,000 robotaxis by
the year 2031.
Sorry, what?
Yeah?
2031. That's, like, half a decade from now, mate.
Ejaz, we will be lucky if we're, like, not fully merged with AGI by 2031.
Like, we're going to have these brain machine interfaces in our brain by 2031.
And you're telling me for $1.4 billion, you're going to give 50,000 vehicles that are meant to go camping.
Like, you take rivens to go camping.
You don't take these to, like, shuttle people around.
So this is an interesting deal.
And I love the deal because I love the fact.
that someone else is trying to improve the world of full self-driving, of autonomous transportation.
This is a very important problem. We just spoke about Travis Kalanick, who was the founder of Uber
just yesterday. Great episode, really fun and fascinating story. But now this is kind of the
natural extension of that. And the problem doesn't really lie in Uber as much as it does with
Rivian. So here are some fun facts that I have posted about this morning, in that Rivian lost
$3.6 billion last year on 42,000 deliveries, which, if you
do some simple math, is $86,000 of value destroyed per vehicle that left the factory. For every car
that got delivered. I don't get this. Who's subsidizing this? Why is this a sound economic decision?
It's funny you should ask. Every 12 to 18 months, a new company comes around and subsidizes the next
windfall of vehicles. First, it was Amazon. At $1.3 billion in equity plus the van order,
if you've ever seen a Rivian van driving around, that's part of the Amazon partnership,
then Volkswagen worked with them, then the U.S. Department of Energy worked with
them and today Uber's working with them. And this comes on the back of the fact that, one, they can't
make these cars at scale. I mean, they're making 42,000 deliveries a year. Tesla makes that in 10
days. BYD in China makes that probably in, like, five. So the scale of cars that they're making
is not very high. And also, they're selling this on autonomy. They need to deliver robo-taxies.
The problem is that Rivian didn't even have an autonomy division until 2024. This is a brand-new
undertaking for them. They have no idea how to actually do this. They don't have the data. They don't
have the fleet. They don't have the manufacturing facility. In fact, if you scroll a little bit further down
in this post, there's one more damning piece of evidence, which is the fact that the car doesn't
exist yet, the factory doesn't exist yet, the autonomy software doesn't exist yet. And if we look
down at the unit economics in this chart, I mean, they're just not even really comparable, particularly that left
image, which compares cybercab to the Rivian R2. And basically, the cost of this car is going to be
less than half. The cost per mile is going to be more than double. One is built entirely for
being a cybercab, while the other is very inefficient and is built basically for camping. The cybercab is
ramping in 2026, and there is absolutely no timeline for when this Rivian R2 is going to be deployed.
So that's a lot. That said, that said, Uber's a great company. They're working with Travis K.
I think they have a bright future ahead of them, but oh my God, Rivian, they really have their work
cut out for them. This is a very hard problem, and I don't know how they're going to be able
to solve it. The lengths people go to to try and take down Tesla are admirable. So let me think.
Uber's been in the news this week for another reason as well, right? They announced their massive
partnership with Nvidia, who is kind of building their own full self-driving stack and software
that kind of plugs into a bunch of Uber's fleet, like Uber's partnering up with a bunch of different
companies, including BYD, the biggest Chinese EV manufacturer that you mentioned. So I can see the
competitor forming. It's not one company. It's a stack of different companies. And it sounds like
they're all kind of like co-investing in each other. I know Nvidia invested a bit in Uber and
vice versa. So I can see a synergy happening here, but I don't see how something like
this beats a single vertically integrated company that can execute well. An emphasis on executing
well: Tesla has shown that time and again, and they have the entire supply chain and manufacturing
base to back that up. So it's going to be one version
of this attempt versus another.
My chips are still with Tesla.
This doesn't convince me in any shape or form that they're going to compete.
But, hey, maybe they might convince me otherwise going forward.
But there is a general direction or trend here that I like, which is the world's turning
into electric vehicles.
We are favoring electric vehicles over gasoline.
I know that might trigger some people that are listening to this.
Maybe not our audience because they're tech enthusiasts, they're pro FSD, maybe even pro-Elon.
I don't know.
That might trigger some people.
but I like the way that we're going.
We're burning less gas and we're going full electric.
I'm on board with that.
Yeah, I think it's really exciting.
And, like, again, noteworthy that the deal is to deliver 50,000 full self-driving robotaxis by 2031.
Like Tesla is making, what is it, I think like 4,700 vehicles per day as of last quarter.
Yeah.
So in 10 days, Tesla is producing more vehicles than that entire five-year expectation.
What is that ramp going to look like five years from now?
I mean, there's going to be humanoid robots that are going to be running around
almost as fast as these cybercabs.
So it's guiding for a world that doesn't exist, and it seems very conservative.
So perhaps it's just strategic on Uber's part.
They just want to have their hand on some sort of hardware manufacturing capabilities
because they are working with Nvidia for that software stack.
So perhaps it doesn't matter how far behind Rivian is on the software because they're
just going to inject some Nvidia hardware with the cameras and Nvidia software and call it a day.
Who knows?
Interesting story.
But there is still more on the docket.
Yes.
In terms of partnerships, I guess we've finished the partnerships.
This is acquisitions.
And OpenAI has gone on an acquisition spree.
In the last seven days, they've acquired two companies.
They're spending the big bucks from the trillions of dollars that they've raised through
various different partnerships themselves.
Now, there's a trend that I'm recognizing.
Acquisition number one, or the headline acquisition, which they announced today, is they acquired
a company called Astral.
Now, Astral makes software engineering tools, particularly for the Python
ecosystem. And there are roughly
8 to 12 million active
software professionals that use
these tools every single day. They create
three tools. One is called
ty, one is called Ruff. I don't know who comes up with
the names of these coding tools, by the way, but
the point is there's a lot of software engineers using
these tools. And a bunch of these
were open source. Now, OpenAI has acquired
these and are keeping a bunch of these tools
open source. So that's good. But what do these
tools actually do? Well, OpenAI, in the last
couple of months, have really stepped up their game
for coding AI. Their coding model is called Codex. Typically compared to Anthropic's Claude Code,
it was terrible. And then in the last month, they've really ramped up. I've got to say,
since that code red of last year, they've stepped things up. And now it's arguably better than
Claude Opus 4.6, which is Anthropic's flagship coding model. Now, Codex is really
good at generating code, but it's missing something. It's missing the other parts that are required
to code something from end to end. You need to ideate. You need to plan. You need to
figure out what code base you're going to use. You need to figure out who's going to maintain the
code. You've got to find bugs. You're then going to fix those bugs. Acquiring a company like Astral
solves all of those problems. So the direction with this acquisition, in my opinion, is pretty
clear. OpenAI is going after Anthropic's moat, which is end-to-end software engineering development,
and this acquisition might have sealed the win for them. It's so vicious. Everyone is going for everyone.
As soon as there is an edge, every other company's sole intention is to just arbitrage that
edge away. So it's creating this really dynamic environment where, I mean, Google's now getting involved,
ChatGPT and OpenAI are moving into enterprise. There is just, like, so much of this cutthroat
acquisition and negotiation and deployment of code. And there's another one too, Prompt Fu, right? This is
the second acquisition they made recently. Yeah, exactly. So Prompt Fu fills another gap in that
codex stack that I just mentioned, which is you have codex generating a bunch of code, but sometimes it can
be wrong. Sometimes there are bugs. Prompt Fu has all the security tools that can help Codex monitor itself.
So OpenAI, I'm glad to see this.
OpenAI was on an acquisition spree
over the last two years,
and in my opinion, they were buying some crazy stuff.
And now it makes sense.
They're acquiring different companies,
which may not be as popular as a headline to you
or the people that are listening to this,
but are very intentional,
are very precise to making a really well-rounded coding product.
Now, it probably bodes well to understand
that Open AI announced, I think, three days ago now,
that they're only going to focus on coding
and enterprise products,
which is a big deal
for Open AI to announce
because they've been known
to be the consumer product, right?
Everyone knows what ChatGPT is.
A lot of their user base are retail users.
So for them to say,
hey, we're going to put that aside for a second
and focus on building the best coding model
and the best enterprise focused products
is a big deal.
And this is two steps,
two acquisitions in that direction.
Now, if you're wondering,
why are they focusing on coding?
Think of it like this.
If you're the AI lab
that can build,
the best coding model, you've pretty much won AGI. Why? Because how do you build the next AI model
or the next AI model after that? You use a coding model to code it up for you and to do all the tests for
you. So they're making a very intentional strategic move here, which Anthropic saw way early on
and has been doing ever since: build the best coding model, get it to build your next AI
model, and you end up beating every single other company out there. It's pretty genius.
It's amazing. It feels like we're post-code generation already. It's like, okay, we've solved
the code generation problem, but the problem is now that the code is actually generating some errors.
There are some security holes, there are some bugs that is happening. And now this next frontier
that everyone is working on is how can we deploy the code without these bugs, without these security
issues. So creating this infrastructure on top to kind of retrospectively evaluate the code as it
gets deployed and make sure there are no issues. So we've built the offensive, now we're building
the defensive and assuming we can smush these two together into a single concurrent product,
I mean, code generation will be a solved problem, I think that's safe to say.
And I have to wonder about the strategy of moving over to Enterprise, because that really
opens up the door for someone to focus on the consumer.
And I can't help but think, Apple, please, you have so many handheld devices for people
that just want to, like, summarize their texts, read them their grocery lists, and, like,
make an order for them on Uber Eats or something.
So if they could handle that, there's such a huge
gap in the market that's ready to be taken. Yeah, I mean, well, I have a chart for you to actually
kind of drive the point home as to why OpenAI is making this move right now.
This chart is crazy. So what we're showing, for those of you who are just listening, is the AI
model share of first-time enterprise customers. So what that means is if you're an enterprise that's
looking to adopt an AI model, which AI model are you choosing? And the results are pretty clear
from January the 11th up till now, everyone's picking Anthropic.
Everyone loves Claude Code.
It's gone on an absolute heater.
Actually, Claude Code accounts for 73% of first-time enterprise AI model purchases.
OpenAI has gone down 34% since that exact point.
It is now only at 26.7%.
So I can see why Sam Altman is sweating.
I can see why the company is pivoting to just focus on enterprise and coding.
They need to win this market back because they have all the dollars and they're going to make
OpenAI money. That's the entire reason. But we have a counterpoint to this, right?
Thanks to our friends at Polymarket who have a market about this. So if you look at this chart,
which shows Anthropic clearly crushing Open AI. And then you look at this chart that we're showing
here, which is Polymarket. And the market is which company will have the best AI model for coding
on March 31st. It tells a totally different story. Open AI is sitting at 94%.
They have a 94% chance of having the most powerful AI coding model in the world.
Anthropic is at 3%.
So it's very obvious and clear that ChatGPT and OpenAI are the superior product,
and yet for some reason, enterprises are still choosing to go with Anthropic.
And that's the narrative warfare that's happening in this world of AI.
It's like the merits matter, but only to an extent.
The real thing that matters that holds the value is the narrative, the reputation,
the perceived value of these products.
And we're seeing for the first time a really big discrepancy
where everyone's on team Anthropic,
when the reality is, based on Polymarket,
that ChatGPT is actually the best product for coding.
If you're on GPT 5.4 on Codex,
you're using the absolute best product.
And here's proof.
People are putting their money where their mouth is.
That is just under a million dollars of volume.
And thank you Polymarket for sponsoring the episode,
but also for sharing some truth here.
That's a good truth bomb we can lean on.
Like, hey, the market might be wrong on this one.
This might be a really asymmetric bet, but I wish I could buy OpenAI stock right now if they weren't such a private company.
Okay, so if OpenAI and Anthropic are building the foundational coding models, there's also a company that sits on top of them, right?
It's called Cursor.
They're famous for arguably making the vibe coding trend go viral.
Everyone was using Cursor, especially non-coders, because they could just type in natural language what they wanted to build.
And then Cursor would sort everything out for you.
You don't need to figure out what model you need to use.
It'll pick ChatGPT when it needs to.
It'll pick Claude Code when it needs to.
It does all the routing for you, right?
It abstracts away all that process.
But the number one critique that has been on Cursor is
you're a $30 billion company that relies on Anthropic and OpenAI.
What if they pull the plug?
What if they block you from using the API?
Your valuation goes down to zero.
Well, Cursor today, breaking news, literally an hour ago, has an answer to it,
which is their own AI model, which they
pre-trained themselves, called Composer 2. And they're making, Josh, a very big claim here.
On this chart, which honestly is committing a lot of chart crime, but I'll get to that in a second,
they are claiming that their model is better at coding than Opus 4.6, Anthropic's flagship model.
That is... Okay, so when I first looked at this chart, I was like, oh my God,
look how far to the right Composer 2 is. Like, there's no way Cursor came out of nowhere and blew away
both ChatGPT and Opus 4.6.
And the reality is that they actually didn't at all.
In fact, this is just an inferior model to both of these.
They just committed a little bit of crime here.
There's some chart crime going on.
Firstly, it's the cursor bench score.
It's so ridiculous.
There's so many red flags with this.
Like, cursor, good for you.
I'm happy for you, Ro.
You released a model.
That's great.
But oh, my God, what is this representation?
Can you walk through the red flags?
Because there's two big ones.
that I'm seeing here.
All right.
Huge red flags.
Number one,
the benchmark,
which they're assessing
this coding score
for their new model,
is their own benchmark.
It's literally called
the cursor bench score.
So highly doubt
that it's better
than Opus 4.6
and probably GPT 514
probably crushes it
even more than it's insinuating here,
but very modest of them
to put it just up there.
But then I was looking at
this chart and I was looking
at the X axis
and I was like,
huh,
this something seems weird here.
And I was like,
how have they inverted
the entire X axis
to put zero on the right side,
they're sneakily trying to say that Composer 2 is a cheap model
whilst also simultaneously demonstrating it
and putting it higher up on the chart,
which is just insane to see.
Now, the good news is it seems like Composer 2 is vastly cheaper
than Opus 4.6. Opus comes in at $2.50 per query.
Composer 2 comes in at 25, 30 cents.
Something around that.
So it's significantly cheaper, but I just doubt how this has been benchmarked.
I don't believe that this model is better than those.
Also, because Cursor doesn't have the money to train at that scale.
Their entire valuation is probably what OpenAI and Anthropic have spent to train their last model.
So I don't understand how Composer 2 could be there.
The Cursor Bench score is pretty tough.
It's like, okay, if you're creating your own benchmarks, I'm not sure how valid or valuable those even are.
So, I mean, we'll see.
We'll see what happens with the cursor.
There's more news in the AI corner out
of MiniMax in China.
Minimax 2.5 was famously one of the best open source models that just recently released.
In fact, everyone's been using it to run their Claude Code-style setups on because you can get tokens
for very cheap.
They have just landed a follow-up model, which is even better and allegedly has built itself.
Yes.
So it's called Minimax M2.7.
So it's not even a whole version leap.
It's just like a slight iteration on 2.5.
But my God, the leaps and bounds of improvement here is pretty insane.
If I were to summarize it for you, M2.7 is pretty much as good at coding as Opus 4.6 and GPT-5.4.
Now, I just spoke about Cursor's model not being good enough. This model actually is good enough.
Now, it's important to say that the benchmarks aren't officially verified, but from people who have been using this model on OpenRouter, which is a widely accessible platform where anyone can trial models, people are saying that it's walking the walk.
It's talking the talk and it's walking the walk.
So it's really good at terminal use and terminal coding.
It's also really amazing at computer use,
but that's not the only good thing.
Minimax 2.7 built itself.
It is conceivably the first AI model
that spent a lot of time pre-training itself
and post-training itself.
So it evaluated its own model weights.
It saw where it might go wrong or how it might improve,
and it went through about 500 different experiments
or 100 rounds of autonomous self-improvement
to end up with a 30% gain over where it already was.
The humans aren't even directing model improvements anymore.
It's the AIs themselves, which is just insane.
Minimax 2.7, by the way, is handling 30 to 50% of AI research at the Minimax AI lab right now.
So we're in this weird era.
I mentioned this earlier, where the AI labs aren't really building the models themselves.
It's the AI models building the AI models.
And this is going to get recursively faster the further on we go.
So give that one a try and see what you think.
Open source models are cooking.
They're on fire and now they are self-improving.
So that is a little unnerving.
I think we had a little bit of that in ChatGPT's last model with 5.4.
Yeah.
The feedback loops are getting tighter here.
We're building faster models that are better very, very quickly.
The final news of the day is about this new Anthropic report, which is really pretty.
It's very well done.
I really enjoyed reading through it.
And essentially what this is: they asked 81 people what they thought about AI and just collected feedback.
81,000 people.
81,000 people about what do you think about AI?
what do you actually want from AI?
And the answers were pretty diverse, but also pretty interesting.
Yeah, yeah.
So the way the study was conducted was they used a version of Claude that acts as an interviewer.
And they interviewed people across 159 countries and in 70 different languages.
So it was a very diverse collection of results.
And they asked basically a few questions.
Number one, what are you using AI for?
How has it changed your life?
And two, what are you scared about?
What do you want to see more of?
And the results were pretty crazy.
They have what they're calling a quotation wall.
And I'm going to share a few quotations with people
to give you an idea of what people are kind of doing with AI.
This guy goes, I broke my leg and was lonely.
I downloaded an AI chatbot as a time killer,
ended up sharing my entire life story with it.
And he now goes, I bow my head to you.
I'd never been told anything like that.
And now it's become a place of safety and comfort for me.
So people, and this guy was a student in Japan,
There's someone else that talked about being diagnosed. Yeah, this lady was misdiagnosed with her cancer for nine years,
ended up using Claude, and it diagnosed her correctly. So there's all these different examples of people using AI that's
materially changed their life. But there were some surprising findings from this, Josh. You would think
that people were worried about AI replacing their jobs. Turns out they're more worried about
whether AI is right or not. They want AI to be reliable, but they just don't trust it enough. And they're not
even worried about AI replacing their jobs, contrary to popular belief. Yeah, I found this part interesting
because everyone is worried about job loss, and Anthropic asked 81,000 people: are you scared about job loss? When the reality is, the more honest question, the question a lot more people should be asking, is: do we even want these jobs? Because on the surface, yes, 22% of people listed job displacement as their top fear, and it was the strongest predictor of negative sentiment in the entire survey. But I have a few examples here. A software engineer in Mexico wants to leave work on time to pick up his kids from school. A worker in Colombia wants to cook with her mother instead of finishing tasks. A freelancer in Japan wants to spend less brainpower on clients so he can read more books. A manager in Denmark said that if AI handled the mental load, it would give her back something priceless: undivided attention.
And I think the reality is a lot of people are afraid to lose their jobs not because they enjoy their jobs, but because of the lifestyle those jobs afford them. It lets them live with some level of comfort. And losing a job cuts two ways: there's the loss of security and also the loss of purpose. What do you do when you wake up in the morning? I think it unlocks a lot of interesting philosophical questions about what we actually want in a world where there is less scarcity than there ever has been before, when working could possibly become optional. Not everyone will need to work. And as these jobs evolve, it's not like purpose is going to disappear. We'll always be in
search of this purpose. I just found it interesting here that they actually synthesized it in a way that
makes it tangible. It's like, okay, people aren't actually afraid to lose their jobs. Like,
most people don't really love what they do. They didn't grow up wanting to do what they're doing
today. They just want the AI to enable them to do the things they want, to unlock the freedom to be
with their loved ones, or be more productive, or read the books they actually enjoy. And the reality is, that's kind of what it's providing. So it was an interesting look past the headlines into what people actually believe. And it was pretty reasonable, pretty realistic.
Also, shout out to the Anthropic team and their research team in particular for putting all these studies and reports out.
They've been honestly the only company where I've been reading their research reports.
And some of the findings would argue that maybe Anthropic shouldn't be building what they're building.
But I appreciate the transparency.
The ethics are definitely coming to the surface here.
And I hope we see more reports like this.
But listen, one thing's for sure.
This AI stuff is happening super quickly.
We got two new AI products from Google this week that crushed Figma's market cap. We had two new acquisitions from OpenAI. We have a new model seemingly every single week now, either from China or from American labs. Everything is happening so fast, I can't keep up with it, even though it's our job to study this and figure it all out 24-7.
We hope you enjoyed the news that we brought to you this week and on this episode. There's some
banger episodes. We did one. Josh mentioned it earlier. We did one on Travis Kalanick's Rise and
Fall and then Rise Again with his new company, Atoms. And two other episodes, definitely go
check those out. And if you are listening to this, if you're watching our episodes and you aren't subscribed, please subscribe. Give us a thumbs up. Josh, any other thoughts?
Yeah, actually, one final
note on the Anthropic Report. Because as I'm reading through this, it's interesting.
You know who is most concerned about jobs in the economy? Who? North America. By a large margin.
You know who's least concerned? Central Asia. Central Asia? Wait, that's so weird.
It's a cultural phenomenon. It is manufactured panic.
And America has the strongest manufactured panic around AI, while places in Asia that are fully embracing it, doing the open sourcing, integrating it into their school systems, aren't concerned at all. Only 15% were concerned. So, just to wrap up
everything, I think a lot of this fear, a lot of this existential dread is manufactured from
what we read and what we ingest. And I just thought it was worth noting as we wrap up this
final episode of the week, like you just said, thank you for joining us.
The newsletter is poppin', go subscribe.
All the links are down in the description below.
You can just click wherever you want to go, and you can find us everywhere.
I hope you have an amazing weekend.
We'll be back at it again next week with four more episodes on all of the crazy chaotic news coming our way in the world of AI.
So have a great weekend, and we'll see you guys on the next one.
See you guys.
