Big Technology Podcast - Anthropic vs. The Pentagon, Bloodbath at Block, The Citrini Selloff
Episode Date: February 27, 2026. Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) The origins of Anthropic's stare-down with the Pentagon 2) Claude's use in the operation to capture Venezuelan president Nicolas Maduro 3) Was Claude really being used for autonomous warfare or mass surveillance, and did the military seek it out? 4) Maybe this is just a culture clash 5) Anthropic's marketing win 6) Should AI be used for autonomous warfare? 7) OpenAI raises $110 billion 8) Is that money real? 9) Block to cut nearly half its staff 10) Can AI be helpful in managing large companies? 11) Another science fiction story leads to a market panic. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Anthropic's showdown with the Pentagon reaches an endpoint.
We dig into what it means.
Block is laying off half the company, as Jack Dorsey tells everyone,
AI might be coming for their jobs too.
OpenAI finally raises its $110 billion fundraising round,
and we have yet another AI science fiction sell-off.
That's coming up on a Big Technology Podcast Friday edition right after this.
Welcome to Big Technology Podcast Friday edition,
where we break down the news in our traditional cool-headed and nuanced format.
We have a great show for you today. We're going to break down everything that's happening between Anthropic and the
Pentagon and discuss what it means for the company. And maybe the future of war and defense.
We'll also talk about the big layoffs at Block. Half the company seems like it's on the way out
the door. We'll talk about Open AI finally raising the $110 billion round. That round might grow
even larger. And of course, the Citrini sell-off. We're joined as always by Ranjan Roy of
Margins, who's back from Europe and ready to podcast. Let's podcast, Ranjan.
An AI science-fiction-driven sell-off is catnip for me, so I have to come back for that.
We love it. And it's been such a big week of AI news that that's the fourth most important
story somehow. Wait, was Citrini this, when did it get published again? It was like,
that was this week. My God. My God. Every week is a month, it feels like. All right,
let's get into the big story. This is one I've been really looking forward to speaking with you about.
We haven't talked about it on the show yet, but today is Friday,
and that means it is the deadline between the Pentagon and Anthropic,
the deadline for Anthropic to accede to the Pentagon's requests that Anthropic both give it the option to use its technology for autonomous weapons
and conduct domestic surveillance.
So this is sort of, let me just say, of course the deadline is Friday,
but Anthropic has already said no to that on Thursday.
we're going to get into what the repercussions are. But I think it might be helpful to actually
talk through what's happening between Anthropic and the Pentagon and maybe give some context here.
Walk me through it. Walk us all through it. So you may recall that the United States
captured the leader of Venezuela, Nicolas Maduro, in a raid in which the United States didn't
lose any military servicemen, and which it seemed to pull off in a remarkable way. Now,
turns out that Anthropic's technology might have been involved there.
This is from the Wall Street Journal a little while ago:
Anthropic's artificial intelligence tool Claude was used in the U.S. military operation
to capture former Venezuelan President Nicolas Maduro.
The deployment of Claude occurred through Anthropic's partnership with data company Palantir,
whose tools are commonly used by the Defense Department and federal law enforcement.
Following the raid, an employee at Anthropic asked a counterpart at Palantir
how Claude was used in the operation.
So, you know, it did seem like this was just Anthropic, you know, kind of leaking this news
that it was working with Palantir to help capture Maduro.
It's great marketing if you want to show the capabilities of your tool.
But in fact, Anthropic really didn't have much idea of what was going on within Palantir
as far as its technology being used for the raid.
And it even had to ask a Palantir employee about it.
And that's where these conversations of like the tech company going,
to the Defense Department or the Department of War now
and saying, how's my technology being used?
This is how it all began.
Yeah, I think especially in terms of how it was being used,
again, the employee saying it was Palantir layered on top of Claude
and that basically Claude has been helpful for synthesizing satellite imagery
and different aspects of the Intel picture.
The thing that just kind of jumps out to me here is like, yeah,
how, what kind of responsibility should Anthropic have in here? And this might surprise you a bit,
but I'm not going to say I'm like full Department of War, Hegseth on this one. But I mean,
the capabilities are embedded in Anthropic's model. And like what, you know, what kind of
control do they actually have over how it's getting used? It's computer vision in the end, in this case.
And it's kind of like doing the analysis on top of it. So.
So I don't know. I'm, I've been having a tough time trying to figure out where I land on this.
Where, where are you landing on this?
Well, first of all, I'm telling the story here basically
to set up this idea that I'm not really sure if there's a there there between Anthropic and
the Pentagon.
That's what I, okay, okay.
I think there might be a lot of posturing and positioning.
And now there might be an argument that these hypotheticals matter and we'll get into that.
But this was, this initially started on.
something so minor.
It was, and by the way, this is from Dave Lawler, who's an Axios editor, who responded to
my tweet about what did Anthropic do with Venezuela.
We don't know.
And they didn't know either.
And so what he said is, yeah, it might have, in the past, Claude has been helpful for
synthesizing satellite imagery and different aspects of the Intel picture.
We don't know that the technology was being used either for mass domestic surveillance or
autonomous weapons in Venezuela.
This whole thing began simply with Anthropic inquiring how its technology was being used
by the Pentagon. That's when Dario Amodei, the CEO of Anthropic, makes his way down to D.C.
And again, I don't think this was a disagreement that happened based off of real-world pictures
because the Washington Post and Semafor have both reported on what happened with the discussion next.
So this is from the Washington Post.
A defense official said the Pentagon's technology chief
whittled the debate down to a life and death nuclear scenario
at a meeting last month.
If an intercontinental ballistic missile
was launched at the United States,
could the military use Anthropic's Claude AI system
to help shoot it down?
It's the kind of situation where the technological might
and speed could be critical to detection and counterstrike.
Anthropic chief Dario Amodei's
answer rankled the Pentagon,
according to the official, who characterized the CEO's reply as,
you could call us and we'd work it out.
So basically the Pentagon's version of events is,
you know, maybe these conversations began around this Palantir thing.
They start having these conversations together about how Anthropic's technology can be used.
And somebody from the Pentagon presents this like nuclear scenario to Dario,
basically saying we might need your technology to be used quickly.
And Dario gives the most Dario.
answer ever. Yeah, call us and we'll let you know, right? You could totally see him saying this.
Now, this is from the Post. An Anthropic spokesperson denied Amodei gave that response, calling the
account patently false and saying the company has agreed to allow Claude to be used for missile
defense. So here's my read. I don't think the Pentagon went to Anthropic and said,
we need your technology for autonomous weapon use and mass surveillance of Americans. I simply think
that the disagreements about hypotheticals got so out of control that there was a culture
clash.
I mean, think about the culture clash here.
It's Dario Amodei, CEO of Anthropic.
We know how he acts.
And Emil Michael, who's on the other end.
By the way, both of them have been on the show.
I like, I mean, I've enjoyed speaking with both of these people.
And Emil has basically said that, you know, Emil is the Under Secretary of War for
Research and Engineering at the Department of War.
he says that Dario wants nothing more than to try personally,
to personally control the U.S. military and is okay putting our nation's safety at risk.
I'll just can turn it to you. This is kind of how I look at it. It's simply conflicting culture.
It's not a specific disagreement over technology that will be used in the moment.
Okay. Sorry. I didn't even realize this is Emil Michael of Uber fame, right?
Correct.
Like the mid-2010s kind of like very aggressive, brash personality that kind of like,
like the face of tech expanding at all costs,
screw the taxi unions, all the, okay.
Okay.
That's Emil.
Yeah.
So I.
He happens to be a very interesting guy.
I've enjoyed speaking with him, but sorry, go ahead.
No, no, no.
That's what,
so I do agree.
And it's rare that, when the topic is like autonomous weapons killing civilians,
I would say there is not a there there.
And it's just kind of like, as you said, culture clash.
I agree.
That's really what it feels like.
I think the other thing I keep thinking about as I'm reading through all of the stories coming out on this one is it's so rare in the past when you would hear about some kind of like, you know, like nebulous defense department technology, war games, whatever else.
As an individual, you would have no real concept of what that might look like.
To me, one of the most fascinating parts is we all use Claude.
We all like understand how AI works.
So like actually like thinking through, I don't know, I kept thinking like what's the query?
What's the analysis?
It's like, Claude, how do I capture Maduro?
Here's like 10 documents.
Give me a strategy.
Like I don't know.
I keep trying to think through like what does it actually look like?
I don't know.
Right.
Yeah.
I mean, my belief here is that Palantir did the heavy lifting on Maduro,
and then maybe someone was using natural language to synthesize some information there.
I mean, the jokes have been amazing on X, right?
It's like you tell Claude, you know, Claude Code: capture Maduro, make no mistakes.
And it just goes out and does it.
Like, we're not at that point yet.
And we even joked last week with Aaron Levie that, like, the fact that people were like, yeah,
Claude has been used for warfare and, you know, was responsible for the capture of Maduro.
And everyone's like, yeah, of course they are.
Not asking any questions has kind of been a
testament, in fact, to the company's capabilities.
But I think its involvement in this specific operation has been blown completely out of proportion.
Just speculation reading between the lines here.
But the other side of the argument is that, well, these hypotheticals do matter.
And you want to have a defense contractor because Anthropic, working with the Department of War, is a defense contractor
that won't say no, that will basically be ready to, you know, do
what you need them to do when you need them to do it.
And this is from Sean Parnell, the Pentagon's chief spokesperson.
He said the department had no interest in conducting mass domestic surveillance or deploying autonomous weapons,
but wanted to use AI for all lawful purposes.
This is a simple common sense request, he says, that will prevent Anthropic from jeopardizing
critical military operations and potentially putting our warfighters at risk.
Again, this just kind of goes back to like, you know, this is obviously
not anything in theater right now. However, yeah, the question stands, do you want to even make
the Pentagon think that you might say, you know, in a moment of war, we're not ready to go
that far? I don't know what's your perspective. I'm going to present to you a hypothetical here.
Alex, if you are the CEO of a massive AI research lab that has some powerful foundation
models, do you allow your technology to be used for autonomous warfare?
Because, yeah, do you?
I don't think so.
I don't think so.
But I'll tell you what.
I'll tell you what I will do.
I'm going to preface this by saying,
I think that Dario has real values and Anthropic has real values.
And they've mostly stuck with them.
And I give them credit for doing that.
However, if I have this moment where the Pentagon is,
let's just say this,
I have this moment where the Pentagon is saying,
we want to use it for all,
your technology for all lawful purposes.
And I say, oh, all right, just, you know, don't use it for autonomous warfare or mass surveillance.
And they're like, just sign the all-lawful, you know, decree here.
We don't want any caveats.
It'll just make it easier for us.
I might be tempted to blow that out of proportion.
I might be tempted to, let's say, release a blog post and say, no freaking way.
I'll never work with the Pentagon on these things.
Lo and behold, Thursday night:
Statement from Dario Amodei on our discussion
with the Department of War.
Dario said, and this is, I love the way that Dario writes.
I think he's a great communicator.
He says, I believe deeply in the existential importance of using AI to defend the United
States and other democracies and to defeat our autocratic adversaries.
Anthropic has therefore worked proactively to deploy our models to the Department of War
and intelligence community where we were the first frontier AI company to deploy our models
in the U.S. government's classified networks, the first to deploy them at the national
laboratories and the first to provide custom models for national security customers.
He says in a narrow set of use cases, we believe AI can undermine rather than defend democratic
values. One is mass domestic surveillance. The other is fully autonomous weapons. Now again,
to our knowledge, these two exceptions, Dario writes, have not been a barrier to accelerating the
adoption and the use of our models within our armed forces to date. Regardless, he says,
we are not going to change our position.
We cannot in good conscience accede to the Pentagon's request.
I don't want to reduce this to public positioning, but I'm going to, just for this sake of argument.
It's almost as if, even if the Pentagon's request was reasonable, they didn't need them to agree necessarily to these demands.
And Anthropic just ran with it.
And now they're going to position themselves as, you know, again, once again, hammer home that branding,
the ethical company, the company that works for you, the company that has values.
The company is not growth at all costs.
And, you know, who knows?
Because there are some consequences that that could happen and we'll talk about them.
But I don't think it could have been a better situation for Anthropic than the one that they were just handed.
I think we've been hanging out too much, because my affliction of looking at everything through a marketing and communications lens
seems to be rubbing off.
Because I'll admit, like, as this is all happening, that's the first thought that's
going through my head.
And I'm like, oh, my God, this is gold from the standpoint of Anthropic.
We're the good guys.
Do we say, like, do you not support mass surveillance?
Do you not support fully autonomous weapons, potentially killing civilians?
But, but it, okay, let's separate those two out.
Mass surveillance, bad.
Fully autonomous weapons, I don't, if that's the direction that warfare is going, I feel like
that's just going to be part of whatever China or other countries are developing anyway.
So it's just going to be kind of as awful as it may sound, it's going to kind of be standardized
unless there's some kind of like global agreement to actually ban autonomous weapons.
But again, as someone who works in agentic AI, the more fascinating part of this,
to me is like having autonomous agents in anything, the assumption is that they can be controlled.
And this is where I think this is actually kind of weird for Anthropic to be pushing this hard to say,
if you are actually kind of like at least hinting or implying that there is this world where they're not being controlled,
and then even in my day-to-day enterprise AI workflows with autonomous agents, can I actually rely on them?
This whole promise of autonomous work being done and agents running around doing all different
types of work, it does tie to the autonomous weapons thing. There has to be like at least this
idea being pushed that they can be controlled. And it's weird to me that Dario is kind of saying
actually they can't. I'm sorry. Like, isn't there a difference between Claude Code
running a command and, you know, bugging out on your website and then having to go out and fix it when
you're like, this isn't working to like potentially conducting a military operation where people
are going to get killed. But it's the same underlying kind of process. It's the same underlying
technology. That's what I mean. That like, yes, the scale and the gravity of it all is kind
of terrifying. But it's the same way with anything, autonomous self-driving cars. Like at a certain
point, do we all accept that autonomy is good and predictable and will work? Or do we say there is this
level of like uncertainty that does lie around it, that it can go haywire and kill the wrong
people or, or I guess actually is the argument that not that it will go haywire and go kill
a bunch of random people? Or is it, are they implying that the Department of War, it's still
weird for me to say that, actually will use it for nefarious purposes and that's the risk?
What do you think is implied in there?
Again, like, my perspective here, and I understand why Anthropic would not want to sign this away
to the Pentagon, to like have full use and whatever, because if a company comes in
with values, it has its values. But again, like, I don't know if there's a concrete war here, so I'm trying
to say, yeah, I think it's mostly just like, you know, a blanket no, this is against our values,
and let's go. And even Emil Michael was on, I think he was on Fox Business or Fox News,
saying that, like, we're in the middle of this discussion and this blog post comes out.
You know, again, like, I do think, and again, I don't want to feel too cynical about this, but I do think that this is, you know, sort of a PR opportunity.
But back to your point, I mean, like, you know, I would trust, think about this, I would trust a Waymo.
I would get into Waymo. I would trust it to drive me.
I know it's probably good on like 99.7% of rides or whatever, much better than humans.
But I don't want Waymo to be the police.
Like, I'm not giving Waymo a gun and saying, if you see a crime, go arrest somebody.
Like, I don't trust it to that extent.
That's the difference I'm trying to make you.
Okay.
No, no.
Okay, I'll give you that.
Waymo as basically Robocop coming to life.
Get in the car.
Anti-Robocop here.
Yeah, okay, I can see that.
But going back to the PR standpoint, it is.
Like, I was trying to be level-headed and take this kind of
genuinely seriously, but this is just like, and I mean, I have to imagine we'll get into
OpenAI. And I mean, Sam Altman even came out and said that, like, the company would potentially
be working with the Pentagon. I think, uh, same restrictions, by the way, with OpenAI. And Sam is like,
well, we're just going to hope that we can defuse it. We'd like to try to help de-escalate things.
Yeah. That's just OpenAI, you know, OpenAI agreeing with my statement that this is not a real
agreement yet. And so therefore, let's go ahead. Where's Sundar going to fall on this? That's what I
want to know. He's going to sit back and he's going to be like, we're printing. We don't have to be
involved in this. This is great. This is why you got to have a monopolistic ad business,
just printing cash and making Gemini better. Don't have to worry about autonomous weapons.
I mean, you can't pay for marketing like this. I'm sorry if this comes off too cynical,
but this is in Axios. This is from a defense official: The only reason
we're still talking to these people is we need them and we need them now.
The problem for these guys is they are that good.
You can't pay for that type of marketing.
That's why it is.
It's like the, but it is interesting how, okay, yeah, again, not trying to be too cynical here again.
Like Claude catches Maduro, great headline, great marketing, kind of just like quite exciting, which is, but hey, that's the narrative and the meme.
But then, yeah, Dario, realizing this is a great opportunity for us, both to be the ethical AI company, to kind of, I mean, it's a pretty good positioning nowadays if you're like setting yourself up for that conflict, to have the Hegseths of the world kind of like coming at you on Twitter. That actually can help your case.
So, so, yeah, I do think, yeah, again, I never would have thought.
On the subject of autonomous warfare, I would say it's a meh story, but I agree on this one.
And they just raised their giant round.
They don't need this marketing.
They already have enough, but good for them.
If you think they don't need the marketing, I think you're underestimating the level of
competition right now.
Every bit of marketing helps.
Yeah.
I mean, think about it, it almost follows the same line as the Super Bowl ad, right?
Like, Claude won't ever do ads.
Claude won't, you know, kill you in your sleep.
That should have been the Super Bowl ad.
I'm just running the fan fiction of this episode.
But I think that like, ultimately, like, there is still, I think part, I don't want to say it's entirely cynical marketing.
I think part of Dario really does believe that this is not the uses that Claude should be used for.
And I think on the Pentagon side, you can totally see their side as well, where they're like, we don't want to be.
in a mission-critical moment and have Dario say, we're not ready to do this.
Actually, so where do you fall on model companies like regulating, I guess?
I don't know if that's the correct word, but use cases.
Again, companionship, AI erotica, we've debated in the past.
An open AI has one view of it versus others.
Like, do you think as this evolves, and I mean, it almost kind of comes down back to the
the great content moderation debates of Facebook and others, like, do they have the responsibility
to do that moderation? Because I do kind of think they do, but this is just going to get
messier and messier and more and more complex. Yeah, I think they do. I mean, I think that if you're
a private company, you have, you know, at least the right, if not the responsibility to try to
make sure your product is used in ways that you think are beneficial to society.
I don't see what the problem is there.
That's so idealistic and optimistic.
Well, allow me to take this moment to maybe not be as cynical as I've been in our first
few minutes of this show and say, yeah, I think that's, that is important.
Tech companies have some responsibility to society writ large.
I mean, I know it's controversial to say.
That's a hot take, but.
I can see people just hitting play on something else right now.
But that's where I'm going to stand there.
I'll die on that hill.
But this is not without potential consequences for Anthropic.
Let's talk about it.
In response, the Pentagon might now label Anthropic a supply chain risk.
And Pentagon officials, per the Journal, have reached out to defense contractors, including
Lockheed Martin and Boeing in recent days, to gauge how much they use Claude.
I love that scene, by the way.
It's like, can you imagine the Pentagon on the line with like Boeing and being like,
how much Claude do you use?
Because we might ban it for this hypothetical reason.
Now, are we not going to be able to make planes anymore?
It's just crazy that ChatGPT came out three years ago and we're at this stage already.
Critical infrastructure right now.
Yeah.
Yeah, I think, I mean, that's the, the politics element.
I guess that part is almost more terrifying to me in the actual near term. Again,
like, if that level of kind of like tit-for-tat Twitter fighting can actually lead to, you know, some kind of supply chain risk designation actually kind of like derailing a private business, that I don't like.
They might also invoke the Defense Production Act, which would require Anthropic to supply its technology to the Pentagon the way the Pentagon wants, which would be.
Unprecedented. Again, I mean, the tweets coming through the timeline this week, and I know Twitter's not real life, but a lot of people in the AI world, a lot of buyers are paying attention to this. Here's one from another Twitter user: the best proof that Anthropic has the best internal models is that the Pentagon would rather invoke the Defense Production Act than use someone else's AI.
Do you think they get rate limited?
Maybe that's what actually started it.
That's what, actually, they were like about to capture, actually, this might be too dark, but I was going to say that's why we didn't invade Iran yet.
They're getting rate limited on Claude.
I don't know, Ranjan, I'm not going to go there.
I'm going to take that one back.
I don't know, maybe by the time we publish this podcast... Anyways, let's move on to lighter news like funding rounds.
All right.
OpenAI announces a $110 billion funding round with backing
from Amazon, Nvidia, and SoftBank.
So it's finally here, the round we've been talking about. Man, that was quite a transition.
The round we've been talking about has arrived.
It is bigger than expected.
Remember, it started out at $50 billion,
and then it went to $100 billion.
Now it's 110.
And this is from CNBC.
Other investors are expected to join as the round progresses.
So it's not even over.
We just have these big commitments from these three big companies, $50 billion from Amazon.
I mean, that is,
that's wild. Speaking of
Dario, I wonder how he's feeling
now that, you know, one of his biggest partners
in Amazon is making a deal like this
with OpenAI. We won't
spend too much time on it, but Ranjan,
your takeaway from the size
of the round, who's in, and
any other things that. I love how
we're going to glaze over the biggest funding round
of all time. Only in February
2026 could a $110 billion round
in a private market actually
be like, let's not spend too much
time on it. But I think
I want to call out, and I'm very glad about this, that OpenAI is making, again, this funding round
the most OpenAI-ish thing possible, because while $110 billion is the headline, it is impossible to tell
what the actual round is.
Because, again, from The Information: Amazon's decision to invest up to $50 billion in
OpenAI could hang on whether OpenAI goes public or reaches a loosely defined milestone
known as artificial general intelligence.
That was my favorite part, because we lost that benchmark with Microsoft
and OpenAI, but now it's back.
And this idea of like declaring AGI actually potentially unlocking tens of billions of
dollars, it's back again.
And again, we have our benchmark here on the podcast.
Waymo's operating in New York City.
Officially will mean AGI is here.
But I think, I don't know, did you, like,
the complexity of the funding, do you call this, do you genuinely call this a $110 billion round?
There's so many stipulations here.
No, it's not, it's certainly not that.
It's, I think it's a very important point that you're calling out here.
I mean, remember when OpenAI and NVIDIA said that they were going to do $100 billion together.
And it was another one of these, well, it's $10 billion now.
And in time, it turns out that $100 billion was actually $30 billion, which is what
NVIDIA will be investing.
Although Jensen Huang said, we hope they invite us, you know, to come back.
But certainly an intent to invest 100 is very different from an actual action to invest 30.
To me, the interesting thing here is where, you know, what happens as a result of these major deals.
So this is from CNBC. OpenAI said it's expanding its existing $38 billion agreement with Amazon Web Services by $100 billion over the next eight years.
So it's going to get 50 from Amazon, but it's going to put back either 100 or 138.
That's why I'm so happy about this announcement.
It's got everything.
It's got kind of like nebulous benchmarks and tranches.
It's got circular funding and financing.
It's a classic Sam funding round.
Again, yeah, like, as you said, potentially putting in 50, potentially over eight years,
getting that $38 billion up to $100 billion. It's perfect OpenAI funding.
That's right. Yeah. I mean, Sam was on CNBC earlier today and basically said, look,
this is only going to work if the revenue goes up, answering the circular funding thing.
And I think he's right. It's only going to work if the revenue goes up.
That's basically it. That's good business right there.
Good business. But also like, yeah, of course.
If the exponential continues, then it'll continue to get the money.
And it'll make sense.
And if it doesn't, he won't get the money.
Yeah, I think, I mean, I definitely want to get into kind of like where you see OpenAI's business at this exact moment.
But, but like one thing that was also interesting to me as well was there wasn't a lot of talk around like where this money gets invested into.
Like in the old days of a year or two ago, I feel like
any of these big funding rounds would really kind of center around getting to that next generation
model, building data centers. It was still a bit, I don't know, did you see anything? Like,
it still wasn't, there wasn't a big kind of flagship push around what this money actually is going
to mean to both OpenAI and the ecosystem at large. Yeah, it has to be infrastructure, right? And just
the support for inference, especially when you're working with partners like Amazon and
Nvidia. I think that's not an accident. And OpenAI basically told us what the game is, right? They're like, if we are
able to build more infrastructure and serve more demand, we're going to make more money and we'll keep
building until that proves to be untrue. So to me, that's just one step here along the way on that
front. Well, but where do you view OpenAI competitively right now? I'm curious. I'm just going to say,
like, it is crazy to me.
My ChatGPT usage has declined dramatically.
Between Gemini, actually for like day-to-day, just basic stuff, I'm using a lot more.
Like it just, again, and we've talked about switching costs and moat actually.
And I know like the idea of memory, which everyone has been talking about for a long time,
is supposed to kind of start to build that moat.
But it's still such a reminder to me of how brittle a lot of these kinds
of, like, foundations are. That being said, again, what is it, 900 million users now?
Yeah, they just said today, 900 million users. Yeah, 900 million. Yeah.
So they're definitely on track for a billion by mid to late March. Yeah, exactly.
And again, ChatGPT is, like, the Google, kind of like trademark brand name, whatever, of, like,
AI for the average person. It's like a verb. But still like I actually was looking this up the other
day because I was curious.
Back on this show a year ago, there were headlines:
Anthropic was screwed.
Like, usage was going down on the consumer side.
And again, massive credit to them.
They had such a clear bet.
And we outlined this very early that, like, they were going all in on coding API.
They were giving up the consumer product, basically.
And it worked brilliantly for them.
But again, 12 months ago, 14 months ago, the narrative was very strongly: Anthropic is in a bad position.
It's kind of where Perplexity is now.
OpenAI is just dominating, Gemini's on the rise.
Two years ago, Gemini and Google are dead.
Like, it just, yeah, it keeps reminding me just how quickly things can shift in this market right now.
Yeah, it changes fast.
I mean, obviously, like, when people think about generative AI, they think about ChatGPT. That's what you hang your hat on right now if you're OpenAI.
Some of the other bets, SORA, you know, haven't worked exactly according to plan.
We still don't have the device.
But I think that basically you have this two-pronged strategy.
You're growing ChatGPT from the ground up.
It's the leading consumer product.
And you use that to leverage and move into enterprise.
And, yeah, they're making their move into coding.
And it's very interesting now what's happening in the coding market.
Why do I say that? Because you have Claude Code, which, you know, I've been using like crazy. I'm hitting my limits every couple hours, just in Claude Code. It's amazing. And there's some other players like Cursor that are, you know, starting to go up and down as the two big boys get involved.
So what's happening with Cursor, Ranjan? Okay. So I wanted to highlight this story. There was one tweet from a Kyle Russell around how the company was removing 90 seats, and basically an onslaught of people were like, hey, can you unsub me from Cursor? Yeah, I'm not using it anymore too.
And I think, again, this, like, momentum, the speed and, like, inflection with which people can shift, or this idea of moat, it's just so fascinating to me. Because, again, a year ago Cursor was synonymous with autonomous coding or code gen, like any kind of AI-driven coding,
and just how quickly people can switch, how the moat was never really there.
And so I think it raises two questions.
Like, one, does everything kind of, like, only condense to the foundation model labs?
And my argument that it's the product, not the model, is completely wrong.
And if that happens, I will say that.
But I think, like, to me, that was one side.
The other, that the cursor story.
I mean, we don't know definitively, like, where things are internally for them from a revenue perspective.
But, like, I hope that this starts to make every claim around ARR, or, like, annualized recurring revenue, go away. Because anyone who knows, it's, like, taking one month of data and extrapolating it times 12 when you have a good month. I mean, I've seen people joking, but maybe it's the case you have one week or one day and times it by 365 and call it ARR.
I think, like, in the market,
actually trying to understand stickiness over time
is going to become much, much more of, like,
a valued thing right now.
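To make the annualization arithmetic Ranjan is criticizing concrete, here's a toy sketch with entirely hypothetical numbers: a startup with one spike month can headline a figure far above what a full year actually produced.

```python
# Sketch of the ARR math discussed above. All figures are made up for illustration.

def arr_from_month(month_revenue: float) -> float:
    """The press-release version: one good month times 12."""
    return month_revenue * 12

def arr_from_day(day_revenue: float) -> float:
    """The even more aggressive version: one good day times 365."""
    return day_revenue * 365

# A hypothetical startup: flat $1M/month, then a single $3M spike in month 12.
monthly = [1.0] * 11 + [3.0]  # revenue in $M

headline_arr = arr_from_month(monthly[-1])  # 36.0 ($M) -- the announcement number
trailing_12 = sum(monthly)                  # 14.0 ($M) -- what the year actually brought in
```

The gap between the two numbers (36 vs. 14 here) is exactly why stickiness over time, rather than a single extrapolated period, is the more honest measure.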
Yeah, I think that's great.
I mean, I think that there's definitely been some inflation when it comes to ARR, right?
Like, certainly companies are putting releases out there about ARR
and you have to, like, you know, kind of shake your head a little bit about,
Is that really the number or not?
But I think going back to your previous point,
I think that's the most important point,
is that when it comes to AI applications,
because of generative AI's general purpose nature,
you always have to worry about one of the big companies gobbling up what you're doing.
And certainly that's what's happened with Cursor, I think,
is that Claude Code, right, which was initially, like, you know, seen as a frenemy, or maybe just a different version of Cursor for different use cases, even though it's not an IDE, maybe.
It's fully competitive now.
And people are just working within Claude Code and they're working within Codex.
And so that's what's happening
is the big models are just gobbling up
smaller competition.
So question, 12 months from now,
is Anthropic still the king of the hill?
Or have things shifted again dramatically?
Because if we're looking at, there are some kind of directionally similar things. I mean, OpenAI is growing and raising more money, but still, kind of, who owns the narrative and conversation, and, kind of, like, the next wave of innovation?
Do you think it's still anthropic six months from now, 12 months from now?
On coding?
Yes.
No, no, just overall.
Yeah.
Well, I would argue that they're not, like, I mean, clearly they're ascendant right now, but I would still argue that OpenAI is the leader.
But I think coding is going to be very interesting, because that is the use case right now. That's clearly economically valuable and exploding. In fact, I think
some of the numbers on Anthropic paid subscribers are quite impressive. Actually, I have them in my inbox.
You want me to read them? Read them to me. All right. Let's see. So, let's see. Free users on Claude are up more than 60% since January, fastest growth in Claude's history. Daily signups have tripled since November. Every single day, uh...
Okay, I'm just sorry. I'm just making sure that this is on the record.
Every single day this week has consecutively broken the record for Claude's largest ever day of signups.
And paid subscribers have more than doubled since October.
People are staying and upgrading because they value Claude's most advanced capabilities and consistently say it sharpens their own thinking.
So it's more than a Super Bowl bump, they're saying.
It's adoption from months before the ad campaign.
So they're doing really well.
I think that the coding fight is going to narrow, but a year from now, I think they're still
going to be in the lead, and I still think that will be the biggest use case for these models
as the vibe coding stuff continues. What do you think? Well, I'll differ again. As the company I work for, Writer, autonomous knowledge work is where we play. Claude Cowork is in there. Manus is kind of one of the only other competitors, really. I'm still standing by my prediction that that's going to be the big trend of the year. It's self-interested, I'm talking my book, but really, and I got to
say, I was, it's been interesting to me because, like, how Claude Code has been the entry point
for most people, because, like, our company, we only work with enterprises, so it's not a
consumer product, so just not as many people feeling it, but I'd just say, like, I think you
get it now, right? Like, in Claude Code, it gives you that feeling of, like, what I've been
trying to say since October of, like, actual autonomous, agentic work. Like,
agents out there doing stuff for you that actually works with many steps and you feel it now right
I heard you talk. I feel it. Yes, but I also think there's a long way to go, although it's done a great job building some internal tools for me, I have to say. So, okay, you're pro-agentic now, you're coming around. I'm feeling the agentic. I'm feeling it, but I'm not, I haven't fully drunk the Kool-Aid like you have. Okay. Go ahead. I was going to say, what I was
actually come away with is, the words agent and agentic were so beaten down and kind of mischaracterized for all of 2025. That's why everyone just has a hard time saying the word agentic. Whereas in reality, what's happening now actually is agentic. This is what we were promised. But we just heard it for so long, and it wasn't working, or none of it made sense, that that's why people are uncomfortable saying it. No, here's a, I think this
is a good distinction of where we sit. You are happy to, you believe that AI will, this
agentic stuff will eventually be good enough to take the shots. And I'm like, do not take the shots.
Let us, if we're going to have to shoot, let a human do it. Okay. Maybe that is the difference.
Yeah. Okay. I think you're far too trusting of it. But anyway, we'll go go down that rabbit hole another
day. Or you can respond if you want.
I think that that's that's a reasonable characterization.
That's going to, I like, I like our regular standing debate around is it the product or the model, probably a little more palatable than should AI take the shot or humans take the shot.
These are both real questions.
These are both real questions.
Yeah.
All right.
I'm going to take a break and come back.
We're going to come back after this and we're going to talk about Jack Dorsey laying off 4,000 at Block, and then we're going to talk about the Citrini research paper in the time we have left that caused this sell-off in the market. We'll be back right after this. And we're back here
on Big Technology Podcast Friday edition. All right. So the news, from SFGate, is that Jack Dorsey is laying off 4,000 at Block and saying others will do the same within the
next year. So it's not that the size of the layoff is massive, which it is, but the real headline here
is that Jack Dorsey has said AI has helped us become so efficient that we're able to lay off half the company and be as productive. And by the way, this is coming for others as well. What do you think about that, Ranjan?
This one killed me, I got to say. I like, and there's, I have two kind of minds here.
One is, and again, as we've been discussing today, like everything is comms and marketing in my mind.
And, like, it really just feels like, again, Block's revenue growth has been slowing. Profitability in 2025, it wasn't a bad year, but certainly, like, they were a company that saw incredible growth during COVID.
And it's been slowing.
So like the stocks down 75%.
It's overall, business is not great.
So to say it's AI kind of bothers me.
It feels like a cop-out versus, listen, like a lot of big tech were a little bloated.
We over-hired.
We're just trying to right-size the business a little bit.
Like, to me, that's what this is.
And again, I'm saying that as someone who genuinely believes workforces are going to get transformed
and there's going to be some problems and, like, dislocation in the industry, I felt this one was just kind of Jack saying, AI, when he has to lay off a bunch of people.
Right. And I think we should give the context here, right? So Block is profitable. It's a profitable
company making these moves. And still, it's up 14% today after the news. So here's what I'll say
about it. This is not the first time that Block has been doing layoffs this year. Block did
layoffs earlier in February. This is from Wired after hundreds of workers were laid off in early
February from Jack Dorsey's block, some of the people remaining at the company say the internal
culture has devolved to a point where it's where performance anxiety is running rampant,
using generative AI is required, and overall morale is rapidly deteriorating. Listen to this.
Block employees are currently expected to send an update email to Dorsey every week,
who then uses generative AI to summarize the thousands of messages. I don't know. I don't
know if this is the most effective use of the technology. And I kind of hate to make the argument
here, but is Jack onto something? And are we going to see more of this? Because the idea that a CEO
could get a weekly email from all their employees, thousands of employees, throw them into a generative AI engine, get a feel of what's going on in the company, and that his reports can do the same with their legions of employees and become more effective through that.
Is that kind of where this technology's heading?
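The workflow described here, thousands of weekly updates reduced into one digest, is essentially hierarchical map-reduce summarization. A minimal sketch under assumptions: `summarize` is a stand-in for whatever LLM call would actually be used (the stub below just keeps first sentences so the sketch runs without any API), and the batch size is an arbitrary placeholder for a model's context limit.

```python
# Hypothetical sketch of map-reduce summarization over weekly update emails.
from typing import Callable, List

def chunk(items: List[str], size: int) -> List[List[str]]:
    """Split the inbox into batches small enough for one model call."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def digest(emails: List[str], summarize: Callable[[str], str], batch: int = 50) -> str:
    """Map-reduce: summarize each batch, then summarize the summaries, until one remains."""
    layer = emails
    while len(layer) > 1:
        layer = [summarize("\n".join(b)) for b in chunk(layer, batch)]
    return layer[0]

# Stub "summarizer" so the sketch runs offline: keep each line's first sentence.
def stub(text: str) -> str:
    return " | ".join(line.split(".")[0] for line in text.split("\n"))[:200]

# 200 hypothetical weekly updates reduced to a single digest string.
report = digest([f"Update {i}: shipped feature {i}." for i in range(200)], stub, batch=50)
```

Note the bias problem raised next in the conversation lives in the inputs, not the pipeline: if every email is self-promotion, the final digest faithfully summarizes self-promotion.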
I mean, it was interesting when I read that.
Like, on one hand, to me, that actually would be like the wrong process,
because if you're asking everyone to give you essentially sell themselves and the work
they're doing on a weekly basis and using that as your like foundation for understanding
the state of your company, it's going to be biased.
positive. Like, that's not actually good because everything's going to be like, oh, did amazing things.
Everything is great. And then you summarize it and Jack's just sitting there thinking everything's
great. So I think, but it's funny to me that you took that as a negative. Because again, like, it is kind of, you don't think it's cool at all? The idea that now you can kind of manage in different ways, that you can act at scale that you never would have thought possible before, like really getting a view that's semi-true, at least versus having, like, a bunch of people spend a month doing a report, or you're going to have an all-hands meeting or a board meeting where they update you, that you can actually have more real-time feedback? Like, you don't like that? I don't think I'm criticizing that. Uh-oh, you're not? Let me tell you this. I think instinctively, I, you know, I'm on the side of the workers here. I think it's kind of gross that a CEO is, instead of talking with them, having AI summarize their notes, and making them write these notes.
You just think about how much work you have to write and that gets fed in. Although you're probably writing with AI. You're almost certainly using, your agent is talking to Jack's agent is what's happening here.
But ultimately, you know, I have to, I think I have to get past that.
I actually think that if I was running a company of that size, I would do this.
I really think it is a great way to stay on top of a company, actually.
I don't think it makes the company 50% more efficient.
And it's like natural to get these quotes that I read from SFGate from employees who just don't like the mandating of AI and also aren't happy that half the company is leaving.
But maybe the truth lies somewhere in the middle.
And I think we should really focus on the warning, so to speak, that Jack gave to everybody else saying that, you know, I just think we're early and I'm going to be honest.
about it. And I expect many others to do the same thing. Because I got this note from somebody who's
worked with Jack in the past. This block news is going to cascade hard. Jack just put the question
to every CEO in tech and maybe beyond of whether they are carrying dead weight that could be shed.
If a few more tech companies pull moves of this magnitude and we know they will, then the
odds of it crossing over increase tremendously. I would say it sounds right. And probably we're going to
have many tech companies, CEOs saying, and by the way, they know that Jack can run a bloated company.
I mean, look at what happened with Twitter, but saying, you know, maybe we don't have to do 50,
but can we do 20? It's a little bit scary.
I think it's scary, but when the hiring at these companies was up by 100% or 200% in a
condensed amount of time based off of like extrapolated revenue and growth numbers from COVID,
we weren't all complaining.
I think, like, I think I'm almost, cynical isn't quite the right word, but like, I don't
know, I know a lot of people at a lot of tech companies that get a lot of money for not
a lot of work.
It's become the case increasingly more so over the last five years, seven years.
The Googles and the Facebooks for a while, I would say, but, like, there is, like,
it's one sector of the economy that became the most valuable sector of the economy for a 15-year
period or whatever it is, 10-year period, and it became bloated. And now, like, to me,
AI, what it's doing, it's just kind of like the value of work and software and technology
that we had assigned to it over the last decade is not the same. And that's happened to many,
many other industries over time, and it causes disruption. But to me, that's almost, like, natural business cycle rather than, again, I'm, like, not too doomer about it. Maybe that's short-sighted, but, like, it's not that different than other shifts that have happened over time. Yeah, like, I guess
you could, you could condense like two, you know, 75% effort email jobs today into one email job
if you have generative AI.
But, like, I also think, you know, as we have this conversation,
I don't think either of us are going to discount, like,
the fact that there's real people in these jobs,
and this really sucks.
And, you know, especially now,
we're like in a no-hire, no-fire time period
that for every person that gets laid off at a place like Block,
it's just like, it's a disaster in each one of those cases.
And I don't want to, you know, leave that out.
No, no, that's the problem about all this. It's like, I don't want to shortchange, I mean, getting laid off sucks, and, like, it is just, it's sad. But it's also, like, do Metallica and Benson Boone need to play Dreamforce? And is that, like, the sign of a healthy industry, or an industry that might be getting a bit soft?
I ask you.
Benson Boone, or just one of them, Metallica or Benson Boone, but.
Look, as long as we keep Metallica, right?
We got to keep Metallica.
I don't know.
I'd rather, that always kind of saddens me that they went, that they're playing a dream force.
Benson Boone, put him up there.
Not Metallica.
This is going to be embarrassing.
I don't even know who Benson Boone is.
He's a guy who did the flip at the Grammys now, no?
No idea who that is.
All right.
So maybe you cut him.
Maybe you bring, keep him and you'd cut Metallica and keep your employees.
That would be my preference, maybe.
I think I'm going to create an agent for you to keep up better with pop culture, Alex.
Well, I would like that.
Yeah.
I would not.
I would have to filter that agent's emails.
Just too much.
But by the way, so here's where this becomes a real problem, as if every company, I don't know.
I actually don't think Jack is right.
We're going to talk about it in a moment.
But even with these AI tools, the software engineering employment numbers are going up
fairly quickly, which is fascinating. But if Jack is right, in the case that he is,
that could be rough. I mean, if you think about every company coming out and doing, I mean,
we've seen Amazon do these big layoffs, right? Like, every company comes out and does a 20%
layoff. That is, that's tough. That's tough if you're a tech worker. But the amount of money
everyone in tech has made relative to every other industry over the last 10 to 15 years, like I think
that's going to be actually one of the more interesting things politically how this all plays out, I think,
is that it's targeting, this is causing a disruption in a sector that got a lot bigger,
but is still a small percentage of the overall kind of like employment in the economy.
So do you think people will be as strongly reacting or up in arms around this, or it's going to kind of be like,
and again, as someone who works in tech, I'm saying this, that like,
it's just it's harder for me to be like that saddened by it given just how these companies have
been able to operate for a long time. I mean, if you're asking me whether there's going to be
an outpouring of national sympathy for tech workers, I don't believe so. I mean, this is,
remember, this is a country where many people celebrated the Palisades fire when they saw that there were people in a different socioeconomic status than them that lost their houses. So I don't really feel
like we're a nation of empathy right now, at least.
We should be, but we're not.
All right.
Speaking of cascading crises, let's end with this.
I'm sure you saw this Citrini letter, talked about the 2028 global intelligence crisis. I'll try to summarize it as best as I can.
Basically, this research firm, who may or may not have shorts, I don't know 100%, but it's been speculated that they do, in some of the companies that have tanked because of this letter, basically looked at what happens if generative AI works.
They write, it should have been clear all along that a single GPU in North Dakota generating the output previously attributed to 10,000 white-collar workers in Midtown Manhattan is more economic pandemic than economic panacea.
So basically they say, look what's going to happen.
There's going to be a human intelligence displacement spiral where people will automate jobs away.
Maybe this is kind of where that Jack memo can actually end up in the bad scenario, right?
because then you have people with their mortgages.
They can't pay them.
Stocks go down.
And then, you know, there's so much of our economy,
so many large parts of our economy that are based off of wanting to avoid annoyance,
not wanting to cancel certain things and not disputing certain fees.
The agent goes out and, you know, cancels these things and takes down those fees.
And then all of a sudden, consumer spending is down.
Growth is down.
And private equity, that depends on all this, starts to go up in flames. And even these, you know, you can build your own delivery apps, for instance. And so those businesses go away, and all these displaced white-collar workers end up taking blue-collar jobs, and there's just no jobs left in the economy.
I think I boiled it down.
That's kind of the argument.
I think you can tell by the tone of my voice.
I'm not convinced that they're right here.
What was your reaction?
So my reaction to the actual content of the piece, and then my reaction to it actually causing a stock market sell-off, are two different things. I think, I don't know, it was an interesting piece of writing. I liked it. And it raises these kinds of questions I thought were interesting, like trying to assign value to the idea that if we're locked into subscriptions, we forget about them. Like, imagine if I tell you,
imagine you have an agent that is actually able to track your Netflix, Disney Plus, Hulu,
all of your utilization of those services, and then the ones you're not using, it goes and cancels
them for you.
Like, that sounds pretty good, right?
Oh, I would love that.
Yeah, exactly.
But the argument is that that will cause cascading economic problems.
But this is where if that is the foundation of the U.S. economy, that's the more terrifying part to me.
than the actual unwinding of it, if that's the case.
So I think that part of it is, I don't know,
it was interesting just like the kind of questions it raised.
There were a lot of, like, I saw arguments over it using DoorDash as an example. And I actually do agree with the idea that that was weak. And as not the biggest fan of DoorDash, as readers of Margins will know, I do think they're going to be the hardest to displace out of any, given it's a marketplace, there's, like, physical labor elements of it. So I thought that part,
like there's definitely weaknesses in it. To me, the idea of some kind of like cascading potential
just downward spiral here, I do think, I don't know, it presented a pretty like interesting,
consistent narrative that actually told a good story. So I see why it had the impact it did. But I don't know, my hot take on this one is I think it raised actually a more interesting
issue is that, again, the state of the current stock market, the valuations of a lot of companies
have gone in the same direction for a very long time. And I think it is more unmasking just
general unease and worries about valuations as opposed to it's like AI is going to destroy society.
And people, again, it's an excuse to just kind of knee-jerk sell things that you're sitting there on paper have just been marked up insanely over the last number of years.
But you just, you don't feel it's actually that valuable.
That's kind of how I am reading it.
I like that take.
I like that take a lot.
I think we're agreeing with each other a little too much.
Yeah, I know.
Because that feels spot on to me.
And, you know, it is this, I was asked about a version of this on CNBC this week, and I had to cite that something big is happening paper.
I think this is kind of what it is.
There is this belief that something big is happening.
It is, in a way.
The question is what the magnitude is.
And there's this instinctive race to go and say,
you know, it's going to take our jobs and destroy our economy.
The one issue that I had with that paper,
and this is sort of my core issue,
is it just wasn't imaginative at all.
It didn't think that somebody who's displaced has,
like, any dreams of their own that they might go build
now that these tools exist, right?
And it sort of felt like the economy is stagnant.
I really believe that, like, if these tools work the way that you think that they're going to work,
then, you know, they're just not going to cause vast economic displacement.
Here's the line that I really hated.
In every way, AI was exceeding expectations and the market was AI.
The only problem was the economy was not.
I just think that, like, you know, if the AI
exceeds all expectations, then the economy is going to become AI and enable people to grow much more
than they have previously. And so the economy will be AI and the economy will grow. That's just my
perspective. Yeah, do you know what? I saw this stat around the number of professional photographers, where there was worry that, like, once first digital cameras and then phone cameras came out, it would come and destroy the entire industry. I mean,
I guess this is Jevons paradox in action, whereas, like, actually the increase in access to taking photos created massive new demand in industries around photography.
And, like, it was actually this nice little encapsulation I felt around, like, what can
happen.
Like, suddenly, now everyone needs professional quality photos or in the past, you wouldn't have
cared as much.
And because you could take photos on your phone, it created social media.
And, like, whatever that means for society, that's another question. But, like, there's that argument, or it was like a really nice, simple picture of, in the last 20 years of our lifetime, seeing something that could have been pure doom actually turning into something positive in an unexpected way.
That's it.
I mean, if you think that everything is static and that people don't want to do new things or grow, or they're satisfied with whatever they're doing, then you believe the Citrini paper. If you don't, if you believe that there's growth and the
economy changes and people find new things to do, then you don't believe it. And in fact, this is from
Citadel, which wrote a rebuttal. The number of software developer jobs being posted on Indeed is far outpacing the total number of job postings in terms of percentage growth.
So that says everything you need to know. If AI is able to code much better than so many people now,
why are software engineering jobs, you know, outpacing the rest?
It's just that when you have these tools and you're able to be more productive,
you're able to do the things that you couldn't do previously.
And so you want to do them.
And you don't shrink into a corn cob and say, I'm done because of these things.
And that's why these papers like this, the Citrini paper annoy me,
because they just don't have any imaginative thought and they're not realistic about the way the world works. And they, you know, scare people, and the fear translates into clicks. And God, I guess I'm being very harsh on them right now. But I'll keep with it. Like, it just seems to me to be the worst way to do things. And I think the other worst thing is the idea that the stock market is so brittle right now that an AI sci-fi imaginative paper can cause a sell-off. Get a handle, everybody. Come on. Come on, stock market. It's okay.
Just take a breather here.
I mean, if anyone shouldn't be freaking out, it's the stock market. We know that the market reacts so coolly to any bit of news.
So, relax, God damn it.
Just not a Substack, not a Substack. Not a freaking Substack. Obviously, Substack goes out and is like, we move the market.
It's like, is this the way you want to move the market?
Yeah, yeah.
I don't think so.
All right.
Let's pack it up and go home, and we'll cool off and come back next week, and hopefully the world will still be standing. Does that sound like a good plan?
I think. I hope so. I'll see you next week.
All right. See you next week. Thank you everyone for listening and we'll see you next time
on Big Technology Podcast.
