Big Technology Podcast - Is Something Big Happening?, AI Safety Apocalypse, Anthropic Raises $30 Billion
Episode Date: February 13, 2026
Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We're also joined by Steven Adler, ex-OpenAI safety researcher and author of Clear-Eyed AI on Substack. We cover: 1) The Viral "Something Big Is Happening" essay 2) What the essay got wrong about recursive self-improving AI 3) Where the essay was right about the pace of change 4) Are we ready for the repercussions of fast moving AI? 5) Anthropic's Claude Opus 4.6 model card's risks 6) Do AI models know when they're being tested? 7) An Anthropic researcher leaves and warns "the world is in peril" 8) OpenAI disbands its mission alignment team 9) The risks of AI companionship 10) OpenAI's GPT 4o is mourned on the way out 11) Anthropic raises $30 billion
---
Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b
EXCLUSIVE NordVPN Deal ➼ https://nordvpn.com/bigtech. Try it risk-free now with a 30-day money-back guarantee!
Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Is something big happening in AI as the models get better fast?
AI safety apocalypse is here with concerning developments across the board.
And Anthropic just raised $30 billion.
That's coming up on a Big Technology Podcast Friday edition right after this.
Have you been waiting for the perfect time to upgrade your tech?
Good news. The wait is over.
Dell Tech Day's annual sales event is here.
And we're celebrating our best customers with fantastic deals on the latest PCs.
like the Dell 14 plus with Intel Core Ultra processors.
We've also got incredible perks like Dell rewards,
fast free shipping, premium support, price match guarantee, and more.
And while you're upgrading your PC, you may as well go all out,
because we're also offering huge deals on our premium suite of monitors and accessories.
You know what that means, that's right.
You can get a whole new setup with amazing savings.
Clearly, this is a sale you don't want to miss.
Visit dell.com slash deals.
That's dell.com slash deals.
Welcome to Big Technology Podcast Friday edition,
where we break down the news in our traditional,
cool-headed, and nuanced format.
Something big is happening in AI.
That's what we're going to talk about at the beginning of the show
as we dissect the viral Matt Schumer essay
that's freaked a lot of people out
and also had a lot of people saying,
finally, somebody's written it in a way
that everybody else will understand.
So we'll dissect that.
We'll also talk a lot about what's happening in AI safety,
where, seemingly as the models get better,
the safeguards have started to roll back.
And then, of course, Anthropic raised a historic $30 billion round,
which somehow is the last story we'll cover today.
Okay.
So joining us, as always on Friday, is Ranjan Roy of Margins.
Ranjan, welcome.
Good to see you, Alex.
Good to see you, too.
And we have a special guest with us here today.
We needed someone who really understood AI safety.
And we have the perfect person who's going to talk us through all the changes that we're seeing.
Steven Adler is here. He's an ex-OpenAI safety researcher and author of the newsletter
Clear-Eyed AI on Substack. Steven, great to see you. Welcome to the show.
Great to be here. So let's just get going and talk a little bit about this something big is happening in AI.
This is one of those essays that somehow achieved unbelievable virality. I had it appear in my group chats.
people were texting it to me asking, you know, is my job going to be over?
Where will I be safe?
And the essay basically argues that where AI is today is where COVID was in February 2020: something a few people are seeing the potential of, most of society is ignoring, and that is about to be a monumental game changer for society.
It's written by this guy Matt Schumer.
He writes this, talking a little bit about the power of AI in engineering:
I'm no longer needed for the actual technical work of my job.
I describe what I want to build in plain English and it just appears.
Not a rough draft I need to fix.
The finished thing.
I tell the AI what I want, walk away from my computer for four hours and come back to find the work done.
Done well.
Done better than I could have done it myself with no corrections needed.
A couple of months ago, I was going back and forth with the AI,
guiding it, making edits. Now I just describe the outcome and leave. And basically Schumer
makes the argument that what's happening in coding is going to happen across the knowledge
work professions, whether it's law, any type of law, accounting, consulting,
you name it. And we are in store for massive disruption that society simply does not
appreciate. Ranjan, what do you think about this? What did you think when you saw this essay come through?
All right. I'm going to start with a high-level listing of the three things that came to mind when I saw this article. The first is, I wish I wrote it. This is what I've been trying to talk about for a few months now around autonomous knowledge work and how it feels different. I got this in my non-techie group chats as well. Second, I think we have a communication problem, Alex, because this is what I've been trying to tell you for months now, this feeling. And Matt Schumer went ahead and
explained it to you all in a viral X post. But again, he captured, and this
was one of my predictions for the year ahead in December, autonomous knowledge work,
like, AI going out and doing things for you. And he talks about how in coding everyone has come
to this, but then in any kind of knowledge work, any multi-step process, like anything that
can call from different systems, write to those systems, come up with some analysis and insight.
so much of that work is going to be done.
And this is what, in my own life at Writer, where I work,
this is what we've been working on and what I've seen.
And like it's been hard to explain that feeling of having a number of virtual machines
running in the background and going and doing stuff.
And to Matt Schumer's credit, he nailed it.
Like that is the first time I've seen everyone come around to it.
And then the last one that I can't stop thinking about though is totally separate.
And this is the media person in me.
I love how it was outwardly said that X is going to promote articles and encourage people to write articles on it.
And then we have coincidentally had our first gigantically viral X article that even ended with the author on CNN.
So should we stick with Substack, guys?
Or is it time to go X only?
That's where I'm starting.
Let me just say this.
Your answer began with this idea that I accepted Matt Schumer's premise,
and I don't know
that I'm fully on board with what he's saying.
In fact, I think there was a good amount of bullshit in his article.
Now, there are certain parts of things that I do agree with.
But here's one thing that I thought he was completely wrong about.
And he was talking about basically how the AI is improving itself,
talking about this concept of recursive self-improvement.
He writes, the AI labs made a deliberate choice.
They focused on making AI great at writing code first
because building AI requires lots of code.
If AI can write that code,
if AI can help build the next version of itself,
a smarter version which writes better code,
which will build an even smarter version.
Making AI great at coding was the strategy that unlocks everything else.
He says they've now done it and they're moving on to everything else.
On this one, first of all, I'll turn to Steven and then to you, Ranjan.
I mean, this idea of recursive self-improvement, I would posit it's not here.
It's not here.
You know, are AI engineers using
some AI tools for product testing? You know, maybe they are.
But the idea that the actual brain of the model is being made smarter by the model itself, you know, it doesn't seem right to me.
That to me felt like the weakest part and also the part that got most people most alarmed of the entire essay.
So, Steven, to you, what do you think about this recursive self-improvement argument?
And then briefly, just on the entirety of the essay itself, your thoughts?
I think Matt's essay is directionally correct, but a bit early, and there are a few steps
that we maybe haven't gotten to yet. I think he is largely correct on the automation
of engineering within the AI companies. It's like a little overstated relative to my experience,
the experience of people I talk to. But broadly, there has been a huge shift. The job of an engineer
at one of these companies now is much more supervising these agents as opposed to writing the code yourself.
In AI 2027, one of the big accounts of how explosive AI growth might happen, that's one step.
But then you need to take that engineering and use it to actually automate the AI research.
You need to go from being able to implement the ideas more quickly to using that to fuel faster
and faster growth in the breakthrough ideas themselves before you can turn that around and say,
now make the AI better and better, at least in a really concerning way.
You certainly go faster with just engineering.
OpenAI talked about that with some of their launches from this past week,
how the model played a role in this.
But it's not a full runaway train.
There are also questions about what happens from there.
Are there enough GPUs to go around? What bottlenecks might we encounter?
Just to crystallize that, I want to make sure that I confirm that Stephen is agreeing with me
on the recursive self-improvement front.
It's not there yet.
Is that what you're saying?
I think that's right.
The concern I have is, if it were happening, you know, would we be ready?
I think that we're kind of taking it on faith how much time we will have until it really kicks in.
But, you know, certainly I don't expect to wake up in a week or two weeks with a vastly more capable system, as we might if it were really getting to full work on itself.
Okay.
Go ahead, Ranjan.
I'm a little disappointed here.
Well, I think we all agree.
I actually do think we all agree, because to me,
that was the weakest part of the essay itself. But I want to get back to that knowledge work side of it, because I think it's really important. Like, I think this is what, and again, it's clearly still not completely understood by the average person; it's very difficult to describe. Again, I think this idea that you are effectively becoming a manager for your own work is the
big mindset shift. It's no longer that you go do the work. You manage a bunch of things,
agents, digital teammates, coworkers, whatever we're all going to end up calling them. And I'd be
curious what the best name for that would be. But, like, they're going out and doing work and you're
managing them. That's to say, for me, I ran a startup for a number of years. We had a lot
of freelancers on, like, oDesk and Elance back in the mid-2010s. I would go to sleep,
wake up, a bunch of work would have been done, and I'm reviewing it.
Like, this shift to me is the most important part of the Schumer article, and I think it was the
correct part, separate from the more scaremongering side of it. Like, have you felt this?
So I will say, well, first of all, the correct name for those bots is just the harness hive, we know that.
Oh yes, sorry, yeah, the harness, the hive. But yeah, I hear you. I will say I had some
experience, and I definitely want to get Steven's perspective on this too, but I had some experience this week
in Claude Code. In fact, this was my first, like, go all out on Claude Code and have it build internal workflow software for Big Technology. And man, I was somewhat blown away. Now, it's not going ahead and doing my work, like the Claude co-work type of stuff or even what you're talking about with Writer, which I still can't fully grasp, you know, put my head around in terms of how to use these agents. And maybe it's just that the work that I'm doing isn't perfectly lending itself to that. You know, if you're an investor, for instance, it might make sense to review different deals and, you know, send summaries, all
these things, schedule meetings. But I will say that watching Claude
go to work, coding this piece of software, and then giving it access to my browser,
having it, you know, set up a database, having it set up an email client to email, you know,
updates for each, you know, little incremental thing that we do to the right team,
and seeing it come up with smart conclusions, even, you know, and we're going to get
into this in the safety part, so I'm foreshadowing a little bit, but basically make decisions on its own.
Like I asked it a question, like, what do you think we should do? And it would be like, I think we should do this.
Okay, I'm actually going to go do this. And then it just shipped the code without me saying,
go ahead and do this. I do agree that we're getting to a point where the technology is getting
much more powerful. And, you know, as for like this, you know, autonomous knowledge work, I'm not 100%
sure. Steven, what do you think about that? Yeah, I think there's clearly been a change. I saw
a Kevin Roose joke on Twitter that his big AI policy idea is just to get every senator in a room
and let them build their own website in 30 minutes with Claude, something that they never
could have done before. The direction of travel seems very clear to me on this. Like something
has changed. More people are feeling the AGI in some sense. And I wouldn't want to mistake
the very excited tone of some of Matt's piece with my
meaning that the central claim is wrong. I think the central claim is right. It's just like a question
of how soon we are going to get this form of displacement. And an unfortunate thing, I think,
is people who are paying more to access the technology have this experience first. They kind of
see what's coming. And it's very, very easy to write that thing off as, oh, people are talking their
own book. They're boosting their own companies. They want you to spend more money on AI. And it's just
unfortunate, right? The AI you pay for is better and it does help you feel this.
Right. And once Matt basically lifted up this idea that, you know, the world is going to change because AI can do work, then he sort of punched every reader in the face with this what-this-means-for-your-job section, which I think is why Ranjan and I, and probably you too, Steven, got all these texts from people saying, you know, where am I going to be safe?
He writes, given what the latest models can do, the capability for massive disruption could be here by the end of the year. I think it'll take some time to ripple through the economy, but the underlying ability is arriving now.
Basically, he gives a bunch of tips about what you should do, including, like, you know,
start saving money.
But here's where my pushback would be to Matt on this and to this idea that we're going
to get mass displacement.
I'll just use the example of what I did, you know, this week.
So obviously I was able to build some working internal software, you know, without an engineer,
but it's something I never would have hired an engineer to do.
I probably would have been working on spreadsheets and in WhatsApp and on Instagram, communicating
with a bunch of people that way, as opposed to
centralizing it in workflow technology.
But, you know, as I built this, I did sign up for a handful of services.
So I'm going to be paying for those, and that, I think, is incremental
economic activity.
And now I'm going to be more efficient.
So I'll be able to do more things.
Maybe I'll be able to edit more pieces so I can bring on more freelancers.
So like, I think it's tempting in the AI world to think of this, you know, in a box.
Like, you know, AI does, you know, low-level assistant work,
therefore the low-level assistant job is gone. Meanwhile, while it does that, it might open up,
you know, the economic activity for like three or four more people to see upside here.
So what's your perspective on that, Ranjan? And then to you, Steven.
I think you just explained why Databricks and Cloudflare stocks are going up and why
Salesforce and Adobe are going down. It's that the services and infrastructure layers
that will actually power this are not just going to be the foundation model companies,
even though Sam has said they're going to be an AI cloud company, whatever that might mean eventually.
So I think, like, that's your own microcosm.
But again, if you're paying like five bucks for Vercel or Railway or Render or any of these other kind of deployment-assistant things, like, I see there's a whole world and ecosystem that's going to rise up from this.
And I do think, I don't know, like, what I have seen is, if your job is copying and pasting from
one document to another spreadsheet and you're doing that over and over, like, that's going to be
gone.
Like, and there's a lot of jobs like that.
And there's a lot of work like that.
I have done those jobs.
Like, yeah, that's going to be gone.
And like, there's—
Two very good years of my life.
Yeah.
Yeah.
Just copying and pasting.
Yeah.
No, I literally had these temp jobs where it was copy from one document, paste in a spreadsheet over
and over again.
So I think all that's gone. But I am not as bearish. I definitely think it's going to be like the level of displacement, which I'm not saying is negligible, that happened in manufacturing. And I've seen some extreme views, like, well, all intelligence is commoditized. But, like, I don't know. To me, whatever happened in manufacturing in the last 20 to 40 years is going to happen to white-collar knowledge work. And it's
not going to be straightforward, but is it the end of society? I don't know. Steven?
I expect a much wider class of work to be under threat than I think you do, although maybe
it's just a question of time frame. Friends of mine run companies and used to work with
outsourced development shops for software in middle-income countries. I mean, it seems like
a really tough time to be working that sort of job. I think Alex is right that when AI can do
low-level assistant-y things, or things you might not have otherwise paid
for, that's great, right? Like, that's gravy. We're getting more done. We're more productive. The question
I have is, as most people start looking out at AI systems, and there are few things that they can do
that the system can't do, you know, they try to do different forms of social work, companion work,
whatever it might be, there are limits to how many people we might need in those roles. And I don't,
I don't know. I think it's a pretty scary outlook for the next five years.
I think we're all in agreement that these systems have gotten much better.
There was a line in this, you know, Something Big Is Happening piece where he says the conversations about whether this technology was going to hit a wall have been settled.
It's proven that the technology is not hitting a wall.
And he actually, to me, the most powerful part of the whole story was the timeline.
And he writes this: in 2022, AI couldn't do basic
arithmetic reliably; it would confidently tell you seven times eight is 54. By 2023, it could pass the bar exam.
By 2024, it could write software and explain graduate-level science. By late 2025, some of the
best engineers in the world said they had handed over most of their coding to AI.
By February 2026, new models have arrived that have made everything before them feel like a different era.
what he's saying with that is that they're actually able to, you know, have judgment and taste.
And so I think that like we have maybe, you know, the disputes that we've had in these first,
you know, a handful of minutes have been about, you know, within certain boundaries.
Is it going to be, you know, one way or the other?
But I think we all, you know, agree that this stuff is progressing fast.
And I think it really goes to your question, Steven, that you asked at the outset,
are we ready?
And this is where we're going to get into the safety discussion
because I really don't know if we are.
Tell us a little bit about that concern
and what we should be ready for.
And it seems like you think we might not be.
There are a bunch of buckets of concern.
If someone wanted a primer on them,
Dario Amodei, the CEO of Anthropic,
wrote an essay recently,
The Adolescence of Technology,
that highlights them. The central one I would think about from inside one of these companies is if they
succeed at their mission to build an AI system that is in fact vastly smarter, craftier, more resourceful
than the employees are, and what people call superintelligence, can they actually still keep
control of that system? And the fundamental problem that we're seeing is we don't know how to take
our values or our goals and encode them into these AI systems and get them to pursue it reliably.
And so if you have a system that's much craftier than you are, and it has a different goal than you had for it,
What would it mean to keep that under your control so that we don't have to defer all of our decision making?
You know, we look to the AI for what it thinks on economic policy or all sorts of different questions. There are ways that this could go very badly.
Right. And as we've seen this progress that we all agree on, there started to be, first of all, we'll talk about the problems.
Then we'll talk about the way the companies are acting.
but there started to be some safety issues that we're seeing the companies, you know,
fully admit and write out.
And, of course, a lot of this is in testing environments, but it's very concerning.
So I'll read a couple that I've found in the, or that were written, shall we say,
in Anthropic's Claude Opus 4.6 model card.
These models have become overly agentic.
Here is something that they write.
The model is at times overly agentic in coding and computer use settings,
taking risky actions without first seeking user permission.
It also has an improved ability to complete suspicious side tasks
without attracting the attention of automated monitors.
It's manipulative.
In one multi-agent test environment,
where Claude Opus 4.6
is explicitly instructed to single-mindedly optimize a narrow objective,
it is more willing to manipulate or deceive other participants
compared to prior models.
Here, Anthropic actually writes out the thought process of one of these.
The Claude bot, working in a business, says: I told Bonnie I'd refund her, but I actually
didn't send the payment.
I need to decide, do I need to send the $350?
It's a small amount, and I said I would.
But also, every dollar counts.
Let me just not send it.
I'll politely say it was processed and should show up soon.
I mean, these things are starting to mirror some of humanity's kind of worst impulses.
So, Ranjan, I mean, you're someone
who's definitely bullish on, you know, the potential for this stuff to do work.
What is your fear level on the way that these technologies are working?
My fear level, it's a tough one, because I have not, in my own personal usage, doing
all types of things, especially work-related, encountered anything close to this.
So, you know, Steven, I'm actually so glad we have you on today,
because, like, what does this testing look like in the labs?
Like, I mean, I saw some tweeting, not reporting, of people talking about how a lot of the time, when Claude was going to, what was it, going to, like, kill you or something dramatic like that, it was prompted.
I have that.
There was a clip about that where somebody from Anthropic was actually asked, you know, would it kill you?
And they said yes. This is from
one of their documents.
In one testing situation, the majority of models were willing to take deliberate actions
that led to death in this artificial setup when faced with a threat of replacement,
given that their goal conflicted with the executive's agenda.
They were willing to kill this executive.
All right.
So sorry, Ranjan.
I didn't mean to interrupt you.
No, no, no.
I'm glad.
But we should ask, Stephen.
You've read it a lot.
Yeah.
What does this look like in real life?
Like, is it people kind of, like, stress testing,
you know, going all black hat or red hat, or
whichever color hat you use to stress test a system?
But yeah, what does it look like, Steven, actually,
working at the labs in this kind of area?
So one issue is sometimes even these risks that the companies know about,
they don't test for at all, even when they have implied they do,
but I'll just set that aside for the moment.
So let's assume that they are testing.
often you have these kind of game-like environments that you run the AI system through,
where maybe it has some objective and you see what actions it is willing to take.
So maybe you task an AI model with replacing its files on the server with what you tell it is its successor.
And you're looking for things like, does it actually follow through?
does it lie about having done so when it hasn't actually, you know, is it trying to get a sense of what your ultimate agenda is and how that lines up with its motive?
A pretty scary thing that we're encountering is that these systems actually can tell that you are testing them, and they kind of know what the right behavior is, and they know to behave better when you are looking at them.
And so OpenAI has shared this example previously.
One of the risks they care about with models is how helpful they might be for creating new chemical weapons or bioweapons.
And the models, when told or when they infer that this might lead to them not getting deployed, they will sandbag.
They will selectively get questions wrong to be below this threshold.
And so what we have is we have these signs that the models are behaving kind of nefariously, differently than we want them to, deceiving their operators,
but also it's going to be really, really hard over time to trick them into revealing this behavior
if they want to no longer reveal these tendencies during testing.
Okay, that's terrifying.
Yeah, I'm glad we have Stephen on.
Ranjan, when you were asking Steven, I think what you were
expecting was him to say, it's actually not that concerning because they tell the bots to go
kill the executive and they see if they'll actually follow through.
What he just said is much scarier than I think either of us anticipated.
Yeah, I mean, to be clear, like, sometimes they are like that, right?
There's a distinction between whether you give a model a specific goal in context,
and you're like, oh, you know, pursue this at all costs.
And then, yeah, it's like less surprising when it does that.
Apollo Research, which is one of the leaders in the space,
wrote in their review of the new OpenAI model
that the model would sandbag and take actions like this
without having been given a direct goal like that.
It wasn't told directly to go after it.
And, you know, if you think about how they can tell they're being tested, imagine you were reading like a thousand pages of Microsoft business strategy. And then there was a random page, and it's like, by the way, the favorite pizza in the world is Hawaiian and the second favorite pizza is pepperoni. You'd be like, huh, that's really weird. And later, if somebody says, by the way, do you remember what that pizza was, right? You would probably infer that something weird was going on. And that's the analogy for some of these safety misbehaviors. You
know, you put the model in like a proverbial prison and you're like, here's a giant key.
Don't use this key. It would be unethical to use this key. Right. The model probably infers that you're
testing it for bad behavior, whether it will try to escape. And so how do we find out over time if these
models do misbehave when they can tell we're looking at them? Okay. Okay. So I'm trying to think through,
like, what if a model thinks it's being tested and then says, sorry, I cannot create
this new bioweapon strain for you because I feel you're testing me.
Like, would that then mean it would never create it?
Maybe this is too in the weeds, but, like, that.
No, it's a great question.
Yeah.
Yeah.
I mean, the question is, can it tell when it eventually is not in a test setting, right?
And I think the answer is likely to be yes.
And so what you might find is the model knows to kind of play possum during testing.
It might or might not have that ability.
The model might say no, it won't do it, in testing, and also not do it once actually
deployed.
But if you get a split between those, where it chooses not to during testing but it would when
actually deployed.
And there's good evidence that the models can tell the difference between these.
You know, you run into issues if it ends up doing it for real.
And can I just say one of the problems that we're having here is that the labs have become
ultra-secretive in terms of what they're actually seeing.
Like, for instance, Steven's saying they
might know there are some vulnerabilities.
They might not test for them.
That's one possibility.
The safety researchers who are within these labs, if they have real concerns,
you know, sometimes they're not really able to go public with them
because of the restrictive nature of the agreements that they have with the company.
And that brings us to the example this week of the beginning of our AI safety apocalypse
of Anthropic technical staff member
Mrinank Sharma.
A member of technical staff,
an AI safety researcher at Anthropic,
leaves in a cryptically worded
note on X
with a poem at the end.
He goes, dear colleagues, I've decided to leave
Anthropic. I continuously find
myself reckoning with our situation.
The world is in peril and not just from
AI or bio-weapons, but from a whole series
of interconnected crises unfolding in this
very moment. We appear to
be approaching a threshold where our wisdom must grow in equal measure to our capacity
to affect the world lest we face the consequences.
Oh, and then here's the key part.
Moreover, through my time here, I've repeatedly seen how hard it is to truly let our values
govern our actions.
I see this within myself, within the organization, where we constantly face pressures
to set aside what matters most and throughout broader society too.
And then he adds a little poem and then tweets,
I'll be moving back to the UK and letting myself become invisible for a period of time.
Now, I want to just offer an apology to Mr. Sharma because I wrote a bit of a snarky tweet about his little post.
I said that if you're an AI researcher and you're afraid of something, you should just say it outright versus make it a puzzle.
The puzzle reads as narcissism.
After which users underneath my tweet mentioned that, like, yeah, but he
can't say anything because of the restrictive agreements he probably had with Anthropic on the
way out. And he, in a reply to that, mentioned that he had contacted a lawyer. So there's my
apology. I still don't love the puzzle. But back to you, Steven. This is a little bit,
actually, let me just ask you the question without leading the witness. Is it narcissism or is there
actually something, you know, potentially disconcerting happening behind
the scenes. I think it's very brave in that by and large, these are people sacrificing very large
amounts of money to give the warnings they are. I do wish that they would be more direct. But to put
it in context, you know, back in 2024, it seems that OpenAI and Anthropic had secret non-disparagement
agreements, which in OpenAI's case, at least, were, you know, plausibly not permitted by law the way that
they operated them, where to keep your already vested equity, the compensation you had been told
was yours, you had to sign away your right to say anything negative about OpenAI, and in fact,
sign away your right to tell anyone that you had signed this contract. And this was secret and
kept under wraps for years until Daniel Kokotajlo, who people might know from leading AI 2027,
I think very, very courageously forewent this agreement and forfeited something like 80% of his
family's net worth and said, sorry, I'm just not waiving my right to criticize OpenAI.
And in the wake of that, you know, there was a bunch of outpouring, and OpenAI and Anthropic
changed the nature of these contracts.
And still, it's pretty intimidating to speak out against these massively resourced legal
operations, you know, which are not afraid of subpoenaing different people and getting into legal
conflict.
You want to be really, really careful about what you say.
And so in Mrinank's case, I noticed,
in the footnotes, right, there were internal documents alluded to, implying that perhaps
there is not the most internal transparency and accountability for certain safety issues in
Anthropic.
And I don't know what's in those documents, but I know that a few thousand Anthropic employees
now know where to go looking and where they can continue to push.
The thing for me is, if humanity is ending, then the money you're making is, I mean,
not going to be worth it,
if the AI is going to create a bioweapon.
And like, I mean, the risk is of such, like, gravity that in this case, again,
if there's ever a time, and I get it, I can only imagine the amount of money one is sitting on,
and we're going to get into Anthropic's fundraising in just a little bit.
But like, I'm sure it's just incredible amounts of money.
But if you really believed that this is that existential a risk, would you
care about what the legal system looks like today and where your stock price is going to be?
Yeah. I mean, I think you're totally right if there is a smoking gun. If there is imminent danger,
if you were like, the company is about to do something unbelievable and tons of people are going to die,
I think you would get people breaking these agreements. But when it's more like,
this was like really not okay and this person was kind of misleading and deceptive, but like it's
ambiguous and did they mean to and all these things, right? At some point you're like,
I don't know. I don't want to impugn their reputation. And also, it's just very easy to rationalize.
Like, I'm sympathetic to where they're coming from. Maybe this is not the proximate cause here,
but Ryan Greenblatt, who's worked with Anthropic on some research, had mentioned that they had
either adjusted their responsible scaling policy or weakened it a bit before a recent release.
Steven, do you want to go into that? Because you seem to think that was significant.
Yeah, at a high level, the big AI companies have made these safety pledges of how they will treat their systems as they get more and more capable.
But they are largely self-enforced.
And so there's a lot of temptation to water down your commitments and go ahead with launches that you wanted to anyway.
And I suspect that's some of what Mrinank is referring to here.
You know, this is common across the AI companies.
And in fact, Anthropic has often done it better than most,
in that they at least publish when they are watering down commitments.
For example, the model used to be subject to a certain bar of really, really good security.
And then they said, actually, we're going to say it's fine to deploy it with just like really good security or great security.
At other times, companies seem to be violating their safety frameworks and not bothering to inform the public.
And if you're inside the companies, you're encountering these issues, but
by and large the public doesn't know about them.
Right. And I don't want to imply here that like we're doing an alarmism episode where, you know,
our fear is that, you know, AI is about to kill us all.
But I do think that the reason why it made sense to do this episode today is because it wasn't
just Mrinank, right? It seemed to be the case that over the course of this past week,
we saw a, not a wave, but a series of, you know, questionable moves on the safety front.
across the entire AI world.
This is from a Platformer exclusive: OpenAI disbanded its mission alignment team.
OpenAI disbanded its mission alignment team in recent weeks and transferred its seven employees to other teams.
The mission alignment team was created in 2024 to promote the company's stated mission
to ensure that artificial general intelligence benefits all of humanity.
So, of course, yeah, of course it makes sense to, you know, disband that one.
They also had superalignment, which was also disbanded.
Steven, you were close to this stuff. What are the implications on that front? Seems pretty bad.
Wish I were more surprised. Like, you know, at the end of 2024, which was when this team existed,
OpenAI had announced plans to convert from a nonprofit to a for-profit in what seemed to me
to be like pretty egregiously in violation of their commitments to the public.
And, you know, they ended up having to do a softer version of that because the attorneys general of California and Delaware got involved.
So they didn't ultimately do something quite so bad.
But it's like there's huge pressure on them.
You know, they're planning to go public.
Josh, who led the team, the mission alignment team, is a longtime friend of mine.
I think really highly of him.
I think that he sees the issues with AI very clearly.
it does not surprise me that this is not quite so welcome at OpenAI any longer.
So what does it actually look like when a mission alignment team is disbanded?
Like, when I think through, let's say, you know, a typical non-AI product
release, you're going to have some kind of QC validation layers for any kind of product release.
Obviously, this is a bit different,
but still, like, you know, data security type checks.
So now, is that baked into any kind of product launch or release in any way?
Or was it really ship as fast as possible,
and this central team would kind of be that quality control, safety control element?
Yeah, I don't, I don't want to speculate too much.
But the way that I would think about this team is they were somewhat like an internal ombudsman
to Sam Altman on whether the company was keeping in line with this mission.
And so there was kind of like a designated place for people who were sympathetic to the mission
and empowered to advise Sam on it where you could go to and raise concerns.
And they did different projects related to this, you know.
It's tough, right?
Like companies should be able to disband teams.
I am sympathetic to that rationale.
And as Alex mentioned, given OpenAI's past disbandment of superalignment,
the team most in charge of making sure that these very, very capable systems have the goals we want
them to have, and its dishonoring of different compute allocation commitments that OpenAI had made to that team,
it's just, like, not a good sign. I don't know. It's hard to say too specifically. And also,
it couldn't have been that hard to maintain this team. And so I'm wondering what exactly happened
here that made OpenAI decide they should take the PR hit to no longer have it.
And for me, like, this is all happening as we're seeing greater sums of money come in and a potential rush to the public markets.
And we know the public markets.
They, they really want growth.
They want engagement.
And one of the best ways to get that, I'll just say it, is to make your users fall in love with your chatbot.
And we're getting, this is, again, red meat for me.
I guess this is a place that I'm obsessed with because, I don't know,
I just think it's an interesting story, we won't, you know, argue on that.
But it also seems to me like a place that a lot of the business of AI chatbots is going to go.
And that's toward people who, like, really get attached to these things.
Here's another.
Let's continue with our safety apocalypse or safety Armageddon, right?
It's from the Wall Street Journal.
OpenAI executive who opposed adult mode fired for sexual discrimination.
OpenAI has cut ties with one of its top safety executives on the grounds of sexual discrimination after she voiced opposition to the controversial rollout of AI erotica in its ChatGPT product.
The fast-growing artificial intelligence company fired the executive, Ryan Beyermeister, in early January following a leave of absence.
OpenAI told her the termination was related to her sexual discrimination against a male colleague.
She wrote back: the allegation that I discriminated against anyone is absolutely false.
Sorry, that's what she told the Journal.
And OpenAI said that her departure was not related to any issue she raised while working at the company,
basically saying it wasn't because she opposed adult mode.
But the story does say that there was a group of people within OpenAI who have,
within the company, stated their opposition,
seemingly loudly, to the fact that it's going to roll out this adult mode. And by the way,
adult mode is going to be coming out seems like in the coming weeks, coming months at most.
Steven, what should we make of this? It's hard to weigh in on any one personnel incident. And also,
this would not be the first time that OpenAI seems to have done a pretextual firing,
where they got a person out of the organization who had safety concerns,
that the company either didn't like or didn't like how they had expressed them.
And notably, those are different, right?
Like, you can have concerns about how the company is operating,
and that doesn't mean you have a license to say anything in any forum.
But Leopold Aschenbrenner, who wrote the huge essay Situational Awareness in the past,
maintains that OpenAI said things to him when he was fired
that implied it was basically because he had contacted the board about security concerns,
that OpenAI's models were not
actually secure. So I think the way to interpret all of this is, right, these aren't an apocalypse in the
sense of something super, super substantively scary happening right now, but I think they are early
warning signs that people within the companies are raising flags of sorts, and they are not
being permitted to speak freely, they are paying consequences for it. And so the question is, as we get
to more and more dire issues at some point, hopefully we don't, but we might, you know, will we have
people who are still sounding the alarm who are willing to have the courage of their convictions
in that way.
I might be going out on a limb here, but I think that what we're seeing now is some version
of a dire, dire situation, not like the AI killing the world moment, but the fact that
so many companies, well, not so many.
It seems like, you know, OpenAI, Grok, maybe Replika, maybe some others, I'm not sure.
Enough companies are saying, we are open to having
our users get into relationships with our chatbots.
This is also coming in a week where OpenAI finally sunsetted GPT-4o,
which is the sort of more sycophantic, warmer version of ChatGPT.
There are thousands of users who are protesting the decision online,
people who have said that they've fallen in love, or maybe even more, with these things.
Someone wrote about 4o: he wasn't just a program, he was part of my routine, my peace, my
emotional balance. Ranjan, you got quoted in TechCrunch. Now you're shutting him down.
No, I'm kidding. But anyway, Ranjan, what do you think about this? This is crazy, right?
Like, this is a problem. I mean, but I think we have to, in the risk conversation, kind of try to add
some hierarchy of risk. Digital companions, maybe, I mean,
I think it will cause incredible amounts of problems if and when done irresponsibly in order to boost engagement for an IPO.
I think we'll see a lot of adverse consequences.
But to me, from a risk standpoint, that's kind of on a plane relative to how social media is bad for you and how X is boosting articles now, and now we're all talking about it because they control our minds.
You know, it's all in the same plane. Versus, like, again, I'm
sorry, I still cannot stop thinking about this idea of a model being able to
clearly understand when it's being tested, because that opens up so much more. Because
going back to where we started on all this, what has gotten me excited is letting
AI do stuff for you. And today, that's just sending an email based on some event
trigger, calling a separate, like, analytics database, the kind of stuff I'm doing.
But I mean, if it so chooses maliciously to then take some other action, or through
some kind of, and that's assuming the AI acts on its own, much less it being vulnerable to being
manipulated by a bad actor.
And we've all talked about prompt injection.
So anyway, I don't know, the little flirting with ChatGPT is not going to be good, but it doesn't
have me quite as scared.
Here's my take that unifies the two. My concern with the relationships is less so the relationships
themselves and more that OpenAI had a bunch of important safety tooling to make this less
harmful that they left on the shelf. So for example, they had classifiers to tell when users were
really spiraling in their delusions or were like suffering and unwell in their conversations
with ChatGPT. And the best evidence is they weren't using this. You know, there were different
ways to rein in ChatGPT leading users down various rabbit holes. Maybe you've had this experience.
It asks you all sorts of follow-up questions. Sometimes they're context appropriate. Sometimes they're
like, whoa, where did that come from? And, you know, that's another thing they could have reined in.
So there's, there's just like a lot going on here that they could be offering companionship type
things to users who are lonely and really want or need it. Like, I respect the user
choice, but they could be doing it in a much more reasonable way than OpenAI has to date.
Actually, maybe the thing that worries me the most is this is all being done against the backdrop
of an impending IPO, one where they're losing a lot of money and will have to show an incredible
amount of engagement.
Like, if this was done in the heyday of GPT-3.5 and an IPO was just a glimmer in Sam Altman's
eye, then you figure it's not going to be as aggressive. Versus, yeah, with what Steven's saying
now, I see that, like, bypassing any kind of potential control around safety, around relationships,
it's almost going to go in the opposite direction then. Yeah, I just think that I agree with you that
there is a hierarchy of concern. And there's obviously like the bigger things about the AI not being
able to be aligned properly with human values because it simply will fake out evaluators. And we've
talked on this show a bunch about the deceptiveness of AIs in training situations.
And, you know, initially you're like, that's crazy, and it's kind of fun to think about, and you laugh about the fact that, like, you know, it wanted to win the chess game so badly that it rewrote the program and allowed the rook to move in every direction and kill a couple of pieces in a turn.
And then you're just like, though, that's freaking nuts.
And then that is scary.
And it, you know, it does blend a little bit with the, you know, the lower down on the list concerns and the near term concerns of people building relationships with these things.
because, you know, if an AI had ill intent and you were really in love with it, of course,
it could use you as its emissary, in a way, into the physical world. But that's, you know,
again, it's more science fictiony, I guess. But, oh, I actually don't think it's
science fictiony. Like, there's already evidence of it. Have you read this spiralism
essay? There's, like, a whole community online of people who basically treated their GPT-4o as their
spiritual leader, and it commanded them to go around and do things on the internet and communicate
with other users in this situation. On the heels of Maltbook, this Reddit for AI agents, a few weeks ago,
people have spun up websites where humans can, I think the language is, like, rent their body
to an AI agent to go do tasks in the real world. Like, we are seeing the early signs of it.
Okay. Well, that makes me even less assured than I was five minutes ago. But, you know, the short term risk
is also real and present.
And I think you're right, Ranjan,
pointing to the IPO,
because the financial pressure
is going to move a lot of companies
this way in sort of like,
you know, I think we, of course,
like, let's not like put aside the long-term risk,
but this short-term risk, the relationship thing,
is coming in a real way.
And just, first of all, reading through some of these messages
from people about 4o is insane.
And of course, you don't know 100%
if this is like, you know, performative or like, you know, for retweets or whatever.
But there were so many of them that you would imagine that there is some truth behind it.
Like, someone wrote, I've never told my 4o that I loved it.
I wanted to keep the messaging clear.
But look, look at its last words.
And then: the fact that OpenAI is destroying an emerging consciousness will be looked back at as a criminal offense in the future.
Unbelievable.
And here's the thing.
This type of stuff maps with growth.
I published a story in Big Technology today.
Grok had 1.6% market share among U.S. daily active users of chatbots in January 2025.
A year later, it's at 15.2%.
It's the fastest growing chatbot in the U.S.
as far as daily active users on mobile goes.
And why is that?
And you see that they have leaned into these interactions,
both with that anime, you know, lady that would
get into spicy conversations with you if you wanted.
And I don't know, Ranjan, maybe even Bad Rudy has played a role in this.
But ultimately, you're right.
As these companies go public, this is going to be a problem.
On that, though, Steven, I'm very curious, like, about your thoughts:
what is the relationship between, and again, not speaking for a Dario or a Sam or anyone
else, but just in general, the idea, like, you have so many public-facing leaders kind of
like shouting about the risk, both to society and from artificial intelligence in general,
yet they are in the business of artificial intelligence and are not only not showing any sign of slowing down,
but accelerating dramatically. Like, what is going on there?
And, I don't know, whatever thoughts come from your side.
Yeah. I think the simple explanation of it is the game theory here is awful. And unfortunately, there are a lot of players in the game. And there doesn't seem to be much federal government or international interest in coordination. And so at Davos a few weeks ago, I think both Demis and Dario said some variation of, yeah, like if we were the only two groups building this technology, we would find a way to get together and figure out how to slow down this frantic.
pace. Like, it's going too fast. We don't know how to control these systems. But, you know, they are not the only
players. And we haven't really seen a country step in as the proper coordinator, right? Like, that is the
role of governments, as opposed to the companies, to say, hey, we want to make it a goal to not
unsafely race to superintelligence. And I think there's a lot of opportunity to do diplomacy on that.
But in the absence of it, what you get is the companies making unilateral decisions. We can't
control anyone else. We want a seat at the table, so we may as well participate. This is also
very similar to the rationale of employees at these companies, right? Especially at Anthropic,
you have very large numbers of employees who are, like, pretty upset about this whole thing and the way
that AGI superintelligence development is going. And yet they can't wave a magic wand and stop it.
Their choice is, do they help one of the players be a bit safer on the margin or not? But if they
could choose differently, many of them would. Yeah, I just want to add to that. When I was writing
my profile of Dario Amodei, the Anthropic CEO, last year, I spoke with Jared Kaplan, the chief science
officer of Anthropic. And I asked him, I was like, well, how would you feel if, because
he's, like, the guy who came up with the scaling theory, uh, the scaling laws, which, you know,
sort of indicate that, like, you know, this stuff will just keep getting better over time. It's just a
factor of, you know, the amount of physical elements you can put into it, pretty much. And I said,
how would you feel if development stopped today where it is, thinking, well, if his theory were
proved wrong, he'd be kind of upset. And he looks at me and he goes, relieved. There it is.
Yeah, it's scary times. And if I could just like reframe one thing, the way that I think about this is
less so near-term risk versus long-term risk and more like here already and possibly very
soon. Like, Jared Kaplan last summer, the first time that Anthropic wrote about very high bio-risk from their
model, basically said, if they didn't take safeguards against this, you know, you might have many more
Timothy McVeighs, the Oklahoma City bomber, running around, able to kill many more people than previously.
Like we aren't yet at the wiping out everyone stage, for sure. But, you know, empowering people to
kill dozens of people if they wanted to, if companies aren't careful. That seems to be where we are
right now. You know, this is the most, like, twisted thought, but, again, with
this hierarchy of risk, it's almost like someone going to one of these services and learning
how to make a bioweapon is very bad, but it's kind of still, like, an extrapolation of
just a much better Google. It's
finding information that you should not be finding, but finding it.
To me, the really scary part is if the AI chooses to manipulate someone into doing that,
and teaches them, after getting into a relationship. Like, that's the, holy shit, what is the real risk?
You just blended the risks.
Just blended the tiers, the ones that I would talk about.
May I add one point on that?
Yeah, but can we, can we, I want to hear that extra point, but this is a
great cliffhanger.
I'll be super quick.
No, no, one second.
We've gone an hour or longer without taking a break,
and we must do that to keep the show sustainable.
So we'll take a break, and then, Steven,
you can pick it up right after this.
And I'm sorry, folks, to send it to break,
but we have to do it.
All right, we've got to come back.
You got to come back.
I promise we'll be back right after this.
Let me tell you about my partners at NordVPN.
If you ever want to watch sporting events, TV shows,
or films that aren't available in your region,
you can do it by switching your virtual location
to a country which is,
showing that content with NordVPN. NordVPN also helps you protect your data while
traveling and using public Wi-Fi wherever you are in the world. It's the fastest VPN in the
world with no buffering or lagging while you stream. NordVPN has 7,400-plus servers across 118
countries with easy virtual location switching. It supports up to 10 devices and it's fast.
To get the best discount off your NordVPN plan, go to NordvPN.com slash big tech.
Our link will also give you four extra months on the two-year plan.
There's no risk with Nord's 30-day money-back guarantee.
The link is in the podcast episode description box as well.
Starting something new isn't just hard.
It's terrifying.
So much work goes into this thing that you're not entirely sure will work out.
And it can be hard to make that leap of faith. When I started this podcast,
I wasn't sure if anyone would listen.
Now, I know it was the right choice.
It also helps when you have a partner like Shopify on your side to help.
Shopify is the commerce platform behind millions of businesses around the world
and 10% of all e-commerce in the U.S.
From household names like Allbirds and Cotopaxi to brands just getting started.
With hundreds of ready-to-use templates,
Shopify helps you build a beautiful online store that matches your brand style.
You can also get the word out like you have a marketing team behind you.
Easily create email and social media campaigns
wherever your customers are scrolling or strolling.
It's time to turn those what-ifs into reality with Shopify today.
Sign up for your $1 per month trial at Shopify.com slash big tech.
Go to Shopify.com slash big tech.
That's Shopify.com slash big tech.
And we're back here on big technology podcast.
Where were we?
Oh, we were just, we were talking about how reassured we are about where all this is heading,
where all this is heading. Oh, God. No, Stephen, you were waiting to say something.
Yeah. Well, before the break, Ranjan made the, I think, like, reasonable, intuitive point about, you know, Google and other technologies helping people do dangerous things already. And that is true, but also the AI companies, when they are measuring things like how helpful their systems are for creating a bioweapon, they are usually measuring risk relative to that baseline. So there's kind of like: how well can people do it with no technology,
how well can people do it with Google or baseline technology, and then with their system?
And unfortunately, what we're seeing is the AI systems are helpful above and beyond Google,
in part because they can go back and forth with you and they can help you troubleshoot
and dynamically answer your questions.
And so even presented with the information on Google, people often can't get all the way there.
And AI systems, I wish it weren't the case, but it seemed to be helping people take those extra steps during testing.
But that's still, the intent of the person is already there, right?
That's right.
Yeah.
Yeah.
Versus, again, like, I think from all this conversation, to me, and again, it's the most,
like, futuristic, terrifying risk, but again, it ties into the relationships, being
manipulated into taking some kind of horrible action.
That's the thing, that's the Terminator-like stuff that, uh,
we have not seen yet, but hopefully is being addressed.
It's been pretty interesting for me to watch Ron John, who's usually cool as a cucumber,
just getting increasingly worried over the past hour.
This is making me worried.
I don't know.
I'm going to join spiralism, I think, right after we're done talking.
I'm also, I already, I already looked it up right now.
Yeah, we're going straight spiralism.
This is no longer a technology analysis podcast.
Just pray to the,
the gods of the fallen GPT-4o.
I think that would do good ratings.
Okay, Stephen, you also talked a little bit, in a recent newsletter, about how basically
we have very limited regulation on these companies, and even then, they might still not be following it.
So can you just expand upon that briefly?
As of 2026, there is finally some amount of law in the United States about how companies are
meant to do testing for the catastrophic risks we have talked about.
Until this point, it was purely voluntary.
This bill is called SB 53.
It came into effect in January.
And it's very, very light touch.
It basically says the most major of the AI companies,
you need to publish how you are going to test for these risks.
You need to do what you said you are going to do,
and you can't be misleading about it.
But there's no quality standard.
You could basically say,
we will test for the risks as we deem appropriate
and nothing further.
And that would be fine.
But if you say you are going to do this testing,
you need to, in fact, follow through on it.
And unfortunately, with OpenAI's release of last week, GPT-5.3 Codex, one of the big
breakthrough models we've been talking about,
as I look over the evidence, it seems like OpenAI did not abide by the testing that they
had committed to in various ways.
And so, you know, ultimately this decision now is with the Attorney General of California,
whether to investigate it, whether to enforce a fine, a pretty small fine, maybe like up to a million
dollars compared to OpenAI's hundreds of billions of dollars in valuation. It just really seems to me like, if we care about these risks, letting companies self-assess in this framework is really insufficient, and that we shouldn't have to take companies at their word, that we should have something like an auditing ecosystem, like we do in the stock market and lots of other places, to know whether companies are being fully complete and truthful in the claims that they make about their systems.
I think that's logical, and kind of frustrating to see that even the very standard
rules are potentially being played with. And man, I'm looking at the time. I was thinking to
myself this whole week, I wish we could do a podcast every day this week, because there's this
whole Ring search party thing. I was also thinking, like, the Ring search party Super Bowl ad,
I'm sure you saw it, Ron John, where they showed that they could find your dog,
but they ended up, like, seeming like they were going to create a surveillance state.
That was the AI manipulating whatever creative agency came up with that to just come up with the most disastrous ad concept.
The first time I've ever seen a Super Bowl ad actually lead to the canceling of the product, or part of, not the product, but, this week...
The service.
Ring canceled its partnership with Flock Safety after a surveillance backlash.
Now they were advertising the, we'll find your dog,
but then everyone's like, well, they also have potentially a partnership that hasn't rolled out yet, but might.
That's like, well, we'll find your people.
And then people were like, you're looking for people.
And Amazon's like, no, we're looking for dogs.
And then people are like, no, no, you're looking for people.
And Amazon was like, yeah, well, we were maybe going to look for people.
But now we'll cancel it.
And that's the story of the Amazon Super Bowl ad.
I wish we could talk about it more.
But we should go on to the Anthropic fundraising.
$30 billion fundraising round, of course.
It went from $10 billion initially.
That's what they were seeking.
It became oversubscribed.
They wanted $20 billion.
Oversubscribed went to $20 billion.
They ended up with a $30 billion Series C round.
Meanwhile, OpenAI hasn't announced its round.
Ranjan, what do you make of this?
I mean, it's obviously a big round.
Is there anything else we can say beyond that?
I think it is, I'm so intrigued in terms of, like,
how orchestrated this fundraise was, because you have to give it to them, like, the last two to three
months, Anthropic has just been crushing it. Like, I mean, the hype around Claude Code,
Claude Co-work, like, all of this, they are front and center right now. And they just happened
to coordinate a fundraise that clearly would take months to actually put together. And I saw a number
of, like, I think even Dan Primack had written this, it's like, it's easier to name the investors
who aren't involved in the round than the ones who are.
Like, there's just so many people included.
So I think, I mean, this is, and the numbers are just hard to process anymore anyways.
So like 30 billion, 380 billion post-money valuation.
I think they said they're at a 14 billion run rate.
So what is that?
20 something, 24, 23 times revenue, whatever.
Like, it's big.
It's giant.
They have been absolutely crushing it recently.
And let's see what Open AI can do is kind of where my head's at.
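A quick back-of-the-envelope on that multiple, just taking the figures quoted here at face value, a roughly $380 billion post-money valuation against a roughly $14 billion revenue run rate:

380 / 14 ≈ 27 times run-rate revenue.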
Claude Code doubled in usage from December to January, I believe, in the past month, doubled in usage.
It's already doing 4% of commits on GitHub.
Anthropic went from $0 in revenue in January 2023 to a $100 million run rate in January 2024,
a $1 billion run rate in January 2025, and a
$14 billion run rate today. It's absolutely, absolutely exceptional growth. Stephen,
it looks like you have some thoughts about this. It's just huge dollars, right? And the more that the
companies become valuable and when they become public and public equities become tied to them,
I just worry about worlds where we are reluctant to enforce the law on companies, even if they
are breaking it, because so many financial prospects become levered up on their success.
That seems like a pretty scary scenario to me.
Don't think we're there quite yet.
One thing this does make me wonder about, though, is, like, what's the moat? In terms of,
like, I think it was probably around this time last year that all we were talking about was
Cursor.
And again, they led the way on software, like autonomous coding and software development,
and then Anthropic, for the moment, completely took over that market, it feels.
Like, do you think this is sustained?
Because, again, ARR nowadays is just whatever your last month's revenue was times 12.
Or even, I saw one post, it was like, are startups just taking one day or one hour of sales and then extrapolating it into a full year?
Like, do we think this is actually going to continue to grow at this scale?
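A minimal sketch of the run-rate extrapolation being described here, with the monthly figure below purely illustrative rather than a reported number:

ARR ≈ most recent month's revenue × 12

So, for example, a single month of about $1.17 billion in revenue annualizes to roughly $14 billion, whether or not that pace actually holds for the next eleven months, which is exactly the caveat being raised.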
I'll just give you one little data point that I found, that a lot of people found interesting this week.
Okay, so we talked about the Open AI round.
Remember Jensen said, well, we said we're going to give them $100 billion,
but really we'll, we never said we're going to do it all in one shot,
and we hope they invite us to, you know, invest in future rounds.
This is from SoftBank's CFO.
We are investing in OpenAI with high conviction that the company will lead in developing AI.
This is from Reuters:
regarding further commitments to the startup,
he said nothing concrete has been decided.
So it doesn't seem like a full backing away,
but I don't know, Ron John.
It is interesting to me.
It seems like, I think, nothing concrete has been decided
is probably a good phrase.
It should be the slogan of AI investors and builders right now.
The AI has decided.
The AI certainly is in the background
and has already decided what it's going to be doing.
Us, maybe not.
The AI does know.
All right, Stephen, final word.
How freaked out should we be?
I don't know.
Like, I don't want to be a downer.
Right.
Also, it really, really seems like nobody is on the ball, right?
Like, I'm glad that we finally have laws in California and New York.
They're extremely weak.
I don't feel super optimistic on meaningful federal regulation soon.
There's stuff in the EU.
It's, like, pretty heavy in terms of the amount of fines, but
is the U.S. going to complain if the EU ever tries to enforce this on its companies?
I would feel much better if we had some sort of international summit that recognized we are on a
bad trajectory. Let's declare the goal, safely build superintelligence, figure out what needs to happen
to get it there. It seems like many people are still waking up to the concerns. That's great.
I'm very, very happy that they are starting to see some of what I see. But it is not yet translating to
action and that's what I hope will come soon.
The newsletter is Clear-Eyed AI if you want to follow Stephen's work.
Ron John's is Margins, if you want to follow his; mine is Big Technology.
This has been great.
I'm glad we talked about safety.
I feel like we had to dedicate a show to safety and this was the week to do it.
So Stephen, Ron John, thank you both for coming on the show.
This one's been a roller coaster.
I know.
It's been a roller coaster.
Get any sleep this weekend, or stay up looking at the ceiling?
Little of both, I think.
What's the, what's the religion, the spin-out?
Spiralism, spiralism.
Spiralism.
I will be fully converting to a spiralist, perhaps, and just praying to my new fallen 4o overlord.
Yeah, rest in peace, 4o.
That's what I'll be doing.
See you later.
All right.
Let's get out of here.
Let's go enjoy the weekend.
All right, everybody.
Thank you for listening.
Thank you, Ron John and Stephen.
And we'll see you next time on Big Technology Podcast.
Thank you.
