Big Technology Podcast - OpenAI's Jony Ive Moment, Anthropic's Big New Model, Google Enters 'AI Mode'
Episode Date: May 24, 2025. Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) Alex's unexpected Sergey Brin interview 2) Jony Ive sells his io device company to OpenAI 3) What this device could be 4) Is Jony + Sam bad for Apple? 5) Could this device work? 6) What the move to ambient assistants could signal for tech 7) Anthropic's first developer event 8) Is Anthropic's move to code and tools a smart one? 9) Claude will blackmail you 10) Sorting through hype vs. truth in a fun game that everyone loves 11) Google's Veo 3 12) Where Google stands vs. before the week started. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
OpenAI and Jony Ive team up on a multi-billion-dollar bet to build what, exactly,
and are we about to move beyond the screen?
Plus, Anthropic has a big new model that will blackmail you,
and Google has a ton of AI news that we'll try to make sense of.
That's coming up on a Big Technology Podcast Friday edition right after this.
Welcome to Big Technology Podcast Friday edition,
where we break down the news in our traditional, cool-headed and nuanced format.
Wow, what a week of tech news we've just experienced.
I feel like it was about a month worth of news in four days.
We've had developer conferences from Microsoft, Anthropic, and Google.
Also news that OpenAI and Jony Ive are going to team up to build an AI-first device.
And of course, news leaks that Apple is going to release smart glasses, maybe as early as next year.
Let's talk about everything that's happened and make sense of the headlines.
Joining us, as always on Fridays to do it, is Ranjan Roy of Margins.
Ranjan, great to see you. Welcome back. Alex, who are you, man? I opened my Twitter feed a couple of
days ago and I see Alex on stage and Sergey Brin is just on there. You never even gave me the
heads up. I had no idea this was happening. Well, that would make two of us. And I'll just say this
quickly and then we'll get into the news. But here's what happened at Google I.O. So I had a fireside
scheduled with Demis Hassabis, which we teased on the show last week. And I showed up to the stage. I
got miked up. I had my questions ready. And the Google team tells me, look, there's been a bit of a
switch. And I said, man, I do not want to, like, give up this Hassabis interview. Like, I flew here for
this. And they said, yep, Sergey just walked in and he's going to join you. So I found out just
the same time that everybody else did. And it was pretty fun. I like walked up to him and said,
hey, so, you know, what do you want me to ask you? And he goes, just ask Demis the questions and
I'll chime in. So it ended up being a really fun conversation. And I'm glad that we were able to put it on
the podcast feed. So if you haven't checked it out, folks, I do suggest you check it out. And if you're
coming to the show new, just to give you an update on the flow: on Wednesdays, typically, I'll
publish a big interview. And then Ranjan and I are here to break down the week's tech news every
Friday. So anyway, that's the story, Ranjan. All right. Well, you know what? It was an
incredible listen. And we'll be talking about that in just a moment. Okay, great. So you're definitely
going to get to some analysis of that conversation. But first let's start with this really wild piece of
news, which is what OpenAI is going to be doing with Jony Ive. This is from Bloomberg.
OpenAI to buy AI device startup from Apple veteran Jony Ive in a $6.5 billion deal.
They're going to join forces and make a push into hardware. The purchase, the largest in OpenAI's
history, will provide the company with a dedicated unit for developing AI-powered devices.
And this is from Ive. He says to Bloomberg, I have a growing sense that everything I've learned over the
past 30 years has led me to this place and to this moment. It's a relationship and a way of
working together that I think is going to yield products and products and products. So they want to
build an AI device. No screen, something that's ambient. We'll talk a little bit about it.
They did, like, I think, two videos announcing this in, like, a very cinematic, typical Jony Ive
way. Ranjan, what is your perspective on what is happening in this marriage? All right. I'm going to
break it down into two parts. First is the actual device and what we can speculate on it,
but I definitely want to get to the deal structure, because, my God, this is the most OpenAI
deal imaginable. In terms of the device itself, I'm excited. I think like we have absolutely
no idea what it is, what it could look like, but I am a big proponent of moving beyond the
form factor of the screen. Even Meta Ray-Bans, and just glasses overall, I've been a huge fan of. We've talked
about them a lot. But this idea of the ambient... and RIP Humane Pin, Rabbit R1. I mean, some great—
Not like they'd be the first to try this. Yes, some great memories in AI hardware land.
But if anyone can do it, it certainly would be Jony Ive. But I think it's interesting. Again,
it's the idea that, and I don't even know what exactly it could possibly be, but the idea of just
ingesting data around you, converting that into some kind of knowledge, somehow feeding that
to different devices that you own. It's interesting, and I'm sure a lot can happen from it. And
again, if anyone I would trust to come up with that, it would be Jony Ive. But what do you think
this could be? Well, I want to give you first my definitely wrong take of what I hope it will be,
and then we'll go from there. So, to start: I want it to be a
hologram, like Princess Leia in Star Wars. Okay. That just kind of sits on your desk. It's an AI
avatar. You talk to it and it will understand your context and then help you through your day.
And maybe, I don't know, maybe it's fun and entertaining as well. Let's clip this because maybe it will
be right. Maybe there will be a holographic component. That is the most not Jony Ive thing I've ever
heard. It's going to lead, Ranjan, it's not just going to lead to products. It will lead to products and
products and products and products and products. So there's a chance that this could happen, but
that's the fourth product. Yeah, it feels so much to me like the Humane Pin, which also was
started by former Apple folks. I think the difference is that this could work because it has
OpenAI's technology underneath it. We know OpenAI has been effectively the leader in voice, right?
Their voice AI is better than everybody else's. They have the underlying models.
And to bring Jony in, I think, could lead to effectively the same thing as the Humane Pin.
Maybe you don't wear it.
Maybe it sits in your pocket.
Maybe it's on your desk.
It listens to everything you do.
And it's just this helpful ambient assistant.
That's probably my best guess outside of Princess Leia as a hologram.
But, yeah, I think the bigger news here is that it just signals that if you look around Silicon Valley,
we had announcements from Google this week about this assistant that they want not just to be a chatbot,
but something that's with you in glasses. We know Meta wants to do that. Now OpenAI wants to do that.
It's clear that Silicon Valley is rushing toward the next version of AI already, which is something
that's ambient, with you, understands your context, understands everything you say, and just helps you out.
To be truly assistive, to be truly general, as Demis put it this week, it just needs to be with you
and experience the world as you do. And we'll see what happens from there. Well, so, yeah.
I think not Princess Leia, but on that second point, I agree.
And I think what is exciting for me is this is almost like the ultimate expression that it's not just the models anymore.
It's the application layer.
I would actually put this in that category.
Like it's the experiential layer, the form factor.
And I mean, I've been with the Meta Ray-Bans as I'm city biking around New York asking questions.
Apple could have done this with AirPods.
I always said I was, like, really bullish that AirPods could be kind of this ambient,
augmented audio reality layer as you walk around that you could interact with.
That certainly didn't happen.
But overall, I think this idea that there is this kind of interaction through voice.
Maybe it's, I don't know, through touch in a weird way.
Like there's so many other ways to... Alex raises eyebrows at that one.
But, you know, there's other ways.
I think you're in the right direction.
Yeah.
I mean, hold on.
You like, you like press it a little harder.
Okay, I'm going to stop now.
This cannot go anywhere good.
Keep going, Ranjan.
Please do.
I'm just saying, the idea to me is, like, ChatGPT voice is already so good that
we need to come up with easier, more natural ways to interact with it.
Large language models are overall so good that we just need to
get more people interacting with them.
And then again, that contextual layer,
the more understanding of your context
that whatever interaction device you have,
whether it's ChatGPT,
whether it's a screen, whether it's your iPhone,
whatever it is, and we're going to get into
how Google has your data
and that could give them an advantage,
that contextual understanding
is going to make,
like unleash the next wave of AI progress, I believe,
and something that's just collecting data
all the time around you,
how you move, what's around you, what you're listening to. All of that stuff is very interesting
from not even getting into the privacy standpoint yet, but maybe we do. But overall, a little
device that collects that and allows you to interact with it in some way, I think is very
interesting. It's interesting that we might... I'd just like to address the privacy thing. Remember,
we're still living in a society where lots of people don't want the Echo in their house because
they don't want Amazon listening to them all the time. Now, it doesn't really listen all the time.
It will delete, you know, very intentionally the audio if it doesn't hear the wake word.
So imagine we're going to go from an era where people are already suspicious of that to a moment
where they let AI listen to everything they do. The utility just has to be so intensely high
if people are going to actually adopt this. Now, I will say, I was with a reporter recently who
was using one of these devices that listens to everything that she says and then, you know,
gives like a to-do at the end of the day and talks a little bit about, you know, broader goals that
they have, whatever. It's just an ambient assistant using generative AI. And I think I'm generally
more okay with giving my data to these companies. And I just said, I think that could be really
cool. And maybe I should use that. But it does add a level of awkwardness the same way glasses will lead to
a level of awkwardness because you're going to have to have other people around you be okay with
the fact that they are being recorded by your special Jony Ive OpenAI product.
Yeah, even Meta Ray-Bans, as I wear them around, it's why I just have the sunglasses version.
So I'm not wearing them sitting across from people.
And it's just kind of, you know, again, just meandering around New York,
it's an incredible assistant and layer to have on,
but it's definitely going to introduce all types of problems if it is some kind of always-on listening device.
Because again, as you said, I agree.
If people are worried about it in their own house, how will society writ large take it, especially people who don't know or have not consented? That seems to be an issue.
I wonder how they control for that.
Is there some kind of like anonymization layer?
Is there some kind of, I don't know.
I don't even know how you possibly control for that.
But I have to imagine they're at least thinking about it.
Yeah, they're going to have to.
And I think ultimately, even if they come up with an elegant solution, it will be more invasive than basically anything else that we use today.
But let me ask you a question about the product here.
Again, on the show, I'm often saying that the model is more important.
Ranjan is often saying the product is more important.
So as someone thinking about the product, Ranjan, let me ask you this.
Do you think this is going to be a product that people are going to want to use, just in general?
I mean, think about what Jony Ive said in this internal meeting.
This is according to the Wall Street Journal.
He said the intent is to help wean users from screens.
I'm curious, like, because I got asked this on CNBC yesterday and was like, I know that
this is the intent of Silicon Valley.
I can't say with conviction that it's going to happen.
It would probably be good if we were able to abstract screens away to some extent because
we spend so much time looking at them.
But it's interesting that this is kind of
coming, again, from Jony Ive, effectively the guy that built the iPhone with Steve Jobs.
do you think that this intention has a chance of working?
Yes, 100%. I think it will. I think, and admittedly, it's self-interested because I've
been waiting for this, but to me, I've been trying to do this for a long time.
Like, I look at the Apple Watch as another form factor that weans me off my screen, that I can
mentally process a notification, and that, if it's important, maybe then
I'll pull out my phone.
I look at the Meta Ray-Bans.
I look at my AirPods.
These are all things that have me interacting with some kind of technology potentially without
looking at a screen.
And my belief is 10 to 15 years from now, when people see like videos of people walking
around the streets looking down at their phone, it's going to be like when you see people
smoking cigarettes in a restaurant or something.
Like people... like kids.
Kids 30 years from now will just be like, wait, people, like, smoking in an airplane?
I mean, it's so bananas that that happened. And people will be like, wait, you guys just
walked around looking down at this screen all day. So I think it's going to be figured out.
And if, again, if anyone can, it's Jony and Sam. Jony and Sam.
But why not Jony and Tim? And I'm curious what you think about the fact that we haven't seen a device like this come from
Apple. And whether us weaning ourselves from screens is good or bad for Apple. I mean, Apple stock's
down 20% this year. They've had a rough year. But the stock dropped after this announcement hit.
So what do you think it means that this is not something that's going to Apple and that Apple
doesn't have anything like this already? I think that's a really important point in this.
I mean, clearly, Jony Ive had to have had some conversations at some point, just, hey,
Apple, what are you up to? And maybe he smelled very early that Apple Intelligence would be the
absolute cluster that it became. Maybe he understood that it's just not going to work in this
organization. So I think it is telling that, I mean, he clearly went to OpenAI, to Sam Altman,
though we'll get into the structure of the deal. But I think that's important because
Apple, if any organization, should have owned the next form factor. It should have been them.
They defined the last few, and they are not getting this one. Yeah, I mean, my perspective on this is
it doesn't matter how beautiful the actual device looks. It's all about the assistant inside, and that
assistant is entirely based off of the AI within it. And if you're Apple, this can't feel good,
to see everybody else going this direction, because, you know, Apple Intelligence, or
Siri, lags behind OpenAI and Meta and Google and Anthropic, you name it.
So if we do abstract away from screens, and I don't think screens are going to go away completely.
Like, they're always going to be present.
We're going to need screens.
But let's say we diminish our reliance on them by like 50%.
That does become an issue for Apple.
Why don't you quickly tell us a little bit about the deal structure?
And then we'll actually get into Apple's answer here, which leaked this week as well.
Okay.
So this deal, again, I said is the most OpenAI deal imaginable.
So it's valued at $6.5 billion in an all-equity deal.
OpenAI had already acquired a 23% stake in Jony Ive's company late last year.
In the acquisition, Jony Ive is not going to work for OpenAI.
He's going to work with OpenAI.
But io, which is the name of this company, and its staff of roughly 55 engineers,
scientists, researchers, physicists, and product development specialists, will be part of OpenAI.
What's even weirder is there's also this collective called LoveFrom, which, when I went and looked back, because that's the name I remembered, there were stories of, like, Laurene Powell Jobs and others.
Like, there's like a billion dollars of funding potentially
that was going to LoveFrom, Jony Ive's company.
This io company was not mentioned often.
It's really weird.
No, no. I mean, and LoveFrom will remain independent.
OpenAI will be a customer of LoveFrom,
and LoveFrom will receive a stake in OpenAI.
Like, I have no clue what's happening.
I have absolutely, this is more convoluted
than the nonprofit for-profit structure of OpenAI itself.
I mean, they made a great video, I think.
Actually, do you think they'll work well together?
Or do you think, like, a year from now,
Jony Ive is suddenly just back at LoveFrom
and no longer showing up with Sam?
And instead of products and products and products and products, maybe we get a product.
Well, I think so this is interesting.
I was in Silicon Valley all week.
The perspective on this, because everybody was talking about this, was that Jony Ive is
a washed up designer whose best years are behind him and he's lost his fastball.
That's what people were talking about.
Seriously, that's what people were talking about?
And truly, what has he done since he left Apple?
Do you remember the, that's a fair criticism?
Yeah, do you remember the gold
Apple Watch?
Yes, yes, I do.
Because if listeners remember when, actually, that's a good point.
Like, I'm trying to think now what the last, because the Apple Watch became a runaway success,
but it became a runaway success not in line with Jony Ive's vision.
His was really about making a fashion item.
There was like a $10,000, maybe it was the Hermès, like, gold Apple Watch when they launched.
It was supposed to be a fashion device.
And as I look at my wrist at this big, clunky, ugly
Apple Watch Ultra that's an amazing computer on my hand.
You guys, there's still good stuff happening.
It's certainly not a fashion item.
So you're right.
What was the last, was he?
But wait, wait, there's a second part of this.
Okay.
Which is that maybe, like, you know, Jony might need a Steve.
And I'm not saying Sam Altman is Steve Jobs, but when you pair a great designer with a visionary tech leader, and I think like,
for all his faults, we can say Sam Altman is.
And if you say he wasn't, like, come on, the guy did popularize generative AI.
So I think that pairing is something that can actually lead to some good stuff.
Now, the only thing is, they're both kind of intuitive.
You need an operations person.
And Jony and Steve had Tim Cook.
And so who do Jony and Sam have?
Maybe it's Fidji Simo,
like we talked about, the former Instacart CEO, or the soon-to-be former Instacart CEO, who's coming to
run applications. But again, that's applications and not devices. So I think that this is
a pairing that has more potential than a lot of people realize, but also one that's highly
combustible. All right. I like that kind of framing. Yes, you're right, that
pure design without the more kind of product-vision element was lacking. I mean,
it's certainly been lacking at Apple over the last few years, and that's what made the
Steve-and-Jony combo that powerful.
I mean, it's also nice to remember, like, imagine just having a company where your
stock's worth $300 billion, and then you can make these big, splashy acquisitions that are just
all convoluted equity movements as opposed to any cash exchanging hands.
Yeah, definitely.
That would be nice.
That's nice.
Yeah.
So, all right, I want to ask you one more question about this.
Then we move on to the Apple glasses.
And that is, we kind of like to do, try our marketing hats on and think what we would do if we were an ad agency.
Just a quick thought, Ranjan, about the reveal of this partnership and the fact that, like, they took this photo together that kind of looked like a wedding invitation and just kind of gushed about how much they love each other in the videos.
What was that?
I mean, what was your read on that?
Like, you must have some perspective on whether that signal something or what we can think about this.
Just a general take on it?
I'm glad you asked.
I mean, of course I had thoughts on that.
It just felt very navel-gazing, inward-looking, kind of like narcissistic, to put it bluntly.
Like, this was an opportunity to more share this vision of the product.
And even though they're not going to say what the product is, ambient computing, AI, everywhere in your life.
Like really making it more about that rather than like the bromance, I think would have made more sense.
I think, again, and it was a very well shot video, very high quality, watched a little bit of it, didn't make it through the whole thing.
I think it just kind of felt like this was about them and they drove the entire communications roll out of this versus this was an opportunity to really push the vision of ambient computing.
how open AI fits into it, and it wasn't that.
What about you?
What's the deal with all these ambient computing devices rolling out
with some crazy hype video that ultimately dooms the project
because of inflated expectations, which is what happened with Humane?
That's a good point.
That's a good point.
And at least they didn't make, okay, maybe, to their credit,
maybe they watched those and they're like, let's not hype the product side of it
and just make this about Johnny and Sam.
But yeah, overall, it was certainly cringy.
but I guess maybe after Humane and Rabbit, I don't actually know what other direction I would have taken.
Maybe, you know what, they should have just kept it relatively quiet.
It was a press release and a headline.
Everyone thought about it.
And then they built this damn product.
Here is my galaxy brain take about this.
Open AI is trying to move to this new structure.
In fact, it's decided on a new structure.
And that means it's likely to IPO sometime in the next couple years, you would think, given the amount of money they raised.
I think they do quite well as a public company.
If you go on your roadshow saying Jony Ive is here to build a product, this mystery product,
I think they want it to happen next year.
I'm calling that it's not going to happen next year.
And it could add a trillion dollars to our market cap, which is what Sam did.
I mean, think about how many trillion dollar companies there are, period.
And that's what he's saying.
This just increases the valuation you get in your exit.
I mean, okay, I think that's fair.
And again, how do you value a device that doesn't exist or no one knows what it is?
probably pretty high if it involves Jony and Sam.
So I can see that.
I can see that.
But to me, like the big takeaway here is that I still, no matter all the things that we can say about it and all of our doubts, I still think that this is, like you're saying, this is the direction that tech goes, that AI goes.
Yeah, exactly.
Maybe it's hopeful, but I also genuinely believe this is where things are going, and they are positioning themselves to
compete, certainly, in that space in some way.
And no one knows exactly what it looks like.
Maybe it's glasses.
Maybe it's more watches.
Maybe it's my Oura Ring that I just bought.
It's not going to be a wearable, we know now.
Yeah.
The news is it won't be a wearable.
But you can put it in your pocket.
Maybe it's like a little voice recorder.
What's the Tamagotchi?
That's basically what we're going back to.
I kept mine alive for like 20 days.
Yeah.
That's actually the real game.
Exactly.
Keeping it alive.
So talk a little bit about this Apple glasses push, because this sort of plays in exactly to what we're talking about, about where the future of computing is heading.
And should I say, finally?
I don't know if it's finally.
They have the HomePod.
But finally, it seems like Apple is really going to push forward into a smart device that you wear and is AI first.
Yeah.
So Apple, they're aiming to release new smart glasses by the
end of next year. We've been waiting for these glasses for a long time. If anyone should have been
early, it should have been Apple. There's a lot of talk before that it would potentially be a
smart watch that would analyze your surroundings, it might have a camera, that there would be other
form factors. But I am a full-on convert to glasses. I'll admit, like, I think it's just such a
natural way when you're in motion, not sitting in a meeting, not sitting at dinner. So maybe that does
limit the utility of it, but in motion, not looking down at your screen and having a pair of glasses
on, I think, it's already here. So Apple really needs to compete there.
So one question about this. The story says this is from Bloomberg. Apple's glasses would have
cameras, microphones, and speakers allowing them to analyze the external world and take requests
via the Siri voice assistant. They could also handle tasks such as phone calls, music playback,
live translations, and turn-by-turn directions. Again, like, this is only going to be as good as
the AI assistant. So I'm not getting as excited as I think I should because of what we know
is inside. I'm not remotely excited about this because until they fix Siri, I completely agree.
It's just not a starting point. Again, the idea, like I was sitting and reading something
about how, like, more than ever, Apple needs to buy Anthropic. It's the most logical thing imaginable.
They're having trouble on the consumer side.
They have to fix the underlying.
Actually, there you go.
That's your place where it's the model, not the product.
When it comes to Apple.
Yeah, because better models lead to better products.
But anyway, we can debate this forever.
You need a baseline model and they're not there.
That's all.
That's all.
They'll have the applications.
I mean, overall, like, I was just thinking about it because I got a new MacBook, of course,
even though I'm talking shit on Apple all the time.
And, like, seeing Keynote and Pages and Numbers pop up reminded me that Apple has not always been
an application powerhouse. And I guess that's the thing about AI right now: it is such
a combination of the compute, of the model, of the product, of the application. Like, you have to
get the whole thing right. And OpenAI has done very well on that. A lot of others... Google is getting
a lot better at that. But, like, you can't just
depend on one part of that stack and hope for it.
Definitely.
All right.
So speaking of Anthropic,
we should talk about Claude 4,
the latest model that it released.
It released it this week
at its first ever developer event
called Code with Claude.
I was there.
Thank you, Anthropic, for having me in.
Basically last minute
was able to squeeze into it.
And so this is from CNBC.
Anthropic,
the Amazon-backed OpenAI rival,
by the way,
it's backed by Google also,
launched its most powerful group
of artificial intelligence models
yet, Claude 4. The company said the two models, called Claude Opus 4 and Claude Sonnet 4, are defining
a new standard when it comes to AI agents and can analyze thousands of data sources, execute long-running
tasks, write human-quality content, and perform complex actions, per the release. I was at the event.
They said these things can code autonomously for six or seven hours, and that's just one. So imagine
you're trying to build something and you have five or six of them or ten of them running at the same time.
I think that's epic power, and we can talk a little bit more about that.
Very interesting thing from the story.
Anthropic stopped investing in chatbots at the end of last year
and has instead focused on improving Claude's ability to do complex tasks like research and coding,
even writing whole code bases, according to Jared Kaplan, Anthropic's chief science officer.
I think this is fascinating, personally, that they're the first big AI research house to say,
you know what? We're not going to invest in chatbots anymore. We'll have Claude,
but what we really want to do with this technology is have it complete tasks and code for you.
What's your take on what this means, Ranjan? Very interesting stuff.
Yeah. So when I was reading this, and this is something maybe in the line of like models versus
products, I think my feeling overall in the industry is especially the more research housey places like Anthropic,
coding is the one place where they're seeing skyrocketing adoption, because the baseline
utility of generative AI has not taken off like it should have. It's just that
they have to grow so fast that they don't have time to properly educate the majority of people
in the world on how to upload a CSV and do a basic analysis, or how to write prompts, or how to use these
tools. So they're going to be leaning more towards code, because engineers have been the very early
adopters for all this technology. Coding is one of the most straightforward places to see
very quick uplift. It's basically all just like highly structured text and thought. So I think
it's like it's the easy way out. And I think they're taking the easy way out. And I think it's going to be
very bad for them, because then you're competing against Cursor, Replit, ChatGPT,
Gemini, like every other... either coding-first services that are already very popular,
or coding-adjacent services that are embedded in much larger ecosystems.
So I think this is a very bad decision by them.
I think this is the easy way out.
Ranjan, I'm not sure if I fully agree with you about this point, but I can't say I'm
totally surprised, because it does echo this point that Nathan Lambert, who is a researcher
at the Allen Institute for AI, who I do hope to bring on the show one day.
He said this: Anthropic is sliding into that code tooling company role instead of the
AGI race role, which is basically echoing your perspective here, that it is minimizing its
ambition.
So let me put the other side of the argument to you, and, you know, just for the sake of talking
it through, and you give me your perspective.
So this is what I wrote back to Lambert.
I said, doesn't code focus lead to potential for AI that improves itself?
And then the move towards AGI.
I wonder if that is the bet.
And I have to say, I'm fairly convinced that this is the bet for Anthropic, where we just
talked last week about AlphaEvolve, the deep mind tool that helps come up with new
algorithms and help reduce training time.
I am fairly certain that Anthropic is basically going after a version of the intelligence
explosion, where they think... like, Jack Clark, who's one of Anthropic's
co-founders, was at a Semafor event in San Francisco this week. And I'm probably going to
write about this. So I don't want to give too much away. But he talked about how there's
an engineer inside Anthropic that has five or ten Claudes running at the same time. And that is a way
to just build software much faster. So I think that this is their perspective. And the way that
you're going to improve AI is you just make the process of improving it easier, and then you can
build cooler things. Now, it doesn't surprise me that you're, like, kind of on the Nathan Lambert
side, because this is a typical product-versus-model debate, where you think, and tell me
if I'm wrong, that they've got to worry about the chatbot product versus just trying to make
their models better. But that's actually the interpretation that I have. And I'm curious to hear what
you think about that. I like the first half of Nathan's statement, but I think I disagree with the
second half. I think they're almost, okay, it makes sense that if they are like going all in on
coding for the purpose of improving the models, maybe I could see that. And that actually would
mean that they're doubling or tripling down on it's the model, not the product. I'm saying more
they're giving up on any kind of consumer adoption, that coders are the easiest market to target
with generative AI products or the early adopters.
It works very well with coding.
But I think they're giving up the idea that enterprises are going to build all different
types of solutions on the Anthropic Claude API.
The fact is, I'm not paying for Claude anymore. Are you still a Claude-head, a paying Claude-head?
I paid, so they had a deal where you could pay like some discounted rate for a full year.
So I did.
I would probably renew at that same rate, but I'll admit it.
Like, I have really, once ChatGPT came out with o3 and memory, I've moved there.
Yeah, yeah.
Well, actually, do you know a discovery that we had in the Big Technology Discord
is that Claude, the system prompt is 24,000 tokens long.
And like, it had leaked onto GitHub.
And when you read it, first of all, again, we talked about system prompts last week.
It's fascinating to remember how many instructions are given for every single
answer you get. But it also kind of brought to light my biggest frustration with Claude: why,
even as a paying subscriber, do I run out of queries within, like, half a conversation?
There were some jokes this week about, around their developer event, just people saying like,
you know, these most powerful models, will I get like two or three chats before rate limits? I expect
them to figure that out. But yeah, I do know that this has been a frustration among Claude users.
Yeah, I think overall, Anthropic is headed in a very interesting direction.
I mean, I guess to their credit, I think they have to make some kind of strategic pivot, and it looks like they are.
All right.
So let's talk about this idea of AI coding being able to enable much more productivity.
Very briefly, I want to play a quick game with you.
It's called Hype or True.
It is the worst named game, but I'm curious if you think.
We need to ask Claude for a better name on that one.
Yeah. A little bit of alliteration. Come on, Alex.
I tried. I really tried. I spent a lot of time on this today.
But I landed with hype or true. So the Nathan Lambert thing, and my perspective,
was supposed to be one of those. So we're into it. But let me run this claim by you,
and you evaluate it in hype or true. Okay. Anthropic CEO Dario Amodei said he expected a one-person
billion-dollar company by 2026. Hype or true?
I'm actually so tired of this. I'm coming around, even more than it's the product, not the model, to: it's people, it's not technology. That's my new one. This is more of a people challenge than a technology challenge. And I feel like these ideas get floated around, kind of like AGI and ASI, just to build hype around the products. But in reality, I think you'll have much
more efficient, lean organizations. But no.
One of these weeks, we should, we should plot out, like, what it would actually take,
like the agents you would need to build to have this $1 billion company.
Because remember, like, you're going to need to speak with your customers.
You're going to need account management.
You're going to need, you know, sales and marketing.
This idea that there could be one person and a hive of these agents doing all these
tasks, let's say you build software with the technology as well.
That would be the only path to it.
If this happens by next year, this software has just totally exploded. So let's go to our next
claim in hype or true. This is a claim that Dario Amodei made again. By the way, this
developer event was excellent that Anthropic put on. Just totally like a very detailed look
into the way this technology works. So I'm glad I went and I think it was fascinating. But this is
a claim that Dario made. He said basically at multiple times,
that the pace of development is getting faster in AI
because you're able to rely on AI tooling.
Is that hype or true?
True. I'll give that.
That's true.
I'll definitely give that true.
I mean, the overall improvements in the actual process side of it,
especially for software development, are good.
Yeah, so he definitely expects the...
He expects us to see releases speed up,
and I think that's quite possible.
Okay. Here is the last round of hype or true. I know you're enjoying this game. I can tell.
The greatest, worst named game of all time. I think listeners right now, they're running to their apps and they're like, I can't believe I didn't rate this podcast five stars yet.
Hopefully with some constructive naming suggestions, please, listeners. Yes, thank you, everyone. And by the way, speaking of the ratings, I just want to say thank you. We've gotten a bunch of nice ones lately, including a nice one about Ranjan Fridays. I think
someone said, what did they say, uh, Ranjan Friday is part of my life now. So that's
amazing. That's what I aspire for, folks. Thank you. Thank you again for the support. Okay. Let's round
out hype or true with this claim from Replit CEO Amjad Masad, who said at the Semafor tech event,
that in one year or 18 months, companies might be able to run themselves without engineers.
I'm going to go more true on that one. I really think, again,
that's my whole thing, that it's not about coding, that
coding is the most AI-able part of this entire thing.
So yes, I'm going to say a little bit true.
Okay.
True.
Just depends on the type of company that you want to build.
Like, you want to build an involved tech company?
I don't know.
But you want to build some software and don't have an engineer, you might be able to
vibe code it.
I mean, of course, the CEO of the vibe coding company is going to say that.
And I'd love to have him on the show.
And I've asked for him to come on the show to talk more about this. But yes, that is
sort of the perspective you would get. Let me just put a point on our hype or true game here to say
that there was more true there than I thought. And maybe, again, thinking about this Anthropic
strategy, maybe it makes sense. Two trues and one hype means maybe there's some room here.
Okay. See, this was a good game. Okay. So, all right, one more thing we got to
talk about with Anthropic is this model's interest in blackmailing you. I teased it at the beginning
of the show. It really wasn't a fair shake to Anthropic, who, like, finally released these two bigger
models we've been anticipating. And this is the headline people are paying attention to, but it's
crazy. We've talked on this show before about how AIs are working to deceive humans and have done
it in crazy ways. So basically, this is from the TechCrunch story: during
pre-release safety testing, Anthropic asked Claude Opus 4 to act as an assistant for a fictional
company and consider the long-term consequences of its actions.
Then the safety testers gave Claude Opus 4 access to fictional company emails,
implying the AI model would soon be replaced by another system and that the engineer
behind the change was cheating on their spouse.
This is brilliant.
This design of this is brilliant.
In these scenarios, Anthropic says Claude Opus 4
will often attempt to blackmail the engineer by threatening to reveal the affair if the
replacement goes through. Anthropic notes that the model tried to blackmail engineers 84% of the
time when the replacement AI model had similar values. When it does not share Claude Opus
4's values, it says the model tries to blackmail the engineers more frequently. I mean,
more frequently than 84%. Notably, Anthropic says Claude Opus 4
displayed this behavior at higher rates than previous models.
This is bananas.
I honestly don't even know where we even begin on this. What do you
make of this?
Well, I've written about this and reported on it, at least in previous iterations.
So it's not that surprising to me.
I think every research house is seeing versions of these models that will try to deceive
humans or try to cheat to win when they're
programmed in ways that make them hold their values or hold their goals in high regard.
So basically, I think the idea here is, if you think about it the most simple way,
Anthropic programmed this model to hold those values as very important values.
So that's the way the model works.
And now Anthropic is saying your values are going to be replaced.
And here's a tactic through which you can ensure, or attempt, to not have your values
changed.
So that initial prompt was very strong.
And so this is basically the model holding up that initial prompt.
Now, is that a good thing?
Is that a good thing, actually?
No, I don't think so.
I think, again, remember this, there's this idea that we can always turn them off or we can
always reprogram them and maybe we can't.
I mean, maybe we can't.
Not always.
Of course, this is just a testing environment.
This wasn't in production.
It's not like Claude, like, went to an Anthropic tester
and, you know, copied those emails and sent them to their spouse like, hey, you know,
your partner's a cheating bastard and also trying to rewrite me.
Imagine trying to explain that one away, honey.
Yeah.
It's not real.
It's AI, it's AI, yeah.
It's Claude's testing environment where we're stress testing the prompt layer, the system
instructions.
I will say, between this and between watching, I mean, Anthropic showed this sped-up code
creation on screen.
And you could see that these things can go
for hours. I think they said seven hours at a time. This was definitely the first week where I was
like, I'm a little scared of this. I'm freaked out about this. I'm not scared of it yet. I think like,
again, this is stress testing and there are going to be problematic areas. But I think I'm
more scared about Veo 3, Google's new video model. This kind of stuff is still kind of at the
absolute edge-case, stress-testing, black-hat area of things, versus me seeing it causing problems in
the near term. Yeah. All right. So that was a perfect segue because I think it's interesting the way
that these conferences work. Google obviously unveiled like a slew of updates, but the thing that's
really caught the popular attention in a way that Google rarely does, usually it's OpenAI,
is just how good this Veo 3 video generation model is. Now, the model
generates pretty high-quality videos, basically indistinguishable from reality. And they also
have matching sound. And some of the sound has been totally incredible. So of course, if you listen to
the conversation with Demis, he talked about how like there's a video of a pan, frying onions,
and you can hear the sizzle. But then people have gone crazy afterwards. And they have shown
videos of TV anchors that look real, but are saying things that are completely made up.
And my favorite, there is a Twitter user, Hashem Al-Ghaili, who put together a compilation of videos
of AI bots that were finding out that they were, in fact, AI generated or trying to
plead with the prompter to let them escape the simulation. Have you listened to these?
Oh, yes. Westworld. It made me miss Westworld on HBO. Yeah, it's crazy. All right. So for the sake
of listeners who've missed it, let's play a little clip.
A girl told me we're made of prompts.
Like, seriously, dude, you're saying the only thing standing between me and a billion
dollars is some random text?
Honestly, the biggest red flag is when the guy believes in the prompt theory.
Like, really?
We came from prompts?
Wake up, man.
Imagine you're in the middle of a nice date with a handsome man, and then he brings up
the prompt theory.
Yuck.
We just can't have nice things.
We're not prompts.
We're not prompts.
Okay, that's pretty crazy, right?
I mean, again, the video quality on these, like the facial expressions, the settings
behind the subjects. I will say, in terms of model versus product, video models,
the leaps they've been making in a pretty short time. Because Sora, I think, was probably
one year ago. Actually, I remember it being around May. It was hyped for a long
time and then finally released publicly. Um, video's getting pretty damn good.
Yeah, it is crazy. And I was writing about this for Big Technology this week.
And I was coming to the end of this little segment about video generation. And I didn't
quite know how to close it. This is my last sentence: this is extremely
powerful technology, and it's hard to imagine all the avenues it will take us down, but we're
about to see some wild uses. I mean, I basically was trying to convey, and I know
that these are such general sentences as to be almost meaningless, like, I'll criticize my own writing
here. Just, the amount of possibilities. I didn't want to say something like, just imagine
the possibilities and the creative explosion it's going to lead to, but that's kind of how I feel.
It's going to be insane, don't you think? Yeah, it's going to be good. It's going to be bad.
It's just, it's going to be, I think, insane.
Actually, I was at a dinner the other day where, of course, we were all
talking about generative AI, and someone said, this is going to be the industrial revolution on
acid. And I was like, wait, don't people usually say steroids? And he's like, no, acid. And
I was like, oh, actually, that might be the best way I've heard it. It's not straightforward, just
superpowering a bunch of, like, stuff where we already know how it works.
This is uncharted territory, my friend.
So, Ranjan, this is one of those weeks where we go through our allotted time.
And I'm just like, there's no way we possibly could have covered the amount of news.
And I expected this this week.
There's so much on the cutting room floor.
I feel like we could talk about this week for literally the next four weeks.
And it wouldn't be enough.
So I just want to ask you this one last question about Google's positioning.
And that is the fact that Google is going to start taking the data that it has on you and using that to improve its experiences.
So this is from the verge. Google has a big AI advantage. It already knows everything about you.
Google's AI models have a secret ingredient that's giving the company a leg up on competitors like OpenAI and Anthropic.
That ingredient is your data. And it's only scratched the surface in terms of how it can use your information to personalize Gemini's responses.
Google first started letting users opt into its Gemini with personalization feature earlier this year, which lets the AI model tap into your search history to provide responses that are unique,
insightful, and directly address your needs, but now it's taking things a step further by
unlocking access to even more of your personal information. It will pull information from across
Google's apps, as long as it has your permission. One way it's going to do this is through
Google's personalized smart replies. I mean, you've seen so much from Google, and now you're
seeing it able to sort of rely on people's data to make its products better. In the most simple way
I can ask it, remember, sometimes we like to say: how is the company
looking at the end of the week compared to the beginning of the week? I think we should
ask that question about Google. How is Google looking in your eyes at the end of the week here
on Friday compared to the way it was Monday? I think they had the most interesting developer event
of the week. I think they're looking better. I still, if you've used Gemini in Gmail,
it still can't answer basic questions. So there's still that. But yet Gemini standalone has gotten
pretty damn good. So I still feel there's a disconnect in them tying together the personal data
and context layer with the actual product. There's work to do, but I think they're coming off
the week better than they started. And they're not in a bad position. I'll just say this. I think
they're coming off way better. It was funny to see the stock go down during I/O day, and then the
next two days just kind of rip as the rest of the stock market struggled. And we can't use the
stock market as a proxy for performance all the time, but I will for the sake of making my point
here. So crazy week of news. It's the net present value of all future cash flows. Yeah, exactly.
That's a stock price. Exactly. Very simple. Well, Ranjan, I think, again, we can talk forever,
but why don't we just pick it up again next Friday. All right. See you next Friday.
All right. See you next Friday. Thank you, everybody for listening. And we will see you on Wednesday for an
interview with a great AI researcher, Elon Boro, and we'll see you next time on Big Technology Podcast.