Big Technology Podcast - Apple’s Siri Embarrassment, Microsoft’s OpenAI Dilemma, Will AI Take Our Jobs?
Episode Date: March 7, 2025. Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover 1) Siri's poor performance 2) Amazon leaves Apple in the dust 3) Should Apple replace Siri with ChatGPT? 4) Apple's market cap is closer to $4 trillion than $3 trillion 5) Microsoft and OpenAI's evolving relationship 6) Why AI wrappers are rising 7) Will AI take our jobs... now? --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here's 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
Was this the week the tides finally turned against Apple's disastrous Siri project?
Microsoft is working through its options with OpenAI.
It's all about those AI wrappers, baby.
And let's talk about whether AI will take our jobs now.
That's coming up right after this.
Welcome to Big Technology Podcast Friday edition,
when we break down the news in our traditional cool-headed and nuanced format.
A lot of stuff is moving in the world of big tech and AI this week.
It's, I think, a watershed week in the age of Apple Intelligence, and not
in a good way. We also have some news about Microsoft and OpenAI's relationship and how that's
progressing. And finally, the AI field seems to be progressing beyond the model and towards the product
or the wrappers. So that's going to come as welcome news to Ranjan. We'll also talk about my
latest story about how I'm starting to feel like AI can do a lot of the work that I do. That's coming
up in our show. Joining us, as always on Friday, is Ranjan Roy of Margins. Ranjan, great to see you.
We might have to update our "it's the product, not the model" T-shirts to "it's the
wrapper, not the model," but we'll get to that in just a bit. But I'm excited to talk about Siri
this week. I'm giddy. I'm giddy. I'm sorry. I shouldn't feel this much joy, but my God.
There's got to be some AI wrapper rapper puns, like hip-hop puns, East Coast, West Coast. I don't know
exactly what it is, but we'll have our merch guy work on it. We don't have a merch guy.
But if we did, that's what they'd be working on. And it's a good thing we don't, so we don't end up
doing this. But exactly, it might be a very successful fashion business. You never know, Ranjan, you've got a dream.
Just like Apple is dreaming with Siri, and the dreams have turned to nightmare. This week, to me, seemed
like the week where the tides finally turned on Siri. And it begins with this Gurman story that comes out
at the beginning of the week talking about how Apple's artificial intelligence efforts reach a make
or break point. Now, I think that's a very nice headline from an editor, because when you start reading the
story, it's clear that Gurman doesn't think it's reaching a make-or-break point. It's clear
that Gurman is telling us, and I think we can all see with our own eyes, we've been talking
about it on the show, that Siri is an absolute embarrassment. And look at the first paragraph
of the story. Apple, the company behind the Mac, iPhone, iPad, and other groundbreaking products,
has typically beaten rivals by following the hockey approach, skate to where the puck is going
to be, rather than where it is right now. But we're currently in the middle of the biggest
technological revolution since the debut of the internet. And Apple is barely even on the ice.
I mean, this is a week of terrible cliches that are beaten to death, but each one of them:
Apple not on the ice, a corpse, we'll read them all. Now, Gurman says the unveiling of Amazon's
Alexa Plus has made Apple's shortcomings more apparent. When Apple unveiled an AI-infused version of
Siri in June, the system looked great in computer-generated video. The reality, though, is the
company barely had a functional prototype. And Apple engineers are going to need to move mountains
to get it finished as planned. We already know that it's not finished as planned. This type of
strong, forceful wording from someone like Gurman coming out about the new Siri, I think, just is a
moment where the narrative shifts. And I think as journalists, we sort of have to wait a while before
we say, yeah, this isn't working, right? The company might always be a little bit behind. You have to
reserve judgment. You can't really say in week three or four, a month three or four,
that it isn't working. But now we are months after last year's WWDC event and we're coming
up towards the next one. And as we take that leap around the sun, it seems like the promises
of Siri have gone from something that generated true excitement among Siri watchers like you,
Ranjan, and has now become a deep disappointment and a deep embarrassment.
For Apple. And that's why you're seeing this moment here: it's because it's finally been enough time that the commentators can say, with credibility, this is a disaster. And this is a disaster.
This is a disaster. It is nice to see the things that we've seen with our very own eyes for months and months now be able to be reported, as you said, with full credibility, with full balance, and still just outright say it is bad. It is.
really bad. It's shockingly bad.
I was listening to your Alexa episode from Wednesday, and just listening, it was almost
enjoyable to hear about, oh, here's how we're approaching this problem with the stochastic
approach as opposed to an LLM approach. And like people excited and actually building things
that actually work. You could hear it in their voice. None of that exists from Apple.
Like Apple Intelligence, the product, I think the biggest
difference I've seen is the summary notifications are now italicized, and before they were not.
Do you know what that takes, Ranjan? That takes courage.
It takes courage. It's, I mean, it is, this was the first time, I think, in years, I really
thought, listening to the Alexa announcement, listening to the interview episode, I think I'm
going to switch from HomePods. Like, I can't not have that quality
of voice-led generative AI.
And voice-led generative AI is that good right now.
We've seen it on all different types of formats,
on Gemini, on ChatGPT.
So just the baseline is actually good enough.
And then you start thinking,
okay, then if I'm going to start using this more on my phone,
should I get a Pixel?
And then suddenly I was telling this to one of my friends,
and he made the years-old joke:
do you want to be the person
with the green bubble instead of the blue bubble?
That is the only lock-in right now.
And it's crazy to me.
And again, I say this fully locked into the ecosystem,
but this is the first time I'm starting to really crack
and not just joke about it,
but actually think about moving out.
And by the way, you're hitting on something
that Panos and Daniel broke down in the Amazon episode.
So I asked them basically, like,
is there a switch that goes from this deterministic
type of if-then statement to the stochastic or probabilistic LLM approach?
By the way, LLMs, they are stochastic.
And they basically explained, like, you're thinking of this as just, you know, one model,
but there's a number of models to it.
It's far more advanced than that.
And then as you read Gurman's story, you do see that this elementary architecture
that I was envisioning is built within the latest Alexa.
It actually exists within Siri.
This is from Gurman's story.
The current iOS 18 version of Siri essentially has two brains,
one that operates the legacy Siri commands like timers and making calls,
and another that handles more advanced queries.
The latter capability will be able to tap user data and already is used
to not get confused when people change their request mid-command.
And this is for the next operating system.
For iOS 19, Apple's plan is to merge both systems together
and roll out a new Siri architecture.
I expect this to be introduced as early as Apple's Worldwide Developers Conference in June of this year,
with the launch by spring 2026 as part of iOS 19.4.
The new system dubbed LLM Siri internally was supposed to also introduce a more conversational approach in the same release,
but that is now running behind schedule as well and won't be unveiled in June.
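To make the "two brains" idea a bit more concrete, here is a minimal, speculative sketch of that routing pattern: a deterministic handler for legacy commands like timers and calls, and an LLM-backed handler for open-ended queries. This is only an illustration of the general pattern described above, not Apple's actual Siri architecture; the function names and the `call_llm` stand-in are hypothetical.

```python
# A speculative sketch of a "two brains" assistant: a deterministic brain for
# legacy commands and an LLM brain for everything else. Not Apple's code.
import re


def handle_legacy_command(query: str):
    """Deterministic 'brain': pattern-match a few known intents."""
    timer = re.match(r"set a timer for (\d+) minutes", query, re.IGNORECASE)
    if timer:
        return f"Timer set for {timer.group(1)} minutes."
    call = re.match(r"call (\w+)", query, re.IGNORECASE)
    if call:
        return f"Calling {call.group(1)}..."
    return None  # not a legacy intent


def call_llm(query: str) -> str:
    """Hypothetical stand-in for the probabilistic LLM 'brain'."""
    return f"[LLM-generated answer to: {query}]"


def assistant(query: str) -> str:
    # Route: try the deterministic brain first, fall back to the LLM brain.
    return handle_legacy_command(query) or call_llm(query)


if __name__ == "__main__":
    print(assistant("Set a timer for 10 minutes"))            # deterministic path
    print(assistant("What should I cook with salmon and dill?"))  # LLM path
```

Merging the two paths into one system, as the story says Apple plans to do, is presumably much harder than this toy router suggests.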
Something is rotten within Apple.
And I'm not saying that this means that they're bad people.
I'm not saying that this means that, you know, there's,
There's ethical issues.
I am going to say there must be a cultural problem.
Because if you're Apple and you have the ability to legitimately attract the best and the brightest in Silicon Valley, in the tech world, and you fall behind, when you are behind Alexa to the point where they are getting ready to release their update this month, and you legitimately can't get into shape after announcing the vision at WWDC, your organization is
messed up. There's no potential explanation other than that. No, I really wonder how it could be
that bad. And we've joked about there needs to sometimes be, like, for the commercials,
the advertising side of it, the one normal guy who just sits there and says,
okay, that's weird, that's not weird. Because let's not forget the advertisements that kicked
off this whole debacle, where celebrities are ignoring people that
aren't as important to them and summarizing their emails quickly in real time to try to,
you know, avoid having to pay attention to them.
Like, everything has just felt off from the beginning.
But when you look at the product, it's shockingly off from everything else that's out in the market.
It would have been one thing if ChatGPT voice was really good and Gemini and no one
else's was.
But when Gemini voice became as good as ChatGPT's advanced voice mode,
you saw that this was table stakes now. This was par for the hyperscaler course. And then Alexa, I
am confident, will be just as leading, if not more leading, in terms of that whole space.
I don't know, something is up. I think "something is rotten at Apple" will be
a nice little cute headline that we'll see more and more. But yeah, there's something
else, something going on there. Yeah. And I'm just going to talk about this a little bit more because
it is so important. The longer you do this, the more you can sort of learn to read between the lines of
some stories. And there was an incredible set of clauses in Gurman's story that I don't think
anybody has picked out yet, or has been picked out in the way that I expected it to be, that I
think we should go over. So he says: there have been problems with rivals poaching talent
and what they deem to be ineffective leadership.
Within the AI department, employees have raised serious questions
about whether chief executive officer Tim Cook
or even the company's board of directors
needs to make bigger changes.
The crisis could ultimately place the job of Apple AI
head John Giannandrea or others at risk.
Now, he says, but an imminent departure
would only be an acknowledgement of Apple's AI shortcomings
which the company isn't ready
to admit. So clearly there's a leadership issue. To have John Giannandrea's name as someone who might
be forced out, maybe not imminently, but even in a story is a big deal. You could tell that Apple would
definitely push back on that. But to me, the most stunning thing in this report is who might have
to do the pushing. Now, he says, it could be Tim Cook, but it could also be the company's board of
directors. You never in a business or a tech story here of the company's board of directors,
intervening to either force the CEO to make changes in an org that isn't working or something
else. And Ranjan, you are familiar with how these big corporations work. And I'm curious
if you read into this the same way that I am. I 100% agree that if that is the case,
it's very, very odd for a technology company, for the board to be getting into the kind of product
side of the conversation. Yes, if the focus is solely on the leadership, that's one thing.
And yet, if a CEO is afraid to fire a very longtime lieutenant or ally,
then it makes sense that that would be the role of the board to try to step in. And maybe
we've gotten so removed from having independent-thinking boards at technology companies
that you just don't hear or see this stuff anymore, hashtag Tesla.
But overall, I think...
Hashtag Facebook.
Hashtag Facebook.
Hashtag all of the above.
I mean, actually, it's true.
So that's why you don't really think about that.
But so as of January 2025, you have the former CFO of Boeing, the former CEO of Johnson
and Johnson,
the former CEO of Northrop Grumman,
and the founding partner and director at BlackRock.
So that actually does make it even more odd and comical
of like, are these people going to help guide the company
towards landing the future of generative AI?
It's hard to figure that out.
But I think I agree with you that we just haven't seen
that kind of behavior out of any large technology company
in a long, long time.
I mean, what's your best guess?
So for me, I'm saying culture is probably the problem here.
I've said it on the show a thousand times.
I'm going to say it again.
If you have access to the talent that you have and you can't build this, it's the way that you operate.
And by the way, I mean, engineers have told me this.
You cannot build AI in silos.
If you've got somebody working on computer vision for, let's say, Face ID and somebody
working on computer vision for the now ill-fated car project, they need to be able to speak
with each other.
This technology is evolving fast. You can't have people in silos. Even look at what's happening with open source.
And so the Apple silo approach, and the Apple let's-ship-on-certain-predetermined-intervals approach,
that's not working. Even, I think, Amazon had to sort of do away with that. They ship, like, of course,
in their two planning sessions in the year. So I think it's culture. What do you think it is?
So I have a slightly different theory here,
and I'm not going to just again be complaining as a locked-in Apple customer,
but I think their financial performance has been slowing but not dramatically suffering yet
because of these disparities.
Like earlier I was talking about this that I now am thinking,
okay, I'm going to get rid of my HomePods and go all Alexa for my smart home.
Maybe I want a Pixel so I can have Gemini as my core assistant, as opposed to, God help us, Siri
on my iPhone. And if I do that, do I get rid of my AirPods? And is there a world where, you know,
like the downstream effects of losing that lock-in are huge, but they haven't seen it yet? And then on
the other side, subscriptions and services, which is, I don't want to say the scammiest, but just like
the least innovative, let's say, portion of their overall revenue has been one of the fastest
growing business segments. So they're becoming more dependent. It's now 22% of revenue. And so they're
now becoming more, like, uh, in myself, like, we, my wife and I, have a family iCloud account, and
somehow we just hit another limit. And now I think I'm paying, like, I think I pay, between
insurance on the phone and iCloud storage, I'm probably paying Apple like 80, 90 bucks a month. Or I don't
even know what it is. I'm not, I'm ignoring it. But overall, I think they have this
financial cushion that allows them to look the other way, kind of minute to minute, because
they're okay as of today. And then it's a, that kind of inertia really can affect a company this
big. So it's a natural resource curse, pretty much, is what you're saying, that they have
basically so much money that they don't have the pressure to change as others might. And their stock
is doing as well as it is. And this was, it's a great segue into
this other part of this discussion, which is the market side of things, right? So, A, they have
all the cash, and then B, you think about how their stock performance is. And remember, their iPhone
sales went down in the last quarter. They had five of six quarters where they had revenue decreases.
They have not been able to ship the Vision Pro in the way that they hoped. Apple Intelligence is a
dud. I was at Hightower Investment Advisors. They had a conference this week for some of their
New York crew. It was really fun. It was at 30 Rock. And I was on a panel with Dan Ives,
who's like the most bullish of all Apple Bulls. And I looked at him and I was like, Dan, you know,
I just read the Gurman story. And I'm like, Dan, what's going to happen with Apple, man?
And he goes, they're holding up pretty well, which is like Dan's like famous line. Like,
you know, not as bad as feared. And I was like, come on, Dan. And then I was like, all right,
let me just quickly check the market caps. Apple's market cap is $3.6 trillion.
Over the last year, the stock's up 40%. Nvidia's market cap: $2.74 trillion. Microsoft's market
cap: $2.93 trillion. So these two other giants fell below $3 trillion. Apple is closer to $4 trillion than it
is to three. I mean, that is insane. There's no comeback to that. It's because the numbers are still
astonishingly good as of today. And this is where, I like that, I think, let's start saying Apple
has the Dutch disease, the natural resource curse, because I think we might start hearing that a
lot more. And again, maybe we can call it a natural resource curse or we can call it slightly
monopolistic behavior that ties a lot of different services together. Or you could call it just a
shit ton of money. Cry harder, Siri boys. Yeah, yeah, exactly. Well, no, I mean, Tim, we're looking
out for you here, Tim. This is not, these are your fans trying to tell you, Tim, listen
to us. Siri is worse than ever. Apple Intelligence is worse than ever. But they're doing
okay financially. So they're not going to have that internal like all hands on deck. But it's
also, I do wonder from a cultural standpoint, you hear about at meta, at Google slash alphabet,
you know, all these companies, even at Amazon, like that kind of like existential all hands on deck
moment the generative AI brought into those companies, you don't really hear that at Apple.
And I don't say this in a disparaging way, but Tim Cook is not a pure technologist.
He's a production operation supply chain person at his core, and that's what he's been
incredible at.
Like, could that be the reason it hasn't been?
Like, he wasn't just sitting there like, oh, my God, this is going to change everything.
We have to cut a lot of people,
merge a lot of groups, and we have to win at this.
That urgency is clearly not there.
Do you think that could be it?
I think, so when Gruber was on the show,
he basically said that he expected them to marshal all of the forces that they had.
Like when Apple can hit snags, it can.
But when it does, it tends to marshal all the forces
into big pushes and eventually succeeds.
So that's what he said he expected here.
I imagine that it happened.
They did take people off the car project and put them on Apple Intelligence.
But I just think, if you think about the nature of organizations, and this is sort of the thing that I think a lot of folks who are watching on the outside sometimes overlook, when you think about the nature of organizations, when there is a way of doing something inside a company, it is very, very hard to change those practices.
Google cannot name anything. Google knows it can't name anything. Google still calls its models like, you know, Google Gemini Bard, Flash Thinking 35.8, you know, 9-2 beta, okay?
it is very hard for these companies to change.
I just think the culture of Apple is a culture where it is just not set up to develop
software, well, to develop software outside of operating systems.
None of Apple's, none of Apple's like owned and operated software properties are really good.
I'd much rather use Gmail than Apple Mail.
I'd much rather use Google Calendar than Apple Calendar.
I could go on, right? Safari is okay.
I'm on numbers all day long.
Is that what it's called, Numbers, I think?
Is that their Excel?
Yeah, that's the Excel.
I think it's good.
When that shit boots up, I get mad.
I know. When I, shut yourself down, Numbers.
Notes is good.
Notes is good.
Notes is good.
But in terms of big software development, I don't think Apple has the culture in place to do it.
That could change, but we haven't seen it.
That's a good question, though, that is generative AI more of an operating system infrastructure layer type of development?
or is it more of like an app software type development?
Because you're right, Apple has had some hits,
but for the amount of default behavior
they're allowed to push onto people,
they don't actually develop great consumer-facing software,
but they do great operating systems.
So that's actually, that's an interesting question.
Where does generative AI fit into that stack?
I would say generative AI is about as software as it gets.
Now, can it function as an operating system?
Sure.
We think that it might.
I mean, we've seen some failures with Rabbit
and Humane trying to have AI as an operating system.
But it can.
But the thing is, with hardware and with an operating system, right, you have predictability.
This is the system.
The parts fit in a very predetermined way within the system.
And they get arranged here.
You have the, you know, icons, you tap in, you go into an app, the app has to be built
according to these exact specifications.
In software, you sort of, you have to live in a world of uncertainty
and a world where users can sort of push the boundaries of what you do.
LLMs are that without a doubt.
So to me it just seems like that's going to be kind of rough for them to build.
And it sort of leads us to this question of do they want to continue to stick with Siri
or potentially give the space that Apple Intelligence inhabits over prominently to somebody else?
M.G. Siegler writes in Spyglass, Apple should
swap out Siri with ChatGPT.
This is, again, this week.
It's part of the wave.
He's responding, by the way, to what Gurman says about when this is going to be good.
Gurman wrote, people within Apple's AI division now believe that a true modernized conversational
version of Siri won't reach consumers until iOS 20 at best in 2027.
This is basically akin to what we saw with Alexa Plus that's coming this week in 2025.
MG writes: if this is true, it's not just a joke, it's
outright worrying. And if it's not true, Apple should probably do something to refute it,
because it's quite damaging on a number of fronts. And it couldn't come at a worse time
with Amazon having just finally unveiled Alexa Plus.
If Amazon was sort of a tortoise in this race to the hares of Google and OpenAI, as always,
there's hope that despite being slow to start, the tortoise can steadily win in the end.
Apple is more like a dead duck now in the same race, a corpse.
2027, how about never?
Is never good for you?
M.G writes, citing an older story.
Why not fully outsource Siri to chat GPT?
You'd still want to keep the task-oriented elements
like setting timers with her.
But everything that requires anything
resembling a search query should be outsourced.
Right now, you can fully force this
by asking Siri to ask ChatGPT something,
but it's cumbersome.
I'm suggesting this be made the default action.
He writes that Apple has long had an illustrious history
of teaming up with partners
to get a service out the door while they work on their own solution behind the scenes.
So basically, they could potentially work with OpenAI for a couple of years as they get their
solution in order.
But in the meantime, maybe they just want to give it over to OpenAI, which has a great voice
assistant with GPT-4o and can handle a lot of the stuff that Siri is struggling to do today.
What do you think?
Maybe the bullish case for Apple here, and I'm stretching, but in this vein,
most people still aren't totally integrating generative AI into all the stuff they do all day,
like probably us and many of our listeners.
So even though we kind of are, like, laughing at the timeline of 2027, they still have money.
They still have plenty of money.
So maybe it really is still just let it build.
I think, I don't believe this is actually the strategy in any way, given how much they invested
in the Apple Intelligence marketing to start, and just really over-promising things.
But the idea that maybe at a certain point, they buy Mistral, which we've talked about
in the past, or they, I mean, there's more and more very capable chatbot experiences
or generally kind of like infrastructure level LLMs that can handle a lot of different
types of work that they could use.
So to just stop trying to do Apple Intelligence, maybe there is a world where that just happens in a year.
We start the board meeting.
We start the board meeting.
We start pushing it, activists.
Just show up there at the shareholder meeting.
Mr. Cook, it's people who really want Siri to work well.
What is going on there?
What is your current stake in the company?
I have like 20 shares, but I really want Siri to work.
I do have an iPhone.
Shouldn't that matter?
Yeah.
Look, I think that you make a really good point here where every time we get really down on Apple,
we just realize that the company has a massive install base.
It is true that they can still recruit the best and brightest.
They have access to all the open source.
You mentioned Mistral.
Maybe they don't even have to acquire it.
Maybe they could just download the weights and then run an LLM that way.
I mean, there's definitely technical expertise there that's required.
But Apple is still operating in a world where so much of this innovation is not proprietary, not contained within a single company.
And that is a huge bullish sign for Apple.
And maybe that's why they're still trading at a $3.6 trillion market cap.
It's still a generous.
It's a generous case for them, I think.
I think the stock price is just much more reflective of the financials. I mean, obviously, like, performance, iPhone sales in China,
iPhone upgrade cycles.
There's a lot of genuine, more kind of standard business issues, challenges,
that are facing the company.
But I feel that they almost more than make up for that in the growth of their subscriptions
and services segment, and that's what, again, has allowed them to paper over all these
concerns.
But I do think this is a core issue.
I do think, because they told everyone to upgrade because of Apple Intelligence.
They said it.
And to do it for Apple Intelligence would be the most ridiculous
consumer decision. If anyone told me they did that, I would be horrified
for them. For them. For them. And you get a higher multiple on subscriptions and services, like,
you get valued as a software company, not as a hardware company, so your market cap goes up.
Yep. But you're right, the liability is real. I mean, basically this is what Gurman
writes, just to close out this segment: there's little reason for anyone to buy a new
iPhone or other product just to get this software, no matter how hard Apple pushes its marketing.
I mean, you remember, I was talking about how I had an interaction in the Apple Store
where, of course, like, I'm going to buy more Apple stuff, which is so funny in the context of
this whole conversation. But as I'm about to walk out of the store, they're like, oh, and by the
way, we have Apple Intelligence. And they're like, do you know about Apple Intelligence? And I'm like,
unfortunately, I know too much. I think I want to go to an Apple Store now and get the pitch.
Like, as of March 2025, hear the store associate's pitch trying to sell this.
Because I'm curious what they're being trained on right now.
Like, is it still?
You can find your flight info and book tickets directly.
None of that.
I mean, yeah.
They very earnestly smile and they're like, Apple Intelligence.
And you're like, yeah, this is a labor rights abuse to make you pitch this.
I'm sorry.
That's the move.
That's the move.
We start a union backlash.
I do think, yeah, if we frame this through the unions,
potentially we could get a better Siri, Ranjan.
It's not right to make people try to sell and pitch Apple Intelligence.
It's inhumane, and I do not stand for it.
And instead of making them stop pitching,
you should make them continue to pitch, but just fix the software.
Yes.
I think this is really, the way to get what we need in the world is, yes, through the struggle.
Marx would love this.
Marx would be totally into this.
Workers, you have nothing to lose but your shitty voice assistants. Pretty sure that's what he said.
I think that's exactly what inspired a movement. Yes. Capital. Yep. Marx was trying to set an alarm with
Siri. It didn't work. He just got super pissed. A million should die. Yep. One last thing about
this: there's another Siri delay. So Gruber reported this, as did Gurman, I guess. The Apple spokesperson
said: Siri helps our users find what they need and get things done quickly, and in just the
past six months we've made Siri more conversational, introduced new features like Type to Siri
and product knowledge, and added an integration with ChatGPT. We've also been working on a more
personalized Siri, giving it more awareness of your context, as well as the ability to take action
for you within and across your apps. It's going to take us longer than we thought to deliver
on these features, and we anticipate rolling them out in the coming years. So the stuff that hasn't
shown up, there's going to be, I don't understand, there's going to be another delay. I guess we were
waiting for this. But again, what a statement. Very cheery. Look how great Siri's gotten,
according to the official line. According to most of us using the product, that is not the case.
Yeah, actually, maybe that's what's more concerning to me. If they just came out and said,
you know what, we err too much on the side of caution and safety and responsibility,
and we still believe in those. So that's why things are slow and not good. Just
say that. That's okay. I mean, you guys are the responsible, secure, private, safety
people, and that's a good story. And it's probably maybe true to an extent, but they still
try to pretend, much like that poor Apple Store associate, that everything is rosy and okay.
It ain't. I'm here to tell you. It is not. It ain't. All right. We got to go to a break.
I just want to remind folks, we'd love to hear your feedback as always.
So we have an email address, bigtechnologypodcast@gmail.com.
If you have constructive criticism or anything you'd like to share, you can use that email
address.
I read every single one of them and forward to Ranjan when applicable.
Ratings and comments on places like Apple Podcasts and Spotify are kind of like the front
door of the show.
So new listeners come and see those comments.
And we definitely, when there's good, constructive stuff that comes in there, we'd like to post
them.
We don't really have a choice on Apple.
But if you have a good experience with the show and you want to post it in a comment on Spotify
or a five-star rating on Apple Podcasts, that would be much appreciated.
We always want to listen to you, and we definitely have the right forums available.
So hope to hear you there.
Okay, let's take a break and come back and talk about what's going on with Microsoft and OpenAI.
And then we'll close talking about our jobs because maybe AI will take them.
And then you can listen to AI every week.
Anyway, we'll be back after this.
Hey, everyone, let me tell you about The Hustle Daily Show,
a podcast filled with business, tech news,
and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email
for its irreverent and informative takes on business and tech news.
Now, they have a daily podcast called The Hustle Daily Show,
where their team of writers break down the biggest business headlines
in 15 minutes or less and explain why you should care about them.
So, search for The Hustle Daily Show in your favorite
podcast app, like the one you're using right now.
We're back here on Big Technology Podcast, Friday edition.
All right, let's talk about this very interesting Microsoft and OpenAI story.
So, Ranjan, you and I have talked about how there's been tension between Microsoft
and OpenAI in the past few weeks, or in the past months, really, and The Information
really breaks it all down.
The story breaks down what Mustafa Suleyman, who's the CEO of Microsoft AI, has been up to
within the company.
It is very interesting as Microsoft starts to
effectively insulate itself from OpenAI.
And I do wonder what happens with the relationship there.
Let me just read the first couple paragraphs of the story.
Last fall, during a video call with senior leaders at OpenAI and Microsoft,
Suleyman, who leads Microsoft's in-house artificial intelligence unit,
wanted OpenAI staffers to explain how its latest model, o1, worked, according to someone
present for the conversation.
He was peeved that OpenAI wasn't providing Microsoft with documentation about how it programmed
o1 to think about users' queries before answering them. The process, known as chain of thought,
is a key ingredient in the secret recipe of any AI model. Raising his voice, Suleyman told
Mira Murati, then OpenAI's chief technology officer, that the AI startup wasn't holding up its end
of the bargain, its end of the deal with Microsoft, with which OpenAI has a
wide-ranging alliance, and Suleyman cut the call short. So that's who he yelled at.
He yelled at Mira Murati. He's simultaneously tasked with carrying on the OpenAI partnership,
At the same time, he's also under orders to put Microsoft on a path to self-sufficiency in AI,
so it doesn't have to rely on OpenAI's technology for the majority of Microsoft's AI products.
Interesting situation here for Suleyman. We finally have
confirmation that Microsoft is going to try to be self-sufficient.
And there's been some serious yelling, shall we say, between the two parties.
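Since chain of thought came up in that passage, here's a toy illustration of what the technique looks like from the outside: prompting a model to lay out intermediate reasoning steps before the final answer. The prompts and the sample output below are invented for illustration; they are not OpenAI's hidden o1 reasoning traces, which, per the story, is exactly the documentation Microsoft wasn't getting.

```python
# Toy illustration of chain-of-thought prompting: a direct prompt versus a
# prompt that asks for step-by-step reasoning, plus the kind of intermediate
# steps such a prompt tends to elicit. All strings here are made up.
direct_prompt = "Q: A train travels 120 miles in 2 hours. How far in 5 hours? A:"

cot_prompt = (
    "Q: A train travels 120 miles in 2 hours. How far in 5 hours?\n"
    "Think step by step before giving the final answer.\nA:"
)

# Example of the intermediate reasoning a chain-of-thought prompt encourages:
sample_cot_answer = (
    "Step 1: Speed = 120 miles / 2 hours = 60 mph.\n"
    "Step 2: Distance in 5 hours = 60 mph * 5 hours = 300 miles.\n"
    "Final answer: 300 miles."
)

if __name__ == "__main__":
    print(cot_prompt)
    print(sample_cot_answer)
```

The commercial point of the dispute is that frontier labs treat those intermediate traces as part of the secret recipe, which is why they aren't simply handed over, even to a major investor.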
This is a big story from The Information. I just want to get your perspective,
Ranjan, on this. Is this natural? Is this surprising? Do we already know this, or is this
potentially a breaking point between the two? I mean, we've definitely heard about
tension between Suleyman and the larger OpenAI management infrastructure, but I do
think it's getting worse. But it also makes sense, because on one side, the
entire Copilot suite of products, like the add-ons to Excel and Word, these are the
tools that have gotten the worst types of feedback from users. Those are the ones that we've,
there's been articles about how bad the uptake has been, or the kind of egregious errors that are
being made. So if those are the ones still being powered by OpenAI models, clearly there's
some kind of issue. I think, I do think this is an important moment, because when I was reading
through this, this is going to end very soon. Like, I just, the Microsoft-OpenAI
relationship, I think six months from now we're no longer going to be talking about it,
because it is making less and less and less sense every day. And I think it's clear that
there's a lot of infighting and politics, but, like, what would be the reason to try to continue
investing in that relationship when you also have OpenAI going all in on SoftBank and
so on anyways? Like, what's the purpose of this other than if Satya wants to just
have a little bit of a stranglehold and a leash on Sam, just for fun?
Well, you've already invested $13 billion if you're Microsoft. So the question is,
what happens with that money? I mean, part of that money was supposed to be that you get
OpenAI effectively as an outsourced research house for Microsoft, and that hasn't happened in the way
that they've hoped. Now, of course, they're going to get a percentage of the future profits,
and they're going to get a stake if OpenAI goes for-profit.
But that $13 billion, like, part of that was supposed to buy this collaboration.
Yeah, that's fair.
But also, if you think about, like, what's the risk to that $13 billion?
It's very, it's not going to zero necessarily, right?
It's not like you, as a steward of capital, I don't think the risk is that great
in terms of still having this much integration and interaction.
And I think the other main thing, too, is,
I mean, maybe I'm wrong on this, but it does feel overall like when you see the advances
coming out of other research houses, when the DeepSeek moment more than anything proved
this, the idea that the frontier model research houses are so far and away ahead, are they
that much better than Microsoft and Suleyman's team? Probably not, like, to such an extent that
at this point they're realizing, we don't actually need that, that's not going to make or break our
own business. You're right. This is a consequence of the commoditization of models, right,
that we're seeing open source be able to handle basically the queries that a lot of these frontier
models can. And that parity is quite important to a company like Microsoft's ability
to be able to compete. And in fact, there is reporting from The Information that Suleyman's team
recently completed training of a family of Microsoft models, internally referred to
as MAI, that perform nearly as well as leading models from OpenAI and Anthropic
on commonly accepted benchmarks.
The team is also training reasoning models, which use those chain-of-thought techniques,
that could compete directly with OpenAI's.
Suleyman's staff is already experimenting with swapping in the MAI models for OpenAI's models
in Microsoft's Copilot.
The company is considering releasing the MAI models later this year as an application
programming interface, or API,
a software hook that would allow outside developers to weave the Microsoft models into their own apps.
So I think what you said, it seems like it is indeed playing out.
MAI, MAI, 1.0, 2.0. It's a good name, too. I think it's got some good staying power.
I like it a lot. I like it. Good job, Suleyman. But yeah, that's it. Build your own models, release them, integrate them more directly so you have more control over them, and you don't
need to like, I mean, think about how ridiculous that must feel for someone like him where
he's like, just explain to me how o1 is doing its reasoning. Like, we are a massive investor
in you. Can you, even like academic to academic, brilliant mind to brilliant mind, let's just
talk about how o1 achieves this kind of reasoning. And they're hiding it from you. That's got to
be frustrating. Yeah. And by the way, for OpenAI also, there's got to be benefit from breaking
free with this, because OpenAI was restricted to using just Microsoft infrastructure. Remember, OpenAI
models are not available on Amazon's AWS because of that agreement. So this could be an
opportunity for them to expand as well. Let everyone free, Satya, it's time. Let Sam run free. Let
Suleyman run free. Let everyone run free, Satya. It's your call. I think it's happening. Here's more
from the story: at Suleyman's direction, Microsoft has been hedging its bets further by trying out models from
OpenAI's competitors to power Copilot.
Those include
Anthropic, Musk's
xAI, along with
open-source models from DeepSeek and
Meta Platforms. So
this once-tight partnership
no longer seems so tight.
Yeah. Maybe both got exactly what they needed out of it.
OpenAI got the resources to train
frontier models. Microsoft got the
positioning and the head start on Copilot.
And now they evolved to realize
that they're better off free.
Everyone's grown up, and it's time to move on to the next phase of the battle. I think there's got to be, I'm not, I'm not like a huge war history buff, but I'm sure there's some case in the past, like some historical examples of this, in great wars and battles, where, I don't know, an alliance like that, once neither side has that need anymore, just nicely breaks it off amicably and goes off. I believe it's the great voice-assistant-
based communist revolutions, where the Siri acolytes and the Marxists combined for better
functionality in the iPhone. And then once that alliance of convenience was no longer appropriate for both
sides, they broke off and went their own way. I'm going to go through all great tragedies
in history and try to figure out how Siri is at fault for every one. You've heard of Das Kapital.
This was Das AI assistant.
God. All right. If you haven't turned off the program yet, I appreciate you. All right. So one more
little bit here from the story, just to look at the amount of money that Microsoft has made on this.
No bet is more important at Microsoft than the one it's making on AI. Last month, the company
told shareholders it was generating more than $13 billion in annualized AI revenue across all of its
businesses, up from $10 billion just three months earlier. This is accelerating, Ranjan.
I mean, we're seeing those numbers across Microsoft from the Azure side of things from Google Cloud.
I think we're starting to see – I still take those numbers with a bit of a grain of salt, though, because what exactly are people paying for?
I know, like, these companies are a bit cagey in terms of representing what AI revenue means.
I mean, even, like, the large consulting firms, I think I saw numbers from, like, Accenture and KPMG even that they're –
making gobs of money on AI. So I think I'm still, yes, I'm guessing a lot of Azure clients are
using a lot of API calls and I think stuff's happening, but I don't know. I don't know.
I still take it with a bit of a grain of salt. I know what it must be, Ranjan.
It must be the wrappers. It's all about the wrappers. It's the wrappers. It's the wrappers. It's all about the
wrappers. So the wrappers are rising and the products are rising, as Ranjan has long
hoped for. This is from Bloomberg. The hottest
company, the hottest AI
companies right now are apps. Today's
so-called AI wrappers are all the rage.
Step into any, uh, step into any
venture capital office in Silicon Valley and you'll
hear investors buzzing about startups
that offer AI chatbots, research tools,
and other software applications for coding,
clinicians, and customer service.
All built at least in part on the backs
of large language models created by other
leading AI developers. These startups are seeing
revenue and valuations grow at a fast
clip, often while spending a fraction of the amount the top AI model developers do on chips,
data centers, and talent. Harvey, which was founded in 2022, surpassed $50 million in annual
recurring revenue in December. Harvey is a legal startup. Anysphere,
the startup behind the popular code-editing tool Cursor, has hit $100 million in annual recurring
revenue. Investors are eager to put their money into these services. Harvey raised $300
million. Anysphere also raised a lot of money. We just can't find the exact number now. And the
OpenAI and Anthropic rounds have sort of left VCs without the ability to invest in them. So it's all
about the wrapper. The product wins. The model's commoditized. And it's Ranjan Roy's future.
We're all just living in it. Except Siri is terrible. And that doesn't make up for any of the other stuff.
And somewhere Mustafa Suleyman is yelling at someone from OpenAI, because now he does not have to just smile and listen to what they're saying, because the models are commoditized and it's all about the product.
But this is good, right?
Oh, go ahead.
This idea that we have products being built and working.
We have Mike Mignano in this story.
He's a VC at Lightspeed; he's been on here.
Just like after the iPhone launch, there were millions of new mobile apps.
Now with AIs and LLMs, there will be millions of new AI products.
Maybe that's too optimistic, but it's an interesting take.
Yeah, I would like to submit for the record that we do not use the term wrapper.
I'm asking you out there, industry, information, reporters, whoever had written this one.
This was Bloomberg.
This was, I think, Kate Clark in Bloomberg.
This was Kate Clark at Bloomberg.
I think because wrapper still kind of denotes, there's, like,
a negative connotation to it. Versus, I mean, most products are built on some kind of
infrastructure and that's good. No one says like the greatest iPhone apps are just wrappers of iOS
or I don't know. Like it still, it takes away from how much work goes into creating a good
generative AI product. And I've spoken with people at large law firms who use Harvey, and
they're like, it's incredible. And maybe someone could use OpenAI's API and recreate the
entire thing, but they're not going to. And a large law firm will happily pay tens of thousands,
if not hundreds of thousands of dollars to have someone else do that work for them because
they're not a software developer. And I think it's just a reminder that like, this is where
people finally start using it. Because, I mean, big law would probably be the most resistant to using
generative AI if they were still having to go to ChatGPT. So it's good that Harvey's out there
and Cursor's there for all the coders, and more, more like this, please. Well, for context, like, they
call it wrapper because the capabilities can get better and eventually sort of subsume the
apps, whereas you can't do that with cloud. Like, you can't have Amazon Web Services all of
a sudden get better and next thing you know, it's a SaaS platform. You can have that happen
with an LLM. Now, there are going to be specialized use cases that you're going to want to use
the specialized software for, but something like coding, for instance, you would imagine that
these services will get better and maybe eventually compete with the wrappers built on top
of them. I'll use the word. But you make a good point that it's customized. It's built for a certain
user. And over time, they're just going to like kind of branch off and be their own thing.
All right. That's fair. And I think
maybe what happened is we saw early on, early on, like two years ago, with the big jumps
from, like, GPT-3 to 4, adding DALL-E directly into ChatGPT, like, we started to see certain
successful apps, like image generation apps, get basically destroyed because the capability
became native and integrated into the larger chatbots. So I guess we've seen that, but I just don't
see that happening in the same way for large, verticalized industry use cases, enterprise use
cases.
I think, like, yes, editing a photo to make my face look old. I remember, like, two years
ago, there was, I feel, in early-stage generative AI imagery, there was, like, a bunch of, like,
look-at-yourself-old apps, or just these kinds of things where you'd pay some Chinese company, like,
all my money is in some Chinese bank account from seeing what I look like old.
It's some Chinese company just hoovering up your facial recognition data.
But, and we're all paying because it was fun, for, like, 99 cents.
And then ChatGPT, yes, subsumes that layer of apps.
But, but yeah, it'll be interesting.
I met a guy this week who's like, I'm going to add you on LinkedIn.
I said, okay, he's like, I have a pretty cool picture on LinkedIn.
I said, okay, he goes, it's AI generated.
My mom framed it.
I said, okay.
There's, can we get him on?
I want to know so much more about whatever, whatever is going on.
This wasn't just the bourbon speaking, because I was like, all right, opening my phone, looking at this LinkedIn request.
And I was like, holy shit, that's a great picture.
It's a great photo.
He really has a great AI picture.
It's very cool.
I immediately wanted one.
But then, if my mom, if I go home, I actually am in Boston right now and I'm going to be going to
my parents' house where I grew up very soon. And if there is a large blown-up photo of my
LinkedIn profile picture, you'll know you're loved. I don't, there's so many layers to how
weird that would be. Yeah, but it's not. Is it AI? Okay. It's got to be cool and AI.
That's why. Maybe that's why my mom's not framing it. She's like, if it was AI, then we'd be framing it.
Remember, you were sitting around, you know, at the Super Bowl watching dot-plot
ads for OpenAI. All your mom needed was a lovely picture of you with AI on LinkedIn.
Then she would know what to do with it, print it out and frame it. I'm not even trying,
I'm not even ragging on this guy or his mother. No, I'm not at all. It is an amazing photo.
It's great. I would frame it. I thought about framing it. I don't even know. I don't even know his mom.
That, that actually is quite a move, I think. That would add so much.
You're on CNBC, podcast tapings.
What's going on behind you there?
Oh, it's a LinkedIn guy.
It's a LinkedIn guy with a cool picture.
Cool picture.
It's art.
Learn to appreciate it.
People don't.
People don't enough nowadays.
I know.
It's a problem.
Our society, it's gotten coarse.
So let's round out.
You're in Boston this weekend.
I hope you pick up a copy of the Boston Globe on Sunday
because you will see my op-ed on there.
And I've syndicated this week's Big Technology story,
OK, I'm starting to think
AI can do my job after all, with the Boston Globe Ideas section.
So thank you, Boston Globe Ideas for running it.
It is a follow-up to my last Boston Globe piece,
which is, wait, ChatGPT didn't take my job.
And I basically come and say,
listen, I'm sorry for taunting ChatGPT.
I'm starting to see that a lot of what I do can start to be done
with AI. And it really goes back to this, like, what can voice AI do? And I include the anecdote of
Evan Ratliff, who came on to talk about AI clones and the clone that he made of himself and that he
actually built an AI voice clone. He prompted it with a bunch of questions to ask a voice tech
CEO. He sent it out to a voice tech CEO to do an interview, and it did a better job than him.
So, Ranjan, I'm just kind of curious what you think about my thesis here, that AI maybe not taking
our jobs, but it's starting to be able to really do a lot of the work that we do, work that
we thought would never be in the path of the machines. I agree with that, and I think it's not,
it's not bad. I mean, I think there's a lot, like, even the idea of sending a voice AI clone to go
do an interview, if the interview is essentially just kind of like, here's a bunch of pre-written
questions and I'm just trying to get information, then that actually is a good idea. And it's
cooler because there could be some back and forth in interaction, but it's not going to be,
you know, like really deep and go in lots of new directions, but it'll get the right
information out. So if you can interview more people or get more information and write more
stories because of that, especially as, like, an individual creator, I think that's great. I think
this kind of stuff, for smaller media outfits like us, I think is good.
Yeah, so this is how I ended the piece.
I said, as AI extends beyond the chatbot and towards something that can research,
take calls, and even pontificate, they'll likely become a force multiplier used to scale up
individuals' effort and help them cover more ground.
That might lead to less hiring, smaller companies, or potentially fewer overall jobs.
And now I'm less confident in our broader ability to weather this change
without pain. So I think we could have smaller companies doing what bigger companies do, but
with fewer people. But then again, the other side of the coin, which you just brought up, is,
if you're me or if you're basically working on something small, this can be something that can
really increase your productivity. So there's two sides here. Yeah, I think, I mean, it gets into
the deeper abundance debate, that, like, if you have smaller companies, could you be
creating a lot more with that, and then does it create new behaviors? Again, people thought
banks, physical bank locations, would go out of business, that physical malls would go out of
business, and we keep seeing over and over that that doesn't come to fruition. So I think we have
no idea how this turns out. I do agree that people who know how to leverage this technology,
and companies that know how to, can do a lot more with a lot less
than their competitors.
I think that's like we've all seen how clear that is.
But what that looks like across society and at scale, it's a tough one.
Yeah, it's crazy.
I mean, this is, I'm starting to see the stuff be able to do things that I just never dreamed possible.
I don't know if you've been, like, experimenting with the Claude coding capabilities,
but they have gone from just being very rudimentary and somewhat disappointing,
if kind of cool, to, like, being downright insane.
I'm going to write about this in a future Big Technology story,
but I uploaded a fake bank statement to Claude
and had it write me a financial plan.
It took just an image and added up the numbers appropriately.
Okay, that's crazy.
Then I told it to, like, plot my expenses on a bar and a line graph.
It did that, or my balance on a line graph,
my expenses on a bar graph.
It did that.
And then I said, yeah, build me this financial plan.
And then after it built a financial plan, I said, build me a retirement calculator.
It built a bespoke retirement calculator, with the numbers pre-populated, that works.
And you could change the variables and kind of see where you're going.
This thing is crazy.
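For a sense of what a calculator like that computes under the hood, here is a rough sketch of the compound-growth math such a tool typically runs. The starting numbers are hypothetical placeholders, not figures from the episode, and this is an assumption about how a generic retirement calculator works, not a description of what Claude actually generated.

```python
# Rough sketch of a generic retirement calculator: project a balance forward
# with monthly contributions and compounding. Numbers below are placeholders.
def retirement_projection(balance: float, monthly_contribution: float,
                          annual_return: float, years: int) -> float:
    """Project a balance forward month by month."""
    monthly_rate = annual_return / 12
    for _ in range(years * 12):
        balance = balance * (1 + monthly_rate) + monthly_contribution
    return balance


if __name__ == "__main__":
    # Change these variables to see where the projection goes.
    projected = retirement_projection(balance=50_000,
                                      monthly_contribution=1_000,
                                      annual_return=0.06,
                                      years=30)
    print(f"Projected balance in 30 years: ${projected:,.0f}")
```

The impressive part in the anecdote isn't the arithmetic, which is simple, but that the model went from an image of a bank statement to a working, pre-populated tool in a couple of prompts.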
I also prompted it to build me a video game.
And it did it, and it's very rudimentary.
It did it in one prompt, with two further design refinements.
And it's like an actual playable
game. I mean, it's very, very basic, but this stuff is getting crazy, the capabilities.
And the general public, I don't think, is fully aware of how good it's gotten even over the past
three months. Well, actually, so for my son, who's in kindergarten, they'll be like, here's
like, here's, like, 25 words to memorize, here's the next 25. So I made a game in Claude, and it,
like, shows animals that he likes when he gets it right, and, like, added those kinds of little
flair elements to it, and it works perfectly. And I run it directly as a Claude artifact. At first,
I was trying to, like, I was like, should I upload this to the App Store and stuff like that? But it runs
well as an artifact. And then realizing, again, from an education standpoint, having, like,
hyper-tailored learning tools for kids everywhere in the world is amazing, is incredible to think
about. But it's actually funny because that one, my mom saw
us using it. And she had no, like, she was just kind of like, oh, that's nice. Like, the idea that
I had programmed it myself, like, I didn't even, she's like, oh, that's nice. You're practicing
math with him. That's a good game. But like, I could have just been like, this is it. This is
the AI mom. This is it. Yeah. Put this in a frame. Put this in a frame. No, put that other guy's
LinkedIn photo. Yeah. So listen, the only thing we need to do at this point is just write a prompt for a
contextually aware voice assistant that works on your iPhone. I mean, you should be good. I'm sorry,
I had to. The only way to end it this week. Yes, folks, I just want to let you know, Ranjan is
fresh off a plane. I think he's kind of under the weather. Still showed up, dedicated to the
craft. We're lucky we got a trooper with us here. So thank you, Ranjan. Well, when we're talking
Siri, the jet lag from a quick London trip will not stop me
from that topic.
Nothing will stop this man.
All right, everyone.
Please check out the newsletter.
Give us five stars on Apple Podcasts and Spotify.
Send the show over to your friend, if you like.
Join the Discord as a paid member of Big Technology Podcast.
And have yourself a great weekend.
We are going to have a great show next week.
It's the CEO of Roblox coming on to talk about building video games with AI and a whole bunch of other stuff.
So we hope you tune in then.
Otherwise, Ranjan and I will be back on Friday as usual. Thank
you so much for listening, and we'll see you next time on Big Technology Podcast.