Big Technology Podcast - Countdown to GPT-5, OpenAI’s Stargate Sputters, AI Math Wars
Episode Date: July 25, 2025. Financial Times San Francisco Bureau Chief Stephen Morris joins for our weekly discussion of the latest tech news. We cover: 1) Is GPT-5 really on its way? 2) GPT-5's reported capabilities 3) Is Sam Altman going to call GPT-5 AGI? 4) If GPT-5 codes well, where does that leave Anthropic? 5) Stargate hasn't made a single data center deal yet 6) Scaling Laws back in vogue? 7) AI math olympiad faceoff 8) AI data centers energy costs 9) Google's impressive earnings 10) Tesla's dark outlook 11) Satya Nadella addresses Microsoft's morale after layoffs --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
GPT-5 may indeed be coming soon, and some good reviews are starting to arrive.
OpenAI's Stargate program is off to a rocky start,
AI labs go to war over math, and big tech earnings start to pour in.
That's coming up on a Big Technology Podcast Friday edition right after this.
Welcome to Big Technology Podcast Friday edition,
where we break down the news in our traditional cool-headed and nuanced format.
We're joined today by a special guest to help us break down the week's news,
Stephen Morris is the San Francisco Bureau Chief at the Financial Times and is here to speak with us about everything OpenAI, GPT-5, and we might even cover some Tesla earnings.
Stephen, it's great to see you again. Welcome to the show.
Great to be here, Alex. Thank you.
So let's talk about these rumblings, because I don't think it's worth hesitating before we dive right into them: GPT-5, OpenAI's latest, greatest, biggest, most hotly anticipated, most delayed model, is seemingly
on its way. The Verge says OpenAI prepares to launch GPT-5 in August. The story reads: earlier this year,
Microsoft engineers were preparing server capacity for OpenAI's next generation GPT-5 model
arriving as soon as late May. After some additional testing and delays, sources say that OpenAI
plans to release GPT-5 as early as next month. Sam Altman, the CEO of OpenAI,
says that they are going to be releasing GPT-5 soon and talked about some of its capabilities
with Theo Von and said some crazy things about it, which we'll get into in a moment.
But first on the timing, Stephen, is it time for us to believe Open AI and the rumors that
this thing is actually going to come?
I almost felt silly putting it as the lead story in today's show.
No, I think that was the right call. This very much has consumed
Silicon Valley and San Francisco recently with speculation about will we actually see it this
time. Like you said, we've had so many false starts, so many false reports, but Sam Altman is out
there telling everyone it's coming next month. You know, traditionally, as you know, August is a
sleepy time for journalism, but definitely not this year. What we're a little thinner on is the
details about what exactly this is. What does it look like? And will it be kind of the leap forward
that I really do think OpenAI needs it to be.
There is a lot going on with this company at the moment, you know, everything from
negotiations on restructuring with Microsoft, building these vast global Stargate network of
data centers, you know, new devices with Johnny Ive, but I think what underpins all of this,
the 300 billion valuation, the hype, is the fact that it has one of the best, if not the best
models. And that hasn't necessarily been true for a while. So this is
Sam and the rest of the researchers' chance to really come out and shine. You're saying it hasn't
necessarily been true that OpenAI has the best model for a while? Certainly, if you look
at benchmarks and if you talk to people around the sector, that's definitely the perception.
For a while, there wasn't really, you know, a legitimate competitor. But you talk to people now,
including, you know, its most vital stakeholder of Microsoft, they're locked in these negotiations
about OpenAI transforming itself into a for-profit entity.
And these are very tense.
But, you know, when you speak to people over there, you know, they don't necessarily think
that it's clear that OpenAI is out in the lead, that it has like the most persuasive,
definitive model anymore.
And you're starting to see it offer other things off its platform like Elon Musk's Grok from
xAI.
And I think if GPT is the slam dunk that certainly Sam Altman is telling everyone it's going to be,
that will kind of put, you know, those kind of doubters and those questions to rest.
Yeah, GPT-5 being that slam dunk.
Yeah, I'm looking at LM Arena right now.
Number one model, it's actually a tie between Gemini 2.5 Pro and OpenAI's o3.
OpenAI takes the next couple of spots with ChatGPT-4o-latest and 4.5 preview.
Hmm, wonder what that might be.
And then Grok and Kimi K2 are hot on its tail.
So there's definitely a lot of competition at the top.
But, you know, I almost would say that ChatGPT or OpenAI has put more pressure on itself than its competitors have.
You know, there's been this, the way that this company speaks has been this long anticipation that its next big number release, that the GPT-5 release, would be something really special,
would potentially even be artificial general intelligence, although I guess like that might
be more the hype than the company itself, but just the way that it talks about the way that
these models are going to work. It's pretty wild. Here's Sam Altman on Theo Von, which,
my speculation is that they booked this because they thought this would be GPT-5 week and then
they just had to like talk about what it might be because they haven't released yet.
Here's what Sam Altman says. This morning I was testing our new model. I got emailed
a question that I didn't quite understand, and I put it in the model, GPT-5, and it answered it perfectly.
And I kind of sat back in my chair.
I was just like, oh, man, here's the moment.
And I got over it quickly, and I got busy onto the next thing, but it was like, I felt
useless relative to the AI and this thing that I should have been able to do, and it's really
hard, but the AI just did it like that.
It was a weird feeling.
Altman also said that GPT-5 was able to code up a project for him in like five minutes that
would have taken much longer otherwise.
Are they overhyping this?
Are they putting too much pressure on themselves?
I don't think Sam Altman has underhyped anything in his life.
So I don't know about that.
But if what people are saying and, you know,
what they're speculating about is true.
What we're going to see is a much larger model that marries all of the, you know,
innovations and capabilities from reasoning to deep research,
multi-modal capacities, reading text, seeing videos, and audio, and then wrapping that together
into a very speedy and, you know, cost-effective package, it really could be, you know, a leap forward.
I mean, what he has said about the models is that, you know, you shouldn't have to pick
which one you use yourself for a variety of different tasks.
I mean, the famous question asking it how many R's there are in strawberry, you know,
models have struggled with this very basic thing, whereas they're extremely good at
complicated math problems or coding. So if you ask it that question and the model doesn't
accidentally go off and spend five minutes doing a very expensive, token-consuming deep research
project, but just knows itself, that's a huge time saver for consumers, in particular ones that
are less sophisticated, who are just using this as a chatbot. But then also, I think what Open
AI has been feeling, you know, in its competition with Gemini and Anthropic's Claude,
is that it is perceived to have slipped behind a little bit on coding.
You know, that is the most tangible and financially rewarding real-world application
of these things so far.
And I think what we're going to see is OpenAI really strike back on that
and say, look, we can compete with Gemini, we can compete with Claude.
And your business should buy our enterprise product,
not just the consumer chatbot, which has, like, really captured the public imagination,
but big companies taking up big, hefty, multi-year contracts,
giving it sort of a lot more visibility into the future and its revenue.
And that is, if the coding aspect lands right with this,
I think that could be quite transformational for its business model.
That's perfect lead in because the information does have some news on that.
So they say GPT-5 shines in coding tasks.
This is, I think, a brand new story here on Friday.
The Information writes, GPT-5 is almost here and we're hearing good things.
The earlier reaction from at least one person who has used the unreleased version
was extremely positive.
I'm just going to pause here.
I always, when I read this one person who said it was really great,
I'm always like, that's Sam Altman.
But anyway, they say GPT-5 showed improved performance
in a number of domains, including the hard sciences,
completing tasks for users on their browsers and creative writing
compared to previous generations of models.
But the most notable improvement comes in software engineering
and increasingly lucrative application of LLMs.
GPT-5 is not only
better at academic and competitive programming problems, but also at more practical programming tasks that real-life engineers might handle, like making changes in large, complicated code bases full of old code.
The nuance has been something that OpenAI's models have struggled with in the past, and is one reason why rival Anthropic has been able to keep its lead with many app developer customers.
But as we've reported, and this is still The Information, OpenAI
is more than aware of this issue and has been working in recent months to improve the coding
capabilities of its models. I mean, what are the implications if OpenAI is, let's say,
able to equal or pull ahead of Anthropic, which we know is the state of the art in coding,
with this new model? Well, they've pursued very different, you know, subscription and revenue
models so far. OpenAI has, you know, it's almost the verb, like to Google. You chat or you
GPT the question, especially if you're a young student. Whereas Anthropic's Claude
just doesn't have the same brand recognition, but it has relentlessly gone after what they call
enterprise customers, like big businesses, offers them access to their technology through APIs
and has longer, bigger and more visible contracts. OpenAI has long been jealous of this. It wants
in on the game. It's also competing with Microsoft, you know, its own partner in offering
these services through, you know, the Azure platform. But increasingly,
Google and Gemini, which trumpets its coding chops. So if OpenAI is able to prove that its models are
at least as good, if not much better, then it can start to take back some of this. And it really
does change the competitive landscape. Because I think ChatGPT is the, you know, undisputed winner of
like the consumer chatbot wars so far. What it hasn't proved is that it can make the transition
to the business and governmental world in the same way that some of its competitors
have. And maybe they were forced to go down that route because OpenAI was just sucking all of
the oxygen out of the room on the App Store.
Exactly. And it's interesting that you mentioned that OpenAI is trying to do this. It's sort of like
a frenemy partnership with Microsoft at this point. I mean, of course, Microsoft is going after those
enterprise use cases. They're probably coming up against Amazon reselling Anthropic. Amazon's
invested $8 billion in Anthropic.
And, you know, the two companies really have to play together if they're going to get this much better with their next model to really be able to make that enterprise play.
And that's, that's really an open question.
And there was an interesting aspect of The Verge story, which is, does OpenAI declare AGI with this new model?
Does it say it's reached like human-level intelligence or artificial general intelligence and sort of begin what might be a break
from Microsoft? So this is from the first story. The declaration of AGI is particularly important
to OpenAI because achieving it will force Microsoft to relinquish its rights to OpenAI revenue
and its future AI models. Microsoft and OpenAI have been negotiating their partnership recently
as OpenAI needs Microsoft's approval to convert part of its business to a for-profit company.
It's unlikely that GPT-5 will meet the AGI threshold that's reportedly been linked to OpenAI's profits.
This is according to The Information:
the companies have defined artificial general intelligence as a system generating $100 billion in profits.
So let me throw this to you.
Do you think they're going to declare
AGI with GPT-5?
And if they do,
what happens to that partnership with Microsoft?
I hate to make bold predictions,
especially in tech,
because you can often be spectacularly wrong,
but I do not think they're going to say
GPT-5 is anything approaching AGI,
however you choose to define it.
There's also
a lot of nuance in its relationship with Microsoft. Just for anyone that's not familiar, Open
AI has for a while been trying to restructure its company from sort of a pure non-profit
pursuing AI for the benefit of all humanity to create sort of an arm beneath this
entity that can actually raise a lot more money, in particular debt from more traditional
investors, which it argues is necessary for it to be able to build these data centers and
invest in people and processing power to compete.
Microsoft essentially has the keys to unlock that because of its early investment in 2019,
which it has then increased tenfold.
And one of the key clauses in this agreement is this AGI clause.
Once OpenAI hits AGI, whatever that is, Microsoft is essentially shut out of the deal,
with the idea being that you shouldn't hand over the most powerful technology ever known to man
to a for-profit company like Microsoft because they can't be trusted with it.
Back in 2019, this probably sounded like a good idea, because who knew when we were going
to hit artificial general intelligence?
My, how times have changed.
Or super intelligence.
And now, you know, you have Musk and Altman out there saying that they can feel the AGI
on almost a daily basis.
However, Microsoft is a big, ugly, competitive tech company that's been around for 50 years.
And they're not just going to let Sam Altman say, I feel like AGI has been reached.
You know, firstly, the board will have to, you know, the board of OpenAI will have to
form a subcommittee, in which Microsoft will have a say, to decide how to define it and whether
they've reached it. And secondly, there's a financial aspect. This has to be able to generate,
I think, and I may be wrong on this, more than a hundred billion kind of in revenue a year.
No, profit. Profit a year.
Well, profit, exactly, which obviously is not really being made by any AI companies at the moment.
So there's a lot, it's not like this, you know, by saying it feels like AGI, Microsoft
were immediately excluded.
Their lawyers and their chief executive,
Satya Nadella, are much savvier than that.
What I think we are seeing, though,
is, you know, is a path to a complete fracture
in the relationship between these two companies.
You said they were frenemies.
The Financial Times has done a lot of reporting on this.
I'm not even sure about the "fr" start to that relationship at the moment.
They seem to be an almost outright war,
briefing against each other and really trying to,
Well, they're basically trying to secure the best deal for themselves and their shareholders, right?
But there is an element, as you said, of mutually assured destruction here.
Open AI is Microsoft's only real play in AI at the moment.
They've created their own team internally, led by Mustafa Suleiman, the co-founder of DeepMind along with Demis Hassabis, who's still at Google.
They really haven't had much success building their own models, and a lot of what they're offering customers,
their big, millions of enterprise customers off their Azure platform, is their own spin on OpenAI's underlying technology.
And Open AI, for its part, relies almost entirely on Microsoft's Azure Cloud Computing Network to train and run its models.
It's a great distribution method.
And also, it's one of its biggest financiers, entitled to a huge share of its revenues.
So these two are sort of tied together.
They both jumped out of a plane.
And it kind of remains to be seen who's the first to
blink and pull the parachute, and how far down they get before that. We wrote a story a few weeks
ago that Microsoft is, you know, willing to just walk away if it doesn't get what it needs
from these restructurings. And that includes the language around AGI. So it's interesting you
bring up Mustafa because I had him on, actually we spoke on YouTube and for Big Technology's newsletter.
And he was describing this new medical diagnostic orchestrator that Microsoft had built. And he said,
look as the models get commoditized, it's going to be the orchestration that makes the most
value. And it's like, oh, okay. So if you think, like, it sort of indicates a lack of faith in
OpenAI to continue to have that lead if you believe that the models are going to get
commoditized. But look, Stephen, I'm going to take the opposite side of yours on the AGI question.
I think there is a decent chance. I'm not saying it's for sure going to happen.
I think there's a decent chance that they are going to say it's AGI, that GPT-5 is AGI, and by they, I mean OpenAI.
And then just see what the F happens.
Because first of all, I think Sam likes chaos.
Second of all, just the way that he's speaking about this thing is, is, if this isn't something he would call AGI, then I don't know what it is.
Again, he says, I sat back in my chair and I was like, oh, man, it was a here it is moment.
Well, I would ask, what is the "it" in a "here it is" moment?
I think they had this with o3, I don't know if you remember, but Tyler Cowen, he said o3 was AGI.
And my conspiracy-minded self said, maybe somebody whispered in his ear that he should just call it AGI.
and sort of like clear the way for then someone like Sam Altman to say GPT-5 is AGI, and they were
restrained in waiting because you had a leading academic who said, okay, well, the previous
model fits that pattern. I don't know. Am I crazy? No, I mean, you're not crazy. I mean,
you know, this is the race to get there and the kind of, you know, the scientific and monetary
rewards if you do make it are just astronomical. And these things are coming
along very quickly. It's just AGI, there's no agreed definition. Even if you talk to Sam and we do at
the FT, he says, I don't even really know what that means anymore. But there's also a legal
definition as well. He is the chief executive of a company worth 300 billion. And when you say
things, you know, they can often end up in court. I mean, they said the open AI said a lot of things
about what their company was and its mission, which is now being used against it by Elon Musk as he
kind of runs interference around the outsides of this restructuring, you know, for people that
don't know, he sued OpenAI saying it's abandoned its nonprofit mission. Elon Musk was, of
course, one of the co-founders and one of the biggest financiers at the start. And at the moment,
it looks like the Delaware attorney general and California attorney general have agreed with him
and have said, actually, yeah, they don't look a lot like a nonprofit anymore. If he comes out
and says, I think this is AGI, this is how I'm defining it, and this is how we're going to prove
it. Like, this model is better than, you know, most humans at almost all tasks that we give it.
Maybe you could make an argument for that. And AGI is a word of particular importance to Open
AI, as we explained at length before, because of the restructuring. But a lot of people now are
talking about superintelligence, like you mentioned Mustafa before and Demis. They're talking about
building systems that are capable of being far better than the best human at, you know,
a huge variety of tasks, and don't just kind of regurgitate and piece together the sum of
human knowledge that they've harvested from the internet, but can actually come up with
new things, you know, new ways of building rockets, new ways of generating power.
And I think that's kind of, did you not feel recently that that's where the goalposts have been
shifted?
ASI from AGI.
Yeah, it's definitely the new jargon term,
which has even made it easier
for a company like OpenAI to say,
hey, you know what?
We're going to call this AGI.
And here's another quote from the Theo Von podcast
where Sam says,
GPT-5 is the smartest thing.
GPT-5 is smarter than us in almost every way.
You know, and yet we're here.
Dude, this is, this is it.
It's coming.
I mean, I am, again, I'm not stating this
conclusively. I'm leaving myself open to be wrong, and I'll admit I'm wrong if this is what happens.
But if he doesn't come out straight up and say this is AGI, then there's going to be lots of
winks to it. I think that you're going to see a tidal wave of commentary calling it that when this
comes out. I think it's a smarter tactic to let other experts in the field, scientists, rivals, Trump,
say it for you, and then you kind of move on from that basis and say, hey, look, you know, it's a subjective
term, you know, let's take it to our board and see if they agree. Wouldn't it be
funny if Trump just came out on Truth Social and said, this is AGI, and Sam is like, well, see, look,
the president's saying it? I mean, I wouldn't put it past any of them. Sam has obviously
managed to get very close to Donald Trump, much to the chagrin of
Elon, who initially held his role as the first buddy. But, you know, we've seen Altman
in the Oval Office, just days after the inauguration, announcing this huge Stargate project, you know, he's appearing at the president's fundraisers.
You know, he's very, very good with politicians.
Exactly.
All right.
So in order to keep improving the models, these companies are going to have to build larger and larger data centers.
And at the heart of it is this $500 billion push to create a massive
data center project with Oracle and SoftBank on behalf of OpenAI. It's called Stargate.
But there is some news now that Stargate is hitting some speed bumps, and we're going to cover
that right after this. Hey, everyone, let me tell you about The Hustle Daily Show, a podcast filled with
business, tech news, and original stories to keep you in the loop on what's trending. More than
two million professionals read the Hustle's daily email for its irreverent and informative takes on
business and tech news. Now they have a
daily podcast called The Hustle Daily Show, where their team of writers break down the biggest
business headlines in 15 minutes or less and explain why you should care about them.
So, search for The Hustle Daily Show in your favorite podcast app, like the one you're using
right now.
And we're back here on Big Technology Podcast with Stephen Morris.
He's the San Francisco Bureau Chief at the Financial Times.
Great having you on, Stephen.
Let's talk a little bit about Stargate.
So this is from the Wall Street Journal, the $500 billion effort unveiled at the White
House to supercharge the U.S.'s artificial intelligence ambitions has struggled to get off the
ground and has sharply scaled back its near-term plans. Six months after Japanese billionaire
Masayoshi Son stood shoulder to shoulder with Sam Altman and President Trump to announce
the Stargate project, the newly formed company charged with making it happen has yet to
complete a single deal for a data center. Son and Altman's OpenAI, which jointly lead
Stargate, have been at odds over crucial terms of the partnership,
including where to build the sites, according to people familiar with the matter. While the
companies pledged at the January announcement to invest $100 billion immediately, the project
is now setting a more modest goal of building a small data center by the end of this year,
likely in Ohio. I think that small data center is like still a gigawatt data center. So small
in the scale of what they promised, but still fairly large. What do you think is happening
here? I'm really struggling to figure it out. So I remember when I first heard about Stargate, I was in
Davos, you know, the big conference of the great and the good, over in a small
mountain town in Switzerland, and this announcement blindsided everyone. I had actually just recently
met with the CFO of OpenAI a few hours before and she gave nothing away. And everyone looked at
these astronomical numbers, like half a trillion dollars, you know, a hundred billion initially, you know,
power on a scale, like, almost unimaginable.
And since then, we at the Financial Times have been trying to work out where this money is coming
from, where it's going to be deployed.
And just as you can see in the Wall Street Journal article, which we've been writing
along the lines of as well, it's not clear that this is going well at all.
They haven't identified very many sites.
The money hasn't fully come in from the huge Japanese investor SoftBank.
I guess in any conglomerate that's at the frontier of artificial intelligence with multiple different, you know, agendas, it's very hard to get everyone on the same page.
Like, where do you even build these things? Is the power infrastructure there? This is all complicated by the tense restructuring negotiations as well.
But what is very clear is that there hasn't been a hundred billion immediately deployed, as they promised at that infamous White House announcement.
And they're now changing the definition of it. Stargate, of course, is a reference.
to the, I forget when it came out, the film that, you know, enabled people to time travel.
And OpenAI called it Stargate because that was the biggest human infrastructure project
ever in that fantasy book.
And that's what they want this to be.
But they haven't actually managed to get it off the ground yet.
And that must be somewhat concerning for both the company, but also its investors
because you have very, very big competitors out there, like Meta, Microsoft, Google,
who are snapping up land, you know, power contracts.
They have deep government relations,
both on the state and federal level in the U.S. and around the world.
And you're trying to beat these guys at their own game,
and they're not going to be particularly happy about it.
So I do want to wink at something that we're going to have on the show next week.
I mean, one of the interesting things that you're starting to hear,
and I'm curious if you've heard this, Stephen, around AI research labs,
is that scaling is back in vogue.
There was like a period of time where these labs were like,
yeah, we're making these models bigger,
we're adding more compute, but we're getting diminishing returns.
I think maybe with the advent of what Grok is doing
and what Mark Zuckerberg is pushing,
there's starting to be a wave of AI researchers
believing in this sort of bitter lesson,
which is just that you don't really need new methods,
you just need more compute, and that's how
important this project is.
I'm curious, have you heard anything like that?
Absolutely.
Well, remember, the other thing that happened that week in January when I was in Davos was
DeepSeek's release, which was built on top of other open source products.
And they basically said, you can be innovative and you don't have to brute force this
with millions of GPUs all linked together on these vast, expensive training runs.
Because remember, if one of these training runs goes wrong,
it's like a billion down the drain, you know, and you can't necessarily afford to do that
too many times. Right. That's why Mark Zuckerberg, instead of spending billions on training runs
gone wrong, said to spend it on talent. Yeah, well, I know you've covered this extensively in your previous
editions, but I was chatting to somebody at Google the other day, and they were like, look at
who's moving. It's not necessarily the most innovative researchers coming up with new ways to do
this. It's the people that know how to manage these training runs to make sure that they're
successful, you know, and that's part of the reason why Google bought a company called
Character, because the founder of that, Noam, was able to marshal their training runs of Gemini
much more effectively, meaning that they're faster to market and waste less money. But just to go back
to that brute force thing, like the more chips, more chips, more capacity, better model, that is
definitely coming back again. You wouldn't have Meta trying to build a data center the size of
Manhattan. You wouldn't have Anthropic talking about, you know, loosening its, loosening its policy,
shall we say, and taking money from the Middle East,
and you wouldn't have OpenAI trying to break away from Microsoft
and build its own gargantuan data center structure around the world,
if size in this didn't still matter.
Remember, it's not just the training of these models.
You've got to run them afterwards.
You can't afford to have them get slow or fail and go down
because your customers won't be happy.
And meanwhile, if you think about what's happening with Stargate, again,
because if this is the key, then Stargate is crucial.
This is from Safra Catz, the CEO of Oracle.
Stargate isn't formed yet.
What?
This is Wall Street Journal's story.
Altman has used the Stargate name on projects that aren't being financed by the partnership between OpenAI and SoftBank.
OpenAI refers to a data center in Abilene, Texas, and another it agreed to in March to use in Texas, as part of Stargate, even though they are being done without SoftBank.
And SoftBank, I'm pretty sure, owns this Stargate name.
It's all very confusing.
Meanwhile, Open AI comes out with some news.
It's going to expand its Oracle Data Center
and it's going to develop an additional 4.5 gigawatts
of Stargate data center capacity.
It announced on Tuesday, the day after the Wall Street Journal story,
and it looks like it's going to attempt
to build 10 gigawatts of new compute through Stargate.
I think that was already sort of baked in,
but they're sort of signaling through the press that,
you know what, there are no speed bumps.
Who do you believe?
Well, I think there are definitely huge speed bumps that they're having to go over.
That's not to say that they won't make it, but they are finding this more challenging and more difficult than they had expected.
Part of the reason they're able to claim Stargate is off the ground is because they've changed the definition of it.
Previously, it had to involve OpenAI; Oracle, the data center provider; SoftBank, the financier; and MGX, which is a huge
new sovereign wealth, well, sovereign wealth-linked fund in the Middle East that was going to provide
a lot of money. Now OpenAI says any data center that we rent, because remember they don't
build it themselves, that's Stargate, which is obviously not what they said in January.
So they're kind of, but you remember, Safra Catz, chief executive of Oracle, it's just a publicly
listed company with shareholders. If they're asked a question by an analyst on a call, they can't
lie because they'll be sued. So if Stargate doesn't exist, she has to say,
Stargate is not formed yet because otherwise she'll open herself personally but also the
company up to all kinds of lawsuits. So you see kind of, you know, the difference, you know,
between being a private Silicon Valley tech company and a listed one. You often get closer
to the truth when someone is not put on the stand, but when somebody is asked something,
you know, material information in a public context, you can't just say, oh, well, but we're doing
all this other stuff and we're just going to kind of change the definition. You're like, no,
it doesn't exist yet, which is somewhat concerning. Right. I mean,
A lot of this is if you look at what they're doing at Open AI, I mean, this is a company that
kind of exploded into the public imagination in 2022. You often lose track of the ambition.
You know, they're trying to build a device that they won't talk about with Johnny Ive that's not
a phone and it's not a headset or glasses. They're building their own data centers. You know,
they're building a variety of models, consumer apps, a browser. They want to get into shopping.
They want to get into agentic commerce online.
You know, they have a huge lobbying apparatus, you know, in Washington and around the world to try and
influence policy.
And I don't even know how Sam Altman arranges his day to try and keep all of this in his
head.
But it does feel to me, and if you speak to people that, you know, are close to him and advise
him, you know, maybe they're doing too much too soon.
I mean, I know tech is all like move fast and break stuff, or, you know, your established
monopolistic rivals like Google and
Microsoft and Meta will come in and sweep the board.
But it does kind of feel like OpenAI, you know,
is trying to keep a lot of balls in the air at the moment.
And Sam has a new baby.
And Sam has found the time to go on comedy podcasts.
A joke I made in our Discord today was like,
people are like, how is he going on Theo Von?
It's like, well, AGI is doing his work so he can spend the time doing things he loves
like comedy podcasts.
But a company that will call something Stargate when Stargate is not
formed yet, I think they might call AGI or their model AGI. And maybe it doesn't meet the
technical definition either. But, you know, as these projects get built up, we're going to start
to see this massive tax on the grid. And there's some great reporting in the F.T.
This week about what AI is doing to energy cost. Here's the headline. AI demand drives up
electricity supply cost in the largest U.S. market to record high. The cost of providing
electricity in America's largest power market
will hit a record high
due to soaring demand from artificial
intelligence data centers and delays in
building new power plants, raising energy
prices for consumers. All right, this is going
to get a bit wonky. I'll just read this paragraph
and turn it over to you, Stephen. Grid operator
PJM, which covers 13
states and Washington, D.C.,
said Tuesday it procured energy
supplies for $329.17
per megawatt-day, a
22% increase compared with the
previous year. The organization will pay power producers $16.1 billion to meet its energy
needs from June 2026 to May 2027, a 10% increase compared with the previous year. It expects
a 1 to 5% rise for customers in their energy bills depending on how utilities and states
pass on costs. Wow. So this is now really starting to hit the size where it's
having a real impact on energy bills, double-digit pricing increases. Energy, it seems to me,
the ability to produce it and deliver it efficiently is going to be a major, major source
of competitive advantage for whichever country figures it out. Absolutely. That's why you're
seeing companies like Microsoft and Google bring old nuclear power plants online and strike
deals with, like, these mini fusion reactor companies, of which Sam Altman used to be a major owner of
one as well. There is just quite simply not enough power that exists in the United States or around
the world, or anywhere really apart from China, to drive these data center ambitions. It just doesn't
exist. And this is all linked to top-level government policy. Donald Trump made a speech earlier
this week about, you know, throwing the weight of the U.S. federal government behind AI infrastructure.
But just a few weeks before, he kind of gutted the American renewable energy industry,
in particular solar and wind, by taking away various federal credits.
This, to a large part, is how China is going to power the future of data centers and AI
with solar energy.
In the U.S., we're actually seeing it take a bit of a step back.
There's only so many gas turbines Elon Musk can put at his data center in Memphis to power
the thing.
What he really needs is a hydroelectric dam,
or a vast field of solar panels or offshore wind,
which is why he's become so agitated with Donald Trump
and the big, beautiful bill.
Not only did it almost destroy Tesla's business model overnight,
it also kind of gutted the ability of the US
to compete with China on renewable energy,
at least in terms of the investment and deployment.
So it doesn't surprise me at all that consumers are being hit in the pocket
due to the increase in demand for data center power,
and it's a real open question about how the hell you're going to power all of this stuff.
In particular, you know, in states that are really bidding for this,
to have these data centers built to create jobs and revenue like Ohio and Pennsylvania and Arizona and Texas.
Isn't it all just going to end up being nuclear power?
I mean, that's where I see it going.
It takes a long time to build a nuclear power station and get it online safely.
And while a lot of people might be pro-nuclear power,
I'm not sure how many people are pro living next to nuclear power.
So you've got kind of the NIMBY, not in my backyard, coming in.
I mean, we've, you know, you've had nuclear disasters in very advanced countries
with good safety track records like Japan very recently.
And I think memories are still strong of that.
But I do think there's a big place for the next generation of nuclear power in powering this.
Because, I mean, you know, the coal and oil and gas just won't last forever.
It's so funny you mentioned NIMBY because I'm reading this Reuters report about the president,
President Trump's plan to expedite AI development and lift some export controls and make sure that the data centers can work.
And this, basically, the president's going full abundance.
This is from the story.
The plan calls for fast-tracking the construction of data centers by loosening environmental regulations
and utilizing federal land to expedite the development of the projects, including any power supplies.
It looks like, I mean, I'm completely
conflicted about this. I'll be honest. I mean, I've heard from AI lab leaders who are happy about the fact that they're going to have the energy to be able to produce this. On the other hand, I'm not excited about federal land being used to do this. That's, you know, I mean, call me an idealist, but that's the people's land. And the idea that you're going to fast-track and, you know, potentially move... I mean, it's true, the abundance guys, they're Republicans, they all have a
point that, like, it's too hard to build in the U.S. But if you, like, sort of disregard the Clean
Water Act, maybe that's too strong of a word, but if you blow past it, I am concerned about what
the consequences are going to be there. Yeah, it's a return to the ethos of drill, baby, drill, isn't it?
You know, just get these products built, you know, get these new sources online, build as much as possible.
I guess the argument for people that believe in abundance, and that super
intelligence is just around the corner, is these technologies will help humanity find ways to capture
carbon from the atmosphere or build, build and use things more efficiently. And that as we
become more accustomed to running these data centers, you can use them more efficiently
over time. But certainly the mass appropriation of federal land opening up, you know,
disregarding any environmental rules, I'm not an American or a voter, but it doesn't
seem like the best public policy to me.
Ilya Sutskever, who's the former chief scientist of OpenAI, now the guy who's running Safe Super
intelligence, he's had this vision that, like, the world is going to be wallpapered with
data centers as the scaling laws continue to show results. And I look at some of these pictures
of these data centers and I'm just like, oh my goodness, this is the vision being lived out.
Yeah. Well, with the ability to build a data center,
You know, Elon Musk has shown you can do it in months, not years.
He's also shown a willingness to disregard local planning laws and environmental laws,
which Memphis seems only too keen to help him do in order to make sure that he builds there and not anywhere else.
But it's it's a race.
And you look at, you know, countries, you know, jurisdictions where it's harder to build.
Like, from my personal experience, the UK and the EU, if you take 10 years to build something,
you know, Elon Musk is building in three months, China is building,
and Microsoft is building in one, there is going to be a little bit of a gap, especially if you
want your sovereign AI and you want your citizens' data stored and processed locally.
The concern in the other direction is that Europe will just fall so far behind these other
countries.
It won't have an AI industry that's meaningful in any sense, which may already be the case.
Yeah.
Unfortunately, it does seem like it's trending that way.
So I do want to talk about this bake-off between Google and Open AI on the International Math Olympiad.
There's kind of this funny story where like both of these companies said that they had achieved gold medals in the International Math Olympiad competition.
But like OpenAI didn't officially participate.
So it announced first and then Google officially participated.
It announced second, but it has like the actual gold
medal status because it was part of the competition. This is from the New York Times. This was the
first time a machine, which solved five of six problems at the 2025 competition, reached this
level of success. They're talking about the Google machine. The news is another sign that leading
companies are continuing to improve their AI systems in areas like math, science, and computer
coding. And I just want to make sure I get this right. This is a large language model. This is a chatbot with
a reasoning system that is getting gold on the International Math Olympiad, not a purpose-built
AI system that is used for solving these math problems. So the idea that you could have a chatbot
go into the math Olympiad and win gold once and perhaps twice for the first time, I think,
you know, we talk often about how benchmarks are unreliable and all that stuff. But I think
this is a pretty good indication that this stuff continues to make progress.
Yeah, I mean, it's a spectacular achievement.
It's quite funny after, you know, I moved here just over 18 months ago.
So I'm getting to know how Silicon Valley works and how the sausage is made and who competes
with whom, who hates whom.
And Open AI and Google have a long-running battle trying to release products a few days ahead
of each other to kind of show the other up.
I mean, it happens frequently around Google's I.O. event, which was a few months ago.
And so Google had followed all the right procedures, DeepMind.
It entered its model. It was in controlled circumstances.
And, you know, obviously it was pretty confident it was going to win a gold medal and it was going to be good PR.
And then you have OpenAI, like, using the same system, not officially entering, not subjecting itself to the same type of controls and scrutiny, and front-running Google's announcement by three days.
And again, like kind of soaking up all of the good media
as a result of that.
Now, this little sort of behind the scenes
competition doesn't take away from the achievement
that either of these have made.
And yeah, maybe they run GPT-5
through the Math Olympiad.
It'll do even better.
Yeah, maybe they could go six for six this time
and enter officially.
But I just thought it was worth bringing up
because it does show that all these things
that we thought, it's crazy, right?
This is a predict-the-next-word engine
with like a little bit of new techniques applied
to allow it to reason and go step by step.
And it's winning the gold in the Math Olympiad.
It's totally crazy.
And you think about the applications
that it could be put into or used for
with this type of math skill.
We're at this point where companies are all trying to figure out
like is there an enterprise use case for AI.
And I think if it's able to do abstract math,
if it's able to get these problems right, you could start to see it having a real impact in
industries like finance and maybe even like the sciences.
Absolutely.
Well, you mentioned Mustafa Suleiman's AI diagnostic tool.
I mean, we both interviewed him about that a few weeks ago.
And what they did is they took the various models, because they used, OpenAI actually performed best,
but they had a few others from Google and Anthropic too.
What they did is they almost created like a team of doctors from,
reasoning models that would talk to each other and question each other and make them go back to
the source material. And what they were able to do is take the most complicated, sort of like House-level,
you know, the TV show House, weird medical ailments that affect one person every six
years. And they were able to diagnose them, you know, at a rate like four times more successfully
and much faster and much cheaper with fewer tests than human doctors. So we're starting to see very
real-world use cases from this, you know, across the sciences, across mathematics. And I used to be
the banking editor for the FT in London covering, you know, financial services. They are extremely
interested in this technology and like what it could do in terms of turbocharging their returns.
And if we're going to look at the dystopian side a little bit more, how they could cut their
headcount and increase their margins. So I want to talk very briefly about a very interesting part of
this diagnostic orchestrator.
So it actually did improve performance.
I'm sure you caught this.
It improved performance by 4x over doctors
who didn't have like access to Google
and I guess their colleagues.
But when you looked at the fine print,
this orchestration system helped,
performed not 4x better than traditional LLMs,
but like a few percentage points better
than traditional LLMs.
And actually,
not that much better than reasoning systems. And it was like, on one hand, cool. A.I. is able to do
this much better than doctors. But on the other hand, I think another, if you wanted to take a
contrarian look at what was going on, it was that the very basic level of AI is itself three
times better than your average doctor who's given some constraints, which is crazy.
Yeah. I felt it was a bit unfair to test doctors who weren't allowed to talk to their colleagues,
refer to medical search online, you know, you're kind of, okay, you beat a human with a lot of
the, you know, the most impressive, you know, bits about, you know, modern medicine,
i.e., you know, the diagnostic process and pulling in their colleagues.
But you know what, Microsoft, you know, are not going to, you know, are not going to, like,
put out a product that doesn't have impressive results, are they?
No. I don't think so. All right. Let's, let's breeze through a couple of these earnings reports.
We do have the San Francisco Bureau Chief of the Financial Times here.
So I feel like we should make use of your expertise as we're starting to see some big tech earnings come in.
So I'll give you one question each on Alphabet and Tesla.
And then I want to get to this memo that Satya Nadella wrote about layoffs at Microsoft and then we'll get out of here.
So first of all, Alphabet.
It's amazing.
Despite the rise of generative AI, this company is continuing to crush.
We will have, by the way, their head of search and information and knowledge coming on the show.
in a couple weeks, so folks, stay tuned for that. But here's the numbers from your story:
Google's core search and advertising business grew 12 percent, beating expectations
for a 9 percent rise. So even as generative AI continues to take share, you would imagine,
in search, everything's going up and Google's figuring it out. And not only are they
growing, they are growing double digits and they're beating expectations.
What is happening there?
It's very impressive.
Well, I think the reports of the demise of Google have been greatly exaggerated.
They had a bit of a wobble in 2023 and 2024, but they do seem to be very much back in the game.
I mean, their growth is impressive.
Every quarter, it's double-digit growth on tens of billions of revenue.
I mean, for most other companies in this world, these numbers can only be dreamed of.
But what they are showing is that the way that they've integrated AI into search,
whether it's the overviews, the bullet points at the top of your results, or the AI mode,
which you can click on, and then it just behaves like GPT, Gemini, or Claude.
It's actually boosting engagement.
People are searching more, the more people search, the more ads that Google can actually
show, and the more money they can earn from that.
So that's really what we're seeing.
Google's argument has long been, yeah, we're going to lose some traffic to GPT,
but the pie overall is going to grow.
So maybe we don't have 91% of global search queries anymore.
Maybe we have 84%.
But if the actual overall pie of queries increases,
it's actually still more lucrative to them.
Now, whilst the results are very impressive,
there's a huge, you know,
there are huge clouds on the horizon for Google.
We're waiting for the results of the search antitrust remedies,
which could see Google have to do several important things.
It could have to sell its Chrome browser, which I'm using right now, to a rival.
We could see a hated rival like perplexity or Open AI buy that and have one of the best
distribution methods for their AI technology in the world.
They're going to have to share more data with rivals.
They're going to lose the right to be Apple's exclusive search engine provider on Safari across
its devices.
And they could see their business hamstrung in a variety of ways.
So whilst the results are very good for Google, they come with a big asterisk
on the fact that, you know, even the Trump antitrust administrators are still going after them
and want to see them broken up.
So I would say that's why we saw the shares not jump 10%, but just a couple of percent,
because everyone's waiting to see how this antitrust stuff lands on them.
I'll just make one snarky comment.
It's perhaps easier to make money when you just ingest the entire web and the entire experience
happens on your platform versus having to send people to the pesky websites with the
information. Exactly. Okay, let's talk about Tesla. Tesla had a very bad earnings report. We knew this
was going to happen. But the news that I think the FT picked up on and was to me the right thing
to look at was the outlook. And the outlook is bad because this big, beautiful bill has cut off
EV credits and you would get a $7,500 credit to buy EVs in the U.S.
Not only that, the biggest source of profit for Tesla has been these regulatory credits.
So because they produce EVs, companies with mandates to produce EVs that don't meet those
standards are able to buy credits from Tesla and effectively, you know, wink, wink, meet the standards.
And for Tesla, that is, if not going away, vastly diminished.
They say, this is your story.
The revenue from the credits almost halved to $439 million in the quarter from a year before.
Last year, the company made $2.8 billion from these sales.
And these, of course, are straight profit.
So dark times ahead for Tesla.
Very dark times.
I mean, Tesla was worth $1.54 trillion on the 17th of December.
It's now worth around 900 billion,
so we're talking about
more than half a trillion of market cap wiped out.
It rose to a peak because people were optimistic Musk's relationship
with Trump would allow Tesla,
would allow Musk, to help shape policy,
maybe soften Trump's opposition to electric vehicles.
I don't know about you, we had a sweepstake in the office:
how long would it take Trump and Musk to fall out?
I was vastly over-optimistic at six months.
A few of my colleagues said two, three, four.
It ended up being four and a half.
And now we are really seeing the shit hit the fan with regards to, you know, Trump, the Republicans and renewable energy and electric vehicles.
They do not like them.
They listen to the other lobbies far more than them.
And it's hard to actually imagine a worse set of policies for Tesla coming out of the big, beautiful bill than what emerged.
As you said, tax credits gone.
Tesla would have actually made a loss in the first quarter if it had not been for selling regulatory credits.
And what the Trump administration has done is they haven't got rid of these
emission trading systems. They've just said, if you don't abide by them, these emission standards,
the fine for noncompliance is zero. So if the fine is zero, why would you bother
complying? So for Tesla, the market is essentially going to dry up whilst the system
technically still stays in place. So it's quite a cunning way of attacking it without actually
having to go to Congress and change the rules. And Musk sounded different, I don't know, he's bounced
back from a lot before, but he sounded different on the Tesla earnings call.
There was a lack of energy.
He was very resigned.
He said, we're going to have some rough quarters ahead.
And even when he's talking about building millions of humanoid AI powered robots
or unleashing a fleet of billions of robotaxis around the world,
his heart clearly wasn't in it.
This is, you know, and I know Tesla and Musk himself has a lot of fans.
Every time I write a story, which might suggest this isn't the greatest company in the world.
I get like a torrent of online abuse from the Teslarati out there.
But as you deserve.
As I deserve, especially, you know, Tesla boomer mum or whatever she's called.
But we were right to point out that there were these, you know, just like Google,
there are these big clouds on the horizon.
It's just this seems more existential to Tesla.
Because remember, the way you get to armies of humanoid robots and robotaxis is by actually making money,
and you make money by selling cars and selling credits.
Musk has alienated a lot of his traditional client base in Europe and America and around the world by championing these right-wing causes and appearing with chainsaws at DOGE, hacking the federal government apart.
So it will be really interesting to see what they do.
I mean, the board has a big decision to make about Musk and his leadership of Tesla.
I mean, the company, it is him.
You invest in it on optimism that he will position this company best for the
future. But there's no denying that through his politics and his fallout with Trump, he's
become a bit of a liability to the company. That is true, but I just couldn't see them going
in any other direction, especially because Elon has proven time and again that when his back
is against the wall, he finds a way to figure it out, although his back is really against the
wall on this one. We reported that far from looking for a new CEO, the board is
actually looking at giving him a new pay package, because you remember most of his pay got
cancelled by a Delaware court, leaving him with only 13% of the company as opposed to about 20. And he said
something quite interesting on the earnings call when he was asked by an analyst: do you feel comfortable
developing AI and these robots with only 13% control? And he said, that's a major concern for me.
I've got so little control, I can easily be ousted by activist shareholders
after having built this army of humanoid robots.
My control over Tesla should be enough to ensure that it goes in a good direction,
but not so much control that I can't be thrown out if I go crazy.
His words.
Now, obviously, going crazy has been a great strategy for Musk in the past,
doing things successfully that others said were impossible.
But that's actually, that's a warning to the board.
He's like, give me more shares, give me more control.
or maybe I leave and focus my whole attention on X and xAI and SpaceX, and then where does that leave Tesla?
Yeah, it would be in a terrible place. I would imagine because the valuation is not based off of the car business.
It's based off of everything that might come next. All right, let's end with this story, which I found interesting.
Satya Nadella felt it important to write Microsoft employees about the layoffs, the morale, the culture.
and the fact that this company is worth three, oh, almost $4 trillion, $3.84 trillion.
And is laying people off, which is like truly, I'll just say it's crazy.
Here it is from Geekwire in a company-wide memo, Nadella acknowledged that what he called
the uncertainty and seeping incongruence of Microsoft situation, even with its recent job cuts,
he wrote, it's thriving by every objective measure with strong performance, rapid
rapid capital investments and relatively unchanged overall headcount due to ongoing hiring.
Nadella pointed out that some of the talent and expertise in our industry and at Microsoft
is being recognized and rewarded at levels never seen before.
And yet at the same time, we've undergone layoffs.
And here is to me the most interesting paragraph.
This is the enigma of success in an industry that has no franchise value.
Progress isn't linear.
It's dynamic, sometimes dissonant,
and always demanding, but it's also a new opportunity for us to shape, lead through, and have greater
impact than ever before. And yet, as I read Satya Nadella's words, I honestly cannot tell you why he felt
the need to lay off 10,000-plus people in recent months. What is happening here?
Now, Nadella is, you know, a very savvy man. I mean, what he did at Microsoft was essentially take it out of its
various failed consumer enterprises and really refocus it on enterprises, i.e. corporations and
businesses and data centers, Azure. And that's worked out extremely well for him. But he has cut
his way to success, out of the consumer business, giving him the capacity to invest in the
other side. What they're seeing now is that they're going to be spending a lot more money.
They like all the other tech companies are spending tens, if not, you know, soon to be
more than a hundred billion on infrastructure every year. They also have shareholders to appease
who want dividends, who want to see the share price continue to go up. And you've got to make the
sums add up. So you have to take some of it out. So he's gambling that there are some non-AI
native people in the company who can be replaced either by AI systems themselves or you can
bring in cheaper, younger people that are better able to infuse this technology through the company
and shake it out of its own ways.
I do think headcount of Microsoft has actually stayed, like, roughly level.
So whilst you've had a lot of these layoffs,
they've clearly been hiring a lot of people as well.
I think Mustafa Suleiman now has six or seven thousand people reporting to him.
And just this week, I reported that they poached another 23 from DeepMind.
You know, these people, as you well know, do not come cheap, right?
Yeah, at the top end, they're getting 100, 200, 300 million.
At the bottom end, they're still incredibly well compensated.
and it has to come from somewhere.
But by sending a memo and saying it's weighing heavily,
that tells you another story about morale inside the company.
You know, if you're doing, you know, 15,000 people is a lot, you know,
especially in, you know, in a smallish place like Seattle.
You know, to send something like this to try and put a more human face on it,
tells you that people at Microsoft are not happy.
Absolutely.
And I think a lot of people are.
pointing to a couple things. First, people, it seems like the games division of Microsoft has
taken a massive hit, probably disproportionate. I guess if you're going to invest somewhere,
I would invest in AI over games. Second, there's this like thing that keeps popping up in my
mentions that it's not Microsoft, you know, cutting staff. It's basically reallocating its budgets
and offshoring. What do you think about the offshoring idea? That he's basically just hiring the
same people just in countries with lower cost labor.
Absolutely, that's happening.
I mean, I can't remember.
I read a stat the other day.
It may be wrong that there have been 70 or 75,000 layoffs in U.S. tech recently, and
those jobs haven't disappeared.
Some of them have been replaced by AI, but a lot of them have moved to lower cost
jurisdictions abroad.
There's also been a very big trend at big tech companies to prefer contractors over full-time
staff.
Contractors don't get free massages.
they don't necessarily get the same levels of health care.
And whilst compared to Europe where I'm from,
there are basically no employee protections out here in the States.
There are more if you're a full-time employee,
like you're owed redundancy,
whereas if you're a contractor working for Tech Mahindra
or someone like that, you know, based remotely
or out in India or Malaysia,
you are far, far less of a burden on the company
if ever there's a slowdown and they need to cut costs.
And you're also, you know, far more flexible
in terms of moving, you know,
on to different projects.
So I feel bad for all the people that have lost their jobs at Microsoft,
but we should emphasize it's not just them.
This is part of a broader trend across the technology and wider industries.
And I think we should be bracing for even more of this as we start to see people trust AI to
do jobs that humans were relied on for long periods in the past.
Yeah, I think you're right.
Folks, be careful of those massages.
They come with a hidden cost that you may not be fully appreciating.
at the time. Though enjoy them, enjoy the massage. Don't feel bad as it's happening because,
gosh, that's a very nice perk. So, Stephen Morris, thank you for joining us. Where can people find
your work and that of your team? You can find us, well, I think the FT is one of the most
expensive newspapers in the world, but we're definitely worth it. Find us on our website,
mainly on our app, and occasionally posting on social media as well. All right. Well, thank you,
Stephen for joining. Thank you, everybody, for listening and watching. We will be back on Wednesday
with, I think, what's going to be the best interview of the year here on Big Technology Podcast.
So we hope you stay tuned and we'll see you next time on Big Technology Podcast.