Big Technology Podcast - OpenAI and Microsoft’s Grand Bargain, Sam Altman’s Next Three Years, A New Humanoid Robot
Episode Date: October 31, 2025. Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) OpenAI converts to a public benefit corporation 2) Why this is big news 3) Satya Nadella's wise OpenAI maneuver 4) Microsoft wants every AI model on Azure 5) Is AGI dead? 6) Inside Microsoft and OpenAI's negotiations 7) Sam Altman charts out OpenAI's next three years 8) Is building automated AI researchers a worthwhile and ambitious goal? 9) OpenAI also wants to be its own AI cloud 10) OpenAI has become Facebook, kinda 11) OpenAI employees say they don't want to be engagement farmers 12) Meta's threat from OpenAI 13) Instead of the AI bubble, how about the AI wobble? 14) Do we want the 1X Technologies Neo humanoid robot? --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
Microsoft and OpenAI have met and signed a peace treaty, so this entire generative AI business can move forward.
Sam Altman plots out OpenAI's next three years, big tech earnings come in, and there's a new humanoid robot in town.
That's coming up on a very spooky Big Technology podcast Friday edition right after this.
Industrial X Unleashed is bringing together leaders from IFS, Anthropic, Boston Dynamics, Microsoft, Siemens, and the world's most progressive industrial companies at the frontier of industrial AI, in the real world. There's a clear market shift happening. The world's largest industrial enterprises are done experimenting with AI. They're deploying it at scale, and they're choosing IFS to co-innovate with them. IFS is purpose-built for asset and service-intensive industries, manufacturing, energy, aerospace, construction, where downtime costs millions and safety is non-negotiable. Industrial X Unleashed will feature live demos of AI products embedded in real-world operations, customers sharing measurable outcomes, and learnings from companies deploying industrial AI at scale today. Learn more at industrialx.a.ai.
Capital One's tech team isn't just talking about multi-agentic AI. They already deployed one. It's called Chat Concierge, and it's simplifying car shopping. Using self-reflection and layered reasoning with live API checks, it doesn't just help buyers find a car they love. It helps schedule a test drive, get pre-approved for financing, and estimate trade-in value. Advanced, intuitive, and deployed. That's how they stack. That's technology at Capital One.
Welcome to Big Technology
Podcast Friday edition, where we break down the news in our traditional cool-headed and nuanced format. We have a great show for you today because finally, the AI deal of the century is done. Microsoft and OpenAI have a resolution. OpenAI might be heading towards an IPO. And finally, all the funding that OpenAI has been promised can start pouring in because the structure is here. We're also going to talk about Sam Altman's vision for the next three years of the company, how OpenAI has become maybe a little too Meta-fied, uh, big tech earnings.
And of course, there is a new humanoid robot that Ranjan and I are both eager to discuss.
So, uh, joining us as always on Friday is Ranjan Roy. Ranjan, happy Halloween.
Have you, uh, have you pre-ordered your Neo robot just yet?
No. To me, of all the spooky headlines this week, that is the spookiest.
And I'm staying far away. How about you?
I have not pre-ordered it either.
But I don't know.
I don't think it's that spooky.
I want a robot standing right next to me, just, you know,
handling all of my work, all of my household tasks.
All right.
We will save that debate for the end because I think you and I will have diverging opinions on this.
Pro Neo.
Pro Neo.
As part of our service here to the public discussing the benefits and the downsides of AI and robotics,
we'll have to get into it.
But first, I think the biggest story of the week and really the biggest story,
of the quarter, if you ask me, maybe this entire half of the year is the fact that OpenAI
and Microsoft have a deal. This is from the Wall Street Journal. OpenAI has successfully converted
to a more traditional corporate structure, a move that cleared an obstacle for a potential
initial public offering, and pushed the valuation of longtime partner Microsoft above
$4 trillion. The artificial intelligence startup has turned its for-profit subsidiary into
a public benefit corporation, of which Microsoft will own 27%. The conversion will grant OpenAI's nonprofit parent a stake in the for-profit worth $130 billion
with the ability to get more ownership as the for-profit becomes more valuable. The deal ends
nearly a year of wrangling with the attorneys general of California and Delaware who have
the power to regulate nonprofits, as well as with the broader philanthropic community and
key investors over the question of whether the AI juggernaut could remain true to its mission
while transforming into a more conventional company.
Ranjan, first question for you here is, well, OpenAI has made this deal.
So like Microsoft becomes a more traditional investor in the company, but it's still controlled
by a nonprofit.
And it was, I mean, if you think about the conversion itself, it's going from a for-profit
subsidiary into a public benefit corporation.
What is the big deal here?
Actually, I'm going to ask this back to you: can you explain, in clear language, what you think this means? Because I still have a hard time trying to understand, again, that the nonprofit has a stake in the for-profit worth $130 billion and will get more ownership as the for-profit becomes more valuable. It still felt like a very OpenAI-ish corporate structure to me. It didn't necessarily help clear up a lot of what's been vague about everything so far. Do you understand what is going on?
So I'll be the first to admit, not fully.
Okay. Why I think this is important, this is why I think it's the most important story, is because Microsoft's ownership in OpenAI is finally cleared up. And yes, there's going to always be some weirdness with OpenAI when it comes to the nonprofit and the for-profit, the public benefit corporation. But up until this point, you had Microsoft in this position where it was entitled to 50% of OpenAI's profits. It had an undetermined ownership percentage of the company. And it also had access to all of its IP rights. And it could potentially lose access to all that if OpenAI just said, hey, we've hit AGI. So I think that, to me, is the biggest thing that needed to get sorted out. And until that became sorted out, until this conversion was complete and Microsoft was baked in as a traditional investor, the company couldn't move forward. And now it's been settled and now it can move forward. So to me, that is the biggest point. And of course, we know Microsoft has been holding it up because it was trying to negotiate and strong-arm OpenAI for a large percentage, even larger, I think, than it got, of this public benefit corporation. That is the big deal to me. What about you?
Okay. I'll agree with you. I mean,
if nothing else, the fact that this opens up the conversation around an open AI IPO, that is massive news.
I think as well, one of my favorite parts of the agreement was, you'd mentioned IP.
Microsoft still will have exclusive IP rights to OpenAI technology until 2032.
So there's a long time that whatever OpenAI makes, Microsoft still has access to.
But I also loved that the deal has OpenAI committing to purchase an additional $250 billion worth of Azure Cloud Computing Services.
So, of course, I mean, Microsoft, I think, came out pretty good here, maintaining 27% ownership,
getting just a massive commitment in terms of compute and compute spend over the next number of years,
maintaining exclusive IP rights.
So I guess it does appear everyone does seem to win here.
That's right.
And I think the Microsoft thing is quite interesting. For Microsoft, it's an amazing deal. I mean, of course, Microsoft sort of had OpenAI in a stranglehold, right? And again, that's why I think this is important. And now it has sort of loosened that a little bit. It's allowing OpenAI some freedom; for instance, Microsoft no longer has first right of refusal to work with OpenAI on infrastructure projects. But it still keeps 27% of the company. It still keeps the IP, as you mentioned, till
2032. And so what I think Satya Nadella is doing here is threading the needle. And actually, if folks listened to the Scott Guthrie interview here on the show a couple weeks ago,
you sort of saw it trending this way, which is Satya saying: you may be able to build, you know, an amazing company or this sort of unprecedented new technology by scaling up what you have today, but Microsoft doesn't want to bet its entire company on facilitating that for OpenAI. So it relinquishes some of its stake in what OpenAI was going to get in the future for two things. First of all, there's 27% of the company, which is massive. And then, well, the IP rights. So as we enter the most critical buildout of generative AI, Microsoft will be the one that owns, you know, the IP rights of the most important company. And then it says, basically, if you want to go build, go build with others. So it prevents itself from getting the downsides of, let's say, facilitating this whole buildout and having to, you know, effectively mortgage your company's financial future to do it.
So it's, I think, a terrific move for Microsoft.
Well, you know Satya is feeling good, because in that coverage, he starts talking about how they're a platform company. And Satya has the quote: my mindset is all platform. I'm happy with OpenAI. I'd love to have Anthropic. Grok, anyone. If Google wants to put Gemini on Azure, please do so.
He's feeling pretty good right now.
I think so, and for good reason. That was a very important quote from Satya. It's him washing his hands of OpenAI, in a way. I mean, you realize that, it's true, OpenAI's models will only be available on Azure and, uh, through OpenAI's API. That's it, unless it gets Satya's blessing. And so now Satya's saying, well, we already have the sale. By the way, we're going to get to earnings, I guess, but we'll probably read them throughout: Microsoft Azure grew 39% year over year in this most recent quarter. I mean, that's astonishing. And so why wouldn't he then, you know, now that he effectively has his ownership stake in OpenAI, and he has the exclusive rights, why wouldn't he continue to welcome all the other models and make Azure the go-to place for anybody looking
to build with this technology? Yeah. I think, again, we'll get more into earnings. We'll get more
into the potential for an AI bubble or a wobble as one of my favorite phrases of the week came
out, but 39% on a business of this scale. I've worked at, you know, like mid-100 million revenue
startups, 39% is a massive number. And Azure is just incredibly well-placed. And yeah, I think this idea
of just openly talking about platform, even kind of throwing shade and just saying if Google wants
to put Gemini on there. I think they are in a very good place right now.
Now, even though the nonprofit will control, will appoint, the board of the for-profit side of OpenAI, I think for all intents and purposes, it's become one entity. There's only one board member on the nonprofit that's not on the for-profit side. And that's where a lot of folks are saying, you know, this is effectively the end of the nonprofit, even though OpenAI did make some nice announcements saying that the nonprofit was going to, like, invest in some AI safety initiatives. Here's from this one AI watcher, Zvi Mowshowitz. He says: they're moving to complete the greatest theft in human history, or perhaps second biggest, if you count what happened around the dissolution of the USSR. Are we going to let them get away with this?
What do you think about the fact that this is effectively...
I didn't quite understand. I know you had thrown that tweet into our prep doc. I didn't fully understand. What is he calling the theft there?
The theft would be basically that the public was entitled to whatever OpenAI was building because it operated as a nonprofit. And now clearly the upside will be handed to Microsoft and the
investors.
Okay. I don't feel too bad about that, because anyone who's lived under the illusion for the past few years that anything was actually meant for the public, or that OpenAI was truly open and about this mission... come on. No one really believed that, did they?
No, but I think if you start a company as a nonprofit, there should be some obligation to hold to that mission. I don't know.
I'm going to be less cynical here. I'll be more idealistic, Ranjan. Let me switch sides for a moment. Come on. Like, if there are laws, there are laws. If you started this way, you obviously got some of the startup capital from Elon to do it. I don't think it should be that easy to be like, nah, we're for-profit now.
No, I mean, at a truly human level, I agree with you in terms of, like, how things...
I'll take that. A truly human level.
A truly human level. We have to go that far.
We have to go. That's as far as I can go with it. But I mean, again, like you had mentioned the board structure, there's, again, the cloud compute commitment. So much of this still feels so OpenAI. Like, from The Information: OpenAI Foundation's board controls the OpenAI corporation's board through its ability to name and remove its directors, which could upend the company. So the nonprofit part of it still has a good deal of influence, but everyone on the foundation board is also on the for-profit company's board, with the exception of Carnegie Mellon University's machine learning department head Zico Kolter. Like, I mean, you don't get more OpenAI in terms of corporate governance and structure than that announcement.
And I also wonder, why did Zico not get the call for, uh, for the for-profit board? That's pretty odd that, uh, one person gets, uh, gets the stiff arm on that announcement.
I read that and I was just like, I got to get Zico on the podcast.
So listeners, fear not.
There will be an invite in Zico's inbox, probably by the time you listen to this show.
I think you had mentioned this at the top.
One of the kind of also interesting subtexts of this for me is does this mean AGI is nowhere on the horizon?
Because remember, Sam gave Satya a pretty sweetheart deal here.
And all he had to ever do was just say AGI and everything would have gone his way.
Does this mean Sam's not going to be declaring AGI any time soon because now he doesn't have to?
He's already started backing off of it for a while now.
Yeah, I think that we can all put that silly little acronym to the side for a while.
Because if it ever had meaning, that meaning has been sort of completely dissolved.
And I will put the death date on AGI back to, I think you know what I'm about to say,
Sam Altman's appearance on Theo Von, where he basically said GPT-5 was going to be AGI, without using the term.
And then he rolled it out and everyone's like, huh?
The historic date of Theo Von, or historic presence of Theo Von in this conversation.
And also remember, it's Halloween.
It's an easy date to remember, that that's the day that, from a corporate structure standpoint, OpenAI made us all able to move on from the absurd concept and notion of what AGI is or could be.
So let's retire it.
Well, we can still talk about ASI and super intelligence.
Before we retire it, though, we should actually talk about some juicy details in this Information story about the agreement between OpenAI and Microsoft, because that term did come up. AGI did come up and was used as a negotiating tactic. This is from the story: leading up to the restructuring, Microsoft and OpenAI were often at odds over how long Microsoft would get access to the startup's intellectual property. OpenAI had threatened to declare it had reached artificial general intelligence, or AI that could handle most economically valuable work,
which could have abruptly ended Microsoft's rights to reuse the startup's technology.
The negotiations were also complicated because the prior OpenAI structure promised to pay out future profits to Microsoft
before other investors got a share of profits, a potential advantage that's no longer in place.
Under the new agreement, Microsoft will keep getting access to OpenAI's models after AGI is reached, with appropriate safety guardrails. And OpenAI will continue to have the exclusive rights to res... oh, I think it's, yeah, "OpenAI will continue to have the exclusive rights to resell OpenAI's models on its Azure cloud service until AGI is reached." I think that's a typo. I think that was supposed to be Microsoft.
So, yeah, AGI did come up in the negotiations.
These negotiations got nasty.
They must have been amazing to just like, oh, I would love to have been there.
OpenAI threatened to call antitrust regulators to come into this after the way that Microsoft held it over them.
Yeah, I mean, and imagine a lawyer just sitting across the table: hey, we can just say AGI any time. But again, that is how far away from AGI GPT-5 must have been, even internally in their
own understanding of it if they gave up, because that's a pretty good card to have in your
back pocket. So to just give that up, it means you're not going to be saying it anytime soon.
But what I keep thinking about, and I'm very excited about, I want this IPO to happen
because I want, like, an actual GAAP-compliant, or at least, you know, GAAP-adjusted reading of what OpenAI's financials are. Because even in the live stream and in The Information reporting, Sam Altman is talking about how they have $1.4 trillion worth of financial obligations from all of these data center commitments. I mean, throw in another $250 billion with Microsoft, even though their revenue is going to be $13 billion this year.
And I think it's $20 billion they're losing this year.
Like, I just want to see these financials actually just laid out clearly.
I've loved all the reporting and the leaks and whatever else around trying to understand the actual economics of their business.
But that's going to give us a better look at how is this all working.
And we need that as an industry.
So thank you for the public service, whenever this happens.
When I read that they're thinking about an IPO... and folks, by the way,
M.G. Siegler is going to be on the show next week for his first week of the month appearance.
I think we'll probably do it on Wednesday, but stay tuned for that. We'll talk about the
potential OpenAI IPO at length. But Ranjan, just to address it here, the one thing that I just thought was so funny, or, I don't know, absurd, is: have we ever had a company that is going to go for a trillion-dollar IPO on $13 billion in revenue? I mean, of course, that's this year's revenue, so maybe they'll grow it. I'm sure they'll grow it next year; the question is by how much. But then $120 billion in losses over a three- or four-year period. I mean, has that ever happened?
You know what, I cannot think of it from a multiple perspective. I mean, that's already just insane. But I'm sure we've seen that at some point. From a, like, loss perspective, certainly not, because I can't imagine, at this scale, any company ever losing that much money and being able to confidently even think about an IPO.
So do you think it happens?
When do you think they go for it?
I think it's going to be a while.
I mean, again, so Altman did tweet out.
He did this live stream this week.
He shared some perspective there and then tweeted out some notes about it.
I mean, on the live stream, he said OpenAI has $1.4 trillion worth of financial obligations.
And it's made commitments to use 30 gigawatts of data center capacity.
But OpenAI's revenue is going to be $13 billion this year.
And they already have 800 million people using ChatGPT.
I just don't see how you go, and I could be wrong, but I don't see how you go from OpenAI's base of revenue now to the $1.4 trillion that it can spend and ever have an economically viable company. I mean, we could go back to this and I'll look like an idiot when OpenAI pulls it off. But to me, the financial picture just seems absurd.
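(A quick back-of-the-envelope sketch of the multiples being tossed around here, using only the ballpark figures mentioned on the show; the trillion-dollar valuation is the number being floated in the discussion, not a reported one.)

```python
# Rough sketch of the multiples discussed above, using only ballpark figures
# mentioned on the show (not reported financials).

revenue_2025 = 13e9      # ~$13B revenue this year, per the discussion
ipo_valuation = 1e12     # ~$1 trillion IPO valuation being floated
commitments = 1.4e12     # ~$1.4 trillion in data center / compute obligations
weekly_users = 800e6     # ~800 million people using ChatGPT

print(f"Valuation / revenue: ~{ipo_valuation / revenue_2025:.0f}x")   # ~77x
print(f"Commitments / revenue: ~{commitments / revenue_2025:.0f}x")   # ~108x
print(f"Revenue per user: ~${revenue_2025 / weekly_users:.2f}/year")  # ~$16 per user
```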
Well, do you know what could help them in kind of creating a more viable financial picture?
Erotic ChatGPT.
Well, certainly erotic ChatGPT. I was going to say advertisements, but erotic ChatGPT can also work there.
Yeah, so we will talk about the advertising.
And there are also, for folks, we've talked about this study from MIT that says 95% of AI pilots are not generating an ROI. There was a study this week from Wharton that said 74% of enterprises are actually finding an ROI.
So some dueling studies, another data point.
But I think before we get there, we should talk a little bit about this Sam Altman memo
and what he basically spoke about in terms of open AI's roadmap.
So let me just run this by you.
He says, we have internal goals of, and this might be another way that they end up making the money, we have internal goals of having an automated AI research intern by September of 2026, running on hundreds of thousands of GPUs, and a true automated AI researcher by March of 2028. We may totally fail at this goal. But given the extraordinary potential impact, we think, sorry, given the extraordinary potential impacts, we think it is in the public's interest to be transparent
about this. This was, like, totally overlooked in this week's news, but I think this is the most important piece of OpenAI... well, not the most important, but a very important piece of information on OpenAI. They're trying to cause an intelligence explosion and they believe
they're going to get there within three years. Wait, I need you to explain this one for me,
because when I read that part of the memo, that was my biggest Sam Altman eye-roll moment.
Because again, one, talking about the public interest and the impact on the public interest,
Sam, you don't have to do that anymore.
It's like you're for-profit now.
It's okay.
Those kind of bombastic statements around public interest, it's over.
It's okay.
You don't have to do that.
But also like vague language around what is an intern versus a researcher.
Why does a research intern need hundreds of thousands of GPUs when actually,
I mean, deep research across all platforms is very good right now.
Like, what do you think he meant by intern versus researcher?
And what is a true automated AI researcher?
Okay.
I will take a swing at this.
I think an intern is an AI program that can do work for you and then come back.
And you have to really supervise it a lot.
So if you're, let's say...
Can't you do that now?
I think you can. But it's an AI research intern.
Um, so I think he is putting it out there. I know you're laughing at it, but I think he's basing it on, uh, what they're seeing today. And I'm taking it seriously, because I don't think OpenAI is the only one trying to do this; Anthropic is also trying to do this. All these labs are trying to effectively cause an intelligence explosion by building AI that can improve itself. And so my reading of this is: an AI research intern, uh, is going to be something that's, like, not autonomous. And a true automated AI researcher by March of 2028 means effectively you give it tasks and it just
will go and in an automated fashion be able to improve your models, come up with new methods
to improve models, and then the models will just start improving themselves. I think that
Sam caveats it in a good way. We may totally fail at this goal. But I also think that it's important
for them to be out in the open saying that this is what they want to do. This is obviously the path
they think is going to get them to, I won't even say AGI, but like human level intelligence
or superintelligence. And that's why I titled this section of the podcast
Sam Altman's next three years, because these three years are going to be very interesting.
That was a noble effort. I'll give you that. I think, okay, if we're reading it as some kind of
self-evolving model that's like improving at the model level rather than just doing the task,
I think that can get us to something that starts to be interesting.
But I think I'm going to say this is one of Sam Altman's less interesting memos that I've read.
I feel like, I don't know.
In the past, there were much more kind of concrete visions of what the future looks like.
And now he's giving vague notions and just saying buzzwords like researcher.
He goes on to talk about, like, an AI cloud that enables huge businesses. Like, I don't know, do you genuinely believe this is a pretty important Sam Altman memo, or do you think, actually, does this show he's losing a bit of steam?
No, I wouldn't say he's losing steam at all. I have a completely opposite read, then. Okay, okay. I think an automated AI researcher is not, like, it's not anywhere close to pie in the sky or amorphous. It's an obvious thing that they want these recursively self-improving AI systems, and they're going out to build them.
And that to me is if you want to see fast progress in AI, if you want to see a payoff of
all these billions, maybe trillion plus dollars that they are going to, you know, invest in
this stuff, you almost need that. And we don't, it's right, we don't have a real picture of what
that looks like today. But the fact that that is the North Star, I think, is very important
because if they do pull it off, and they may not, but if they do pull it off, you know,
the ride that we're on up until this point is going to feel like, you know, a small children's
train compared to what we'll be on, you know, should this vision come to pass.
Japanese bullet train.
I think I'm going to give you again credit for extrapolating into something that is actually a
lot more, I don't know, exciting. But to me, again, the actual words he used in this,
He goes on to say in 2026, we expect that our AI systems may be able to make small new discoveries.
In 2028, we could be looking at big ones.
It just, I still don't understand because, like, we talked the other day about DeepMind and, like, the cell-to-sentence model that's making massive advances in cancer research.
Like, generative AI and large language models are already making big discoveries and exciting discoveries.
But then again, once he goes on to say, we think that
science and the institutions that let us widely distribute the fruits of science are the most
important ways that the quality of life improves over time. Like again, bringing it back to quality
of life and the public interest, Sam, you don't have to do that anymore. Just show us some good
numbers.
Sam has definitely had some fortune cookie stuff, and he always does. But he has had good fortune cookies before.
Yeah, I mean, he had a tweet this week: all palaces are temporary palaces, all theories are provisional theories. And someone quote-tweeted and was just like, can you just focus on building a company?
That's what I want to hear. That's what I want: at least make them good, as opposed to kind of amorphous, vague statements. Public interest, public interest.
Okay. But again, I'm just off the wall. I'm standing on the table with my perspective here, and I hear your skepticism, but I think the entire AI industry is
trying to move this way. We're trying to build AI models that improve the standard
AI models. Just as the cell to sentence model with Google was able to look at a bunch of different
treatments and make hypotheses about which would get the cancer cells to raise their hands to the immune
system, they are going to try to get AI models to make a bunch of different hypotheses about the way
to build better models and then at some point act on them and check their work. And so I think
we're living in this moment where AI has started to code really well. And people are like, well,
what sort of use cases does that actually have, right? And well, okay, it can help coders build
stuff, but it's also like that is the foundational layer for anything you want to build. And once
AI starts to code well, you can start thinking about, you know, AI then stepping in where researchers
are. I'm not saying Sam's going to pull it off. I'm just saying that this is, this is the vision.
This is the ultimate vision. And, you know, the fact that he has a timeline on it, we'll be able,
over the next three years to check his progress.
I agree with everything you said; Sam didn't say it in such words. He could. I feel it's that line between overly, like, kind of like the tweet you just read, that's like classic Sam, or just laying out a really concrete vision, like saying exactly: recursive self-improvement in models. Like that. Just say that, if that's what the vision is.
Okay.
I'm with you.
Look.
Yeah.
He obviously should have hired the Kantrowitz-Roy marketing agency.
That was the mistake.
That was the mistake.
Huge mistake.
Yeah.
But let's talk, before we go on to some other stuff,
let's talk briefly about this AI cloud that enables huge businesses.
It is interesting, like they are building with the support of the AI clouds that enable
huge businesses, Nvidia, Oracle, and Microsoft.
But now we're saying they want to build it themselves.
On the product side, like you said, we're trying to move towards a true platform where
people and companies that build on top of our offerings will capture the most value. Today, people can build on our API and apps in ChatGPT. Eventually, we want to offer an AI cloud that enables
huge businesses. What do you think about that? Okay, okay. You're right. You know, now rereading
that, my first read of that was kind of like that already exists. So again, as this kind of like
grandiose futuristic vision, that wasn't that interesting. But maybe that is kind of a dig at Microsoft.
Google and Oracle even, and saying, like, you know, your big Azure business that just grew 39% year on year and is the growth engine of your entire business... is he saying that OpenAI is going
to be moving in that direction? Okay, that's interesting. Yeah, let me make a point here. On top of
this moment that OpenAI has ushered in, Microsoft has become a $4 trillion company. Nvidia has become a $5 trillion company. Google is nearing $4 trillion. Even Google, the much-forlorn Google, is getting close to $4 trillion, with just an absolutely fantastic earnings report this week. And OpenAI is looking around and it's like, well, Nvidia builds infrastructure and Nvidia designs large language models, the Nemotron models. It's saying, well, Microsoft has this cloud business and Microsoft is building its own LLMs. Amazon has this cloud business and Amazon is building its own LLMs. Google has its own cloud business and
Google is building its own LLMs.
Everyone seems to be doing well.
We're building LLMs.
Why aren't we a cloud?
That's what Sam's saying.
All right, listeners, I will step back from my earlier statements.
I think Alex might have convinced me.
Okay, I think there's something here.
I think you're right that he is at least hinting at they want a piece of that entire infrastructure layer.
What that looks like remains to be seen.
but, and I'll say we're going to get into the ads conversation.
To me, this is a lot more interesting and exciting
if they're actually going to be going after all the large cloud providers
versus just kind of giving us Sora personalized ads.
This could be something that he starts leaning into a lot more
now that we think about like what they have,
what picture they need to paint to actually make it to an IPO.
Now that I have made the case that this is the smart, bold business decision, and they may not make it there, but this is certainly where they're heading, I want to take a break. And then on the other side of this break, I'll sort of allow myself to unfortunately make the counterargument, because we've been hinting at ads this entire time, but there is a conversation that OpenAI is going through a Facebookification process, where all the executives have come from Meta, they're putting in Facebook processes, and they're chasing a Facebook business model. And so it is interesting to see on the ground how different this is compared to that memo that Sam
Altman wrote.
So let's cover that right after this.
The holidays sneak up fast, but it's not too early to get your shopping done and actually
have fun with it.
Uncommon goods makes holiday shopping stress-free and joyful with thousands of one-of-a-kind
gifts you can't find anywhere else.
I'm already in.
I grabbed a cool Smokey the Bear sweatshirt and a Yosemite ski hat, so I'm fully prepared
for a long, cozy winter season.
Both items look great
and definitely don't have the mass-produced feel
you see everywhere else.
And there's plenty of other good stuff on the site.
From moms and dads to kids and teens,
from book lovers, history buffs,
and die-hard football fans to foodies,
mixologists, and avid gardeners,
you'll find thousands of new gift ideas
that you won't find elsewhere.
So shop early, have fun,
and cross some names off your list today.
To get 15% off your next gift,
go to UncommonGoods.com slash big tech.
That's uncommongoods.com slash big tech for 15% off.
Don't miss out on this limited time offer, Uncommon Goods.
We're all out of the ordinary.
Capital One's tech team isn't just talking about multi-agentic AI.
They already deployed one.
It's called Chat Concierge, and it's simplifying car shopping. Using self-reflection and layered reasoning with live API checks, it doesn't just help buyers find a car they love. It helps schedule a test drive, get pre-approved for financing, and estimate trade-in value.
Advanced, intuitive, and deployed.
That's how they stack.
That's technology at Capital One.
And we're back here on Big Technology Podcast Friday edition with Ranjan Roy of Margins, as we do every week.
We break down the week's news in the first half.
We went through the Microsoft and OpenAI agreement, and Sam Altman's big plan for the future of OpenAI. And I stood on the table here and said they are making bold bets.
They're trying to be a massive business, autonomous research, their own cloud.
And then you look at what's happening on the ground inside OpenAI.
And what we see there is a mirroring of Facebook in many ways that are making OpenAI's employees themselves quite uncomfortable.
So here's from The Information: OpenAI readies itself for its Facebook era. In the three years since OpenAI introduced ChatGPT, the company has seen an influx of leadership and staff from Meta, even as Meta has poached liberally from OpenAI in recent months. Of the people who list OpenAI as their current employer on LinkedIn, approximately 630 previously worked for Meta, which represents 20% of the roughly 3,000 total employees at OpenAI. The contingent is so large that OpenAI's Slack has a channel just for former Meta employees, according to one of the current employees.
As the Meta alums have arrived, it's become evident that some of OpenAI's latest strategies and initiatives do resemble the tactics Meta used to grow a corporate juggernaut. OpenAI itself is keenly interested in growing into a similarly gigantic form, an effort to satisfy investors and justify the half-trillion valuation it received a few months ago.
So, Ranjan, I mean,
we've talked about it, you know, at length, about all the Facebook executives now working within OpenAI. Does what they're actually doing, and their hiring practices, show a very different direction than the one Sam is talking about in his tweets and livestreams?
I think certainly, yes. I think it's telling and almost, I don't want to say funny, but like,
I mean, and we've talked about this since the moment Fiji Simo was hired. The moment Pulse came out, certainly when Sora came out, like, they're just laying this groundwork and infrastructure for just lots of ads.
And I actually think they're going to have to release this ahead of any kind of IPO because, you know, online advertising, especially whatever that next wave of AI platform advertising is going to be very different than it looks today across like any meta platform, Facebook, Instagram or even a TikTok.
So that is the next big market.
They're incredibly well positioned, but it certainly doesn't square with any of the big, like no one is going to like to hear when you're painting this futuristic vision that it's just going to be a bunch of ads stuffed into whatever your feed, even though now it's not going to be a feed, but it's going to be like a chat conversation.
So it's definitely not as grandiose, but it's certainly what's going to happen.
Yeah. And when you look at the, and this is, again, I'm like, I made this whole case for, like, all right, OpenAI is going for the gold. And maybe they are. But I also think they need to build an intermediate business. And I don't know if this is going to really work out if they end up being an ads business above all else. But it certainly seems like they're going that way. We've talked about Pulse, which is this morning update that is available to ChatGPT Pro users right now. That is, like, an obvious media or ad product coming up. And there's also this, from The Information story: OpenAI employee worries range from the perceived shallowness and potential abuse of Sora, the company's new video app, which doubles as a social network and has rocketed to the top of the App Store charts, to recent comments from OpenAI's leaders suggesting a growing openness to advertising as a revenue stream.
I mean, Sora is almost a case in point of, like, what happens when you merge the Facebook ethos with an OpenAI ethos, right? It had some kind of disregard for, you know, typical norms, like privacy, some good engagement stuff to get you, like, interacting with your friends,
social components, notifications.
It felt like they almost did the Facebook version of AI better than Facebook did.
I mean, not only almost, they did.
And you saw how flat Facebook fell with Vibes.
So it is starting to look very simple.
Actually, in terms of the Logan Paul, like, being able to use him as a cameo on Sora.
Jake Paul.
Oh, sorry, Jake Paul. All right, I always get my Pauls wrong. Um, even Jake Paul, like,
being used as a cameo. And I remember like at first people were, it was actually brilliant. Like,
at first a lot of people were kind of horrified that someone would let themselves have their
likeness be used so openly and people kind of making fun of it. And actually to Sam's credit,
too, he did that. And it was such a perfect seeding of a growth hack. Because, like, when you have that blank-start problem of what do I create, suddenly being able to make fun of Jake Paul and
Sam Altman is what everyone wants to do anyways and is a deep human trait of all of us. So to enable
that was a great growth hack and really feels like just classic Facebook product. So they're doing
well. Now remember, OpenAI started as a research lab and there have been people that have, like, sort of stayed there from those research days. But as they bring in these Meta executives,
some of their perspectives have flowed in.
Here, this is again from the story.
In 2024, a long memo from Kevin Weil, a former Meta executive who was then serving as the chief product officer, caused a bit of a stir shortly after he joined. In the document, Weil, who was known at Meta for his lengthy missives, detailed his goal of getting ChatGPT to a billion weekly active users and having it pass the toothbrush test, a term former Google CEO Larry Page coined to refer to products consumers use daily. Some executives, including Mira Murati, who was then OpenAI's chief technology officer, chafed at Weil's memo and felt the naked emphasis on simply increasing users, rather than trying to build a quality product that could attract users, was the wrong strategy.
Very interesting culture clash going on within that company.
And Mira is no longer there, so it seems obvious which elements of the company have won.
Yeah, I like the toothbrush test; a product you use daily is a good kind of barometer or metric for this. And I mean, ChatGPT certainly has passed the toothbrush test.
There's times when I actually wish they did not kind of engagement farm the hell out of it
because it's a pretty compelling daily use product for anyone, but obviously they need to
kind of increase time on site, total chat interactions. And, you know, they're going to have
those KPIs in a dashboard. But my favorite part of this is the
idea that maybe this is why Mira Murati doesn't even need a product and is raising a billion dollars with no product. So it does not actually get...
If you can get a billion dollars for no product, go get it.
Yeah. And that way the product is not going to be used in the wrong way and not going to be used for engagement farming, because there's no product. It's no product. It's brilliant.
It's brilliant. I swear, any VC who bought that... I mean, we will see something, but I don't know. Isn't the purest form of product, the one that avoids all of this kind of, like, sullying of greatness, no product at all?
You know, I think we'll try to do that with the podcast. You know, we will just not play and not record any episodes, so as not to upset the balance of the universe. I think that's the only way. I think advertisers would be into that.
I mean, they must. Or maybe we'll just get some VC funding.
No, we can just get, we should just get Mira and Ilya to advertise on our non-show.
No product, no show.
No product, no show.
No products. No ads. God bless.
As we're all just downloading Dave's Hot Chicken app for free sliders.
Oh, you saw.
This week, Jensen and the CEOs of Samsung and I think one of their companies,
they were eating fried chicken.
Longtime readers of margins will know of my, I love fried chicken and I use it to explain.
I've written a lot about.
fried chicken and private equity
with Popeyes. It's a long-running
thing. I love when it shows up at the center of
the great business stories, and
it continues to do so.
That's, forget, forget
AGI or A-S-I, fried chicken
is the barometer for
where greatness and tech lies.
So one more part of this story
that was really interesting to me is at the
very end, super buried, but I would
have written an entire story based out of this.
I don't know if they didn't have the sourcing there or what
happened, but...
Employees have bristled at how post-training, when a model learns to follow instructions and respond how humans prefer,
has started to emphasize engagement metrics, according to a former employee.
According to a former employee, they have felt the new emphasis is another sign that OpenAI is beginning to become another Meta.
We don't want to become engagement farmers, the current employee said.
Wait a second. Post-training, core of the model, they're building in engagement hacking or emphasizing engagement metrics.
That's a huge story.
I mean, but anyone who uses ChatGPT sees it baked into the product.
We've talked about this a lot.
I've complained about this a lot that you can ask a very straightforward question
that has a very straightforward answer.
And it has to ask you, would you like to do these other three things?
And here's another chart or a slide or something.
Like, you know, like the entire product is it's so clear what is happening.
And I'm sure it works.
And I guess that really does kind of, like, solidify the Meta-fication of OpenAI where, you know, when it's so baked into the product that you can viscerally feel it, I think it shows that you've been Meta-fied.
Right.
And you had talked pretty recently about how, like, OpenAI just wants to use its compute.
And we had some readers being like, what is Ranjan talking about?
And I think we're starting to get some illumination here, which is that
they are chasing users. They want engagement. More engagement probably leads to more
investment. They think they'll get the cost down eventually. And they have an IPO. And they sort of
will figure it out after the IPO.
Well, yeah, because with $1.4 trillion in financial commitments around data centers and compute, they have to show that it's being leveraged and utilized in some way. Like, we had talked about this in the Sora context, which is very heavy compute usage. But to me, the Pulse, where it kind of works all night to give you an update of things in the morning, and it's just draining compute even as you sleep... Like, all of these products,
typically you would think a company that's operating with some kind of margin is actually
worried about, are we over using compute? Every other enterprise in the world is definitely
having conversations when it comes to AI around, are we doing this in an economically viable way? But OpenAI, they're building things that just... they're not worried about efficiency. They're worried about, does it actually use more compute, because that's going to look great to the market. As counterintuitive as it may seem, it'll look good to the market, I'm convinced, to show we're actually using this kind of percentage of our overall capacity.
And now on the flip side, you wonder, well, what's happening at Meta? Because Mark Zuckerberg is sitting over there in Menlo Park, and he's like, wait a second. OpenAI has the fastest growing consumer application of all time. It has lots of my executives. It has a bot that's become not only informational, but something that's having relationships with many of its users.
And people are getting less and less interested in having relationships with other people on social
media, and they're very well capitalized.
And so I've actually come around to thinking that, well, you know, Meta, not Google, might be the most under threat from this generative AI moment.
And Meta, to Mark Zuckerberg's credit, is going to spend and try to stay in the game.
And when it told Wall Street this week that it was going to spend a lot more in order to do
it, Wall Street did not like that.
And Meta stock is down 12.5% this week as of this recording.
I'm actually kind of curious what your perspective is on this meta situation.
Is it an existential threat and how does the business get through if it's going to actually fight this way?
Well, I was definitely thinking a lot about why, for Meta, it was a negative signal to the market that they're increasing their capex spend, whereas for Microsoft and Google, it was almost a show of strength to kind of continue that story on capex spending. And I think it's got to be because when Google or Microsoft spend more, and even Amazon, on capex, part of the assumption embedded in that is companies will use that
and give them money for that capacity. It's not just training your own models. It's not just
feeding your own product. It's you're building an entire infrastructure that people will spend
money to be part of. Whereas with meta, every dollar they spend, it only sees a return when it actually
flows into one of their products.
So it's a very different kind of spending
than the other tech giants.
So that was my read.
I don't know.
How did you think about it?
No, that makes sense.
But then it's also interesting.
I'm actually curious what you think about
the product side of Meta also.
Because you're going to get less of an immediate reward
for your spending and your product is under threat.
It's a good point.
But to me, I don't know.
I think, okay, the product side,
one, we've talked about this a bunch recently, like, I think Meta is going into the hardware
business in a big way. I think they're going to go actually start competing with Apple.
So the way AI flows into actual kind of more from the hardware side, I think they're going to
have like a pretty interesting place in all of this competitively. But you can also look at like,
remember, when iOS 14.5 came out, with Meta, everyone said their ads business was dead. And they leveraged, like, classic AI and machine learning to actually solve their ads problem and their
ad targeting problem. So they have actually shown using AI to actually improve your own existing
business. They probably showed that better than anyone. It wasn't an add-on or kind of like a tack-on
feature. Literally transformed and saved the core advertising business. So I think they're in two
businesses right now, advertising and potentially hardware, and AI can help both.
Here's my new take: Meta might want to consider not developing its own models
and just building products on top of others.
I think I'm owned.
I agree with that, because Llama always felt like almost a defensive posture, or not defensive, like a Trojan horse just to kind of scare the OpenAIs of the world: open source,
here you go.
So I think it's never really been clear.
And clearly all the upheaval at FAIR and now, what's it called again, TBD?
Oh, yeah, that's what the superintelligence lab is now called.
Yeah, the superintelligence lab.
Like, it's clear that there has not been a straightforward, straight line path over there
in terms of model development.
Right.
Okay.
We should also talk about this AI wobble.
Because, you know, this week, obviously, we got a lot of numbers about whether the AI investment is, like, seeing return.
And, I mean, some investors are making this argument that, like, well, unlike, let's say the dot-com boom where you laid all that fiber and 95% of it was dark, now all the GPUs are being used.
So, therefore, no bubble.
But what's your perspective on the AI wobble?
Yeah, this is my favorite phrase of the week.
It was just in the Goldman Sachs morning briefing newsletter, from, like, a managing partner and CIO of a fund I had not heard of: the AI wobble. The idea that, like, you know, so much of the current market action and growth is completely centered on AI, but that there is a much more immediate, actual kind of return potential here, I think, is how I view the market. So comparing this to internet bubble 1.0 has never been the, like, correct corollary for me. So I'm loving this AI wobble. It's not a bubble. We're going to get a bit of a wobble. Maybe when OpenAI announces their trillion-dollar IPO and we actually look at
their balance sheet and P&L, but I think it's going to be a wobble. My one question is, I mean,
this is going to come up in a conversation that Nick Clegg will have here on the podcast,
coming up in a couple weeks, and he's the former president of global affairs at Meta, but:
all this money is going into this AI model development with the belief that if you reach
AGI or superintelligence or whatever it is, that you, and you alone, will possess it. And there's been no evidence to show that, like, once a company builds a model of that nature, it can hoard that intelligence.
So, well, that's the best argument.
It's for the public good.
Yeah.
It's for the public good.
But if that's the case, then all that money is going to get lit on fire if you can't hoard the intelligence.
Interesting.
Well, yeah, I mean, Alex, you had a tweet in our prep doc that was very interesting to me, because Sundar was basically, like, proudly saying 150 Google Cloud customers processed one trillion tokens, and then had some kind of standard case-study metrics about email open rates and stuff like creating campaign efficiency. But when you actually map out the numbers there, that's less than a million dollars, using their own token cost, per enterprise. So you're at $150 million in revenue, which is 0.3% of GCP's overall annual revenue.
So the scale is interesting to me, where I think that it just reflects:
No one has any idea what the economics of any of this are.
Like, we've talked about this at length.
How do you value a company?
This isn't software.
This is something different.
Maybe it's more industrials.
Maybe it's more traditional retail, like apparel retail even.
The actual kind of economics of generative AI, no one knows.
And this was such a perfect encapsulation of it where Sundar is just like, you know, bragging on Twitter.
And then when you actually just calculate it out based on their own numbers, it's kind of underwhelming.
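(For readers who want to check Ranjan's math, here is a minimal sketch of that back-of-the-envelope calculation. The blended token price and the GCP annual revenue run rate are assumptions chosen to reproduce the figures he cites on air, not numbers from the episode or from Google.)

```python
# Back-of-the-envelope check of the Google Cloud token math discussed above.
# Assumptions (not from the episode): ~$1 per million tokens blended price,
# and a GCP annual revenue run rate of roughly $50B.

tokens_per_customer = 1_000_000_000_000   # ~1 trillion tokens, per the stat as quoted
num_customers = 150
price_per_million_tokens = 1.00           # assumed blended $/1M tokens
gcp_annual_revenue = 50e9                 # assumed ~$50B/year run rate

revenue_per_customer = tokens_per_customer / 1_000_000 * price_per_million_tokens
total_token_revenue = revenue_per_customer * num_customers
share_of_gcp = total_token_revenue / gcp_annual_revenue

print(f"~${revenue_per_customer / 1e6:.1f}M per enterprise")    # ~$1.0M
print(f"~${total_token_revenue / 1e6:.0f}M total")              # ~$150M
print(f"~{share_of_gcp:.1%} of assumed GCP annual revenue")     # ~0.3%
```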
Okay. Yeah, it's going to be fascinating to watch. I mean, obviously the stakes just get raised higher every week. This week, we didn't even talk about it: Nvidia hit $5 trillion. It's like an afterthought. It took three months to go from $4 trillion to $5 trillion. Same. I'm more concerned about Jensen's fried chicken order in Korea than the extra trillion in market cap.
All right. So let's close with this story about this new humanoid robot, the 1X Technologies Neo. Joanna Stern from the Wall Street Journal wrote: I tried the robot that's coming to live with you.
It's still part human.
She describes her experience with it.
The five-foot-six-inch robot
shuffled to the dishwasher,
pulled the door handle, and slid a fork,
tines up naturally into the silverware holder.
Then it grabbed a towel to wipe the counter.
Later, it folded my sweater
and fetched a bottle of water from the fridge.
It was wild to watch.
Sure, Neo nearly toppled over
while closing the dishwasher,
took two minutes to fold a shirt,
and twisted its arm attempting to dance the macarena.
But, oh, she goes, I didn't mention Neo had a human puppet master controlling it with a VR headset.
You like this?
It's a person using VR to control a robot in your house.
You like this?
No, I actually don't like it.
I was just saying that for the sake of argument.
Now you have to make the case for it.
You kept the listeners on the line the entire show.
I was saying you were going to like this.
Now you're out.
Okay, here's what I like about it.
Okay.
Now we're talking.
Yeah.
To me, I respect and like a startup going for it.
Everyone has painted the picture, Elon more than others, that humanoid robots... remember Optimus? Haven't heard much about that recently. This is the future. I like that a startup is actually, like, pre-order. It's not perfect. There's a person who's going to be looking inside your house all day long and controlling this remotely,
but we're going for it. We're doing it. If this is truly the future, we're going to at least
take a shot and try to do it. That's my positive case. And I think it's, I respect it.
People allow real humans in their house during the day. So this idea, I mean, the thing that got me was
he was like, this robot is training.
It needs to learn the use cases in your house.
We'll collect data and then eventually we'll be able to automate that.
It's the second part of that that I'm not 100% sure.
Like, people have done this with chatbots.
The pre-LLM chatbots were like,
we just need to collect data on what people want,
and then we'll find a way to build that.
And they couldn't until the technology breakthrough happened.
So this idea that they're building a company with a direct shot
to like a humanoid robot that works, to me is not promised.
But yeah, I guess, okay, I'm coming around to you here.
Kudos for trying.
Well, do you know what, but to take the other side, because that's really where my mind is,
I think I have said this in past episodes, I just never thought that the form factor of robotics
needs to be humanoid.
Like, it makes everything more difficult.
And I would rather, I wish, and I know I've lost this battle.
I've given up.
Everyone wants a humanoid robot.
I would rather have a laundry-folding robot in one corner of my house.
Like the Roomba vision.
Like you didn't need a humanoid robot to hold a vacuum cleaner that you already have to vacuum your house.
Just make a little disc-like thing and it actually works great.
So I would rather have a series of robots around the house specialized for their task rather than needing this kind of...
Actually, it looks kind of cool, I'll give them that. It looks like what we all imagine a humanoid robot to look like, with, like, this weird blank mask face that someone somewhere is looking into your house through and controlling through VR. It looks cool, but I just say no to humanoid robots. Maybe it's not too late.
Yeah, I mean, I will tell you that my reaction... so Joanna made a great video about this, and my reaction when, like, the human behind the robot started speaking was like: can you please shut up?
Like, the guy's name was Turing. That was his first name.
Wow.
His real first name was Turing.
Oh, really?
Yeah, the teleoperator.
But I'm also concerned about this future where people in wealthy countries are buying
these bots for like $30,000 and then paying like a service fee of like $100 a month.
And then on the other end, you have just like warehouses filled with like thousands of people
in the Philippines that are just, like, operating these bots for them as, like, virtual housekeepers.
And I mean, maybe that gives, you know, some, you know,
creates some jobs in the Philippines, but it just feels dystopian to me.
I don't like it.
I mean, not to be even overly cynical, but that's how a lot of globalization worked.
I mean, call centers were basically like a early precursor to that.
So I think that side... but I don't know. The ability, like, how easily could the person on the other end just fuck up your house?
That's a very good question. Like, what's stopping them? Is there...
Yeah, now, they say they have zones, um, they have, like, zones that the robot can't go into. And, like, the robot is able to lift, sort of, very heavy weights, but, like, not, you know, they say it's not going to be able to... they're not going to let it. Let's be honest, there's probably a way around it if you give a person a robot in your house, like, free rein around your house.
There's definitely going to be one who's like, I don't like the person who's like employing me.
I'm going to smash some frames and stuff like that for sure.
And then, of course, you know, we have cloud outages all the time.
So for me, the biggest worry is, like, you know, humanoid robot holding my baby gets hit with an AWS...
Anyone who's letting that happen...
Wait, I'm not done with the tweet. AWS cloud outage happens and the baby drops.
Anyone who lets that happen...
I'm just telling you, don't let me into your house
through the guise of a VR-controlled humanoid robot
because, Ranjan, we'll smash your shit up.
I am not the teleoperator.
Only mischief can take place at that point.
But I, again, respect them actually trying to make this a real thing
rather than just endless talk and demos.
They're trying.
So congrats to you, 1X Technologies.
If you came in here trying to make a convincing argument for the Neo robot, I'm not buying it. Especially not letting you in my house with the ability to lift heavy things and smash my stuff up. But you know what, one more positive thing: the company's name is kind of awesome too, 1X Technologies. Is that, like, a play on, like, everyone wants 10x, everyone wants 100x, but you know what? 1x.
Dude, that is a great name. Okay. I suddenly believe in this technology.
Yeah, I think 1x. You know, it's real. It's on the ground. It's happening. We don't need your 10 or 100x. Our robots are only 1x, and we're okay with that. Are you going to dress up as the Neo for Halloween? I wonder if we're going to see a Neo robot out there. But that's it. I hope we do.
To whoever is dressing up as Neo, meaning you put together that costume in the last two days, I salute you. More so than
the robot itself.
You know, I've had a week to prepare
so I can tell you what my costume's going to be.
The Dave's Hot Chicken sandwich.
My wife will be the ketchup bottle.
The app.
The app.
How about me?
I do.
The new couples Halloween, uh, one-two is: one person is the Dave's Hot Chicken app and the other person is Sora.
Is the slider.
Oh, Sora.
Okay.
Number one and number two.
All right, Ranjan, I think we should, we should, uh, say farewell before this really
goes off the rails.
I think it's time. 1X Technologies.
All right, everybody. Have a great weekend.
We'll be back on the feed next with M.G. Siegler talking about the potential OpenAI IPO and everything else around that.
So thanks to you, Ranjan.
Thanks to everybody for listening.
And we'll see you next time on Big Technology Podcast.
