Big Technology Podcast - OpenAI Bailout?, Elon’s $1 Trillion Pay Deal, Amazon Sues Perplexity
Episode Date: November 7, 2025
Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) OpenAI CFO Sarah Friar's comments about a government backstop for its financing 2) Why these statements matter 3) Does OpenAI need financial discipline 4) Do we want to be the discipline police? 5) Should we build a national compute reserve? 6) Why OpenAI is getting so much scrutiny lately 7) OpenAI's ambitious financial plans 8) Elon's $1 trillion pay package gets approved 9) Amazon sues Perplexity 10) Ilya's deposition revealed 11) Farmer insights --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Is OpenAI looking for a government bailout if things go wrong?
Elon Musk gets a $1 trillion pay package to build an entirely new Tesla, and Amazon sues Perplexity.
That's coming up on the Big Technology Podcast Friday edition, right after this.
Capital One's tech team isn't just talking about multi-agentic AI.
They already deployed one.
It's called Chat Concierge, and it's simplifying car shopping.
Using self-reflection and layered reasoning with live API checks, it doesn't just help buyers find a car they love. It helps schedule a test drive, get pre-approved for financing, and estimate trade-in value.
Advanced, intuitive, and deployed. That's how they stack. That's technology at Capital One.
Industrial X Unleashed is bringing together leaders from IFS, Anthropic, Boston Dynamics, Microsoft, Siemens,
and the world's most progressive industrial companies at the frontier of industrial AI applied in the real world.
There's a clear market shift happening.
The world's largest industrial enterprises are done experimenting with AI.
They're deploying it at scale, and they're choosing IFS to co-innovate with them.
IFS is purpose-built for asset- and service-intensive industries: manufacturing, energy, aerospace, construction,
where downtime costs millions and safety is non-negotiable.
Industrial X Unleashed will feature live demos of AI products embedded in real-world operations,
customers sharing measurable outcomes, and learnings from companies deploying industrial AI at scale today.
Learn more at industrialx.ai.
Welcome to the Big Technology Podcast Friday edition, where we break down the news in our traditional cool-headed and nuanced format. We have a great show for you, a big show for you today, because we're going to talk all about this big controversy over OpenAI potentially requesting a backstop of its debt from the federal government, or not. Some disputed reports on that, but we have a point of view. Elon Musk also won a one trillion dollar pay deal that obviously goes in different steps, but has finally been passed by Tesla's shareholders. And Amazon has sued Perplexity, putting the bot internet into question. Joining us, as always, on Friday to do it is Ranjan Roy of Margins. Ranjan, good to see you.
Who knew that the biggest socialism story of the week would be OpenAI? You were cooking that up for a while. I've been waiting for that all week.
Also, Ranjan, I'm very glad to see that, you know, you're at your computer and not messing people's stuff up with your personal robot.
I know we had some people concerned about that.
I left the robot.
We're waiting another few weeks before I start destroying people's houses via humanoid robot.
That's good.
Well, we will take our precious time left on this earth before Ranjan gets access to destructive technology to talk about the state of OpenAI,
really, it's a story about them looking for a backstop from the federal government,
but more broadly, it's a story about whether OpenAI is ready for this moment in the company's history.
So let me set the scene.
OpenAI CFO Sarah Friar is out at the Wall Street Journal's big tech conference.
And she basically says explicitly that they are going to be looking for a backstop on the debt that they take out, should things go wrong. Let's listen to a clip.
Maybe even governmental, the ways governments can come to
bear. Meaning like a federal subsidy or something? Meaning like just first of all, the backstop, the guarantee
that allows the financing to happen. Okay, so Friar says we're looking for this backstop, this
guarantee that allows the financing to happen. That can really drop the cost of the financing,
but also increases the loan to value.
So the amount of debt that you can take on top of an equity portion.
And so the reporter asks, some form of backstop for chip investment?
And she says exactly.
Ranjan, let's just start with the logic here.
It's pretty logical for a company like OpenAI,
which has not been turned down by anything, really, lately.
Basically everything it touches, it gets, everything it wants, it gets.
Why not ask the federal government to guarantee your loans?
Well, I think, though, everything that it's been trying to get is actually it
committing to spending money, not asking for money, remember. All of the, I mean, it's obviously
raised an ungodly amount of money, but in the last few weeks, every big announcement is we will
spend $38 billion with Amazon, $100 billion with Oracle and everything else. So this is a very,
very different ask, I think. I mean, we'll definitely get into what it means. But I do agree that there's a certain kind of, I don't want to say arrogance. It's almost just, it feels
like they just can say whatever they want in whatever big numbers they want. And it's okay
because it's worked out pretty well for them so far.
Yeah. I think my broader point is that OpenAI's just been on this, like, run of the century, right? Just, like, funding deals left and right, the fastest growing product in history. And so when you think about, like, what you can do, why not ask, you know, whether the federal government can guarantee your loans, for instance,
because you can position it as a national security thing.
Here's, here's Friar, and these are the exact comments that she gave at this event from the Wall Street Journal.
She goes: I think we're seeing that, and I think the U.S. government in particular has been incredibly forward-leaning, has really understood AI almost as a national strategic asset,
and that we really need to be thoughtful when we think about competition with, for example, China,
are we doing all the right things to grow our AI ecosystem as fast as possible?
So she's coming out and saying it.
There's no ambiguity about it.
She's clearly asking for a backstop, and she's positioning it as a national security situation for the United States.
This is where it is difficult.
I'm going to try to take Open AI side in this.
Let's say this is the single biggest national security battle that we're going to face over the next 30 to 50 years at that point. And we've all
talked about, I think, on all sides of the political spectrum, the idea that America's fallen
behind in critical infrastructure. So if that is the case, and we are buying into this story
that AI is going to be the battleground of the next century, does it seem okay that they're
asking for federal guarantees or a backstop in terms of all the debt financing that's being taken out?
I mean, I think that's my point here.
I think it's perfectly reasonable for OpenAI to ask.
I don't think that the U.S. taxpayers should be backstopping the company's debt, though,
because and Sam Altman in a follow-up made this clear, if they fail, you'll still have Google,
you'll still have Anthropic, you'll have many others that are going to be building this.
So, in other words, I do think that the United States is going to be in a good position. And I also don't really think that the government should be
picking winners and giving OpenAI these guarantees, assuming that OpenAI would be the only one to get them. So to me, it's just, like, a perfectly reasonable ask,
and it's a perfectly reasonable no from the U.S. government. And David Sachs, the AI czar,
basically said no to bailouts. I still love that we have AI czars. Like, I think,
got to have a czar. Is it real if it doesn't have a czar?
Now, if it's not a czar... I mean, especially in the context of not wanting to have Soviet Union references across our government. But that's a side conversation.
I think on this, I think on this, so to me, where this is the most interesting is, as we're talking about this, the idea, okay, in the like halls of power, having discussions, looking at the long term, what is the threat from China, what critical infrastructure do we need?
I mean, even if you look at like the Biden IRA, there's plenty of money that was being given to fund critical infrastructure, especially across energy.
So these conversations to me seem completely reasonable as well.
What's so fascinating to me is how all of this played out just in the last week, what kind of words were actually uttered by Sarah Friar, then posted on LinkedIn by Sarah Friar, and then tweeted about by Sam Altman.
Like they're trying to get out of it and then not get out of it.
and get out of it again.
Like, that to me is, it's almost shocking,
but it's not shocking because it's open AI.
But you would think that they would have a more cohesive strategy
on something this important and big.
And that's exactly it.
So we both started this show talking about,
all right, like, we're not going to, you know,
say it's the craziest thing in the world for them to ask.
I'm not saying it should be granted. I don't think they should be granted. But why not ask, if you're OpenAI, for something like this?
The internet just blew up.
And basically it was like, you know,
we do not need the citizens of the United States of America to be guaranteeing the loans for a
$500 billion company. It's crazy. Fair enough. And then the weird thing happened, which is that
Open AI's walkback started. And it wasn't just a walkback saying, you know, after consideration,
we don't want these guarantees. It was almost like, hey, we actually never said that.
Here's, this is from Sarah Friar. CNBC headline: OpenAI CFO Sarah Friar says company isn't seeking government backstop, clarifying prior comment. OpenAI CFO Sarah Friar said late Wednesday that the artificial intelligence startup is not seeking a government backstop for its infrastructure commitments. I used the word backstop and it muddied the point, Friar wrote on LinkedIn. As the full clip of my answer shows,
i was making the point that american strength in technology will come from building real
industrial capacity, which requires the private sector and government playing their part.
Sam Altman: I would like to clarify a few things. First, the obvious one. We do not have or want government guarantees for OpenAI data centers. We believe that governments should not
pick winners and losers, and that taxpayers should not bail out companies that make bad
business decisions or otherwise lose in the market. If one company fails, other companies will do good
work. This is the weird thing. They are going back on it, but that's exactly what they were asking
for. Yeah, I think it's always so frustrating now where Sarah Friar, of course, going on LinkedIn and saying, like, it muddied the point, the full clip of my answer shows, trying to almost make it seem like she was pulled out of context when, I mean, in this case, it was so explicit. The reporter even asked her
to confirm it and she confirmed it and said the word backstop again.
So, like, I think it's just, already that's just a little bit disingenuous and just, and does not land well.
But I think the reason this is so salient right now is, I think, because it comes on the heels of, I think it was last week, the Brad Gerstner Altimeter interview, where he asked Sam Altman, you know, you're only making $13 billion, but you're committed to $1.4 trillion in spend.
How are you going to do this?
and Sam got very defensive, and then suddenly we start to get leaks that they're seeing $20 billion annualized ARR, not necessarily actual revenue.
But basically, the numbers have never added up.
Like, we've talked about this at length.
No one actually has really mapped out a genuine understanding of how that $1.4 trillion in
spend actually is justified.
So I think it's more real because it feels more emotional, almost because they almost certainly will need this government backstop to actually make this work. Like, it's not just, you know, some opaque financing deal that no one understands, like 2008, where, you know, only PhDs are able to kind of, you know, untangle complex subprime mortgages and see the credit default swaps.
Like, they're pretty clear numbers and they don't make much sense.
So when you say government financing and backstop, it feels a lot more real.
I think that's why this is hitting so hard.
That's exactly right.
And on CNBC, I was on CNBC talking about this earlier in the week. And I said, basically, look, OpenAI can be a trillion-dollar IPO company.
It's already $500 billion.
And its executives need to learn to speak with a little bit more discipline when it comes to questions like this.
Questions like Sam got at the Brad Gersner interview.
And then subsequently now, you can add on what Sarah Friar said.
And I think that you're right, that this is the reason why.
people freaked out here.
It wasn't necessarily because of the ask itself, or maybe it was.
And we're going to go into some more things about what Friar actually said at this conference
because there's some more really fascinating aspects of her talk that we should cover and
not just this one thing.
But it's the fact that basically the entire U.S. stock market, the entire global stock market to some degree, is counting on OpenAI to, one, execute on the promises that it's made to companies like Nvidia and AMD and Oracle, and Microsoft to some degree.
So there's that, there's that, you know, dependency there.
And then if we all, we both know, and we, I think we all know everybody listening to
this, that if there's weird stuff going on in OpenAI's books, that's going to cascade.
And so that's why people are like, wait a second.
This does not sound like, you know, all taken together, like the remarks of a company
that's mature enough to be able to have everybody counting on it.
No, that's a good point, because I was actually thinking about, like, I mean, Sarah Friar spent over a decade at Goldman Sachs. She was the CFO of Square, then the CEO of Nextdoor. Like, she has been C-suite of publicly traded companies. She was at the most buttoned-up investment bank in existence. Like, you would think decades, multiple decades, of just, like, knowing how to be disciplined around messaging
would just be so kind of deeply ingrained in her. Yet you go to OpenAI and suddenly you see this, like, misspeaking, or potentially actually saying what you're thinking, but then trying to walk it back, having to post on LinkedIn.
Meanwhile, your CEO is tweeting out all other sorts of things.
But then in the background, I had just come across that there was a letter, though, submitted to the Office of Science and Technology Policy, where, again, they have an entire section where they talk about, like, the need to counter the PRC, the People's Republic of China, by de-risking U.S. manufacturing expansion: provide manufacturers with the certainty and capital they need to scale production quickly. The federal government should also deploy grants, cost-sharing agreements, loans, or loan guarantees to expand industrial base capacity.
So from a policy perspective, they are actually pursuing this.
And again, as we are saying, that's not the most ridiculous, unreasonable thing to at least
have a conversation around. But, like, and then we're going to talk about the Ilya-Sam battles of the past, but it's almost like she shows up at OpenAI and suddenly her entire comms strategy is, like, uh, an Elon situation, or just kind of this, you know, totally chaotic OpenAI comms situation. It brings you down.
That's the thing. It's not the ask itself, it's the going back and forth. And then it comes on the back of this comment where Sam tells, you know, Brad Gerstner enough, when Gerstner asks, how are you going to fund $1.4 trillion in investments with $13 billion in revenue? I'm not, like, I don't want to be the, like, you know, discipline police. I certainly don't hope to be that way. But again, if you think about just
the magnitude of the, you know, how much of the stock market, how much of the economy is depending on
this to work out, you want to see something more buttoned up than that. And I do think we should get to some of the other comments that Friar made at the Wall Street Journal event.
Wait, wait. Hold on. Hold on. I would like that we are the discipline police in this situation.
Come on. I want to, I don't think you should say like that's a bad thing. Come on.
For the companies that are valued, once you cross a certain threshold, just can we go back to
having a little bit of communications discipline? I have worked in this for many years, even in
companies in the hundreds of millions of revenue, and, you know, even then we had a lot of hand-wringing and oversight. And I'm not going to say it was necessarily always the most pleasurable thing to have to deal with. But, like, at every other level, did you see the Snowflake chief revenue officer, I believe, who was speaking to a TikToker and accidentally, like, said something about revenue guidance? And they filed an 8-K. They're still trying to, it was a screw-up, but they're still trying to play by the rules. And they're, like, recognizing that this is actually, you know, an important thing, that to play by the rules, we have to say, it was a screw-up, and we're going to try to fix that. Versus companies like this that are just completely making a mockery of the entire, you know, discipline of the U.S. financial markets. So I'm going to say we need more discipline police around comms. Please just be a little thoughtful when you're speaking.
This is why we need humanoid robots. So Ranjan at scale can go into your living room and start
flipping tables over if you're undisciplined. If I had my humanoid robot army and it got me
my trillion dollar pay package, because actually there's a humanoid robot army in that actual
language in the pay package. Yeah, I'd be flipping tables right now. Well, hang on. I mean, does it matter
that they're private? They're a private company. Like, they don't need to file with the SEC for these types of things.
No, no, but that's the thing.
Like, shouldn't you have a little grace so you can ramp up to this stuff?
You tell me.
But see, this is where the whole, like, you know, ballooning of private valuations, I do think, becomes even more problematic. This is a whole much larger rant in general. But, like, it's because it allows the company to become this critical, as you said, like, OpenAI might be at the bottom of the pyramid holding up the entire U.S. economy right now.
And because they have not had to go through any kind of rigor that even like a public company with a couple of hundred million in revenue would have to go through, that's how we're ending up in these kind of situations.
And it's, I mean, it's a little bit scary that, like, you know, on one hand you have your CEO getting mad about just being asked a very simple basic question that he should have had a very clear answer
to. And on the other hand, you have a CFO who's had decades of training on this and suddenly
seems to be backsliding into, you know, a way of communicating that, you know, you would kind
of attribute to a first time CFO.
That's a good, that should be an article. That's a good, good opinion piece right there.
That's, uh, I like that. Okay. So I do want to talk about some of the other things that Friar said. But before we get there, let me just ask you this one bigger question, which is that I'm curious: do you think the backlash that we're seeing here, I mean, because obviously OpenAI backtracked because of the backlash to Friar's comments, do you think the backlash is finally a sign that OpenAI is hitting the ceiling of what it can get financially? Like, eventually it's going to push and there will be pushback. Is this sort of a sign of the top of its abilities?
Yeah, I think
it's not just that. It's also, like, a really good indicator of where it is in the national conversation, or the global conversation, right now. Like, we have talked about it for a long time. Our listeners and our friends and networks have been talking about it. But they are inserting themselves, and they are becoming not just, like, an economically critical part of the U.S. economy, but also just known by every person in America. So I think it's a sign that they have a branding problem. They're going to continue to have a branding problem. And so the idea that, like, they are getting a backstop when, you know, they've had secondary sales at, I think they had a secondary sale at the $500 billion valuation. So, like, already people are getting insanely rich off this, so it just makes it that much more unpalatable for any normal person
to have to hear this kind of thing.
It is interesting because their product really is beloved,
but they are starting to, again, speaking of our discussion last week,
run into some, like, Facebook territory on the comms side.
And what you don't want to be is viewed as, like, big bad tech.
So I don't think they're there.
I think people really do love ChatGPT.
To me, it's already one of, if not the most useful tech products I use on a day-to-day basis.
Maybe the iPhone is right above that.
But it's pretty close. And I don't know.
I think that this is a concern for them.
Yeah, agreed.
And yeah, I think it's a good point, the distinction between, like, people love the product versus love the company. Facebook, Meta, actually has shown pretty well that you can do both and succeed, if your product remains that sticky and, like, addictive. So I think it doesn't mean from a product standpoint that it's genuinely troublesome for them, but I don't know. I feel at a certain point, like, the more
people kind of pay attention to how open AI has been run for this long. And it's not just us and
others talking about it. I think it really starts to kind of bring a different kind of light
shining on it. Now I do, now let's get to these comments by Friar because this was fascinating.
And it sort of makes you think this is how they are going to
pay for that $1.4 trillion in investments. It's the best explanation I've heard yet. And if they're
able to pull off this vision, then maybe OpenAI will be the most valuable company ever, ever.
And so this is sort of what Friar said. I'm just going to get to the last two parts of what she said. She said, and this is from Gerrit De Vynck, who is a Washington Post reporter:
they will do creative commercial deals. They're not just going to sell access to a pharma company
on a per token basis.
They're going to demand a rev share of the profit the pharma company brings in from developing
drugs using ChatGPT.
This applies to commerce too.
They want to take a cut of both the discovery and the transaction when someone searches for a product
using ChatGPT. This is already happening with Walmart, Etsy, Shopify, all announced at their DevDay.
That is very interesting.
So what do you think about these monetization strategies?
You know what?
I, the pharma thing is very interesting to me because the commerce side of it is kind of par for the course from a platform.
I mean, that's Amazon's entire retail business with the third-party marketplace.
But everyone accepts if you sell on my platform, I will take a cut of the transaction.
But the idea that like our technology helps the pharma company create drugs in a much faster way and we would want to take a rev share.
I think that's really interesting.
And you can imagine how many other ways that becomes applied.
But then also at that point, are they taking on the risk as well?
And then, like, they're giving away free compute in order to get that longer term
rev share, which just adds actually an amazing whole additional layer of risk to the business.
Like, I wonder what, like, what could that look like?
So I actually thought this was much cooler when I saw it for the first time.
And then when I think about it, it's like, all right, so let's say, let's just go crazy here.
Let's say Open AI develops medical superintelligence that enables pharma companies to do things they never could before at human scale.
Somebody else is just going to develop super intelligence as well, and they're going to get into a price war.
So how long can you say, I want a part of the profit of this drug that you're going to develop with our technology, when somebody else can be like, well, here's the platform, just pay us the licensing fee?
Right, that pricing model only works if you're the only one that can offer that. I just don't think they're going to be...
Well, no, no, but see, I would think about it differently, because not from the competitive standpoint, but from the, again, going back to what business are you in. When they're looking at their trillion-dollar IPO and we're going through their S-1, you know, the way any investor should rationally look at these things is just to try to understand what business are they in. We've talked about this at length that, you know, like, already we don't
know the economics of what a generative AI business would be. It's not traditional software because
there's actually, you know, like incremental cost to the utilization of it. The more compute you use,
the more expensive it is for the provider. But still, subscription business for consumers,
API token revenue. That's a pretty straightforward business. And I think that could make sense. If it is, we are an AI cloud business, a consumer devices business, which Sam Altman said as part of that Gerstner interview, and now we're also a drug business, a pharma business, because we're going to be taking on some kind of risk around drug development in the actual underlying economics of our business? Like, I mean, my God, that's quite something. I respect a good creative contract.
But, like, that is, that's too much. If they can pull it off, I mean, good for them, and I don't think it's a zero percent chance that they can pull it off. But again, the question is, you're not the only one developing this. If you're the only one developing it, fine, you could get a percentage of the revenue. It's just going to be harder than I think they think, um, to do that.
Now, okay, let's just go quickly and talk about this government infrastructure, uh, side of things. So, um, Sam Altman says: What we do think might make sense is governments building and owning their own AI infrastructure, and one area where we have discussed loan guarantees is as part of supporting the buildout of semiconductor fabs in the U.S., where we and other companies have responded to the government's call and where we would be happy to help.
What do you think about the idea of a government-owned AI infrastructure?
Does that make sense?
And we don't really have to talk too much about this chip thing because that's been done,
But I think the idea of government AI infrastructure, obviously, lots of governments are talking about sovereign AI.
I'm curious what you think about that, Ranjan.
I mean, if we're saying, I think it was Sundar who was like, it was more important than electricity and fire. If we have, like, nationalized utility infrastructure, though it's still not fully nationalized in that way, but still, you know, much more public-private types of infrastructure. In that way, sure, maybe it makes sense if we're really saying
this is going to be the backbone of the entire economy. But again, if that's truly the case,
we should be building that today with that in mind and not where private citizens can actually
just make ungodly amounts of money before we even get there and then be backstopped by the
government. Then it should be, like, if that was really the case... And this is the part that feels disingenuous again to me, because, like, if Sam, if Jensen, in other comments, like, if anyone was really serious about this, you would be giving up economics within your own firm for the public side of things. But no one's doing that yet.
That's right.
That's a very good caveat.
But I would say I'm for it.
Go out and build the government AI infrastructure because, in case this thing does, I think we'd both agree there's a chance that this thing can reach AGI.
I don't know how high of a chance there is.
The amount that it's advanced in three years is actually insane.
Like, I'm now using ChatGPT and it is, like, reliable on many, many searches. Like, they've built, they've answered a lot of the questions, like hallucinations, reliability, capabilities, things like math and science, or assisting with these capabilities. It can do that. So given the progress we've seen so far, I think a government should invest in it. I don't fully know what a government does
with it. But that's, like, what does it look like in that case? If you think about it, like, if you're running a company, do you get a bill from the U.S. government for your API consumption? Like, I cannot imagine any of the folks involved in Silicon Valley or AI are going to be advocating for that. But so what does it look like? Or is it just the government backstops your loans and, what is it, heads you win, tails they lose?
I mean, ultimately, I think what a government wants is for its country to be strong. And so that
can play out in a couple of ways. First of all, government can develop AI technology on its own,
build a foundational model, and make that available for, you know, I guess anybody in the country
to use at a low cost and spark innovation from there. So that's like external use. And the other
side of it is, if you're an effective and efficient government, you're going to end up,
you know, creating a stronger country. And we know that we don't have an efficient government.
DOGE obviously wasn't the answer to make it efficient. But do you want to build very powerful
AI and then use it to sort of connect all the disparate data that you have, you know,
within the country, you know, assuming you can do it in a way that's sort of privacy compliant,
which is a big if. And then use that to make better decisions and then run your country
better. If you can do that, then it would be very valuable. So are you advocating for USGPT or some kind of foundation model? I'm, okay, maybe. I'm feeling it a little bit.
USGPT, build it. Nationalize the LLMs. Well, you could have, you could have the private sector
and the public sector both doing it. But I think given where we are today, why not? I mean,
it's not going to be a massive portion of your budget to be able to do stuff like this, and I think the U.S. government should. Now, what, are you running? Is this a 2028 platform pitch? No, I think, but we're running. Let's do a joint ticket. Joint ticket. Joint ticket. This is the platform. I'll be your, I'll be your vice president. You can run against, I don't know, whoever comes up, the anti-JD Vance. Right, JD Vance. JD Vance would be pro, pro-US... No, no.
No, it's nationalizing the LLMs.
We're the team USGPT.
The people want it.
We can run under Elon's new third party.
I think we would definitely be the ones he's looking for.
But there is this battle between countries, right?
Jensen Huang from NVIDIA on Wednesday, as this is all going down, says,
as I have long said, China is nanoseconds behind America and AI.
It is vital that America wins by racing ahead and winning developers worldwide.
I mean, the man wants to sell his chips into China.
I don't know how that makes the U.S. stay ahead if you're anti-export restrictions.
What do you think is going on here?
And is there a value in one country having more advanced AI than another?
And probably, yes, but what do you think?
I don't know if I'm overly salty this week, just given socialism is the topic of the news.
Meanwhile, we're having to hear all these kind of things.
But, like, even statements like this. It's always kind of just, China is nanoseconds
behind America in AI.
Like, is he using nanoseconds just to sound more technical and smarter, versus
just saying China's tied with America or almost catching up?
But I feel like, and it almost is just me,
he's saying it to make it sound more techy.
But that's more of just a, you know, a slight criticism.
I think, more importantly, as you said, Jensen has not shown... I mean, he's lobbied hard to sell chips into China.
So if this was really an issue and you really cared about it, you would not be doing that.
All right.
Let's close this segment out with, of course, some bubble talk.
This user on X, Edward Dowd, says: I see a pattern. Altman and Jensen smell the end of the bubble, financing drying up, and are going to ask daddy for taxpayer money, citing national security issues. Your read?
I think it feels like,
I mean, that's how you started this segment and it feels like that is kind of where things are,
that there's this, it's still a money pit in the way things are working currently. And if it's,
if you need to continue to have money pouring in and private capital is going to dry up at some
point, you might as well find the next wave of money.
Inevitably, I mean, there's only a certain number of sources that you can go to for money
and then you eventually look for governments.
I just don't think we're at that.
I think that stage is still a couple of years away.
I don't think we're there yet.
Yeah.
I think I'm realizing why I'm even saltier on this whole topic now. It's having worked in the
finance sector, on a trading floor, during the last... during the global financial crisis, when there were bailouts. And, you know,
watching it firsthand was a very problematic thing. So I genuinely think it's always
stuck with me in a way. But Wall Street never, like, openly called for it. Maybe Dick Fuld did a little bit.
But Wall Street wasn't, you know, like, begging for this stuff even before anything happened.
So that's why it's just, come on, come on, tech industry. Don't ask for a bailout and a
backstop before we're even there yet. Maybe behind the scenes start kind of laying the seeds,
but don't say it at a Wall Street Journal Tech Live event. That's all I'm asking.
It's got to be a record. Is it the first non-public company to float the idea of being too big to
fail? Probably. Okay. But one person that might be able to backstop OpenAI is Elon Musk, after he got
a trillion dollar pay package approved. Of course, the money won't come right away. But we'll talk about
the big pay package, whether it makes sense or not for Elon to get that money and how he'll
get it right after this.
The holidays sneak up fast, but it's not too early to get your shopping done and actually
have fun with it. Uncommon goods makes holiday shopping stress-free and joyful with thousands
of one-of-a-kind gifts you can't find anywhere else. I'm already in. I grabbed a cool Smokey
the Bear sweatshirt and a Yosemite ski hat, so I'm fully prepared for a long, cozy winter season.
Both items look great
and definitely don't have the mass-produced feel
you see everywhere else.
And there's plenty of other good stuff on the site.
From moms and dads to kids and teens,
from book lovers, history buffs,
and die-hard football fans to foodies,
mixologists, and avid gardeners.
You'll find thousands of new gift ideas
that you won't find elsewhere.
So shop early, have fun,
and cross some names off your list today.
To get 15% off your next gift,
go to UncommonGoods.com slash big tech.
That's UncommonGoods.com slash big tech for 15% off.
Don't miss out on this limited time offer.
Uncommon goods.
We're all out of the ordinary.
Capital One's tech team isn't just talking about multi-agentic AI.
They already deployed one.
It's called chat concierge, and it's simplifying car shopping.
Using self-reflection and layered reasoning with live API checks,
it doesn't just help buyers find a car they love.
It helps schedule a test drive, get pre-approved for financing, and estimate trade-in value.
Advanced, intuitive, and deployed.
That's how they stack.
That's technology at Capital One.
And we're back here on Big Technology Podcast Friday edition.
Someone's had a good week.
His name is Elon Musk.
Tesla shareholders, according to the Wall Street Journal, approved his $1 trillion pay package.
Flanked by dancing humanoid robots on stage, bathed in pink and blue light,
at the electric vehicle maker's
Austin, Texas, headquarters,
Musk thanked the crowd of shareholders
who supported the trillion-dollar pay package
with more than 75%
of the votes cast.
What we're about to embark upon is not merely
a new chapter of the future
of Tesla, but a whole new
book. I guess what I'm saying is
hang on to your Tesla stock, Musk
said. Do you
like the trillion dollar pay package?
Of course, like, Tesla has to
hit some crazy goals, which is an $8.5 trillion market cap. It's right now at one and a half
trillion. So, you know, 5x the company. What do you think, Ranjan?
So I saw from, like, non-tech
friends, I saw a lot of kind of angry posts about this and that live more in the political
realm. But I'm actually going to say I, and I've been plenty salty this episode so far,
But this one did not bother me in the sense that if you're getting Tesla to $8.5 trillion, take a trillion, Elon.
Like, if you're going to sell 11.5 million new vehicles, even when you've only sold 8.5 million vehicles in the lifetime of Tesla, take your trillion.
Like, if you create this humanoid robot army and it remains out of my hands and I'm not destroying people's apartments with it, you know, take your trillion.
And all of that, I had written in the past a few times, like, to me, the problem that happens here is, my grand theory is that, like, Elon and Tesla as a company was a great car company.
The moment he had his 2018 pay package that was completely aligned to valuation, that's when all the shenanigans around just trying to, like, focus on appreciating the stock, you know, like, just really building this army of people who are
religious about the stock began.
So now that we're moving back to just another just insane goal around the actual
stock appreciation is the primary driver of this, just reminds me we're going to go
through endless cycles of Elon saying crazy things.
Already, I just think, he was, like, this morning tweeting: so many rabbits to pull out of the
hat right now.
He said he wants a big enough ownership stake in Tesla to be comfortable that the
robot army he was developing did not fall into the wrong hands. So maybe he listened to our
podcast episode last week. He's mostly trying to do this to keep his robot army away from
you, Ranjan. Even though you'll be president under Musk's third party, he wants to control those
robots. I think that's a fair bargain. Which I can't argue with. I respect this.
But I, okay, so a couple of things. First of all, it is interesting to me that this is Musk effectively
saying he's done with Tesla as a car company. I mean, that's the only way that I read it, right?
Like, are they going to still produce cars? Yes. But the ambition is robotaxis and the ambition
is humanoid robots. And if you can pull that off, I think he'll be worth the money,
especially if you can pull that off in a way that makes the market cap go to $8.5 trillion. I don't
know. It seems to me like robotaxis, humanoid robots are much more the future than electric
vehicles, even if, you know, maybe those robotaxis are EVs.
What do you think about this?
Yeah, no, no, it's a good point, which is, like, I mean, this is where the disconnect is. It's been around for a long time already.
But, like, it's almost comical as analysts all try to analyze every Tesla sales number from a car company standpoint.
But then the stock price is just completely dependent on humanoid robots and things that don't exist today.
But I think that's actually a fair point.
Like, Tesla is no longer a car company in any way.
Elon is not pretending it's a car company in any way.
We should all stop looking at actual vehicle sales.
That should be an afterthought.
That should be, like, the least important part of the overall business.
And people should just focus on the bigger opportunities here.
Do you think Elon is sincere when he says, look, we're going to be producing all these humanoid robots?
And if we're doing that, I don't want anybody else to.
control them. I want to have oversight and therefore I should have 25% of the company. So I'm
curious. Actually, this is a two-parter. One, do you think he's sincere? Two, if you know that
Elon wants to control this humanoid robot army, what exactly are you signing up for if you
buy one? Like, is that yours or is that his? It's a good point. And actually, now that you
repeat that, it is kind of like it's a threat. It's like extortion to the shareholder base
where it's like, listen, if you don't give me this,
this humanoid robot army we're building could fall into the wrong hands
and only I can save you.
But you better give me my 25% of Tesla if we have these milestones
because otherwise watch out for those robots.
So I think, I mean, I don't know, with Tesla,
it's always something like this, but it's pretty amazing
that that's just being said out loud.
Yeah, he also effectively threatened to leave as well.
So maybe this is the goal.
Maybe true world domination means having a humanoid robot in every house.
And if somebody annoys you or is undisciplined in their PR statements about financial stuff, you just go smash their shit.
The true promise of the humanoid robot army is world.
I mean, actually, you know what?
It's got to be the path to world domination is necessarily having control of a humanoid robot army.
I think that's a pretty safe bet.
Exactly.
All right.
So I don't know if that segment made me more optimistic about the state of the world or less.
But I think it makes sense that Elon got that package.
And we'll see what he does with it.
It'll certainly be an interesting story moving forward.
I doubt he'll ever get the trillion.
But if he does, good on him, I suppose.
Okay.
Let's go to this story.
Bloomberg says Amazon sues to stop Perplexity from using AI tools to buy stuff.
I thought this was a fascinating story, and it really goes to the question of whether this agentic web will ever be allowed to take off, because there are going to be companies that are just going to say, we don't want your agents using our technology.
Our stuff is for humans and not bots.
Here's a story.
Amazon.com Inc. is suing Perplexity to try to stop the startup from helping users buy items on the world's largest online marketplace, setting up a showdown that may have implications for the reach of so-called agentic
artificial intelligence. The U.S. online retailer filed a lawsuit Tuesday demanding Perplexity
stop allowing its AI browser agent, Comet, to make purchases online for users. The e-commerce
giant is accusing Perplexity of committing computer fraud by failing to disclose when
Comet is shopping on a real person's behalf. What do you think about this? It's a pretty fascinating
showdown.
Yeah, actually, okay, I feel like this is bringing all the earlier conversations back
down to Earth, and this is where, like, the real work is happening. And it is incredibly fascinating
because, like, what is a browsing activity? So the idea of Comet, Perplexity's browser,
using agentic browser capabilities to actually do the shopping for a customer, which I've actually
tried, and it didn't actually work. I'll say this. I was testing something. My son's really into Dog Man,
the book series. So I was like, here are the books I own,
can you find all the books I don't own and create an Amazon shopping list?
It actually was a disaster, and it was unsuccessful.
I tried it on ChatGPT Atlas as well, did not work.
So just a reminder, like when we talk about these more kind of like theoretical things
about your browsers doing all your shopping, we're not quite there yet.
But again, you figure you're in Perplexity.
You ask, buy me some paper towels, and it goes and does the work.
Like, why shouldn't you as a consumer have the ability to do that? Like, it feels like, I don't know, a pretty basic thing. But Amazon Rufus is actually killing it right now.
Andy Jassy came out there saying, like, over 250 million users have actually used it, and
users are 60% more likely to buy a product if they use Rufus. So, like, they have, in their
own closed ecosystem, a pretty valuable path in terms of still
owning the entire shopper journey, so they're going to fight for it.
But it's a weird theoretical thing because, like, yeah, if it's an AI doing the
shopping for a consumer who wants the AI to do it, should you be allowed to block it?
Do you think they should?
I don't know.
It's a question that we're going to get.
You know, we talked a lot about how agents could do tool calling.
And this is just a question that will keep coming up, because you think
about the contract that app builders effectively built with, you know, people who use the
internet in mind. It's supposed to be built for human users, right? And now what happens if most
of the traffic is bots? So maybe agents working on our behalf, but it's a completely different
value of visit that probably doesn't sustain a product the same way that a human visit would.
Think about just a mapping product, for instance, right? So let's say you're asking an agent for
directions, right? And in the background, it's like going to Google Maps. It's not Gemini, right? It's
going to Google Maps and it's finding new directions. And then, you know, presenting to you in a chat
window, well, Google Maps only made sense for Google to create because there's ads there that will
support it. So now, like, what happens if most of the traffic on Google Maps is agentic? It sort of
really changes the economics of the Internet. Now, if I'm Amazon, I wouldn't block perplexity
because what I want is purchases, and if a bot is on my page trying to add stuff to a cart that somebody might buy, then I'm very happy about that.
The only thing to worry about is that Amazon has a big advertising business now.
So agentic traffic does cut off the advertising business.
Okay, that's a really good point, I think, like the idea, especially if there's no transaction or commerce taking place, agentic browsing totally destroys the economics of the web.
But even on the transactions, I think it still introduces a great deal of risk, because Amazon's value really is locking you in their ecosystem.
Like, I don't go to a lot of web pages as destinations.
I go to Amazon.com all the time.
Like, if suddenly Perplexity is choosing where to buy for the customer, that's a problem.
So I think, yeah, it is a big problem for Amazon. They have to maintain some ownership of that customer
relationship and not just be one of many choices. If Perplexity becomes the place where the transaction takes place, that's a threat. And it's just, by the way,
one more sentence. Like, people use Amazon, like you said, because they're locked into the
ecosystem. They just find it to be the most convenient place. You go to Amazon, it's one site that
has everything, the everything store. And you don't want to, like, search 20 different sites for
the same product. The agentic stuff flips the everything store completely on its head. Because instead
of the everything store, you just type one more query in. And now the entire web is the everything
store inside of the Comet browser. So that is, I think, why Amazon is upset at this. It's just
because the value proposition is gone if it's just one more sentence into an AI browser to find
you products across the entire web. Chatbots are the everything store. I like that take. I think
ChatGPT is, Perplexity is, Gemini, all these chatbots allow us to shop for everything and
anything at once, which is, that's been Amazon's entire value proposition. But one
question: if agentic browsing is problematic and different, if my humanoid robot is going to a
store, is that problematic? And will that be banned? And will they be discriminated against?
Well, I think we will definitely have a war among people and humanoid robots. It's without a doubt.
Even if the humanoid robots don't assemble into the army, people will knock these things out.
And they're already burning Waymos. People don't like robots. They don't. And so
they will just knock these things out.
Well, but so if Perplexity Comet traffic to
Amazon can be banned legally, like, if you can file a lawsuit and say that traffic should not be
allowed on my website, can Walmart physical retail prevent my humanoid robot from going and
shopping for me?
Absolutely.
I mean, bars were banning Google Glass 10 years ago.
So, yeah, there are going to be no bot zones.
No bot zones.
It's discrimination.
Bots have rights too.
Neo, don't discriminate.
We're joking, but this will absolutely be a battle that will play out in our lifetimes.
Yeah, actually, you're right.
Without a doubt, this will be... I mean, we might look back at this show when we're podcasting together in 2045 and just be like, well, we called it 20 years ago.
It's going to be robot rights Twitter, just like, yeah.
Well, I mean, is it a coincidence that the guy that's building the robot army also owns Twitter?
Oh, okay.
I don't think that's a forward-thinking thing.
You think it was just about Trump?
I don't think so.
It's all about the humanoid robot army in the end.
We should all be working towards it.
I mean, of course, we should all.
This podcast will be front and center in those wars, I promise you that, now that we've staked out our positions.
Okay, let's end this week on what I call notes on the coup, because we just got a deposition from Ilya Sutskever, the former OpenAI chief scientist, with
his testimony about what happened inside OpenAI before Sam Altman was fired.
So Ilya apparently had been planning the coup for about a year, put together this document.
And this is from the deposition.
The very first page says, Sam exhibits a consistent pattern of lying, undermining his execs,
and pitting his execs against one another.
And the lawyer asked Ilya, that was clearly your view at the time.
And Ilya says, correct.
I mean, it's just kind of interesting that we're now seeing that that's the document coming out.
So maybe it wasn't necessarily the effective altruists, although I'm sure they played some role in this.
But it was, you know, OpenAI's own executives that caused that firing.
And it's a pretty wild thing to read that Ilya had written down this type of thing about Sam.
What are your thoughts about this?
I think reading through all this, the part I still can't square.
I'm curious what you think.
Is this, was this done to save humanity from runaway evil AI?
Or was this just very human corporate infighting?
Like, this guy annoys me.
He's always lying.
He's stabbing me in the back.
I'm going to try to, like, go after him as well.
Like, when you read this stuff for all the talk about protecting us from runaway AI,
it just feels like the most human thing ever.
Like, it's just people are annoyed at each other. Like, everyone's worked with people like this.
I don't know, how did you read this?
Oh, totally. Yeah, it totally shifted my... I mean, you know, it was
always politics, let's just put it that way. It was always politics. I don't think there was any, like,
real fear within OpenAI that, you know, ChatGPT, I mean, it's easy to say in retrospect, but that ChatGPT
was, like, going to go self-aware and destroy the world, you know, at that point. But yeah,
it was definitely infighting. And I think one of the things that really resonated as
this stuff circulated on Twitter was that it was so poorly planned. There was, I mean, we knew about
this already, right, that it was a poorly planned coup. But like as the details come out,
here's one Twitter user: It's the worst coup ever. Planned for a year without any PR strategy.
As they say, if you go for the king, better get his head. First and foremost, the skill that any
leader needs is to be able to survive. Ilya is a great scientist and great human being, but not a
practical leader, he didn't have the skill to plot and survive and come out on top. I mean,
here's another one: Even though I don't like Sam
Fraudman, I still think it's good that Ilya failed. Somebody that has a planning ability and
theory of mind of a toddler shouldn't be in charge of AGI. I mean, it is interesting that,
like, you saw this all play out. And, like, yeah, it goes back to it. It's like, these are the people
that are supposed to be protecting us from AI gone bad. I don't know. Yeah, I think that's where
Ilya wasn't going to be our savior in controlling the humanoid robot army.
I think the, it is, again, yeah, the documentation, it's as clunky as we all imagined it to be.
And, I mean, like, going back to where we started the show today, like, this is the kind of stuff that's just been going on at this company for so long.
And everyone has just been waiting for them to mature into a different type of organization,
and it doesn't seem like it's happened yet.
All right. And of course, my favorite part is when the lawyers bicker at each other. Here's what one attorney says: Don't raise your voice. The other one says: I'm tired of being told I talk too much, I'm talking too much. The first one replies: Well, you are. And the next one goes: Check yourself. The toddlerification of all communication, man. It's everywhere.
I still
think, though, just going, thinking back, like, Mira Murati, how many days was she CEO?
Two, like a half day.
Maybe two.
I think it was like two days.
It was like, uh, I mean, still, what an exit from that. Two days as CEO, and then go on to raise a pre-product round at a billion-dollar valuation.
That actually, yeah, everyone won. Everyone won. We're going to be bailing them out in the end
anyway, so...
Exactly.
Start filing your taxes and preparing.
We are going to all be funding the lifestyles of every AI leader out there.
Okay, one more last little fun thing before we leave.
The New Yorker had a story on the data center buildout, and apparently the reporter spoke
with this farmer, and they go, I asked the farmer if he ever used AI, and the farmer said,
I use Claude.
Google sucks now.
I mean, obviously it's not every farmer, but I just think that that was like a pretty interesting line that shows just how much people are using AI and how this has proliferated well beyond Silicon Valley.
What do you think?
No, no, I think that's a good place to kind of end this because for all of our talk, I mean, you alluded to it multiple times.
I firmly believe it.
This is one of the most incredible technologies that we will experience in our lifetime.
Like, it's everywhere. Like, there's no doubt that everyone at every level, my parents, you know,
it's already being used, at least at the kind of base level of promise. So
it's something that will be amazing and continue to evolve and change, and there'll be a lot of
money and market capitalization realized. I think that never leaves my head in all of this.
It's just how we get there and what it looks like is definitely, it's going to be quite a story.
And therefore, we must save it at any cost.
At any cost, yes.
And give me my humanoid robot army.
2028, we're running.
We're in.
We're in.
We're in.
We'll figure out the rest of the platform later.
But wait, are we pro or anti-humanoid robots?
I can't tell.
No, no, we're now pro as long as we're in control.
And we are the only ones who can save everyone.
I think we won't get any votes.
I think we'll get a couple.
I mean, Eric Adams got 5,000 votes.
Oh, yeah, if Eric can.
I mean, some listeners out there will give us at least something for it.
All right, Ranjan, good stuff, as always.
Thank you guys for coming on.
See you next week.
All right, everybody.
Thank you for listening. Next Wednesday,
Mustafa Suleyman will be on to talk about why he thinks LLMs may be the route
to superintelligence after all.
And then Ranjan and I will be back on Friday.
Thanks for listening and we'll see you next time on Big Technology Podcast.
