Chapo Trap House - 987 - May I Meet You? feat. Ed Zitron (11/17/25)
Episode Date: November 18, 2025
This episode isn’t just great—it’ll revolutionize what you think of AI. Ed Zitron is back in the trap to discuss his recent reporting on just how much money companies like OpenAI have (and how much they’re burning). We talk about the byzantine financing of generative AI and LLMs, the tech world’s dream of recreating the postwar boom with a technology primarily used to make the world’s least legal porn, and the proliferation of data centers across the country. Plus: Bill Ackman teaches you how to pick up girls. Get your Ed at: Better Offline podcast: https://linktr.ee/betteroffline Where’s Your Ed At newsletter: https://www.wheresyoured.at/ Twitter/X: https://x.com/edzitron Bluesky: https://bsky.app/profile/edzitron.com
Transcript
All I'm going to be is a trouble
All I'm going to be is a joke
AI, it isn't just two letters, or a different movie about a robot,
who had to wait 200 years to have sex.
But will investors be waiting 200 years for their capital to have sex?
That is the question that we,
have Ed Zitron on to discuss today. Ed, do you think that the robot in Bicentennial Man
had sex? I understand this is the subject of a recent article you wrote with the Financial
Times. Well, Bryce Elder and I at the Financial Times were discussing this. And we could not,
we spoke to many sources and no one would agree. Half of them believed he fucked all the time.
Some of them believed he was actually asexual. And which meant, which meant he would have sex,
but only with one specific partner,
which I think is not seeing the movie,
but I think that's the plot of the movie.
Yeah, well, actually, Michael Burry closed his fund
because he couldn't figure out this question.
It's like, it's vexed investors pretty much,
since Bicentennial Man came out.
I think the one thing I can say with absolute certainty here
is that the Jude Law character from Steven Spielberg's AI
definitely had sex.
Yes.
That was sort of like, that was a plot point in the movie.
100%.
Yeah.
Actually, we have Ed on today to discuss kind of a bombshell regarding OpenAI's capital expenditures and potential operating costs.
Ed, could you explain what you found looking over the financial data that OpenAI has on offer to the public?
Yeah.
So specifically it's opex and revenue.
So in documents I viewed, and the FT has looked over
as well because they did a story on it as well, I found that OpenAI spent, through September,
the end of, sorry, end of September this year from the beginning of the year,
$8.67 billion on inference, which is just the thing for creating the output. Inference is just
the process of spitting out, I don't know, Scooby-Doo with a big old pair of tits, or a fanfic
about that, which is most, I think most of what OpenAI's ChatGPT is used for, based on the data
I've seen. But also, on top of this, I was able to see Microsoft's revenue share with OpenAI. So they
get 20% of all of OpenAI's revenue. And through the first three quarters of this year, OpenAI has made
$4.329 billion in revenue. Now, it might be a little bit more based on the share they get from Bing
and the share of Microsoft's models. Microsoft has the exclusive rights to sell OpenAI's models,
but basically this company's projected to make $13 billion this year.
I don't fucking know how they're going to do so
because they'd have to make like $8.something billion of revenue
in the final quarter of this year.
But putting that all aside,
all of their revenue is being eaten by their inference costs.
And that's just to be clear,
the cost to just create the outputs.
On top of that, they have thousands of highly paid staff,
they have billions of dollars of training costs.
I mean, they have real estate and their data and legal
fees. So this is just massive amounts more. And indeed, there was reporting that came out earlier
in the year that said they'd only spent two and a half billion dollars on inference through the first
half of the year. It's bullshit. It's just not, it's not what I've seen. And it's really worrying
because this company, I don't know, maybe there's something missing. I'm not sure what it could
be. But if they've only made this much money and they spent this much money, I don't see how
Open AI survives the next year. And I mean, I've kind of been on this train for a while.
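The numbers Ed just cited can be sanity-checked with some quick arithmetic (figures as quoted above; this is a rough back-of-envelope sketch, not audited financials):

```python
# Back-of-envelope on the figures quoted above (billions of USD, Jan-Sep 2025).
inference_cost = 8.67    # OpenAI's inference spend through September, as cited
revenue_q1_q3 = 4.329    # revenue through the first three quarters, as cited
projection = 13.0        # full-year revenue projection, as cited

# Inference alone costs roughly twice the revenue earned in the same period.
print(f"inference / revenue = {inference_cost / revenue_q1_q3:.2f}x")  # 2.00x

# Hitting the projection would take "$8.something billion" in Q4 alone --
# about double everything earned in the first nine months combined.
print(f"Q4 revenue needed: ${projection - revenue_q1_q3:.3f}B")  # $8.671B
```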
From the charts that I saw in the FT piece, it showed inference costs quadrupling in the past year. And I think if you extend it to 18 months, it, whatever the word for multiplying by eight is. I'm not even going to attempt that one. But obviously it's kind of a black box, but what accounts for the almost exponential
growth in inference costs? So what it is is, there was a common myth that was busted
by MIT Technology Review in the middle of the year. People used to think that all of the costs of
these data centers came from training from shoving data into these models. I'm sure a listener
will hate me for that simplified version, but whatever. Nevertheless, according to MIT Technology
Review, 80 to 90% of data center expenditures are on inference, just creating outputs. Now, the reason
that's increased is OpenAI and all these companies have hit the point of diminishing
returns with these models, where they can't just train them more. So they have to do something
called reasoning, which is where, and I'm doing air quotes, they think, which means instead of
when they generate an output just going, okay, I will write this, it considers the steps. So you say,
okay, generate me a table of the data that I'm feeding you. And it says, okay, they want a table
of something. I will do this, this and this. Now, that thinking process, that kind of back of house
breaking down a task into component elements, that is more token intensive. It's doing, it's basically
generating an output to plan out how to generate an output. This is called
test-time compute, and it's massively increasing the costs of doing, in many cases, the same thing,
but it's the only way they're really seeing any kind of improvement. And when I say improvement,
that's based on benchmarks which are rigged for large language models, because these things don't think,
they don't have consciousness, they can't do most of the things that humans do. And as a result,
also, they're probabilistic. So each time they generate something, they're guessing based on
probability. So you've just got a machine that computes a bunch to put an output together. And like
GPT-5, the new version of ChatGPT, came out in August, I think. Very reasoning focused. So everyone is just
throwing more compute power at every output. So the costs are going to grow exponentially. And that's for
free users, that's for paid users, doesn't matter who.
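The cost mechanics Ed is describing can be sketched as a toy per-token billing model. Everything here, the token counts, the price, the function name, is an illustrative assumption rather than OpenAI's actual pricing; the point is only that hidden reasoning tokens multiply the cost of the same visible answer:

```python
# Toy model of why "reasoning" / test-time compute raises inference costs:
# billing is per token, and a reasoning model generates hidden planning
# tokens on top of the visible answer. All numbers are made up for illustration.

def request_cost(prompt_tokens, output_tokens, reasoning_tokens=0,
                 price_per_token=0.00001):
    """Every generated token is paid for, including the hidden 'thinking'."""
    return (prompt_tokens + output_tokens + reasoning_tokens) * price_per_token

# Same question, same visible answer...
plain = request_cost(prompt_tokens=200, output_tokens=300)
# ...but the reasoning model first plans the task step by step behind the scenes.
reasoned = request_cost(prompt_tokens=200, output_tokens=300, reasoning_tokens=2000)

print(f"plain:    ${plain:.4f}")    # $0.0050
print(f"reasoned: ${reasoned:.4f}")  # $0.0250 -- 5x the cost for the same output

# Aside: the growth rate mentioned earlier -- quadrupling in 12 months
# compounds to 4 ** (18 / 12) = 8x over 18 months (the word is "octupling").
print(4 ** 1.5)  # 8.0
```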
Ed, I have a question about the concept of inference.
Like, you're saying Open AI is spending like an astonishing amount of money, like,
in the billions of dollars.
And you said, like, on something, on inference, which is the creation of the thing that
they're supposed to be selling, right?
It's the creation of the output.
It's where the model creates an output.
That's a very simplified way of putting it.
There's a whole technical thing.
But when inference happens is basically the machine creating the output.
that you see. So if you say chat GPT, generate me, I don't know, a thousand words of a story about
this, it then, through inference, creates the output that gives you the text. So when you ask chat
GPT something, the answer it spits back, like how it generates that is that's where the money's
going. Correct. Like the thing is like, that's hard for me to understand is like the thing that
they're selling already exists. So like, why are they spending, like, where is all this money going? Is it to
make it better? Is it to make it like the kind of thinking machine that they're selling it
as? Well, let me rephrase it. So inference is not a process where it's, when a large
language model generates an answer, it isn't looking at a database of information. It's not like it's
got every fact. The reason that these things hallucinate is because each time they're generating
everything new. And the more training data they have, the more likely they are to give a right
answer. But that being said, the process of reasoning, which is, just as a reminder,
It breaks down the task into component elements and then spits out an output.
That process, the more reasoning it does, the more likely it is to hallucinate.
So the reason it's getting more expensive is it's doing more computing, in many cases it's getting things wrong, but it's doing more compute-intensive stuff.
So the cost of just creating outputs is increasing.
And Altman himself said, not long after GPT-5 came out, that now way more users are being exposed to reasoning models, which just means everyone trying to generate something
from ChatGPT is now, they're using more compute to generate it.
And they will claim that they have this router model.
There's the whole thing with GPT-5, this router model that makes things more efficient.
I actually reported that it's the literal opposite.
By using the router model, it's actually, because they have to do something called
the system prompt, it's the whole thing.
They are basically doing more work to generate each thing and sending them to more compute
intensive reasoning models.
Does that make sense?
I can break it down.
Yeah, yeah, I mean, to add on to that a little bit, this is like a super abridged, very dumbed down to the point where I can sort of understand it explanation, but every time you ask a question to ChatGPT, it associates, like, what the query or input is with, like, a set of math equations.
And the training is associating like different, you know, different sets of answers to math equations.
And the more complex those inputs become, and they become exponentially more complex with reasoning because they're breaking it down into components,
they're doing more equations, and that takes more computing power.
It takes more, like, actual power.
I think you can just simplify it to: they use compute. These are
transformer models, and the whole root of it is it's comparing two sets of data and deciding which
one is the most likely. With reasoning, it's using that process to generate a plan of action to create
an output. And doing that can sometimes give more accurate answers. But every time you use reasoning,
it uses more compute. So you're kind of there, it's really annoying. It's complex. It's complex because
large language models are kind of crazy, like, if they weren't done in this way, they'd be kind
of impressive, but because of what they're claiming they can do, they're not.
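The "probabilistic" point from this exchange can be illustrated with a toy next-token sampler. The vocabulary and probabilities below are invented for illustration; real models sample over tens of thousands of tokens, but the principle, a weighted guess rather than a database lookup, is the same:

```python
import random

random.seed(0)  # sampling is random; seeded so the demo is repeatable

# A made-up distribution for the next token after "The capital of France is".
next_token_probs = {
    " Paris": 0.90,
    " Lyon": 0.06,
    " banana": 0.04,  # low-probability junk is never quite zero -- hallucination
}

def sample_token(probs):
    """Pick one token weighted by probability: a fresh guess, not a lookup."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Mostly right, occasionally wrong -- and re-rolled on every single request,
# which is why the same prompt can hallucinate on one run and not the next.
samples = [sample_token(next_token_probs) for _ in range(1000)]
print(samples.count(" Paris") / 1000)  # roughly 0.9
```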
Well, I mean, that is sort of the rub with them.
They're incredibly impressive for pretty much none of the things they're advertised for,
but for a lot of, like, non-consumer uses.
Last time you were on, I talked about the example of people using them for, you know,
to develop antibiotics that overcome antibiotic resistant bacteria and things like that.
they're of course marketed as, like, oh, you can make your own sitcom where, like,
Andrew Schultz is part of the civil rights movement, and they're terrible at that. But, like, all the
things that, like, none of us would use them for, it is, like, kind of exciting. Well, the thing is
as well, it's like, was the antibiotic example a transformer-based model? What they've done with large
language models is they've conflated them with all machine learning. So all of the useful shit in
AI, they're like, oh, that's all of the AI we're talking about. But the thing that, the thing that
they're building the data center for is things like chat GPT or worse. Well, I guess you're like the
problem for a company like open AI based on what you're telling me here is that like usually as
technology advances, I mean, or at least I was led to believe that the better it gets at doing
something, the cheaper it will be for it to perform that function. And it seems like open AI has
a problem now where it's like to get these large language models or to get the product that
they're selling to do what it's promised or even to just continue working, it's costing them
exponentially more and more money to keep it going, which seems like kind of a problem for a,
you know, a firm that's, you know, supposed to be making money. So that's the thing. It's,
it's exactly that. It's the, usually when tech scales and the reason tech has had the valuations
they've had is, as it scales, it gets cheaper. Cloud computing. Yes, it's expensive for someone like
Facebook, a very big, like a very big application like that, it will be expensive to run, but as
it scales, the value kind of expands while the costs stay manageable. With this, the larger the
companies get, and the more that people do with them, the more expensive they get. Indeed,
I've never seen a company in history where your power users are literally costing you more
than your average users, and they're not making you much more either. There's a company called
Augment Code, an AI powered coding company, where they had a 250 buck a month customer,
who was spending $15,000 a month in costs.
And that's because you can't actually manage these.
It's fucking insane.
It's everywhere.
There is a ranking.
You couldn't run a podcast like that.
I'll tell you that much.
Well, there's this thing called,
so Anthropic,
they have this thing called Claude Code,
an AI coding thing.
There is a thing called Viberank
where it's just people
who have found ways to spend more
because you can measure your costs,
even though you pay like 200 bucks a month.
There's someone spending 50 grand a month of costs.
They're just like, fuck it, I should do it.
And they don't have a way of stopping it.
They have no way of stopping it because if they could, they would.
But with someone like OpenAI as well, they have the problem with the GPUs.
Because, and I forget what his first name is, something Rubenstein.
He's, like, employee number four at Nvidia, mate, at this point, where it's, while these new
GPUs are getting more powerful and they're getting more efficient, that doesn't mean they're
drawing less power.
So they might be able to do more, more efficiently.
but you're not actually saving any money with Blackwell.
And The Information reported that those GB200 racks,
here's a little business sense for you.
The ones that Open AI has put 50,000 of per building in Abilene, Texas,
paid for by Larry Ellison and Oracle.
Those things have a negative 100% gross profit margin.
They don't teach you that shit in business school.
Negative 100%.
That's the good shit.
So everyone's losing money.
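The two margin figures in this stretch can be run through the standard gross-margin formula (figures as quoted in the conversation; the function is just the textbook definition):

```python
# Gross margin on the examples above: negative means every sale loses money.

def gross_margin(revenue, cost):
    """Gross margin as a fraction of revenue."""
    return (revenue - cost) / revenue

# Augment Code's $250/month customer who cost $15,000/month to serve:
print(f"{gross_margin(250, 15_000):.0%}")  # -5900%

# A "negative 100% gross profit margin" on the GB200 racks means the compute
# costs twice what it earns (illustrated here with placeholder revenue of 100):
print(f"{gross_margin(100, 200):.0%}")  # -100%
```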
So, like, what is, like, from the industry itself, from Altman and everyone else, what is, like, the conventional wisdom on this?
Is there, is it like some vague thing where, like, eventually we'll reach some point where it's so good that these processes are almost, like, automatic and inference costs will, like, will just collapse, or what?
Is there any, do they address it at all?
So they're doing a deal with Broadcom, a chip maker who doesn't have the best rep if you look back, but nevertheless, they're building inference chips with them.
And they were at one point hinting that they would make things cheaper, but they've stopped saying that.
And The Information even reported that those will have modest gains, and that's the good chip.
So they made these chips with Broadcom that they're also going to build 10 gigawatts of data centers for.
Not really sure how that's going to happen.
But there really isn't an answer here.
Because the media has not really asked Open AI these questions up front, no one has really managed to get an answer.
They don't have one.
They are trying to do, they're playing the Sam Altman,
they're playing the hits.
They're just like, we're going to build bigger, get money, so big, and then we'll do ads.
The thing is with ads is, and this is everyone's answer here, Open AI will just do ads.
They've got 800 million weekly active users, even though that number is slightly questionable.
They'll do ads.
The problem is no one has been able to do ads
with LLMs. Nobody.
Perplexity had ads in 2024.
They made $20,000
and their ad chief left earlier
this year. 20 grand in
24 on ads. They're an AI search
engine and they couldn't do it.
20,000 dollars?
Like, that's
like 20 grand.
Like one ad read on
Cum Town.
Yeah.
I would love to hear it.
I'd love to hear a Cum Town read of a
Perplexity ad.
I'll be thinking about that one all day
But that's the thing
And people are like
Oh, they brought in Fiji Simo
The former CEO of Instacart
Head of Facebook apps
They brought her in
She's the CEO of Applications
She's obviously, by the way,
The Fall Girl
They're obviously going to pin
OpenAI's failings on her
And claim she fucked up
It's Altman
But they think
Oh we'll do ads
The thing is with an LLM
With ads
The whole reason you buy ads
Is you can do
Accurate placement
and accurate attribution.
So you can say,
I know where this will go
or where it won't go,
and I know that I'll be able
to track what it does.
How the fuck do you do that
with large language models?
Nobody's been able to crack it.
Nobody, not perplexity.
Perplexity's ad chief, as
mentioned, left, I think, in August.
It's like,
if an AI-powered search engine
can't do ads,
how is Open AI going to do it?
And people say,
well, Open AI has the smartest people
in the world.
Well, you know what?
When they die,
I'll say what
Karl Urban said in Doom:
if they're so smart, why are they so dead?
Because they've got all these smart fucking people.
And it's like they haven't been able to work this shit out.
The answer is, I don't think anyone has a plan.
I think everyone wants to look at this as they look at the wider world and say,
there's a grand strategy, this big conspiracy, they're going to do this and this.
No, they're not.
Government contracts, they haven't got shit.
$200 million with the Department of Defense that everybody got.
Oh, the government can just feed the money to
this fucking company? They're saying they are, they've signed a contract with Oracle to
spend $300 billion in five years.
They, they don't have the money.
They won't have the money.
There's no, I think, um, Microsoft's operating expenses like over $200 billion a year.
Maybe I'm fudging that.
But nevertheless, Microsoft's very profitable.
There's no company like OpenAI, because no company has been pumped up this
big. It's the world's largest failson. All that, like, this is the greatest collection
of smart people in one company ever. It reminds me of Long-Term Capital Management.
The, the, uh, one of the first hedge funds to need a bailout. Oh, but, so that
is another thing I wanted to get into. You talked extensively about
how weird some of OpenAI's deals are.
Obviously, there's this Microsoft thing, where it's kind of a black box figuring out, you know, what revenue they're taking out of it, a lot of it is sort of Byzantine, money being moved around.
The AMD deal where, correct me if I'm wrong, but it gives them an option to buy an uncertain amount of shares
for a penny each, like millions of shares at a time,
if, per some efficiency marker?
It's such a weird deal.
So all three, so they have three big deals like that.
They've got 10 billion, sorry, 10 gigawatts of data centers they have to build for
Nvidia for $100 billion.
And just to be clear, this has been misreported everywhere.
Open AI has not been given $100 billion by Nvidia, nor have they been given anything yet.
They might get $10 billion soon, but every gigawatt,
build, they get more. The AMD deal is like six something gigawatts and it's every successive
gigawatt, and they're meant to build the first one by next year. It takes two and a half years
and like 40, 50 billion dollars per gigawatt. I'm not sure how they're going to do that.
But they, it's based on how many gigawatts they build and also AMD's share price. Lisa Su actually
got a pretty good deal on this. She managed to get the stock bump, but without really risking it.
It's still going to suck when OpenAI pops its clogs. But nevertheless,
it's, like, that one is really multifaceted.
Like, you have to, like, they have to successfully do the gigawatts and then the share
price must increase by a certain amount in a certain time period.
Only then can OpenAI buy part of the tranches.
It's a very weird deal.
The Broadcom one is just, I think, successive gigawatts.
But that one's really funny.
The reason I want to bring this up is Satya Nadella revealed, because OpenAI has to share all
their IP with Microsoft.
Apparently, Microsoft has all the details of their chips from Broadcom.
So OpenAI, put all this money in with Broadcom to build these custom chips.
Microsoft has it now.
Just fuck it.
The business geniuses.
Like, it's just, it's fuckware-on-fuckware action.
Ed, like, you mentioned these, like, sort of deals with Nvidia for OpenAI to
build, like, 10 gigawatts of data centers, and, like, in terms of, like,
these data centers, could you just give me, like, an idea about how much 10 gigawatts of electricity is?
Well, first of all, the concept of a gigawatt data center is also very new. Also, there is no such
thing as a gigawatt data center. It's usually buildings that connect together through high-speed
networking, with a company called Mellanox that was acquired by Nvidia in 2019. Nevertheless,
a gigawatt of data, I don't have the numbers of comparisons to cities, but the one I can think of
off the top of my head is, I think New York's, the combined power of the one in Queens, is like
one point something gigawatts. So the entire, like, two-thirds of New York's power is like 1.2 gigawatts.
Someone's going to flay me in the comments for this one, but it's something like that.
It's considerably more use of electricity than the largest city in the world.
Exactly, with millions of people. But to give you some scale of this, so there's an important term
IT load. No IT loads refused. I'm just going to say it. So IT load is what they're talking about when they say a gigawatt data center. So that means, per gigawatt of IT load, they need like 1.3 to 1.4 gigawatts of power. So when he says 10 gigawatts of IT load, they need 14 or 15 gigawatts of actual power.
So this is really funny. Out in Abilene, Texas, they're building 1.2 gigawatts of data center capacity. They have 200 megawatts of power. They're never going to turn that fucking thing on. They don't have
enough. But the idea of 10 gigawatts of data centers is just insane. Just no one's, I don't think
anyone's actually successfully built a new one. These companies are not generating the electricity to do
this. They're simply needing that level of electricity. And I know like a lot has been talked about
how like the water use of these data centers may be overstated. I mean, as far as I'm concerned,
even one ounce of freshwater going towards any of this shit is a calamitous waste of resources.
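The IT-load arithmetic Ed walks through can be written out (the 1.3 to 1.4x overhead factor and the site figures are as quoted in the episode; the function itself is just multiplication):

```python
# "IT load" vs. actual facility power: cooling, power conversion, and other
# overhead mean a data center draws more from the grid than its chips consume.

def facility_power_gw(it_load_gw, overhead):
    """Grid power needed to deliver a given IT load, at the quoted overhead."""
    return it_load_gw * overhead

# "10 gigawatts of IT load" at a 1.3-1.4x overhead factor:
print(f"{facility_power_gw(10, 1.3):.1f} - {facility_power_gw(10, 1.4):.1f} GW")  # 13.0 - 14.0 GW

# Abilene as quoted: 1.2 GW of planned capacity vs. 200 MW of available power.
planned_gw, available_gw = 1.2, 0.2
print(f"shortfall: {planned_gw - available_gw:.1f} GW")  # 1.0 GW
```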
But, like, in terms of, like, are they going to be paying the government,
or, like, the state of Texas, for the excess stress they're going to be putting on the power grid?
Or rather, are these states
going to be paying them for the privilege of using a, you know, New York City-sized amount of
electricity to generate images of, you know, Donkey Kong with tits, or me as an anime character?
Yes. So, they're not paying. You guys, you guys
don't need to worry about that. All these companies claim to have some sort of, oh, we're going to
bring jobs. These data centers don't create a ton of jobs. It's a lot of talk. No one works in
them. It's just like a warehouse with computers in it. But even then, when it comes to the
construction jobs, it's usually people flown in. It really is like the West pillaging
the South in many cases. Like in Texas, for the Stargate Abilene, they are bringing in
these massive horrifying old gas turbine engines. It's a bunch of specialized talent. I did a whole
dig into all of the construction firms. There's so many of them. It's very clearly, by the way,
just construction firms wheeling in and being like, yeah, I need a hundred bazillion, mate. It's like
the classic builder thing. It'd be like, oh, yeah, it's going to take another six months,
mate, sorry, didn't see that. And they all require specialized cooling. But basically in Texas,
you're really going to want the double platinum service. Oh, it really is.
crowding, you know.
So much shit like that.
Oh, the cooling, you're going to need way more cooling than that, mate.
No, it's, with the local governments, they love it, like Abilene especially, it's just like,
oh, we've given you every tax abatement we can find.
But what they're finding is, crony capitalism is really, really good at free money, really good
free land.
You can't beat physics and you can't rush power.
You can't just be like, we've built a power station and now we've connected the power to
the power thingy, and now power's
happening. You have to do actual load testing. There is a limit. Sorry, limit. There's a, uh, what's it
called? A lack of the electrical-grade steel you need, a lack of the transformers. A massive
drought of the talent to build these things. So even if these fuck nuts had the ability to build them,
the people to build them don't exist. And if you rush power, you die. It's not even the people
you're killing with the gas turbines. You kill the people building it. So putting all of that
aside, even if they fix this, the amount of money and time, they need to have this stuff built
next year, middle of next year, pretty much. They're not going to, I don't even think they
get any of this built, but like the majority of it built by the end of next year. Just a small
correction on data center jobs. There is actually a study that came out. It turns out that
lot lizards in data center parking lots make 17% more than regular lot lizards.
But, Ed, when you say that, like, they have to have this stuff built by next year,
like, in what sense, to meet their projections, to be profitable, like, to do the thing that
they're promising to do? Like, why do they need all this electricity?
So I'm remiss here. I should have defined who they was. So Open AI needs this built because
even though Open AI burns billions, they are running up against capacity issues. Because,
like any cloud storage thing or any cloud service, I should say, they run into, there are peaks
and troughs. There are times when they release new product, they get a bunch of new attention
like Sora. They need the capacity and they keep running up against the limits of it because
their shit is so compute intensive. Oracle needs this shit built because the middle of 2026 is the
beginning of Oracle's fiscal year 2027, which is when the fucknuts at OpenAI need to start paying them.
Oracle has mortgaged their future on this.
The private equity firms that are building these data centers also need the things to be built
so that they can get paid so that they can start paying back the debt.
So everyone here, I mean, there's not going to be a bailout or anything,
but all of these companies, CoreWeave right now,
this big AI data center company,
they're not even a data center company,
they're a data center leasing company that rents compute to people,
where all of their customers are either Nvidia or OpenAI.
I can get into more detail, but they in their last, yeah, go on.
So, like, I was going to say, like, given what you're saying, like, how should we view,
I saw, like, a couple weeks ago Sam Altman made a public statement about being too big to fail,
and sort of, like, winking at the idea that perhaps OpenAI is, like, well, I would never ask for
a government bailout, but sometimes when you get so big, the government is the only one
who can, I mean, you know the comments I'm referring to, and, like, how should we view those
comments?
So the way to look at OpenAI is they are not too big
to fail, they are too small to pull apart into enough pieces for enough people to eat. OpenAI has
promised people $1.4 trillion worth of compute deals. With CoreWeave, with Amazon, they just
signed a $38 billion deal. Microsoft, $250 billion of, like, upcoming Azure spend, $300 billion
with Oracle, all of this stuff. Too big to fail in this case would be that the government just
pays OpenAI's bills. And Trump doesn't, do I mean, Sammy? He doesn't like Trump, Trump's not going to like
Altman? You think Trump's going to
fucking bail him out? Nasty,
Clemmy Sammy, don't like him? But he's not
going to bail him out. But it's
something
where Altman would love a bailout, but he
even said that what they meant was
they wanted the government to back loans
for data centers. And then they had
sent a letter to, he's a fucking liar. He was like,
we didn't ask for anything like that. They
sent a letter to the government asking for the Chips
Act to cover data centers.
Putting all that aside,
open AI can't be bailed out. It would be
like bailing out Enron. It's something where Open AI's failure would be a symbolic hit. It would
break the myth of the AI trade. But Open AI as an economic entity, it's not actually that huge.
They've got like $4.3 billion of revenue through the end of Q3. I mean, they're spending,
they spent $12.4 billion on inference since the beginning of 2024. Like, it's a lot of money for
Microsoft, but their death doesn't really fuck the economy up other than the symbol. But if the
government backstops open AI, that doesn't fix AI at all. The problem is AI has been sold as this
magical tool without ever having the proof of revenue. The only companies really making money on
this are Nvidia and construction firms, really. Yeah, I mean, it's impossible to really get into
anyone's head, especially in this case. But I am curious about like, you know, what are these
companies getting out of these deals specifically? With Microsoft, just from the outside, it does
sort of look like, I don't know, another nation-state-sized company moving money around in a way
that's perhaps tax advantageous or otherwise makes it look like they're getting something
out of this that they're not, but from the more like policy end of things, it seems to just
be wishful thinking.
This idea that like, okay, in 30 years, we'll build more nuclear power plants than in the
previous 80 years combined.
And then finally, after trillions of dollars spent, it will have, like, a permanent 4% uptick
in productivity. And that will be, like, the great economic engine of this 40-year period.
It's a myth. It's just everyone thought this would turn into something because we had this
scaling laws paper from 2020, and the jump between GPT-3.5 and GPT-4 felt so big that people
thought, oh, this is going to keep jumping. But with Microsoft, by the way, they have a very obvious
thing. They get to feed revenue to themselves, boost growth, and they own all of OpenAI's IP and
research. When OpenAI dies, Microsoft just goes, now we have all of this, this is ours, because they do. Microsoft makes out of this fine. Oracle, Oracle is insane. Oracle has taken on like $56 billion of debt to build data centers for OpenAI, a company that has never had that much money, even if, I guess, you combined all their funding and revenue. But nevertheless, OpenAI can't afford to pay Oracle for the data centers. And Oracle doesn't have the data centers.
So Oracle has mortgaged its future on GPUs because of OpenAI, in a way, and they claim they have other customers, but nowhere near close, in a way that I think threatens the material health of that company. I don't think they'll die, but let me tell you this. The CEO, Safra Catz, the former CEO, she retired as CEO of Oracle a few weeks after signing that $300 billion deal with OpenAI. Do you think that she did that because she
thought it would go well? Because I'm kind of like, you're like, oh, it's the beginning of this
new era, and I'm out? This is the most important thing. But it really is just, it is wishful thinking.
It is everyone thought this would be the new economic growth engine. And because everything
in tech is run by management consultants, all they care about, it's the Rot Economy. It's
all growth at all costs. It's number go up. So much big, so much money, so much number,
number so big, without looking, and you'll notice, other than Microsoft, who has only talked about
their AI revenue twice and stopped in January. Nobody in these public, in the hyperscalers,
in the Mag 7, other than Nvidia, has talked about AI revenue. Nobody, not one of them. So it's
not like when the bubble pops that they can point at this substantive business, that they can
say AI is this much money. They can't do that because it's small. Amazon spent $116 billion in
capital expenditures this year and they haven't talked about their AI revenue. It's astonishing.
Well, I guess we'd like sort of a broader question. I know I'll probably be misstating this, but I just saw one of these factoids the other day that seemed to imply that, like, 90% of the growth in the current U.S. economy is from investment in AI, which is a staggering number if it's true. And, like, my interaction with this is basically, like, I don't really use ChatGPT for anything. I don't really have any interaction with AI. But when I watch TV, slowly but surely over
the last year, year and a half, every commercial I've seen on TV for whatever product they're
selling is saying it's now enhanced by AI-provided insights. And these are usually entreating you
to gamble when you're watching a sports game. So like, I'm thinking like, wow, this is,
this is a great model for the future of our economy? So I guess like the broad question is,
is AI a bubble? Or rather, what is the argument that AI isn't a bubble? Like, that to me is more
interesting. So here's the fun thing. That economic growth that you're talking about, and that's
true, it made, I think for the first half of this year, AI data center development specifically,
just building data centers had more economic impact than all consumer spending combined,
which is not good. Now, this is not AI revenue. This is not people paying for AI services.
this is nothing to do, in fact, with AI, as many of those data centers haven't even fucking
started building yet. This is just building data centers to put GPUs in and buying GPUs
from one vendor, Nvidia. This is literally just building things. It's the, I think it may be
one of the largest construction eras ever. It's equivalent to the post-1996 Telecommunications Act
free-for-all when everyone was building fiber, except the difference is that this is not
useful like fiber. These GPUs are specialized for a bunch of parallel processing, which is not
useful in general purpose computing. And so it would be like the equivalent of if like when they
built the Hoover Dam during the New Deal, the Hoover Dam didn't generate electricity. It just
used it. Yes. Actually, not dissimilar. It's one of the most bonkers things ever. And the reason
everyone buys from Nvidia is they have this programming language, libraries, whatever you call it,
called CUDA, which is far and above the only game in town for this thing.
You can do inference on other things, but putting that aside, Nvidia is basically the single
player in this market. But what CUDA can do is good for 3D modeling, AI, crypto mining to an extent,
and, uh, some scientific research. But otherwise, these data centers, when this all pops, all it's going
to do is create a very cheap market for an asset that isn't really useful for much. You can't use
them for gaming. I don't even think they have, I was about to say, VGA out. What else, a
DisplayPort, I guess you'd say. But yeah, these things aren't useful for other things.
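A quick aside on the technical point here: these GPUs are built for workloads that split into thousands of independent operations, and a lot of everyday computing just isn't shaped that way. A toy Python sketch of that contrast, with made-up example workloads — purely illustrative, not anything from the episode:

```python
# Data-parallel workload: every cell of a matrix product is an
# independent multiply-add -- the shape of work a CUDA-style GPU,
# with thousands of small cores, is built to chew through.
def matmul(a, b):
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]  # each (i, j) cell could run on its own core

# Inherently sequential workload: each step depends on the last,
# so extra cores do nothing -- typical of general-purpose code.
def collatz_steps(n: int) -> int:
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
print(collatz_steps(27))  # 111
```

The first function gets faster the more cores you throw at it; the second doesn't, which is roughly why a warehouse of idle accelerators isn't a general-purpose computing resource.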
so you're going to have these massive half-built data centers full of things that you can't turn on
because the power isn't there
and when people try and sell them
it's going to create this thing called
I believe this is my theory
an impairment event
because the value of a GPU
will go down
and everyone's bought hundreds of billions
of these fucking things
so they're going to start
all of these public companies
are going to have to say
yeah all those things we bought
are worth less
we have to write that off on our income statement
it's going to suck so bad for them
I can't like they deserve to suffer for this
it's such a waste of time and money
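The "impairment event" he's predicting has a fairly mechanical accounting shape: assets sit on the balance sheet at cost minus depreciation, and when their market value falls below that carrying value, the gap has to be recognized as a loss. A rough sketch with invented numbers — hypothetical, not anyone's actual books or prices:

```python
# Hypothetical sketch of a GPU impairment: book value follows a
# straight-line depreciation schedule, but a write-down is forced
# when fair market value drops below the carrying value.

def carrying_value(cost: float, useful_life_years: int, years_elapsed: int) -> float:
    """Book value after straight-line depreciation."""
    annual_depreciation = cost / useful_life_years
    return max(cost - annual_depreciation * years_elapsed, 0.0)

def impairment_charge(book_value: float, fair_market_value: float) -> float:
    """Loss recognized now if the market price falls below book value."""
    return max(book_value - fair_market_value, 0.0)

# A $40,000 accelerator depreciated over 5 years: after 2 years the
# books say $24,000 -- but a glutted resale market might say $6,000.
book = carrying_value(40_000, 5, 2)       # 24000.0
charge = impairment_charge(book, 6_000)   # 18000.0 hits the income statement
print(book, charge)
```

Multiply that per-unit gap across hundreds of billions of dollars of GPUs and you get the write-downs he's describing.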
It's been obtrusive, to your point, Will, it's on every commercial.
It's ruining every app.
It's sickening.
It's, you, you can't even, like, use Chrome without them opening a new tab for you that says
Ask Gemini, which I think out of all of the, all of the consumer ready AI, that might be
the worst one.
Yeah, when you're using Google Docs and it's like, do you need help writing?
No.
Yeah.
No, that is, that is by far the shittiest one. I mean, it may just be bitterness because it's
forced on you more than all the other ones, but I don't, I don't think I've ever seen it
get a single thing I've asked right. I'm literally looking at a spreadsheet, because I put the
numbers we're going to talk about today up in a spreadsheet, and my cursor is just sitting
on a blank cell, and it just says, visualize this,
and has a little Gemini icon.
I can't just leave a cursor in a spot
without Google attempting to burn compute.
They're like, nah, we must use my TPUs.
We must spend, my throughput must increase.
So, um,
there's been a lot of talk about like,
Altman's seven trillion dollar figure.
To the best of your ability,
like,
what, say like someone comes along and says,
hey, I've been,
I actually,
I saved all my Beanie
Babies. I just sold them. Here's $7 trillion. What do they accomplish with $7 trillion?
In like in their best of worlds with $7 trillion and like Greg Abbott, you know, builds 50 nuclear
power plants in Texas. Well, I mean, the $7 trillion number I think was from last year and he was
he just made that number up. That was a number that he came up with and then had to walk back.
But what he was talking about was building a bunch of data centers and building specialized
compute, sorry, specialised
GPUs, so like he's doing with
Broadcom. The actual number I think
of, because he wants to build 250
gigawatts of IT load,
which I think would be
10 or something trillion
worth of data centers and power.
It's so fucking stupid. He wants
to, he claims, build
a bunch of data centers
with specialized compute chips in
them, and then
have the power for them, and
then you may be wondering what the plan is after
that. And I don't believe he has one. I don't believe that more compute is the plan. I don't think he has a
thing where he's saying, oh, we've got enough compute now, we can do this. I don't think he's going to.
Yeah, this is what perplexes me. It's like, all of this is to do what, exactly? To produce what?
I mean, like, is it what we're seeing already? Like, that's, that's kind of,
that is sort of the, the most interesting thing about it to me. Um, it's obviously it's very different
in scope and even what kind of thing it is compared to like the crypto shit of the previous
few years.
But they both have the same central promise, which is, okay, I know there have been some
false starts, but this is going to be, this is going to be like our generation's answer
to the post-war prosperity.
Right.
This is going to be the driving economic engine of the next 40 years.
This will be the reason that everyone under the age of 30 will be able to,
buy a house. This will create the new middle class. And it just at a certain point, it just is
cargo cult shit. And you know what? That's exactly it. That is that you've nailed it with the
cargo cult thing. What it is is everybody is repeating the cycles of what has been done before.
You said it earlier, well, where it's like what, what has worked before is shove a bunch of
money in the thing, hire as many people, put as much money into it, and then a business model comes out.
It's worked, but also the business model was obvious from the beginning, even with Uber,
which is, and they've burned, I think Uber's total burn was like $32 billion.
So people will say, well, Uber burned a lot of money.
Uber burned $32 billion, and they're kind of profitable now.
Amazon Web Services cost $68 billion in today's money over 10 years, so nothing close.
But nevertheless, there was always a business model.
Amazon Web Services was cash flow positive in three years as well.
But what they're doing is they've run out.
all of the technological people now. Sam Altman, not technological. Mark Zuckerberg hasn't written a line
of code since 2006. Satya Nadella, MBA. Tim Cook, MBA. Andy Jassy, MBA, replaced at Amazon Web Services
by another MBA. It's all management consultants. So all they can do is copy. So they've said,
what do we do before? Money everywhere. What do we do with the press? We spread a mantra that this
will just turn into something. When they ask, how will it do that? Say, well, it worked before.
And so everyone's just doing the thing, the cargo cult shit, where they say, oh, we'll just repeat the cycle, but we'll do it bigger this time. Never before has an era of tech, without exception, had this much money, this much attention, and, honestly, this much government help. And they're doing it without a real plan. People annoyingly quote Altman in 2019 saying, oh, yeah, we'll ask the AGI how it will get profitable. But truthfully,
I don't think they have a plan.
I was told last year by many people, well, they'll just do custom silicon, and that will make it profitable.
We have the custom silicon now.
We have it.
Why aren't they profitable?
Wait, hold on.
What is custom silicon?
So that just means custom GPUs or custom chips.
Sorry, I should have said that.
Like, so just the argument was they would make customized inference chips, and that would make it so much cheaper and the costs would come down.
We have Cerebras now.
We have Groq. Groq, very good,
different. By the way, if you want to really, you want to feel bad about yourself, go on
r/Grok, just the worst people in the world complaining about generating porno with Elon Musk's
LLM. Anyway, terrible. But, yeah, it's just myth on myth. It's, we're going to copy what worked
before. We're going to hire the smartest people, and then company will get big. And when that
didn't work, they spent more money and they hired even more people. And when that didn't work,
they just repeated it at bigger scale. Except now, because also a lot of this comes down to how the
media and analysts treat these companies. Because had analysts and the media been more aggressive
and said, hey, why are you, why are none of you fucking people talking about the revenues, then these
companies might have had to face the music earlier. But because everyone was scared and everyone
assumed that everything would work the same way it always has, we're left in this situation.
And it is going to be, I don't know if it will be apocalyptic like the GFC, I really don't.
But I think that this leads to a prolonged period of depression in tech.
And I think it permanently changes the valuation of public tech stocks because the only reason
they're doing this is they have no other ideas, no other growth ideas.
And when they're out, everyone's going to be quite mad at them.
I've never seen anything like this.
Because you brought up the example of all these other businesses, or sectors of existing
businesses, that were, like, pretty expensive. And at the time, people did make fun of that thing
with Uber, where the basic logic was, okay, we're going to lose X billion dollars a year
until we achieve this much market share, at which point we will become a profitable company.
But even, even the most ridiculous of those things, there was a product, there was a conceivable
end goal, no matter how ridiculous it may have seemed.
With this, there is no concrete explanation of, you know, how you create sustainable productivity, how you avoid just fucking decimating the job market.
While also the core technology in question is like synthesizing child pornography and making everything worse.
I have never said, like there have been bubbles before.
And there have been like, you know, pie in the sky predictions.
And there have been instances like this where it's, it is like a cohort of people under 40 going, okay, this is this will be the generation defining economic innovation.
But never one where it's this vague will also just having so many horrible effects on the world already.
And also, the scale of the burn is just different.
Like, just the $32 billion for Uber, I think it was like since 2009 at best.
Open AI, I think, like by the end of next year, if they survive that long, will have burned over $32 billion.
The, like, here's the thing, Open AI and Anthropic are basically rich kids living off either Amazon and Google or Microsoft's money.
Open AI didn't pay for any of their infrastructure.
their infrastructure costs at least $100 billion. Same with Anthropic. Project Rainier, out in Indiana, I think. That's a, what, $30, $40, $50 billion data center. The cost to just spin these fucking things up. But even with Uber, horrible company, evil company, they had a business model, and they burned all that money because they spent a shit ton on marketing and R&D on their failed autonomous cars. There were actual things you could point at, and there was an actual business model. There's never been one here.
And in fact, the subscription-based, and this is, I promise, this is a simple point.
Paying 20 bucks a month for an LLM service as a business is insane because you can't guarantee
that that person won't spend more than $20 of compute.
Because large language models are hard to quantify the costs of for both sides.
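His pricing point can be made concrete with a trivial sketch: a flat subscription caps revenue per user while leaving the compute cost uncapped. All numbers below are hypothetical, purely to show the shape of the problem:

```python
# Hypothetical unit economics of a flat-rate LLM subscription:
# fixed revenue per subscriber, variable (and hard-to-predict)
# inference cost that scales with how much they actually use it.

SUBSCRIPTION_PRICE = 20.00      # dollars per month, flat
COST_PER_MILLION_TOKENS = 5.00  # made-up blended inference cost

def monthly_margin(tokens_used: int) -> float:
    """Profit (or loss) on one subscriber for one month."""
    compute_cost = tokens_used / 1_000_000 * COST_PER_MILLION_TOKENS
    return SUBSCRIPTION_PRICE - compute_cost

# A light user is profitable; a power user is a pure loss --
# and both pay the provider the same twenty bucks.
print(monthly_margin(1_000_000))   # 15.0
print(monthly_margin(20_000_000))  # -80.0
```

The provider can't meter its way out without abandoning the flat price, and it can't predict which subscribers will be the expensive ones, which is the asymmetry he's pointing at.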
Just could you just clarify something?
When you say that, like, OpenAI, they're rich kids living off of their parents, Microsoft
and Nvidia, meaning that Microsoft and Nvidia built their infrastructure
for them? Like, what do you mean by that? Well, not Nvidia. So, Microsoft built all of the
data centers that got OpenAI started. Okay. Their entire, they're, I mean, Microsoft paid
CoreWeave for OpenAI. Microsoft's own infrastructure, tons of it is OpenAI. They have an entire
data center. And that's why they have this revenue sharing deal with them where they get 20%.
That was part of it. Okay. With Anthropic, they have the same thing with Google and they have the same thing
with Amazon. I actually had a story that came out a few weeks ago where Anthropic paid Amazon,
I think through the end of Q3, 2025, $2.66 billion just on Amazon Web Services. So that's, they
didn't have to pay for their data centers. Their data centers were built for them. Now OpenAI
kind of has to pay for them, but not really sure how that's going to work out. But even then,
Oracle is paying to build all of the data centers for their compute deal. So it's not like these
companies have had to build anything. They're just rich kids. They just, they are literally just
living off their parents' money until they die. There is no plan. There's no plan.
When you're talking about like living off your, your parents' money, okay, so like, if I'm
floating my fail-son kid and they keep failing, you know, as a parent, you want to keep supporting
them. You want to keep sort of putting the best spin on their, you know, lack of success or whatever.
And you have a quote in one of your recent pieces, and I hope you give me the context here,
but you're quoting like an investor who says of Altman and like the sort of continued exaggerated
claims that they keep making. The quote here is, some researchers are trying to take advantage
of high investor interest in AI. They have told some investors that growing concerns regarding the
costs and benefits of AI have prompted them to raise a lot of money now rather than wait and
take a risk, risk a shift in the capital markets according to people who have talked to them.
Could you explain that quote a little bit further?
That one is awesome.
So that is saying that startups are raising money now, knowing that there are questions
around their ability to do business in a profitable manner.
Right.
And they're raising the money now before investors get smart enough.
Risk a shift in the capital markets.
Before investors go, wait, do you have no plan to make more money than you spend?
What I love is that's, I think that's a quote from an investor who,
did not realize what they were saying.
It was just like, yeah, it's, they're raising money now before we work out.
They don't have a business model, which is why we funded them.
Mm-hmm.
And this is the thing.
I think the greatest innovation of the large language model era is we've realized how many
fuckwits there are.
It's like anyone who is just insanely impressed by chat GPT, you're like, oh, oh, okay.
Yeah.
Oh, you learned physics in ChatGPT.
Sure, mate.
Yeah.
Okay. Oh, you gave Cursor $2 billion. Yeah, you're real smart. It's, you see, every CEO is like,
AI will do this, and then they go on to describe something that large language models can't do. You're like,
fuckwit. Well, the impression I get from this is, like you said, all these guys are MBAs,
they're management consultants. And so that's why, like, back to my point about every ad on TV having
some AI hook, no matter how asinine the product is or how little the product needs it, is, like,
I just think these guys get in a room, and they just hear the word AI, and, like, nobody really
knows what it means, but they just think money, and they're like, oh, well, we got to have AI now.
Everyone else says AI. We got to have it. We got to have it. But, like, what does it do? How is it
going to make us money? Does this product even need an AI component? Doesn't really matter. They just,
they just don't want to be like, oh, we're the only, we're the only gambling and porn
website that doesn't have an AI component, so I don't want to get left behind. Yeah, it's also just,
how many companies have just contempt for their users?
Yeah.
Which is most of them.
How many of them are just like,
you fucking pigs,
eat this,
you like this,
you hate it?
Well,
you're going to have more.
Pay me more now.
Going over what Felix was saying earlier
about like how this is sort of unprecedented
and just like the weirdest reality of it
is I think we've talked about this before with you
other times you've been on the show.
But like unlike other,
you know,
big shifts in tech or like some new tech product,
like be it,
I don't know,
like the Apple,
the iPhone.
or high-speed internet, like widespread broadband.
Like, people had an affection for these products.
There was a use for them.
Do, does a...
I mean, like, you see the AI, like, the people who have bought into it.
You see the people arguing with Grock.
You see the people who keep pitching this idea, like, it's going to create
some sort of almost religious revelation, like a singularity,
and that all, you know, all accounts will be balanced.
But, like, do AI and its boosters and proponents in the media,
the people who take Sam Altman seriously,
Are they aware how loathed their actual product is?
And it's like, it's not just that their product doesn't work or doesn't do the things
it's promised.
It's that, like, I think most people do infer from it,
beyond, like, a direct threat to their livelihood or career,
something distinctly, like, contemptuous of human life, like, embedded within all of this
kind of AI ideology.
So I think AI as a, well, AI is a marketing term.
That's not my original thought.
I forget who originally said it, but it's true. It's a marketing term. It can mean everything and
nothing. I think there is something unique with large language models that make some people
feel smarter for using them. And when they use them enough and they manage to make them do something,
even though it's the machine training them, even though it is a bad product that is making them think
it's good, they think, wow, I am able to control the machine. And when they see people say,
this sucks, it's too expensive, it doesn't do the things. They say, it doesn't do
it for you because you're not chosen.
Like Boris Balkan from The Ninth Gate.
They think they're worthy.
And it's, they think that they're special.
And it's built to do that.
It's, you got this.
That's not just smart.
It's amazing.
They've built the language so that it really is like the imbecile's magic trick,
where it makes you think, oh, you're so special.
Oh, you know this.
Oh, well, only you understand this.
And it brings them into this kind of conspiratorial thinking.
On top of that, for, I don't know, imbeciles like Ezra Klein, who don't really like thinking more than one thought at once, they think, wow, AI, oh, I'm so scared the computer will wake up and it'll be so scary. And also, I want to be ahead of this because I love licking boots. So I want to, I want to make sure I'm the best at cleaning these things with my tongue before everyone starts doing it. And I'll know I'll be first in line. But really, AI is this thing.
where while there are many people who use a broken thing and say this is broken and stupid,
there are many others when they hear AI in their apps, AI in ads, AI on the media, and then
they use it, and it doesn't work, they go, something's wrong with me. And I think that that is
something that tech industry has been pushing for 20 years. It's the Steve Jobs thing. You're
holding it wrong. If you can't make AI work, that's because you're a moron, not that this is bad,
unreliable software. And I think that that's what's twisting them up. And the more people that say it's
stupid and it sucks, the more that they get stuck to it. And then I also just think there's a lot
of people who want to fall behind the tech industry. I think there's an alarming amount of people
that want the tech people to like them. I think there are alarming amounts of people that
think Sam Altman is very special and very cool. And that for them, it's kind of like politics as
well. It's like instead of accepting that we live in a deeply unfair system where assholes and imbeciles
are on the top of the pile, they think, no, there's something special about Sam Altman. When
Dario Amodei lies, it's not that he's lying. It's that he sees a greater truth that you can't be let in on, because of all the smart people doing secret stuff.
I thought it was one of the most harrowing cultural changes in the last 10 years before this was how every terrible open world game gave you a sassy AI sidekick.
The worst example of this would probably be in that really shitty Watch Dogs game
that takes place in London.
And I, I always,
that specifically,
like a Sassy AI sidekick is such a,
no,
there's no good game that has that.
Yeah.
Not like Cortana where it's like
an actual character,
but like a thing where it's like,
I'm a computer,
I'm a computer that,
who knows I'm a computer?
Isn't that crazy?
I'm gonna,
I'm gonna talk about pop.
Yeah, pop psychology about existence.
And I always thought like,
who,
fucking likes this. Who thinks this is funny or interesting? And then I would go to like Bill
Ackman's Twitter and he would, he would go, I just argued with ChatGPT about geopolitics. And it's
like, oh, there it is. It's like a dog barking at its own reflection. Yeah. It's just,
it's, are you, how narcissistic are you? Like, yeah, I'm reflecting my own thoughts against myself and now
I feel special.
And I'm sure that there are people who don't think and feel that way, but people
of the Ackman gene are like that.
They think that, like, the fucking former CEO of Uber, even, Travis, what's his name?
He was saying that he did, yes, he did vibe physics.
He learned physics, or quantum physics, in fucking ChatGPT.
Like, no, you've lost it.
If you didn't understand quantum physics before you started fucking talking to ChatGPT,
you're not going to
fucking learn about it
when you start
I love how much of it
yeah I love how much of this
is just like
smart guy pantomime
yeah yeah
all these guys who are
who are great at like
creating an image
and now like
at the end of the day
they're like all right
I'm done doing business
for today for the day
time to time to take
part in the wonders
of science I'm going to sit
in the big room in my house
where I've painted
the periodic table
on my wall
and just admire it
think about how great the elements are
and then I'm going to look at the solar system
then I think if I have time later tonight
I'll spend a little time with E equals MC squared
none of you are fucking smart
but it's the, it's the dunce radar.
It's like a metal detector for dunces.
It's, if you thought they were smart,
if you thought they were smart, you hear how
they talk about ChatGPT, you're like,
oh oh no you just
you learned enough words
You don't know anything, do you? You were, you were good at one thing. Like, everyone talks about
Altman like he's this really smart guy. I've talked to people for years, like, he's so smart, he's
so... You listen to him in interviews, he sounds like a moron. He genuinely can't get a sentence out.
They all do, whether it's Elon Musk, whether it's Peter Thiel, Sam Altman, Mark Zuckerberg.
Watch any video clip of these guys trying to talk, like, in public, and they all seem, like,
touched. Like, they can't form a sentence.
Yeah, they, they sound like they got on a business call that they forgot happened. And, like, I mean, like, Ed,
to your point about this, this thing being, like, like a divining rod for, for stupids, like, I know this is
a real thing. Because, like, when I see clips that people share on, like, social media, and it's something
generated with, with Sora or one of these, like, one of these AI models, and it's, like, a, it's like
a 10-second, it's like a 10- or 20-second clip, uh, that looks like a video game cutscene, and they're like,
Hollywood is shaking in its boots
if this is what this model is doing now
imagine what it's going to do.
And it's like, to the extent
that this already looks like
most of the dog shit that's out there,
I suppose that's impressive.
But that's just the quality of, like,
how bad movies have gotten.
But, like, the idea that anyone would look
at this absolute drivel
that's, like, bereft of any context
or character, and it's just, like,
I don't know, it's someone, like,
driving a car and looking cool,
and it's just, like, this facsimile
of, like, what an action scene
would look like in a video game,
and they're like,
this is awesome, and you just think, like, oh, I, I found someone with zero inner life. Yeah, you don't know
anything. With no imagination, no, like, uncultured, unlettered, like, just bereft, just someone Kaspar Hauser,
basically. And the greatest thing is as well is nobody seems to want to talk about the funniest
character in this thing and be like, this is the worst, the man himself, Masayoshi
Son, where he's just like, he sold all of his Nvidia stock,
a bunch of his T-Mobile stock, and has taken a margin loan on his Arm stock. So all of the
valuable companies, so he can give OpenAI $22.5 billion at the top. We're at the top of the
bubble. And he's like, no, I must double down on this. I must give this so that Sam Altman can
do a year max of inference. So he can run his company for one year if he's lucky. It's just
it's so fucking good. But no one wants to be like, this is the same as WeWork. Which, I
mean, they shouldn't say that. It's so much worse. WeWork had buildings. And even, well,
they had leases on a lot of them, but still, like, there was a thingy. Like, it's just, OpenAI, like,
this is a really important point. There was a place you could go to do work. OpenAI doesn't even
have assets. Well, they have, like, people and, like, office space, but their IP and research
is owned by Microsoft. Anthropic, I guess, owns their IP, but all of their infrastructure is
Amazon and Google. I mean, it's, it's like they don't, they don't, OpenAI doesn't own any GPUs.
Like, what the fuck are they doing? Like, just a small note about Masayoshi Son, the greatest
investor in history. The best. I, I love him. But, um, I think I may have mentioned this last
time you were on. I'm not sure, though. His mentor, yeah, was the other greatest man to ever live.
Den Fujita
author of
my personal Bible
the Jewish way
of doing business
Yep
And what was great was
He begged
He was like
I just want like
14 seconds in your presence
Like he begged
Den Fujita
As a child
As Masayoshi Son
As a teenager
Was like
I would just
I don't even care
If you say anything
I don't even care
I just want to be
fucking amazing
Just like
A 14 year old
who reads the Jewish way of doing business.
It's like, holy shit.
This is awesome.
Like, it's a shonen.
I think it's wonderful.
I think, I, like,
Masayoshi son, I think he's a Korean guy in Japan.
Yeah, and that's already, like, very challenging.
It's like a very weird cultural thing there as well.
But on top of that,
he just runs this business insanely and is pretty much still living off the great
bets he made on Alibaba and Arm,
which he is currently in the process.
of destroying to fund OpenAI. I think Sam Altman is like, I'm not saying he's like a,
like an antichrist, but he's like a reckoning for this industry. Like the tech industry for like
25 years has allowed various different guys who were not really smart, but could sound smart
to the right people at the right time to get as much money as they wanted. Eventually, one of these
people would try and get more money than anyone has ever tried to. And now we're kind of seeing
what would happen. Right now, we're in the greatest follower culture in business
ever, when no one has independent ideas. The reason that all of this started was because in
2022, Satya Nadella saw ChatGPT, said that Bing needed GPT, sorry, he wanted ChatGPT in Bing.
That was the only reason he bought these fucking GPUs. That was the only reason. The entire story
starts there. Had he not done that, who knows if this would have happened.
And Sundar Pichai of Google
could have just said, no, this looks stupid
but because they all copy each other,
we're here.
Well, Ed, I want to, like something you said a little bit earlier,
which is that chat GPT is not too big to fail.
But if it does fail,
the sort of the example it will set
for, like, the next Sam Altman,
and just like the way it will,
maybe not like burst the entire bubble of the economy,
let's pray, but like it will burst the bubble
that like these guys are geniuses.
And, like, do you think that that in some way is an existential threat to the entire idea of Silicon Valley and their ability to make money in the future? That if, like, at some point, people stop talking about these guys like they're Da Vinci or Einstein, people will stop giving them money to create electricity-using machines?
Yes. So I wrote a thing in the middle of last year called The Rot-Com Bubble, which was saying that tech is out of hypergrowth ideas. We're past smartphones. We're past cloud computing. We don't have a new thing.
So they all crystallized around AI because they had no other hypergrowth ideas.
They need something that is worth hundreds of billions of dollars of revenue over 10 years for all of them at once.
And because they're all basically a cartel, they all agree on the business models they like, they all come together and say that.
Now, do I think this is the end of startups? No.
But I think the mythos that the tech industry always knows what's going to work,
I think that will die to an extent.
I also think the knock-on effect is going to be that the limited partners,
the people that actually fund venture capitalists, are going to start getting a bit stingy.
Now, I believe 2021 damaged the world just as much as 2022, because I think all of the zero interest
rate era, free money thing, a lot of venture capitalists spent a lot of money on stupid shit that
year. And limited partners felt very burnt. They were saying, like, why the fuck did we give you
all this money? You wasted it. It's gone. Then AI came around and all the valuations went up and
they kind of went all this forgiven. However, if the
The problem all of these AI startups have is there's no real way out. The same problems that Open AI has are the same problems all of them have. Costs too much money to run the business. Impossible to measure your costs. Well, difficult to measure them at least. And no one is profitable. So if none of these companies can go public, because their economics is so bad and there's no path to profitability, and none of them can sell to anyone, they're just going to die. And a bunch of venture debt and venture capital is going to go in the toilet. This will burn the people who
fund the venture capitalists, which will hurt startups' ability to raise money. This will mean that fewer
startups are funded, and indeed, people like Sam Altman have a tough time. They will still raise
money from their own coterie. There are always going to be morons. But this will, this will hurt everyone
in tech. And on the public stock level, I do think it's going to permanently scar some of these big
tech companies, because the natural question after AI is, okay, what else have you got? What else is
going to make you grow? Quantum computing isn't going to
do it. Quantum computing, they haven't worked out how to do that properly. They haven't worked out
how to turn that into any kind of business. They don't have a new thing. So when these big companies
stop growing eternally, the markets are going to say tech is no longer the ultimate growth
machine because software used to proliferate infinitely. Software was the ultimate growth vehicle
because it didn't have the massive costs of goods sold. AI is the opposite of that. AI is
insanely expensive. It's really quite terrible all round.
If they have accomplished anything, and not just AI, but really everyone who has made this new economy possible from the capital owners to the people who post a picture of themselves using a laptop on an airplane and they're like, this is insane.
Have you ever seen anyone work this hard?
they have done something I never thought possible
which is they made
the authors of the previous economic crash
the finance industry
they've made them seem self-aware,
charming, worldly,
likable.
Well read.
Yeah.
Like, you know,
this could just be roasted in glasses
but with those guys at least there was like
there was this awareness that like,
okay I'm not I'm not really making the world better by bundling all these mortgages
yeah, like the CEO of, like, who was the guy, Angelo Mozilo or whatever, you see that guy, you're
like, yeah, okay, I get it, like, yeah, his face looks like a catcher's mitt, he's like...
Bernie Madoff made more profit than OpenAI. Yeah, yeah. Well, um, I guess, uh, to close things out
today I uh we mentioned Bill Ackman uh briefly here and I would like to do this is like
sort of, I don't know, not a full reading series, but like, you know, a short but sweet one
because I don't know if you guys, have you seen Bill Ackman's advice to young men on how to meet
women today?
Oh, yeah.
Okay, because, like, I want to say, I have only been swatted twice attempting this.
I want to say, ever since the election, ever since Mamdani won, like,
Bill Ackman's posts have gotten like 20 times shorter, and he's taken on sort of like a philosophical
cast of mind.
like he's no longer writing like 20,000 words about like the darkness coming.
He's become like some of that positive squad shit now.
And he's being able to.
He knows he's going to be executed on the first day.
And today, I just got to show this today.
This is this is Bill Ackman's advice to young men on how to meet women.
He writes, I hear from many young men that they find it difficult to meet young women in a public setting.
In other words, the online culture has destroyed
the ability to spontaneously meet strangers.
As such, I thought I would share a few words that I used in my youth to meet someone
that I found compelling.
I would ask, may I meet you?
Before engaging further in a conversation, I almost never got a no.
May I, okay, may I meet you?
If you're approaching someone to a young woman, you're saying, excuse me, may I meet you?
It's like, you should have already introduced yourself.
How about, how about, hi, my name is, or.
Can I buy you a drink?
Just sliding up with no friction.
Open lines I've ever heard.
If someone said, may I meet you?
I would be, I would go, are you a Terminator?
What the fuck?
I've been watching the X-Files a lot recently.
Me too, me too.
And it's like, it would be like one of the guys that bleeds green goop.
Yeah.
Would be like, may I meet you?
Mr. Mulder, may I meet?
The alien bounty hunter, yeah.
Yeah, yeah, exactly.
The guys, you have to stab in the back of the throat.
Yeah, with the, yeah, with the, yeah, with the.
Ice pick, yeah.
Mr. Mulder, may I meet you?
And it's like you've already, like, may I meet you?
It's like, what a way to introduce yourself to someone?
When did you last talk to a fucking person?
When was the last conversation you had?
And he says here, I would ask, may I meet you before engaging further in conversation?
I almost never got a no.
But did you ever get a yes?
Or was it just what?
Huh?
Or just like it's silent?
Like, what did you do after you asked?
Did you just stand there, like an NPC?
Because, yeah, like, it's a really good question.
Yeah, it's just like, wait a second.
I feel like I kind of already have met you.
You've just introduced yourself, but I don't know your name.
I feel like the meeting has taken place, but what happens after that?
Yeah.
What did you do if they said no?
Well, did you see the post where someone, someone says they tried this?
Yeah.
Yeah.
Yeah.
No.
Do you have that, Will, do you have that one on hand?
I think this is, yeah, this is our, RF.
I met Olivia Nuzzi.
I meet you.
Someone
says that they tried this.
They went to a bar and they said,
I saw a girl in a denim jacket.
She seemed nice enough.
So I went up to her and I said,
may I meet you?
Her and her friends burst into laughter
and started doing an impression of me saying,
may I meet you?
I just said, never mind and walked away.
never taking advice from a boomer again.
That fucking rocks.
He goes on to say,
I almost never got a no.
It inevitably enabled the opportunity for further conversation.
Cow!
You've never spoken to a person.
Well, Bill, Bill, I thought you were a fucking alien before,
but I certainly don't now.
Now that you've told me it opened up the opportunity for further conversation.
Yeah, the other conversation is what hours do they let you out?
of the group home. And how was like I can never come back to this coffee shop? And it says,
hello, may I insert my genetic material into your vaginal cavity? Hale. He says, I met a lot of
really interesting people this way. I think the combination of proper grammar and politeness was the key
to its effectiveness. You might give it a try. And yes, that's what, never had a conversation.
That's what people were thinking. He's not spoken to another person. This is so polite and his grammar is
so good. I have to know this
guy. May I meet you
is like, that is the type of thing
that like
it wouldn't be in True Blood, but
it would be in True Blood fan
fiction written by someone
whose first language
is not English. It's a line from that fanfic
My Immortal. Yeah.
Jesus Christ,
may I meet you?
And yes, I think
it should also work for women seeking
men as well as same-sex interactions.
Just two cents from an older, happily married guy, concerned about our next generation's
happiness and population replacement rates.
I love that he adds to that fucking Ed, you want to talk about the alien fucking colonizer
is from the X-Files.
He's like, I'm very concerned about the next generation's happiness and the replacement
rate for viable workers in the future.
You said, may I meet you, Mulder?
What will you do?
I am concerned that there won't be enough infant spines
for me to harvest. I mean,
admire. Are we having enough babies?
He is, I mean, I, I think, yeah, we've brought this up before, but like, I only ever knew
Bill Ackman as like the guy who got owned by Herbalife.
But he really, he, like, he, he was a well-known investor before that.
I just thought, you know, and he was for the longest time.
just like another, like, hedge fund guy
who mostly donates to Democrats.
Something,
I, ever since Elon Musk bought Twitter,
he's pretty much lost his fucking mind.
Yeah.
I mean, it's, it's the dunce thing.
It's the dunce diviner again.
It's just like certain situations
cause these people to be like,
wait, wait, wait, listen up.
What if I said, may I meet you?
You didn't say that as a,
you didn't say, you've never said that to anyone before.
You came up with this today.
You came up with this name, you turned to your blood boy, and you said, how would I meet person?
You said, I can't look at you or talk to you. Talk to me.
May I meet you?
Speaking of people talking to, I just look at the replies here.
Someone replies, does anything Bill is suggesting trigger high value social approval or any other primal signals, Grok?
Create a success matrix.
And Grok replies, Bill Ackman's polite opener
signals quiet confidence and respect, primal cues that convey high value without aggression or
neediness. It earns social approval by prioritizing civility over bravado, reducing defensiveness
and inviting reciprocity, success matrix, high confidence, direct ask, low desperation, no games,
moderate primal pull, grammar elevates status, works best in low stakes public settings,
yielding 80 to 90% positive responses per anecdotal data, though context like body language amplifies
outcomes.
I love how it says anecdotal
Kings.
I love how it says anecdotal data
probably just taking
from Bill Ackman's post,
I got more yeses than no's.
And that cost seven dollars.
Yeah.
I was like,
how about instead of may I meet you,
hi,
my name is,
or just like,
what,
you know,
no,
no,
I'm saying,
it's nice to meet you.
Or can I buy you a drink?
That,
can I buy you a drink?
There's a tried and true.
one that I think
woman ain't that
yeah
women need to be given
like a conversation
from Morrowind
you know
how do I advance
her dialogue tree
you know those
Tiger Woods texts
where he's like
texting the girl
at like seven in the morning
and he's like
do you like golden showers
what if we do them
with you and a woman
you trust
that would work better
as a first approach
yeah
may I meet you
well there we go
may I meet you
listener
may I greet you
every week
on Chapo Trap House
may I continue to meet you
all right
I think that does it for today's show
Ed Zitron
thank you so much for your time
thanks sir
I'm always happy to hang out with you,
this was a really fun conversation
yeah thank you
thank you for having me
if people want more Ed Zitron
where should they go
what should they do
go to betteroffline.com
or I'm on blue sky
and Twitter is Ed Zitron
you can find me there
and my newsletter is
where's your ed dot at
subscribe to the premium
please
all right everybody until next time
bye bye bye
