Big Technology Podcast - Are AI's Economics Unsustainable? — With Ed Zitron
Episode Date: July 23, 2025

Ed Zitron is the owner of EZPR, host of Better Offline, and author of the Where's Your Ed At newsletter. Zitron joins Big Technology Podcast to discuss whether the generative-AI boom is an unsustainable bubble ready to pop. Tune in to hear him debate OpenAI's multi-billion-dollar burn rate, Microsoft's leverage, and the economics behind ChatGPT. We also cover Nvidia's GPU market, SoftBank's colossal bets, advertiser drift from Google Search, and the hype around "AI companions." Hit play for a sharp, no-fluff conversation about the economics of AI.

You can find Ed's newsletter at: https://www.wheresyoured.at

---

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b

Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
Does the AI business have what it takes to survive?
Our guest today says no.
That's coming up right after this.
Welcome to Big Technology Podcast, a show for cool-headed and nuanced conversation of the tech world and beyond.
We're joined today by Ed Zitron.
He's the owner of EZPR, the host of Better Offline and the author of the Where's Your Ed at Newsletter.
He's here to speak with us about his criticism of the AI business and why it may all soon collapse.
Ed, great to see you.
Welcome to the show.
Great to see you. Thank you for having me.
Okay, so we've had some very different varieties of critics on the show. We've had people who've said it's poisoning society. We've had people like Gary Marcus who've said that the progress has stalled. We've had, in various iterations, folks who've talked about how it can be used by bad actors to do things like enhance viruses. Soon we'll have someone who's going to come on to talk about escape risk. But you are in a different category.
You really think that the business of OpenAI and the AI industry is unsustainable.
This is something we talk about a lot on the show.
I'm very familiar with your work, and it's great to have you here to discuss it.
Yeah, it's just all very silly when you look at it.
Right now, we're sitting there.
The most important company in AI is OpenAI.
They will burn probably $12 to $13 billion after revenue this year.
That's based on projections.
They also have no path to profitability.
They don't have one.
They claim, and The Information's reported a few times, like 2029, 2030.
They're going to magically become profitable due to Stargate.
Now, how will that happen?
Nobody actually knows, and OpenAI will not tell us,
because OpenAI doesn't really discuss their revenues
other than in really vague ways that go like,
we have 3 million business users.
What's that about?
When you look at the underlying finances, it's genuinely insane.
And it's more insane outside of Open AI.
The Information also reported that Microsoft will only make about $13 billion, not profit, just revenue, on AI this year.
$10 billion of that is OpenAI's Azure cloud spend.
$3 billion is them selling Copilot.
That's an insanely small amount, man.
3 billion is not a lot of money in the grand scheme of things.
They do, like, $19 billion in profit a quarter.
And this is on $50 to $70 billion of capital expenditures.
These numbers are terrible.
There's an analyst quoted by Laura Bratton at Yahoo Finance who said that he only thinks that Amazon is going to make $5 billion in revenue, again, not profit, this year on AI.
They are spending $105 billion in capital expenditures.
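A back-of-the-envelope sketch of the ratios Zitron is pointing at, using the episode's own rough figures. None of these are audited numbers, and the midpoint used for Microsoft's capex range is my assumption:

```python
# Illustrative arithmetic only: estimated AI revenue vs. capital
# expenditures, using the rough figures quoted in the conversation.

estimates = {
    # company: (estimated AI revenue, estimated capex), in dollars
    "Microsoft": (13e9, 60e9),   # ~$13B AI revenue; ~$50-70B capex, midpoint
    "Amazon":    (5e9, 105e9),   # ~$5B AI revenue; ~$105B capex
}

for company, (revenue, capex) in estimates.items():
    ratio = revenue / capex
    print(f"{company}: ${revenue/1e9:.0f}B AI revenue on "
          f"${capex/1e9:.0f}B capex ({ratio:.0%} of capex)")
```

On these figures, AI revenue covers roughly a fifth of Microsoft's capex and about a twentieth of Amazon's, which is the gap the conversation keeps returning to.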
This is an insane situation, and the fact that I am ever framed as radical or, like, a pessimist, when I'm just doing very basic mathematics, it's kind of strange.
I think it says a lot about the media in general, but also the tech industry in general. And people will say,
oh, well, Uber lost a bunch of money. Give it the fuck up on that one. Uber lost a ton of money in 2020.
They lost like $6.2 billion. I think their worst year on record was like an $8 billion loss.
But they had a product, and then they used it to fuck over labor forces. They used it to drag those numbers down.
But nevertheless, fundamentally different business. And also not a big company. Uber is not the face of, the savior of, the tech industry, because that's what generative AI needs to be now. It needs to be bigger than the smartphone market, which is about $450, $500 billion a year. Bigger than the enterprise software market, about $250 billion.
The current combined revenue of all the generative AI companies, and that's including the big tech companies, is about $35, $40 billion. It's insane, man. It's insane.
And eventually this has to stop. The growth is not there.
All right. So we're just going to talk
through your arguments on this show. And I think that I will pressure test them and we'll just go through
some of the objections. And, like we do, you know, I don't think listeners need to agree with everything that
Ed has to say. But I think, I won't call you a radical, I'm going to give you a fair hearing today, and
we're going to go through some of these claims. It is so strange though. And I know that you're not
characterizing it in this way necessarily. But the fact that the guy who is like, hey, this is losing
billions of dollars and not making that much, is the one getting the hearing. I don't know.
I know that that's not meant to be a negative characterization,
but my pointing at numbers that are out there that are ludicrous
is strange and must be tested versus things like,
oh, we'll have AGI in two or three years in the New York Times.
It's obscene.
Well, look, we test all of these things here.
No, no, and I know you do, but it's just in general.
It's very strange.
Okay, so maybe we'll talk a little bit about the general vibes around AI
later on, but let's just get right into where the value is here.
So if this is going to all fall apart, it means that what's happened in AI has to be, I think, by definition, not valueless, but sort of there's a cap to however good it can get.
You said in one of your shows, AI today is a $50 billion industry masquerading as a trillion dollar solution from a tech industry that's lost the plot.
Yes.
So let me just throw this out to you.
I mean, it seems clear to me that AI will be.
useful for search. You yourself have talked about how search is not a good product. I don't agree with
that fully, but, you know, we'll keep going. But let's say it's half as good as Google. Google is at a
$2.14 trillion market cap. So let's say it just gets half there. Then it's already a sizable business.
You're describing search as a product and search as a business. The largest and most successful
search business is Google. Google makes over a hundred billion on this a year. How do they do it?
Well, it's simple. They own the search engine. They own the infrastructure. They own the
advertiser, both the platform that sells the ads and the platform that buys the ads.
These things are being mangled by antitrust. You ever notice how there's no other real competition?
I think Bing makes like a billion, two billion a year. Well, it's interesting because even in
the antitrust hearings, because they're talking about now whether Google will be able to even
pay Apple the $20 billion plus a year.
Yes.
One of the interesting details that has been overlooked in those hearings is that nobody, not
Microsoft, not perplexity or whoever it may be, can make money off of search in the way
Google can.
And that, to me, suggests there's a fundamental weakness in the business to grow to
the size of Google's.
Like, perhaps the search market is actually a bit smaller.
I don't know how much smaller. But you're describing two things, which is search as a
product. And I do fully believe that if Google had tried to meaningfully innovate in search, other
than in ways to make money and ways to screw consumers, OpenAI would have been nowhere near
as big, because most people do use Google. Because the big thing is that OpenAI and generative
AI, large language models, are really good at inferring meaning from a statement. Really good
is probably a push. But you can give it a vague question like, oh crap, what was that 1971 movie
with some gangsters in it? And it will have a much better time inferring the meaning than anything
Google Search has done in a while. That's a big reason. On top of that, people want answers.
And Google has been hesitant, if not entirely resistant, to giving answers, until ChatGPT popped up and they went,
crap, we've got to make a really shitty version of this. And it's still a shittier version of what OpenAI does
with search, which I think does a shitty job unto itself, because any search result that could be
hallucinated is a dodgy one. And I think also Google has just given up any responsibility to their products
and to any of their customers.
I don't think people realize
how much Google has had to do
to make search that big a business.
Huge advertising.
And I mean, they bought, what was it?
DoubleClick.
Was it DoubleClick, way back when?
Like, they bought the rails for this a long time ago.
And to your point,
no one else has been able to copy it
despite there being multiple other companies that could,
other than Meta, who has created
a competing advertising product.
But that's what search has become.
Search as a product is very different
to search as a business.
All told, OpenAI would have to build such significant sales teams, ad tech.
They would have to be a very different company because selling advertisements is very different
to selling consumer ChatGPT subscriptions or enterprise, I guess.
But even then, The Information reported recently that that's not going so well either.
So we're in this weird situation where, yeah, you could say OpenAI could, SearchGPT could be this.
What happened to that branding, by the way?
Remember when SearchGPT was what it was going to be called?
Now it's just ChatGPT.
But the branding fell away because you just search within ChatGPT.
I know, but it's like they make this big thing where they're going to compete with Google,
but it's like, what are you actually?
And so, sure, they could make a, they've already made a competitor to Google.
I think a lot of their success has come from the fact that you can't search on Google Search as well.
Google Search does not understand what you're asking it.
ChatGPT often does. Kind of. Kind of, sort of.
I think it does a great job with search in certain use cases.
Yeah, it's, it's replacing Google for me.
Yeah, and it has for many other people, but that's the thing.
That just means that Google search is bad.
It doesn't necessarily mean ChatGPT is good.
And it's the inherent, one of the strengths of large language models is inferring meaning from what you're asking it.
But making that into a search size business is an entirely different thing and will cost them tens of billions of dollars.
Like it's not something where even if they could do it, humoring the idea, I don't think they will.
Let's talk the idea.
They would have to spend tens of billions.
Like Google owns thousands of miles of underground cable.
They have content delivery systems all across the world.
OpenAI doesn't own a damn thing of their own infrastructure.
Even this is the craziest thing that got reported recently.
The Stargate entity does not exist.
Talk more about that.
It doesn't exist.
They haven't formed it yet.
Oracle said it on their earnings.
It has not been formed yet.
So Oracle is allegedly, though Elon Musk claims this wasn't true, you know, the classic
truth guy.
The guy who never lies.
But he said that this isn't true.
But Oracle is apparently buying, allegedly buying $40 billion worth of GPUs to put in the
Abilene, Texas site,
for Stargate, the first, I think it's eight to 11 buildings, I forget. Now, OpenAI, who owns those
buildings? Who knows? I think it's Crusoe. Crusoe just had to raise a $750 million credit line as well
to build it. Their data center builders. Yeah. And they're also, they've never done this before.
They've never done HPC, so high-performance computing, before. And it's like, when you
look at the bits, it goes, oh, this is bad. Oracle has agreed to pay Crusoe, I think, a billion dollars
for 15 years, like they've, they have contracted Crusoe to do the work.
OpenAI, according to The Information, hasn't even signed a contract for the compute in Abilene.
OpenAI has done a great job of getting other people to do the work for them, but if you think
building a giant data center is hard, try building all of the ones you will need to make a modern
search engine. Perhaps there are efficiency gains. Perhaps there are ways of doing it differently.
Who knows? But it's not something where they can just go,
kudunk, kudunk, and now we're a search company as well.
It's not that easy.
And Sam Altman would love people to believe that.
Notice he's not really talked about competing with search though recently.
Not really heard much of that.
A few months ago we had that story about ads within ChatGPT as well.
I haven't heard any stories about the revenue from that either.
That's the thing.
Generally when companies are doing well, they tell you and they boast
or they leak it surreptitiously in a very obvious way,
none of the leaks coming out appear to be positive.
Now, why do you think, so let's just go to this core issue, why do you think that generative AI can't be a good replacement for search?
Because right now, the unreliability of search, Google search right now, was already a problem.
I think that the core technology of large language models could really help with inferring meaning in search.
I think it could, at some point, be useful in that way.
The problem is, it's like replacing a bad thing with a slightly less bad thing.
It's like, I guess you could do that. But I think it's pretty evident, because nobody else has done it, including OpenAI even, that you can't really replace the business of search.
But we are getting mangled up in the technology because, yeah, I think large language models are really useful for the intake of information.
I don't know about the presentation of information at the other end.
I don't think that they're great for research.
I don't think that they're great.
I've used it for search and been like, this is a pain in the ass.
This is not what I want.
There's too much crap here.
I have to sift through it.
I can't trust any of this.
So perhaps there are consumers who are just like,
I can trust this, bingo bango, I'm done.
Fine.
If that's replacing mediocre with shit, with piss,
I don't know what you call it.
So it's kind of like,
AI Overviews is kind of doing that.
It's just such a mess.
Because you can hear me kind of hesitating over the details
because it's like,
can you replace it as a product?
Yes.
Is it going to be good at it?
No.
Yes.
You like it, as other people
do, because it understands what you're asking it
way better than Google Search does.
How has Google not copied that as well?
That's the other thing.
They are in the process of copying it with AI mode.
The AI Overviews are so crap.
And AI Mode, it's...
I'm not saying that they're better than ChatGPT.
I would say ChatGPT is better.
I think that the real argument around Google is
the models, the models perform quite well on the leaderboards,
but you don't see that proficiency when it comes to actually building it into the
products.
I don't think they even do it the same way.
Because you ask a question to chat GPT, it generates a result.
With Google, it's like, all right, it feels just like a disinterested uncle.
He's reading the newspaper with a kid, like knocking his knee.
It's like, what do you want?
I think it's fucking this.
Here's a bunch of, leave me alone.
Because it's like AI Overviews does not do the same thing as how ChatGPT handles search.
ChatGPT spits out an answer, for better or for worse.
AI Overviews goes, all right, here's what I think the answer is, with some links.
I don't know if they're good.
Here's some other links.
What do you do here?
I don't know.
I'm just here to show you ads.
And that's what AI Overviews does. Goodbye or hello. Please stay on the page. I need money. It's just really weird.
That is what it does. It's so weird. It's just so strange that you've got these companies with
trillion dollar market capitalizations who run services that look like a dog's dinner. It's just
insane to me. It's everywhere. Did you see the thing on threads today? Talk about it.
So threads there was, I don't exactly know what happened, but everyone's messages were coming up as
the same thing. So you had a bunch of accounts saying like, I don't know what's fucking going on.
It's the same thing again and again and again.
Threads is terrible.
I agree with you on that front.
It's just, but that's the thing.
I think the reason ChatGPT has been able to make any meaningful progress against search
is not because of the proficiency of OpenAI.
Pretty good UX, works clean, pretty snappy.
It's because everyone else has given up.
But isn't this how it's supposed to work?
I mean, isn't it, no, let me talk it through.
Like, isn't it supposed to be that some company,
gets the lead in something, then a challenger comes through, builds something slightly better,
and then puts everybody on notice that if you don't improve, you're going to lose the lead.
Which is funny, though, because you're right. That's how it's worked. I think that that era
ended like 10, 15 years ago. I think that they kind of, we've not seen that kind of competition.
And Open AI is actually a great example, because to compete with big tech, you need big tech to
support you. So OpenAI is a Microsoft subsidiary. That's what everyone needs to just accept right now,
and it's what's happening in the news, which I imagine we'll talk about in a bit. So there is no competition. There
is an agreed upon substance that they all agreed to sniff, and then they all sniff the substance
and make money selling it. They all agree that AI is the thing they're doing now. So they're all
going to compete in the same kind of soft punchy way. You've got Amazon and Google backing Anthropic,
you've got Microsoft backing OpenAI.
You have this weird thing where Google filed a suit
to try and stop Microsoft's exclusive deal
to sell OpenAI's models.
No one's trying to make better search.
I don't think even ChatGPT is trying to be better search.
They're trying to sell a thing by claiming it's AI
that does something they can't really specify.
They're not sitting there going like,
how is this a better search product?
Because if they wanted that, they would have.
They would have built a deliberate search product
that represented it as search, rather than just an everything search.
A search for thoughts, which may or may not be correct.
A better search, I don't even know what a better search platform is,
but that was not what OpenAI started with.
That's not where the...
I don't actually think that OpenAI had much of a product vision.
Oh, for sure not.
I mean, they talked about how they released ChatGPT as a demo
and have sort of iterated on that since, but...
Which is pretty much how...
Like, I was told by a reporter once
that apparently, when Microsoft saw ChatGPT
and bought all the GPUs, it was because they wanted it in Bing.
They wanted to do that in Bing.
Hundreds of billions of dollars based on being like,
what if Bing was better somehow?
And that didn't work.
It did not work.
But let's talk more about this because,
so I think the thing that's nice about searching through these bots
is that they do, I think like you've talked about,
they give you, they understand your intent better.
I think they are getting better at presenting information
and they are getting better at linking information.
Okay.
So let's just say that this continues on a trajectory where it does, even if it's not the core intent, it replaces a good chunk of search.
I'll just make the business argument here and throw it out to you, which is that yes, marketers really care a lot about the signal that they get from search or the fact that they can, you know, with some consistency, measure their media spend on Google and know if it's working or not.
But ultimately, if people move from Google search to OpenAI or to some other LLM search,
my anticipation is that the money won't go away.
I think marketers have gotten – and advertisers have gotten so used to spending online
that they'll be willing to spend even if they don't get the same signal like we saw.
When you say signal, what do you mean?
Like whether people are going and buying the products that they're advertising.
Just so I'm clear, your argument is that they'll spend the money.
even if it doesn't work as well?
Yes.
When has that happened?
I mean, I think one example, I'm curious what you think about this,
is when Apple cut off Facebook's ability to measure
whether people were buying after seeing their ads.
And then Facebook got...
That was the App Tracking Transparency thing,
so it wasn't just focused on Facebook.
Right, right, absolutely.
Right.
Well, I mean, you could also argue that Apple wanted to build its own app install business.
Which they absolutely did.
They built their own app install business.
So maybe it was not entirely...
focused on Facebook, but you'd have to argue that Facebook was a big motivation there.
Advertisers are still spending a lot of money on Facebook, even if the signal is a little
bit murkier than it was previously.
That's because Meta has effectively a monopoly on social networks, which are a different
advertising platform.
On top of this, right now, OpenAI, I don't believe, even has an ad network.
I'm not sure.
No, they don't.
You know how multifaceted these things are.
The infrastructure is not there.
And the reason that Google makes so much money
is because they built the infrastructure.
And from what I know, from the digital advertisers I know,
they will try stuff, but they'll try stuff.
And if it doesn't work, they'll stick to what they know.
Correct.
Now, if OpenAI can get great CPMs, great CPAs, fantastic.
They've proven themselves.
Can they do that at the scale of Google?
I don't think they can.
And I don't know whether they're,
we don't know the exact cost, but we know they're burning billions.
If they're losing billions of dollars, it doesn't matter how good their ads are if the numbers don't add up.
There's so much they have to spend as well.
The staff they would require. I really should have looked up the amount of advertising staff that Google has before this.
But Jesus Christ, they don't have the people.
And they are still hiring and hiring and have to spend all of this money on salaries.
They have to, I think one of their executives recently said that they have this incredible pressure to grow.
Add the pressure of building an ad network,
and then building the market for it.
Because, remember, you can't just say
it's identical to search, because it ain't.
These things aren't presented as results within a thing.
They are presented as answers to a question.
Theoretically, that could have a different reaction,
and a more sticky one. But has anyone fucking proved that yet?
Perplexity hasn't. They wanted fifty dollars a CPM.
Bloody Aravind.
Get his head out of his arse. It's just
pie in the sky right now.
And I don't know that they have the time.
I don't know that they have the time to do this,
nor do I think that they have the resources,
because they also have to do this
and build data centers
and build a chip with Broadcom.
All the crap they promised for 2026
is bonkers.
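To make the ad-economics point concrete, here is a hedged break-even sketch: how many ad impressions an LLM search product would need to sell at the $50 CPM Perplexity reportedly floated, just to cover a burn rate in the range cited earlier for OpenAI. Both inputs are the episode's rough figures, not reported financials:

```python
# Hypothetical break-even sketch. CPM = revenue per 1,000 ad impressions.
annual_burn = 12e9   # ~$12B/year burn, the episode's rough OpenAI estimate
cpm = 50.0           # $50 per 1,000 impressions, the figure floated for Perplexity

# Impressions needed so that (impressions / 1000) * cpm == annual_burn
impressions_needed = annual_burn / cpm * 1000
per_day = impressions_needed / 365

print(f"{impressions_needed:,.0f} impressions/year "
      f"(~{per_day / 1e6:,.0f} million/day)")
```

Even at a premium CPM, covering that burn means selling hundreds of millions of impressions every single day, which is why "they'll just add ads" leans on sales and ad-tech infrastructure that does not yet exist.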
I think you're hitting on exactly
what the problem is going to be
and we talked about this on the show
a bunch which is that
you can, you can attract,
I think they will attract, a large chunk,
whether it's OpenAI or Google
through their AI Mode,
which will evolve
I think we are going to see a lot of search funnel
through these large language models eventually,
but it's a different format.
It's a different experience.
It's very easy to mess up.
It's not a slam dunk that it happens.
Do I think that if they get the users, the advertising money
will probably follow?
Sure, if they get that.
But it's an if. It's an if.
And I think you're really spot on
and pointing out that this won't be a slam dunk.
So let's talk quickly about some other uses
because, you know, the promise here
or the idea from these companies
is that, you know, maybe you,
Like you said, they're not saying that they're a search engine. OpenAI isn't saying that.
So maybe you do some search and you build a search business.
But then let's say your bot can also help people code better.
So that's got to be worth some economic value.
So you can amalgamate.
Could it?
Well, yeah, talk about your view on whether these code co-pilots are valuable in any way.
So they're valuable.
They are valuable in the sense that software engineering loves automating shit.
They love shortcuts.
It's an industry that adores
it. But I think that people misunderstand what a software engineer does.
They don't just code.
Sure, the junior level ones might, and there will be some early stage people, but we don't
know yet, and the numbers being parroted are bullshit.
Nevertheless, as you said earlier, I think this is a $30 to $50 billion TAM,
total addressable market business.
I think that the IDE market, integrated development environments, I think that's like a $13 billion one.
Like, there is a business there.
Of course there is.
It's, the code has problems, and there's tons of studies about it that suggest there are real issues with it.
But I think that's probably the most lasting one. But just because a business exists and is
viable in some sense doesn't mean that it adds up to a trillion-dollar industry, or even a
hundred-billion-dollar industry. And indeed, this is one of the most commoditized things. You've got
Cursor, came out of nowhere, and everyone's like, wow, look, they're going to be so big. Was it $200 million
ARR or something? It's like, great, that's a really solid public-company SaaS business.
No one should be doing backflips. It's changing the world. Is it? It's making developers faster.
Is it? How is it doing it? Which developers? These questions, they really harsh the flow,
so people don't tend to ask them too much. But one of the common misinterpretations of my work is
that I've said it's useless. And I definitely did say that, like, a year ago. There are use cases. It's just, they're here,
like the industry is this big
and everyone's acting like
it's the biggest thing ever
and it's just
It's not, like, they want it to replace code.
It's actually not going to,
because of the hallucination problem,
because of the probabilistic nature.
There was this insane fucking blog, man,
that was on Techmeme,
where this guy was, like,
something about AI critics.
Oh yeah, and he called it
"My AI Skeptic Friends Are All Nuts."
Yeah, I ran that through a couple of
software engineers, like Carl Brown
of Internet of Bugs, and they just kind of
fucking laughed at it. Because he was saying something
where he said, oh yeah, mediocre code's fine.
Is it now? Is mediocre code fine?
How do you think, like, Carl Brown from Internet of
Bugs brought up Heartbleed. That was like one thing
that a bunch of software engineers missed
for years in an open source product.
Just because we as human beings can catch things
doesn't mean we will. And just because it might be able
to catch something wrong with the code doesn't mean it will either.
But I trust a human over that more.
If we're turning ourselves over to something
we know to regularly get things wrong, I don't know
how much infrastructure you can turn that over to, which is the only way you're getting to these
massive revenue streams. Unless you can really rely on this, they've already got code automation
things. They hadn't before large language models. So yes, use cases, but how big? Are we really
meant to believe that Cursor is going to make $5 billion a year? Is that going to happen? Hey, is Cursor
profitable? Has anyone asked whether Cursor's profitable? You go and you see, like, a company like
You.com. And everyone's saying, wow, they got a valuation of a billion dollars. Annualized
revenue of like, I'm going to misquote this. It was like $12 million, $20 million. That's insanely
small, man. This is crazy. It's just nonsensical, almost. And everyone's saying that because we are here,
we will be 70 miles in this direction in two years. It just confuses. I guess it doesn't confuse me.
I think people want it to be true. Well, that hits on the question of whether you think
these models are done with getting better because there's like undeniable. There have been undeniable
leaps from something like a GPT3 to a GPT4. And so I think you get an environment.
But then, when did GPT-4o come out? Okay. Let me just finish the question. Then you can
shoot it down. Sorry, sorry. I get too excited. You know, I would say that there is, you have an
environment where you get the $12 million valuation funding or the $12 million in revenue and the billion
dollar valuation, where you have venture capitalists and not going to stand on the table
and defend venture capitalists, but where you have them say, there is potential for this
technology to get better. And therefore, if this company continues to do what it's doing
and the technology gets better, then maybe they can hit that market. And they'll bet on 10 of
them. And if one of them actually hits where they think the puck is going, excuse the sports
metaphor, then they will, then they'll be, you know, well rewarded for it.
And so that's why I think you're seeing this environment.
It's all predicated on the belief that these models will get better.
So I am curious to hear your perspective on why do you factor that into your analysis
or do you think it's kind of done?
I think the word better is where we need to start.
Okay.
What does better mean?
There's actually a point made by Jim Covello at Goldman Sachs last year.
It's like, these models get better.
But what does better actually mean?
We look at these benchmark tests, which are built specifically because these models can't really.
do regular testing. You can't really give them human testing because they don't do the things
that they're meant to do. So better does not mean, actually, it might be Daron Acemoglu
from MIT who said it, it was in the same Goldman report. But it's like, better does not mean more
capabilities. It does not mean that these models now can do a new thing. Even reasoning,
what happened there? I mean, it allowed some more, it helped with some coding things, sure. And there
was some growth, but it's to what end? What can we do now? What is the new thing? And I think
that's the craziest thing. I don't know what it's meant to be. I love new crap. I love
gizmos and gadgets and all that shit. If there was a way that ChatGPT could do something
for me, I would make it do it just because I'm like, cool, this is why I love technology. I
love doing things. What's new? What's new? And if the argument is, look, it's improved
coding by X, Y, Z.
Awesome.
Describe it in that term.
Describe it in the terms of boring software as a service or cloud compute.
Talk about it like you talk about Docker.
Talk about it like virtualization.
Talk about it like a technology that's a branch off.
Don't talk about it like it's replacing everyone forever always because it isn't doing it.
So, by the way, you're completely right with the VCs.
They're doing exactly what they've always done, which is make a bunch of bets, talk them up, see when you get in, like see what happens.
Because that's venture capital.
That's it.
That's the root of it.
I'm not defending it either, but they're not doing anything different.
The problem is, is that we're in hysteria.
We really are.
We're in a hysteria.
I had someone tell me, a source tell me, that there are, it's very rare that venture capitalists see the books, the actual accounts, and they almost never see the code base.
That's wild.
It's fucking crazy, man.
And it only gets worse because as deals get more popular, it's like, you don't want to do it.
I've got five more assholes over here who will.
So, which is the mark of a classic bubble.
So it's like, nothing about what I'm writing or saying comes from a place where I'm like,
this is something that I've walked into and said, this sucks, I hate it.
Because when ChatGPT came out, I dicked around with it for hours and hours
trying to find out why everyone was so excited.
So excited.
Everyone was so excited.
I'm like, okay, so it can generate crappy text.
Like, this is the most 19-year-old-at-college-arse text. No wonder it can replace college students who aren't taught to write; it writes like them, in the same kind of bland intro-body-conclusion way. Okay, not a business. But the actual use cases of this stuff have never emerged. They've never emerged. The reason that we keep hearing about agents but never about what agents can do is because the most common feature of agents is that they fail. There was a Salesforce paper that came out fairly recently that
says, I think, that they just categorically break down on multi-step processes, like they only
complete like 30-something percent of them.
Multi-step processes, by the way, referring to tasks in general, could you think of just
one thing?
But they failed at a remarkable amount of one-step-ones.
But you're answering the question about what happens when the models get better.
It's that they don't-
They're not getting better.
But this is that, well, I think-
You're saying, what happens when?
Look, let's go step by step, right?
If the models get better, then they'll be able to handle these multi-step
processes in a way that they can't today, because they are brittle.
If my grandmother had wheels, she'd be a bicycle.
Okay, I hear what you're saying, but like, like I said, like,
so you're, let's circle back to the question I asked you at the beginning of this
conversation, which is like, you're, you're pretty confident that there's no more
improvement, because I asked you about improvement, and you said there's no such thing
as improvement or we can't feel improvement, but now you're saying,
I'm saying that improvement is a, is a metric that they have gamed with the benchmarks.
Well, I'm not, so this is interesting with the benchmark side of things.
Yeah.
I really think that, like, they're useful in some ways, but they're not the be-all and all.
And it's weird to talk about, and I'm sure you have a response to this, it's weird to talk about the vibes of the models.
But like, but let's do it.
I do think that you can. With o3 from OpenAI, the vibes are definitely better than GPT-4.
It just feels like it's able to do more.
I tried o3 out
the other day. I took a photo of a thing I had hung up and I said, how much space is there from the
bottom of that picture, the poster, to the floor? It took four minutes. It wrote
multiple Python scripts to give me the wrong answer. Well, this is why, I mean, it is interesting
that, and this is why I think people are talking about how they're going to be good at some
things and not good at others. Okay. And there'll be some, like, capabilities where they're
going to be quite effective and some, like the one you... And the thing is, that's a reasonable
position. If that was how this industry had been
solved. But they're not selling it that way. Yeah, exactly.
If it was, no, I really want to say this. Except Sundar Pichai
from Google talking about jagged
intelligence, but I think... Oh, God.
Jackoff intelligence. Fuck.
I find that so disgusting.
That man last year, he lied about what agents can do.
It was during I/O. He said, oh, yeah,
you're going to have an agent that will be able to, like,
do a full shoe return with a thing
with your email. And I went, and that was
theoretical. What the
why? I can't go and lie to the bank.
Why can he lie to the market?
It's just, I think, though, getting back to the point, because I think it's important, say,
if they were selling this as, yeah, this is kind of unreliable but interesting tech,
and we're expecting it to, there are some things it can't do, there's some things you shouldn't rely on it,
very clear about that.
I wouldn't hate it.
If it was just like, this is what, I, until you get to the stealing from everyone and the horrible environmental stuff,
and then it gets even worse again.
But putting that aside, if this was being sold as, like, an experimental branch or even just an
industrial use of cloud compute, okay. I wouldn't judge them for that. I judge them for everything
else, but they're not selling it this way. You've got Andy Jassy claiming, oh yeah, we're going
to replace an indeterminate amount of people at an indeterminate time in some way or somehow. I'm
not really sure how, but it's going to happen, and it's on the front page of fucking Techmeme.
It's insane. The idea that Techmeme ran Sam Altman's gentle singularity, we should be calling
911 and doing a welfare check on that man.
thing was fucking insane.
If I said that,
they would check me for a concussion.
Sam Altman suggested that we'll have data centers
that build themselves.
Just, that's the thing.
That is the real distance.
Because you've got what large language models can do.
And as far as them getting better,
better how?
They'll increase those percentages.
There is very clear,
and Gary Marcus was just on talking about this,
there's a very clear gap between
what a large language model can do
and what it needs to do to be reliable.
And that gap, I think, is much larger than people realize it's the classic problem with all AI, with self-driving cars even.
Or it's like, it's not the fact that it can't do some things well.
It's that it can't reliably do anything.
Self-driving cars require someone watching them at all times just in case.
You can't do that with ChatGPT. There's too many people.
So it's just this interesting industry-wide cognitive dissonance, I guess.
It's insane.
When I talk about this stuff, it makes me genuinely worried.
how many people have been taken by it.
You brought up this statement by Andy Jassy,
by now it'll be a few weeks old,
about how he wants to replace.
He thought.
He wants to, well, right.
Oh, he wants to replace people with AI, or believes that it will be a people replacement.
And I think that is enough.
So I've talked,
we talked about search and coding.
I think the thing that's been unspoken so far is that when it comes to the valuations
for a lot of these companies,
they're going to need to replace full-time employees,
or at least the work that a full-time employee does, in order to be successful.
Agreed.
And people either want complete autonomy or they want Jarvis.
They want to be able to say, I need you to look up, blah, blah, blah, blah.
Okay, give you an example.
Manus.
Is it Manus? Manus, yeah.
It should be Manus.
I asked Manus to look up every article written about me in the last two years.
It could be a list of links in a spreadsheet.
And I'd guess like 100 of them is the actual number.
Eleven minutes later, and, like, a ton of Python (these motherfuckers love Python), it gives me 11 links, right?
I tell it, you missed a few. It gives me another nine after another 10 minutes, I think it was.
How close is this to replacing? Who is this replacing? Because it's not even replacing offshoring,
which I think is what companies really plan to do. They just want to ship the work overseas and get cheap labor.
It's always been the case. Google loves it. People I talk to at Google are saying, yeah, they're
just getting rid of people and replacing them with contractors in India or in other countries in
the global south as well. It's very strange what's happening. I think that I'm actually shocked
that so many reporters are still saying agent with a straight face because what job is being
replaced? No, sorry it's not. You've got companies firing people and claiming AI, but notice that
none of these big sexy Kevin Roos stories about replacing people actually include a single fucking
person replaced. Now, Christopher Mims had a story in the Wall Street Journal about a year ago,
a really good one where it was artists, art directors and copy editors who had been replaced
with AI. But the real story was they had been replaced with shittier versions of their product.
Their process was not replaced. Their job was not replaced. They were basically contractors
rejected by business idiots, as I call them. People that don't really understand the process
of their work. And it's fucking tragic, but there are some jobs that will get replaced.
and not as many as they're saying
by people who are assholes
who don't respect their customers
who want to do a shitty job
and always will
and they would have found another way to do it
they would have gone on 99 designs
they would have gone on Fiverr
they would have found cheap labor
to do the labor that they don't respect
but there is right now
and I don't think there's going to be
any replacement of labor
at the scale that they're discussing
my evidence is nobody's bloody done it
you have all the king's horses
and all the king's men you have Google
you have Apple
you have Salesforce, you have service now, you have all these companies who could not talk about AI more if they tried.
Where is the agent? Because if they did this, if they actually were doing the thing they're claiming, they'd be making tens of billions of dollars extra.
They'd be making an absolute shit ton.
The Information reported a few months ago that Salesforce does not expect any growth from AI this year.
That is absolutely bonkers for a company that's rebranded, and I paraphrase here, as an agent-first company.
It feels like the most egregious lie I have ever seen told in business history, just completely obscene.
And people are lapping it up.
And it's insane.
Well, I think with these software-as-a-service companies, there's so much broken in SaaS today that you can put AI in there potentially and, like, paper over some of the problems.
With, like, systems talking to each other, and trying to synthesize information that you have in your systems to make sense of it, because it's spread
all over the place and takes hours to pull reports.
So that's a possibility.
And maybe that's economically valuable.
Your argument is that their systems are so poorly designed,
they can't put AI in them yet?
No, my argument is that,
speaking of broken products that AI fixes,
they might be the most broken of all products
with an opportunity for AI there.
Fully agree. And also,
if anyone was going to make money off of it, though,
one of the companies, it's not like a situation
where one company is ahead of everyone else.
OpenAI isn't ahead of everyone else other than in scale.
And I would argue they got that because literally every single media outlet has been talking about AI for three years.
And when they talk about AI, they say chat GPT.
It is the world's best marketing campaign ever.
Sam Altman is a genius for that.
And he's also like a business idiot whisperer.
He can talk to guys that run companies that don't know how their companies work and just be like, yeah, we're going to replace everyone.
It's going to take two minutes.
It's going to be the best thing.
Donald Trump or Jason.
He sounds like Trump.
Yeah.
No, he really is, like, he's like the soft-spoken Trump.
But he, it's just, it's so strange, but I get kind of animated about it, because when you start talking about it, I'm not even saying anything, like saying some objective statements, to be fair.
But when you just say, like, they haven't done this yet, they haven't done this yet, there really isn't evidence they can do it.
Like, really, there isn't.
They don't have, it's not like they have a whiz-bang moment.
Like, you could, Waymo is imperfect.
but you can get in a car in San Francisco and it will drive you around.
And you could do that a few years ago in very controlled spaces,
but we don't even have a controlled space where an agent's doing something really cool.
And I think the closest they're going to get is like an agent that can do purchasing on a platform.
And I think that that's just because they'll connect APIs to APIs.
That doesn't feel terribly far away, but that's also not a trillion-dollar industry.
And are you potentially underrating the, um,
bureaucracy part of this and the fact that like big organizations which this could help they move
slow there's bureaucracy there's approvals there's owners of different groups like it's tough for them
to do anything so maybe it's a people problem and not as much a technology problem maybe it is
as you would put it a business idiot problem by that if it was not everywhere and no one had done it
if it if it was a few people were I I understand the argument it's like if there were a few people
that had done this and, like, they'd done a ramshackle one,
but it was kind of working, it's like, oh, that would be cool?
Would it?
It's still, like, if someone was doing it in a smaller situation,
I don't know, wouldn't Open AI be doing it?
Like, if, just real blunt, wouldn't Anthropic be doing it?
Dario Amodei's out there saying, oh, yeah, we're going to replace, like,
what, 10 to 20% unemployment, 50% of white collar.
I don't think he said that. He said 50% of entry-level white-collar jobs would be gone,
10 to 20% unemployment as a result of this.
If I'm wrong, I'm wrong.
I apologize, but the 50% thing was on.
Alison Morrow from CNN has the best piece on this.
Yeah, yeah, we actually read that on the show afterwards.
She is very good.
Possibly the best living business journalist.
She's absolutely fucking incredible.
So the thing is, wouldn't Open AI have these agents?
If they could do this, wouldn't they be doing this?
Indeed, someone once made an argument to me online that I actually found quite compelling,
which is, why would you sell AGI if you made it?
Why would, if you could make an agent, sure, you could sell it.
to everyone. You could just run an incredibly profitable business with, like, nobody. The one-person
billion-dollar company Dario Amodei's been promising everyone. Next year, next year. Oh, sorry, it's in
2026 with the chip from Broadcom. That's another thing. With Stargate, of course, the
Stargate in the UAE. Um, also the device from Jony Ive, that's also coming. All of this is going to
happen in, what, six to twelve months? I can't wait for the future. But the thing is, where's
the beef? Where's the thing? Where's the money even? But the money isn't there. The product isn't there.
And anyone putting this to people who love AI, quote unquote: where's the thing? What are you
actually excited about? Not what could it do; what does it do today that even excites you? And if the
answer is, wow, it's kind of like a living encyclopedia. Okay. Can I give you a different
answer to that? Sure. And this is, again, this is, we've been talking a lot about use cases. I do want
to spend a little time talking about the business of these companies, but I think it's worth bringing
up one use case we haven't brought up yet, which is companionship. That is the number one use case,
I think, surprisingly. There was a Harvard Business Review article that pointed that out. Was that a ranking? I thought
that that was just a list. I didn't know it was a ranking. No, it was a ranking. And it became number
one. And it's clear that is not a business. It is clear, well, people are becoming friends with these
bots. Sure. They're paying for them. Absolutely. It seems, and I'm not.
a big fan of the fact that people are replacing.
Oh, I don't love it at all.
With AI friends, but they're doing it.
It's, oh, it's a sign of something wrong.
It is a, we are a decentralized society.
We do not have the shared spaces where we regularly meet people,
tons of people remote working, which is great,
but non-walkable cities means that people aren't meeting people regularly.
Yeah, that is a use case.
We don't know the scale of it.
If I had to actually guess,
I think the majority of people using ChatGPT are using it like Google Search.
I'm deadly serious.
I think that there is a growing amount of people using it
and I think it's a deeply unsafe technology
I also think that is one of the most easily commoditized businesses
in the world.
To make AI friends or AI search?
I think, well, kind of both,
but really AI companions feels like something that ChatGPT
again, because they are all over the place,
all their use cases.
It's something that they're getting
because they are the biggest name said to everyone at all times.
It's something that can be replaced by any number of other things.
Hey, did you read the story about meta?
and how you can have John Cena sext your child?
Oh, man, you didn't.
No, there was a story, Jeff Horwitz.
No, I did read that.
Jeff Horwitz, the goat himself, at the Wall Street Journal,
where you could have paedophile conversations with Meta's AI.
So people are using Meta.
Wait, was it actually pedophile?
You could explicitly have it, have, you could say you were underage,
and it would have a conversation with you.
I think they've closed the gap now.
It's a great story, incredible journalism by Jeff.
But it's like, yeah, people are using this,
and people are likely using it in,
ways and it's disgusting
and hey imagine
if we'd have regulated tech
imagine if we'd ever done that
if we had like an EPA for tech
if there was any restraints on these
companies but no there aren't because
what if we didn't have growth forever
but nevertheless it's a use
case but what does that use case
prove exactly other than this can
do that and people
are somewhat easily
fooled it's the same
the use case that Jeff
Horwitz brings up in the Wall Street Journal is not
one that I think is going to be a common one. But companionship. But companionship is. Wait, you don't
think that a horny teenager would try and talk to one of these things about sex? I hope that the
labs build the... I hope, I really, no, I genuinely mean this. I'm rooting for Meta and everyone
to stop this. They need to. It's fucking horrifying. But yeah, that's a use case. Is it a business?
Is it not something that can be easily... I would argue that if they get friendship right, it is a great
business. Because who is "they" in this case, and how big could that be? I think it could be a big one. I mean, again,
this is not the direction I'm rooting for the technology to go in, but if you have an AI
that replaces a friend for you or is your companion, you would easily, I mean, pay $20 a month. I think
that would be an easy subscription to charge. But let's get into the business thing, though,
because I posted this earlier, and I mentioned it earlier. For this to be as big, it would need to be
the size of the software... Is that because of the funding? Sorry, you say, you say...
It's because of the investment in infrastructure. It would have to be bigger than the smartphone market, so $450, $500 billion a year, bigger than enterprise software.
We can take that side.
We'd just focus on the consumer use cases.
For that to happen, this business would have to... For OpenAI, I think they've estimated, and these estimates are wank, just total nonsense,
I think they've said like $126 billion of revenue a year by 2029 or something like that.
And just to be clear, Netflix made about $39 billion in subscriptions last
year, and Spotify made $16 billion. So you're telling me that whatever this market is is going to be
bigger than both of those doubled. Is that the plan? No, I'm not saying you here. I'm just saying
no, I want to answer this question because I'm the one that threw it out there. Look, I think that
we are inevitably going to see some of the funding that's gone into this industry go to zero or very
low, without a doubt. Some maybe, I mean, if you take it an aggregate, we'll see if it pays off.
Some will win, I think, but many will lose.
What does winning mean, though?
I mean, they'll get their investment back.
Oh, okay.
Yeah, that worked out for Scale,
Scale's investors, pretty well.
Exactly.
So there will be big exit.
I think Open AI will IPO at a certain point.
I think that that is an astonishing leap of logic.
Well, because, okay, you're talking, you want to talk about the structure and the fact that they may never be able to leave the nonprofit side.
Do you think that these horrifying books are going to look good to the markets?
There is nothing in the markets that looks
like this dog, a company that burns $5 billion, loses $5 billion by spending $9 billion.
I don't know, Ed. I mean, CoreWeave is up like an insane amount since its IPO because people are
interested in a story. That's cool. That's cool. They don't lose anywhere near as much as OpenAI.
They're $81 billion, CoreWeave itself, which is literally just an infrastructure company that sort
of resells Nvidia chips. Does it? Well, you tell me. $81 billion
market cap, and since their IPO, they're up 325%.
Absolutely wild.
So they have a very small float, by the way, most of which is held by, like,
Nvidia and Magnetar.
Right.
So CoreWeave will probably raise another $10 billion through another share sale.
They can plug away for a few years, but what happens if the AI bubble bursts, if growth
slows?
CoreWeave is a business heavily built on GPUs, on raising money based on...
Here's an interesting question.
Is it round-tripping when Nvidia sells...
GPUs to a company that they own part of, that they own part of the stock in, that they have a $1.3 billion
Project Osprey cloud deal with? Is it round-tripping if they sell them the GPUs, and
CoreWeave then takes the GPUs, raises money from institutional investors based on the value of those
GPUs and then uses that money to buy more GPUs from Nvidia? I don't know. Maybe if we had
a government to look into that. But fundamentally, CoreWeave and OpenAI are insane businesses, OpenAI even more so.
CoreWeave owns stuff.
They have actual buildings.
Now, I don't think that they're ever going to scale.
And I do think that that dog will die.
And I will dance.
Mostly because people think that stock valuations actually change anything about my
argument, which that article really drove in mouth of madness.
The reason why I brought it up is you said,
is the market going to read the books about open AI?
But let me finish, though.
But I'm just saying that the market can go with a story.
It can.
But Open AI has.
has no assets, really. They don't, they, Microsoft owns their IPO. They create, sorry, their
IPO, their intellectual property. They own their, they, Open AI owns no infrastructure. They have
their staff. They have their research. Wait, Microsoft also has that. They have the exclusive
right to sell, no, wait, Microsoft can also sell their models. They don't own Stargate. They don't
own the GPUs within any of the servers. In fact, they don't even make enough, I've referred to them as a
Banana Republic because they require exterior money to come in constantly. Because when you look at
what Open AI is, they don't own very much of anything. They own part of CoreWeef. They had about
$350 million worth of Corweave stock. That's, that's fun. By the way, Open AI's deal with CoreWeb is
pretty much the only way that CoreWeave can raise more money. So I hope nothing happens with OpenAI.
That's the thing. Open AI is an asset light business with research and IP that's owned by another
company. They don't have much to trade other than their name. And their name is insanely
strong. It really is. But as a company, they would have to, at IPO, expose themselves
in a way that they never want to. Because they would have to say all of the material
deficiencies within the company, they would have to list the genuine risks. And the risks
would be every single thing I'm saying. They would say that CoreWeave had to amend their S-1 to add
the counterparty credit risk from OpenAI. Because if OpenAI stops paying CoreWeave,
CoreWeave doesn't get a bunch of their revenue.
OpenAI starts paying CoreWeave in October 2025,
just as CoreWeave's second loan, DDTL2,
starts requiring them to pay probably more than OpenAI will be paying them.
This is the systemic risk I'm talking about.
A 500-million-user consumer product
that loses them money, that converts horribly.
All right, I want to talk about that.
Let's take a break and we'll be back talking a little bit more
about the infrastructure costs of OpenAI
and what chat GPT is underneath the hood.
We'll be back right after this.
Hey everyone, let me tell you about The Hustle Daily Show, a podcast filled with business, tech news, and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email for its irreverent and informative takes on business and tech news.
Now, they have a daily podcast called The Hustle Daily Show, where their team of writers break down the biggest business headlines in 15 minutes or less and explain why you should care about them.
So, search for The Hustle Daily Show in your favorite podcast app, like the one you're using
right now. And we're back here on Big Technology Podcast with Ed Zittron. He is the host of the better
offline podcast. You can also get his newsletter, Where's Your Ed At. What's the domain name?
Wheresyoured.at. Great domain name. I know. So let's talk a little bit about the money
that Open AI loses. And so I've been listening to your podcast and whenever someone brings up this
argument that they will learn how to deliver what they have today more efficiently, your next line is
something like I will squash you like a bug or I will compact you like a cube in a car
compactor. Yeah, yeah, exactly. That's accurate. So do that to me, Ed, because
because I mean, the stuff is without a doubt getting cheaper to run. Why do you say without a
doubt? Because if you look at the way, I mean, you could just look at the way that they're, oh, shoot,
now you got it. But if you look at the price that they're selling this stuff at, it's lower.
That doesn't mean a goddamn thing, man. Well, what about, okay, so now, let's, let's... So here I am in the trash
compactor, but, but, I mean, do you deny that there's any
algorithmic efficiency being had within these? I'm sure they're trying. But do you think that
this is sustainable? They had, um, they were selling, so GPT-4, OpenAI's GPT-4, was three cents per
1,000 tokens. Yeah, prompt tokens. o4-mini is, uh, what is it, a dollar 10 per
million tokens. It's much cheaper. Okay. So you think that they're just losing more money as
opposed to becoming more efficient in the way that they're... Maybe there's some calculation where they're losing
less money, but they're still losing money. There have been... I'm going to get a little out of my depth here
because I'm going to talk about model architecture, but there have been architectural
innovations that have made it cheaper to run these models like the mixture of experts model.
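The per-token arithmetic behind the price comparison a moment earlier is straightforward. A quick sketch, using the figures as quoted in the conversation (GPT-4 at three cents per 1,000 prompt tokens, o4-mini at $1.10 per million; check OpenAI's pricing page for current numbers), and noting, per Ed's point, that these are list prices, not what inference costs to serve:

```python
# Per-million-token list prices, using the figures quoted in the conversation.
# These are what OpenAI charges, not what inference costs them to run.
gpt4_per_million = 0.03 * 1000   # $0.03 per 1K prompt tokens -> $30 per 1M
o4_mini_per_million = 1.10       # $1.10 per 1M tokens, as quoted

drop = gpt4_per_million / o4_mini_per_million
print(f"${gpt4_per_million:.0f}/M vs ${o4_mini_per_million:.2f}/M, roughly {drop:.0f}x cheaper")
# $30/M vs $1.10/M, roughly 27x cheaper
```

Falling list prices are the host's point; Ed's rebuttal is that a price cut says nothing on its own about the underlying cost of running the models.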
When you say these models, which are you referring to? I mean, you could talk about, I mean,
Yeah, big foundational models.
Okay, but we're talking specifically about open AI.
Open AI's.
So I think they do use, I mean, let's just talk about the mixture of experts model, right?
So instead of lighting up the whole model to get an answer,
they will channel your query into the area where they think the model can answer.
I mean, the folks who built DeepSeek,
it seems like that was a big part of the way that they were able to make it cheaper.
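The mixture-of-experts idea the host is describing can be sketched in a few lines. This is a toy illustration of the routing concept only, not OpenAI's or DeepSeek's actual architecture: a small gating network scores several expert weight matrices, and only the top-k are run for a given token, so most of the layer's parameters stay idle on each forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixture-of-experts layer: 8 experts, but only the top 2 run per token.
NUM_EXPERTS, TOP_K, DIM = 8, 2, 16
gate_w = rng.normal(size=(DIM, NUM_EXPERTS))        # gating network weights
experts = rng.normal(size=(NUM_EXPERTS, DIM, DIM))  # one weight matrix per expert

def moe_forward(x):
    """Route a token vector through only its top-k experts."""
    logits = x @ gate_w                    # one score per expert
    top = np.argsort(logits)[-TOP_K:]      # pick the k highest-scoring experts
    w = np.exp(logits[top] - logits[top].max())
    w /= w.sum()                           # softmax over just the chosen experts
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

y = moe_forward(rng.normal(size=DIM))
print(y.shape)  # (16,) -- only 2 of the 8 expert matrices were multiplied
```

Because only TOP_K of NUM_EXPERTS expert matrices are touched per token, compute per query drops even though the total parameter count stays large, which is the efficiency lever being described here.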
Right. Why do you think, okay, I shouldn't really be asking the questions, it's your podcast.
With DeepSeek, isn't it weird that we didn't really see any efficiency gains discussed by a single one of the model companies,
that none of them even seem to do the same thing, other than Perplexity releasing like a 1776 version of R1 without the Tiananmen Square thing?
Just one of the, Aravind's friend, he's like, he is so, so lame.
Okay, you brought this up a couple times.
Just let it out about perplexity.
What don't you like about them?
Well, first of all, they're an insanely badly run company.
They did like $35 million.
They lost, I forget exactly how much they lost,
but they did refunds or discounts of like $30 million.
They're literally giving money away to make people use it.
And even then, they only have like 50 million users.
I also think that as a CEO, Aravind just goes and says shit.
That is just annoying.
He just, he could be.
I'm surprised that you're saying that you want him to behave better.
It's not I want him to behave better.
I wish he'd just be more direct about what Perplexity can do.
but every fucking few weeks,
he did this whole touchdown dance
after the Google search trial
and then nothing else.
It doesn't feel like he's trying
to create a competitor to Google.
It feels like he's making
a Silicon Valley hero story out of himself.
And it's boring and lame
and it's a bad business.
Give it up.
Okay, that means,
I don't mean like shut down the company,
but he's good at raising money, I guess.
But back to the model thing
and the efficiency thing.
Yes, they are losing money
because, it's just a really
easy one, they would be saying so if they weren't. You think that Sam Altman, if they had managed to make
this profitable, would not go out there and tell everyone. He absolutely would. Also, he'd be telling
investors immediately. There are some great reporters at The Information. I quote them a lot because
they're doing some of the best tech journalism out there. Which reporter? It was, um, it might be
Anissa Gardizy or Stephanie Palazzolo. There's John Victor over there. He's excellent. And, ah, fuck,
who's the other one? He's going to kill me. Cory Weinberg's done some excellent work as well. There's also
a new person there who I'm forgetting
who did... Okay, that's a good number.
No, but they've got like a really excellent team
but where was I?
You would get leaks that say that they've gone
profitable and that would be. Well, I don't think they
want to go profitable. They're just trying to
at least at the moment, most startups
at this stage don't want to be profitable.
Open AI stage?
They're, like, at the equivalent of, like, Series
D or E. That's absolutely
when you go profitable. Okay, so again
let me just... And then they need to go public.
I'm going to bring up their side of it just
For a sake of the argument, I think what they're trying to do is get this technology in the hands of as many people as possible.
And they understand that it's a more capital-intensive technology than most others.
And?
And so, therefore, they're not profitable.
So that they...
But I don't think there is a magic profit button.
No, but that's what I was going to bring up.
I don't think there is a switch that they could flip today and be profitable and deliver the same quality of models.
I think...
Could they just, like, switch ChatGPT to GPT-4 and potentially be profitable?
Maybe. Sam Altman suggested that they would take away the model selector months ago.
He likes to say stuff and then just they disappear. He gets the articles, nothing happens.
Very good for Sammy. The thing is, I think that what's happened is everyone thought about a year and a half ago that this was going to change.
It was going to, because there was that big jump from GPT-3 to GPT-4 to GPT-4o.
There was the multimodal side. It was like a, oh, this is really interesting. The voice mode was interesting.
It's like, oh, I can extrapolate from here that we made this big-ass leap.
So in six months, we're going to be here, and then six months after that.
Except it's like, in six months, we're going to be here, and then maybe we're here in another six.
For listeners, Ed is doing the very incremental improvement.
Doing like a very small hand movement.
So that's the thing.
I think that they're all wrapped up in it.
And yeah, Open AI is absolutely trying to get as many users as possible.
The problem is if you're losing money on each one, and also their conversion.
Here's my favorite Open AI stat.
Well, more of a question I always ask,
which is why do they not show monthly active users?
They talk weekly.
And the reason is, because if you compared
what their real monthly active users would be,
500 million weekly,
so I'm going to guess 700 million monthly,
divided by the 15.5 million customers that pay for it,
that's a dog's doo-doo of a conversion rate.
That is so bad.
So you're saying that they're giving the lower number,
the one that's more active, because they don't want to make it seem like very few convert? They don't want to...
They don't want the conversion rate out there. They don't want people to say, oh, you have a conversion...
I can't do math very well. Me and ChatGPT share a problem. It's... yeah, just throw a number out.
No, but be very confident about it, Ed.
Two percent-ish. Like a really, yeah, bullshit conversion rate
for the most notable company in the most notable industry, with all the press and all the marketing.
That's their conversion rate.
That's bad, man.
It means that they can't work out,
and no one else can work out,
what the hell to sell this on. Indeed.
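As a back-of-the-envelope check on the conversion-rate point above, a minimal sketch in Python. The 500 million weekly figure and 15.5 million paying customers are the numbers quoted in the conversation; the 700 million monthly figure is explicitly the speaker's own guess, not a disclosed number.

```python
# Rough sketch of the conversion-rate math from the conversation.
# 500M weekly actives and 15.5M paying customers are the quoted figures;
# 700M monthly actives is the speaker's extrapolation, not a reported one.
weekly_active_users = 500_000_000
estimated_monthly_active_users = 700_000_000  # speaker's guess
paying_customers = 15_500_000

conversion_rate = paying_customers / estimated_monthly_active_users
print(f"Estimated paid conversion: {conversion_rate:.1%}")  # ~2.2%
```

That derived figure is where the "two percent-ish" in the exchange comes from.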
Sam Altman loves to say,
oh yeah, I can't wait to see what you build with it.
Mate, what are you building with it?
You're the fucking owner.
And they want their API business.
It sounds like, also, weirdly, Anthropic
is doing better on API.
They're selling more;
a larger percentage of their business is API.
But they still lost like $5.2 billion
last year. It's completely insane.
But it's just so strange because you can have something this big that fails.
You can have something this big.
And when I say fail, I don't mean ChatGPT goes down and everything
and all the people in the building get thrown out.
It would be somewhat messier.
And I can go into that at some point.
But I think that we are in a moment of mass delusion
where no one really wants to talk about these numbers
because when you talk about them, they're scary.
And here's why.
Okay, magnificent seven stocks make up about 35% of the US stock market.
19% of that is made up by Nvidia.
Nvidia's revenue, I believe, is like high-80s percent based on data center GPU sales.
Data center revenue in the last earnings from Nvidia was below analyst expectations.
No one really wanted to write about that one because
Nvidia is pretty much holding up the stock market on some level.
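To make the concentration point concrete, a quick sketch of the percentages quoted above. The figures are the speaker's, taken at face value.

```python
# If the Magnificent Seven are ~35% of the US stock market and Nvidia
# is ~19% of that basket (figures as quoted in the conversation),
# then Nvidia alone is roughly 6-7% of the whole market.
mag7_share_of_market = 0.35
nvidia_share_of_mag7 = 0.19

nvidia_share_of_market = mag7_share_of_market * nvidia_share_of_mag7
print(f"Nvidia alone is ~{nvidia_share_of_market:.1%} of the US stock market")
```

Which is why a swing in one company's GPU sales can move the whole index.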
It is. Every time Nvidia earnings come around, there is some story, like a take from Barron's, that says,
I love Nvidia, and then everyone else says,
I don't know, I hope that this is good.
It really is like, I hope that this is good.
I think you're right about that.
And the reason Nvidia is making all the money
is that everyone's agreeing to buy GPUs.
Today, and so this will air a couple weeks later than this, obviously.
Amazon said something, that someone, it might have been
Anthropic, forgive me if I'm wrong, but they're using
500,000 Trainium chips,
their own. What happens
if Trainium
takes a meaningful chunk out of Amazon's
spend with Nvidia? That's a chunk of revenue gone. What happens if Microsoft's data center
pullback means that they eventually finish? Because I'm assuming that they are retrofitting
Blackwell chips into their previous servers, I would humor that argument. OpenAI, if Abilene,
Texas goes well, which I don't know if it will, that's $40 billion of revenue, once, for
Nvidia. We are basically saying that Nvidia will continue growing, because it's not like
Nvidia could just keep doing this well.
The market requires growth forever.
With Nvidia, we are saying that within the next
year or two, Nvidia will be making
100 or more billion dollars in
GPU sales, and the year after
that, it will be at 120,
150, a quarter.
And I'm the crazy
one for suggesting that's bad?
And this is all dependent
on one thing, the continued
purchase of GPUs for generative AI.
What happens if
that's not the case? What
happens if, I don't know... Say the efficiency gains are there, say that happens.
Say that Google... they mentioned that one H100 can run one of their Gemini models, I forget
which.
What if that is how they scale?
Wouldn't that mean they need fewer GPUs?
So put aside all of the gains and the growth. Nvidia is just holding everyone up,
and the capital expenditures from the rest of the Magnificent Seven are holding Nvidia up. What happens?
What happens? What happens? The market goes tits up. Do you think the market will go, yeah, well, they're not buying the GPUs and Nvidia's doing badly, but we still love AI? Fuck no. They're going to say, what did we spend all this money on? I'm going under the bed, I'm going to find the pornography you've been looking at. You're all in trouble, because people don't like tech right now. People are pissed at the tech industry. And this is all vibes, man. Because when you look at the numbers, the numbers are bad. So, yeah, the long and short of it is, the reason I am alarmist about this
is these numbers are alarming, and I am shocked and actually kind of disgusted at some of the people in the media for not being more alarmed. Because if things progress the way I really think they will, people's pensions and retirements are going to be fucked. So much lies on this. Retail investors make up a large chunk of the buying for Nvidia recently as well. It's so worrying. And the growth from AI isn't there either. These companies are not making shit-tons of money.
Microsoft, two quarters straight, would tell you their ARR for AI.
I think it was one quarter
they said $10 billion ARR, which is the month times 12.
The next quarter, they said $13 billion.
The quarter after, they just didn't bring it up.
Probably because the growth rate's flat.
What are we doing, man?
I think that you're right that a lot of this trade is predicated on scale working.
And that is a risk.
Because, I mean, what we're hearing from the tech companies is that they're getting
diminishing returns from scale, in terms of making these models bigger, building up the GPU clusters,
training them with more data. There's not the data as well. That's true, that's true. And I think maybe
that's why you see the Scale acquisition at Meta. Insane acquisition. One of the most, like, top-of-
the-market bullshit moves. $14 billion for Alexandr Wang, a labor abuser at scale, I mean lowercase
s there. And on top of that, basically cutting off the fuel supply for multiple companies
for training data at a time when they're running out.
Well, it's interesting because a lot of those companies are,
they're cutting it off on their own.
But yeah, you're right.
OpenAI was moving away.
You're right.
But Google was their biggest customer, and they pulled away.
But I think, just going back to this scaling thing,
everybody is now admitting that there are diminishing returns
from making these models bigger.
And I think we're really going to hit a point
where they're going to say, do I need...
If I'm, you know, okay, I'm just buying, this month,
you know, billions and billions more
Nvidia chips to make my model a little bit better.
Do I need to be doing that?
Just to go back to a conversation that I had a couple weeks, or now a month-plus, ago with
Sergey Brin, where he said he thinks that the improvement in these models is going to be largely
algorithmic, meaning not by adding more GPUs and data, but by actually changing
the algorithms inside these models to make them better, things like reasoning.
I'm just saying that, like, right now... Let's just talk
about it. Right now, within these tech companies, there is a consideration that maybe scaling up these models isn't what's going to get them there. And then there is that risk to Nvidia. And if that goes down, then it could be a problem.
It will go down. Like, that's the thing. At some point, putting aside my feelings about AI, at some point, there will not be enough space. There will not be enough space for these GPUs. There will not be enough space on the earth to fill with them.
There will eventually not be a need to... Are you saying that Micro... Because the assumption here, that this keeps going, is that Nvidia either comes up with a completely... Like the Rubin, for example. Are we meant to believe that everyone who's just getting Blackwell, when Rubin comes out, is going to go, yep, I definitely need that? That is the gamble. And it's just kind of scary, because whether or not AI succeeds... because also the growth isn't there, the software sales aren't there. Even if they made
the software sales profitable tomorrow, the actual revenue is really piss-poor. Like, it's not that much.
Even if OpenAI was profitable, okay, they're the biggest AI company, cool. Are they going to a hundred
billion a year? Bullshit. No. And also, if they made it profitable, someone else would, and they would get price-
fucked. It's just such a brittle industry. There's never been anything of this scale this bad
within tech. You can say the fiber boom, but no, you didn't have every single software company selling a
fiber solution. You didn't have every consumer... because you didn't have apps back then in the same way,
but you didn't have Notepad and Microsoft Word trying to sell you fiber, or saying the new
glory of fiber is here, partly because of the society we lived in at the time. But it's like,
this is bonkers. The argument that the Nvidias would make is that eventually
AI use is going to be so intense that you'll actually need more GPUs to fulfill that demand.
Fascinating. Jensen Huang, I give him credit, he's got great leather jackets. Sounds like he's horrible to work with.
Why do you think horrible to work with? The reports. Like, there have been multiple, multiple reports of it. Like, he's an aggressive CEO. It's probably worse, it's probably better. But he's an aggressive fucking CEO. And he humiliated someone at CES. There was a sound guy at CES and he called him out by name in front of everyone. Disgusting. You have a bazillion dollars. You should be happy to be there. But he'll never be happy. He wants to sell
GPUs. It's, it's frustrating, though. I understand, but also, what's Jensen Huang meant to do? Go
up on stage like, yeah, we're fucked, people are eventually not going to buy these, I should
let you know as the CEO? No, he's not going to say that. He's going to say, yeah, well, there will
always be... He's done it before, he'll do it again. That's the thing, Nvidia will be fine long
term. They're actually positioned well because they make real things. And Jensen Huang is a pretty good
CEO. They have actual innovation there. They have tons of different layers to the company, actual
value creation. They have the monopoly on the consumer graphics market. They do make good stuff.
There's a lot of problems with their consumer hardware right now. Sorry, consumer graphics
hardware right now, where basically they've killed the mid-market. That sucks, but it's
still a business that sells things and owns things. The rest of them, right now, I think it's
more likely that at some point they go, why are we doing this? This is so annoying. This is so
annoying. It's so costly. I think Satya Nadella is also really tired of Sam Altman.
From everything I've heard, by which I mean read, it's not like it's sourced with them, I wish, I'd love to be a fly on the wall in those.
Everything that's been reported, the Journal's done some really good reporting on this, has basically said that that relationship is frayed, because I think Sam Altman thought he had more power than he does.
And in Redmond, you're against the ultra-monopolist. You're against, like, the OG, the Michael Jordan of monopolies.
Like, they beat the antitrust claims with MS-DOS and Windows.
You know we're going to talk about this at some point,
but the conversation, sorry, the story about the whole
threat of antitrust from OpenAI...
I'll just bring it up now, now that you've brought it up. Yeah, it's...
It's just been on my mind ever since I read the story. So right now,
OpenAI, in this wonky thing, is trying to convert part of itself
into a for-profit entity with control from the non-profit board,
which Sam Altman's still on, but whatever.
Part of that conversion requires Microsoft to say okay, and Microsoft says, okay, well, we'll convert in exactly the way... Right now,
49% of shares, and we'll continue having your IP up until you get AGI, which is never, and we also get to sell your models exclusively, and we have all your research too. Sounds great to us. And Sam Altman said, no, actually, you should get 33%, you shouldn't be able to have access to our IP after a certain point. Also, the Windsurf
acquisition, I don't know if that's ever going to happen, because Microsoft is, according to the
Journal's Berber Jin over there... Apparently, the Windsurf acquisition has become a major
problem, because OpenAI is saying, well, we can't give you the IP from them, you compete with
them with Copilot. And Microsoft says, actually, our contract says you have to. We get... and the line in
the article's hilarious. It's like, Microsoft gave the blessing for the Windsurf acquisition
under the current terms. It's just like, yeah, of course they did. And the thing is,
OpenAI has allegedly hinted, by which I mean leaked to the Journal, I assume, I don't have any interior knowledge there, that they were considering an antitrust action against Microsoft. For some reason... People sign away their First Amendment rights in NDAs all the time. Like, people make contracts to give away their rights all the time. It's not anti-competitive because you don't like a contract. Also, even if they filed it today, good luck seeing that shit in front of a judge for three years. You don't have that kind of time. The fact that they're saying that
suggests that things are desperate. Because,
understandably... Microsoft said... Oh, also, OpenAI wants to reduce Microsoft's revenue share.
It's like, I put it in a monologue I recorded today as being like a hostage situation,
putting a gun to your own head and saying, if you don't give me what I want, I'll give you
the hostage and kill myself. Because it is. It's like, the only reason Microsoft
would agree to these terms is because of reputational damage, because Sam Altman believes he is
the most popular, well-liked special boy in the world. And I think he believes
that Microsoft would just roll over.
And Microsoft said, why?
Why should we bother?
We don't have to do that.
And sure, they could work it out.
There's every chance that Microsoft just says, oh, fuck it, I don't care.
But also, why would they?
Why would they do that?
What possible value?
Indeed, now it would be a reputational harm to Microsoft.
It would suggest that Microsoft can't negotiate.
And then The Information had another story that went out, so, a couple weeks ago,
where it was saying that OpenAI has been undercutting
Microsoft in deals, selling their models and undercutting their enterprise subscription deals.
And just making a deal with Google, by the way.
Oh, the Google, well, oh my God, are you talking about the Google compute deal?
This is my favorite deal ever signed.
Okay, here is how the Google deal works.
Open AI is contracting Google for cloud compute.
Google is contracting CoreWeave to serve that compute.
Why would OpenAI not just hire CoreWeave?
Well, I assume Google needs to add some revenue, even if they're probably just losing money on it.
It's the most strange situation I've ever heard.
Just, I feel like we need more tech analysts who just look at the absurdity of all this, because it is absurd.
But no, so within this situation, you've got OpenAI competing with Microsoft to sell their own models and undercutting them.
Microsoft provides all their infrastructure.
Sure, Microsoft probably fears some anti-competitive action if they start taking measures against OpenAI, but Microsoft never...
I don't think that Microsoft has to provide them the discounted, like a quarter of the price, Azure costs, which they, at least as recently as last year, were providing them.
I don't think Microsoft has to give them any of those things.
OpenAI signed a dog shit deal, a really bad deal, that made sense at the time because I assume that they thought this would do something different than it did.
Now they're in a price war.
And what OpenAI is doing, the undercutting thing?
That's a Michael... sorry, the Michael Jordan of monopolies, I should say.
That's a Microsoft move.
A Microsoft move, just to go,
yeah, we're just going to lower the prices until you die.
You can't do that when you lose billions of dollars a year, dickhead.
You've got...
Microsoft does that because they have the ability to just go,
we will pay ourselves using our monopoly over business software.
We will use our monopoly over Azure.
We're one of the three companies
that really makes meaningful cloud revenue.
Like, that's the thing.
Microsoft can bankroll that crap.
Open AI can't.
And on top of that, if OpenAI does an antitrust action...
I think I mentioned it earlier:
2,000 people in Microsoft's legal department.
2,000 people.
You've got a small...
You've got more people working legal at Microsoft
than work at OpenAI all told.
It's just brazen.
And I think that...
I think it could...
There is a chance.
I'm not saying it's for sure.
But Microsoft could kill OpenAI.
Because they need to, by the end of the year,
OpenAI must convert to a for-profit entity,
or SoftBank does not have to give them
more than $20 billion total.
SoftBank's already given them $10 billion.
Another problem. This is a small one. I'm sure this is easily going to be
solved. For SoftBank to give OpenAI that money, and to buy Ampere for, I think, $6 billion or something,
they had to get a one-year, $15 billion convertible bridge loan, even. And they had to go to
21 banks. Did it hurt their credit rating? Yeah, I think that there was a threat. Like, there
was a story saying there was a consideration of it hurting their credit rating. I don't think it's happened. Yeah.
And on top of that, SoftBank does not have the money to do the next $30 billion.
They don't have it.
They would have to raise more money.
Now, another story went out where it was saying that now they're going to the Saudis, and they're going to Reliance, I think, in India.
And it's like, you don't go and do the Saudis unless things are not looking good.
And SoftBank... so if they raise another $30 billion, SoftBank will only be providing $20 billion of that.
So $10 billion will be syndicated.
So Open AI, on top of SoftBank, having to do all of this gumpf to make this happen, to find money that they don't have, they will have to raise $10 billion, one of the largest private rounds of all time.
And if they succeed, they will have to do it again and again and again and again and again.
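The funding arithmetic just described, sketched with the figures quoted in the conversation, taken at face value:

```python
# Of the next $30B tranche described above, SoftBank is said to cover
# $20B, leaving ~$10B to be syndicated to outside investors -- which
# would itself be one of the largest private raises of all time.
next_tranche = 30e9
softbank_portion = 20e9

syndicated = next_tranche - softbank_portion
print(f"To be syndicated to other investors: ${syndicated / 1e9:.0f}B")
```

And, per the projections discussed next, a raise on that order would have to recur for several more years.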
Because OpenAI will be, according to their own projections, burning money until 2029, 2030, when Stargate, which will somehow exist, which will also require another $19 billion from SoftBank that they don't have...
Once that happens, they will go profitable, somehow. It's just really strange that this is considered like an outlier position, versus arguably one of the least stable financial situations in history. And perhaps it's not tantamount to the subprime mortgage crisis, because that was so... that was so clearly, when you saw the fundament of it... I am not an expert in mortgage securities, so forgive
me, but I can't imagine it would have happened in the same way if it happened today, just because
there was more access to information. But in that case, we just had millions of consumers with
loans they couldn't pay off. That was bigger and would have more widespread damage, because
there were people losing their houses, and then it fucked the economy. I don't think this is going
to be super far off when it happens, because of the Mag 7 problem and Nvidia. And what's holding
it up is one company that burns billions of dollars, their sugar daddy out in Japan, run by
Masayoshi Son, who is well known for losing money and making really bad investments.
By the way, another question.
All the reporters talking about the $3 billion a year in agents that SoftBank was going to buy.
Where's the fucking reporting on that?
Absolutely egregious.
Almost as egregious as people claiming Open AI had closed a $40 billion round.
They didn't do that.
They ain't got the money.
No one's got the money.
Why is Open AI raising money for a round that they claim was...
It's just, it frustrates me because people will get hurt.
So let me, what is the best argument against the claims that you're making?
Have you heard one?
Honestly, I would love to.
It's very much, if a frog had wings, it could fly.
It's like, if they get better, sure, if they manage to make this much, much cheaper and they end up working out a thing that could sell really well, sure.
Can I ask you, you run a PR firm?
That's your core business.
What do you mean?
But that's what brings in the most revenue?
With EZPR? No.
I mean, it's spread across the businesses.
Wait, okay, as in, like, media and PR? Yeah, yeah. I mean, how do you, as someone who owns
a PR firm, decide that this is... I'm just curious. This is not... Like, a lot of my business is working
with journalists, okay, to pitch clients to them, right? And I stay away from AI stuff. Like, I
don't work with... I worked with a consumer TV company, I didn't write about anything like that,
for obvious reasons. The thing is, a lot of my business is talking to journalists.
Journalists want to be presented stuff that matters to them,
that comes from a person who's considered and read their work.
The fact is, I consider and read their work all the time.
It's what I've done for, like, 10, 15 fucking years I've been doing this business.
Like, it's the same thing, except I started writing.
And yeah, I've fairly well demonstrated that I understand
what I'm talking about in the writing I do.
And I also firewall that very precisely. So it hasn't hurt the PR firm?
No. Okay. No, and in fact, the clients kind of like it.
They appreciate the fact that I can elucidate that I understand business.
And it's one of those things where, yeah, at some point the media stuff will probably take over.
But I'm just, I'm having a great time doing all of it.
But on top of that, when it comes to doing PR and doing media relations,
so much what PR people don't have is basic knowledge.
And I do pride myself on knowing what I'm fucking talking about.
And it helps.
And it's great.
And also, there are strict firewalls.
CES is a great example.
So I had a client at CES. At the time, I would pitch them for the show... I'd pitch a journalist to come on my show beforehand, before I pitched them the client.
Because I didn't want any possible situation where they thought, for even a second, even though I don't think they'd think this, that them saying no to my client had anything to do with the show.
And there were people that said no to stuff who came on the show.
And it was fine.
Who gives a shit?
Like it's like they are separate entities.
And my clients are very respectful of that as well.
Can we just take a moment of levity? Because the way I first found out about what you do was when,
speaking of CES, I think you told, you told, like, a bunch of people that you would meet them at...
Up Dog. No, it was, they would say, what's Up Dog, and you would say, nothing much, you... Oh, that was so
much fun. They were so pissed. They were so... You got them good. You pantsed someone, they got... No,
that was great as well. What happened there? So what it was, was, I was, I was... I
was heading back to England, I think.
It was a few days before I headed back, and I was getting spammed.
And it was like... I'm not... I went to CES because I think I had a blog at some point that got me in the media system.
They greenlight you automatically.
So I said, okay, I'm just going to respond to these people who have not...
And none of these people had considered who I was for a second, because they just spammed me.
So I'd respond with, like, can you send me more info on Up Dog?
And they'd be like, what's up, dog?
I'm like, nothing much.
What's up with you?
Most of them didn't respond.
Some responded with, I can't believe you'd do this.
I can't...
This is so unprofessional.
One of my favorite tweets of me was like,
oh,
making fun of your piss,
you're a real douchebag.
I have that tweet somewhere.
It's so funny.
Because it's like,
look,
if someone got me like that,
I'd be like,
oh, fuck.
Like,
it's like yesterday I said to,
my dear friend,
Casey Gagawa,
I said to him,
um,
yeah,
I've hit this number of paying subs.
He said,
you'll never eat all those.
And I got so pissed at him
because it was such a good dunk
because he was suggesting
I was talking about sub sandwiches.
Not a great joke,
but he got me good.
If you get done with a funny joke in a professional scenario, you should enjoy the fact that you're not having to talk about business for a second.
I don't judge anyone who fell for that.
You're a fucking PR person emailing a bazillion people.
Laugh with me.
We're all having a good time, or you should be.
Apparently a major agency sent a company-wide email saying, warning, Ed Zitron, which is really funny.
That was your screen name for a while.
It was, it was. That made a good callback.
Real OG fan.
No, it's, yeah, that was really... it was really funny.
I meant no harm with it.
And I think anyone who took it,
anyone who took offense to that, go outside.
Let me ask you this to end.
I mean, we have listeners here
that believe in the power of AI,
are working in it,
are implementing it, or building it,
and some that are concerned about it,
worried about it,
and really are curious about the business side of things.
And sometimes those people overlap.
You've built a sizable audience
among people who are really concerned about this.
And I think that every time we do a show,
about like the downsides of AI, people grab onto it.
I mean, even with the Gary Marcus show, like, there are people that will, like, go in the
comments on YouTube months afterwards and be like, this helped me, like, sort of come
down from all the AI-based fear.
I even got one like that yesterday.
So what do you think, why do you think people are so concerned about this technology and
why do you think the criticism of it resonates the way that it does?
I think there's a few things.
I think one is the most obvious.
which is, I think anyone would be afraid of someone taking their job.
I think it's a natural thing: the thing I have, someone might take it.
And when you have the entire media and most public companies saying,
I can't wait to replace humans, you mean nothing to me.
Yeah, that's scary.
People... When you have Ezra Klein and Kevin Roose saying AGI is just around the corner,
baby, and it's going to change everything,
and they never say how,
that's very scary.
And this is not saying people are stupid or uninformed.
The average person does not have my very special stupid mind, where I'm like, I must learn all the numbers.
And most people don't have the time to sit down. They have jobs, they have families, they have things to do,
more fun things, I imagine. So they see the fucking news and they get scared. And then I think there's a layer
deeper, where tons of people realize that something is... they're being told a line. And they
go, okay, this search is better, my friend talks to it like a therapist, which is worrying.
But they keep seeing them describing,
them referring to, big companies,
Sam Altman, as the next big thing,
and the power of AI.
But when a regular person looks at it,
they go,
this isn't...
this isn't what they're saying it is.
But everyone's saying it is.
And their bosses are saying AI's in everything.
and I think that people
feel this cognitive dissonance
and they feel it profoundly
it's the same way they felt about the metaverse
it's the same way they felt about crypto
AR VR
all of these things
but none
None of those were this pungent.
And you've really just seen companies so horny for the idea of replacing people.
They're so excited.
You, as a CEO... unless you care more about your shareholders and growth, which is... Andy Jassy has an MBA, as do all of them, I think.
I think all of them other than Mark Zuckerberg have MBAs now, all the major big tech CEOs.
I don't know if Jensen does.
Anyway, you...
Wait, let's get this right.
I think Tim Cook does.
Satya Nadella, Sundar Pichai,
he worked at McKinsey.
Okay.
Andy Jassy.
I even think the guy who replaced
Andy Jassy at AWS has an MBA.
Okay.
Pretty sure I'm correct on those.
If I'm wrong,
score me.
But people realize that there's a disconnection
From what's being told
And yet they are very clearly
Seeing how lascivious people are
Around the idea of replacing them
So they have this dual offense
Of you haven't even built the future yet
but you're doing the touchdown dance and you're so proud of the fact you'll replace me.
You're so excited to replace a real person.
Mark Zuckerberg wants you to have fake friends.
Sam Altman wants you to have fake coders.
And then they see that the outputs are kind of shit.
They see that it doesn't really replace people.
It replaces an aspect of labor and a small aspect of labor in exactly the same way that bad bosses mistreat their employees.
Do not value their labor.
I had this thing I wrote called The Era of the Business Idiot, did a three-part episode on it.
And my principal thing is, I believe throughout most power structures, there are people that do not understand work, that do not want to do work, and exist as a kind of ultra-middle manager.
I think Sam Altman is their antichrist, which sounds dramatic, but hear me out.
Sam Altman is the single most gifted business idiot whisperer of all time.
He convinced, look at what he's done.
I think he's reprehensible, a real scumbag, but I cannot, I cannot ignore the work he's doing.
Just... he convinced fucking Oracle to do all these chips.
He convinced Masayoshi Son, Satya Nadella.
Of course he's confident that he can con Microsoft.
I think he's wrong.
Because he's done it before.
He convinced everybody that Generative AI was the future without really proving it.
Someone else did that work for him.
Someone else built ChatGPT.
How many of the people who built ChatGPT are still there?
Ilya Sutskever, respect to the guy for just doing his own scam. Mira Murati, same deal.
There was that Steven Levy piece being like, yeah, they're going to be
an AI thing. And I was like, oh my God, oh my God. And then on top of all of this, you have this
bullshit about AGI, the most fictional of all fictional concepts. I have said this a few times.
It's like having a bunch of billionaires saying they're going to hunt and capture Santa Claus.
We are closer to the Ninja Turtles. I'm deadly fucking serious. I've talked to biologists.
That's about as firm as Sam Altman can get with AGI, too. Because that's the thing.
You have all these people hearing that there's going to be this conscious computer.
And they're fucking scared of that.
Of course they are.
Even though it's a complete lie,
even though it's a falsehood,
because Kevin Roose said he
was at a dinner party
with some other credulous people.
You don't like Kevin Roose.
I think Kevin Roose was very good at his job,
and he has now gone
anti-remote work,
pro-metaverse, pro-NFT.
The Pudgy Penguins column was disgusting.
What was that?
He joined a penguin NFT club.
That was the article.
There was the Helium one as well.
I will say, and I think it's important to note, unlike crypto, Metaverse, AI feels different to me.
It is different.
It is, it seems far more useful.
There are more products.
There are more actual.
I will... Indeed, when this bubble started, I pushed back on people who said, it's just like crypto, it's just like the metaverse, because there was a thing here.
Right.
Was it as big as people said?
No, but not to the egregious level of the Metaverse, which was like, we've made a VR space, this is worth $100 bazillion now.
But with Roose, he did an article about Helium, a crypto company. And then Matt Binder, I believe over at Mashable. You got outplayed by Mashable, man.
Actually, Binder's amazing. Tons of great people there. That's Celia, I think. Anyway, with Roose, he did this story where it was like, yeah, Helium works with Lime and Salesforce.
Turns out they didn't.
Turns out they didn't. Matt Binder went and asked, and they went, no, we didn't. Kevin Roose amended it by saying, skeptics have suggested, or critics have suggested, that this wasn't the case. It's like, motherfucker, come on.
I don't like Kevin Roose because he has this amazing power. He has this huge audience, and he chooses to support the powerful. He did a story about an AI welfare guy being added to Anthropic. Just a Ninja Turtles expert. It's, we will find the ooze, the ooze is here.
I thought that was an interesting story.
I thought it was fucking stupid, because it didn't discuss the welfare of AI. If you discuss the welfare of AGI, if we have a conscious computer, you are describing a slave.
If this thing has consciousness, you now have issues of personhood.
Are you open to the idea that it could be conscious?
I think it could be possible in 30, 50 years.
I think we are...
So you're just now saying you're open to this idea of AI.
I'm open to the idea in the same way that I'm open to the idea of Teenage Mutant Ninja Turtles,
in the sense that if we got the ooze, that could do it.
AGI, we do not have evidence it's possible.
We don't understand how humans think, how the fuck are we meant to create it in a computer.
But say we do, and this is the dirty part of the conversation.
No one wants to have.
Say this succeeds.
Are you saying that this conscious being, that Microsoft owns, by the way,
Microsoft owns this conscious being with intelligence and consciousness and a personality,
are you saying we wouldn't let that free?
Because what you were describing there would be a slave.
Yeah, it should not be the goal.
I know, but that's what they're thinking.
Now, they could say, oh, we'll do it, but we'll make it so its consciousness just focuses on doing whatever Salesforce wants.
still a slave.
Right.
This is the thing.
If Kevin,
I genuinely would have respected Kevin,
had he done that thing
and then had a really like
agonized discussion,
which is genuinely interesting,
saying what would be the ethical ramifications
of owning a conscious thing?
Fascinating.
But doesn't that story that the labs are thinking about this
kick off that conversation?
Like,
I don't think that you can expect...
I'm Wario Amodei.
Okay.
I have decided that...
Okay, go ahead.
I'm never calling him his real name.
Yeah, all right.
Dario, I'm sorry, Dario.
I am him.
I am trying to work out reasons for people to invest in me in the future.
I think probably give, let's call it a million dollar salary,
probably a couple mill more in stock.
I'll make a new guy, a new guy will come in,
and his thing will be AI welfare.
What does that mean?
What if it's life?
We can do a Google Doc, a back-and-forth.
Karen Hao's Empire of AI does an excellent job of discussing
how many of these people fart in a glass and sniff it.
Because they have jobs there where they just say, what if this happens? What if this happens? It's a marketing spend, and it worked. It worked on a guy it's worked on before.
Kevin Roose did an article recently about a company claiming that they were going to replace workers. You know what they hadn't done? Even created the environment they'd do it in.
Kevin can do good journalism. He's done really good work. Young Money, the book he did, is great. Like, there's actual things. Casey Newton's the same way. They're good journalists. They could do good journalism. If they were optimists, they could even engage in actual optimism. That'd be an interesting thing.
The welfare story is a great example. Man, having a conversation in the Times about what's
considered human or not, where have they not been doing that elsewhere? Anyway, I was saying,
it's just, it's this frustrating thing where ultimately the people that suffer will be the people
who depend on the markets for their pensions. The people, the markets do eventually affect the
workforce. And on top of this, the other thing is that we've got major people in the media
hot and heavy over the idea of replacing people. Hot and heavy, they're excited. I think that's
disgraceful on top of it. Who are you fighting for? Who are you writing for? It isn't clear.
All right. I think you and I will disagree on Kevin Roose and on some other things, but I am
glad that we've had this discussion. I don't agree with everything you've said. I think it was good
that we had a conversation where we, you know, brought some of this out there, tested it.
And I think the one thing I'll say is, I leave open the space that you're right.
Yeah.
And that's, and I think that that is why I think you have a very interesting perspective on this.
And I think that's why it was important for us to have this conversation.
And we've talked a good amount about this.
Yes, we have.
And like, I've heard you. I'm really glad to be here. Thank you for having me.
Definitely.
Well, thank you for coming.
Folks, if you're interested in the podcast, it's Better Offline.
The newsletter is wheresyoured.at, and there's also the still alive and kicking EZPR, ezpr.com.
All right, everybody. Thank you, Ed.
Thank you.
Thank you for watching or listening, and we'll see you next time on Big Technology Podcast.