Front Burner - Is AI a bubble that's about to burst?
Episode Date: August 12, 2024

ChatGPT took the world by storm when it launched in November of 2022, prompting massive investment in generative AI technology as tech companies rushed to capitalize on the hype. But nearly two years and billions of dollars later, the technology seems to be plateauing — and it's still not profitable. After tech stocks took a hit in early August, concerns are growing in both the tech press and on Wall Street that generative AI may be a bubble, and that it may soon burst.

Paris Marx — author of the newsletter Disconnect and host of the podcast Tech Won't Save Us — has been warning about this for a long time. He explains why, and what these recurring hype cycles tell us about a tech industry increasingly focused on value for shareholders over good products for users.

For transcripts of Front Burner, please visit: https://www.cbc.ca/radio/frontburner/transcripts
Transcript
In the Dragon's Den, a simple pitch can lead to a life-changing connection.
Watch new episodes of Dragon's Den free on CBC Gem. Brought to you in part by National Angel
Capital Organization, empowering Canada's entrepreneurs through angel investment and
industry connections. This is a CBC Podcast.
Hi, I'm Stephanie Skanderis, in for Jayme Poisson.
So it seems like everywhere you look these days, everything is powered by AI.
Adobe is using the power of generative AI to deliver the most advanced and precise editing tools ever in Premiere Pro.
You can build an AI-powered Chrome extension in literally minutes.
This is a $1,400 AI-powered motorized electric shoe.
This is a very specific type of technology called generative AI. And the current hype around it really kicked off in November 2022 with the launch of ChatGPT.
With ChatGPT, I can finally get the answers I need fast and easy.
Basically, these are complex mathematical models that use huge training data sets to generate responses to prompts, text, images, even video. And there was all sorts
of hype and big predictions about what the technology would one day be able to do. It'll
automate your job. It'll replace artists. It'll be the first step toward truly intelligent machines.
But almost two years later, it looks like generative AI might be hitting a bit of a
plateau. On top of that, it hasn't been very profitable for tech companies that have made
massive investments in it. That has a lot of tech journalists and market watchers concerned
that this is a classic economic bubble. And that bubble may be showing its first signs of popping.
This is something Paris Marx has been warning about for quite a while now.
He writes the newsletter Disconnect and hosts the podcast Tech Won't Save Us.
He's with me on the show today to explain why the AI boom might in fact be a bubble
and where the industry goes next after it bursts.
Hi, Paris. Thank you so much for being here.
Absolutely. Great to join you.
Okay. So I feel, Paris, at this point, we've all heard about generative AI,
ChatGPT, seeing it popping up in all these everyday tools like search engines.
And we've definitely heard and read that it's the future.
Like this is coming whether we like it or not.
So to me, it seems like a real sudden shift to then hear that it's a bubble and the bubble might be bursting.
Why might it be a bubble?
Yeah, I think a lot of people will probably feel the same way that you do, right? That,
you know, these technologies were supposed to change everything, ChatGPT, the image generators,
and all of a sudden it looks like, oh, maybe that's not the case anymore. But there are some
real tangible reasons for it, right? There were a lot of promises made about the technology early
on, you know, in the months after they were introduced. And now that they've been around for almost two years, we're starting
to see that a lot of those things are not really being realized by the companies. And this year,
in particular, we've seen, you know, a lot of the big promoters of generative AI really scale back
the promises that they're making. If you look at some of the announcements being made by companies
like OpenAI or Google recently, the features that they're showing off are not things that are going to change the world,
but it's like, you know, how to more efficiently use a spreadsheet or maybe get some new features in like a, you know, a voice assistant or something like that.
Give me a creative alliteration about these.
Creative crayons color cheerfully. They certainly craft colorful creations.
For the past couple of years, we've been very focused on improving the intelligence of these
models and they've gotten pretty good.
But this is the first time that we are really making a huge step forward when it comes to
the ease of use.
It's much more basic things that we're used to seeing from these companies,
not really world changing things. And the biggest piece of this, of course, is that
these technologies, they require a lot of data in order to run them. They've scraped all of this
stuff off of the web in order to train these models. So they're very computationally intensive
and energy intensive. And that means they're very expensive to run. And so there are growing
questions about whether, you know, these companies and these tools can actually generate the revenue they need
to pay for all the computation that they need to run them.
And how prevalent is this bubble idea now? Like, how big is this theory for people who
are watching the tech world?
Yeah, it's very common. You know, people like me have been talking about it as a bubble for
a long time. But it's interesting now, you know, Goldman Sachs had a report out last month, the major
investment bank explicitly calling it a bubble and, you know, questioning the productivity
benefits that would come from it and saying that, you know, investors can make money from
this, but we think it's very inflated and that these numbers are going to come down.
Sequoia Capital, which is one of the biggest venture capital firms in Silicon Valley, also
had a report out recently talking about it as a bubble and saying that these tools need to basically find
$600 billion in revenue if they're going to be profitable. The Information, a major tech
publication, was recently talking about how OpenAI could lose $5 billion this year alone because it's
making so little money, but what it's offering to the public is so expensive to run. So this is becoming like
a pretty common concern at this point. And I think you've really seen it gain a lot more
traction over the past couple of weeks as we've seen these challenges in the stock market after
these major drops that have somewhat recovered at this point. But there's still concern that
these things could drop a lot again very soon.
The money is a huge part of this.
And some of these numbers that you're throwing around, I mean, to me, they seem mind blowing.
So just how much money is the industry spending on generative AI?
How much money is there in this?
Yeah, billions and billions and billions of dollars, right?
And if you think about it, you know, there's a particular reason why this happened.
So, you know, if you think back to a couple of years ago, all of the tech industry or a lot of it was interested in crypto, right?
And there was a lot of money put behind cryptocurrencies and NFTs and these promises that these were going to be like the future of the
financial system or whatever. And then we saw that begin to crash in November of 2021. You know,
a lot of people lost money, a lot of companies went bankrupt, a lot of people were scammed,
you know, Sam Bankman-Fried, who ran FTX, is now in prison for fraud.
Sam Bankman-Fried has been sentenced to 25 years for what's been called one of the biggest
financial crimes in US history.
These are the live pictures from...
But at that moment, we also saw governments really start to raise interest rates for the
first time in almost 15 years in a really substantive way.
And so that meant that the tech industry had, you know, this access to easy capital turned
off really quickly.
And it was at a moment when the industry was already kind of on a downslope because you
saw cryptocurrencies crash and the metaverse was not becoming the next big thing.
So they needed something else.
And then when ChatGPT came along, that gave the whole industry this reason to go really
heavy into AI, to position this as the future, to make a lot of
big promises about it, but also to drive the share prices up and to drive investment at a time when,
you know, interest rates were going up and things were shaky. And so now we're almost two years into
that cycle. And the questions are naturally emerging about what is actually going to come
of AI and whether this can be a real business. And so that's why, you know, after pouring in
tens and hundreds of billions of dollars, you know, investors are asking, hey, how are we going
to make money from this? And the companies don't have a lot of answers. Well, that's also what I'm
wondering, because all the money flowing in, how much is coming back out, like in terms of actual
cash flow, how much of that is generated by the AI products themselves? Yeah, not a lot, right?
A lot of these things have been free or like offered as part of something that you might
already use, especially when they're coming from a major tech company. You know, think of what
Google is offering and how it's just kind of implementing these things in its products,
or say Apple is supposed to be launching some new generative AI features in the fall
that will just be kind of part of using one of its devices.
There are companies like OpenAI and Microsoft and even Google that do offer some of these
tools as part of like their cloud computing services or things that consumers can sign
up to through OpenAI, for example, to get certain features with ChatGPT.
But again, the amount of revenue that they're making is nothing near what it actually costs to run these things. That's not a way to, you know, have a profitable
business. And part of the reason for that is just how expensive it is. There's estimates that,
you know, if you think about doing a Google search, for example, you know, that's something
that we're all used to doing, we do quite often, likely multiple times a day, right? But if you
replace something like that with, you know, a conversation with a chatbot, the estimates are you're using 10 to
even 30 times as much energy just to get those same level of responses, right? And so the question
is, does it make sense then to replace so much of this with much more computationally and energy
intensive generative AI, if some of these things just work perfectly fine as they are? The energy stuff is something I'm also very interested in diving into. But just
back on the money thing for one sec, because talking about that relationship between OpenAI
and Microsoft, is it fair to say that some of these smaller companies are being propped up by
the bigger ones, like OpenAI being propped up by Microsoft? Yeah, that's absolutely the case. And
we see with a lot of these supposed AI startups that they actually have a lot of investment from these
major tech companies, in particular, Amazon, Microsoft, and Google, because those companies
are the major cloud providers. So they have a lot of massive data centers around the world,
which are essentially big warehouses filled with tens of thousands of computer servers that run a lot of what happens on the web.
And they're in the process of doing a massive expansion of those data center networks, you
know, pouring tens of billions of dollars into expanding them around the world.
And those AI startups need access to all that computational power that Amazon, Microsoft
and Google have in order to run their AI tools. And so if you look
at a deal between, like, Microsoft and OpenAI, without, you know, owning
OpenAI outright, Microsoft has a lot of power over how that company actually works. It has a lot of rights
to the technologies that they're developing. And in exchange for, you know, having all of that
influence over OpenAI, it gives them kind of like preferential access and discounts on access to all the computation that that company needs to run its tools.
We have no current plans to make revenue.
We have no idea how we may one day generate revenue.
We have made a soft promise to investors that once we've built this sort of generally intelligent system, basically we will ask it to figure out a way
to generate an investment return for you.
There are so many other issues aside
from the money stuff. Like ChatGPT, if we just look at this one that kind of took over everybody's
thinking over the last couple of years. At first, it was sort of blowing a lot of people away with
what it could do, what the potential of it was. And then all of these
problems came out. It was getting so many things wrong. It was racist, things like that. Like the
problems seemed endless. And now here we are two years later, have the AI companies been able to
overcome those fundamental problems? No, absolutely not. You know, and we see many
examples of this, right? OpenAI has been continually
releasing new models and updating models. And it says that as it releases these new models,
trained on more and more data, you know, the models are larger and larger, they say that that
is supposed to make things better. But there have been, you know, a lot of reports from users that
they find the tools are actually not working as well as they used to in the past.
And then, of course, we see when some of these tools get rolled out to the public, that is when we really start to run into the problems that they're having, right?
A lot of these companies talk a lot about hallucinations as something that these models do.
Really, I think that's a marketing term for false information, right?
If you do a Google search in the United States right now, often you're going to get what's called an AI overview, you know, before you start to see all the links that you
usually see from Google. And that's something generated by Google's AI models. And when that
tool was rolled out a few months ago, people immediately started to see that, you know,
it couldn't figure out what the truth was in so many, you know, very basic prompts, right? It was
recommending that people eat a certain number of rocks per day, put glue on their pizza, and all this kind
of stuff, right? And, you know, an average person would say, this is ridiculous. But these tools
were often pulling a lot of information off of Reddit, where people were making jokes, and then
just, you know, it couldn't figure out that joke or sarcasm was not, you know, something true that
someone was actually saying or recommending. And so then it was putting it in its search results.
And Google is like this platform that, you know, we tend to trust, we tend to go to for information.
And all of a sudden, it is directly feeding us all of these lies and all this false information
and all this potentially dangerous information. And you can really see the problem with these
tools and how, after almost two years, they haven't overcome these fundamental problems.
And as part of that, I guess, it's just hitting the limits of what is possible?
Oh, definitely. You know, the argument that these companies were making was that as you make the
models bigger and bigger, as you use more and more data, they are going to get better and better.
Until, you know, even some of these people would say until the machines couldn't, you know, basically think like a human, right?
That the computers are going to replicate a human brain. There are a lot of people who question that
from the beginning. And I think as we get further and further into the cycle, we can see that that
was always kind of ridiculous science fictional thinking, not real things. But there was an
interesting report out a couple of months ago that surveyed a bunch of
C-suite executives and asked them, you know, what they expected generative AI tools to do for their
companies. And 96% said they thought it would improve productivity. And then the same researchers
talked to employees who are using generative AI tools. And 77% of the employees said that they
found it added to their workload and made them less productive. So there's a real disconnect
between what people think it might do, you know, if they have the incentive to try to
roll it out and what is actually happening there. And then the physical limits as well, which you
touched on before. And I just want to come back to you to understand this better, because we've
heard about how the electricity demands are going to start to outpace what local power grids can
supply. So how does that impact this technology?
Yeah, definitely. So that's becoming a bigger problem as well. And that really relates back
to the question of the data centers, right? So Amazon, Microsoft, and Google in particular,
but also some other major tech companies and companies beyond that are building out these
major data centers around the world in order to power all of these technologies that we rely on,
right? And generative AI in particular is one of the major drivers at the moment
because it does require so much computation, so much energy in order to run it.
Take a look at Google's latest environmental report.
It revealed that its greenhouse gas emissions
jumped nearly 50% in the past five years on the back of its expansion
of data centers that power AI systems.
Microsoft's 2023 emissions were nearly 30% above 2020 levels.
Amazon's carbon emissions, they jumped 40% from 2019 to 2021.
And so in places like Ireland, for example, where a lot of these are being built,
21% of all of the electricity that they generate now goes to powering data centers, which is more than
all the urban homes there, you know, combined. And this is becoming a problem in many other
jurisdictions where these technologies are being rolled out as well, whether it is the amount of
energy that they require and the threats that that causes for the energy grid, or even just
making it harder to, like, you know, reduce the reliance on fossil fuel energy
as you're trying to, you know, replace it with renewables to meet climate targets,
or also the amount of water that is needed to cool these things, right? Because if you think
about a computer, even having your own laptop or desktop computer, you can think about how hot
that can run, right? If it's doing intensive tasks. And now just imagine putting tens or hundreds of thousands
of those computers in one building, you know, that requires a lot of cooling. And that's where
a lot of the power and the water is needed.
What about the impact on the internet overall?
You were talking about hallucinations, the models making stuff up.
People have also been actively using generative AI to make things like deep fakes and to promote disinformation.
I've watched one of me on a couple of shows.
I said, when the hell did I say that?
But all kidding aside, three seconds recording your voice and generate an impersonation good enough to fool, you know, I was going to say your family, fool you. I swear to God, take a look at
it. Like, is there an argument that these tools have actively made the internet less trustworthy?
Yeah, I definitely think so.
And in many ways, right?
Kind of the more obvious ways are producing false information and, you know, making it
easier for people to make deep fakes of people, which is a serious problem that I think is
under-considered at the moment.
But then the other piece of this as well is that, you know, we see an increasing desire to roll out generative AI technologies in, say,
you know, public service delivery or in militaries or, you know, in other areas where
the potential false information that can come out of these models can have some really serious
consequences. And then the other piece of this is, of course, you know, we've all been reliant
on these major online platforms for a long time now, whether it is Amazon for shopping or, you know, Facebook or Twitter for social media or Google as a search engine.
And as we see this generative AI push take hold, what we're seeing is a lot of these companies actually degrading the experience of using their platform because they're trying to, you know, continue to increase their profits
and increase their revenues.
And that means that, you know, there needs to be a greater push on rolling out more advertising
or getting us to look at more ads or trying to restrict what information people can access
so that they can sell it to generative AI tools in order to try to maximize the amount
of revenue they make from it.
So overall, you know, we had this promise that the internet was going to be this incredible place
where we were going to have access to all this information
and get to talk to so many people around us.
But increasingly, we're seeing the benefits of that,
the information, the communication become eroded.
And the companies don't really have a solution of how they're going to make that better.
I guess what I'm thinking of is the dot-com bubble burst, but the internet
is still here.
Is that kind of where we are with the generative AI stuff?
Like, do you think, what part of the AI bubble do you think, I don't know how to say this,
like, how much of the AI that is out there right now is part of the bubble that you think
could burst?
Because I don't think it's everything, right? No, absolutely not. And, you know, if we think about AI in general,
this is a term that's been around since the 50s. AI is not a wholly new thing. It's generative AI
in particular that is the more novel part of this and that all the hype is surrounding. And so if we
think about what is going to happen when an AI bubble potentially bursts, I don't think everything goes away, but I do think companies will stop rushing to put generative AI into absolutely everything.
Because the reason that happens right now is that every company that talks about generative AI is
probably going to see a boost to their share price in the short term. And so that's why you see a lot
of this stuff happening. It's like in previous cycles, when, you know, a lot of companies would
talk about the metaverse, or a lot of companies would talk about NFTs and crypto, just because
that was the hype at the moment.
And there would be a boost to the share price just because it was mentioned, say, on an investor call.
That's kind of what we're seeing right now.
And I think it does pose a deeper question when it comes to the model of the tech industry in general, where it always seems to be chasing these bubbles.
But the benefits that we as a public receive, you know, as each of these
bubbles happens tends to diminish further and further. Like when we think back to the dot com
boom and bust, certainly, that was a problem. Certainly, people lost money. Certainly,
there were a lot of companies that were destroyed as a result of it. But like the early stages of
the internet, and a lot of what was being rolled out had very clear, you know, benefits to the
public as we were getting access to social media and e-commerce and, you know, all of these other benefits that that came with going online.
But it feels like increasingly, you know, whether it's cryptocurrencies or the metaverse or even generative AI, which is a bit more tangible than those previous ones,
there's a lot of benefits that are promised, but not as much is actually being realized or, you know, ends up having a long-term impact on how we use the web in a positive way.
So where are we in this kind of cycle?
Because I know you and some other journalists have been calling this a bubble for a long time.
You were ahead of that, I think.
But then over the last few weeks, we've been seeing that sentiment become a lot more mainstream,
like seeing these big tech stocks like Microsoft, Apple, Google,
Amazon, NVIDIA. They all took these huge hits in this kind of mini crash in the stock market at the start of August. From the opening bell, stocks plummeted, led lower by big tech companies like
NVIDIA and Apple, which had soared for months on AI-related hype.
Nearly $2 trillion in equity losses today.
That led to a ton of articles and headlines calling AI a bubble.
Do you think that this is the start of the bubble bursting?
It's possible.
It feels like you can only fully know when the bubble starts to burst in hindsight, looking back, unless there's this like massive drop that is just super obvious. And it looked like potentially that's what we were seeing
a week and a half ago when things really started to drop down, but they've since recovered.
But we know that, you know, there's still a lot of concern about what's going to go on there.
And ultimately, I think when we look at the bubbles and when they're going to burst,
a lot of that relies on the confidence of a lot of institutional investors and things like that. And when they start to really sell off,
and that causes a larger sell off in the market, right? Often, it's not, you know, what smaller
commentators or what the wider public starts to think. It really comes down to a small number of very influential people who decide when to sell. But I do think, you know, that there is a bubble here and that there is going to be a correction, and
that we're getting closer to that. But when exactly that moment is going to hit,
I think that one's a bit hard to predict. And we talked so much about the money,
what happens then? Because we know the massive amounts of money the tech industry has tied up in this. The tech industry accounts for about 10% of the American economy. So what kind of
economic impact is it
going to have if this blows up in these companies' faces? Yeah, it's huge, right? Like, you know,
when the crypto bubble burst, we saw trillions of dollars being wiped out then. A lot of people
would say, you know, a lot of that was probably fictional money, you know, in cryptocurrencies.
But with generative AI, this is much more tangible stuff. And so once this
bubble does ultimately burst, like when any bubble bursts, we will clearly see companies, you know,
go bankrupt, certainly some of these AI startups, I'm sure people will lose jobs as a result of it,
we've already seen a lot of job losses in the tech industry over the past couple of years,
as interest rates have been raised and investors have been demanding, you know, higher
profits and revenues from these major tech companies, and they've laid off many
thousands of workers. But one thing that really stood out to me was, you know, after the early part
of the pandemic, Amazon was really planning to build out a lot of warehouses for its e-commerce
side of things. And then when people started to, you know, shop more at stores physically again,
and go out into the world, they realized that they had overestimated the demand for,
you know, e-commerce services into the future and pulled back and canceled a lot of those
warehouse projects. So I wouldn't be surprised if we see some moderation on the build out of
data centers around the world. But that doesn't mean that they won't continue to expand into the future. And I'm sure like we're seeing in Ireland and places like that,
there will be a growing pushback against those as they do get built out.
This is a pretty stark picture you're painting of where things could go from here.
We've got an entire industry right now from startups all the way to the giants that have thrown huge amounts of money at this technology.
And then after almost two years, it's only kind of useful.
And from what you've been saying, it may not get any better than this.
So how did we end up here? How did it get to this point?
Yeah, that's a really great question. You know, I think it's always interesting to think back
about the way that the tech industry has structured itself for quite a long time,
right? You talked earlier about the dot-com bubble and then, you know, how that burst in
the early 2000s. And there's an author, Malcolm Harris,
who wrote this book, Palo Alto, where he basically argues that the lesson the tech industry took from
that moment, that big inflation of the values of tech stocks and then their crash was not so much
to never do this again, but rather that it was a good model to keep doing by hyping up a certain
technology, getting the values of
certain stocks to really increase, and then knowing when to get out at the exact precise
moment before it crashes again, and just to kind of keep doing that again and again and again.
And that is what we've seen with a lot of technologies, right? You know, if we go back
to the mid-2010s, there were a lot of promises about how self-driving cars were going to wipe
out, you know,
a ton of jobs in, say, trucking and taxi driving and all this kind of stuff. And that never really
happened, right? We have seen a significant increase in deliveries. You know, Uber is still a very
popular thing. There's still a lot of people doing that work. Rather, you know, just the conditions
of their work changed because of how the technologies were rolled out. And we've seen that time and again, right? Crypto didn't transform the
financial industry in a really major way. And we're not, you know, living in the metaverse now,
as Mark Zuckerberg predicted. Imagine you put on your glasses or headset and you're
instantly in your home space. It has parts of your physical home recreated virtually.
So there's many times that the tech industry has made these claims about how it's going to
transform the world and then not been able to follow through on that. But for the moment when
we got really excited about it, there was a lot of investment driven. Stocks went up a lot. A lot
of people made money when they sold at the right moment. And so it looks like the tech industry, you know, finds this model to be one that works
for them.
And I think that on our end, we're not really getting the benefits of that.
You know, if we're looking at crypto or metaverse or generative AI, certainly there are some
benefits of generative AI, but I think they're much smaller than what we used to get from
these digital technologies earlier on in, you know in the era of the smartphone and the internet
than we've received today.
And I think that is worth kind of reflecting on whether this tech industry that gets so
much attention, so much investment is really serving the broader society at this point,
or whether it's just about who can make the most money in the shortest period of time.
Paris, thank you so much for this.
I really appreciate it.
Absolutely.
Great to talk to you.
That's all for today.
Thanks so much for listening to Front Burner.
I'm Stephanie Skanderis.
Jamie will be back tomorrow.