Daybreak - If AI is really changing everything… where’s the evidence?
Episode Date: October 20, 2025

Is the AI revolution already running out of steam? Despite years of hype about a world transformed by smart tools and endless innovation, the data tells a quieter story. The growth is flat, the excitement is fading, and there have been fewer breakthroughs than expected. Has AI already peaked, or are we just looking in the wrong places? In today's episode, we dive into one of The Ken's most thought-provoking essays by Praveen Gopala Krishnan, 'What does an AI bubble burst look like?'

Tune in.

Daybreak is produced from the newsroom of The Ken, India's first subscriber-only business news platform. Subscribe for more exclusive, deeply-reported, and analytical business stories.
Transcript
Hi, this is Rohin Dharmakumar.
If you've heard any of the Ken's podcasts, you've probably heard me, my interruptions, my analogies,
and my contrarian takes on most topics.
And you might rightly be wondering why am I interrupting this episode too.
It's for a special announcement.
For the last few months, I and Sita Raman Ganesh, my colleague and the Ken's deputy editor,
have been working on an ambitious new podcast.
It's called Intermission.
We want to tell the secret sauce stories of India's greatest companies.
Stories of how they were born, how they fought to survive, how they built their
organizations and culture, how they managed to innovate and thrive over decades, and most
importantly, how they're poised today.
To do that, Sita and I have been reading books, poring over reports, going through financial
statements, digging up archives, and talking to dozens of people.
And if that wasn't enough, we also decided to throw video into the mix.
Yes, you heard that right. Intermission has also had to find its footing in the world of
multi-camera shoots in professional studios, laborious editing, and extensive post-production.
Sita and I are still reeling from the intensity of our first studio recording.
Intermission launches on March 23rd. To get an alert, as soon as we release our first episode,
please follow Intermission on Spotify and Apple Podcasts,
or subscribe to the Ken's YouTube channel.
You can find all of the links at the ken.com slash I-M.
With that, back to your episode.
Hi there.
So today you are in for a treat
because I am moving away from regular programming
and I'm going to be reading out a really compelling edition
of one of the Ken's most popular subscriber-only newsletters,
The Nut Graph, written by my colleague Praveen Gopala Krishnan.
And it is titled,
'What does an AI bubble burst look like?' It basically revolves around a simple question.
If AI is really changing everything, where is the evidence? For years, we've been told that
tools like ChatGPT would spark a revolution, millions of new apps, businesses and ideas flooding
our world. But what if that revolution actually never happened? What if the charts are flat,
the promise is fading, and the hype has already peaked?
In this newsletter, Praveen dug into the strange disconnect
between what AI should be doing and what is really happening
and why the answers might say more about us, our economy and our expectations
than about the tech itself.
Welcome to Daybreak, a business podcast from the Ken.
I'm your host, Snigdha Sharma, and I don't chase the news cycle.
Instead, every day of the week, my colleague Rachel Gargiz and I will come to you with one business story that is worth understanding and worth your time.
Generally, there are two ways to predict the future. One is to observe what people are doing
and then state that what is happening will continue to happen. You can extrapolate what will
happen if the thing that is going on becomes more and more popular and if it takes off.
For most of the past few years, this is how people have been talking about AI. Someone uses it to,
say, write some code, and then they think, wow, this is amazing.
They look around and see everyone else doing the same.
So they loudly wonder, imagine what will happen if everyone uses AI to write code.
Some will write a tweet thread about how everyone can become a coder now.
Another person jumps in with predictions that this will be the end of human coders.
A third person, usually a VC, will write a blog post that a world where anyone can write code
is a fundamentally more equal world because it quote-unquote democratizes creation,
which will lead to great outcomes for everyone.
But there is another way to talk about the future,
and that is to start from the exact opposite starting position.
The normal way is to start from a verifiable action,
that is, someone is using AI to do something and extend that into a speculative question.
That is, what happens when everyone does the action?
Instead, a more powerful way is to just flip it.
You start from a speculative question and then go looking for a verifiable truth.
In other words, you assume that the outcome that should be happening is already happening
and you look for evidence to support it.
And well, this is what Mike Judge, a middle-aged programming nerd and an early adopter
of AI coding, did.
What he found out, in his words, made him very angry.
Here is the crux of his hypothesis, which he wrote about on Substack, and I'm quoting,
If so many developers are so extraordinarily productive using these tools, where is the flood
of shovelware? We should be seeing apps of all shapes and sizes, video games, new websites,
mobile apps, software as service apps. We should be drowning in choice. We should be in the middle
of an indie software revolution. We should be seeing 10,000 Tetris clones on Steam.
Consider this.
With all you know about AI-assisted coding and its wide adoption,
if I showed you charts and graphs of new software releases across the world,
what shape of that graph would you expect?
Surely, you'd be seeing an exponential growth up and to the right as adoption took hold
and people started producing more.
End quote.
Judge says in his Substack post that he spent a ton of time and effort
trying to find out whether there really was an uptick
in the amount of software that was being created.
As you can probably guess, all the graphs were utterly uninspiring and flat across every major
sector in software development.
His point is that if we are truly in the middle of a coding revolution, you would expect
a huge uptick in software being created, either as apps or as code repository commits.
But there is no indication that this is happening right now.
You should read his post in full because he also
lays out the counterarguments and addresses them.
You will find the link in the show notes of this episode.
Anyway, my larger point is that the vibe around AI
seems to have definitely shifted a bit in the last few months.
I guess it all began with the much-anticipated GPT-5 launch by OpenAI in August.
There were a ton of expectations placed on it
and well, the results were disappointing.
It is not that GPT-5 was worse,
it is that it wasn't noticeably better,
and the whispers got much louder.
What happens if AI doesn't get much better than this anytime soon?
There are other signals that we are living in an era of peak AI hype.
There is an MIT study which found that 95% of all generative AI pilots are failing.
AI is still hallucinating stuff and making fundamental mistakes.
And increasingly, it is getting easier to tell who is using AI to craft entire posts on
LinkedIn. Just look for phrases like "and then it hit me", or variants of "silently" and "quietly"
used as adverbs. Then there are people like Mike Judge who are asking the most important question,
that is, if AI is truly working, well, where are the results? Now, at least one important person
seems to be saying the same thing. Let me read you a passage from an article from Futurism
about Microsoft CEO admitting that AI is generating basically no value.
And I'm quoting, to Nadella, the proof is in the pudding.
If AI actually has economic potential, he argued,
it will be clear when it starts generating measurable value.
He said, so the first thing that we all have to do is when we say that this is an industrial
revolution, let us have that industrial revolution type of growth.
Nadella said the real benchmark is the world growing at 10%.
Suddenly, productivity goes up and the economy is growing at a faster rate.
When that happens, we will be fine as an industry.
Needless to say, we haven't seen anything like that yet.
OpenAI's top AI agent, the tech that people like OpenAI CEO Sam Altman say
is poised to upend the economy, still moves at a snail's pace and requires constant supervision.
End quote.
So are we in the middle of an AI bubble?
Probably.
And that is not just my opinion.
Even Sam Altman believes that.
Right now, there are 1,300 AI startups globally that are valued at over $100 million.
That is a pretty big number of startups with little to show for it in terms of results.
At some point, sooner rather than later, a correction is due.
And when that happens, there will be consequences.
Obviously, the most immediate effect will be economic.
As Christopher Mims, the Wall Street Journal columnist, noted, and I'm quoting,
the AI infrastructure build out is so gigantic that in the past six months,
it contributed more to the growth of the US economy than all of consumer spending.
End quote.
As a percentage of GDP, spending on AI infrastructure has already exceeded spending on telecom
and internet infrastructure from the dot-com boom.
This is what is driving US growth right now.
If that bubble pops, even for a little while,
the economic effect is going to be catastrophic.
But honestly, the bigger outcome isn't economic at all, but social.
Langdon Winner, who I have written about before,
is, in my mind, probably the most important academic figure
who wrote about the effect of automation and technology on politics and society.
And he did it at a time of great techno-optimism in an era where nobody could even imagine what it would become.
Last month, I picked up one of his books, The Whale and the Reactor:
A Search for Limits in an Age of High Technology.
Originally published in 1986, the book is about the choices we make as a society
around the technology we want to use and what that does to us.
Honestly, some of the stuff that he wrote nearly half a century ago gives me goosebumps
when I read them in 2025.
It is incredibly prescient.
There is a chapter in the book titled Myth Information, where Winner dissects the argument
of enthusiastic technologists who believe that computers will provide humans with so much
information, which will then become knowledge, which will lead to power, which will lead
to democracy.
According to Winner, this is a mirage and social progress will never logically come through technological development.
I am going to read out the last paragraph in that chapter.
Once again, this was written in 1986.
And I'm quoting, computerization resembles other vast but largely unconscious experiments in modern social and technological history,
experiments of the kind noted in earlier chapters.
Following a step-by-step process of instrumental improvements, society creates new institutions,
new patterns of behavior, new sensibilities, new contexts for exercise of power.
Calling such changes revolutionary, we tacitly acknowledge that these are matters that
require reflection, possibly even strong public action, to ensure that the outcomes are desirable.
But the occasions for reflection, debate and public choice are extremely rare indeed.
The important decisions are left in private hands inspired by narrowly focused
economic motives. While many recognize that these decisions have profound consequences for our
common life, few seem prepared to own up to that fact. Some observers forecast that the
computer revolution will eventually be guided by new wonders in artificial intelligence,
its present course is influenced by something much more familiar: the absent mind. End quote.
Winner is not the only person who believes that the political and
social fallout is going to be far more damaging than the economic one. Aaron Benanav,
a much more contemporary writer and author of the book, Automation and the Future of Work,
has a similar conclusion. His argument is that the real threat posed by generative AI is not
that it will eliminate work on a mass scale, rendering human labor obsolete.
It is that, left unchecked, it will continue to transform work in ways that deepen precarity,
intensify surveillance and widen existing inequalities.
In his argument, a world where AI has limits is a world where firms will increasingly use it
for lowering costs, disciplining workers, and consolidating profits.
So, the most important question before us is not about what an AI bubble burst looks like,
but whether we are truly willing to confront and change how we want this technology to move forward
and in which direction it should when it does.
Daybreak is produced from the newsroom of the Ken, India's first subscriber-focused business news platform.
What you're listening to is just a small sample of our subscriber-only offerings,
and a full subscription offers daily long-form feature stories, newsletters and a whole bunch of premium podcasts.
To subscribe, head to the Ken.com and click on the red subscribe button
at the top of the website.
Today's episode was hosted and produced by my colleague,
Snigdha Sharma, and edited by Rath CN.
