The Dividend Cafe - AI Productivity and Bounced Checks
Episode Date: February 13, 2026
Today's Post - https://bahnsen.co/4cpsVcz
In this episode of the Dividend Cafe, host David Bahnsen discusses the intersection of artificial intelligence (AI) and economic productivity. Speaking from Orlando, Florida, David examines the potential and vulnerabilities of AI as an investment theme. He highlights the need for a deeper understanding of AI's impact on productivity and critiques the current optimism surrounding AI investments. David reflects on past tech investment bubbles, specifically the dot-com era, to draw parallels with the present AI investment climate. Emphasizing the importance of prudent judgment and strategic planning, he cautions against overestimating the immediate economic benefits of AI while advocating for a long-term, judicious approach to AI-driven technology.
00:00 Introduction and Conference Update
00:44 AI Investment Themes and Vulnerabilities
05:00 Economic Productivity and AI
08:32 Studies and Reports on AI Productivity
11:35 Historical Parallels: AI and the Dotcom Bubble
14:58 Investment Strategies and Risks
19:07 Conclusion and Final Thoughts
Links mentioned in this episode:
DividendCafe.com
TheBahnsenGroup.com
Transcript
Welcome to the Dividend Cafe weekly market commentary focused on dividends in your portfolio and dividends in your understanding of economic life.
Hello and welcome to the Dividend Cafe. I am your host, David Bahnsen, and I am recording from beautiful Orlando, Florida, where I've been speaking at a conference the last couple of days, and I'm leaving momentarily to head back to New York City, but not before talking to you all about the subject of AI productivity.
I have spent a lot of time in the Dividend Cafe talking about AI, and I will be spending a lot more time talking about it for good reason.
This is a major investment theme for a lot of reasons.
I, of course, have tried my best to speak to a lot of the vulnerabilities about it, but also the opportunities, the macroeconomic reality, the social ramifications.
There's plenty to say about the AI story, and I'm going to repeat some of those things today to set us up, but then get into the topic of where we stand with economic productivity being enhanced by artificial intelligence. That is the crux of the matter. I think too many do not understand what is really at stake. We're going to unpack it today with a view on economics and a view on history. Let me repeat, for those that are a little new to the Dividend Cafe, some of the things I said in our annual white paper that came out a couple months ago, or actually about six weeks ago now, setting the stage for a big theme of 2026, which is my view
that this is a year in which some of the vulnerabilities in the AI investment story would come
to light. A lot of that has already been happening. And in fairness, a lot of it was happening
even in the final six weeks, let's say, of 2025. It's not the boldest prediction other than the fact
that I'm trying to be as specific as I can
as to why I think these vulnerabilities exist
as opposed to just putting a generic cloud
over the valuations, let's say, of the AI story,
which are in and of themselves a problem.
But the nine very quick points,
and I mean it when I say quick,
so don't be daunted by the nine issues,
the nine things that I think best encapsulate
our view at the Bahnsen Group
of vulnerabilities in the AI investment story
are these. First, while the technology driving it is real and the transformative impact is substantial, we believe that the impact is far less known, quantifiable, predictable, and practically investable, for that matter, than many seem to realize. Second, the major investment opportunity so far has been in what we call pick-and-shovel companies, the infrastructure of AI, and not in those that are actually end users monetizing AI. Third, in the economic model for those making the chips to power it and those building the language models, there is a circularity in the funding model, people paying each other for order flow, and that hasn't been fully appreciated by markets, although that's certainly changing; there is a Ponzi-like dynamic in the funding model, albeit a perfectly legal one. Fourth, and this is probably my biggest point of all the ones I'm mentioning, the major capital expenditures powering all of this lack an economic rationalization at this time. Fifth, the hyperscalers are vulnerable to the consequences of malinvestment and excess investment, and that means those they're buying computing power from are in turn exposed to the inevitability of declining orders.
Number six, the assumption that there is broad cultural and political embrace of this whole story is, I think, poorly thought through; there is more skepticism coming, not only politically, legislatively, and in regulation, but just in the broader cultural appetite for AI, than people realize. Number seven, the assumption that all AI-related companies can win at once, versus this creating an environment where some win and some lose, is dangerous. The market is priced for a winner-take-all assumption, but then people are investing as if all can be winners, and both of those things cannot be true.
Number eight, the belief that China represents no competition to U.S. companies in the AI space is wishful thinking. It may very well end up proving to be true, but it is not something I would assume carries no risk of turning out differently. And then finally, number nine, the financing so far has been largely cash-flow driven. Companies have been able to pay for this through their own operating cash flows, and to another degree from equity, but we are entering a stage where you're going to see significant debt financing of this, and that always changes the risk-reward profile.
So the question today is not revisiting those various vulnerabilities that I've already talked about a lot in the Dividend Cafe, as much as it is asking a different question about the underlying promise of AI, which is the productivity boom it is supposed to generate.
I think a lot of AI skeptics will come and say, hey, I like the AI story, but we're not going to be able to power it; we lack the electricity that is necessary for the demand, and we don't really have the ability to generate that electricity. And I completely agree, by the way, that we currently lack the electricity necessary to meet the demand as it has been funded or committed to thus far. I don't agree that we'll necessarily lack the ability to power it, simply because every time I've heard that sort of Malthusian argument of inadequacy, let's say scarcity, whether of power, commodities, energy, et cetera, it's always been wrong, and people underestimate our ability to go create a solution to meet the larger need. But I accept that that is an obstacle and an issue in front of us; it's just not the one I'd hitch my wagon to, if you will. I can be plenty skeptical about the way in which the AI thesis has played out to
the public as a matter of investment. My issue, though, is that I very much recognize
ways in which it can increase efficiency for individual workers, but I'm not certain that
we have answered the question about macroeconomic productivity.
What I mean by this is actual value creation.
And I want to offer a distinction between two things that I think are very simple,
but nevertheless very important in their differentiation.
We talk a lot about how many reports it can generate, how many spreadsheets it can analyze, how many emails it can write, how many things it can do, all as a quantity of activity. And that is, I think, totally legitimate and more or less empirically verifiable. What we don't necessarily do is connect that to enhanced output. Okay? And that's what I mean by value creation. That's what I mean by an increase in productivity. If what you're saying is that we can replace a person who costs X doing something with an AI function doing the same thing at a cost much less than X, you have spoken to enhanced margins, but you have not necessarily spoken to enhanced output. The value being created might be the same, just at a different cost; all you've done is shift the opportunity. What we need to come out of the trillions of dollars of expenditure on AI is an enhanced productivity that builds real GDP growth. Margins themselves are not output. If we replace phone operators and customer service reps and junior analysts with an AI function, that speaks to profit margins, which is a very different thing from output. I see all of that as having a good path to increased margins. But translating that into tangible productivity is a challenge that I'm not sure has been answered yet.
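To make that margins-versus-output distinction concrete, here is a small, purely illustrative arithmetic sketch in Python. The revenue and cost figures are hypothetical, invented only for illustration and not drawn from anything in the episode: substituting a cheaper AI function for human labor raises the profit margin, but the measured output that feeds real GDP growth stays the same unless the savings are redeployed into producing more.

# Illustrative sketch only: hypothetical figures showing cost substitution vs. output growth.
revenue = 1_000_000            # value of goods/services produced (the "output" side)
labor_cost_human = 400_000     # cost of the workers being replaced
labor_cost_ai = 100_000        # assumed cheaper AI function doing the same tasks

margin_before = (revenue - labor_cost_human) / revenue   # 0.60
margin_after = (revenue - labor_cost_ai) / revenue       # 0.90

print(f"Margin before: {margin_before:.0%}, after: {margin_after:.0%}")
print(f"Output before: {revenue:,}, after: {revenue:,}")  # unchanged: margins rose, output did not

# Macro productivity only improves if the cost savings are reinvested in producing
# additional output, rather than simply being retained as higher margins.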
What I believe is in front of us right now is some recent studies; I have links to them in the DividendCafe.com written commentary this week that are noteworthy. Now, the MIT Media Lab did a very extensive report a few months ago that was sobering. It also had a path to how a lot of these things can change. But their belief is that 95% of the use of AI so far has not led to enhanced productivity, and they are hinging that largely on the belief that there has been inadequate customization and integration. You get wide adoption, but on the follow-through, the back end, the feedback that is to be received and then implemented, they see a lot of shortcomings there. Those are all real problems. They're substantive, and the data is what it is. But they're all solvable problems, too. Okay?
The Stanford Media Lab had a report which I thought was also very fascinating, because I've observed some of what they're referring to around this term they call workslop, where AI generation is doing more and more tasks, but the substance of those tasks is subpar, leading to workers who don't know if they can rely on it, or have to redo it, or have to spend a lot of time reviewing it anyway. It's creating more work product, but not necessarily more solutions, and the accuracy, the depth, and the enhanced conclusions from it are subpar. There was also the CFO survey last year that got a lot of attention, stating that 70% of CFOs said that, so far, they're not seeing real improvement in productivity, with an even higher percentage being negative on what they're seeing in other elements of KPIs; but particular to productivity, 70% are not sure that they're seeing any impact yet.
They, by the way, maintain an optimism that they will, that there will be revenue growth,
for example, from a lot of wider adoption.
We can dig in all we want.
There is not yet empirical evidence of AI-generated productivity enhancement in a macroeconomic sense.
There's all kinds of evidence of profitability increase in select usage.
There's all kinds of evidence of tremendous promise and certainly just captivating technology.
But we are not dealing with the world being impressed with a new technological toy.
We are talking about a level of capital expenditure and investment that is going to require a greater productivity boom.
And you can say, well, we need more time for adoption.
And I think that's entirely true.
It's totally fair.
But market valuations suggest not just broad adoption, but an end result that remains somewhat speculative, not just in whether it will come, but in how it will come.
That led me to a book that I read in late December, early January that was not remotely
about AI.
And you can't even really say it was about the internet bust because the book was written
in 1997 and the internet had not even busted yet.
It wasn't even done forming yet.
And the book came out in early 98, was written in 97.
And it was a book called Burn Rate, and the author is Michael Wolff, who many of you know has now become famous as just a serial Trump book writer. The first book he wrote was Burn Rate, and he was sort of a new-media dot-com guy in the 90s, and his thing kind of fizzled out. He wrote a book on it that more or less put him on the map as an author. And I don't know how much to say about Michael Wolff or anything past Burn Rate.
I don't care that much, but this is a really, really good book.
Very interesting.
But what I couldn't get past is that I was reading a book now historically, right? It was written 28 years ago, describing events of 28, 29, 30, 31 years ago, all before the things that would become much more famous two or three years after he was writing it, which is when we got some more known knowns about the internet implosion, as far as the bubble of it, not the internet itself, but the dot-com investment bubble. And what I would say is that there are certain similarities between the internet tech moment of the 90s and the AI moment of the 2020s we are living in. But there are a lot of differences too. But the similarity I want to highlight is this indiscriminate investment that is based on something that I think we need to more accurately identify, more honestly identify, if we're to avoid making serious investment mistakes.
I want to read a quote from Michael in the book, word for word, that really got to me.
And just keep in mind, he's literally referring to the actual early-stage dot-com moment in the 1990s.
Nobody knows what's going on.
The technology people don't know, the content people don't know, the money people don't know.
Whatever we agree on today will be disputed tomorrow.
Whoever is leading today, I can say with absolute certainty, will be a lot adrift or transformed some number of months from now.
It's a kind of anarchy, a strangely level playing field, the Wild West.
It's uncanny how accurate that ended up being in the Internet moment a couple of years later,
but aren't there some similarities to what we're dealing with now?
I don't mean any of this bullish or bearish.
I mean it descriptively.
I'm not predicting a particular outcome.
I'm simply observing that there are a lot of companies right now in the AI moment that might succeed and a lot that might fail.
But what we don't have is clarity of the plan, the strategy, the vision, the specificity behind it.
And that can all end up being okay.
It was not okay for an awful lot of Internet companies that ended up in the graveyard, but it did turn out to be perfectly okay for several.
My point is not necessarily about what people believed, but about the way in which we operate, the mentality behind it, the risk-taking thesis, whether it be for the entrepreneurs and startup folks getting into it, or for the investors trying to back it. What you had in the 90s, I think, fell into one of two categories.
And I want you to tell me if you think this is comparable to what we're dealing with now.
First, you just had those saying, buy anything and assume the momentum of the moment of the
mania will carry you higher and that you will exit to someone else who will pay you a higher
price later.
And it was a trading mentality, it was a speculating mentality at varying degrees of self-consciousness, but it permeated quite broadly.
I would argue that was a systemic moment of the 90s, a systemic approach to investing in the
internet in the 90s, and I would argue that it's systemic with a lot of the AI investing now.
But the second category is interesting, which is those who had something in mind: they were investors who had a certain revenue model or strategy or expectation of how it would go, an acquisition plan, a growth plan that they found to be attractive. They were trying to base it on something fundamental, something strategic.
When you hear these two descriptions, anyone who knows me or has followed my investment philosophy and mentality for any time would probably assume that I'm going to be much more critical of number one than of number two, of the just-buy-and-hope-the-whole-thing-goes-up-and-then-get-out approach versus having a theory of the case that's rooted in some strategy or belief. But the fact of the matter is that the only people who actually came out okay were those who did number one and executed their timing well, who exited at the right time. The second group, some of them may have ended up making money, but only if it was accidental and accidentally aligned with the first group: they had a theory of the case, but what ended up pushing things higher in the end was not their theory of the case, and they still happened to exit at the right time. And those that just had a real long-term vision, well, those things all executed very, very differently than people thought. So none of that is necessarily good or bad.
I think it's an accurate description of that moment. But when I apply that to the AI situation now, I think it's just very important to say that there are people who have a theory of the case as to how certain things are going to monetize, but they have to understand that the companies they are investing in do not have the same theory of the case, that there is an invest-now, figure-it-out-later mentality that very well could work out for some but that I believe is not going to work out for others. And yet, even when it does work out, it will not be because people had perfectly envisioned how this was going to go. There's a Wild West component, and that can be exciting, it can be opportunistic. But when you frame it that way, it does involve a risk-reward paradigm that I don't think people fully appreciate. There is a dynamic nature to this that doesn't offer clarity about ROI, and it doesn't give you real clarity on the promise behind it, even in the current moment, on what the enhanced productivity is supposed to be.
And I think that that is probably the most important thing for investors to understand in the
current moment. I want to read a second quote from that Burn Rate book from Michael Wolff, again written in 1997. You can't say, hey, what did you think was going on? There's a fire burning like crazy that we have to keep throwing dollar bills on. And while that was true of this business and every other business in this new internet industry, and while everybody knew it was true, that is, that cash was just being consumed at a rate and with an illogic that no one could explain, much less justify, you must never, ever have admitted it. It's daunting. And I think that there is a significant lesson to be learned from what we're talking about there, in that historical moment. I look forward to increased productivity from AI. I look forward to increased efficiencies and some degree of quality of life getting better.
Just so you know, my biggest hope is that it can end up becoming useful in getting more medical advancements to market. Expediting elements of FDA approval for new drugs and new technologies, I see that as having a lot of promise.
It's not hard for me to imagine where a lot of this can become remarkable.
But when we talk, from an investment standpoint, about just buying companies connected to AI, I think it's going to work out about as well as just buying companies connected to the internet did.
There has to be a theory of the case.
And a theory of the case right now lacks the clarity to weight it as an investment the way a lot of people have weighted it
because that clarity is going to be proven wrong just by the dynamic nature of the technology and its transformative reality.
What gives things productive use and productive capacity is not the mere existence of a technology.
It is when we put it to work in an actionable way and then get value creation out of it.
Electricity is not valuable to us because it exists.
Electricity is valuable to us because human beings then put it to work in a meaningful way.
And it creates value.
That will happen with AI.
But the path to that is right now one that I think folks are talking about as if it's going to be divorced
from judgment and wisdom, stewardship, sensibility.
And I just can't speak against this more firmly.
When AI gets utilized as a tool in tandem with judgment, wisdom, stewardship, and does so prudently,
I expect good things are going to happen.
But understanding the difference between the chicken and the egg, the means and the end, the cause and the effect, the, if you will, primary and the derivative, those differences are going to be the key to investors' success out of this, and those differences are lacking in our current AI investment conversation.
Thank you, as always, for listening, watching, and reading the Dividend Cafe.
We have a Monday holiday, a federal holiday, with markets and banks closed for President's Day.
So we will be back with you with a daily recap on Tuesday; no Dividend Cafe on Monday.
And I'll be back with you, as always, every single Friday for next Friday's Dividend Cafe.
Thanks so much and have a wonderful weekend.
The Bahnsen Group is a group of investment professionals registered with Hightower Securities LLC, member FINRA and SIPC, and with Hightower Advisors LLC, a registered investment advisor with the SEC. Securities are offered through Hightower Securities LLC. Advisory services are offered through Hightower Advisors LLC. This is not an offer to buy or sell securities. No investment process is free of risk. There is no guarantee that the investment process or investment opportunities referenced herein will be profitable. Past performance is not indicative of current or future performance and is not a guarantee. The investment opportunities referenced herein may not be suitable for all investors. All data and information referenced herein are from sources believed to be reliable. Any opinions, news, research, analyses, prices, or other information contained in this research is provided as general market commentary and does not constitute investment advice. The Bahnsen Group and Hightower shall not in any way be liable for claims, and make no express or implied representations or warranties as to the accuracy or completeness of the data and other information, or for statements or errors contained in, or omissions from, the obtained data and information referenced herein. The data and information are provided as of the date referenced. Such data and information are subject to change without notice. This document was created for informational purposes only; the opinions expressed are solely those of the Bahnsen Group and do not represent those of Hightower Advisors LLC or any of its affiliates. Hightower Advisors does not provide tax or legal advice. This material was not intended or written to be used, or presented to any entity, as tax advice or tax information. Tax laws vary based on the client's individual circumstances and can change at any time without notice. Clients are urged to consult their tax or legal advisor for any related questions.
