Motley Fool Money - Morgan Housel on History, AI, and the Future of Investing
Episode Date: April 26, 2026

Morgan Housel is the bestselling author of The Psychology of Money, Same As Ever, and The Art of Spending Money. At our recent Motley Fool member event, Senior Vice President of Rule Breakers Strategy Brian Richards sat down with Morgan for a conversation about how the AI boom is intersecting with human psychology and investing.

Host: Brian Richards
Guest: Morgan Housel
Producers: Bart Shannon, Mac Greer

Disclosure: Advertisements are sponsored content and provided for informational purposes only. The Motley Fool and its affiliates (collectively, “TMF”) do not endorse, recommend, or verify the accuracy or completeness of the statements made within advertisements. TMF is not involved in the offer, sale, or solicitation of any securities advertised herein and makes no representations regarding the suitability of, or risks associated with, any investment opportunity presented. Investors should conduct their own due diligence and consult with legal, tax, and financial advisors before making any investment decisions. TMF assumes no responsibility for any losses or damages arising from this advertisement.

We’re committed to transparency: All personal opinions in advertisements from Fools are their own. The product advertised in this episode was loaned to TMF and was returned after a test period, or the product advertised in this episode was purchased by TMF. Advertiser has paid for the sponsorship of this episode.

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
What's very unique about AI historically, though, is that it's the first new technology that the people making it promise that if they're successful, they could destroy society.
That was Morgan Housel, best-selling author of The Psychology of Money.
I'm Motley Fool producer Mac Greer.
At our recent Motley Fool member event, Senior Vice President of Rule Breakers Strategy Brian Richards talked with Morgan
about AI, history, investor psychology, optimism, pessimism, and the future.
It was a great conversation. Enjoy.
All right, Morgan, I've been looking forward to this conversation for a few weeks,
and not just because you're an old friend.
The topic du jour, AI and innovation; I wanted to get your take on it as somebody who has studied
behavior, psychology, investors over time.
I'd love to hear what's the most useful thing you've learned yourself in the last year as a thinker.
One that's very unique to me, maybe, is I don't write nearly as much as I used to.
You know, I had a good 15-year run of writing every day.
At fool.com, I was writing two or three pieces per day, many of which you edited, thank you.
But about two years ago, I cut way, way back, and I haven't really written anything significant in about two years.
And what was interesting for me, what I noticed, is how much of writing
is not just an output, it's an input.
It's a very clear way to crystallize your thinking
and to understand what you've been learning.
And as soon as I stopped writing,
I felt like even though I was reading more
with my newfound spare time, I was learning less
because I wasn't spending a lot of time
actually trying to crystallize the thoughts
that I'd had from learning.
And so I think for everybody, no matter who you are,
forget professional writers, everybody,
if you're just reading all day and learning,
but you're not going out of your way
to really crystallize those things,
really crystallize those thoughts by just writing down what you've learned, taking notes in the
books that you've read, you lose a lot. And I think I knew that five years ago, but it was interesting
to see in the last two years how quickly my brain turned to mashed potatoes when I stopped writing.
So that's one thing. The other thing that, I think, has been very prevalent in the last
five years is the reinforcement of how addictive pessimism is at the society level.
And that's always been true.
John Stuart Mill was writing about that 150 years ago.
This is not a new thing.
But I think 25, 30 years ago, cable news figured out
that you can gain attention with pessimism.
They all figured it out.
And it's just been in the last five years, I think,
that the social media algorithms figured it out too.
And you see this at the economy level,
where levels of consumer confidence
are the lowest they've ever been right now.
Lower than they were during the 2008 crisis,
lower than in the darkest days of COVID.
Consumers have never felt worse about the economy,
in the history that we've been tracking this stuff,
than right now.
Of course, there's a lot going on right now,
but it's not even political.
It spans different presidencies.
It's been spanning for a while.
And I think at least an element of that
is that people, particularly young people,
are more exposed to pessimism
than they've ever been at all.
There's some interesting studies about
tracking New York Times headlines
over the decades.
And even as the world has gotten objectively better,
objectively better in terms of life expectancy and average income or whatnot. The headlines progressively
get more negative over time. And that's been going on over decades. And in a media world where you're
just trying to get attention, you need everyone's attention, very different from, like, the
Walter Cronkite days when you had a monopoly on people's attention. Now there's an arms race for
attention, and you're never going to get it faster than by being pessimistic. So you can live
in a world in which things are objectively, analytically getting better and people feel worse
and worse about it. So I think that's always been true. But in the last five years, it went steep.
If you have your bingo card, that is John Stuart Mill, Karl Marx, Joseph Schumpeter, and Friedrich Hayek,
all in one morning. So wait until we get to the afternoon.
All right, Morgan, your second book, the title is Same As Ever. It's a great book, and it's
basically an argument that the most important things about human behavior never really
change, despite all of the sort of technological progress that we make. I want to start there.
Does AI feel to you like a new variable or is it just a stage for the same old human drama?
And I don't know if we can, but put the poll back up there, because I think that was a pretty fascinating result.
I think there's plenty that rhymes with things that have happened in the past.
There's been, every 20 or 30 years there's a new technology that at least promises to fundamentally change everything.
And usually does.
The Industrial Revolution, radio in the 1920s, nuclear energy in the 1950s, the internet in the 1990s: a new technology
that says, this is going to rewrite everything that we know, and your jobs, your careers are
not going to be the same in a short period of time, five or ten years. That's been going on forever.
One thing that's interesting about those trends, if you look back with the glory of hindsight,
is that even the people who invented those technologies, who were the most ambitious
and had the most foresight, could not have fathomed what their products turned into.
And so Henry Ford could not have ever imagined
that he was going to basically create the American suburb
with the car.
He understood cars and motors and whatnot.
He couldn't fathom that this meant that people were going to live 40 miles
from where they work and commute in.
The Wright brothers could never have imagined
Delta Airlines.
And so with all these things, even the people who have the greatest vision
can't see where it's going.
If Steve Jobs was alive,
I don't think he could have possibly foreseen
what social media was going to do to society
on the phones that he built.
And so I think if that's the trend, even the people who have the most wild AI
visions today and who are creating the technologies themselves probably can't comprehend where
it's going to go in 10 or 20 years. The people who make Adobe Photoshop, which is software
for manipulating images, create tools within Photoshop that they have no idea what people
are going to do with. They just understand that if you create every imaginable tool to
manipulate an image, somebody will find a use for it, even if they don't know what that use is going to be.
I think there's a lot of that with technology,
particularly with something like AI, where the people who are making
these can't fathom what other people are going to do with it.
They know what they would do with it.
But what is somebody else going to do with that technology?
That's where these things go way off in directions no one planned.
What's very unique about AI historically, though,
is that it's the first new technology that the people making it
promise that if they're successful, they could destroy society.
That's a very unique thing, that, hey, if we achieve what we're trying to
achieve, we could wipe out 50% of white-collar
jobs and hack every government database. And they're explicitly warning about this on a daily basis.
That's a very unique thing. Most of the time when you have a new technology, the people making it
want to advertise the good that it's going to do versus constantly warning about how dangerous
they are. It's a technology that there's a lot of existential risk, almost like the nuclear era,
and it feeds into the pessimism you're talking about. What's interesting about the nuclear era, too,
is that if you go back to the 1950s, the peak of nuclear optimism, the vision back
then, all over the world, was that every town, big and small, would have its own
little fleet of nuclear reactors, and that the fossil fuel era, at least for power plants, was over;
that nuclear was going to take over everything. That was the vision back then. It obviously
didn't come to fruition, at least not as the optimists saw it, because it's dangerous. And so as soon as
it started growing, governments all over the world said, you have this amazing, powerful technology,
but it's dangerous. So we are at a minimum going to regulate it into the ground, if not outright
ban it, as Germany and Austria have done. And so is that an analogy for AI? It's like, if the optimists
are right and it actually is a tool to put half of white collar workers out of business,
what government is going to say, good for you, congratulations, guys, thanks for destroying...? They're not
going to let that happen, the same way they did with nuclear energy. I don't know if it's a
perfect analogy, but there's, like, a paradox: the more disruptive a technology is going to be,
the higher the odds it's just going to be regulated out. But what's different about
AI, too, is how dispersed it can get globally. If US regulators regulate everything, there's still
one model in China that could just spread all over the world and everyone can use it. So it's hard
to, like, put it back in the box relative to other technologies.
In the morning session, Bill did a great job of talking about AI's impact on the market.
As a person who's sold 12 million books talking about investor psychology, I'd love to hear your take on what you believe AI's
impact on the investor will look like.
I think largely it's a continuation of what's happened over the last 30 years, which is if you go back before 30 years ago, the edge that you could find in investing, if you wanted an edge, was informational.
So you have stories of Warren Buffett in the 1960s going into the library in Omaha and reading every page in the Moody's manual so that he could find cheap stocks.
That doesn't work anymore. Everyone has the same information. It's all on your phone. A kid in Africa has the same information
that the people working at Goldman Sachs do.
So informational edges almost don't exist like they used to.
And so over the last 30 years,
what has become more important in terms of having an edge
is behavioral.
Very hard to have an information edge.
But if you can remain calm when others are panicking,
that's your edge.
And to the extent that AI is another layer on top of that,
that the people building investing models on Wall Street,
you know, discounted cash flow models,
even 10 years ago, that was kind of a unique thing,
that was a unique skill they had.
Now, any AI can whip out those
models in three seconds, and anyone can do it for free. So that edge doesn't exist anymore.
What could backfire with AI and investors is, everyone knows if you've used ChatGPT or Anthropic
or whatever, they're all sycophantic. They just tell you whatever you want to hear. And very
much like social media, it's very good at keeping you engaged. They know exactly how to keep you
scrolling. And I think with the LLMs now, they want to keep you on the page. They want to make you
happy. They want to tell you that you're doing great. And so if you were to upload your portfolio
to ChatGPT and ask, what do you think of this? It's going to say you're the most brilliant
investor ever. You're doing great. And if it were to say, hey, you're an idiot. These are the
worst companies. You would stop using it. And the companies know that. They want to keep you
engaged. And so maybe just like a lot of times with politics and news in the last 20 years, everyone
found their own bubble. Whatever you want to believe, there's someone out there who's going to tell
you that you're right. If LLMs are that for investors,
that's probably a risk.
So it's the For You page in the LLM world:
you just get served the things that you want to be served.
Well, the other thing with LLMs, too,
is that if you take a field that you are an expert in,
that you truly are an expert in this field,
and you start querying ChatGPT about some of the basics
of your field, you'll see how much of it it's just making up.
And so you don't know that when you're not an expert.
You read it and you're like, oh, this is all the right information.
But whatever your profession is, ask it for information,
and you're like, it's making a third of this up.
But if people don't know that,
it just furthers them down into that bubble that they want to be in.
It's going to tell them whatever they want to hear.
I want to stay on this topic.
You've written before that bubbles aren't really about valuation or they're not exclusively
about valuation.
They're about narrative and zeitgeist and identity.
People don't just own a stock.
They sort of become it.
By that definition, do you think we're currently in an AI bubble?
Two things about bubbles.
One, there's no definition of what a bubble is.
So people can just subjectively say it is or it isn't a bubble.
But I think what's interesting
about it is that AI is so expensive to build,
it will cost trillions and trillions of dollars
to build out these data centers,
that the companies raising money,
whether it's OpenAI or Anthropic or xAI, any of them,
they have to be hyperbolic when they're describing it.
They have to.
If they just went out and said, we're creating a technology
that's going to be a marginal improvement
for a couple white collar workers, you can't raise $2 trillion
on that.
They have to say this is the technology that ends all technology.
There's no other way that they can do it.
What's also interesting about how expensive it is,
is that at least for the chips that they're building,
the fundamental inputs in these data centers,
right now have a 12-to-24-month shelf life before they're obsolete.
So not only does it cost trillions of dollars,
you've got to redo that every couple years,
which means they have to be hyperbolic squared now
when they're talking about what they're going to do.
I mean, it's not dissimilar from when you were buying a new laptop in 1995;
it was obsolete by 1996.
So that's very much what they're going through right now.
And so we don't know if it's a bubble, but we know that they have to talk as if there's nothing that comes after AI.
This is it.
The chip obsolescence is a bit of an argument in favor of NVIDIA.
But I want to stay on this and piggyback off that.
So a critique of behavioral finance or the behavioral finance worldview is that it's sort of it's conservative.
So you're always, you know, not you, but that worldview is always preaching about the dangers of, you know, recency bias or overconfidence,
or, you know, narrative seduction.
The issue with that is that some of the greatest wealth creation over time accrues to
people who are wildly optimistic or almost hype men and women in the case of the leaders
of some of the AI companies who are out raising money.
And so I guess I want to ask you the line between bias and vision or optimism, like, how
do you strike the balance between those two things?
I think part of this is like being very careful who you look up to because a lot of the people who are extremely successful, outsized, huge multi-billionaires are successful because they don't think about the world in the same way you and I do.
Some of that is very positive. They create amazing products, create a lot of wealth for their investors.
Inevitably, with every single one of them, there's going to be parts of the world where they think differently in bad ways.
And so when people have negative views about Elon Musk for his political statements
and all that, whatever it might be, it's like, yeah, this guy's been trying to colonize
Mars since he was 25 years old.
He doesn't think like you and I do.
Of course he has very strange views about what we should do politically.
But go on down the list, whether it's Zuckerberg or Bill Gates, like Jeff Bezos, all of them,
the reason they're so successful is because their brains don't work like ours.
And a lot of them, I think, have harnessed their demons for productivity.
And there's another saying from Paul Graham, the investor, who says,
half of the traits of the eminent are actually disadvantages.
And they succeeded in spite of those things.
And so it gets dangerous when people try to mimic those traits of like,
oh, Steve Jobs was successful and he was kind of a jerk to his employees.
So maybe I should try to do that too.
Like, no, he succeeded in spite of being a jerk to his employees.
And so I think there's a lot of that.
But, in terms of, like, the thin line between bold and reckless, it's always very difficult
to understand in hindsight.
One example of this is Cornelius Vanderbilt.
He was the richest man in the world during his day.
By any account, by even the most optimistic account,
the most charitable account of what he did,
a huge portion of his wealth came from breaking laws,
just completely flouting the laws.
And he admitted that he had no qualms about that whatsoever.
It was an era where he could get away with it.
He could pay off judges.
He did pay off judges.
And we remember him today, by and large,
as a wealthy entrepreneur,
or maybe, like, a maverick.
It's so easy to imagine an alternative history
in which it eventually caught up with him
and they threw him in prison
and we remembered him as like the old school Bernie Madoff.
So that line between bold and reckless
is very, very thin and hard to know
the different ways in which it could have turned out.
You know, Sam Bankman-Fried of FTX,
who's in jail right now
for the crimes that he committed.
He's tweeting a lot.
He's trying to get a pardon.
But that is another scenario
where he easily, easily could have gotten
away with that. If he had kept it going for another two months, he probably could have gotten
away with it. And there's an alternative history where you and I right now would be praising how much of a
genius he was. With those outsized successes, there's always a graveyard of people who
made the same decisions as them and ended up with a very different outcome.
In broad strokes, has the AI
technology changed anything about how you personally think about your financial life, even
something small? A little about my financial life, but as a writer, I mean, it
would be literally talking my book to say it's not going to replace writers, that we're still
going to be in demand. But let me say what has changed for me as a reader: I consume a lot more
content than I create. Obviously, there's a lot of discussion and whatnot, like, will this
replace not just authors, but musicians, artists of all kinds? I'm not optimistic on that at all.
I think people really like art and writing fits into that category as being able to connect
with a fellow human. I'll give you an example of this. One of the best business
books in the last 20 years is Shoe Dog by Phil Knight. I'm sure half of you have read it. It's a
phenomenal book, an unbelievable book about how he created Nike. And this was not hidden. They didn't
try to hide this, but I learned it after I read it. After I read it, and said this is one of the
best business books I've ever read, I learned that it was ghostwritten. And they didn't hide that.
Interesting. The same ghostwriter wrote Prince Harry's biography and Roger Federer's biography,
very good ghostwriter. But after I learned it was ghostwritten, it took away some of the magic
that I had really cherished that book for.
And look, it's the same story.
It is his story.
But there's something very special about reading it and saying,
like, I'm reading Phil Knight's words right now.
And when you learned you weren't, like,
ah, it kind of took that away.
Two years ago, Google came out with a product called NotebookLM,
which is an AI product that would create a custom podcast for you.
So you go in and say,
make me a podcast about the fall of the Roman Empire,
about technology in the 19th century,
whatever you want. Or you can even upload a PDF and say, make me a podcast about this topic.
And it would spit out a perfect 10-minute podcast describing anything you want. When that came
out, I was like, this is the end of podcasts for people. Everyone's just going to listen to their own
custom podcast. Why would anyone want to listen to a human? That was what I thought two years ago.
How many NotebookLM podcasts have I listened to since then? Zero. Because I don't want to listen
to a bot describing it, even if it's perfect and accurate and fluent. I want to listen to the
messiness of another human who's actually experienced these things going into it.
So I'm actually not that optimistic that AI is going to disrupt art, writing, music,
those kind of things as much as some other people.
But of course, I have a stake in that game.
Morgan, you have a gift for finding the question behind the question.
And so I want to ask, what's the thing about AI that you think almost nobody is asking
and that you wish more people were?
Well, one, a lot of times, if it is as disruptive to labor and employment as people think,
people will say, oh, well, there's a solution for that, it's universal basic income.
That, look, we're going to have 30% unemployment, but we'll just send people five grand a month and say,
you can just go write poetry and toil in your garden and, like, you don't have to work anymore.
We'll take care of you.
I think there's so much evidence that if you think work is hard, try boredom.
It's a hundred times harder.
And the idea that we can just pay a third of society to not work, the amount of mental illness
that would unleash on society would be off the charts.
And so that's pretty much the only solution that people have of like, oh, it's going to put
people out of business, but we'll take the profits from AI and just pay them off effectively.
That would not work in a million years.
And you see this during deep recessions.
Like after 2008, a significant number of people were unemployed for more than 12 months.
And that destroys people.
That's not just unemployment
at that point; it leads to mental breakdown.
So the idea that you could do that forever for long periods of time, that would never, ever work.
As always, people on the program may have interests in the stocks they talk about, and The Motley Fool may have formal recommendations for or against.
So don't buy or sell stocks based solely on what you hear.
All personal finance content follows Motley Fool editorial standards and is not approved by advertisers.
Advertisements are sponsored content and provided for informational purposes only.
To see our full advertising disclosure, please check
out our show notes. For the Motley Fool Money team, I'm Mac Greer. Thanks for listening,
and we will see you tomorrow.
