Motley Fool Money - Interview with NYU Professor Vasant Dhar: Thinking With Machines
Episode Date: December 28, 2025
NYU Professor of Business Vasant Dhar is a pioneer in the field of artificial intelligence. He's the host of the Brave New World podcast and author of the new book, Thinking with Machines: The Brave New World of AI. Motley Fool analyst Asit Sharma recently talked with Professor Dhar about that new world.
Host: Asit Sharma
Guest: Vasant Dhar
Producers: Bart Shannon, Mac Greer
Advertisements are sponsored content and provided for informational purposes only. The Motley Fool and its affiliates (collectively, "TMF") do not endorse, recommend, or verify the accuracy or completeness of the statements made within advertisements. TMF is not involved in the offer, sale, or solicitation of any securities advertised herein and makes no representations regarding the suitability or risks associated with any investment opportunity presented. Investors should conduct their own due diligence and consult with legal, tax, and financial advisors before making any investment decisions. TMF assumes no responsibility for any losses or damages arising from this advertisement. We're committed to transparency: all personal opinions in advertisements from Fools are their own. The product advertised in this episode was loaned to TMF and returned after a test period, or was purchased by TMF. The advertiser has paid for the sponsorship of this episode.
Transcript
My fear is that we are slipping into a Huxleyan kind of world, perhaps even without our realization, right,
that we are gradually disempowering ourselves in many areas of our life.
The machine has become a gatekeeper of human activity in many ways.
That was NYU Professor Vasant Dhar, author of the new book, Thinking with Machines:
The Brave New World of AI.
I'm Motley Fool producer Mac Greer.
Now, Motley Fool analyst Asit Sharma recently talked with Professor Dhar about that brave new world.
Greetings, fools.
I'm Asit Sharma, senior analyst and lead advisor at the Motley Fool.
And my guest today is Vasant Dhar, Robert A. Miller Professor of Business at NYU's Stern School of Business.
Professor Dhar is a pioneer in the field of artificial intelligence.
In fact, he received his PhD from the University of Pittsburgh with a specialization
in artificial intelligence in 1984.
Among his many achievements, Professor Dhar is noted for bringing machine learning to Morgan Stanley's
proprietary trading groups in the 1990s.
You may have listened to the professor's popular Brave New World podcast, and he's out with a new
book entitled Thinking with Machines, The Brave New World of AI, which is the topic of today's
discussion.
Vasant Dhar, welcome to the Motley Fool.
Thank you, Asit, delighted to be a fool.
Awesome.
Well, I wanted to start with your early childhood, which you recount in the introduction to Thinking with Machines.
You were born in the 1950s in Kashmir, India, and you note that you rode to school in a horse-drawn
cart. You also moved around quite a bit in India, and by the time you were nine, your father was
posted to Ethiopia on assignment as India's military attaché to Africa. So I wondered, Professor,
can you tell us a little bit about these formative experiences and how they helped shape the person
and scholar you became. You know, all amazing experiences growing up, including, you know,
what you mentioned in Ethiopia, you know, my, and I described this in my book in a humorous
kind of incident. You know, my mother put me in the wrong grade. She put me in seventh grade
instead of fourth grade by mistake and only realized, you know, her error six months later
when it was too late to do anything about it, so here I was hanging around in class with,
you know, 15, 16 year olds, you know, and I was like nine. So that was a hell of an experience
growing up. Then I went off to boarding school in India after that, which was also another,
you know, so my trajectory was third grade, seventh grade, eighth grade, and then six, seven,
eight, right? You cannot make this up, you know, that's what happened. But it made me
resilient, I guess, in some way. And it was a really unusual kind of upbringing. I'm happy for
it. So fast forward to Pittsburgh, Pennsylvania, at a time when you were attending school and
intersecting with the very exciting world, the very nascent world of artificial intelligence. You met
an AI pioneer in Herbert Simon who had received the Nobel Prize in Economics for his work in revealing
the limits of human rationality and decision making. Now, Professor, I remember still in the early 90s,
late 80s, early 90s, taking a college class in microeconomics in which rationality was still the
governing principle or said to be the governing principle, but, you know,
by which most humans make their economic decisions.
That's right.
But Professor Simon had a different idea.
He called it bounded rationality.
I wondered if you could explain that to us.
Well, essentially what he said was that humans have limited cognitive resources,
that we are not able to, you know, enumerate all possible alternatives and evaluate them.
That's just like too taxing.
You know, we'd never get through the day if we did that.
And that our attention is limited,
and that we tend to focus on the most plausible things to pursue, you know,
and we do this through heuristics that are learned through experience.
And so heuristics actually sort of focus our attention, you know,
to the right parts of the problem.
And when we find an acceptable choice, we take it, you know, and we move on, right?
So that was a theory, which was called bounded rationality.
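Simon's contrast between full rationality and bounded rationality can be sketched in a few lines of Python. This is an illustrative sketch only, not anything from the book or interview; the names `satisfice` and `optimize` are my own. The satisficer stops at the first "acceptable" choice, while the fully rational agent must score every alternative:

```python
def satisfice(options, score, threshold):
    """Bounded-rationality choice: take the first option that clears
    an 'acceptable' bar, instead of evaluating every alternative."""
    for opt in options:
        if score(opt) >= threshold:
            return opt  # acceptable choice found; stop searching
    return None  # nothing cleared the bar

def optimize(options, score):
    """The 'fully rational' alternative: enumerate and evaluate everything,
    then pick the best. Too taxing as a model of everyday human choice."""
    return max(options, key=score)

# A satisficer may settle for 5 (the first option scoring >= 4),
# while the optimizer must scan all options to find 9.
choices = [3, 1, 5, 9]
picked = satisfice(choices, score=lambda x: x, threshold=4)
best = optimize(choices, score=lambda x: x)
```

The heuristic version also captures why Simon's idea mattered for AI: a good heuristic (here, the ordering of `options` and the `threshold`) focuses attention on a small part of the search space instead of traversing all of it.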
But I have to say that economists sort of said, yeah, that's true, but let's just move on.
You know, so for the most part, you know, they still sort of, you know, because it doesn't
lead to very good theories, right? I mean, it sort of messes up nice mathematical models.
It's messy. And economists don't like that. So, you know, it was just, yes, it's true,
but thank you very much. Whereas his ideas really sort of took root in artificial intelligence,
you know, which was really all about, at that time, all about, like, how do you represent
knowledge and how you traverse it intelligently. And that was called heuristic search at the time.
And so heuristics became big in AI, and they were the primary
paradigm at that time, you know, of expert systems, where we tried to build these impressive
applications in areas like medicine, you know, where you would extract knowledge from experts
and use their heuristics, you know, that they had acquired through experience to actually do
medical diagnosis. And that was my first real exposure to AI, just watching this
system called Internist interact with an expert and elicit information and arrive at the correct
differential diagnosis. I mean, I was just watching this and it just blew my mind. And that's when I
decided, this is what I'd like to do with my life. You posit that oftentimes success in the markets or in
other probabilistic endeavors is made up of small edges that compound. And you
bring up the commencement address of tennis great Roger Federer last year to the graduating class of Dartmouth.
Can you start with what interested you in that commencement address and explain the concept of compounding edges to us, please?
The statistic that Federer cited, you know, really sort of stayed with me, and it's so similar to, you know, financial markets.
You know, I view financial markets and sports as being sort of two sides of the same coin, right?
He said, you know, like over the course of 1,526 matches, I won 80% of them.
you know, what percentage of points do you think I won?
And he paused and he says,
54%, barely better than even, right?
In financial markets, you do 54%,
you should be managing the world's money, right?
As long as your winners and losers are of equal size, right?
But what Federer was really saying is that, you know,
it's that little edge that just compounds over the course of the match, right?
If the match was just one point long,
then Federer would win 54% of his matches, right?
But the fact that it sort of goes on over time
means that he's got time to regroup, even though he loses a point, right?
It's that little edge that just sort of keeps multiplying over time.
And so the longer the match, the more matches he'll win.
Of course, as long as he doesn't get exhausted, right?
So stamina also matters.
Boris Becker, by the way, won almost 80% of his matches
with only a little over 52% of points won,
because he had a tendency to win the really important ones like tiebreakers.
But that's the point is that you don't need to be perfect.
You don't even need to be really good.
You need to be just slightly better than the average or some benchmark in order to be successful.
And that applies to like almost everything in life, you know, that as long as it's slightly better,
that edge will just continue to compound, you know, and that you'll get better and better in your outcomes.
The old adage goes, it isn't what you say, it's how you say it, because to truly make an impact,
you need to set an example and take the lead.
You have to adapt to whatever comes your way.
When you're that driven, you drive an equally determined vehicle, the Range Rover Sport.
The Range Rover Sport blends power, poise, and performance.
Its design is distinctly British and free from unnecessary details, allowing its raw agility to shine through.
It combines a dynamic sporting personality with elegance to deliver a truly instinctive drive.
Inside, you'll find true modern luxury with the latest innovations in comfort.
Use the cabin air purification system alongside active noise cancellation for all-new levels of
quiet. Whether you prefer a choice of powerful engines or the plug-in hybrid with an estimated range of
53 miles, there's an option for you. With seven terrain modes to choose from, Terrain Response fine-tunes
your vehicle for the roads ahead. The Range Rover event is on now. Explore enhanced offers at
RangeRover.com.
Do you think that some of the same principles you've applied to systematic investing
on a short-term basis where you're looking for a higher probability trade with a shorter duration,
apply on the other side to long-term investors like myself.
They do.
And for the reasons that you pointed out, right, that you need numbers.
And in fact, you know, in 2015, I went to my colleague Aswath Damodaran, you know,
because I sort of believed that machine learning and quant methods really applied to short-term trading,
right, where, you know, you could identify an edge, where there were lots of numbers involved, right?
But it was hard to apply to long-term investing, where, you know, with holding periods of like many months or even years, right, you just couldn't get enough sample size. You couldn't get enough training data. But I was really intrigued by my colleague Aswath Damodaran, you know, who's considered Mr. Valuation on Wall Street. And so I went to him in 2015. And we had this conversation about whether it would be possible to create a bot of him, you know. And I'd had a similar conversation with my colleague Scott Galloway at the time.
you know, should you trust your money to a robot?
I'd just written this article,
should you trust your money to a robot?
And I made the case that you should when it comes to,
you know, high frequency trading and short term,
but when it came to long term investing,
that it was impossible to like train a machine,
you know, like you could with shorter duration stuff.
And I remember Scott, at the end of that conversation, saying,
okay, so what you're saying is that trading floors will disappear,
but venture capital and private equity is safe.
And I said, yep, that's pretty much it.
And my conversation with Damodaran was similar, that, you know, it would be too hard to actually try and replicate him.
What's interesting is, like, post-ChatGPT, we sort of revisited that question, you know.
And so I went back to Damodaran and I said, you know, do you think we could actually build a bot of you now, given this new technology?
And he said, sure, let's give it a shot.
You know, you've got all my training data.
And so that's what I've been, you know, involved in for the last couple of years.
You know, we've built this bot that's designed to think like him.
And, you know, my initial thinking was that we could apply that systematically as well, you know,
that we could just apply Damodaran to the S&P 500.
Like, you know, it's impossible for him to do it because he can't evaluate 500 companies,
you know, in a day or even in a week.
It's just like too much work.
But, you know, my thinking was if we can build a machine like him, why can't we just
apply it to the entire index and then use it systematically?
It's an interesting idea.
It may actually, you know, work.
But I've actually become intrigued with a different type of application of the bot,
which is something that allows people to think and reason about companies in a deeper
kind of way, to run scenarios and say, you know, what if Trump escalates tariffs,
like what will the valuation of Apple or Nvidia, whatever, look like?
Or what if the tariff was a head fake
and we go back to, you know,
sort of the era of low trade barriers.
This kind of stuff is very laborious
for people to do and it's very time consuming.
I find it sort of interesting
that we can apply AI now
systematically to long-term investing as well.
What was the thing that surprised you most
about the Damodaran bot?
So basically you had access to all the training materials,
his famously public materials.
And you also,
had access to Professor Damodaran's very elaborate write-ups, his blog posts, which themselves,
if you marry up the public spreadsheets that he has for investors, they're an object lesson in how
you draw together numbers and narrative. What surprised you most in this latest phase, post-ChatGPT,
where you took more modern tools, let's say, more contemporary tools, and recreated the
idea, maybe the biggest success you had or the biggest pitfall that you didn't expect?
You know, when I started this two years ago with one of my colleagues, Jav Sidoch, who's an
LLM person, we had no idea whether this would work. It was a wild idea, you know, to build a
bot like him, you know, and we tried what most people might try, which is, you know, give all his
valuations to an LLM, fine-tune it, and then have it, you know, think about a new case.
It just didn't work.
It didn't sound like him.
There was nothing deep about it.
There was nothing profound about it.
So we just sort of went back to the drawing board and said, let's just identify all components
of his thinking.
You know, fundamentally, he's got this quantitative model that he calls the Ginzu.
That's like...
It's a spreadsheet, right?
It's a spreadsheet.
I've used it.
Yeah.
It's incredibly complex.
It has all kinds of switches and context and all that kind of stuff.
But at the end of the day, it's a quantitative model.
Inputs give you a valuation.
You do a sensitivity.
And there you have it, right?
But the question is, like, how do you marry a story to the numbers, right?
Like, what's the story that is consistent with the numbers?
And the story, you know, involves sort of stepping back from the particular company, right?
So I'll give you a great example.
So when he evaluated Nvidia in 2023, the first question he asked, I call this a framing question,
was, is AI an incremental or a disruptive technology?
Now, why would you ask a question like that?
Well, you ask a question like that because the markets in those two scenarios tend to be very different.
If it's incremental, it's pretty well defined.
You know, you can put a boundary around it.
If it's disruptive, it's much more uncertain, right?
You need to think about what that really means.
Disrupting what?
Every industry? Is AI like electricity?
Is it like the internet, right?
So it makes you think about the problem in a really broad kind of way, right?
And then his subsequent question, you know, when disruptions happen,
what's the distribution of winners and losers?
And he shows that you get a few winners
and lots of losers, lots of wannabes.
And he says, okay, I think Nvidia is going to be a winner,
so they're going to have a dominant position.
And then he goes about sort of
thinking about it.
Like, what are their margins going to be like?
Well, and he says, well, what are the margins of
people in the semiconductor industry?
Well, that's a good place to start.
And the work of Phil Tetlock, by the way, also applies here,
right? He has this work on superforecasters,
you know, what makes them good.
And what makes them good is that they
anchor themselves in sort of in the right part of the problem, as opposed to like a biased
part of the problem.
They tend to be sort of relatively unbiased.
And so I realized that Aswath Damodaran was what I call a superforecaster, right?
He just has those properties of what Tetlock calls, you know, superforecasters: the ability
to really ask the right kinds of questions, you know, an insatiable curiosity, and a habit of anchoring
himself in the right part of the problem.
What does leadership really look like?
On the power of advice, a new podcast series from Capital Group, you'll hear from
athletes, entrepreneurs, and executives who've led on the field in the boardroom and in their
communities. It's not about titles. It's about impact. Discover what drives them and the advice
they carry forward. Subscribe and start listening today. Published by Capital Client Group Inc.
If you had a scale today, where would we be weighted more: toward us governing AI, or
AI governing us? And why? You know, my fear is that we are slipping
into a Huxleyan kind of world, perhaps even without our realization, right, that we are
gradually disempowering ourselves in many areas of our life. The machine has become a gatekeeper
of human activity in many ways. You apply for a job, you're screened by the AI. You know,
you might even be interviewed by the AI increasingly these days. You know, it's not a warm,
fuzzy feeling, right, when the machine has become a gatekeeper to human activity. So my fear is that we
might just slip into this, you know, without the machine sort of having evil intentions or being
programmed to do harm, right? That we just sort of slip into this, you know, without our explicit
realization. That's really my concern. Which stakeholders do you think would be important to ensuring
that we don't slip into such a future? I mean, the obvious answers would be, okay, governments,
perhaps we need regulations, academics, also big tech maybe, but I don't know, what about people
who use the machines as well? Who are the stakeholders that should put a voice forward in this
decision? Well, they more than anyone else, like everybody, right? And that's why I wrote the book
for everyone. I meant for this book to be accessible to everyone because this applies to all of us,
and I tell my students this as well, that, you know, it's easy to use this technology as a
crutch. It is so tempting to use it as a crutch, but that, in the long run, will be debilitating,
right? You don't want to go down that road where you got a question and you just throw it to chat GPT
and say, what do you think? Because that's the surest path to cognitive decline. And I can feel
it, by the way, when I use maps, I don't think I navigate as well spatially as I used to. You know,
I think I've lost that facility by relying more and more on maps. And I'm aware of
that, and I now try and navigate myself manually sometimes just to sort of keep that spatial
mental muscle alive.
And that applies to all areas of our lives.
And so individuals, more than anything else, really need to ask themselves, you know,
how they're consuming this technology.
I mean, as it is, my colleague Jonathan Haidt says that, you know, some of these social media
platforms have caused tremendous harm to teenagers.
We ain't seen nothing yet, you know, in terms of the potential harms that AI could cause
if we just let it go unfettered.
And it's a tough area because, as someone said,
I mean, I think I was reading a piece by Ezra Klein
this morning where he said,
you know, who are we to tell people what to consume, right?
I mean, and Sam Altman said,
we don't want to be the moral police of the world.
You know, we'll open ChatGPT up to adult content.
All true, you know, all fair.
But that's why it imposes the burden really on the consumer.
And so among all these people,
the burden really is on the consumer to be aware of how you're consuming AI. And as I say in my book, you can consume it to become superhuman, right? It can really serve to amplify your skills if you use it in the right way. But if you become dependent on it, it'll lead to cognitive decline. And that's no good. And that's one of the points I tried to make in my book: how to think about that, how to think about being on the right side of what I see as this sort of impending bifurcation of humanity.
I think one of the clearest examples of all this is a choice
you make that you describe in the book.
Some people asked you, why don't you use
ChatGPT to write the book?
And you said, well, right now, the machines
don't write as well as us for now.
Okay, I get that. But I think also underneath
that is the desire to express
yourself in your own unique style
to make the points that you want
to make and to
have your expression,
which is beautiful, by the way, it's
a great expression and well-written book,
to be the statement that you put into this
work, not to rely on the crutch just because it would be easy to input some bullet points
and perhaps spit out the product and you go talk about it.
It's a world of ideas that you are putting forward.
So I really appreciated that part of your book, which is, hey, there's a reason that I'm writing
this myself.
Exactly.
So, by the way, thank you for that compliment.
I really appreciate that.
But, you know, in addition to the fact that I think I write better than chat GPT and I want to
express myself in my own style, it's also so much more fun, right, to do that.
And at the end of the day, what's life about if not for having fun?
I mean, life is about having fun and this should be fun.
And I had so much fun writing it.
And there's so much of a sense of accomplishment and satisfaction from producing something good
by yourself.
And that's what we should strive for.
Professor Vasant Dhar, this has been an extremely illuminating conversation.
And above all things, it's been a lot of fun.
I really appreciate your time today.
And I hope that you will come back for another conversation at some point in the future.
I'd be delighted.
Thanks so much for this, Asit.
I really enjoyed it.
Great questions.
And I love the conversation.
Thank you again.
Thanks.
As always, people on the program may have interest in the stocks they talk about,
and the Motley Fool may have formal recommendations for or against.
So don't buy or sell stocks based solely on what you hear.
All personal finance content follows Motley Fool editorial standards and is not approved by advertisers.
Advertisements are sponsored content and provided for informational purposes only.
To see our full advertising disclosure, please check out our show notes.
For the Motley Fool Money team, I'm Mac Greer.
Thanks for listening, and we will see you tomorrow.
