We Study Billionaires - The Investor’s Podcast Network - TECH011: The History of AI and Chatbots w/ Dr. Richard Wallace (Tech Podcast)
Episode Date: December 31, 2025. Dr. Richard Wallace, creator of ALICE and AIML, shares his journey from 1990s chatbot innovation to today’s AI frontiers. He and Preston also explore AI’s learning methods, human vs machine intelligence, and the evolving role of creativity in artificial minds. IN THIS EPISODE YOU’LL LEARN: 00:00:00 - Intro 00:02:46 - How a 1990 New York Times article inspired Richard Wallace’s AI journey 00:03:42 - What made the ALICE chatbot revolutionary in its time 00:07:20 - The principles behind minimalist robotics and their influence on AI 00:12:00 - How AIML works and why it was crucial to early chatbot success 00:16:30 - The contrast between supervised and unsupervised learning methods 00:17:20 - Why LLM decision-making processes remain hard to interpret 00:20:33 - How humans and chatbots use language in surprisingly robotic ways 00:24:43 - The philosophical roots of the Turing Test and its modern critiques 00:40:19 - Insights on combining symbolic and neural approaches in AI today 00:41:18 - What Wallace is working on now at Franz in medical AI predictions Disclaimer: Slight discrepancies in the timestamps may occur due to podcast platform differences. BOOKS AND RESOURCES The platform behind ALICE: Pandorabots.com. Website: Franz. Related books mentioned in the podcast. Ad-free episodes on our Premium Feed. NEW TO THE SHOW? Join the exclusive TIP Mastermind Community to engage in meaningful stock investing discussions with Stig, Clay, Kyle, and the other community members. Follow our official social media accounts: X (Twitter) | LinkedIn | Instagram | Facebook | TikTok. Check out our Bitcoin Fundamentals Starter Packs. Browse through all our episodes (complete with transcripts) here. Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool. Enjoy exclusive perks from our favorite Apps and Services.
Get smarter about valuing businesses in just a few minutes each week through our newsletter, The Intrinsic Value Newsletter. Learn how to better start, manage, and grow your business with the best business podcasts. SPONSORS Support our free podcast by supporting our sponsors: Simple Mining Linkedin Talent Solutions Alexa+ HardBlock Unchained Amazon Ads Vanta Abundant Mines Horizon Public.com - see the full disclaimer here. References to any third-party products, services, or advertisers do not constitute endorsements, and The Investor’s Podcast Network is not responsible for any claims made by them. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://theinvestorspodcastnetwork.supportingcast.fm
Transcript
You're listening to TIP.
Hey, everyone, welcome to this Wednesday's release of Infinite Tech.
Today's episode is a deep dive into the early foundations of conversational AI
and what they reveal about today's language models.
My guest is Dr. Richard Wallace, a pioneering chatbot creator and three-time Loebner
Prize winner, best known for building Alice and the AIML language that powered early
conversational systems.
The Loebner Prize was an annual competition designed to implement Alan Turing's imitation
game, awarding the chatbot that could most convincingly carry on a human-like text
conversation with judges, just as an FYI.
So during the show, we talk about why simplicity beat scale in the early AI race, how supervised
rule-based systems differ from modern LLMs, what the Turing test actually misses, and
why combining symbolic reasoning with neural networks may matter more than raw model size.
This is surely an episode you will not want to miss.
So without further ado, let's jump right into the conversation.
You're listening to Infinite Tech by the Investor's Podcast Network, hosted by Preston Pysh.
We explore Bitcoin, AI, robotics, longevity, and other exponential technologies through a lens of abundance and sound money.
Join us as we connect the breakthroughs shaping the next decade and beyond, empowering you to harness the future today.
And now, here's your host, Preston Pysh.
Hey, everyone, welcome to the show.
I'm here with Richard Wallace.
And wow, this is really exciting for me to talk to such a pioneer in this space in the
chat bot, AI space.
And first of all, welcome to the show, excited to have you here.
Thank you.
Thank you, Preston.
It's a pleasure to be here as well.
So where I want to start is I'm just super curious how people kind of fall into their field
of expertise.
And when I look at what you accomplished very early on back in the 1990s, I'm curious what drove
you or motivated you to be paying attention to chat bots and the Turing test.
And all of that, it's such an early phase because I think for most of the listeners, they
know that all this stuff has really come to fruition in the last five or ten years that's
got on everybody's radar.
But you were doing this literally decades before anybody was even aware of these ideas of
chat bots and whatnot. So what was your initial motivation to get into this kind of stuff?
That's absolutely right. You know, I like to say that nobody knew what artificial intelligence
was until a couple of years ago. Yeah. And now I'll be sitting in a restaurant somewhere and
I'll hear a conversation at the table next to me and they're talking about AI. Well,
anyway, there are several threads that came together that inspired me to work on the chatbot
Alice. And I'll just pull on a couple of those threads here. One is that in 1990,
I read an article in the New York Times about the first Loebner Prize contest.
Now, the Loebner Prize was an annual Turing test, an annual contest based on the Turing test,
funded by a rather eccentric philanthropist Hugh Loebner.
And the story with the very first contest was that none of the programs competing came
close to passing the Turing test.
They were all just terrible chatbots.
But Loebner awarded a bronze medal every year to the chatbot that was ranked highest by the judges in terms of being the most human.
And that first year, the bot that won was simply based on the old Eliza psychiatrist program, which, if you're familiar, was a very primitive chatbot developed by Joseph Weizenbaum in 1966.
And it, you know, had very few responses, but it had some clever tricks to it. You know, it could
sort of match keywords in the input and had canned responses associated with those keywords.
It could invert pronouns. So, you know, if I said, I came here to talk to you, then it would
repeat back, you came here to talk to me. So it did that sort of pronoun swapping trick. But when I was
in graduate school in the 1980s,
the Eliza program was basically considered kind of a dead end or at best kind of a hoax in AI.
And not only that, the inventor Joseph Weizenbaum ended up pulling the plug on it because he thought it was too dangerous.
He thought that people were reading too much into it, as if something was actually there.
It was a psychiatrist program, so people were trusting it with their personal issues and problems.
They were surprised to find out that Weizenbaum could read all the transcripts of their conversations.
Wow.
And so he wrote a whole book after that, Computer Power and Human Reason, where he criticized the whole field of AI and his Eliza program in particular.
You know, it's really hard to imagine this now that someone would come up with a new AI application that's very engaging and popular.
If people are using it, then they would say, oh, no, this is too dangerous.
We have to put the genie back in the bottle.
I think most people would have run out and tried to find venture capital to start a company to commercialize it.
True.
Very true.
But isn't this fascinating that the thing that he discovered very early on in the 90s, he started playing around with this in the 60s, which is mind-blowing to me.
But what he found in the 90s was that there was a huge centralization concern with privacy in what people were putting.
into these discussions, which is now a major talking point with AI.
And, I know I'm generalizing here, but it doesn't seem like the
population really cares too much or even thinks about these issues that caused him to shut down
his entire effort behind this.
I don't know.
I find that really fascinating that he discovered this, what, four decades before it became,
like, or three decades before it became something that the rest of the world should be
very concerned about. Well, he really discovered it in the 1960s. Wow. When he first created the
program. The other ironic thing about Eliza was that up until very recently, I would say, well,
let's say 20 years ago, Eliza was by far the most widely distributed, popular and well-known
AI application. If you knew anything about AI up until maybe the year 2000, then you would know
about Eliza.
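The keyword-matching and pronoun-swapping tricks described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not Weizenbaum's actual program; the rules, reflections, and responses here are invented for the example:

```python
import re

# Illustrative Eliza-style reflections for the pronoun-swapping trick.
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "you": "me", "your": "my", "am": "are"}

# Keyword pattern -> canned response template ({0} takes the reflected tail).
RULES = [
    (re.compile(r"i came here to (.*)", re.I), "You came here to {0}?"),
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
]

def reflect(fragment):
    """'talk to you' -> 'talk to me': swap first/second-person words."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text):
    for pattern, template in RULES:
        match = pattern.match(text.strip())
        if match:
            return template.format(reflect(match.group(1)))
    return "Tell me more."  # default canned response when no keyword matches
```

For instance, `respond("I came here to talk to you")` yields "You came here to talk to me?", the exact inversion Wallace describes.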
I'm curious when you read this, I think you said New York Times article in 1990, did you ever
think that you would be the winner of this Loebner Prize a decade later?
Well, that planted a seed in my mind, and I didn't really do anything about it for about
five years.
Okay.
So another thread that led to the development of Alice or the inspiration for Alice was around
that time in the early 90s.
It was the end of the Cold War.
And so there was decreased amount of government funding available for AI and robotics research compared to the 1980s.
And so a number of us in the robotics field, I was working in robotics at the time, got interested in the idea of minimalism, robot minimalism.
And basically that was the idea that we could build robots with very simple, inexpensive sensors and actuators, you know, very commodity microprocessors.
And as a result of that, you could actually get more lifelike behavior out of these robots than you could with, you know, approaches people that tried in the past with much larger computers and so forth.
One of the interesting inventions that came out of that period was the Roomba.
Yeah.
If you think of the Roomba rolling around and, you know, bumping into things and changing its direction, it's all basically just a stimulus response application.
We call that stateless.
So it's sensing something and then taking an action based on what it's sensing, you know, changing direction, for example.
Yeah.
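The stateless, stimulus-response loop described here can be sketched as a controller whose action is a pure function of the current sensor reading. A toy illustration only; the sensor and action names are made up, not the Roomba's actual control interface:

```python
import random

def stateless_step(bumped):
    """One sense-act cycle. No memory, no map, no history: the action
    depends only on the current stimulus, which is what 'stateless' means."""
    if bumped:  # stimulus: the bump sensor fired
        return random.choice(["turn_left", "turn_right"])  # hypothetical actions
    return "drive_forward"
```

Running this in a loop against live sensor readings produces the wander-and-bounce behavior without the robot storing any state between cycles.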
So that whole approach of minimalism, you know, was also in my mind at the time.
And that kind of dovetailed with the very simple approach of the Eliza program, which was also kind of a stimulus response.
You know, it was so simple that it could respond very quickly.
It didn't have to go and do a lot of computations to come up with the responses.
What was your inspiration for thinking that simplicity was going to lead you to better results?
Was there something in your life or something that you were reading at the time or, you know,
what drove you to that intuition?
Like I said, we were working on the minimalist philosophy of robotics.
Oh, okay.
And yeah, at that time, I was working on the development of a robot eye.
and by that I mean a visual sensor that's based on the architecture of the human eye.
So the human eye differs from a TV camera in the sense that a TV camera is basically a square grid of square pixels.
But the human eye is more like concentric rings of pixels with higher and higher resolution towards the center.
We call that a log map.
And so we developed a sensor that had that log map pixel organization. In order to use a camera like that effectively, you have to be able to point it.
So we developed a little motor, a high-speed pointing motor, based on a direct drive design,
and that motor could point the camera, the eye camera, and pan and tilt directions very, very quickly.
And again, it was a very simple kind of actuator, simple sensors, and it could move very quickly,
actually faster than the human eye. So you'd sort of see this thing whipping around
and looking at different things. And it was very lifelike. So just for the audience to understand,
so in 2000, I believe 2001 and 2004, Dr. Wallace won the Loebner Prize, which is this Turing
test with his Alice protocol or chatbot that he had created. And I guess for me, what was
the major insight that you think that you had back then? You talk about
this idea of simplicity, but what would you say was the major insight that you had to outperform
everybody else that was competing on what is, I mean, for anybody listening to the most complex,
challenging, you know, problem you could ever try to go after, right? Like, what would you say
was your keen insight that you had that allowed you to do this? Well, it was basically the idea
that I could build on the Eliza program. So the Eliza program had about 200 rules, 200, you know,
stimulus response rules. And you can think of that as a pattern and a response. And my idea was
to build kind of a super Eliza where instead of 200 rules, you had thousands and
thousands of rules. Yeah. And in fact, by the time I was entering those contests, I got Alice up to
about 50,000 patterns and responses. Wow. Amazing. So Richard, one of the things that I found really
fascinating about you back at this time was that you came up with this artificial intelligence
markup language.
You effectively, for all intents and purposes, and correct me if I'm mischaracterizing
this, you had to come up with your own language in order to kind of build efficiency into
how this chat bot was working, which is, you know, as a person who's not very good with languages,
I'm much more of a math person.
I'm reading this and I'm thinking, this is mind-blowing.
So talk to us about this.
And what was this insight that you had to come up with the AIML artificial intelligence markup
language at the time that you did this?
Well, AIML is based on XML, and XML was very popular at the time.
One thing that appealed to me about XML for the purpose of writing chatbots was that I
always say XML has an implicit print statement.
So when you write the responses, you don't have to put in an expression that says
print, blah, blah, blah, something, you know, between the parentheses because the XML already
just provides the text inside the markup. So the response is just the text inside the markup.
And the basic unit of knowledge in AIML I call the category, which is like the rules I was
talking about a second ago. So the category consists of a pattern that matches some input,
some natural language input.
And then a response called the template.
The reason it's called the template is because it's not exactly the answer,
but it's a template for the answer that can be populated with various other things.
And then there was also a recursive element to it
where the response could actually simplify the input into a kind of simpler input.
So the example of that is, I want you to tell me who you are right now.
So you can reduce that by removing the right now.
So I want you to tell me who you are.
And then you can remove the I want you, so it reduces to just tell me who you are.
And then that reduces to who are you.
So there was that recursive element built into the responses as well.
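The category and its recursive reduction can be sketched roughly as follows. This is a toy re-implementation in Python, not the actual AIML interpreter: the "SRAI:" prefix stands in for AIML's srai recursion element, the patterns allow a single "*" wildcard, and the knowledge base is invented for the example:

```python
# Toy AIML-style knowledge base: pattern -> template. A template beginning
# with "SRAI:" rewrites the input and re-enters the matcher, mimicking
# AIML's srai recursive-reduction element.
CATEGORIES = {
    "WHO ARE YOU": "I am a chatbot.",
    "TELL ME WHO YOU ARE": "SRAI: WHO ARE YOU",
    "I WANT YOU TO *": "SRAI: {star}",   # strip a leading "I want you to"
    "* RIGHT NOW": "SRAI: {star}",       # strip a trailing "right now"
}

def match(pattern, text):
    """Return the text bound to '*' (possibly empty), or None on no match."""
    if pattern == text:
        return ""
    if pattern.endswith("*") and text.startswith(pattern[:-1]):
        return text[len(pattern) - 1:].strip()
    if pattern.startswith("*") and text.endswith(pattern[1:]):
        return text[:len(text) - len(pattern) + 1].strip()
    return None

def respond(text):
    text = text.upper().strip()
    for pattern, template in CATEGORIES.items():  # first matching category wins
        star = match(pattern, text)
        if star is not None:
            if template.startswith("SRAI: "):
                return respond(template[6:].format(star=star))
            return template
    return "I have no answer for that."
```

With this, "I want you to tell me who you are right now" reduces step by step, stripping "I want you to" and "right now" and rewriting the remainder, until it bottoms out at the "WHO ARE YOU" category, exactly the reduction chain described above.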
So in general, you were taking language and making it way more efficient.
And I'm like, where do you even start with something like that?
I mean, you literally have to go through.
There's just so many different variations of language, and I think of the complexity of this.
I wouldn't even know where to begin to start writing something that makes it more efficient.
Right.
Yeah.
So how did you think about solving that problem?
Well, it all goes back to the conversation logs.
So just like Weizenbaum, you know, I could read the transcripts of the conversations people were having.
By the way, this would have never worked without the internet, without the World Wide Web.
Yeah.
Because with the World Wide Web, I could start to accumulate conversations from a very large audience of people.
And by looking at the transcripts of those conversations, I could basically program responses to the things people were saying.
Later on, I realized that there was kind of a Zipf distribution over the things people were saying.
So, you know, there's kind of a most common thing people say, which is hello.
And then who are you and how are you and I like something.
So you can create the responses in order of how frequently people say particular things.
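That log-driven workflow, counting what people actually say and then authoring responses in descending frequency order, can be sketched like this (the log lines are invented for illustration):

```python
from collections import Counter

# Hypothetical inputs mined from conversation logs (invented examples).
log_inputs = [
    "hello", "who are you", "hello", "how are you",
    "hello", "i like music", "who are you", "hello",
]

# A Zipf-like distribution: a handful of inputs dominate the logs, so
# writing responses in descending frequency order covers the most
# conversations with the least authoring effort.
priority = [text for text, count in Counter(log_inputs).most_common()]
```

Here "hello" comes out first, then "who are you", mirroring the ordering Wallace describes.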
Let's take a quick break and hear from today's sponsors.
All right.
I want you guys to imagine spending three days in Oslo at the height of the summer.
You've got long days of daylight, incredible food, floating saunas on the Oslo Fjord.
And every conversation you have is with people.
who are actually shaping the future. That's what the Oslo Freedom Forum is. From June 1st through
the 3rd, 2026, the Oslo Freedom Forum is entering its 18th year bringing together activists,
technologists, journalists, investors, and builders from all over the world, many of them
operating on the front lines of history. This is where you hear firsthand stories from people using
Bitcoin to survive currency collapse, using AI to expose human rights abuses, and building
technology under censorship and authoritarian pressures. These aren't abstract ideas. These are tools
real people are using right now. You'll be in the room with about 2,000 extraordinary individuals,
dissidents, founders, philanthropists, policymakers, the kind of people you don't just listen to,
but end up having dinner with. Over three days, you'll experience powerful mainstage talks,
hands-on workshops on freedom tech, and financial sovereignty, immersive art installations, and
conversations that continue long after the sessions end. And it's all happening in Oslo in June.
If this sounds like your kind of room, well, you're in luck because you can attend in person.
Standard and patron passes are available at OsloFreedomForum.com, with patron passes offering
deep access, private events, and small group time with the speakers. The Oslo Freedom Forum isn't
just a conference. It's a place where ideas meet reality and where the future is being built by
people living it. If you run a business, you've probably had the same thought lately. How do we make
AI useful in the real world? Because the upside is huge, but guessing your way into it is a risky
move. With NetSuite by Oracle, you can put AI to work today. NetSuite is the number one AI cloud
ERP, trusted by over 43,000 businesses. It pulls your financials, inventory, commerce,
HR, and CRM into one unified system.
And that connected data is what makes your AI smarter.
It can automate routine work, surface actionable insights, and help you cut costs while making
fast AI-powered decisions with confidence.
And now with the NetSuite AI connector, you can use the AI of your choice to connect directly
to your real business data.
This isn't some add-on, it's AI built into the system that runs your business.
And whether your company does millions or even hundreds of millions, NetSuite
helps you stay ahead. If your revenues are at least in the seven figures, get their free
business guide, Demystifying AI, at netsuite.com slash study. The guide is free to you at netsuite.
com slash study. NetSuite.com slash study. When I started my own side business, it suddenly felt like
I had to become 10 different people overnight wearing many different hats. Starting something
from scratch can feel exciting, but also incredibly overwhelming and lonely. That's why having the
right tools matters. For millions of businesses, that tool is Shopify. Shopify is the
commerce platform behind millions of businesses around the world and 10% of all e-commerce in the U.S.
from brands just getting started to household names. It gives you everything you need in one place,
from inventory to payments to analytics. So you're not juggling a bunch of different platforms.
You can build a beautiful online store with hundreds of ready-to-use templates, and Shopify is packed with helpful AI tools that write product descriptions and even enhance your product photography.
Plus, if you ever get stuck, they've got award-winning 24-7 customer support.
Start your business today with the industry's best business partner, Shopify, and start hearing...
Sign up for your $1 per month trial today at Shopify.com slash WSB.
Go to Shopify.com slash WSB.
That's Shopify.com slash WSB.
All right.
Back to the show.
My question for you is, and I'm struggling to find a way to frame this, but you wrote
these thousands of rules and you were also working on a way to compress or make the English
language more efficient.
What did you fundamentally learn through the experience
of writing these thousands of rules and rules of thumb of compression?
Because when I think about it, like you're doing, like we look at these LLMs and machines are doing
all of this really hard and complex work.
But I would imagine what you were doing there in the 90s and early 2000s was exactly what
all these LLMs are doing today, but you were doing it manually.
And so I guess it's almost, I hear these people that say, well, we have no idea what's behind
these ones and zeros in all these LLMs,
which I guess is a true statement, right?
Yeah.
But if a human was going to maybe be able to understand what it is that it's doing,
I think you would be one of the very few people on the planet that could maybe help us
understand what that is because you did this manually for so many years.
Right.
Well, there's so many things wrapped up in that question.
So let me see if I can pull it apart.
Yeah.
So there's always been a kind of tension in the history of artificial intelligence.
between, let's say, supervised learning and unsupervised learning.
So what I was doing was what we call supervised learning because I was playing the role
of a teacher or, you know, a guide.
So whenever I added a new response, it was manually added, as you're saying, driven by a
particular input that I saw in the conversation logs.
And so the way that I'm teaching the robot is by acting as its teacher basically and saying,
You know, when you see this, you should say that.
Yeah.
And, you know, that's in contrast to unsupervised learning, which is what these
LLMs are doing.
They're basically, you know, trying to accumulate a lot of inputs and, you know, find
the neural network weights that match it to particular outputs.
And so with that technique, you can get phenomenal results, obviously.
But as you're saying, it's difficult to know how the LLM came up
with particular responses. Whereas in the supervised learning case, where it's all a symbolic
process, it's very easy to trace back through the logic of the program and see what caused
a particular response to be generated. And I always say that people who do supervised learning
approaches spend all of their time doing creative writing, which is what I was doing with the
AlistBot. But people who do unsupervised learning spend all of their time,
deleting crap from the database. Yeah. And that's sort of what's going on with LLMs now is,
you know, they're having to put a lot of work into filtering to make sure they don't say
anything inappropriate or offensive or political. And, you know, that ends up being a lot of
manual work as well. Yeah. And I guess my understanding is that everybody that's on the cutting
edge of AI today, like, that's the holy grail for them is to get the human out of the loop and for it to
be completely AI generated and filtered and just like there's no humans there.
As a person who deeply understands this, and the way that you frame that is this back and
forth and there's consequences to one side and the other, is there a moment where you think
that they will be able to get away with completely removing the human from the loop and
it progressing in a way that's actually beneficial?
Or do you think that the more that they lean into removing the human out of the loop that
they actually are setting themselves up for a systemic failure because it's going to spiral
into this AI slop, if you will, or it's creating and generating content in a direction
that's so fast and so extreme that they get away from human filtering altogether and it just
kind of turns into this almost like a runaway virus, if you will. Is that how you kind of
see this, that it needs to be balanced or is it even possible for it to go in that direction
without humans.
Well, it's so hard to predict the future of this. I would have never expected this whole
LLM development to come along in the first place.
Yeah.
But, you know, I always think of a child learning language.
And there are big differences here between a child learning language and an LLM.
Yeah.
You know, a kid doesn't have to scan the whole internet to learn how to speak a language.
In fact, they're pretty good at, you know, what we call one-shot learning.
You know, if you say to a kid, this is a dog, then they can instantly
recognize every dog in the world as a dog.
Yeah.
But what also comes into play here is the supervised, unsupervised learning dichotomy,
which is if you are a kid and you have a good teacher and good parents,
you'll learn to speak very well.
But if you're a kid who has to pick up language on the street without any supervision,
then your language learning won't be nearly as good.
And so the LLM is more like the kid out on the street,
learning language without any supervision.
And that's why they learned so much inappropriate and offensive material and so on.
What did the wins teach you back in the day when you were winning this about how humans judge intelligence?
Well, you know, I can say the same thing about LLMs now that I said about my chatbot back then,
which is that people say, well, these chatbots are becoming more and more like humans.
And, you know, I have a different opinion about that, which is that what it's really showing
us is that people are more like robots than we would like to think we are. Because it's not that
the robots are becoming more like humans. It's that it's revealing to us how robotic we are. And
back in the early days of working on Alice, I came to realize that most people most of the time
are saying things that they themselves have said before or that they've heard other people say
before. And even when they're writing, they're basically synthesizing thoughts and ideas that
are not necessarily original. And all of these chatbots work because language is predictable,
and predictable means robotic. So I always say that if we were all William Shakespeares
uttering an original line of poetry with every sentence we spoke, then these chatbots would never
work because they're based on language being predictable, not original.
Yeah.
Yeah.
Is it fair to say that you would suggest that humans judge intelligence by their flow or by
this response of like most people are looking at that and they're saying, oh, that's intelligence,
but then you're looking at it and you're saying that's not intelligence.
It's just repetition.
I think that's kind of what you're getting at.
Yeah.
So repetition, robotic, predictable.
Yeah.
You know, it's interesting.
I just read something, it was like last week, and I think Google came out with this many months ago,
but for them to do this long-term learning where it has much more of a memory, it's highly based
on whether something's novel or not relative to its index of everything that it's been trained
on. And when it sees this novel thing that it wasn't predicting or expecting to come next,
that it then stores that in its long-term memory, or I apologize for the terminology here,
Dr. Wallace, but it flags it as something that is worthy of being remembered because it's novel
and so different and outside of what it would have predicted to be the next thing. And it's interesting
that it's in keeping with Claude Shannon's information theory and how it's all aligned.
I'm curious if you have any opinions on that in particular and whether you think that that
has a key component to intelligence or how new things are discovered in knowledge in general.
Well, that really gets to the heart of what I think the difference is between humans and robots, which is that, like I said, I think most people most of the time are acting like robots. They're just acting in kind of a stimulus response fashion. Just as an aside, I always used to say that most human conversation is stateless, meaning that what I'm saying to you right now only depends on the question that you just asked me. And we can forget the whole history of our
conversation up to this point.
You know, one of the pieces of evidence for that is, you know, if you can imagine yourself
having a casual conversation with someone at a party, say, and then you say, oh, where did you
go to college?
And they say, oh, I went to Harvard.
I already told you that.
You kind of forgot that you had already talked about college earlier in the conversation.
Yeah.
And, you know, you're just responding to the most recent thing you heard and most recent input.
But what really gets to the difference between humans and robots is, even though most people,
most of the time, are speaking in this kind of reactive, behaviorist way, it is possible for people
to have original thoughts and be creative.
And it's almost like a muscle that you need to exercise in order to build it up.
If you want to break out of that robotic mold, then you have to put some effort into trying to be
creative and original with your thoughts and thinking.
and ideas. Let's take a quick break and hear from today's sponsors. No, it's not your imagination.
Risk and regulation are ramping up and customers now expect proof of security just to do business.
That's why Vanta is a game changer. Vanta automates your compliance process and brings compliance,
risk, and customer trust together on one AI-powered platform. So whether you're prepping for a
SOC 2 or running an enterprise GRC program, Vanta keeps you secure and keeps your
deals moving. Instead of chasing spreadsheets and screenshots, Vanta gives you continuous automation
across more than 35 security and privacy frameworks. Companies like Ramp and Ryder spend 82%
less time on audits with Vanta. That's not just faster compliance, it's more time for growth.
If I were running a startup or scaling a team today, this is exactly the type of platform
I'd want in place. Get started at Vanta.com slash billionaires. That's Vanta.
Ever wanted to explore the world of online trading, but haven't dared try?
The futures market is more active now than ever before, and plus 500 futures is the perfect place to
start.
Plus 500 gives you access to a wide range of instruments, the S&P 500, NASDAQ, Bitcoin, gas, and
much more.
Explore equity indices, energy, metals, forex, crypto, and beyond.
With a simple and intuitive platform, you can trade from anywhere, right from your phone.
Deposit with a minimum of $100 and experience the fast, accessible futures trading you've been waiting for.
See a trading opportunity.
You'll be able to trade it in just two clicks once your account is open.
Not sure if you're ready, not a problem.
Plus 500 gives you an unlimited, risk-free demo account with charts and analytic tools for you to practice on.
With over 20 years of experience, Plus 500 is your gateway to the markets.
Visit Plus500.com to learn more.
Trading in futures involves risk of loss and is not suitable for everyone.
Not all applicants will qualify.
Plus 500, it's trading with a plus.
Billion dollar investors don't typically park their cash in high-yield savings accounts.
Instead, they often use one of the premier passive income strategies for institutional investors.
Private Credit. Now, the same passive income strategy is available to investors of all sizes
thanks to the Fundrise income fund, which has more than $600 million invested and a 7.97%
distribution rate. With traditional savings yields falling, it's no wonder private credit
has grown to be a trillion dollar asset class in the last few years. Visit fundrise.com
slash WSB to invest in the Fundrise Income Fund in just minutes.
The fund's total return in 2025 was 8%, and the average annual total return since inception is
7.8%. Past performance does not guarantee future results. Current distribution rate as of 12/31/2025.
Carefully consider the investment material before investing, including objectives,
risks, charges, and expenses. This and other information can be found in the income fund's
prospectus at fundrise.com slash income. This is a paid advertisement.
All right. Back to the show. Do you think that the Turing test actually measures intelligence,
or is it something else entirely? I'm so happy you asked me about the Turing test.
So the Turing test, most people understand the Turing test as a sort of game with three
players. You have a person who's called the interrogator or the judge.
And then they're communicating through a teletype, a text-only medium, you know, much like texting on your phone, but without any audiovisual just typing.
And then the two entities that the judge is talking to, one is a human and one is a machine.
So then the judge has to decide which one is the human and which one is the machine.
And if the judge misidentifies the machine as the human, then the machine is said to pass the Turing test.
You see, this has a big problem as a scientific experiment because it's not really clear
how often the interrogator has to, you know, misidentify the human.
Is it 50% of the time?
75% of the time?
100% of the time?
What does that even mean?
Yeah.
The robot is more human than a human.
So in Turing's 1950 paper, "Computing Machinery and Intelligence," he actually describes two
different versions of the test or the game. And earlier in the paper, he described something
called the imitation game, which, as far as I understand, was based on a real parlor game that
people played in Victorian England. And in this game, again, there are three players,
the judge or the interrogator. And the other two players are a man and a woman. And let's just set
aside the gender issues and the context of writing in 1950 here. So there's a man and a woman,
and they're sequestered away, in the Victorian England case, in different rooms.
And then the judge is sending them handwritten questions back and forth.
And the judge's job is to decide which one is the man and which one is the woman.
Now, furthermore, Turing stipulated that the woman should always tell the truth and the man
should always lie.
So now, if you ask the man, are you a woman?
He would say yes, because he has to lie.
Okay.
And then, you know, the judge's job is to try to figure out which one is the man and which one is the woman.
Now, if you replace the lying man in that scenario with a machine, okay, let's say you replaced
the man with a very crude chatbot like ELIZA or even ALICE.
Yeah.
Then the judge could identify the woman correctly 100% of the time.
Okay.
Because it's clear that only one of the players is a human at all, and that has to be the woman.
So now, as a scientific experiment, we can say, let's run this experiment with, you know,
100 judges and 100 men and 100 women.
I don't know exactly how many are needed for statistical accuracy, but let's just say we did a
random sample where we collected the results of this game for, you know, a large number of players,
then you could measure a certain percentage of the time that the judge would identify the woman correctly.
And, you know, let's say that's 70% of the time.
Now, if you replace the lying man with a computer and the computer is a very good AI that can actually play the role of the lying man,
then you should get closer and closer to that actual 70% measurement.
So that's actually a better scientific experiment than the Turing test.
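Wallace's measurement idea can be sketched as a toy Monte Carlo experiment. Everything here is illustrative: the 70% baseline and the per-game probabilities are made-up numbers standing in for real judge data:

```python
import random

def judge_one_game(p_correct, rng):
    """One imitation game: the judge names the truthful player
    correctly with probability p_correct."""
    return rng.random() < p_correct

def measure_accuracy(p_correct, n_games=20_000, seed=42):
    """Run many games and report the fraction the judge got right --
    the measurable statistic the experiment is after."""
    rng = random.Random(seed)
    wins = sum(judge_one_game(p_correct, rng) for _ in range(n_games))
    return wins / n_games

# Hypothetical human baseline: judges pick the woman correctly ~70% of the time.
human_baseline = measure_accuracy(0.70)

# A crude chatbot playing the lying man is spotted every time,
# while a strong AI should pull the judges back toward the baseline.
crude_bot = measure_accuracy(1.00)
strong_ai = measure_accuracy(0.72)
```

The closer the machine condition's measured accuracy sits to the human baseline, the better the machine is playing the lying man, which is exactly the comparison that makes this version a repeatable experiment.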
Very interesting.
Yeah, yeah.
The Loebner contest was really based on the original standard Turing test.
Turing test, okay.
Yeah.
And, you know, the rules change from year to year depending on, you know, who was hosting
the contest.
But Loebner's rule was basically that if 50% of the judges, usually there were four judges,
so two out of four judges, misidentified the robot as a person, then he would award the silver
medal for passing the Turing test.
That's so cool.
It was never awarded, by the way.
It was never awarded. Interesting.
Yeah.
Yeah.
If you could get in a time machine right now and go back to your days, call it 2000 when you
had done this, what would be the thing that you would whisper to yourself as a hint as to
how to improve the chat bot that you had back then?
I would probably tell myself, don't even do this.
Why?
Because you know how hard it is.
No.
Okay.
Why?
There was no money to be made from chatbots until very recently.
You were very early.
The Loebner contest was always the domain of, you know, hobbyists and amateur programmers.
There were a few, you know, academic entries, but no, no big companies ever got involved in it.
And then in the 2000s, I organized a number of chatbot conferences, you know, international chatbot conferences.
And we had a hard time getting 25 people to attend.
Oh, really? Okay.
Yeah.
So, you know, after many years of really struggling with this and trying to figure out how to make a living with chatbots.
And I did co-found a company called Pandorabots, which is, you know, based on attempting to commercialize the
AIML bots. But, you know, after a while, in the early teens, I should say,
I just decided to get out of the field completely and I went to work in healthcare.
Yeah.
But then in the past five or six years, I've gotten back into AI as it's become more, you know,
lucrative, I should say.
It seems like in 2017, Google came out with this paper. It was called "Attention Is All You Need."
And this seemed to be a very seminal breakthrough in
how to, for all intents and purposes, do what you were doing in a very manual way and let machines
do it way faster and with way more horsepower and more data, right? I'm curious when this paper
came out. Did you read it when it first came out and were you kind of aware of this? Or did it
kind of hop on your radar a couple years after when we started seeing the breakthroughs?
Yeah. I was really not paying attention to it at the time. Like I said, I was, yeah,
working in healthcare. You know, I don't think the LLM industry really came to my attention until,
you know, we started hearing about GPT. Do you think that that paper was kind of like a really
important seminal piece of work for people to kind of understand how to start doing this in a
mechanical machine kind of way? Yeah, obviously. Obviously. That was a breakthrough. Yeah.
Wow. And so in your own words, what would you say, I mean, we know attention is a big piece of it.
But I think for somebody that just kind of hears that label, it's like, okay, well, what does that mean?
If you were going to try to explain to somebody in a very simple way, like, what is that paper saying that has enabled, you know, machine learning to do what it does?
Well, in a way, I'm reminded of the work we talked about earlier, which was the robot eye in the early 90s, because that was also an attention-based mechanism.
So I described how in order to make use of that log map arrangement of pixels where there's high resolution towards the center, you have to be able to point the camera so that the high resolution can be aimed at something interesting.
Well, how do you know what's interesting?
It's by if you see something in the periphery, for example, movement.
You want to move your eye towards the thing that you're seeing in the periphery and place the attention on that.
So attention has to do with focusing your highest-resolution sensory capability on whatever
seems most interesting in a scene.
I think there's an analog for that in the LLM version of attention as well.
You know, they're sort of swinging in the direction of where the gaze of the robot is looking
depending on what they see in the periphery.
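That gaze-steering idea can be sketched as a toy saliency search: scan the whole field cheaply, then point the high-resolution center at whatever stands out. Here local pixel variance stands in for "interesting" (so edges and bright-on-dark spots score high); the image and patch size are invented for illustration:

```python
import numpy as np

def most_interesting_patch(image, patch=8):
    """Scan an image in patch-sized blocks and return the (row, col)
    of the block with the highest pixel variance -- a crude stand-in
    for spotting 'something interesting in the periphery'."""
    h, w = image.shape
    best, best_pos = -1.0, (0, 0)
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            v = image[r:r + patch, c:c + patch].var()
            if v > best:
                best, best_pos = v, (r, c)
    return best_pos

# A flat grey image with one high-contrast square near the corner:
img = np.full((64, 64), 0.5)
img[40:48, 40:48] = np.tile([0.0, 1.0], (8, 4))  # alternating light/dark patch
print(most_interesting_patch(img))  # -> (40, 40), where the "eye" should point
```

The returned coordinates are where a foveated camera would swing its high-resolution center next.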
Okay, so this is super, I love this example because it's very physical.
and you can kind of make sense of it very simply because it's dealing with vision.
And so when you're changing your attention and you're able to zoom in because you have the
capacity to zoom in on something, how are you filtering or knowing what's novel in that
broader sight picture in order to know to adjust the focus to that thing?
What gives us that capacity to know, oh, well, I'm looking at you and now I'm focusing on
the tree back behind you.
And I'm zooming in on that and I'm putting my attention there.
What would be that insight in order to say, oh, that's different.
That's something I need to dial in on or pay more attention to.
Yeah.
A long time ago, a guy called Hans Moravec, who is very interesting.
We should talk about Hans Moravec.
He came up with an attention mechanism called an interest operator.
And this is for computer vision again.
Okay.
And it's basically that things in your visual field that have high variance, you know,
a high ratio of dark to light, are more interesting than other things. So that would typically be
edges like the edges of the tree you just described or corners of things or just any sort of bright
spot against a dark background or vice versa. And then recognizing those in the periphery of
your visual field would cause you to move the center of your visual field towards whatever
the interest operator is highlighting. Fascinating. Okay. Here's an odd question for you.
Do you think your real subject of study ended up being humans rather than machines?
Oh, well, you know, I'm a computer programmer.
So I was always more interested in the machine side of it.
I think I did learn a lot about human conversation from monitoring those conversation logs.
The reason I asked this question is, you know, in kind of research and preparation for the interview,
it seemed to me that you have this opinion, I suspect, and correct me if I'm saying any of this wrong.
But it seems like you were not convinced that any of these chatbots were actually saying anything intelligent.
It was just, it was this canned response that was coming back.
And then the reaction that humans had was like, wow, this thing is real.
And there's like something behind it.
And so I guess that's the impetus for the question, because you were, I suspect, fascinated
at the response of people and how duped, I guess, they were by interacting with some of these chatbots.
So I guess that's more of the impetus for the question.
And would you agree with everything that I just said?
Well, I used to categorize the users or the clients, I call them, into three categories,
A, B, and C.
Okay.
So the A clients are abusive.
Okay.
So they're going to say, how can I put this, you know, very inappropriate things
to the chatbot, and you see those in the conversation logs.
Although you always have to wonder if someone is saying, you know, I hate you or I love you
even, is that what they're, what they really have in mind or are they just, you know, trying to
get a response out of the robot and testing the limits.
Yeah. Testing the limits.
Yeah.
And then the next category B are just average users.
So those category B people were the ones who could suspend their disbelief and they would be very
very engaged with the bot and have, you know, very long conversations come back and continue
their conversations and so on. And so that would be the group that, you know, as you're saying,
would be kind of reading more into the bot than was actually there because they're engaged
with it on an emotional level. And then the last category, C, I call the critics, which are people
who know something about computer programming and AI, and they just think this thing is terrible,
and, you know, they walk away after a few interactions.
Yeah.
Well, I'm curious to hear your thoughts on where we're at now and where you see some of
this going next.
You know, you have some really smart people in this space that have, you know, demonstrated their
knowledge through the things that they've built.
And I think, you know, if we back up the tape three years ago, many of them were very
skeptical as to whether AGI could ever be possible. Today, I have a hard time knowing
if this is them trying to get more capital or if they actually believe that we're on the cusp of
AGI. I don't know which one of those two it is. But I'm just curious to hear your general
thoughts on where you see us today and like what the next five years might bring. As exciting of a
next five years as we've seen in the past five years, kind of just give us your one over the world
on it. Well, I definitely think it'll be exciting. You know, the term AGI seems a little strong
to me because it's what we've always called AI.
You know, AI has always been a goal that's just out of reach.
And, you know, we have an imagination of what it is based on seeing science fiction
movies and that sort of thing.
You know, HAL and R2-D2 and all those examples give us a template for what we'd like to see
in an AI.
And so it's kind of odd that they've come up with a new term, AGI, to kind of move
the goalpost even further.
But I'm very skeptical about that.
You know, a very simple answer to this question, which a lot of people I know would not agree
with, is that God gave human beings a soul, but machines don't get a soul.
So, you know, in the sense that human beings have freedom of thought and self-reflection
and creativity, I don't think those things will be reproduced in a computer anytime soon.
Yeah.
And I think I'm with you 100% on what you just said.
And I know there's a lot of people that want to argue these ideas and we're not here to do that.
But I'm with you 100%.
I think that there is something very special and unique about just any living being, not just humans.
I think any living being has this special connection from, you know, a higher source.
And I don't think that we're necessarily going to see, you know, these humanoid robots have whatever that is.
And I have no idea how to define that.
But I do think that some of these humanoid robots, call it five or ten years from now, are going to do things.
And it goes back to some of your earlier comments about these chatbots and how people were just like, oh, my God, I feel like I'm talking to a real person.
This feels real.
And I think that some of these humanoid robots are going to feel like real humans to a lot of people.
But that doesn't mean that it's the same thing as us.
I think we are something very hard to define, very different.
But, oh my goodness, Richard, I really enjoyed this conversation.
Anything else that you think is super important on this particular topic that you see right now or kind of going into the future that you think is worthy of highlighting or that the audience should know?
Yeah, well, the company I work for right now, Franz.
Okay.
It's actually a very old AI company founded in 1985.
And Franz started out as a company selling LISP compilers.
But then, you know, by the end of the 1990s, very few people were paying money for software,
you know, because there's so much free language software available.
So they pivoted to graph database technology.
Okay.
And, you know, without getting into too much detail about what that is, now that we have the LLMs,
we are taking an approach called neurosymbolic computation.
So in the history of AI, I talked about supervised versus unsupervised learning.
Another dichotomy in AI is between symbolic and neural approaches. So
symbolic approaches are things like, you know, theorem-proving programs or the early chatbots
that we were talking about, based on rules, where you're basically manipulating symbols. Or you can
also think of a chess playing program, you know, which is very mechanical and manipulating
symbols and searching through the space of moves. And so the symbolic
approach is in contrast to this neural learning approach.
And now we're basically trying to find the best of both worlds.
So one example of that is in the medical field, you can make predictions about how
likely someone is to be, well, their mortality, how likely they're going to be readmitted
to the hospital after being discharged.
You know, within 30 days, how likely are they to be readmitted, or how likely they are
to have a stroke, and various other things.
But the medical field has developed these symbolic techniques for making those predictions.
And so in the case of stroke from AFib, there's a test called CHA2DS2-VASc, and it basically
takes into account criteria like, you know, your age and gender, whether you've had congestive
heart failure, a history of hypertension, and various other factors like that.
And when you plug in those values, it produces a number, which can then be used to, you know, estimate the likelihood of you having a stroke.
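The scoring rule Wallace refers to can be sketched as the standard CHA2DS2-VASc point tally, shown here for illustration only, not for clinical use:

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 prior_stroke_tia, vascular_disease):
    """CHA2DS2-VASc point tally for stroke risk in atrial fibrillation.
    Returns an integer 0-9; a higher score means higher estimated
    annual stroke risk."""
    score = 2 if age >= 75 else (1 if age >= 65 else 0)  # A2 / A: age
    score += 1 if female else 0                 # Sc: sex category
    score += 1 if chf else 0                    # C: congestive heart failure
    score += 1 if hypertension else 0           # H: hypertension history
    score += 1 if diabetes else 0               # D: diabetes
    score += 2 if prior_stroke_tia else 0       # S2: prior stroke/TIA
    score += 1 if vascular_disease else 0       # V: vascular disease
    return score

# A 72-year-old woman with hypertension: 1 (age 65-74) + 1 (sex) + 1 (HTN)
print(cha2ds2_vasc(72, female=True, chf=False, hypertension=True,
                   diabetes=False, prior_stroke_tia=False,
                   vascular_disease=False))  # -> 3
```

This is the "plug in those values, get a number" step; mapping the score to an actual risk estimate is done with published clinical tables.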
And now you could also do that with a neural network, a recursive neural network, where you basically train it by feeding in the, you know, the patient data, the diagnostic data and their medical history, and then just look at whether they had a stroke or not.
So you can train this neural network to take a new patient data and, you know, give some prediction about whether they're going to have a stroke.
Then the third way of doing that is to use an LLM.
You can just simply upload the entire patient chart to the LLM and say, how likely is this person to have a stroke?
And so what we've been doing is sort of combining those three approaches together.
You know, we've got the symbolic estimate, we've got the neural estimate, and we've got the LLM estimate.
you know, you could potentially display all three of those and then it's up to the clinician
to make a judgment, or you could even put them all back into a different LLM and ask the
LLM which one of these measurements is best, which one of these predictions is best.
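The three-way combination Wallace describes can be sketched like this. All three model functions are hypothetical stand-ins, and the naive average is just one possible way to reconcile them; his point is that the final judgment can go to a clinician or to another LLM:

```python
def combined_stroke_risk(patient, symbolic_fn, neural_fn, llm_fn):
    """Collect the symbolic, neural, and LLM risk estimates for one
    patient and present them side by side, plus a naive average."""
    estimates = {
        "symbolic": symbolic_fn(patient),
        "neural": neural_fn(patient),
        "llm": llm_fn(patient),
    }
    estimates["mean"] = sum(estimates.values()) / 3
    return estimates

# Hypothetical stand-in models, each returning a probability in [0, 1]:
result = combined_stroke_risk(
    {"age": 72, "afib": True},
    symbolic_fn=lambda p: 0.04,   # e.g. a score-to-risk lookup
    neural_fn=lambda p: 0.06,     # e.g. a trained network
    llm_fn=lambda p: 0.05,        # e.g. an LLM asked directly
)
print(result)
```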
Wow.
So it's an effort to combine the best of the symbolic approaches with these newer neural approaches.
Wow.
Say the name of the company one more time.
I want to make sure I have the name of it in the show notes for people if they want to check it out.
F-R-A-N-Z.
All right.
Well, I'm just so thrilled to be able to talk to somebody who's been in this space for
decades.
It's miraculous to see what's happening.
And I can only imagine where we're going to be five years from now.
But Dr. Richard Wallace, thank you so much for making time and coming on the show and imparting
all of this knowledge that you have.
We really appreciate it.
Well, I'm glad people want to talk to me about it after a long time of people not
being very interested.
Well, there's a lot of people interested now, let me tell you.
But thank you again for making time and coming on the show.
My pleasure.
It was great talking with you as well.
Thanks for listening to TIP.
Follow Infinite Tech on your favorite podcast app and visit theinvestorspodcast.com for show notes and educational resources.
This podcast is for informational and entertainment purposes only and does not provide financial, investment, tax or legal advice.
The content is impersonal and does not consider your objectives, financial situation or needs.
Investing involves risk, including possible loss of principal, and past performance is not a guarantee of future results.
Listeners should do their own research and consult a qualified professional before making any financial decisions.
Nothing on this show is a recommendation or solicitation to buy or sell any security or other financial product.
Hosts, guests, and the Investor's Podcast Network may hold positions in securities discussed and may change those positions at any time without notice.
References to any third-party products, services or advertisers do not constitute endorsements,
and the Investors Podcast Network is not responsible for any claims made by them.
Copyright by the Investors Podcast Network.
All rights reserved.
