Theories of Everything with Curt Jaimungal - Mark Bailey: LLMs, Disinformation, Chatbots, National Security, Democracy
Episode Date: May 28, 2024. This presentation was recorded at MindFest, held at Florida Atlantic University, Center for the Future Mind, spearheaded by Susan Schneider. Center for the Future Mind (Mindfest @ FAU): https://www.fa...u.edu/future-mind/  Please consider signing up for TOEmail at https://www.curtjaimungal.org  Support TOE: - Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!) - Crypto: https://tinyurl.com/cryptoTOE - PayPal: https://tinyurl.com/paypalTOE - TOE Merch: https://tinyurl.com/TOEmerch  Follow TOE: - *NEW* Get my 'Top 10 TOEs' PDF + Weekly Personal Updates: https://www.curtjaimungal.org - Instagram: https://www.instagram.com/theoriesofeverythingpod - TikTok: https://www.tiktok.com/@theoriesofeverything_ - Twitter: https://twitter.com/TOEwithCurt - Discord Invite: https://discord.com/invite/kBcnfNVwqs - iTunes: https://podcasts.apple.com/ca/podcast/better-left-unsaid-with-curt-jaimungal/id1521758802 - Pandora: https://pdora.co/33b9lfP - Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e - Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeverything
Transcript
Disinformation erodes democracy. Democracy requires intellectual effort. I see democracy as being this sort of metastable point, if you think of it in some sort of a state space, a state space of different social configurations.
And the reason it's metastable is because it takes a lot of effort. It requires an engaged population, and it requires an agreed upon set of facts about what's real about the world.
Mark Bailey is a faculty member at the National Intelligence University,
where he is the department chair for cyber intelligence and data science,
as well as being the co-director of the Data Science Intelligence Center.
This talk was given at MINDFEST, put on by the Center for the Future Mind,
which is spearheaded by Professor of Philosophy Susan Schneider.
It's a conference held annually at Florida Atlantic University, where they merge artificial intelligence and consciousness studies.
The links to all of these will be in the description.
There's also a playlist here for MindFest.
Again, that's that conference, Merging AI and Consciousness.
There are previous talks from people like Scott Aaronson, David Chalmers, Stuart Hameroff, Sara Walker,
Stephen Wolfram, and Ben Goertzel.
My name is Curt Jaimungal, and today we have a special treat, because usually Theories of Everything is a podcast. What's ordinarily done on this channel is I use my background in mathematical physics to analyze various theories of everything from that perspective, an analytical one, but also a philosophical one, discerning: what is consciousness's relationship to fundamental reality?
What is reality?
Are the laws as they exist even the laws and should they be mathematical?
But instead I was invited down to film these talks and bring them to you courtesy of the
Center for the Future Mind.
Enjoy this talk from MindFest.
Okay.
So, um, I want to welcome everybody to our final session of this year's MindFest conference,
which apparently I'm moderating even though I've had very little sleep. So I'm sure I'm going to
mess up drastically and I apologize in advance. But our first speaker is Mark Bailey, who's a
professor at the National Intelligence University and a wonderful co-author, and he's going to be talking about some of the same issues that Michael Lynch discussed, but from a national security perspective.
And he's going to talk for just a short amount of time, and then we're going to have some Q&A, and then Steven Gupka is going to be speaking about some philosophical issues. And then I'm going to be asking questions of the audience and, of course, other people, I mean the participants here at the roundtable. And instead of going around the room for the video and introducing ourselves one by one, which would take up all the video, I just ask that before you ask your first question or make your first comment, just say who you are.
OK?
And of course, at the end, the audience
will have an opportunity to ask questions as well.
OK, so let's go ahead and get started.
Awesome.
Thank you so much, Susan.
It's so great to be here.
I love coming to MindFest.
It's really a lot of fun and just a great group of people.
So like Susan said,
for those of you who don't know me,
my name is Mark Bailey.
I'm the Department Chair for Cyber Intelligence and
Data Science at the National Intelligence University.
You heard my Dean speak earlier this morning, Dr. Tom Pike.
But basically, a lot of people don't really know what the National Intelligence University is. We're a federally funded university.
So I sometimes use the analogy of a service academy.
So like a West Point or a Naval Academy because we serve government personnel.
So we serve the intelligence community and military personnel who are adjacent to the
intelligence community.
So a lot of our work focuses pretty heavily on national security related issues.
So I'm going to talk a little bit here about chatbot epistemology and sort
of how that relates to democracy and sort of the erosion of democratic norms.
My background is actually in engineering.
And so I fancy myself an engineer who plays a mathematician on TV because I teach mostly math and I also
dabble in philosophy because I teach a lot on AI ethics as well.
And Susan and I publish a lot sort of in that realm.
So I will begin here.
What I really want to do here is sort of start the discussion.
So we have this lovely panel here.
A lot of what I talk about right now,
Michael already covered wonderfully in his previous talk.
But I do want to focus a bit on what I see as
the major issues with AI chatbots and how they relate to the erosion of democratic norms.
So as we learned earlier, AI is a black box.
So oftentimes AI problems are encapsulated into like three main issues.
So you have explainability.
Because of this black box issue, AI can be unpredictable.
There are a lot of reasons for that.
It's hard to understand how you could have a neural network with billions of parameters
and then sort of map that to some deterministic function
to understand exactly what's going on
and why it makes decisions the way that it does.
And because of that unpredictability,
sometimes you can have unexpected behaviors
that are unaligned with human expectation.
So that leads to what we call the alignment problem.
So it's the ability to align AI with human expectations
or human moral values or how you would want AI to behave.
And then by extension, you get into what we call the control problem, which is how do you ensure that you can control AI and ensure that it's aligned with human expectations and does what you want it to do.
So there's this AI black box that leads to some
of these security risks with chat bots.
And we'll talk a little bit about that.
AI is also an enabling tool.
So we have a lot of issues with the spread of
dis- and misinformation online, on Facebook and Twitter and other social media platforms, and a lot of times that's facilitated by, like, troll farms who create disinformation because they want to target what I would consider social fissure points within a society where they want to create discord.
So they might target specific issues
that are divisive in different ways,
and they create these posts to kind of basically
stir the pot a little bit,
and create social discord in that way.
And so right now humans do that,
but AI chatbots are going to enable that at a grander scale,
because it's gonna be a lot easier to use things like ChatGPT to basically create
large amounts of information around a specific narrative,
and then propagate that on social media.
Then of course, if you have AI empowered chatbots that are
trained on this particular narrative,
then it compounds that problem.
That also leads back to the AI black box problem.
So if you can't really predict
how these things are going to behave,
there's going to be an added level of uncertainty if you have
an AI-driven chatbot that's propagating and
talking to other people online in this capacity.
I'm sure a lot of you remember a few years ago,
Microsoft had this Tay chatbot,
which was, and this was pre-GPT,
but it was this chatbot that they released on Twitter,
and within, I don't know, a few hours,
it became this vehement racist and anti-Semite,
because Twitter is basically a cesspool of nonsense,
and it learned from Twitter,
and so it started to repeat all of these different,
really terrible things.
And so we are gonna see more of that,
especially with these large language models enabling and empowering these
types of devices.
And then finally, disinformation erodes
democracy.
So as I'll talk about in a little bit, democracy
requires intellectual effort.
I see democracy as being this sort of metastable point, if
you think of it in some sort of a state space, a state
space of different, you know,
different social configurations.
And the reason it's metastable is because
it takes a lot of effort.
It takes intellectual effort to manage and run a democracy
because it requires an engaged population
and it requires an agreed upon set of facts
about what's real about the world.
And then you can debate policy about those facts,
but you can't debate something if you don't agree on what the facts actually are.
And so I see that problem sort of being exacerbated by a chat bot,
so we'll talk a bit about that.
So, epistemic breakdown.
So, you know, we define knowledge as justified true belief.
And that's sort of the classical definition
of what knowledge is from epistemology.
And there's some nuances to that,
and maybe that's not the best definition,
but we won't go into that here.
So even before chatbots, this epistemic breakdown,
this breakdown of this justified true belief,
and when I say justified, I mean there has to be
some sort of a chain of justification that leads to validation of that
knowledge.
So you can kind of think of it like in academia, we cite our
sources.
And you do that because you have to map it back to some
chain of reasoning to justify what it is you want to present.
And that breaks down in a lot of ways.
But even before these AI enabled chatbots, this was already evident in social media.
You know, we saw, I mentioned earlier, a lot of the, you know, disinformation and everything that was propagated by troll farms and whatever else.
It sort of breaks down this idea of what knowledge is,
and creates these echo chambers, you know, of unjustified belief, and it propagates this dis and misinformation which erodes democracy.
And like I mentioned, knowledge discernment requires cognitive effort.
And if you're not willing to put or able to put in the cognitive effort to discern what's
true and what's not, based on what you read on social media, based on what's propagated
by a lot of these different chat bots and whatever else,
you're not able to really contribute intellectually to try to understand and agree upon these facts and make, I would say, a valid discernment about the facts upon which you can then debate policy.
So this ends up causing an over-reliance
on confirmation bias, a heuristic that leads
to unjustified or false belief.
And then, you know, in that way, these algorithms in some ways promote, you know,
the amplification of extremism.
And then, you know, like I said, as these large language models are integrated into some
of these, you know, disinformation opportunities, it's just going to catalyze this and
accelerate this epistemic breakdown.
Again, I mentioned the idea of this AI black box.
So explainability. So if you have a chatbot that's powered by AI,
even if you train it on a particular set of ideological positions or something,
it may still behave in ways that you don't
understand or you don't anticipate. So that's the whole problem with this AI black box.
And then of course, you know, sort of this epistemic crisis that you see in democracy,
right? So as I mentioned, knowledge determination is critical for the functioning of a healthy democracy. Yet these LLMs may write the news or be our personal epistemic assistants in different ways, right? So, you know, if you rely too heavily on these LLMs, you kind of lose that chain of epistemic
justification because you don't always know where the knowledge came from.
Because it's, you know, essentially interpolated from the training set of these models. So there's no epistemic chain of justification that you can follow to validate the knowledge that you have about whatever you're asking it about.
And then of course democracy requires
an intellectually engaged population.
And then more critically this agreed upon set of facts
upon which you can debate policy.
And then when this chain of reasoning is broken
that creates this epistemic disintegration.
So this has global security implications as well,
this erosion of truth.
So the erosion of democracy creates opportunities
for totalitarian tendencies to take root.
So as the capacity of individuals to ascertain truth and productively debate policies grounded in that truth degrades, humans are likely to relinquish their capacity for reasoning about policy to charismatic leaders whose rhetoric may align with biases in the population. So Michael, I think, explained very eloquently that the epistemic situation of humans is very fragile, and so, you know, it can break pretty easily. And if it breaks, you may be more inclined to rely on these heuristics about how you understand the world.
And sometimes those heuristics are things like confirmation bias, or any other biases that you may have internalized. And that may lead you to be more inclined to sort of relinquish your ability to epistemically analyze, or make some, you know, inference about what's true and what's not, to some charismatic leader.
And so that leads to a, I would say, more stable form of social structuring, which would be something that would be more totalitarian, right?
Because it takes less effort.
It's more energetically favorable in that way.
So this degradation of healthy democracies
because of this epistemic erosion
may create opportunities for the emergence of potentially
a new global hegemony built on some authoritarian worldview.
And you may see countries sort of lurching
toward this authoritarian tendency because
of this accelerated spread of disinformation
and the erosion of our ability to discern what's true
and what's not, and then debate appropriate policies on that. So thank you so much.
Are there any questions for Mark? I have one, may I? So this relates to what came up in Michael's wonderful talk too.
Yeah.
Someone in the audience, I forgot who it was, they said, how does the use of an LLM from
an epistemological standpoint, a chat bot that is, differ from the use of a calculator?
How would you answer that?
I mean, it reminds me of some issues with maybe a symbolic approach and.
Yeah, I mean, that's interesting.
I mean, I think a calculator is,
so for one thing, a calculator doesn't necessarily
lack explainability, right?
It's very deterministic in terms of how, you know,
how the output is gonna present itself.
Because math is deterministic in a lot of ways.
And if you use a calculator to add two numbers together,
that answer is going to be the same,
regardless of the context.
But if you rely on a chat bot, because of the stochasticity
that exists within these types of models,
you might get different types of things.
So it's different than, I would say,
it's different than a calculator in that way.
Wonderful.
Any other, oh, yes, Curt.
Yeah.
Would you still say that it's drastically different
than a calculator if there wasn't that probabilistic
quality to the chatbot?
So if you turn the temperature all the way down to zero
and it was deterministic, would you then still have a problem with the chat bot?
If it's entirely deterministic, as in like you have some kind of a decision tree where you ask it
very specific things and it gives you very specific
answers?
Is that kind of what you're describing? Something of that sort?
If the temperature is zero, then it gives you the same response to the same prompt every time. Oh, I see what you're saying.
It's still that you don't know in advance when you can trust the answer and when not. That seems like a more fundamental difference compared to the calculator.
Yeah, in that way for sure.
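To make the temperature point in this exchange concrete, here is a minimal sketch in plain Python with NumPy. It is not any particular vendor's API, and the logits are made-up numbers purely for illustration: at temperature zero the next-token choice collapses to a deterministic argmax, so the same prompt yields the same continuation every time, while higher temperatures sample from the distribution and give varying outputs.

```python
# Minimal sketch (hypothetical numbers, not a real model or vendor API):
# how the "temperature" setting turns next-token selection from stochastic
# to deterministic.
import numpy as np

def sample_next_token(logits, temperature, rng):
    """Pick a token index from raw logits at a given temperature."""
    logits = np.asarray(logits, dtype=float)
    if temperature == 0:
        # Temperature zero: greedy decoding, always the highest-scoring token,
        # so the same prompt produces the same continuation every time.
        return int(np.argmax(logits))
    # Otherwise scale the logits, softmax them into probabilities, and sample.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

rng = np.random.default_rng(seed=7)
fake_logits = [2.0, 1.5, 0.3, -1.0]  # hypothetical scores for four candidate tokens

print([sample_next_token(fake_logits, 0.0, rng) for _ in range(5)])  # always [0, 0, 0, 0, 0]
print([sample_next_token(fake_logits, 1.0, rng) for _ in range(5)])  # varies with the seed
```

Even in the deterministic case, as the exchange above notes, repeatability alone doesn't tell you when the answer can be trusted.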
Yeah, thank you, Mark. Actually, this brings up something we were talking about Wednesday night,
and I'll say it for everybody else's benefit, right? In the sense that we were talking a little
bit like, I'm actually really happy when I see
how poorly Congress does, right?
Or how slow things change, right?
Because rapid change is much scarier, right?
Say things were efficient: when we want something to change, it just happens, right?
And now, in the same way, like,
I was thinking while your talk was going on,
couldn't the fight against epistemic erosion
actually be sort of weaponized in itself?
So take the, and maybe this kind of connects up
to when Elon Musk is talking about truth maximizing GPT-12,
whatever it is he wants to make.
It feels like you could just weaponize that
as easily as the erosion.
So you get the negative consequence
because you put into place a system
that fights the erosion but leans a particular way.
And I'm just wondering, in some ways,
is that something that you're kind of thinking about
in the same way?
Your talk kind of brought this up for me of,
well, couldn't our fight against epistemic erosion
actually cause the thing we wanted to prevent
in the same way of just letting it do whatever it does?
Maybe it's messy, maybe it's not great,
but we figure out how to navigate it in maybe a slower way?
I don't know.
Your thoughts on that?
Yeah, I mean, I think there's a lot in there,
what you just said, and I think you're absolutely right.
I think there are certainly opportunities
for someone with autocratic tendencies
or some ideological bent to create
sort of a fine-tuned version of some of these chatbots
that sort of toe the party line.
And that leads to its own dangers, epistemic dangers,
because now you have a different potential set of truth upon which, you
know, these bots are going to propagate information. I mean you may have some
stochastic deviations from that because of, you know, the black box issues, but
yeah, that's certainly a problem. And you also sort of implied, I guess, the glacial pace at which government typically makes decisions. That's very true, and I think democracy is naturally a slow process
because a government has to be resistant to authoritarian tendencies.
So if you have an authoritarian leader who comes in and wants to change everything,
they can't do that because it has to build momentum in order for that to happen.
But I think, even though we complain a lot about the government bureaucracy being super slow, which especially for us can be infuriating at times, it's purposely slow. There's a reason for that pace, that glacial pace.
One of the things that we do know is that state actors are already using the internet and social media to manipulate content and to provide structured responses and answers to people in ways that direct them to certain conclusions. And we know people are largely a product of the information they consume, and their attitudes and behaviors are too. How much worse could a language model do
when the state actors are already operating to
manipulate attitudes and opinions through the use of some of these tools?
Yeah, I mean I think that's a great question. I see AI affecting that in a
few different ways. So one, it's enabling. So it helps generate content that sort
of toes that party line. You know, it also has the capacity to create chatbots that are, you know, more stochastic in a way.
So they're not just going to respond in a predetermined way,
but they can articulate things and respond to questions
in a more human-like way.
So in that way, they might become more believable
to some degree.
But then, of course, that comes with its own risks.
So you might have a totalitarian regime
that has a particular view of history
that it doesn't necessarily want its population to know.
So, you know, the uncertainty in some of these types of bots might not work in their favor, and they may be disinclined to use these types of things because they can't necessarily guarantee that they would toe the party line in those ways.
Yeah, I have a question, just going back to the explainability point. So say I cannot explain what's going on, I don't have a model to interpret how the system is getting the output. But if the output is always right, or is giving me reliable answers, wouldn't explaining how the system gets the answers be irrelevant? If the answers are answers I can rely on, or if I can trust the algorithm, why would explainability still be a point? Maybe this is the point: if it's a reliable answer, why do I still need to explain how the algorithm got to that answer?
Well, I think if it had a reliable answer, then explainability wouldn't necessarily be an issue. But I think because of the way that these models work, they're inherently unexplainable in certain ways. So that makes them not always reliable, so you get these hallucinations, you get these unusual outputs,
like we heard earlier from Scott,
that's kind of a, it's a feature,
you can't really code it out.
So, but if they were in fact,
deterministic in some way,
where you could always rely on them
to give you the same answer,
then that wouldn't be a problem.
Part of what government represents,
and I'm not necessarily asking you a question.
We work together.
I just retired.
But part of what government represents and what I find fascinating about these systems
is the systems are, to some extent, attached to particular components of government mechanisms
that make the system work.
Therefore, if the people lose trust in that system,
they could potentially lose trust in the governments that are, quote unquote, responsible for monitoring or, you know, likewise presenting laws or regulations regulating such systems. So that's one of the problems of anarchy and chaos that those systems might pose, is that correct?
Yeah, I think that's a good observation.
So she said comment and not question.
I mean, I'm just going to pause and say, yes, we have a significant kind of democracy-in-crisis problem, but talking with the mayor over lunch,
there's also some great opportunities.
I think this kind of echoes some things that Mike said, where, number one, we could use these chatbots so that maybe some of these opaque bureaucratic processes of the government, like figuring out where your property line is, can now be simply answered, right, if you have that chatbot, right, or your tax problem.
So I think, you know, like any tool, it could be used for good or ill,
and how do we take some of these tools and actually use them to make people's lives easier?
And I mean, that's not like saying it
just tritely and optimistically.
The Shining Path in Peru in the 1980s was defeated because they launched a TV show that was showing how they were reducing massive government bureaucracy. And then people could act on that knowledge and make better decisions, say, hey, I can actually get this loan now, or I can go to college, because they made this rule change last night.
So I think there is some good in here, that if we can exploit this technology for it, it actually can make democracy stronger.
Yeah, I mean, you're totally right.
I mean, there are good and bad points to everything,
including these large language model-driven types
of chatbots and opportunities.
There are definitely great opportunities to help
make government perhaps more efficient,
but there are also opportunities for sowing disinformation, where a nefarious actor could use it in ways that would erode democracy.
Yeah, just in response to that a little bit, now I'm starting to feel like a gadfly. I'm not trying to be, I promise. I guess in some way I do sometimes worry about, say, the situation where I go, well, wouldn't it make it so much easier if I could just interact with a large language model, like a chatbot, right, in this kind of situation? And I tend to think, well, any time I have to deal with Xfinity, which does the same thing, I go, was that really what I want to do? Do I really want to have to talk to a chatbot before I get to this place?
In some ways, I kind of like the human fallible kind of messed up nature of it, right?
Where I go, actually, I don't like what you tell me, I'm gonna try to find somebody else.
But if it's this first, initial barrier, where your ability to influence what happens between you and your government has this barrier in front of it, I wonder, is that actually achieving the thing it's supposed to achieve?
I don't know.
Sorry, man.
I think those points are valid. So just in the sense that they've found the new generation doesn't want to call to make an appointment, right? They just want to go online. So I think it's really, you know, how do you get that optimal environment that, you know, maximizes your tools, right, but doesn't take away our humanity? So I won't disagree with you, but I'd say it's not an easy problem. Thank you so much. Thank you.
Thank you so much and welcome. Thank you.
Firstly, thank you for watching, thank you for listening.
There's now a website, curtjaimungal.org, and that has a mailing list.
The reason being that large platforms like YouTube, like Patreon, they can disable you for whatever reason, whenever they like.
That's just part of the terms of service.
Now a direct mailing list ensures that I have an untrammeled communication with you.
Plus soon I'll be releasing a one page PDF of my top 10 TOEs.
It's not as Quentin Tarantino as it sounds like.
Secondly, if you haven't subscribed or clicked that like button, now is the time to do so.
Why?
Because each subscribe, each like helps YouTube push this content
to more people like yourself, plus it helps out Curt directly, aka me.
I also found out last year that external links count plenty toward the algorithm, which means
that whenever you share on Twitter, say on Facebook or even on Reddit, etc., it shows
YouTube, hey, people are talking about this content outside of YouTube, which
in turn greatly aids the distribution on YouTube.
Thirdly, there's a remarkably active Discord and subreddit for theories of everything where
people explicate TOEs, they disagree respectfully about theories, and build as a community our own TOE. Links to both are in the description.
Fourthly, you should know this podcast is on iTunes, it's on Spotify, it's on all of the audio platforms. All you have to do is
type in theories of everything and you'll find it. Personally, I gain from rewatching
lectures and podcasts. I also read in the comments that hey, TOE listeners also gain
from replaying. So how about instead you re-listen on those platforms like iTunes, Spotify, Google
Podcasts, whichever podcast catcher you use.
And finally, if you'd like to support more conversations like this, more content like
this, then do consider visiting patreon.com slash curtjaimungal and donating with whatever
you like.
There's also PayPal, there's also crypto, there's also just joining on YouTube.
Again, keep in mind, it's support from the sponsors and you that allows me to work on TOE full-time.
You also get early access to ad-free episodes, whether it's audio or video,
it's audio in the case of Patreon, video in the case of YouTube. For instance,
this episode that you're listening to right now was released a few days earlier.
Every dollar helps far more than you think. Either way, your viewership is generosity enough. Thank you so much.