Theories of Everything with Curt Jaimungal - Michael Lynch: AGI, Epistemic Shock, Truth Seeking, AI Risks, Humanity
Episode Date: May 24, 2024. This presentation was recorded at MindFest, held at Florida Atlantic University, CENTER FOR THE FUTURE MIND, spearheaded by Susan Schneider. Center for the Future Mind (Mindfest @ FAU): https://www.fa...u.edu/future-mind/ Please consider signing up for TOEmail at https://www.curtjaimungal.org  Support TOE: - Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!) - Crypto: https://tinyurl.com/cryptoTOE - PayPal: https://tinyurl.com/paypalTOE - TOE Merch: https://tinyurl.com/TOEmerch  Follow TOE: - *NEW* Get my 'Top 10 TOEs' PDF + Weekly Personal Updates: https://www.curtjaimungal.org - Instagram: https://www.instagram.com/theoriesofeverythingpod - TikTok: https://www.tiktok.com/@theoriesofeverything_ - Twitter: https://twitter.com/TOEwithCurt - Discord Invite: https://discord.com/invite/kBcnfNVwqs - iTunes: https://podcasts.apple.com/ca/podcast/better-left-unsaid-with-curt-jaimungal/id1521758802 - Pandora: https://pdora.co/33b9lfP - Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e - Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeverything
Transcript
We already know that everything we access on the internet, almost, is personalized.
All the news that comes down our Facebook feed, all the ads that we face when we're reading the
New York Times, all these are personalized to fit our personal preferences and our past history
online and off. And that's fantastic when you're trying to figure out what to watch tonight.
It's not so fantastic when you're hunting for facts, because when you're only getting the facts in your searches and on your social media feed
that fit your pre-existing preferences, that's not a recipe for bursting your bubble, it's a recipe
for hardening it. Michael Lynch is a professor of philosophy at the University of Connecticut
whose research specializes in truth, democracy, ethics, and epistemology.
This talk was given at MindFest, put on by the Center for the Future Mind, which is spearheaded
by Professor of Philosophy Susan Schneider.
It's an annually held conference where they merge artificial intelligence and consciousness studies, held at Florida Atlantic University.
The links to all of these will be in the description.
There's also a playlist here for MindFest. Again, that's that conference merging AI
and consciousness. There are previous talks from people like Scott Aaronson, David Chalmers,
Stuart Hameroff, Sara Walker, Stephen Wolfram, and Ben Goertzel.
My name's Curt Jaimungal, and today we have a special treat because usually Theories of
Everything is a podcast. What's ordinarily done on this channel is I use my background
in mathematical physics
and I analyze various theories of everything from that perspective, an analytical one,
but also a philosophical one, discerning, well, what's consciousness's relationship
to fundamental reality?
What is reality?
Are the laws as they exist even the laws and should they be mathematical?
But instead, I was invited down to film these talks and bring them to you courtesy of the Center for the Future Mind.
Enjoy this talk from MindFest.
Alright, thank you.
Thanks so much for being here.
So...
Just to put a little fear of God into you.
So last year, as many of you know, Elon Musk, the richest man in the world, announced that
he was going to pour his considerable resources into funding what he called a maximum truth-seeking
AI.
Now, this, you know, was enough to cause a little worry, not least for reasons having
to do with the fact that he made this announcement in an interview with Tucker Carlson.
But as Susan and Mark noted in their Nautilus piece published shortly thereafter,
the way he had of framing his mission
actually raises some deep and interesting epistemic questions,
questions about knowledge.
And I'm gonna ask three of those questions today.
In what sense or to what extent can we use generative AI as an epistemic tool
to help us achieve true beliefs and knowledge? That's a question which in one sense I think
is really easy to answer, but in another way is a little bit harder to answer. And I'll
explore that. And then the second question I want to ask is, how might we, using this epistemic tool,
if we can use it effectively,
how might using this epistemic tool
affect our own human epistemic agency?
Where by epistemic agency for this talk,
all I mean is our capacity
for deciding what to believe based on reasons.
And I'm gonna be particularly interested
in how it affects our epistemic agency
with relation to questions that we might be interested in
that have social and political resonance.
And then sort of an implicit question
that is really actually on the front of my mind,
but is gonna be in the background of this talk
is how is all this going to affect
democracy?
All right?
So the rough idea is I'm going to explore two different kinds of problems that we face
in trying to use AI in a certain way, a certain kind of AI in a certain kind of way, as an
effective epistemic tool. And I'm gonna say that these problems
do actually pose some risks for our epistemic agency.
And I'm gonna say that these problems grow worse
the greater we socially integrate,
in a sense I'll explain, generative AI
into various parts of our lives.
So I've mentioned this term, epistemic tools.
So I wanna talk about what I mean by an epistemic tool.
But in order to do that,
we need to talk about tools a little bit in general.
So one way we can evaluate, just one way,
but a very common natural way to evaluate our use of tools
is in terms of their reliability, right?
A good tool, an effective tool, is one that is reliable in getting us the desired results.
But, and this is a crucial distinction, I think, and one we're all probably too familiar with,
a tool can be reliable in principle, that is in ideal conditions or even just solid conditions,
but it can be the case that we might not be able
to use it reliably in actual conditions.
All right?
And that might be because first,
there might be facts about us,
like we're unable, perhaps for a variety of reasons,
to actually use the tool reliably
in a particular actual condition,
or it might be something about the actual conditions
departing from the ideal conditions.
Example, thought I'd use an example
that would be really relevant here in Florida,
the tool, a snowblower.
Sure, all you Floridians are really familiar with this.
Yeah, you know, well, in case you're not, I see some puzzled looks. Snow is
the white stuff that falls from the sky, and then in some
parts of the country it annoyingly collects on your driveway so you
purchase these things called snowblowers which you use to throw the snow off the driveway, ideally.
Though when you buy a snowblower, if you have done that and you get it home, you'll see
that the instructions, like a lot of tools we buy at Home Depot, will say, well, actually
this snowblower works to do these things that it says on the front of the box, as it were,
but only in certain conditions.
For example, the snow can't be wet.
Okay, snow can't be wet.
Okay, only dry snow.
And, you know, there's gotta be certain inclines.
Can't, you know, the driveway can't be steep
or something like that, et cetera.
And of course you have to be of a particular weight
to operate it particularly effectively and so on.
So there's all sorts of ways that we're familiar
in which a tool can be reliable in principle
in certain conditions that it might've been designed
to be reliable in, but we might not be able
to use it reliably in other conditions.
That's a pretty common sense distinction.
And we might say that, look, insofar as our agency
is gonna be increased, our ability to do things
is gonna be increased, it's gonna be increased in part
by our ability to use the tool reliably.
When we're really picking what tool we wanna use
in particular actual conditions to get a job done,
what we're worried about is picking one
that we can use reliably in those conditions.
We don't really care so much about whether it could be used
by other people reliably
in other conditions. All right? Okay. So epistemic tools. By epistemic tools, I mean, for purposes
of this talk anyway, very broad definition, any method, like the scientific method, machine,
like a calculator, or source of information, like CNN or the New York Times or whatever,
that could be used to help you generate true beliefs
and knowledge, or not, perhaps.
And we could say again that an epistemic tool is effective
in so far as it can be used reliably.
And we can say that epistemic agency,
our ability, our capacity to form and decide what to believe
based on reasons, can be increased if we can use our epistemic tools reliably.
And obviously we're here to talk about whether we can use generative AI as an epistemic tool
reliably, and I'm interested in a particular use of it.
One that was forecast all the way back in 2021.
Scott today was talking about even further back, 2019,
even way back to 2014 and so forth.
I mean, it goes back really far,
but if you can remember as far back as 2021,
there was a paper, now well-known paper
by Don Metzler
of Google research who suggested that it would be great
if we could use natural language processing techniques
and large language models to create a system capable
of producing results with or without actual understanding
that are of the same quality as a human expert
in a given domain.
And in this paper, Metzler et al.,
they raised about eight different problems
that need to be solved in order for this to happen
in the way that they hoped.
And whether or not those problems have been solved,
we now of course have Perplexity,
GPT-4, and so on.
Google's Bard, Musk's Grok, et cetera, et cetera.
And we have this now being integrated,
some cases on an opt-in basis, into our search engines,
and we have it accessing the live internet,
as Scott talked about today.
Now there's a particular vision of this
that was proposed in this, the use of these tools
or was proposed by Metzler and has been,
you know, perplexity is an example
of really following up on this,
which is to use AI as a search replacement tool.
In the paper by Metzler, he was suggesting really
that we can re-envision search, replace search in the way that we've
now grown accustomed to it, with the links and whatnot, with authoritative
single responses that, perhaps or perhaps not, depending on the platform, might be
footnoted with links to sources
you can follow up on.
So I'm gonna call this type of use of AI.
And again, is this the only way we can use these platforms?
No, but it is the way that I'm interested in.
I'm gonna call that AISR.
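Purely as an illustration of that idea, here is a minimal sketch, in hypothetical code, of what an AISR-style pipeline looks like in outline. The retrieval and synthesis functions are stand-ins, not any real search index or model API; only the shape of the interaction matters: many sources go in, one authoritative-sounding answer comes out, with links optionally footnoted afterward.

```python
# Hypothetical sketch of "AI as search replacement" (AISR): instead of returning
# a ranked list of links, the system retrieves a few sources, synthesizes one
# authoritative-sounding answer, and footnotes it. The retrieval and synthesis
# functions below are stand-ins, not any real search or model API.

from dataclasses import dataclass

@dataclass
class Source:
    title: str
    url: str
    snippet: str

def retrieve_sources(query: str) -> list:
    # Stand-in for querying a live index; a real system would search the web.
    return [
        Source("Example A", "https://example.com/a", f"Snippet relevant to: {query}"),
        Source("Example B", "https://example.com/b", f"Another snippet about: {query}"),
    ]

def synthesize_answer(query: str, sources: list) -> str:
    # Stand-in for an LLM call that collapses many sources into one response.
    body = f"A single authoritative-sounding answer to '{query}'."
    footnotes = "\n".join(f"[{i + 1}] {s.url}" for i, s in enumerate(sources))
    return f"{body}\n\nSources:\n{footnotes}"

if __name__ == "__main__":
    q = "what causes tides?"
    print(synthesize_answer(q, retrieve_sources(q)))
```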
So the question is,
can we use this as an epistemic tool, an effective one?
And I think the typical answer is,
well, of course we can, man.
Dude, just check it out.
And I think to many, you know, these lovely gentlemen
do have a point. They do have a point.
And of course, Bard agrees.
I asked Bard about this.
I was like, you know, can I get you
to give me true answers? And he goes, dude, dude, man, it's my primary goal, man. It's
my primary goal. Et cetera. So end of talk, right? That settles that. All right, maybe not.
Maybe not.
Because epistemic tools are effective insofar as we can use them reliably to help us have
true beliefs.
But we might think that what I'll get to, I'm not going to explain this yet, the social
integration of AISR itself might end up undermining our ability to use it reliably.
That is, it could be that actually
embedding this type of AI in our everyday lives
might itself cause problems that will prevent us
from using it reliably, even if it were to be, for example,
reliable in the ideal conditions.
And I think the answer to that question is yes.
And I think there are at least four reasons for thinking that we face some problems here.
These four reasons are all going to be familiar to people in this room. There are four reasons that I like to label with the
happy name of the Four Horsemen of the Epistemic Apocalypse.
Okay, so let's just go through the Four Horsemen of the Epistemic Apocalypse.
Number one. Well, one problem is AISR, and AI in general, of course, but AISR. Again, these are
not meant to be new.
These are not original to me.
These are just issues that I want to flag.
There's the problem of weaponization.
That is that state actors, political campaigns, and various other actors might seek to weaponize AI and AISR
to feed people propaganda, misinformation,
or even incite violence.
And this can happen for all sorts of reasons
that we can obviously use AISR to help us
generate propaganda, no doubt happening every second.
We can try to game the information that chatbots consult
and on which they're trained,
that is, try to game various sources
that you think the chatbot might consult
when asked about certain things.
And then deliberately construct, of course,
weaponized AISR platforms.
That is, deliberately construct a platform that's built to feed propaganda.
Just as an anecdote on this, as some of you may know, when Elon proposed his maximum
truth-seeking AI to Tucker Carlson, Tucker's immediate response was, oh, you mean Republican
AI.
And he really said it.
That was a joke.
Right?
It was a joke, right?
Right, okay.
Well, anyway, I mean, whatever your leanings,
the point is that it does seem possible
that you could do that.
Maybe, maybe not, I don't know.
Another possible worrying threat is what I call polarizing AI.
We might think a polarizing AI could be a weaponized AI that is actually constructed
to push people to the extremes on certain issues.
That is, divide people in a certain way, in the way that, for example, Mark and I have been talking in breaks about, how Russia's Internet Research Agency has been so effective
at doing.
And you can imagine, you know, working with certain platforms to try to get that done.
But this could, you know, in a sense, you might say, well, this is just a sort of byproduct
of weaponization.
But there's another way that this could happen, and that's, I think, possibly, I don't know,
but seems to me that is something worth taking seriously, that is that this could happen
organically.
We already know that everything we access on the internet almost is personalized.
All the news that comes down our Facebook feed,
all the ads that we face when we're reading the New York Times or Fox News or what have you,
all these are personalized to fit our personal preferences and our past history online and off.
And that's fantastic when you're, you know, trying to figure out what to watch tonight, right?
Or what books to buy. It's awesome.
It's not so fantastic as we all know when you're hunting for facts.
Because when you're only getting the facts in your searches and on your social media feed that fit your pre-existing preferences,
that's not a recipe for bursting your bubble. It's a recipe for hardening it.
So that, we already know, is happening on the internet right now. Imagine our
responses on the chatbots become personalized to that degree. Super helpful,
maybe.
Right. So, something to think about, no doubt. And on all of these, there are people in the room, Scott, who know a lot more about
this stuff than me.
It's just to make sure that Scott's paying attention.
I'm going to mention his name every five minutes.
Okay, because God knows the content's not going to do it.
All right, so, such a jerk.
I am a terrible person.
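To make the worry concrete, here is a toy illustration, not any real platform's algorithm, of how preference-based personalization tends to harden a bubble: candidate results are re-ranked by their overlap with what the user has engaged with before, so material that already fits the user's views keeps floating to the top.

```python
# Toy illustration (not any real platform's algorithm) of personalization
# hardening a bubble: candidates are re-ranked by word overlap with the
# user's past clicks, so items that fit prior preferences win again.

import re
from collections import Counter

def tokens(text: str) -> list:
    return re.findall(r"[a-z']+", text.lower())

def preference_profile(past_clicks: list) -> Counter:
    # Word-frequency profile built from things the user engaged with before.
    profile = Counter()
    for item in past_clicks:
        profile.update(tokens(item))
    return profile

def personalized_rank(candidates: list, profile: Counter) -> list:
    # Score each candidate by how often its words appear in the profile.
    return sorted(candidates, key=lambda c: sum(profile[w] for w in tokens(c)), reverse=True)

if __name__ == "__main__":
    profile = preference_profile(["tax cuts help growth", "regulation hurts growth"])
    candidates = [
        "regulation hurts growth, business group argues",    # fits the profile
        "new evidence questions whether deregulation works",  # challenges it
        "local weather report",
    ]
    # The story that already fits the user's profile ranks first.
    print(personalized_rank(candidates, profile))
```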
So the third problem is another familiar problem, the problem of the self-poisoning well, which
is that we already know that a ton of stuff on the internets is generated by the AIs.
We don't know how much is actually generated by the AIs.
There's a paper that was published, there's this talk going around saying it's like 50%.
I don't know.
And of course, what do we mean by AIs?
If we mean just algorithms, then, like, all of it.
So in any event, there's a lot of stuff, an increasing amount of stuff on the web already
generated by AI.
Some of that is infected by polarization, weaponization, and the aforementioned in the
previous talk, hallucination.
That is the propensity for AI to sometimes make things up like human beings.
And that may mean, in other words, that AI is poisoning the well from which AI drinks.
It scrapes the internet, it gives back to the internet, scrapes the internet, it gives
back to the internet, and it could be that things are degrading.
Don't know whether that's the case, not saying it's inevitable, but it's something to think
about.
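One crude way to picture that feedback loop is a toy model with entirely made-up parameters: each round, AI-written pages inherit the corpus's current error rate, add a small hallucination rate of their own, and flow back into the corpus the next round draws on. None of these numbers are measurements; the sketch only shows how the share of unreliable content can drift upward in such a loop.

```python
# Toy feedback-loop model (an illustration, not a claim about real measurements)
# of the "self-poisoning well": each round, AI systems trained on the current web
# add new pages back to it, carrying forward existing errors plus a small
# hallucination rate, so the share of unreliable content can drift upward.

def simulate_well(rounds: int, human_pages: float = 100.0,
                  ai_pages_per_round: float = 20.0,
                  hallucination_rate: float = 0.05,
                  initial_bad_fraction: float = 0.02) -> list:
    total = human_pages
    bad = human_pages * initial_bad_fraction
    history = [bad / total]
    for _ in range(rounds):
        current_bad_fraction = bad / total
        # New AI-written pages inherit the corpus's error rate and add their own.
        new_bad = ai_pages_per_round * min(1.0, current_bad_fraction + hallucination_rate)
        total += ai_pages_per_round
        bad += new_bad
        history.append(bad / total)
    return history

if __name__ == "__main__":
    for i, frac in enumerate(simulate_well(rounds=10)):
        print(f"round {i}: unreliable share ~ {frac:.1%}")
```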
The one that I'm actually really interested in is the problem of trust collapse.
This is my own term for a phenomenon that a lot of us have been worried about.
And that is to the extent that the problems like one to three occur, it seems that we
could suffer trust collapse in the sense that people will start to, if they become partly
aware of these problems or even hear about these problems,
or even just experience things in light of these problems without realizing that the
problems are going on, they could get to a point where they start to trust less the information
they're receiving.
That they get to a point where they are no longer sure what to believe.
So if that's the case, then it may be that consulting experts is
going to be more problematic. People will become less trusting of them, but they
might also become less trusting of the AISR. That is, they might just start to
suffer trust collapse. The other thing to keep in mind is that this process of
trust collapse can be weaponized itself and is being weaponized.
The very idea that some people are
trusting less can in fact be used
by people to get them further confused. For example,
by claiming that some footage that was actually taken of you, right,
was generated by AI, as a candidate for public office in this country recently did.
Right? This particular person actually did something,
was filmed doing that thing, making those public remarks, and then later claimed that it could
be AI.
Okay, it could have been, I guess.
I don't know.
And that's the point.
I don't know.
People start to worry about what to trust.
That could be a problem.
Now I talked about AI and social integration.
I gave you these examples first before telling you what I meant, partly just to provoke your
imagination.
What I'm trying to say here is that plausibly the threats raised by the Four Horsemen of
the Epistemic Apocalypse get worse to the extent to which AISR is
widely adopted, and not just by individuals, but by governmental institutions,
educational institutions, and corporate institutions, and to the extent to which
AISR is normatively embedded. And by that I mean the extent to which
its use is sanctioned and encouraged by those institutions.
At least that's my hypothesis. So can we mitigate these risks? Well, yes, I really think we can.
I hope so. And I know that lots of people in this room are worried about that. And some of you,
Scott, told you, are actually trying to, you know, mitigate them, work on it.
But it's still going to be the case that even if we can get AI more reliable in principle
or more reliable with regard to certain sorts of questions, that the four horsemen are still
going to threaten our use of it in all sorts of contexts,
particularly in contexts which have social
and political resonance.
And as we all know, that's a tiny little context, right?
Because nothing's politicized in this context, right?
Nothing.
It's not like the coffee we drink, the cars we drive,
the clothes we wear, the things we say,
none of that can ever be politicized.
The political and social realm, it's a tiny little realm,
very self-contained with neat borders.
So I think that's an issue
with our collective epistemic agency.
And now I wanna turn to the other problem that I mentioned.
So the first problem was the problem of,
well, could there be factors
like the Four Horsemen of the Epistemic Apocalypse that could undermine our ability to use AI reliably?
But there's another way we can evaluate our use of tools.
And that is in terms of our ability to use those tools reflectively.
All right, so now I'll give you the definition first and then give you some examples.
So I'm going to claim that you use a tool reflectively
to the extent that you know, you confidently use it
to generate the right results,
but you also understand its limits.
You understand to some extent how it works,
and you care about using it effectively.
You actually have a sort of attitude of giving a crap.
Okay, now notice that I define this as a matter of degree, right?
Everything I've done so far has been a matter of degree.
That's a choice.
I'm not saying it's, you use it reflectively
like a button that goes on and off.
It's an extent, the extent to which you use it.
So I'll give you some examples.
Almost everybody here I hope can use a screwdriver
pretty reflectively, right?
You know that, you know how to screw in,
you know, turn it, right?
You know, righty-tighty, right?
You know that it's not particularly effective as a saw, right?
You can defend why the Phillips head is the better one
to use in this situation.
And you care, at least when you're using it,
about getting the job done.
Okay, so we can do that.
On the other hand, there could be situations
in which we can be trained to use a tool
and be trained to use it mechanically.
So for example, someone like myself
might be trained to use a metal detector, right?
Just trained to use a metal detector, a handheld one,
without knowing particularly how it works,
without knowing its limits,
like how much metal it can detect and where,
and without really caring whether it's effective,
like, because it's not my job to care about that.
I'm just worried about,
I'm just doing the thing they told me to do.
When that happens, I'm going to say that we're using a tool not reflectively, but to a greater
degree more mechanically, which is to say that rather than the tool becoming an extension
of us, when we use tools less reflectively, we become extensions of them.
Okay.
I think, obviously, agency, at least in my opinion,
also increases to the extent that we use our tools
not only reliably, but reflectively.
The person who's able to reflectively use a tool
can get more stuff done, intuitively, and more
effectively than the person who's just, you know, waving the metal detector and not giving
a crap.
Agency decreases, we might think, to the extent to which we aren't able to use it reflectively.
And again, these are matters of degree.
So can we use AISR reflectively?
Well, sure we can.
Of course we can.
But there are some barriers to us being able to do so when it becomes socially
integrated.
That is, barriers to the general reflective use of it.
One reason is that, you know, we use it reflectively to the extent to which we use it competently
to help us generate true beliefs, but also understand its limits and can defend it as reliable.
But of course, we can't defend it as reliable if we can't, in a particular context, use
it reliably.
Nor can we defend it as reliable in a particular context if we're not sure whether that's
a context in which we,
not Dave Chalmers who could use it reliably in any context, but we, right, can use it reliably
in that particular context.
And the Four Horsemen of the Epistemic Apocalypse
already have shown us that we have worries
about being able to use it reliably.
So if we can't use it reliably,
then we're not gonna be able to defend it
as usably reliable.
So therefore we're not gonna be particularly reflective in this sense in using it. Second problem, opacity. This is pretty obvious.
There's a sort of explainability problem with any black box AI, but certainly with LLMs.
It's difficult to know why exactly, I mean, we can know why they work in sort of the big picture,
but why they generate those particular results can be awful hard for us to understand,
particularly if you're just straightforward civilians like myself, right, who, unlike
people like Scott and Dave and many other people in this room, you know, I can barely,
you know, do addition, right?
So I'm not gonna have any way to,
y'all are smart folks, but I just have those problems.
So I'm gonna face a problem.
Secondly, here's a paper from Shah and Bender,
and I disagree with some of their stuff,
but this is an interesting remark from 2022.
They say, look,
AISR synthesizes results from different sources
and it masks the range that is available rather than providing a range of sources
which the user can, as it were, play around in.
So intuitively they seem to be suggesting
that by just having a single authoritative expert response,
then you get the same problem you get
when you just consult an expert.
You might get the right results,
but of course your own epistemic agency
is in a sense being handed over to the expert.
That's not necessarily a bad thing in lots of contexts,
but evaluating whether it's a good or bad thing, that's what right now
I'm not doing, I'll get to that in a minute. I'm just asking, in what ways might
it hinder epistemic agency?
And this seems to hinder in the reflective sense.
And then there's this other issue about social integration.
And this is a point that I'm going to make, which is,
you know, I'm not as sure of,
you know, I'm not certain of any of this stuff I'm saying.
That's the moment in which we live.
But I think, again, that the more we widely adopt
and socially embed, that is, normatively embed these tools,
the less reflective we might become with them.
And I'll give you a brief thought experiment
to back that up.
Call this the Delphi problem.
So imagine a society that consults an oracle
with regard to what to believe.
So whenever they have a problem about what to believe,
they consult the oracle.
Now imagine the society does this over a period of generations.
They ritualize the performance.
They normatively embed it.
That is, their institutions encourage people to consult the oracle
whenever they're figuring out the answer to a question.
Like in school, consult the oracle.
Learning math, consult the oracle.
Writing stories, consult the oracle.
So it becomes ritualized.
It becomes normatively embedded.
It becomes habitualized.
It's just what we do.
After a while, people might even forget about why they did it.
They might cease to care about why,
because, you know, see, that's the thing
about ritualized, habitualized social behaviors.
People cease to know or even care.
It's just how we do things, man.
Now, imagine that the oracles are actually in this case pretty darn reliable.
Yay, then.
They're getting mostly good answers.
Sometimes they don't know that they're getting good answers, sometimes they do.
Depends on the question.
Because some questions, it's hard to verify what the oracle says.
But imagine they do. Well, to some extent you might say,
yay them, their epistemic agency is increasing, right?
They've got a reliable tool.
But in another sense, we might think
they're missing something.
They're missing that reflectiveness.
They may even be missing the motivation
to care about the reliability after a certain point,
or at least with regard to some questions.
That's the sort of worry that I have.
The Delphi problem. So, some implications
and objections. Well, the implication, again,
I said this was implicit in the talk, I'll bring it to the surface for a moment.
I think epistemic agency is an important part of democratic
practice. I think that when we engage in democratic practice,
we ideally treat the political space, to borrow a phrase, and hijack a phrase really,
from the philosopher Wilfrid Sellars, we treat that political space
as a space of reasons, or we should.
That is, when we engage in democratic practice, truly democratic practice, we try to engage
in it as people who are trying to do the right thing together with other people
and are trying to figure out what to believe together with other people.
Democratic practice, if understood as a space of reasons,
just is a space where we treat each other, or should,
as equal moral agents and as equal epistemic agents
in one basic sense.
Not that we treat each other as equally good.
I certainly don't think that of other people in my space all the time, and they don't think
it of me.
Nor am I saying that democratic practice requires us to treat each other as
epistemically equal in every respect,
that is, treat each other as if we all know the same things, because we obviously
don't. What it does require of us, though, is to treat each other in this sense as being equally
capable of doing something, equally capable of making our own minds up about
something.
To the extent to which a political environment starts to treat people within that environment
as not capable of making
up their own minds, and so therefore maybe making up their minds for them, to that extent
that environment becomes less democratic.
So I think the punchline here is that epistemic agency is important for democracy.
So when we worry about epistemic agency, we are to some extent, or should be worried
about democracy.
Okay, that's a whole book in there coming out next year, but never mind about that anymore.
That's a sketch.
Here's a couple of objections.
I mean, one objection you're going to no doubt raise, again, to me anyway is, well, gee, right,
Lynch, okay, fine, but can't we still use AI and AISR reliably and perhaps reflectively
in some domains, that is, on some questions, and can't some people do it?
And I wanna say yes, as I've already said, yes, we can.
Hallelujah.
Isn't that awesome?
Great, I use it.
I hope sometimes I'm using it reliably and reflectively,
although, again, I'm not so sure, you know, who knows?
The question I was asking though, was not that question.
The question I'm concerned about is the question
of what happens when these tools become normatively
embedded and widely adopted, socially integrated.
That's the question that I was worried about.
Another thing I might add here, this isn't on the slide, but just in light of previous discussions,
you'll notice here that I have not said one word until now
about whether I think AI LLMs have beliefs themselves,
whether they can. I haven't said,
I haven't talked about them themselves,
them being epistemic agents. They could be,
that's another question.
It's a whole nother question.
Are they epistemic agents? And are they reflective ones? Are they reliable
ones? That's a different question. In this talk, I've been just, and we'll continue,
I'm just agnostic about that. I don't know. I don't know the answer to that question.
I'm interested in the answer. I just don't know the answer.
What I am interested in here, though, is how our use of these things as tools, if that's what
they are, and not agents themselves,
our use of LLMs as tools in a particular way, how that affects our agency.
That's an immediate problem.
Another objection you might raise is what I call the same old same old.
I mean, after all, you might point out, and correctly, that the Four Horsemen of the Epistemic
Apocalypse, we've seen them come thundering into view with other technologies.
It's not like there aren't other technologies that raise the problem of weaponization, polarization,
yada, yada, yada, trust collapse,
right? I mean, yeah. Right? Writing. So sometimes people will say to me, you're just acting like
Socrates. Back when Socrates snarled at the possibility of writing, which he did. Maybe
it was Plato. Actually, you know, Plato, Socrates, hard to tell apart on these things. But the point is that, yeah, I sort of am being grumpy like that.
That's what I'm doing.
Yes.
But the fact that we've seen a problem before does not mean it's not a problem.
Okay?
So the fact that, yes, these problems have emerged before with other epistemic and informational
technologies.
Okay, but they might be emerging again, and we should pay attention to them.
What we need to ask is, not only what can this tool get us, but what is it going to
do to us. So I want to end on that note, echoing something that Scott said a couple hours ago.
I think the human epistemic condition is inherently fragile.
As it turns out, I think we're actually not particularly effective, not particularly reliable, not particularly
reflective epistemic agents ourselves.
As Kant said, we're constructed from very crooked timber.
And it seems like that's a relevant thing to keep in mind when we consider widely adopting
and normatively embedding these sorts of technologies.
Because actually, I think, because we as individuals are such ineffective epistemic agents much
of the time, particularly with regard to things that are of social and political relevance, because we're clouded with bias
and so forth,
we need to promote and protect those institutions and practices that encourage reflective truth
seeking and epistemic agency.
That encourage epistemic agency.
I mean, I think that the more AI, or I worry, I might say, that the more we incorporate
AI into our social and political life, AISR in particular, the more we risk becoming extensions of AISR, as opposed to it becoming an extension of
us, the more we risk having the human epistemic condition become the artificial condition.
Thank you.
Thank you, Michael, for your talk.
Questions?
Thank you for your talk.
I have a question, right?
So you talk about the epistemic condition and that implies to me at least some ethical
component.
For me, the whole process of discovering or finding knowledge and choosing to believe
something is a choice.
There's a lot of activity that goes into it.
You have to weigh options and come to certain conclusions.
At what point do you think that we'll be able to program artificial intelligence with the capacity to make the kind
of decisions that we make in our day-to-day lives?
And do you think that it will take that much more
development for them to make, for artificial intelligence
to make more effective decisions than perhaps we can make
because it's able to compute more factors at the same time?
Well, I'm not an AI expert,
not an expert on the technology of AI,
as everyone who is in this room can tell.
But from the people that I talk to,
and certainly, you know, listening to Scott earlier today,
my own sense is that we can already use AI
to help us make decisions that we make every day.
And in some cases, we have, due to work of AI safety experts, we have installed certain
guardrails to make that, you know, make it more difficult for us to ask certain sorts
of questions.
But, I mean, it certainly seems to be possible. I don't see it. I mean, again,
I'm not an AI expert. Certainly seems worth thinking about. Let's put it that way. That,
you know, when we're going to start using AI as therapists, I know people are working on that
already, right? And, you know, another example that I think about is this one.
One thing human beings, I don't know about you guys, but you might've noticed that human beings in general,
not anybody here in this room,
but you've heard people find,
you've heard that human beings
often make bad parenting decisions, right?
You've heard that, right?
There are some people out there
that sometimes make bad parenting decisions.
Now imagine a use of AI as a parenting consultant.
And now imagine, because I'm a philosopher and I'm not held prisoner by the facts, imagine
that a society starts to think, well, actually, AI isn't perfect at the parenting decision,
but a lot better than the average person.
So why don't we just have our kids raised by little AI,
maybe put them in, you know, things like this, you know?
And, you know, that way we can all kick back.
And I mean, if you are a parent, particularly of a,
let's say, three-year-old, who hasn't perhaps wished
for an AI parent
to come along, right?
And entertain your kid.
I don't think that would be a society I'd want to live in.
I know I'm riffing off your question.
Sorry, I apologize. I hope that helped.
It's okay. Thank you.
Yeah, thank you. Great talk.
So I would like to go back to the same old objection.
Yeah.
If actually your point is the same as in the case of other tools.
Right.
Because it seems, I don't know, I don't use my brain anymore for making calculations,
just very, very little ones.
Probably when I get to 80 years old, I'll be
completely bad at mathematics.
I don't feel as if I'm an extension of my
calculator, although probably I will become one.
So I don't know, I think you probably want to
raise a point that is different from the other
technologies.
So what is
the specific difference that puts us at risk of becoming an extension of
those tools, that I don't see the same in other types of technologies? So that's
a great point. I think that we often, and I agree with you, that I'm a
little farther along than you. I just use calculators for everything.
Well, maybe not one plus one,
but once it gets past that, it's too hard for me.
Like you, that doesn't make me
feel like less of an epistemic agent.
It doesn't make me feel more of
an extension of the calculator.
But we may disagree.
I think you actually, in the sense I was trying to explain, and no doubt in a metaphorical,
not particularly precise sense, I think to some extent I am.
I'm a lot more like the person I imagined with regard to calculators that just uses
the metal detector mechanically.
Now, the difference between me and the person I was imagining is
I actually care about getting the right answer.
When I'm calculating the tip, I want the right answer from my calculator.
So to that extent, I am using it reflectively, right?
Remember reflectiveness comes in degrees and it has various components
and you could be good at one component and not the other one, right?
Like the calculator, I don't know how it works. By magic, I think.
So the idea, the metaphor of becoming an extension of a tool rather than it becoming an extension
of us is a metaphor, but it's also meant to be something that comes in degrees.
Now I don't deny that there are also going to be differences, and I thank you for asking
this between the calculator and AISR.
Obviously there is.
One has to do with scale.
One has to do with the nature of the technology itself, the most obvious being that it can
produce results that are the sorts of results that I would, and this is echoing something Scott said,
which I often say myself,
which is that it produces results
that I would judge to be the results of a human, right?
were I in a different context,
not sitting down at my computer,
actually knowingly talking to a GPT-4.
That I think does make the tool different.
It makes the tool different for all sorts of reasons.
It raises questions that are similar to my consulting, and this is by design, right?
My consulting an expert.
This is why I raised the question of the oracle, right?
If we think about these things as oracles, which they're not, but if you think about
these things like the society was thinking about the oracle, there you might have, I
might have said, imagine a society that has a bunch of experts, an expert panel, right?
On an everyday decision of what to believe, it consults the expert panel.
All sorts of ways, that's a good thing, depending on what it is that we're consulting, right?
If it's a medical issue, it's an issue about the climate, I think consulting experts is
the right way to go.
What I'm suggesting is that even in the consultation of experts, we've got to be aware that there
is a sort of, we're taking our epistemic agency and handing it off to somebody else.
Sometimes that's a good thing.
In the calculator case, with me, it's a good thing.
But it's not necessarily always a good thing.
And even if we thought it was always a good thing, the four horsemen of the epistemic
apocalypse, I suggest, actually suggest that we're going to have problems doing that in
a reliable way.
A great question.
I'm sorry, I can't do better than that.
Uh, yeah.
Thank you, Michael, for the talk.
Uh, I actually have two questions and I've been oscillating between which one to ask.
So I'm going to stick with this one actually.
Um, the easy one, the one that's easy for me to answer.
Actually, it's actually a clarification about how you're kind of just defining
an epistemic tool in this situation at the very early slide.
Um, and I was just wondering, cause it looked like in there, it had, at the very end of
that definition, epistemic tool had something to do with producing beliefs or knowledge.
Yeah.
I hope this isn't pedantic.
In some ways, I wonder if it's not important to make a distinction between epistemic tools,
which are used by what we typically associate as being humans with epistemic agency, which produce some kind
of epistemic output, right, beliefs, knowledge it might be, and then epistemic producers,
right?
And it seems like to a large extent, right, a lot of people see large language models,
right, as kind of epistemic producers, right?
It's telling me something that we typically associate as being belief or knowledge
expressed by a human.
It's unfortunate that it's so good at language,
right?
Cause that's how we express epistemic
statements or propositions, right?
That we can evaluate for truth.
But I'm just wondering if it's in some ways,
the definition there, right?
It makes it look like epistemic tools are part
of the process of generating belief or
knowledge, which is true, but it also makes it
sound like they're generating it.
But it seems like this is actually a distinction.
And what we're looking for when we're creating,
say AI or AGI, we're looking for those things,
which are epistemic producers in their own right
that have that epistemic agency.
And it feels like in that situation, that's the
production, right?
Or that's producing beliefs or knowledge.
Um, and that way it could be kind of
pedantic, maybe you're just like, actually, I
meant to say the second thing.
Yeah, this is helpful.
I think this is helpful.
I was not claiming that
the chat bots are epistemic agents.
They may be, right?
I see your point.
Maybe this will help. I think it's, I like your way of putting it.
When we're using an epistemic tool, we're engaged in a process, the process of using
the tool and also whatever our own cognition, if any, in relation to that, right,
which may not be much, right?
Like in my case with the calculator, right?
No cognition, empty blank slate.
The process that we're engaging in, I'm claiming, is one in which the goal of the process is
to generate an epistemic output.
I'm remaining neutral on whether the AI itself has epistemic outputs in the sense which I'm using that term, that is, as you
correctly noted, beliefs.
So for it to be an epistemic agent on my account, it would have to be capable
of deciding what to believe based on reasons.
That would require it to have beliefs and other things.
Do they have beliefs?
I don't know.
I don't know.
Like, I literally don't know.
Sometimes I think they might.
I mean, it depends on what you mean by belief, right?
I think this is a time in which the instrumental stance, right,
Dan Dennett's instrumental stance, if you took that, I mean, the intentional stance, excuse me, it's an instrumentalist position, the intentional stance, where,
you know, things have beliefs insofar as you take a certain stance towards them.
Well, that's starting to look to me like a plausible stance to take up with regard to AI
in some contexts. But I don't know enough yet to feel like that is
whether that's warranted or not.
So I remain neutral.
Internalism versus externalism about content
in philosophy of mind could be an interesting distinction
as well when it comes to belief.
Yeah, absolutely.
I mean, questions of what content is.
I mean, right now, all I can say is what we've already said,
which is, do their states, the generated states,
have content?
Well, in the following sense, their answers,
that is, the strings of text, are such that we take them
to express propositions in a particular, you know, in the language
in which we interpret them, which seems controversial.
We take them as, right?
Do they regard themselves as having content?
Do they, should we say that their internal states
have content?
Can they have content independently of their connection
to the environment, right?
You know, does an LLM need to be embodied?
All those questions are gonna be relevant.
I don't know the answer
to any of those questions, or any others probably.
These last couple of questions actually covered
some of the things I wanted to ask, which is pretty cool.
But the thing that I'm pretty concerned about, philosophically especially, is this kind of
dependence, right?
That we're having on all of these tools that we make in the sense that what used to be
an extension of us, they're almost starting to use us now as tools in a sense.
I was talking earlier about how plants and all these different things are essentially
using us to propagate, right? So I wonder, you know, in terms of how we're trying to replicate a lot of human cognitive capabilities with AI and computation,
what's the minimum amount of tools, regardless of whether it's technological or not, whatever words you want to ascribe to it?
Why are we not more focused on figuring out the most independent way to increase our own abilities?
There are people out there that have extraordinary creative artistic abilities.
There's savants, you've probably heard of them.
They have immense ability to calculate.
That would give a lot of people a run for their money in terms of what they can quickly put in their calculator. So I'm just kind of interested in why we haven't started to look more into that in terms of
changing our output as opposed to just having machines do it. Hopefully I said that well.
Yeah, I don't know. I mean a couple things I would say is, you know, it does seem to me that a
lot of the people who have been interested in socially integrating AI,
the sort of AI we're talking about,
are in good faith actually interested
in helping us become better epistemic agents.
I mean, right?
I mean, I think you'd agree.
I mean, like I'm not imputing,
neither of us are impugning.
I mean, there are some people who are gonna have bad intentions.
Some people are gonna have the intention only to make money,
but other people are in good faith
trying to help us become better epistemic agents.
And to some extent, I think they're being wildly successful.
I think you're right, with that qualification.
Well,
another way to think about this,
to approach these sorts of problems,
is to try to figure out how to make human beings
more productive on their own, how to become more creative people, how to scale up creativity.
That would be a really cool thing if we could do that.
Haven't figured out yet how to do it. Sorry, but I like the thought.
Hi, in the last three months,
I've been a substitute teacher at middle school.
This has been quite an experience for me.
Thank you sir for your service.
I appreciate that.
Seriously, there you go.
All right, and what I have learned is that the students there do not know how to do anything.
They know how to get an answer.
They do not know how to develop that answer.
And that is definitely coming from their ability to search and to find answers in other ways.
And I just wondered how that fits into here
about this knowledge base, the knowledge versus the answer,
versus how to get to an answer.
The most dramatic one, to me,
was when I was conducting band,
which is something I really love doing.
And I got to a certain point and the student says,
well, the teacher hasn't told us how to do it.
And it was just the same notes
that they had been playing before almost. So it's the same thing. How how to do it. And it was just the same notes that they had been playing before almost.
So it's the same thing.
How do you do it versus what is it?
Right.
I think this is something that we've all been worried about with education since the
idea of widespread education became socially integrated, which is how to do it at scale
in a way that actually nourishes the creative part of the human being, right?
The part that wants to figure things out, that wants, to echo a comment I made earlier today, to push the boulder up the hill themselves,
right?
That isn't just worried about the boulder being at the top of the hill.
You know, so yeah, the thing that you're worried about is the thing that I'm worried about with my
university students. To an extent, we might say, going back to the same old same old,
we've been worried about this, as I said at the top of my answer to you, since the beginning
of education.
The worry now, I think, that a lot of us have, is that this particular tool is so effective,
it's so good, that the sorts of questions that we've had
with other tools, including like just Google search,
with writing, with books, with calculators,
these sorts of questions we had before,
the scale, independently of there's a difference in,
you know, differences, let me put it this way,
differences of scale that are big enough
become differences in kind,
which is maybe what I should have said to Claudia.
What's the real difference?
Well, the difference is one of great scale, which eventually becomes a difference in kind.
The difference between the horse and buggy and the car, somebody might say, well, why
are you getting all worried?
It's not that different.
It just goes faster.
Well, that would be to really underestimate
the difference between those technologies.
So I think that you're right to be worried about that.
And I think we as a society need to start,
as Scott was telling us earlier today,
we really need to start taking some of these questions
very, very seriously right now, as educators, as citizens.
Okay.
That's the point at which I'm at.
I don't obviously, yeah, it's a different question
and a different talk to think about what are those ways
we can intervene in middle school, for example,
to make things better,
but we could talk about that afterwards.
Thank you.
I hope that helped. Oh, really?
Hi.
That was, first of all, a really interesting talk.
Thank you.
I guess so when you talk about the threats of AI and we talk about epistemic agency and
democratic politics, I guess I'm interested in how do you, what's your view on how does
that factor in with the companies' censorship and
restrictions on users using these tools?
And given the policies that a lot of companies have taken with that, I guess, do you think
there should be less or more restrictions or maybe it's okay how it is now or maybe
that it's not relevant at all?
I don't know.
Super relevant question.
Have been thinking about it.
I feel at this point, and I'm sorry to keep saying this,
but I think this is a situation where a lot of us here today
at this conference have been with regard to AI,
which is that things are moving very quickly
and it's really hard to give particularly reflective answers
when you're worried about a moving target.
That said, clearly if we're going to institute these tools on a widespread basis, we need
to get better, we need more prompt training, right?
You know, if we're going to use them, we at least should be able to use them responsibly,
right?
Secondly, I think in terms of preventing them from responding, answering certain questions,
I think that is actually perfectly responsible.
I mean, we don't, there's all sorts of things that we prevent our fellow citizens from doing. Sometimes I wish we would prevent them from doing more things,
like buying assault rifles, which, in my opinion, is not
a great thing. You know, I grew up hunting. I never used a machine gun to shoot things.
But I guess some people do now.
But we do, even, you know, I mean, you don't see people saying, hey, let's pass a law that,
well, actually you do see this, but you don't see many responsible people saying, hey, let's
hand out tanks.
Right?
People are generally like, whoa, whoa, whoa, whoa, maybe a tank for me, but not for
my neighbor, right?
I hate that guy.
So there are all sorts of things that we do.
And on the informational sphere, that's certainly the case.
I mean, we think about the terrible things that Congress was originally worried about,
about child pornography, those sorts of things.
I think there's a lot of agreement, right?
Getting AI to help you build a bomb, right, is a scary thought.
In fact, I'm even sorry for mentioning it,
as it's a trigger warning, right?
All these things.
So I think we're doing the best that we can right now.
And I think, you know, you can talk to Scott
and other people who are AI safety experts
to think about what the problems are
and what else we should be doing.
But I'm not, I don't think of this as like, you know, clearly censorship could be an issue
at some point.
But I don't think that's really the worry that I have right now.
Right?
Hi.
Thank you for the talk.
I think it's great, especially when you mentioned the ability to use tools reflectively.
I think it applies particularly to epistemic tools.
But I'm just wondering if you have any practical suggestions
of how we can actually get people to use tools
reflectively, whether it's by policy regulations,
social norms, education, or in any realm that you think,
especially in terms of ex ante, not ex post.
So it's not just that you use the tools badly and then you get punished, but how do we encourage
people to use that?
Yeah.
Yeah.
Great question.
Yeah.
I mean, I think this is the end of the session, thank God, so I don't have to actually give
you a lot of great detail.
And again, I want to remind you that I'm a philosopher, not a policy person.
So I'm good at pointing out problems, not necessarily solving them.
This is truth in advertising people, right?
I work in, I'm in business school, so I'm looking for solutions.
I know.
And thank you, I'm glad that you are.
I think broadly speaking though, we can give some general solutions that are, that we really
need to take more seriously.
Right now in this country and a variety of countries around the world, there are certain
institutions that are devoted to the reflective pursuit of knowledge that are under attack.
And those are institutions like this one and other ones. And I think right now, we need to do a better job
protecting and promoting the work of those institutions.
I think these institutions, including my own
and other institutions, have not helped things themselves.
I mean, we're not often very good at sort of marketing,
as it were, our own contribution to society,
right, which I think goes beyond just getting people jobs,
although that's an important part of it,
but actually making them into more reflective,
democratic citizens.
I believe that John Dewey was right.
That's the goal of education,
to get people to be better democratic citizens.
I also think that clearly our ability to transmit information, what we call news, to people
in a reliable way has become compromised, as we all know, in recent years.
I think that what we sometimes call the news media, the traditional news media, right, obviously has
a disappearing, possibly doomed financial model for transmitting reliable information.
If it is doomed, we need to quickly come up with another model.
And I have thoughts about that.
But it may not be doomed if we could intervene at a societal level to try to promote and
protect those institutions.
Because I think those are the things, those institutions, the two I just named, together
with another institution, the legal system, that really are the three pillars that stand
between us and the end of democracy.
Something which, like many of you here,
I'm a little worried about.
Thank you.
Firstly, thank you for watching.
Thank you for listening.
There's now a website, curtjaimungal.org, and that has a mailing list.
The reason being that large platforms like YouTube, like Patreon, they can disable you
for whatever reason, whenever they like.
That's just part of the terms of service.
Now a direct mailing list ensures that I have an untrammeled communication with you.
Plus soon I'll be releasing a one-page PDF of my top 10 TOEs.
It's not as Quentin Tarantino as it sounds like. Secondly, if you haven't subscribed or clicked that like button,
now is the time to do so. Why? Because each subscribe, each like helps YouTube
push this content to more people like yourself, plus it helps out Curt directly, aka me.
I also found out last year that external links count plenty toward the algorithm, which means
that whenever you share on Twitter, say on Facebook or even on Reddit, etc., it shows
YouTube, hey, people are talking about this content outside of YouTube, which in turn
greatly aids the distribution on YouTube.
Thirdly, there's a remarkably active Discord and subreddit for Theories of Everything,
where people explicate TOEs, they disagree respectfully about theories,
and build as a community our own TOE.
Links to both are in the description.
Fourthly, you should know this podcast is on iTunes, it's on Spotify,
it's on all of the audio platforms.
All you have to do is type in Theories of Everything and you'll find it. Personally, I gained from rewatching lectures and podcasts. I also read in the
comments that, hey, TOE listeners also gain from replaying. So how about instead you re-listen
on those platforms like iTunes, Spotify, Google Podcasts, whichever podcast catcher you use.
And finally, if you'd like to support more conversations like this, more content like this,
then do consider visiting patreon.com slash curtjaimungal and donating with whatever you like. There's
also PayPal, there's also crypto, there's also just joining on YouTube. Again, keep
in mind it's support from the sponsors and you that allows me to work on TOE full time.
You also get early access to ad free episodes, whether it's audio or video, it's audio in the case of Patreon, video in the case of YouTube. For instance, this episode
that you're listening to right now was released a few days earlier. Every dollar helps far
more than you think. Either way, your viewership is generosity enough. Thank you so much.