Theories of Everything with Curt Jaimungal - Epistemology of Chatbots | Steven Gubka
Episode Date: July 2, 2024. Hosted at MindFest!
Transcript
It depends on the hard problem of consciousness.
Why we like to anthropomorphize large language models, chatbots in particular,
is because they communicate to us linguistically.
In order to have empathy, you need to care about something.
And it's not really clear to me at the moment
whether our chatbots have the capability to care about anything.
Dr. Steven Gubka is a postdoctoral associate in ethics of
technology at the Humanities Research Center at Rice University. His work
analyzes the philosophy of emotions as well as the ethics and epistemology of
technology. In this short talk, Dr. Gubka discusses the metaphors we use to
conceptualize LLMs such as ChatGPT, as well as the criteria for determining whether
LLMs are reliable sources of information. This talk was given at MindFest put on by the Center
for the Future Mind which is spearheaded by Professor of Philosophy Susan Schneider. It's
a conference that's annually held where they merge artificial intelligence and consciousness studies
and held at Florida Atlantic University. The links to all of these will be in the description.
There's also a playlist here for MindFest; again, that's the conference merging AI and consciousness.
There are previous talks from people like Scott Aaronson, David Chalmers, Stuart Hameroff, Sara Walker,
Stephen Wolfram, and Ben Goertzel. My name is Curt Jaimungal, and today we have a special treat, because usually Theories of Everything is a podcast.
What's ordinarily done on this channel is I use my background in mathematical physics
and analyze various theories of everything from that perspective, an analytical one
as well as a philosophical one, discerning what consciousness's relationship
to fundamental reality is, what reality is, whether the laws as they exist are even laws, and whether
they should be mathematical. But instead, I was invited down to film these talks and bring them to you courtesy of the Center for
the Future Mind. Enjoy this talk from MindFest.
Okay, so now, Steven Gubka is a wonderful postdoc of mine, absolutely just incredibly
helpful, smart, and he's about to give a talk that I'm really
looking forward to, and I want to thank you again for your help with this
conference, Steven, for your infinite patience.
Thank you everyone for coming.
[Applause]
Hello? Okay. So it seems almost gratuitous to say anything more about chatbots since it's already been
said, but I will try to add to the conversation just to stimulate what follows afterwards.
What I want to do in this brief presentation is discuss three narratives that I've collected through talking to my friends, essentially,
about large language models.
One of the joys of being a philosopher
is that one way you can do research
is you can pump the folk as it were for their intuitions
about what they think about philosophical cases,
ethical dilemmas, and in this case, chat bots.
So the first narrative that I encountered
when I started thinking about this
and asking my friends about how they use chat bots
like ChatGPT is the issue of mistakes,
which has already come up in Scott's talk.
And the term that commonly gets used is hallucination.
And I think on this hallucination metaphor,
at least the way that I initially understood it,
although maybe we've made some progress since then
in this conference, is that large language models
make mistakes seemingly at random,
as if they are capable of perception
and some kind of inexplicable error happens to them,
and then they report what they seem to see in this case.
Something that kind of got developed,
I think in conversation or in Q and A,
was this idea that maybe we could think about
this sort of mistake as confabulation
rather than hallucination.
And indeed this is what some of my friends have said
about how large language models act
when they ask questions.
The kinds of mistakes that get made,
such as adding extraneous details, embellishing, largely seem consistent
with the information given by the large language model but nonetheless go beyond
in some way that ends up being extraneous and false. And this is much
like how we construct narratives about ourselves when we're asked about our
behavior, right? So maybe I don't know why I have a particular annoying habit,
but if you ask me about it,
I can probably come up with some rationalization,
some story I could tell you
about why I do the thing that I do.
And similarly, especially when you try to ask a chatbot
about its sources, so I remember in particular,
wondering if I could ask GPT-3 in particular,
detailed questions
about one of Ursula Le Guin's books.
And it started just making up chapter titles,
extraneous plot details, all these things
that are maybe consistent with the surface level summary
of a book, but nonetheless missed the mark.
Now, ultimately, I think that this way of thinking
about chatbots' mistakes as either hallucinations or confabulations is a false dichotomy.
And I think that largely the reason why it's a false dichotomy and one that we shouldn't
accept either option for is because these metaphors end up being objectionably anthropomorphic
and reductive.
In thinking that chatbots are the kinds of things
that can hallucinate or confabulate,
we're thinking about them like they're human beings,
like they're agents with goals,
that they have some kind of purpose,
maybe to tell us the truth,
maybe that's what they'll report to us if we ask them.
And we're assuming that the reason that they make mistakes,
that there's just one good explanation
for all the mistakes that they make,
and that there couldn't be multiple types of mistakes,
multiple types of explanations.
So if we're going to, I think, kick away this idea,
I think what we need to recognize
is that the way that ordinary people interact
with chatbots and the way that we're tempted to,
perhaps because of how they're designed,
is we think about what they're doing
as giving us testimony, as if they believe things
and they're reporting those beliefs to us.
But independent of some serious philosophy of mind,
I don't think we should think that chatbots have beliefs,
that they are reporting things that they think to us.
And I think to the extent that we can get away
from this thinking, to the extent that the ordinary person can get away from this thinking, we might be able to improve our
ability to think critically, reflectively about this technology. So I instead suggest
or wonder whether or not we could, instead of approaching it like a testifier, we could
approach large language models like potential knowledge tools, perhaps related to Michael's conception
of an epistemic tool.
This is a conception that Garrett with his co-author,
Carlos, developed as thinking about a distinction between,
as sort of prefigured by your comments earlier today,
distinction between technology that is an epistemic agent
that has beliefs, potentially has knowledge versus a tool that we use
to gather information to try to form our own knowledge.
So the third narrative that I think is maybe engineered
primarily by the people who want to promote the use
of these things, especially as a component of search engines,
is this idea that these things are getting better.
They're improving.
More data is being added to them
now that they have access to the internet.
They're more reliable.
And I don't deny that more data and more training
could make large language models more reliable,
but that's not a necessary consequence of additional data.
So some of these tests of the abilities of large language models that show them scoring
very well, in fact, also sometimes demonstrate a degradation of abilities in other areas
and sometimes the addition of bias that was unexpected.
So these kind of emergent properties
or abilities of large language models,
these like unpredicted but sharp differences in behavior
are not always improvements.
So there might be, say,
a jump in cognitive ability,
the ability to get an A instead of a B
in a college calculus course,
that may nonetheless be accompanied
by unexpected differences in behavior
that aren't desirable,
unexpected inaccuracies.
Now more to the point though,
ordinary users aren't going to know
whether large language models that they use
undergo changes at all.
And to the extent that they do,
they might assume that those changes
positively improve its reliability.
So one of these narratives here
that I think is worth taking a critical look at is
this idea that these changes are necessarily improvements, and that once we've achieved sort
of a state where we can trust a large language model, we don't run into a further problem
where we can then ask: should we trust it in the future, once it's been updated,
once it's undergone additional training?
In conversation with Susan, we've called this problem
the problem of diachronic justification.
So the idea is that even if you had evidence at one point
that a large language model was reliable,
because of their unpredictability in these ways,
that trust maybe shouldn't stick around after an update.
You would have to then reestablish its reliability
through whatever mechanism you initially established it.
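As a concrete illustration of this diachronic worry, one practice is to keep a fixed probe set with known answers and re-run it whenever the underlying model changes, rather than letting earlier trust carry over. Here is a minimal sketch in Python; the ask_model callable and the probe items are hypothetical placeholders for whatever chatbot interface and hand-labeled questions you actually use, not anything described in the talk.

```python
# Minimal sketch of "diachronic" reliability checking: re-run the same fixed
# probe set whenever the underlying model is updated, and compare accuracy
# against the score you originally trusted.
# `ask_model` is a hypothetical wrapper around whatever chatbot API you use.

PROBES = [
    # (prompt, substring expected in a correct answer) -- illustrative only
    ("Who wrote 'The Left Hand of Darkness'?", "Le Guin"),
    ("What is 17 * 23?", "391"),
]

def accuracy(ask_model) -> float:
    """Fraction of probes whose answer contains the expected string."""
    hits = 0
    for prompt, expected in PROBES:
        answer = ask_model(prompt)
        hits += int(expected.lower() in answer.lower())
    return hits / len(PROBES)

def still_trustworthy(ask_model, baseline: float, tolerance: float = 0.05) -> bool:
    """Re-establish (or withdraw) trust after an update by comparing
    post-update accuracy to the previously established baseline."""
    return accuracy(ask_model) >= baseline - tolerance
```

The point is simply that whatever baseline accuracy justified trust before an update does not, by itself, justify trust afterwards; the check has to be repeated.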
So I just wanna close with a couple of things
that I've been thinking about.
So part of this story that I've been telling you
about how my friends in particular
use large language models, how they think about them,
relies on this idea that they may be trusting them
for the wrong reasons,
that they're anthropomorphizing them.
Now I think one of the reasons that strikes me
as most obvious about why we like to anthropomorphize
large language models, chatbots in particular,
is because they communicate to us linguistically.
But I'm curious if there are other features
of large language models that incline us to trust them,
and if they could be designed without those features,
so that we don't trust them for the wrong reasons.
Another question I have is whether or not
there are better metaphors or analogies
for when chatbots make mistakes.
So if you agree with me that there's something objectionable
about saying that ChatGPT hallucinates or confabulates,
and maybe you don't agree with me on that,
are there better ways to talk about the kinds of mistakes
that these chatbots make?
And finally, I'm curious, what kind of forthcoming evidence
would we in fact need to trust
that a particular large language model
is reliable enough to trust? Thank you.
Thank you, Steven. Those were wonderful questions, and let me just ask the people
on the panel if they have any ideas here.
Mark.
So this idea of anthropomorphizing LLMs I think is really interesting.
I'm not sure it's possible to decouple it from that though, right?
Because if you talk to something that talks like a human, you kind of naturally
attribute human characteristics to it.
I mean, if you look at some of the early use cases of LLMs, like the app Replika, which came out even before GPT-3 was available, I believe,
and just all the instances of the new sort of fine-tuned chat bots that people have created
using GPT, they all simulate people, right? They simulate, you know,
romantic partners, they simulate various other things that people want to talk to, right? So I don't know that you can really escape that
because language is such a human thing that if you communicate with language with something
else, like you're naturally going to do that. Or another idea, another example, right, the
Google engineer who was convinced that the chatbot had consciousness,
was absolutely convinced about that.
Like those kinds of things made me think
that it's not really possible to decouple
that humanization of these things from the tech itself.
Yeah, that's absolutely, I agree with you, Mark,
and I also would like to put something a little further
in that these chatbots, they also change the way that humans speak
to them and the way that they structure their language based on what those chatbots understand
to some extent.
And so if people are changing the way that they talk to chatbots, because the chatbots
can understand speech in certain ways and they don't understand it in other ways, for example, giving commands, etc. I think that it also changes the way that people
relate to them. So it's a dual way. It's not just the chat bots to the people, but it's
also how the people are changing the way that they are communicating and it's also a commodification
of human language, which is something that
I think is another thing that we have to be looking at.
Yeah, no, I completely agree that once you build something whose purpose is to interact
like a human, it would be bizarre not to anthropomorphize it, right?
I mean, that's the whole point, that you can interact with it using the language that you know.
But if anyone is worried about that and wants to resist that, then I guess following from
my talk, I can strongly suggest to them, try submitting the same prompt over and over.
Try rewinding it and saying that, okay, yes, I can do this very, very non-human-like thing with it.
I can just see the whole branching tree
of possibilities that it could have given
other than the one that it did.
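For readers who want to try the rewinding exercise described here, a minimal sketch of resampling the identical prompt is below. It assumes the OpenAI Python client and an illustrative model name; any chat API that exposes a temperature setting and a number-of-samples parameter behaves the same way.

```python
# Sketch: send the identical prompt several times at nonzero temperature and
# print the divergent completions -- the "branching tree" of outputs the model
# could have produced instead of the one you happened to see.
# Assumes the OpenAI Python client; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Summarize the plot of Ursula K. Le Guin's 'The Dispossessed' in two sentences."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; substitute any chat model
    messages=[{"role": "user", "content": prompt}],
    temperature=1.0,      # nonzero temperature -> different samples
    n=5,                  # five independent completions of the same prompt
)

for i, choice in enumerate(response.choices, start=1):
    print(f"--- sample {i} ---")
    print(choice.message.content)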
Now, I wanted to respond to something.
If this was a critique of my talk:
you were saying people say that language models are getting better.
Right, I mean, just to be clear,
like I would not say that there is any a priori reason
why we knew that GPT-4 had to be better than GPT-3,
nor would I say that there were no examples
where GPT-3 is better than GPT-4, right?
I would just say that if you try them both out
on a range of things, you will see that GPT-4
plainly, obviously, is better.
Yeah, I don't contest that at all.
As someone who's used both of them.
Yeah, yeah, yeah.
I was really struck, yeah, upon reflection
hearing you describe this sort of like
shattering the illusion moment, right, when you rewind.
So I've done this with art image generation, right,
and it gets to the point where I just,
I don't even know what I'm looking at.
I just keep like trying out the same prompt
and seeing what it'll do again.
I wonder if maybe, yeah, like something like this,
like you're suggesting might be effective,
is sort of like, oh yeah, this is just something
that's following instructions, it's not, yeah.
Just talking about trust and perhaps part of the problem
is that we look at large language models today
expecting that they're this omnipotent,
sort of all-knowing model, and ask such broad questions
and expect it to be 100% right every time.
And is it perhaps what we're gonna see is that there's going to
be more focus on domain-specific models, which I think are already showing that they are
more deterministic, and maybe eventually, and actually you can see it now, I think, with
the Mixtral models, which are basically a mix of models, that ultimately it's going
to be a collection of domain-specific models and then just selecting
the right model. And I think that seems like
that's probably gonna be the path to developing trust
by just having kind of more focused models
as opposed to these generic omnipotent models.
Yeah, I think that does sound promising.
So thinking about the applications in the medical field,
if you had a model that was just trained on information
about diagnosis, for example, that could be potentially
like a very useful tool to quickly generate,
based on the symptoms, here are a bunch
of likely alternatives.
I think someone who is trained in medicine
could use it very reflectively, using Michael's
sort of understanding here about the limitations of what this thing can do.
So I agree with you.
I think one way forward here might be seeing trained professionals using models that are
devoted to very specific tasks, have very specific expertise in a certain area.
The possibility of combining all of them to get kind of the supermodel again didn't occur
to me.
That's interesting too.
Yeah. I have a question for Richard and anybody in the
audience about that to follow up on that. How feasible is it to create effective
domain specific models in areas like medicine for example or autonomous
vehicle development when the 1% situation could arise
that actually seems to require troubleshooting
from something like an AGI.
Like how would that work?
And maybe I'm wrong about that.
I mean, I think, first of all,
I think DeepMind demonstrated that
with their protein folding models, right,
that they could be very effective
and get pretty amazing results
from a very domain-specific problem area, right?
But I think it is domain-specific,
like because self-driving cars,
clearly that is a problematic area,
and I think, you know, clearly getting the data
to train with is, again, an issue there.
So I think you're gonna see certain domains
advancing quicker than others, right?
And I think that's just a reality.
And I know also talking from industry private sector,
clearly we're not gonna see the investment
in the private sector,
or it's gonna start slowing down
if we can't get that deterministic,
if we can't get results in a sort of
domain specific environment, right?
So I think it's an inevitability that we will see that.
But of course, Tesla shows us that
full self-driving is a hard problem,
and the hard problems are not gonna be
solved overnight with this.
This is actually a little bit more kind of
what Mark was saying earlier a while ago.
But maybe it's kind of on the questions
that are going on here,
so maybe this is for Stephen as well.
You know, when it comes to this issue,
I kind of feel like we don't need to reinvent the wheel,
right, I mean, didn't Weizenbaum have this exact worry
after Eliza? I mean, there's a reason why,
right after Eliza happened,
he wasn't concerned with the capabilities of AI
or the capabilities of computers and technology progressing.
He was worried about the way that we react to it
when we come across it, right?
He didn't look at it and go,
oh my God, computers are gonna take over the world.
He went, oh my God, look at all these humans who walked up,
got a very quick script response,
and went, oh my gosh, there's a person behind this,
there's a mind.
In some ways I kind of think like,
yeah, maybe it's much more capable than Eliza,
but it feels like it's the same problem, right?
I mean, I might be crazy about this, right, in that way.
But yeah, I don't know what you think about that.
In the sense, do you think this is qualitatively
a different problem than that?
This anthropomorphizing of technology or AI,
it seems very similar.
Yes, it's gotten better, it's gotten fancier,
but yeah, I don't know.
Yeah, I'm very sensitive to this,
is this a new problem or old problem question?
I mean, Eliza came to mind immediately
as this conversation started for me
and sort of thinking about, yeah,
it doesn't take much for us to get convinced
that we're talking to another human being
or this thing we're interacting with is relevantly like a human being, that we can think
about it that way. I do wonder if this is like that, but much, much worse. And I'm also
wondering, thinking about this experience that Scott illustrated
about the illusion being broken, if we could have more of those
moments built into the experience of using this thing,
would that be worth it? Or would that somehow cripple its function in some interesting way? Or could it wake us up to the fact
that, yeah, we shouldn't be anthropomorphizing it?
Trust is a tricky issue, because you think there's a right answer, and
life is complex, life is ambiguous, life is sloppy.
There's social dilemmas.
There's trolley problems.
There's moral dilemmas and so forth.
My fear is that we trust in AI because it seems so expert.
Just like we trust a real expert,
we assume that's the answer.
And life is ambiguous.
There's many answers.
And it all depends upon the values you build into the system
and so forth.
And I think we might lose some of that
if we have such an expert system like an AI
that we don't doubt its credibility.
There's this issue that social epistemologists are worried
about called epistemic trespassing where someone
who is knowledgeable in one area enters another domain
and says a bunch of stuff even though, you know,
their PhD was in mathematics,
now suddenly they're saying things about philosophy.
Now, saying that, I'm not saying no one should
investigate fields other than their own or do interdisciplinary work.
But the worry here is if we start thinking, oh, this person or this chatbot is credible, after all, they say all these true
things about mathematics; but then we can ask it these harder, more difficult questions, thinking we can trust it because it showed off
its expertise in one area.
So we could be under this kind of illusion then
that because it gets the simple problems right,
that the more difficult problems are also ones
that it would get right.
Yeah, so just a couple responses.
So as far as I could tell, a central reason
why the academic sort of AI and cognitive science
and linguistics communities were
very, very slow to react to GPT in, you know, 2020 or 2021, is that they
were all inoculated by Eliza, right? They had all learned the lesson from Eliza and from,
you know, the Loebner Prize that came after it: that if you see something that looks like,
you know, a superficially
impressive chatbot, there is actually nothing interesting going on under the hood.
And the only questions here are human psychology questions of why would people be so stupid
as to think that there's something under the hood when there isn't.
And it was so strong that when there actually was something under the hood, they just could
not see it in front of their faces.
Right?
So that was the first thing.
But the second thing is, you know, I was,
I'm sort of amused when people say,
well, why has it been so much harder
to build a self-driving car than to build a chat bot?
And it seems like it's easier.
And I think the answer to that, in some sense,
is that it's not harder at all.
In fact, compared to making an 80% accurate chat bot,
it's quite easy, easier to make an 80% accurate driver.
It's just that you now want 99.999% accuracy,
and that's the only hard part.
But there's also this,
like, you know, where do you set the threshold?
I would set it, personally, as where it's about as safe
or safer than a human driver.
But it looks like in practice, people might not allow it
until it's like 100 or 1,000 times safer.
Okay, so I actually feel like
I just landed on Mars the last two days with all the language and all you're doing.
I'm, oh am I not talking through the mic? Okay. Can you hear me?
And I'm the vice chair of the board of hospice and quality control committee.
Okay, so my background is not only psychology, it's a lot of medicine and it's a lot of corporate medicine
and it's insurance companies, all right.
Susan told me like three days before
we were gonna be on the panel.
Now that's the worst thing for me
because I am very OCD and I want to have
a whole presentation already,
but I couldn't give you a presentation
because I don't really,
this is like a new learning experience.
So I had articles, AI is changing every aspect of psychology.
Then there was another article I got,
how AI could transform the future of psychology profession,
and it goes on and on.
I'm not so sure. My doubts are, I think it can be used for specific things. Let's say somebody is in recovery for
addiction and wants to go pick up a bottle of vodka. They talk to their, I'm calling the chatbot Rose, I love flowers.
And they talk to Rose and Rose gives them,
you know, no, Rose doesn't give them vodka.
Rose says, go to a meeting, you know,
there's a whole specifics, call your coach,
call your sponsor, on and on.
So I see specific avenues for this, for your side,
all of your side, because I'm kind of like
the alienated, isolated person here.
What I can't see is, and I want you to answer,
can you see, I think the most important quality,
which I was saying, the Martin Buber,
I do know a little philosophy, I and thou,
how do you see developing or can it develop
or can it not have empathy, and what do you see empathy as,
and do you think that's a possibility?
And I'm opening it up to everybody.
That's my question.
Well, I think that's a fantastic question.
It's a question that I, you know,
nobody really here is gonna know the answer.
But it does seem to me like right now,
I'm dubious, I am. When we ask about what chatbots can
do, I am open to the idea, as I said, that they might have beliefs, they might have
quite complicated mental states. And is it possible that they have
emotions? We get into a complicated question about what those are. But
I'm really hoping that they don't have emotions,
subtle ones like resentment. That would be not good. Do not want resentment in my chat
bot. But empathy, it would be nice if it had that. But at the moment, I think in order to have empathy,
you need to care about something.
And it's not really clear to me at the moment
whether our chat bots have the capability
to care about anything.
But that sort of depends on what we mean by care.
But if they don't, then I don't think they have empathy
because I think empathy requires having a caring attitude,
at least possibly, towards the person that you're in that
I-Thou relationship with.
Anyway.
Yeah, of course.
I actually think the harder question isn't that.
Because I don't think a chatbot, by its very nature,
can ever have empathy.
But I don't think that's the question.
I think the question is whether or not
people can perceive it as having empathy, which
I think is considerably possible.
I know lots of people that fake empathy.
That's an excellent point.
Yeah.
That's a good point.
I mean, I think, and just to say you're not alone at this table,
I'm a social scientist, not a computer scientist.
So I join you here.
I actually think the bigger danger here
is social community capital,
which is to say technology increasingly isolates us.
And there's great books about it,
Robert Putnam's, you know, Bowling Alone,
about how television is isolating.
But imagine if I perceive a chatbot as a
person and that becomes my interaction, that satisfies my social
need, right? I chat with the chatbot, it chats back, it pretends it has empathy, I
perceive it as having empathy whether it does or not, right?
And then we create this sort of universe where I don't
have to engage, interact, or participate in the greater society around me. That, I think,
is highly dangerous. And I think that, you know, we were already on our way before we
had LLMs, right? But now people can be like, I don't need to talk to anyone. I talk to
Rose and Rose tells me all the things I want to know and feels bad about my day, right?
That is isolating and it's bad for society, it's bad for democracy.
I mean, it's not functional and that's a problem for me.
So whether it feels it or not, I don't know, but I can tell you if I perceive it does,
then that's dangerous.
These questions that people like to ask, doesn't AI have empathy?
Doesn't it have intentions?
Doesn't it have beliefs?
You have to separate out the metaphysical part,
the part that depends on the hard problem of consciousness
or whatever, what its internal experience is,
from the empirical part, from the part that could be
measured in principle.
And so I think that's what you were getting at.
I think one of the surprises, if you spend some time with GPT, is that it's much better at getting emotional nuances right, whether it be the use of language models for therapeutic reasons or for people who suffer from social anxiety to get practice.
How that works is an empirical question. I look forward to clinical trials that will tell us more about this.
I don't think that we should arrogate
to ourselves the ability to guess the answers
to that question based on our own intuitions.
So Kevin, I totally agree with you.
I think that's, I really think the key question is
how humans relate to it and perceive these affective states
within the system itself.
But Miriam, I think you raised a really interesting question
as to whether or not an LLM can actually
have an affective state.
So if it can demonstrate empathy,
I think it would require some emotional state, correct?
But I guess what I'm getting at is
humans have multiple dimensions in which we cogitate.
We can think about things in terms of our five senses
and how we perceive the universe.
Not just language.
I mean, language is certainly one medium through which we think about things, but
chatbots only have that one medium. So they lack, I don't know that you can
necessarily encode some affective state simply in that language medium. So I
don't know that an LLM in its current state could potentially have an
affective state that would be required for true empathy. Yes, sorry for going far away from empathy, but I think maybe what I'm saying can apply
to this.
So, I just want to go back to the idea of attributing agency and what kind of anthropomorphization
we might be doing.
So there is a recent debate in developmental psychology trying to think about what kind of cognitive systems those systems are.
So they make a distinction between
learning from imitation and learning from exploration. I think it's a framework that
Alison Gopnik and her team have been using, and I think one idea is, so those systems are probably learning through imitation.
They're kind of imitating a process
that they learn from other agents,
the agents that they find on the internet, their information.
But they are not doing the kind of exploration
that a young child, a small child, would have,
like a kind of truth-seeking cognitive system that goes out to explore the world and learns things from
the world.
So maybe what we might be seeing is that the systems are just learning through imitation
what empathy is, and learning how to respond or to behave in an empathic way through
this learning system. So I think this framework might be interesting, because
then we have to reframe the idea of hallucination. Hallucination is more a
metaphor that is related to a truth-seeking cognitive system and not to an imitation learning system.
Yeah, I just want to add to this framework.
GPT assures me that it does not experience anything.
All right, that was awesome. And Claudia, thank you for that distinction.
That really helped. And Miriam, everybody, that was wonderful. Steven, Mark. So now, cookies, though
we actually probably need shots after talking
about all this national security stuff. Firstly, thank you for watching, thank you for listening. There's now a website, curtjaimungal.org, and that has a mailing list. The reason being that large
platforms like YouTube, like Patreon, they can disable you for whatever reason, whenever
they like. That's just part of the terms of service. Now a direct mailing list ensures
that I have an untrammeled communication with you. Plus, soon I'll be releasing a one-page
PDF of my top 10 TOEs. It's not as Quentin Tarantino as it sounds like.
Secondly, if you haven't subscribed or clicked that like button, now is the time to do so.
Why? Because each subscribe, each like helps YouTube push this content to more people like
yourself, plus it helps out Curt directly, aka me. I also found out last year that external
links count plenty toward the algorithm, which means
that whenever you share on Twitter, say on Facebook or even on Reddit, etc., it shows
YouTube, hey, people are talking about this content outside of YouTube, which in turn
greatly aids the distribution on YouTube.
Thirdly, there's a remarkably active Discord and subreddit for theories of everything,
where people explicate TOEs, they disagree respectfully about theories, and build as a community our own TOE.
Links to both are in the description.
Fourthly, you should know this podcast is on iTunes, it's on Spotify, it's on all
of the audio platforms.
All you have to do is type in theories of everything and you'll find it.
Personally, I gained from rewatching lectures and podcasts.
I also read in the comments that, hey, TOE listeners also gain from replaying. So how
about instead you re-listen on those platforms like iTunes, Spotify, Google Podcasts, whichever
podcast catcher you use.
And finally, if you'd like to support more conversations like this, more content like
this, then do consider visiting patreon.com slash CurtJaimungal and donating
with whatever you like. There's also PayPal, there's also crypto, there's also just joining
on YouTube. Again, keep in mind it's support from the sponsors and you that allow me to
work on toe full time. You also get early access to ad free episodes whether it's audio
or video, it's audio in the case of Patreon, video in the case of YouTube. For instance,
this episode that you're listening to right now was released a
few days earlier. Every dollar helps far more than you think. Either way, your
viewership is generosity enough. Thank you so much.