Theories of Everything with Curt Jaimungal - Preparing for AGI with National Defense Researcher | Thomas Pike
Episode Date: May 21, 2024. In this talk at MindFest 2024, Thomas Pike, Dean at NIU, details ways in which society can best mitigate the chances of a catastrophic outcome with AI, and how society can flourish in this new age of technology and development.
Transcript
As a democracy in crisis, how's this going to tip into the future?
So your signals have changed, the butterfly has kind of flapped its wings, will we tip into another strange attractor?
Because we could take a student and in a 10-week class, take them from almost no coding to building a facial recognition system with a support vector machine.
Like that's amazing.
Thomas Pike is a faculty member at the National Intelligence University.
This talk was given at MindFest, put on by the Center for the Future Mind, which is spearheaded
by Professor of Philosophy Susan Schneider.
It's a conference held annually at Florida Atlantic University, where they merge artificial intelligence and consciousness studies.
The links to all of these will be in the description.
There's also a playlist here for MindFest.
Again, that's that conference merging AI and consciousness.
There are previous talks from people like Scott Aaronson, David Chalmers, Stuart Hameroff,
Sarah Walker, Stephen Wolfram, and Ben Goertzel.
My name's Curt Jaimungal and today we have a special treat because usually Theories of
Everything is a podcast.
What's ordinarily done on this channel is I use my background in mathematical physics to analyze various theories of everything from that perspective, an analytical one, but also a philosophical one, discerning: what is consciousness's relationship to fundamental reality?
What is reality? Are the laws as they exist even the laws, and should they be mathematical?
But instead I was invited down to film these talks and bring them to you courtesy of the Center for the Future Mind
Enjoy this talk from MindFest.
This morning it's my distinct pleasure to introduce Dr. Tom Pike.
He is the Dean of the Oettinger School of Science and Technology Intelligence at the National Intelligence University.
Today he's got a talk planned about how complex systems can alter the course of evolution of artificial intelligence.
Really looking forward to this talk, Tom.
So when you're ready, please take it away.
Thank you very much, Eric.
All right.
So hello, Tom Pike.
So first, I'll take care of some mandatory stuff.
So my views do not reflect those of the US Army, National Intelligence University or
US government.
And then the second bit to kind of
deal with the elephant in the room, right, is intelligence is often associated with, I mean,
bluntly jackbooted thugs, right, like the Gestapo, KGB, things of that nature, right? And so when
people hear intelligence, they usually think, you know, like in the US where they're putting
LSD in the water in San Francisco, and we have the Pike and Church Committees.
Although there will always be a need for secrets, and that's always going to be paramount to even
your relationship with your neighbors, it's important to understand there's a whole other aspect, which is just trying to understand how the world works. If you look at Afghanistan,
in particular, as the most recent example, right?
Like that wasn't a failure
because we didn't have enough secrets
on how the Taliban were operating, right?
That was because we don't understand, right,
really how to grow a democracy, right?
We don't understand how to get those critical dynamics
into place, right, in order to have a functioning,
productive society, you know, that respects human rights.
Okay, and so setting aside intelligence, the secrets part, really the focus of all the work I do with Dr. Bailey is on how do we just understand the world, and that's all in the open.
And so we do it in the open.
I do open source development, I do things of that nature.
So we say intelligence in this part, I really say it's more like we're just trying to understand
how the world works.
And you look at Director Haines, she did an op-ed in the Wall Street Journal about two
years ago where she was saying, like, hey, as a democracy, we need fewer secrets. And that's
definitely the direction, particularly after the fall of the Soviet Union, that the US government and intelligence community is going: be more engaged with the population, be more open about
what we're doing, because we have to have that trust, right? So that deals with the elephant in the room. All right, so I'm gonna use this quote from Scott Page, Santa Fe Institute,
the University of Michigan, right, which is like, the right perspective makes
hard problems easy while the wrong perspective makes easy problems hard,
right? So, like yesterday, we had origin of life from Sarah Walker.
We had lots of talks about consciousness and quantum mechanics.
Right.
So what I look at is more like societies as kind of a meta-consciousness.
Right.
And so as you look at this, as we've gone through life, or as humanity's, you
know, kind of evolved
over the last couple thousand years, right,
is that we found superior perspectives
that then makes our life better, right?
So you look at the Copernican view of the universe,
theory of bacteria, gravity, all those things,
Newton and whatnot, made our lives better
as we found those perspectives, right?
And we were able to share those perspectives. Now humanity could say,
oh, this actually really works. I can take that work and continue to build on it.
And so I've got this story, a tale of two printing presses. I'm putting a specific spin on it, but this comes from a book by Hilton Root out of George Mason University called Network Origins of the Global Economy.
Right.
And so the way to understand this is that what's interesting
about this is the Chinese developed the printing press 400
years, right, before Gutenberg did in Europe.
Right.
Not only did they invent it 400 years before Gutenberg, right,
but they stopped using it 80 years before Gutenberg.
Right. They said, eh, this technology,
this knowledge replication technology, if you will,
nothing to see here, right?
We don't need it, right?
And then you go to Gutenberg, right?
In Europe, which is this massive, you know,
small world network of competing fiefdoms
that are forming alliances and getting in wars
with each other and stuff like that, right?
And it becomes this massive catalyst for the Reformation, right, the scientific Enlightenment, you know, and really Western Europe just blowing up as kind of like a global powerhouse.
All right.
Um, and so in Hilton Root's book, right.
The conclusion he makes is that it was how each network was functioning, the differences between those.
Right.
So you look at the Chinese network
under the Chinese emperor at the time, right?
It was a hub and spoke network.
And so the bureaucratic elite were incentivized,
you know, to maintain their status.
And if somebody didn't want that technology to get out,
they could cut it off, right?
They'd be like, nope.
Right, well in Europe, because it was a small world network,
people like Galileo and Copernicus
could sneak their writings, that the earth is actually not the center of the universe, past the Catholic Church into places like the Netherlands, and all of a sudden that knowledge was getting out there
and spreading. So it became this, and it just blew up for good and bad.
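The contrast drawn here between the emperor's hub-and-spoke network and Europe's small-world network can be made concrete with a toy reachability model. This is an illustrative sketch only; the ten-node graphs and the "blocked node" censorship rule are invented for the example, not taken from the talk:

```python
from collections import deque

def reachable(graph, start, blocked=frozenset()):
    """Breadth-first spread of an idea from `start`; nodes in `blocked`
    receive the idea but refuse to pass it on (censorship at the hub)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node in blocked:
            continue  # received, but never relayed
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

# Hub-and-spoke: ten provinces that talk only through the imperial center (node 0).
hub_spoke = {0: list(range(1, 11)), **{i: [0] for i in range(1, 11)}}

# Small world: a ring of ten fiefdoms plus two long-range "trade route" shortcuts.
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
for a, b in [(0, 5), (2, 7)]:
    ring[a].append(b)
    ring[b].append(a)

assert len(reachable(hub_spoke, 1)) == 11              # unblocked, the idea reaches everyone
assert len(reachable(hub_spoke, 1, blocked={0})) == 2  # censor the hub: spread dies at once
assert len(reachable(ring, 1, blocked={0})) == 10      # censor one fiefdom: spread survives
```

The structural point survives the toy scale: in the hub-and-spoke graph a single gatekeeper can stop diffusion, while the small-world graph routes around any one censor.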
So if we look at this, much like a large language model or
any deep learning network, right.
I look at humanity's collective consciousness as like a
fractal problem solving network.
You have all these folks, all over the world, particularly like the
open source community, right, that this is this global problem solving network.
Right.
And in a very real way it's functioning just like a neural network.
So they're just searching this hyperdimensional landscape of possibilities,
as kind of Sarah Walker talked about yesterday, and finding new perspectives
that help us understand the world and engage with it better.
And so our knowledge-storing and sharing technologies, right, are these artifacts, à la Herbert Simon, if you're familiar with The Sciences of the Artificial, right, ways to help us optimize, right, our explore-exploit search mechanism, right?
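The explore-exploit search mechanism mentioned here has a standard toy formalization: the multi-armed bandit with an epsilon-greedy policy. A minimal sketch, where the arm payouts, the epsilon value, the seed, and the step count are all invented illustrations:

```python
import random

def epsilon_greedy(true_means, steps=10_000, epsilon=0.1, seed=42):
    """Balance exploring random arms against exploiting the best arm seen so far."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)    # pulls per arm
    values = [0.0] * len(true_means)  # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_means))               # explore: try anything
        else:
            arm = max(range(len(true_means)), key=lambda a: values[a])  # exploit
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]    # incremental mean update
    return counts, values

counts, values = epsilon_greedy([0.2, 0.5, 0.9])
# Over time the search concentrates its pulls on the genuinely best option.
assert counts.index(max(counts)) == 2
```

The same tension, spending effort trying new perspectives versus exploiting ones that already work, is what the talk attributes to societies searching a landscape of ideas.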
So yeah, just because it's complexity, I'm a big fan of complexity theory.
Like Sarah Walker said yesterday, right, she effectively came up with assembly theory, right?
We know from science, like rigor really matters, right?
And how do we test this and bring rigor to this new theory? But she's effectively proposing a new way to look at the world that defies our traditional physics types of measurements.
All right.
Uh, and so, you know, potentially that will blow up and give us whole new ways to interpret and understand the
world, right? Give us a new superior perspective. So all these problems we're
finding really hard right now will become easy. Right? And so after, you know, we
look at the printing presses, things have gotten a lot better in numerous areas, right? So, increased lifespan, I mean, child mortality has plummeted, improved healthcare for many, improved quality of life, right, increased education,
probably the most literate society that we've ever had, right.
But it's also come with some absolutely horrible things, right.
I mean, you have slavery and child labor, indigenous people's genocide, pollution.
Right now, we're looking at the current rise of autocrats, kind of based off, in my opinion, the dynamics of real-time bidding and the need to optimize wealth functions. So it also comes with a lot of bad. And those are the things we want to try
and prevent if we're going through another kind of great revolution.
And so the idea is, from this is that we really have a new
type of printing press, if you will, right?
Knowledge storing and sharing technology.
So if you're not familiar with this, if you don't code, I'll just give you a quick rundown.
So you have the internet, right?
Where you can share knowledge globally.
So, shameless plug, but I work on Mesa, agent-based modeling in Python.
And we got people from all over the world, particularly right now the Technical University of Delft,
that are contributing to it and making it vastly better.
Like, I can't even keep up. Right?
And so we're able to store that in GitHub on the left. There's also GitLab, Bitbucket.
There's all these, like, knowledge coding repositories that make it much easier to say like, well this was your idea, I just
made this change to it, now it's much better. Right? And you can instantly
merge it and then push it out to the entire world. Right? And then you get
these like naturally selected libraries, right? Where somebody has a really good
library, right? Or somebody has a really good idea, right? Then somebody else can
go in there and use it and take that knowledge.
So I think about like a year and a half ago
before I got stuck in academic administration,
we built a crop simulator,
to look at famines in West Africa,
and I know Jack about crop growth,
but I was able to find a reputable library
from a person in England,
and we could reliably show based off the current weather
patterns in West Africa, this is the dangers of a famine based
off these kind of staple crops in Niger.
All right?
So you're able to find those, right?
You've got things like Scikit-Learn, TensorFlow,
PyTorch, right?
Now we're getting Hugging Face, and they'll have these different large language models.
So we can now take that knowledge so I don't
have to spend years doing it. I can rapidly apply it. Right? Which even comes
with its own problems. If you don't understand what you're doing you start
applying tools that can give you answers that are appealing but somewhat wrong.
So now, the real point is, we're in this kind of next great revolution of knowledge storing and sharing, right? I personally contend this is the real revolution that's driving AI, right? Because if it was really stuck at these boutique firms or these universities, right, it would not have blown up like it has, right?
Because we could take a student and, you know, in a 10-week class, take them from almost no coding, you know, to building a facial recognition system with a support vector machine, right?
Like that's amazing, right?
So the other bit that we kind of contend, and this is a little bit of an aside, but it becomes important later for some of the choices we're making, right, is that local optimization will always matter. Things always want to be locally optimized, right?
So you can kind of see this now with ChatGPT.
I'll let Scott contradict me
because he knows much more about it than I do.
But you're starting to get this family, like if you're on ChatGPT now, you get these families of GPTs you can use that are optimized for image generation or code tutoring or things of that nature.
But from the no free lunch theorem,
the idea is like, well, they're always like,
I'm gonna have a local problem that I need to solve
and I could get like the 90% solution that's out there.
But I'm gonna have to tune that at the end
to make it as optimized as possible
for my particular problem set.
And I contend this also is very fundamental
to complex systems.
So you've got Darwin's finches, a classic example from biology, where you have the proto-finch, and then they customized their beaks based off which island they lived on, so they could maximize their chance of survival.
And then it goes all the way down to the Lorenz attractor, which is where the butterfly effect came from, which is that details will always matter.
There'll always be a measurement error.
There'll always be some other rounding variable out there
that you didn't account for, right?
So you wanna always be able to locally tune
any type of model that you have.
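The butterfly-effect point about measurement error can be shown directly on the Lorenz system: nudge the initial condition by one part in a billion and the trajectory ends up somewhere entirely different. A rough Euler-integration sketch, where the step size, duration, and size of the nudge are illustrative choices:

```python
def lorenz_trajectory(x, y, z, steps=5000, dt=0.01,
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz equations with plain (crude) Euler steps."""
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
    return x, y, z

a = lorenz_trajectory(1.0, 1.0, 1.0)
b = lorenz_trajectory(1.0 + 1e-9, 1.0, 1.0)  # one billionth of "measurement error"
separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
# The microscopic initial difference grows by many orders of magnitude:
# deterministic, yet practically unpredictable.
assert separation > 1e-3
```

This is exactly the "deterministically unpredictable" property raised again in the Q&A: the past fully creates the future, but any rounding of the present ruins the forecast.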
All right, and so from this, right?
The thought, from my view, from democracy, is that a great idea could come from anywhere, right?
And so if you go back to, you know, kind of feudal times or whatnot, that's where they said, hey, free speech matters, because just because you're a royal, or a blue blood, or you live in a castle, doesn't mean that the person out there living in squalor in a hut doesn't have some great idea that can make us all better.
So if we empower people to share their great ideas,
then we as a society will get better.
Now the challenge is we got to prevent the bad.
Now from this, and this is kind of all Geoffrey West out of the Santa Fe Institute, the concern I suppose is that technology is accelerating faster and faster.
And then if the curve gets to perfectly up and down, so you're dividing by zero, then we'll have a singularity, right?
Or it could just be sigmoidal, where it bounces off or kind of flattens out and we'll reach some kind of upper limit, all right?
But regardless, embracing this idea
will create exponential growth for good and bad, right?
And so the question is,
how does humanity mitigate the bad?
Right?
Cause we don't want to replicate some of the horrors
that we've seen over the last couple of hundred years.
The thing is, I don't know.
Right?
It's a very hard problem, right?
Because you can't, how do you anticipate something
you can't necessarily predict from emerging effects?
Right?
But never give up.
So this is where I start from: complex adaptive systems, right?
So first we acknowledge our limitations,
which is like, well, we don't know, right?
And fundamentally, when I say we don't know, I'm saying from a policy perspective, we have no good way to say, hey, we need to stop, let's say, the violence that's happening in Haiti right now, or, you know, the kind of turmoil that's happening in Central America with a massive refugee crisis, right?
We don't know how to deal with it.
We don't know what policies we can put in place that will require minimal
resources for maximum benefit to create a stable society that allows people's
life to flourish, right.
Life, liberty, and the pursuit of happiness and so on.
Right.
Um, and we don't really have tools to deal with it.
So the thought is, at this point, it's all conjecture on my part. This is what we're trying to get to, so I would say take everything with several massive grains of salt.
But what that thing down there on the right is supposed to be is a bunch of
different strange attractors orbiting around a larger strange attractor.
Right.
And so what we do know is that, you know, you get into a kind of stable state, and this could be scale-free,
so it could be your business, right?
It could be your neighborhood.
It could be an entire country, right?
Could be the globe.
And so the globe makes maybe the most reasonable example, where we talk about climate change, right?
Is that the earth is in this kind of like stable homeostasis
where it's a dynamical system that is constantly changing,
but it's orbiting around these kind of strange attractors, right, that we can't really envision, all right?
And then as we've, in this case,
put pollutants into the world and water
and created global warming and climate change, right?
We've changed, in technical terms, the fitness landscape, right?
And then we're gonna go to another type of saddle point, right?
And so we've changed it and that could be different, right?
And you saw this in the great depression, right?
Where they had this massive economic collapse, right?
And there were a lot of people say, well, it will right itself, right?
You know, kind of laissez-faire, the Austrian School of Economics, it'll right itself, don't touch it, right? And it didn't, because it was stuck in one of these like low
points here, right? And it couldn't get back out. And so then you had like the New Deal and stuff
like that, that changed the fitness landscape, right? And then we came to another stable state
and kind of got pulled out of the Great Depression, right? And so that's the idea of how we think it's working. The question is, well, how do we understand that?
Okay.
And so this is by far not like an exhaustive list.
There's a great article by David Krakauer and several other people
from the Santa Fe Institute that talk about some of the computational tools.
But one of the things I've been looking at is work from Marten Scheffer out of the Netherlands and his team.
He focuses on critical transition points and changes, right?
You identify signals, right, that are coming essentially in between the nodes. Sorry, the bumper sticker I give for a complex adaptive system is that it's just an adaptive network, right?
And so you're trying to identify those signals that are going between the nodes of that network.
This seems to work at the brain level, at the ecosystem level, and at the global level.
Although we haven't seen the research on this yet, I will contend this is intuitively how I understand societies: if we're having an impact on a society, it will follow a similar pattern.
And then you can see as it changes, this work can identify when a system's about to tip.
But at this point, I haven't seen anything that could say
what it's gonna tip into.
So if you think of the bifurcation diagram,
is that it's like, hey, we know it's about to go
one of two directions,
but we can't tell you which one it's gonna go to.
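The kind of early-warning signal described here, in the spirit of Scheffer-style critical-transition work, is often measured as rising lag-1 autocorrelation ("critical slowing down") in a sliding window. A toy sketch using a seeded AR(1) process whose recovery rate is deliberately weakened over time, standing in for a system drifting toward a tipping point; the process and all its parameters are invented for illustration, not real social data:

```python
import random

def lag1_autocorr(xs):
    """Lag-1 autocorrelation: how strongly each value predicts the next one."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    return cov / var

rng = random.Random(0)
series = [0.0]
for t in range(2000):
    # Recovery from shocks weakens as the (hypothetical) tipping point nears:
    # phi rises from 0.1 toward 1.0, so perturbations linger longer and longer.
    phi = 1.0 - 0.9 * (1.0 - t / 2000)
    series.append(phi * series[-1] + rng.gauss(0.0, 1.0))

early = lag1_autocorr(series[:500])   # window far from the transition
late = lag1_autocorr(series[-500:])   # window close to it
# The early-warning signal: autocorrelation creeps upward before the system tips.
assert late > early
```

Note what the sketch does and does not give you, matching the talk's caveat: the rising signal says a transition is near, but nothing in it says which basin the system will fall into.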
But I would say you could see that when you look at even US society right now, as a democracy in crisis, it's like, okay, how's this going to tip into the future?
And your signals have changed, the butterfly has kind of flapped its wings.
And so will we tip into another strange attractor?
How will our society evolve, right?
And so that kind of shows on this picture here, right?
We have a resilient status quo, right?
And then you get to a fragile status quo
and you would theoretically tip
into another type of status quo, right?
And the reality is we just don't understand it, right?
We're getting better, right?
But this would be, you know, from an understanding
perspective, like how do we institute policies that create
stable, thriving societies, right?
You know, through consciousness, or through your kind of meta-consciousness, we don't have that down, right?
But we're trying.
All right, and so the other bit, too: now it's, okay, what tools can we use?
And again, I have no idea.
I'm just saying this is the little path that I'm on
to try and help figure this out, right?
But the one thing I do know is that better understanding
is not effective policy, right?
Because you have these emergent systems,
like it's like, well, I know that this is having this impact.
It's like, well, that's great, but what can replace it?
All right. Like what new policy can I put in place to replace what's happening, in order to produce this new emergent behavior of the system? All right. So
we need to develop new tools. This is primarily what my PhD was in, in
computational social science, right? It's saying like, hey, we've got to use these artifacts.
We've got to generate simulations, essentially virtual laboratories of the world.
So we can test things out in silico, as they say, prior to actually implementing it.
Right.
And so you have generative science, which is fundamentally different
from reductionist science, right?
Where it's like, I'm trying to grow the system, right? Because I can't just disprove things, right?
And then, so create simulations, you know, essentially these virtual laboratories,
we can try stuff out. And so that is happening.
So George Mason, Oxford and Princeton, I think, yeah,
built a simulation back in like 2014 to try and replicate the 2008 housing market crisis.
Right. They got, you know, so in this kind of science, because you can't just prove stuff,
you do what's called pattern-oriented modeling. So they had nine statistics from that time period
on the greater DC housing market area, right. And they matched like eight of the nine. They said,
eh, that's good enough, right? And so now the Bank of England uses a version of it for London, and the central bank of Norway has one for Oslo, where essentially they try out policies in silico, right, and then they try and implement those policies to prevent, in London's case, a housing crisis, and same with Oslo.
So there are places that are trying to use this
and develop these tools,
but it's still a very nascent technology.
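The "matched eight of the nine" validation step the speaker describes, pattern-oriented modeling, amounts to counting how many observed statistics a simulation reproduces within some tolerance. A minimal sketch; the statistic names, the numbers, and the 10% tolerance are all hypothetical stand-ins, not figures from the actual DC housing model:

```python
def patterns_matched(observed, simulated, tolerance=0.10):
    """Pattern-oriented check: count how many observed statistics the
    simulation reproduces within a relative tolerance."""
    hits = 0
    for key, obs in observed.items():
        sim = simulated.get(key)
        if sim is not None and abs(sim - obs) <= tolerance * abs(obs):
            hits += 1
    return hits

# Hypothetical housing-market statistics, purely for illustration.
observed = {"median_price": 400_000, "vacancy_rate": 0.06, "turnover": 0.09}
simulated = {"median_price": 415_000, "vacancy_rate": 0.058, "turnover": 0.13}
assert patterns_matched(observed, simulated) == 2  # turnover misses the 10% band
```

The design choice mirrors the talk's point about generative science: since you can't formally prove a simulation correct, you accept it when it reproduces enough of the observed patterns at once.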
So then what am I doing personally for this?
So first off, we're trying to increase tech literacy. There are a lot of myths and stuff out there about AI and what it can do, and fear, and too many people watching cable news, right? And so, luckily, on our faculty we have a PhD in curriculum and instruction who's very good. And so instead of giving people the classic data science thing, like, okay, we're going to start you out coding, you're going to learn all these statistics and all these data science tools, we say, because we deal with people that are professionals for our student base, hey, you really know a lot about, let's say, political science or the dynamics of something or other.
Then we'll give you an analytic that meets you in your space
where you're an expert, right?
And then as you realize that, hey, this analytic is not exactly right, you can just flip into the code, right?
So if you're familiar with Jupyter notebooks and stuff like that, you don't see the code right away, but you see the choices that that developer made, right?
And then you start to explore that computational tool at deeper and deeper levels based off where you feel most comfortable.
Right.
And that kind of takes best practices of adult learning.
I put it up there, but one of my thesis students built a cell toolbar, which works; it's not exactly where it should be yet, so you might want to give it a couple of months, right?
But the idea is that you can hide code better and store knowledge, so it's easier for people that aren't technically literate to interact with the code.
And that's on the PyPI library.
It's an extension for Jupyter Notebooks.
And then I still continue to work on Mesa,
agent-based modeling in Python, which is how do we
develop a simulation ecosystem?
So this is very much my own heavily biased opinion.
But if you're familiar with scikit-learn, which is great, you can, you know, build all sorts of machine learning models in like three lines of code, more or less, right?
And you've got TensorFlow and PyTorch; TensorFlow is from Google, PyTorch is from Facebook.
And so you can very easily build out these deep learning models and things of that nature, right?
Um, but what does that look like for the agent-based modeling ecosystem? So again, my bias, but I would say it's an order of magnitude harder to have an ecosystem where I can rapidly bring together the pieces and quickly build a model that's relevant, right?
So, uh, the example I use is like an elementary school principal
during like a COVID outbreak, right?
Like how can that person try out, you know,
policies that are valid for their particular school
based off their situation, right?
Like how can you say, okay,
I got the mindset of a kindergartner
and I got the mindset of a sixth grader, right?
And now this is exactly in how my school's laid out,
whether it's in remote Kansas or, you know, New York City,
and what policies can I put in place, right,
to try this virtual laboratory
that will dramatically mitigate the outbreak in my institution.
It might be as simple as make sure a teacher watches the kindergartners wash their hands.
Then you reduce it by 20%.
But the idea is, how do we create an environment where we can have simulations that you rapidly put together?
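The school-outbreak thought experiment above can be sketched as a tiny agent-based model in plain Python, in the spirit of what Mesa makes easy. Every number here (class size, contacts per day, transmission probability, and the assumption that supervised handwashing halves per-contact transmission) is an invented illustration, not an empirical claim:

```python
import random

def run_outbreak(n_pupils=200, contacts_per_day=8, days=30,
                 p_transmit=0.02, handwashing=False, seed=7):
    """Toy agent-based outbreak: each day every susceptible pupil meets a few
    random classmates; supervised handwashing (a hypothetical policy) halves
    the per-contact transmission probability."""
    rng = random.Random(seed)
    p = p_transmit * (0.5 if handwashing else 1.0)
    infected = {0}  # patient zero
    for _ in range(days):
        newly = set()
        for pupil in range(n_pupils):
            if pupil in infected:
                continue
            for _ in range(contacts_per_day):
                contact = rng.randrange(n_pupils)
                if contact in infected and rng.random() < p:
                    newly.add(pupil)
                    break
        infected |= newly
    return len(infected)

# Average over several seeded runs so the comparison isn't one lucky draw.
baseline = sum(run_outbreak(seed=s) for s in range(20)) / 20
with_policy = sum(run_outbreak(handwashing=True, seed=s) for s in range(20)) / 20
# The in-silico "policy experiment": try the intervention before mandating it.
assert with_policy < baseline
```

The point of the sketch is the workflow, not the numbers: you run the virtual laboratory under the status quo and under a candidate policy, then compare outcomes before touching the real school.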
There are some companies trying to do this. I think Hash.ai is one of them. But there's a couple other agent-based model
communities out there. But what I don't know, what I'm very interested in, is how can you create
this ecosystem that allows for this rapid building of agent-based models, virtual simulations,
that's as easy and effective to use
as a current machine learning ecosystem.
Right, so that was, ooh, I think it went fast.
That's good.
All right, so questions.
So you mentioned the housing crisis model,
and you said that they matched eight of nine indicators, and that the Bank of England is now using this model. But given the butterfly effect, you're skipping an indicator. How do you resolve that kind of quandary?
I don't think you'll ever resolve it, right? Because it's always going to be in there,
right? So you got like an event horizon, if you will, we could kind of see out this far,
but it's always kind of,
you're always having to constantly tweak the knobs
and update your understanding of the situation.
Right, so you can never solve it.
It's, I would say, an unequivocally impossible problem to solve.
I think the technical term from chaos theory
is it's deterministically unpredictable, right?
Which means the past creates the future,
but because of all these rounding errors and variables, you can't actually predict it.
Thank you so much.
It's very enlightening, and I don't really know how to ask this, but there are studies about the morphic field.
Because the thing is, like the United States, when they started the country, they had really good people
worrying about the people, for the people and everything. And they also had a saying of,
in God we trust, because they believe in the supernatural. So I feel like a lot of the science
sometimes they're forgetting about this supernatural thing that has been studied from, you know, ancient Egypt, or mysticism, or the occult sciences.
So the morphogenetic field is the thing that is not in the visible.
And it's still like, right now it's shown like for therapy.
And it's actually being implemented in Brazil in the judiciary and the health system,
because it has shown effectiveness.
Because this field actually is kind of, we have the Akashic field, that is like our cloud database, and the morphogenetic field is our programming. So the way that things are programmed, we can go back and reprogram, but it's more on the occult sciences, I guess, I mean, the supernatural.
Nikola Tesla said, like, when we start studying the non-physical phenomena,
we will have more advances in science in a decade than in centuries.
And I do believe we need to go back to the sacred, like ancient scriptures. And not only that, like the, like the sacred,
I'm sorry, I'm nervous.
I'm not really good with talking, but are you familiar
with Manly P. Hall?
With who?
Manly P. Hall.
No.
He's the writer of The Secret Teachings of All Ages, right?
I'm not, I can't remember the book now,
but there's a lot of wisdom in that.
And right now what we need in consciousness
is also the technology, understanding wisdom.
Because like quantum computing is like you getting information
from all sources in order to build this intelligence.
We have all this information.
We're only missing this part that is like doing the things that is right.
Because even everything that the government has done, that you're saying that now we need to be more clear about it, is still there.
It still shows in the morphic field.
Whenever we do this right through, you know, the supernatural like using love, compassion, we can rewrite what was done wrong.
And I think we can have a better society and better leaders because that's what we're needing
right now.
We need someone that is going to guide us without interest of businesses or things like
that, people that really care about people.
And we have that, we care about each other, but in a way we've been
programmed to think differently.
So I've got to take what you just said, and I'm not going to disagree, but I've got to put it in kind of these terms, right?
So what I would say is you're arguing for a different perspective, a superior one, right?
You're saying, hey, I think this is superior.
If we implement these types of policies, we as a humanity, a society, at different levels, will do better, right?
And so I'd say, using the kind of idea of an explore-exploit algorithm, you can say that, and then as people implement it, the trick is, are we optimizing our environment, right, to allow the ideas that are working to actually bubble to the surface?
Because the trick is,
there's been a lot of bad ideas that have bubbled to the surface that didn't work,
that people encouraged for like Nazis and fascism and stuff like that.
I would say, I don't disagree. It's just like, okay, so you say, hey, these ideas,
we're seeing these in Brazil, they're working really good.
Right?
How do we optimize the policies, you know,
technological environments of our society
so that those ideas can be validated, right?
And bubble up to the surface, right?
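The explore/exploit idea referenced here is the classic multi-armed-bandit trade-off. As a rough sketch, assuming an epsilon-greedy formulation (the function and its parameters are illustrative, not anything from the talk):

```python
import random

def epsilon_greedy_bandit(reward_probs, steps=10_000, epsilon=0.1, seed=0):
    """Explore/exploit loop over competing 'policies' (bandit arms):
    with probability epsilon pick a random arm (explore), otherwise
    pick the arm with the best observed average reward (exploit)."""
    rng = random.Random(seed)
    n = len(reward_probs)
    counts = [0] * n      # how often each arm was tried
    values = [0.0] * n    # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                        # explore
        else:
            arm = max(range(n), key=lambda i: values[i])  # exploit
        reward = 1.0 if rng.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts, values

# Three hypothetical policies with true success rates 0.2, 0.5, 0.8:
# the best one ends up "bubbling to the surface" (pulled most often).
counts, values = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

With a small epsilon, most trials end up going to the best-performing arm, which is the "bubbling to the surface" dynamic being described.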
And I mean, just politically, as you can all see
in our very kind of caustic environment in the US right now,
right, is that people will say something,
it's like, hey, this is right.
Right. Like, uh, Hey, uh, you know, it's been shown, uh, it's like a controversial
idea that you could take somebody that's, you know, like say, uh, homeless and
alcoholic and it's better just to put them in, in like a home where they have
access to alcohol because then, uh, you're actually spending less money on
ER visits and stuff like that.
But that's a very controversial idea.
Right. And so it's, you know, I don't know the answer, but that's, I would say the bit is
how do you take the ideas that even might not be intuitive, right.
And say like, Hey, if we do this, it actually works really well.
Right.
And you know, the trick is we start going against culture, right.
Then people are like, no, I'm not going to, I will literally not believe
that no matter what happens, right.
Like the earth is flat.
Yeah. Thank you for the talk, Tom. I enjoyed it. I want to press you a little bit on this notion of
stability, I guess in a sense, right? So part of what the project is to maintain stability as a way
to maybe effectively govern or create a more just society.
I guess the way stability is in this context,
it makes it sound like stability
is always good, right?
But in some ways I wonder if like, actually
that should be our kind of maximizing principle,
right?
Because you might think, well, say something
unjust is going on, the complete opposite of
stability seems to be what actually would be
the normative, like you ought not to endorse
that system.
So I wonder, like, you know, say we put certain
practices in place, right?
Which always maximize stability, but you have
unethical or unjust society or something like that.
Well, then it seems like you're just helping to
maintain that particular thing.
So I'm just wondering in some ways, like how do you,
because the pressure I get from the talk, right, is
this is to have a positive impact, right?
It's in some ways.
And I'm wondering, well, actually it seems like maybe
stability could have a very negative impact
in some ways.
So I'm wondering, like, how does that get counterbalanced
in the work?
So, I mean, that's an awesome question, Garret.
That's phenomenal.
I'd say you're absolutely right.
Like, it can never stop moving, right?
And I think, you know, like the trick in economics is,
well, if you're gonna have change,
you just want it to not be completely abrupt, right?
So much like, you know, much like a democracy, right,
that in my opinion, after, you know,
spending my adult life fighting
in counter-insurgency and stuff, right,
is that democracy is, nobody wins, right?
Once somebody wins, they just try and steal all the power.
So I figure, right, like that stability is more like,
hey, it's not like some kind of utopian stability
that can't exist, it's more like,
how do we, as we're constantly evolving and maybe going through these
spots of punctuated equilibrium, do we, you know, make sure you don't have like
a massive famine, you know, where like this happened a couple of times in, uh,
in China where literally millions of people are dying, right?
Like how do we, essentially as we shift from one kind of strange attractor,
go through a punctuated equilibrium, uh, to a new kind of homeostasis stable state, right?
How do we help manage that so it's not as traumatic?
Great talk, Tom.
So there's been a resurgence around agent-based thinking
recently with the LLM models.
What are your thoughts about taking this
agent-based modeling, which has been around for some time with all this great work in complex systems,
how do we tie in the LLMs?
Do you envision putting an agent on to each little agent,
rather put an LLM inside each little spot?
I think so, and actually Mark and I
kind of talked about this for a while,
but it's pretty straightforward. I had a student
do this last quarter, a directed study, and you can link up
LLMs to talk to each other as essentially like little agents, right?
Uh, I think in the end, like it's, you know, you start talking abstractions,
uh, you know, if complex systems are an adaptive network, what's effectively
on LLM it's an adaptive network.
Right.
So I think some of the problems that OpenAI in particular is trying to
solve when you get to AGI is, in my own mind,
and this is like very much a conjecture, right?
Is that at some point the kind of tools
will probably merge together, right?
And then just how they're presented
by the computer scientists is just different abstractions.
So they're more understandable.
But I think it's both adaptive networks.
And now it's like, well, how do these adaptive networks
function to solve problems?
Because you can imagine like the housing
simulator but each one actually wants a house or thinks it does. Yeah, yeah.
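The "LLM inside each agent" idea can be sketched in a few lines of Python, with the model call stubbed out since no specific API is named here; `fake_llm` and `HouseholdAgent` are hypothetical names:

```python
from dataclasses import dataclass, field

def fake_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; a real system would send the
    prompt to a language model here and parse its reply."""
    return "bid high" if "hot" in prompt else "bid low"

@dataclass
class HouseholdAgent:
    name: str
    memory: list = field(default_factory=list)

    def step(self, observation: str) -> str:
        # Each agent delegates its decision to the (stubbed) LLM,
        # then remembers what it saw and did.
        prompt = f"You are {self.name}. Market report: {observation}. What do you do?"
        action = fake_llm(prompt)
        self.memory.append((observation, action))
        return action

# One tick of a toy housing market with three LLM-backed agents.
agents = [HouseholdAgent(f"agent-{i}") for i in range(3)]
actions = [agent.step("market is hot") for agent in agents]
```

In a real system, `fake_llm` would be replaced by a call to an actual model, and each agent's stored memory could feed into its next prompt.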
There you go. So you have a chart there where we're going towards a
singularity. One of the interesting things about getting close to a
singularity is the laws start breaking down, whether you're leaving from a
singularity or going towards a singularity. And I would suggest that a lot of the assumptions
are beginning to break down. For example, the assumption that policy matters:
it used to be that there was policy and people would follow the policy. Now we have so
much not-following of the policy, there's ways around the system.
There are other effects like facts don't matter as much. And so a lot of the assumptions we have,
those are beginning to break down as we're getting closer to the singularity. So we almost need a
meta consideration about what we would do. Do you have any thoughts about that?
I would say so.
I'm not sure we're getting to the singularity.
I'll go to this slide, right?
As I think we're at that back hill up there
where it's a more fragile status quo.
As the fitness landscape has changed,
like our chances of tipping into a new dynamic
are just increasing, right?
We're seeing that all the time with numerous
unprecedented historical events, right?
So I think that's it.
And I think my only other thought would be that it
is like kind of fractal in nature.
Right.
And kind of like yesterday, when we started talking about the universe,
it's universes all the
way up, right?
Like, can you use
tools like this to essentially view your world
as a universe all the way up, right?
And to the point of things breaking down,
I think we're definitely seeing that.
I've been doing some stuff on the open source ecosystem,
and it's like, hey, this defies all traditional economics, right?
Like, why are people volunteering their time
to do these massively valuable inputs, right?
When they should be getting paid for it, right?
What does that mean for an economy?
Right?
So I think you're absolutely right.
I'm not sure we're going to the singularity though.
We are going faster,
but I think we are definitely in a spot of more,
like a more fragile state where we might tip
into some new type of reality, right?
Like just another historical reality, right?
Like, you know, 18th century to 19th century,
not, I guess, like tipping into a singularity.
I wanted to ask you a question
that's related to those last two questions.
You started your talk with the basically diversity
of thought being a good thing
and leading to expansion of knowledge
and cultural revolution.
I can't help but think to recent history in the 1850s,
it seems like there was a lot of diversity of thought in America.
And I would argue that that could be perceived as a bad thing
because it led to the Civil War, 600,000 Americans dying.
And so that's like a saddle point, basically.
And the way out of that saddle point was, you know,
arguably Abraham Lincoln was implementing what would be considered
very fascistic policies, like imprisoning dissenters, political dissenters.
So two-part question is, how do you distinguish good diversity of thought from bad diversity
of thought that's destabilizing?
And to what extent or what value can you, or how can you implement a network that is
going to be able to feed forward knowledge from centuries of lessons learned
that can actually impact the future evolution?
I can't help but think that any future civil war
might be avoided by a recollection
of the not-too-distant past, the horrors.
That's it.
So the short answer to that is I have no idea
because you're absolutely right.
Because we just
don't understand it, right?
I mean, if you go back to the Civil War
or even World War II, there were moments
that were very brief and very important,
comparatively like Gettysburg,
the Battle of Midway and stuff like that,
where you could have had a vastly different reality,
right, that came out of it.
And so I guess what I would say, you just said it like that,
to me epitomizes the problem.
We have no scientific or rigorous way to say,
well, in this situation, you should do these things
and that will help push the system in this direction.
Right, like everything I've seen is that we're making small progress,
but we functionally don't have tools that
will allow us to do that.
And so that's what now I've spent 15 years trying
to figure that out.
It's a tough one, right?
It seems like in those last few slides,
you're talking about human tractable and interpretable
algorithms with human tractable and interpretable variables so that someone can take their local
situation and sort of tailor the algorithm to them, like your example of the school with
COVID.
In situations like that where the person's answer from the algorithm actually matters
in them making real local policy decisions, how do you reconcile this idea of building human tractable and interpretable algorithms
with the fact that it seems like the non-tractable and non-interpretable algorithms where you
just throw in 2000 data points and ask it to predict something seem to have much more
predictive power and would be much better for local policy decisions?
Well, so this is what I would say, and this is why I'm giving this talk at this conference,
because that way people can call BS,
and I know there's a lot of folks very proficient
at LLMs and stuff like that, right?
The challenge with these current tools, right,
is I can break those tools if the world changes, right?
So my understanding was, when COVID hit
and people's behavior changed drastically overnight
and did something that had
never been seen before, right, that broke a bunch of the tech giants' tools, right?
Because the trick is we're not trying to predict the future, we're trying to shape the future,
right?
Like, so if it's a choice between, you know, kind of like genocide, war, World War III
and nuclear weapons, right?
It's like, oh, we don't want that to happen.
How do we implement things that help us, that put us on a different path?
So far, and I'm not knee deep into
some of the large language model stuff,
is I haven't seen, from my perspective,
I haven't seen models that can account for that, right?
That can say, hey, here's a policy you should do
to create this new future.
In fact, the only thing that I've seen that even gets close to this,
and it echoes the talk yesterday, was the bomb experiment
from quantum mechanics, where it's like, hey, it's actually not small,
it's all the way up. In the bomb experiment,
they were able to show that quantum mechanics could show what didn't happen.
Right.
And that's, I mean, if you can see all these realities and then it's
like, well, I know these realities didn't happen, right,
because of this thing, that becomes a big, you know, kind of computational
breakthrough, in my opinion.
So I just wanted to go back to in the beginning, you talked a little bit about
a meta consciousness.
Could you give us a definition of what do you mean by meta consciousness and what
does that entail as far as looking at these complex systems?
Yeah, sure.
So I can't give you a good definition, but I can say this.
So if you look at like, so kind of classic one I've been exposed to on this is Jane Jacobs,
like the wisdom of cities, right?
Like it's amazing.
You can look at cities, you know, like Tammany Hall in the 1800s in the US and Lagos, Nigeria now,
and it's like they form a kind of collective consciousness on how they function.
Even London during Dickensian times.
So you have these essentially complex systems that form that although they're made up of
a bunch of little people, they exhibit the same type of behaviors and the same type of optimization functions. I know the Santa Fe Institute's really
looking into that and they started a new journal with, I think it's Jessica Flack and Scott Page,
called Collective Intelligence, right, but it's that idea. I think the book is Solaris, or no,
yes, there's a book called Solaris, then there's a Russian movie and then there's a George Clooney movie that sucks.
Right.
But, uh, uh, but in, in that it's the idea that, um, that, you know, this one planet is a singular consciousness.
Right.
And it's all just functioning together.
Right.
Uh, so I won't give you a definition;
I'll give you, like, the field of study.
Uh, and I would look at the collective intelligence journal.
It's only like a year old or so now.
But just to clarify, it's collective intelligence, not like sensational consciousness of a society.
Yes, yeah.
Right, but that's, well, I mean, because that goes to the other bit where you could go downward, right?
Are we, as conscious human beings, just the byproduct of a microbiome that needed to survive?
Right. So I've heard numerous microbiologists, you know, make that joke, right, which is that, I mean, you've got more bacteria in you than you have of you in you.
Right. And so, you know, are we just a side effect of them trying to survive?
Right. So are we really just a collective intelligence for these bacteria?
Just wanted to make a quick comment on the collective intelligence, consciousness kind of thing.
I always like to think of our vegetables, fruits,
certain plants that we like to use
to feel in different altered states
are kind of using us to propagate their species in a sense.
Like if you think about a lot of the animals that we eat
or the plants that we eat,
what better way of ensuring their survival
than being tasty or nutritious to us
and now they're just everywhere.
So it is kind of a very interesting dynamic to think about.
That's good. I'd defer to an evolutionary biologist.
Hi, so I lived in New York for 30 years
where the conversation was a lot about learning organizations
and just in time knowledge, that sort of thing.
So now I live in a rural area in Florida
where the biggest problem locally
is whether or not people are allowed to park
big trucks in their backyard.
And they resent any kind of opinions
coming from anywhere else about the aesthetics of it,
the utility of it, all of that sort of thing.
So how do you get past the local culture that totally rejects any kind of input
from a larger outside organization?
Well, so I don't think you do.
And this kind of goes, I think, to your question a little bit, Gare, which is,
so I don't know if you know who David Kilcullen is.
He's an Australian, kind of one of the world's foremost counterinsurgents, who was advising the US at the most senior levels through Iraq and Afghanistan.
And he wrote a book called Out of the Mountains, where he proposed the theory
of competitive control, right?
And in that the idea is, so we're in these like homeostasis stable states, but you're
never going to have like homogeneity in a population, right?
And that'll probably be disparate.
So I wouldn't necessarily say that diversity of opinions isn't necessarily bad, right?
But they do select over time, right?
And so, you know, in, you know, because rural Florida is vastly different than urban New
York City, right?
You're going to select for different things based off, you know,
whatever your optimal dynamic is.
So, uh, where it starts to get into problems is like, well, now your
optimal dynamic is actually, you know, like, say, the fertilizers in Iowa,
you know, creating red tides and killing the fish down in Texas.
So how do you balance that?
Right.
Right. Um, so I guess I'd just say, you always want heterogeneity in any type
of population, and then it's, how do you set up dynamics to optimize that
so you can, you know, let a lot of people just go about their life and not be interrupted,
but also optimize how that society continues to function and doesn't, you know, get in
a war with itself.
Okay, great.
I'd like to thank Tom.
[Applause]
Firstly, thank you for watching, thank you for listening.
There's now a website, curtjaimungal.org, and that has a mailing list.
The reason being that large platforms like YouTube, like Patreon, they can disable you for whatever reason, whenever they like.
That's just part of the terms of service.
Now a direct mailing list ensures that I have an untrammeled communication with you.
Plus, soon I'll be releasing a one-page PDF of my top 10 TOEs.
It's not as Quentin Tarantino as it sounds like.
Secondly, if you haven't subscribed or clicked that like button, now is the time to do so.
Why?
Because each subscribe, each like helps YouTube push this content to more people like yourself,
plus it helps out Curt directly, aka me.
I also found out last year that external links count plenty toward the algorithm, which means
that whenever you share on Twitter, say on Facebook or even on Reddit, etc., it shows YouTube, hey, people are talking about this content
outside of YouTube, which in turn greatly aids the distribution on YouTube.
Thirdly, there's a remarkably active Discord and subreddit for theories of everything,
where people explicate TOEs, they disagree respectfully about theories, and build as
a community our
own TOE.
Links to both are in the description.
Fourthly, you should know this podcast is on iTunes, it's on Spotify, it's on all of
the audio platforms.
All you have to do is type in theories of everything and you'll find it.
Personally, I gained from rewatching lectures and podcasts.
I also read in the comments that, hey, TOE listeners also gain from replaying.
So how about instead you re-listen on those platforms like iTunes, Spotify, Google Podcasts,
whichever podcast catcher you use.
And finally, if you'd like to support more conversations like this, more content like
this, then do consider visiting patreon.com slash curtjaimungal and donating with whatever
you like. There's also PayPal, there's also crypto,
there's also just joining on YouTube. Again, keep in mind it's support from the sponsors
and you that allow me to work on toe full time. You also get early access to ad free
episodes whether it's audio or video, it's audio in the case of Patreon, video in the
case of YouTube. For instance, this episode that you're listening to right now was released
a few days earlier. Every dollar helps far more than you think.
Either way, your viewership is generosity enough.
Thank you so much.