The Joe Rogan Experience - #1211 - Dr. Ben Goertzel
Episode Date: December 4, 2018
Dr. Ben Goertzel is the founder and CEO of SingularityNET, a blockchain-based AI marketplace. ...
Transcript
Boom. Hello, Ben.
Hey there, good to see you, man.
Yeah, it's a pleasure to be here. Thanks for doing this.
Yeah, thanks for having me. I've been looking at some of your shows in the last few days, just to get a sense of how you're thinking about AI and crypto and the various other things I'm involved in. It's been interesting.
Well, I've been following you as well. I've been paying attention to a lot of your lectures and talks and different things you've done over the last couple days, getting ready for this. With AI, either people are really excited about it or they're really terrified of it. Those seem to be the two responses: either people have this dismal view of these robots taking over the world, or they think it's going to be some amazing sort of symbiotic relationship that we have with these things, that it's going to evolve human beings past the monkey stage that we're at right now.
Yeah, I tend to be on the latter, more positive side of this dichotomy.
But I think one thing that has struck me in recent years is that many people are now mentally confronting issues regarding AI that I've been processing for three decades. I first started thinking about AI when I was a little kid, in the late 60s and early 70s, when I saw AIs and robots on the original Star Trek. So I guess I've had a lot of cycles to process the positives and negatives of it, whereas now, suddenly, most of the world is thinking through all this for the first time.
And, you know, when you first wrap your brain around the idea that there may be creatures
10,000 or a million times smarter than human beings, at first this is a bit of a shocker,
right?
And then, I mean, it takes a while to internalize this into your worldview.
Well, I think there's also a problem with the term artificial intelligence, because it's intelligent. It's there. It's a real thing. It's not artificial. It's not like a fake diamond or a fake Ferrari. It's a real thing.
Yeah, it's not a great term, and there have been many attempts to replace it with synthetic intelligence, for example.
But for better or worse, like AI is there.
It's part of the popular imagination.
It's an imperfect word, but it's not going away.
Well, my question is, are we married to this idea of intelligence and of life being biological, being carbon-based tissue and cells and blood, or insects or mammals or fish? Are we married to that too much? Do you think it's entirely possible that what human beings are doing, what the people at the tip of AI right now who are really pushing the technology are doing, is really creating a new life form,
that it's going to be a new thing,
that just the same way we recognize wasps and buffaloes, artificial intelligence is just going to be a life form
that emerges from the creativity and ingenuity of human beings.
Well, indeed.
So, I mean, I've long been an advocate of a philosophy I think of as patternism.
Like, it's the pattern of organization that appears to be the critical thing.
And, you know, the individual cells and going down further, like the molecules and particles
in our body are turning over all the time.
So, it's not the specific combination of elementary particles
which makes me who I am or makes you who you are.
It's a pattern by which they're organized
and the patterns by which they change over time.
So, I mean, if we can create digital systems
or quantum computers or femto computers
or whatever it is manifesting the patterns of organization
that constitute intelligence, then there you are. There's intelligence, right? That's not to say that consciousness and experience are just about patterns of organization. There may be more dimensions to it. But when you look at what constitutes intelligence, thinking, cognition, problem solving, it's the pattern of organization, not the specific material, as far as we can tell.
So we can see no reason based on all the science that we know so far that you couldn't make
an intelligent system out of some other form of matter rather than the specific types of atoms and molecules that make up human beings.
And it seems that we're well on the way to being able to do so.
When you were studying intelligence, studying artificial intelligence, did you spend any time studying the patterns that insects seem to cooperatively behave with, like how leafcutter ants build these elaborate structures underground and wasps build these giant colonies? Did you study that?
I did, actually, yes.
So I sort of grew up with the philosophy of complex systems,
which was championed by the Santa Fe Institute in the 1980s. And
the whole concept that there's an interdisciplinary complex system science, which includes, you know,
biology, cosmology, psychology, sociology, there's sort of universal patterns of self-organization. And ants and ant colonies
have long been a paradigm case for that.
I used to play with the ant colonies in my backyard
when I was a kid,
and you'd lay down food in certain patterns,
you'd see how the ants are laying down pheromones,
and the colonies are organizing it in a certain way.
And that's an interesting self-organizing complex system on its own.
It's lacking some types of adaptive intelligence that human minds and human societies have,
but it has also interesting self-organizing patterns. This reminds me of the novel
Solaris by Stanislaw Lem, which was published in the 60s, which was really quite a deep novel, much deeper than the movie that was made of it.
Did you ever read that book, Solaris?
I'm not familiar with the movie either. Who's in the movie?
So there was an amazing, brilliant movie by Tarkovsky, the Russian director from the late 60s.
Then there was a movie by Steven Soderbergh, which was sort of glammed
up and Americanized.
Oh, that was fairly recent, right?
Yeah, 10 years ago. But that didn't get all the deep points of the novel. In the original novel, in essence, there's this ocean coating the surface of some alien planet, which has amazingly complex fractal patterns of organization.
And it's also interactive, like the patterns of organization on the ocean
respond based on what you do.
And when people get near the ocean, it causes them to hallucinate things
and even causes them to see simulacra of people from their past,
even like the person who they had most harmed or injured in their past
appears and
interacts with them.
So clearly this ocean has some type of amazing complexity and intelligence, from the patterns it displays and from the weird things it wreaks on your mind. So the people on Earth, trying to understand how the ocean is thinking, send a scientific expedition there to interact with that ocean. But it's just so alien. Even though it monkeys with people's minds and clearly is doing complex things, no two-way communication is ever established. And eventually, the human expedition gives up and goes home. So it's a very Russian ending to the novel, I guess.
I think I saw that.
But the interesting message there is,
I mean, there can be many, many kinds of intelligence, right?
I mean, human intelligence is one thing.
The intelligence of an ant colony is a different thing.
The intelligence of human society is a different thing.
The ecosystem is a different thing.
And there could be many, many types of AIs that we could build with many, many different properties.
Some could be wonderful to human beings. Some could be horrible to human beings. Some could just be alien minds that we can't even relate to very well.
We have a very limited conception of what an intelligence is if we just think by close
analogy to human minds.
This is important if you're thinking about engineering or growing artificial life forms or artificial minds, because it's not just, can we do this? It's, what kind of mind are we creating?
If an insect started organizing and developing these complex colonies, like a leafcutter ant, and building these structures underground, people would go crazy.
They would panic.
They would think these things are organizing.
They're going to build up the resources and attack us.
They're going to try to take over humanity.
I mean, what people are worried about more than anything when it comes to technology, I think, is the idea that we're going to be irrelevant, that we're going to be antiques, and that something new and better is going to take our place, which is a weird thing to worry about.
Which is almost inevitable.
Yeah, it's a weird thing to worry about because it's sort of the history of biological life on Earth.
I mean, what we know is that things become more complicated, single-celled organisms to multi-celled organisms. There seems to be a pattern leading up to us, and us with this unprecedented ability to change our environment. That's what we can do, right? We can manipulate things, poison the environment. We can blow up entire countries with bombs if we'd like to. And we can also do wild, creative things, like send signals through space and land them on someone else's phone on the other side of the world almost instantaneously. We have incredible power, but we're also so limited by our biology.
Yeah. The thing I think people are afraid of, and I'm afraid of, though I don't know if it makes any sense, is that the next level of life, whatever artificial life is, or whatever the human symbiote is, is going to lack emotions, is going to lack desires and needs, and all the things that we think are special about us. Our creativity, our desire for attention and love, all of our camaraderie, all these different things that are sort of programmed into us with our genetics in order to advance our species, that we're so connected to these things. But they're the reason for war. They're the reason for lies, deception, thievery. There are so many things that are built into being a person that are responsible for all the woes of humanity, but we're afraid to lose them.
Yeah, I think it's almost inevitable by this point that humanity is going to create synthetic intelligences with tremendously greater general intelligence and practical capability than human beings have.
I mean, I think I know how to do that with the software I'm working on with my own team.
But if we fail, you know, there's a load of other teams who I think are a bit behind us,
but they're going in the same direction now, right?
So you guys feel like you're at the tip of the spear with this stuff?
I do, but I also think that's not the most important thing from a human perspective.
The most important thing is that humanity as a whole is quite close to this threshold event, right?
How close do you think it is?
By my own gut feeling, 5 to 30 years, let's say.
That's pretty close.
But if I'm wrong and it's 100 years, on the historical time scale that sort of doesn't matter. It's like, did the Sumerians create civilization 10,000 or 10,050 years ago? What difference does it make, right? So I think we're quite close to creating superhuman artificial general intelligence, and that's, in a way, almost inevitable, given where we are now.
On the other hand, I think we still have some agency regarding whether this comes out in a way that respects human values and culture, which are important to us now, given who and what we are, or in a way that is essentially indifferent to human values and culture, in the same way that we're mostly indifferent to chimpanzee values and culture at this point, and, I mean, completely indifferent to insect values and culture.
Not completely, if you think about it.
I mean, if I'm building a new house, I will bulldoze a bunch of ants. But yet we get upset if we drive an insect species extinct, right? So we care to some level. But we would like the super AIs to care about us more than we care about insects or great apes.
Absolutely, right. And I think this is something we can impact right now. And to be honest, in a certain part of my mind, I can think, well, in the end, I don't matter that much. My four kids don't matter that much. My granddaughter doesn't matter that much. We are patterns of organization in a very long lineage of patterns of organization.
But they matter very much to you.
Yeah. The dinosaurs came and went, and the Neanderthals came and went.
Humans may come and go.
The AIs that we create may come and go, and that's the nature of the universe.
But on the other hand, of course, in my heart, from my situated perspective as an individual human, like if some AI tried to annihilate my 10-month-old son, I would try to kill that AI, right?
As a human being situated in this specific species, place, and time, I care a lot about the condition of all of us humans. And so, I would like to not only
create a powerful general intelligence, but create one which is going to be beneficial to
humans and other life forms on the planet, even while in some ways going beyond everything that we are, right?
And there can't be any guarantees about something like this.
On the other hand, humanity has really never had any guarantees about anything anyway, right?
I mean, since we created civilization, we've been leaping into the unknown one time after the other, in a somewhat conscious and self-aware way about it, from agriculture to language to math to the industrial revolution. We're leaping into the unknown all the time, which is part of why we're where we are today instead of just another animal species, right? So we can't have a guarantee that the AGIs, the artificial general intelligences that we create, are going to do what we consider the right thing given our current value systems.
On the other hand, I suspect we can bias the odds in favor of human values and culture, and that's something I've put a lot of thought and work into, alongside the basic algorithms of artificial cognition.
Is the issue that the initial creation would be subject to our programming, but that it could perhaps program something more efficient and design something? Like, if you build creativity into artificial general intelligence...
I mean, you have to. Generalization is about creativity, right?
Yeah.
But is the issue that it would choose to not accept our values, which it might find...
Well, clearly it will choose not to accept all of our values, and we want it to choose not to accept all of our values. So it's more a matter of whether the ongoing creation and evolution of new values occurs with some continuity and respect for the previous ones. I mean, I have four human kids now. One is a baby, but the other three are adults, right? And with each of them, I took the approach of trying to teach the kids what my values were, not just by preaching at them, but by entering with them into shared situations. But then, you know, when your kids grow up, they're going to go in their own different directions, right?
And these are humans. But they all have the same sort of biological needs, which is one of the reasons why we have these desires in the first place.
Yeah, but there still is an analogy, I think. The AIs that we create, you can think of as our mind children, and we're starting them off with our culture and values, if we do it properly, or at least with a certain subset of the whole diverse, self-contradictory mess of human culture and values.
But you know they're going to evolve in a different direction,
but you want that evolution to take place in a reflective and caring way rather than a heedless way.
Because if you think about it, the average human a thousand years ago or even 50 years ago
would have thought you and me were like hopelessly immoral miscreants who had abandoned all the valuable things in life, right?
Just because of your hat?
My hat?
The long hair?
I mean, I'm an infidel, right?
I haven't gone to church ever, I guess.
I mean, my mother's lesbian, right?
I mean, there's all these things
that we take for granted now
that not that long ago
were completely against
what most humans considered
maybe the most important values of life.
So, I mean, human values itself
is completely a moving target.
Right, and moving in our generation.
Yeah, yeah, moving in our generation.
Pretty radically.
Very radically.
When I think back to my childhood,
I lived in New Jersey for nine years of my childhood,
and just the level of racism and anti-Semitism and sexism
that were just ambient and taken for granted then.
What years was this? Because I think we're the same age.
Yeah, I was born in '66. I lived in Jersey from '73 to '82.
Okay, so I was there from '67 to '73.
Oh yeah, right. So, yeah, I mean,
my sister went to the
high school prom with a black guy
and so we got our car
turned upside down, the windows of our house
smashed and it was like a humongous
thing and it's almost
unbelievable now, right? Because now
no one would care
whatsoever.
It's just life.
Well, certainly there are some fringe parts of this.
Yeah, yeah. But still, the point is, there is no fixed list of human values. It's an ongoing, evolving process, and what you want is for the evolution of the AI's values to be coupled closely with the evolution of human values, rather than going off in some utterly different direction that we can't even understand.
But this is literally playing God, right? I mean, if you're talking about trying to program in values...
i don't think you can program in values that fully. You can program in a system for learning and growing values. And here again, the analogy with human kids is not hopeless. Like telling your kids, these are the 10 things that are important, doesn't work that well, right? What works better is you enter into shared situations with them,
they see how you deal with the situations, you guide them in dealing with real situations,
and that forms their system of values. And this is what needs to happen with AIs. They need to
grow up entering into real life situations with human beings so that the real life patterns of
human values, which are worth a lot more than
the homilies that we enunciate formally, right? The real-life patterns of human values get inculcated into the intellectual DNA of the AI systems. And this is part of what worries me
about the way the AI field is going at this moment. Because most of the really powerful narrow AIs on the planet now are involved with selling people stuff they don't need, spying on people, or figuring out who should be killed or otherwise abused by some government, right? So if the early-stage AIs that we build turn into general intelligences gradually, and these general intelligences are, you know, spy agents and advertising agents, then what mindset do these early-stage AIs have as they grow up, right? If they don't have any problem morally and ethically with manipulating us...
Which we're very malleable, right? We're so easy to manipulate.
Well, and that we're teaching them.
How to do it.
We're teaching them to manipulate people. And we're rewarding them for doing it successfully, right?
So this is one of these things that from the outside point of view might not seem to be all that intelligent.
It's sort of like gun laws in the U.S. Living in Hong Kong, I mean, most people don't have a bunch of guns sitting around their house. And coincidentally, there are not that many random shootings happening in Hong Kong, right?
That's crazy. What a weird coincidence.
Yeah. You look in the U.S., and somehow you have laws that allow random lunatics to buy all the guns they want, and you have all these people getting shot. So similarly, from the outside, you could look at it like, this species is creating the successor intelligence, and almost all the resources going into creating their successor intelligence are going into making AIs to do surveillance, like military drones, and advertising agents that brainwash people into buying crap they don't need. Now, what's wrong with this picture, right?
Isn't that just because that's where the money is? Like, this is the introduction to it, and then from there we'll find other uses and applications for it. But right now, that's where...
The thing is, there's a lot of other applications.
Financially viable applications?
Well, yeah, the applications that are getting the most attention are the financially lowest-hanging fruit, right?
So for example, among many projects I'm doing
with my SingularityNet team,
we're looking at applying AI to diagnose agricultural disease.
So you can look at images of plant leaves, you can look at data from the soil and atmosphere, and you can project whether disease in a plant is likely to progress badly or not, which tells you: do you need medicine for the plant? Do you need pesticides? Now, this is an interesting area of application. It's probably quite financially lucrative in a way, but it's a more complex industry than selling stuff online. So the fraction of resources going into AI for agriculture is very small compared to e-commerce or something, right?
A very specific aspect of agriculture, too,
predicting diseases
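The kind of pipeline described here, blending leaf-image evidence with soil and atmosphere readings into a single progression-risk score that drives a treatment decision, might be sketched like this. This is a toy illustration only, not SingularityNET's actual system; every feature, weight, and threshold below is hypothetical.

```python
# Toy sketch of a crop-disease progression predictor (illustrative only;
# the features, weights, and thresholds are all made up, not a real model).

def leaf_lesion_fraction(pixels):
    """Fraction of leaf pixels flagged as lesioned.
    `pixels` is a list of (r, g, b) tuples; here a pixel counts as
    lesioned when red dominates green, a crude stand-in for real
    image analysis."""
    if not pixels:
        return 0.0
    lesioned = sum(1 for r, g, b in pixels if r > g)
    return lesioned / len(pixels)

def progression_risk(pixels, soil_moisture, humidity):
    """Blend image evidence with environmental readings (each in 0..1)
    into a 0..1 risk that the disease will progress badly.
    Wet, humid conditions favor many fungal diseases, so the
    environmental terms get nonzero weight (weights are invented)."""
    image_score = leaf_lesion_fraction(pixels)
    return min(1.0, 0.5 * image_score + 0.3 * soil_moisture + 0.2 * humidity)

def needs_treatment(risk, threshold=0.5):
    """Decision rule: recommend medicine/pesticide above the threshold."""
    return risk >= threshold

# A mostly healthy leaf: 90 green pixels, 10 brownish ones (10% lesioned).
healthy_leaf = [(40, 120, 30)] * 90 + [(150, 90, 40)] * 10
risk = progression_risk(healthy_leaf, soil_moisture=0.2, humidity=0.3)
print(round(risk, 2), needs_treatment(risk))  # prints: 0.17 False
```

A real system would replace the pixel heuristic with a trained image model and fit the weights from field data, but the shape of the decision, sensor fusion feeding a thresholded risk score, is the same.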
Yeah, but there are a lot of specific aspects, right? So, I mean, AI for medicine, again, there have been papers on machine learning applied to medicine since the 80s and 90s, but the amount of effort going into that compared to advertising or surveillance is very small. Now, this has to do with the structure of the pharmaceutical business as compared to the structure of the tech business. So, you know, when you look into it, there are good reasons for everything, right? But nevertheless, the way things are coming down right now, certain biases to the development of early-stage AIs are very marked,
and you could see them.
And, I mean, I'm trying to do something about that together with my colleagues in SingularityNet,
but, of course, it's sort of a David versus Goliath thing.
It seems, well, of course you're trying to do something different,
and I think it's awesome what you guys are doing.
But it just makes sense to me that the first applications
are going to be the ones that are more financially viable.
It's like, what pushes...
Well, the first applications were military, right? I mean, until about 10 years ago, 85% of all funding in AI was from the U.S. plus Western European militaries.
Well, what I'm getting at is that it seems that money and commerce are inexorably linked to innovation and technology, because there's this sort of thing that we do as a culture where we're constantly trying to buy and purchase bigger and better things. We always want the newest iPhone, the greatest laptop, the coolest electric cars, whatever it is. And this fuels innovation, this desire for new, greater things. Materialism, in a lot of ways, fuels innovation, because this is how...
It does, but I think there's an argument that as we approach a technological singularity,
we need new systems. Because if you look at how things have happened during the last century, what's happened is that governments have funded most of the core innovation.
I mean, it's well known that most of the technology inside a smartphone was funded by the US government, and a little by European governments: GPS and the batteries and everything.
And then companies scaled it up.
They made it user-friendly.
They decreased cost of manufacturing.
And this process occurs with a certain time cycle to it,
where government spends decades funding core innovation and universities,
and then industry spends decades figuring
out how to scale it up and make it palatable to users.
And, you know, this matured probably since World War II, this sort of modality for technology
development.
But now that things are developing faster and faster, there's sort of not time for that cycle to occur, where the government and universities incubate new ideas for a while and then industry scales them up.
So the genie is out of the bottle, essentially.
Yeah. But we still need a lot of new, amazing, creative innovation to happen, and somehow or other new structures are going to have to evolve to make it happen.
And you can see everyone's struggling to figure out what these are.
So, I mean, this is why you have big companies embracing open source.
Google releases TensorFlow, and there's a lot of other different things.
And I think some projects in the cryptocurrency world have been looking at that too, like, how do we use tokens to incentivize independent scientists and inventors to do new stuff without them having to be in a government research lab or in a big company? So I think we're going to need the evolution of new systems of innovation and of technology transfer as things develop faster and faster.
And this is another thing that's sort of gotten me interested
in the whole decentralized world and the blockchain world
is the promise of new modes of economic and social organization
that can bring more of the world into the research process
and accelerate the technology transfer process.
I definitely want to talk about that.
But one of the things that I wanted to ask you is when you're discussing this,
I think what you're saying is one very important point,
that we need to move past the military gatekeepers of technology, right?
It's not just military now, though. It's big tech, which are advertising agencies in essence. Facebook, social media, things that control who votes for what. Who controls the brainwashing of the public is advertising agencies. And increasingly, the biggest advertising agencies are the big tech companies, who are accumulating everybody's data and using it to program their minds to buy things. So this is what's programming the global brain of the human race. And of course, there are close links between big tech and the military. Like, look, Amazon has, what, a 25,000-person headquarters in Crystal City, Virginia, right next to the Pentagon. And in China, it's even more direct and unapologetic, right? So it's a new military-industrial-advertising complex, which is guiding the evolution of the global brain on the planet.
Well, we found that with this past election, right?
With all the intrusion by foreign entities trying to influence the election,
that they have these giant houses set up to write bad stories about whoever they don't want to be in office.
Yeah. In a way, the Russian stuff is almost a red herring, but it revealed what the processes are which are used to program... Because I think whatever programming of Americans' minds is done by the Russians is minuscule compared to the programming of Americans' minds by the American corporate and government elite.
But it's fascinating that anybody's even jumping in, as well as the American elite.
Sure.
It's always weird.
It's interesting. And if you look at what's happening in China...
Yeah, yeah, they're way better at it than we are.
Well, it's much more horrific, right?
Well, it's more professional, it's more polished, it's more centralized. On the other hand, for almost everyone in China, China is a very good place to live, and the level of improvement in that country in the last 30 years has just been astounding, right? You can't argue with how much better it's gotten there since Deng Xiaoping took over. It's tremendous.
Because they've embraced capitalism to a certain extent. They've created their own unique system.
What labels you give it is almost arbitrary. They've created their own unique system. As a crazy hippie libertarian anarcho-socialist freedom-loving maniac, that system rubs against my grain in many ways. On the other hand, empirically, if you look at it, it's improved the well-being of a tremendous number of people. So hopefully it evolves, and it's one step.
But the way it's evolving now is not in a more positive, freedom-loving...
Well, it's not in a more freedom-loving and anarchic direction, one would say. It's positive in some ways and negative in others, like most complex things.
And Hong Kong, why do you live there?
I fell in love with a Chinese woman.
Oh, there you go.
Yeah, it's a great reason. We had a baby recently. She's not from Hong Kong. She's from mainland China. I met her when she was doing her PhD in computational linguistics in Xiamen. But
that was what sort of first got me to spend a lot of time in China. But then I was doing some
research at Hong Kong Polytechnic University. And then my good friend David Hansen was visiting me
in Hong Kong. I introduced him to some investors there, which ended up with him
bringing his company Hanson Robotics to Hong Kong. So now, after I moved there because of falling in
love with Ray Ting, then I brought my friend David there. Then Hanson Robotics grew up there.
And there's actually a good reason for Hanson Robotics to be there, because the best place in the world to manufacture complex electronics is Shenzhen, right across the border from Hong Kong. So now I've been working there with Hanson Robotics on the Sophia robots and other robots for a while, and I've accumulated a whole AI team there around Hanson Robotics and SingularityNet. So by now, I'm there because my whole AI and robotics teams are there.
Right, makes sense.
Do you follow the State Department's recommendations to not use Huawei devices? They believe that they're...
Well, no.
Have you heard that?
Yeah.
Have you paid attention to that?
I think that the Chinese are spying on us.
You know, I'm sure.
You know, when I lived in Washington, D.C. for nine years,
I did a bunch of consulting for various government agencies there.
And my wife is a Communist Party member, actually, just because she joined in high school when it was sort of suggested for her to join.
So I'm sure I'm being watched by multiple governments.
I don't have any secrets. It doesn't
really matter. I'm not in the business
of trying to overthrow any government.
I'm in the business of trying to
bypass
traditional governments and traditional
monetary systems and all the rest
by creating new methods
of organization of people
and information. I understand that with you
personally, but it is unusual if the government
is actually spying on people through these devices.
I doubt it's unusual.
I doubt it's unusual at all.
I mean, without going into too much detail,
like when I was in D.C.
working with various government agencies,
it became clear there is tremendously more information
obtained by government agencies than most people
realize.
Yeah.
Well, this was true way before Snowden and WikiLeaks and all these revelations. And what is publicly understood now is probably not the full scope of the information that governments have, either. So, I mean,
privacy is pretty
much dead. And David Brin,
do you know David Brin?
You should definitely interview David Brin.
He's an amazing guy. But he's
a well-known science fiction writer.
He's based in Southern California, actually, San Diego.
But he wrote a book years ago called The Transparent
Society, where he said
there are two possibilities, surveillance and sousveillance. It's like the power elite watching
everyone, or everyone watching everyone. I think everyone watching everyone.
So he articulated this as essentially the only two viable possibilities. And he's like,
we should be choosing and then creating which of these
alternatives we want. So now the world is starting to understand what he was talking about back when he wrote that book.
What year did he write the book?
Oh, I can't remember. I mean, it was well more than a decade ago.
It's weird when some people just nail it on the head decades in advance. I mean, most of the things that are happening in the world now
were foreseen by Stanislaw Lem, the Polish author I mentioned,
Valentin Turchin, a friend of mine who was the founder of Russian AI.
He wrote a book called The Phenomenon of Science in the late 60s.
In 1971 or two, when I was a little kid,
I read a book called The Prometheus Project by a Princeton physicist called Gerald Feinberg.
You read a physicist book when you're five years old?
Yeah, I started reading when I was two.
My grandfather was a physicist.
Wow.
I was reading a lot of stuff then.
But Feinberg, in this book, he said, you know, within the next few decades, humanity is going to create nanotechnology.
It's going to create machines smarter than people, and it's going to create the technology
to allow human biological immortality. And the question will be, do we want to use these
technologies, you know, to promote rampant consumerism? Or do we want to use these
technologies to promote, you know, spiritual growth of our consciousness into new dimensions
of experience.
And what Feinberg proposed in this book in the late 60s, which I read in the early 70s,
he proposed the UN should send a task force out to go to everyone in the world, every little African village, and educate the world about nanotech, life extension, and AGI, and get the
whole world to vote on whether we should develop these
technologies toward consumerism or toward consciousness expansion.
So I read this when I was a little kid.
It's like, this is almost obvious.
This makes total sense.
Like why doesn't everyone understand this?
Then I tried to explain this to people and I'm like, oh shit, I guess it's going to be
a while till the world catches on. So I instead decided I should build a spacecraft, go away from the world at rapid speed, and come back after, like,
a million years or something, when the world was far more advanced.
Or covered in dust.
Yeah, right.
Well, then you go away another million years and see what aliens have evolved. So now, pretty much the world agrees that life extension, AGI, and nanotechnology are plausible things that may come about in the near future.
The same question is there that Feinberg saw, like, 50 years ago, right? Do we develop this for rampant consumerism? Or do we develop this for amazing new dimensions of, you know, consciousness expansion and mental growth? But the UN is not, in fact, educating the world about this and polling them to decide democratically what to do.
On the other hand, there's the possibility that by bypassing governments and the UN and doing
something decentralized, you can create a democratic framework, you know, within which,
you know, a broad swath of the world can be involved in a participatory way in guiding
the direction of these advances. Do you think that it's possible that instead of choosing that we're just going to
have multiple directions that it's growing in, that there's going to be consumer-based?
There will be multiple directions. And that's inevitable. It's more a matter of whether
anything besides the military advertising complex gets a shake, right?
Right.
So, I mean, if you look in the software development world,
open source is an amazing thing, right?
Linux is awesome, and it's led to so much AI being open source now.
Now, open source didn't have to actually take over the entire software world
like Richard Stallman wanted in order to
have a huge impact, right? It's enough that it's a major force.
So, I mean, it's a very hippie concept, isn't it? Open source, in a lot of ways.
In a way, but yet IBM has probably thousands of people working on Linux, right? So, like Apple, it began as a hippie concept, but it became very practical, right? So, I mean, something like 75% of all the servers running the Internet are based on Linux.
You know, the vast majority of mobile phone OSes are Linux, right?
So, the vast majority being Android?
Android is Linux, yeah, yeah. So, I mean, this hippie, crazy thing where no one owns the code,
it didn't have to
overtake
the whole software economy
and become everything
to become highly valuable
and inject a different dimension
into things.
And I think
the same is true
with decentralized AI,
which we're looking at
with SingularityNET.
Like, it doesn't have, we don't have to actually put Google and the US and Chinese military
and Tencent out of business, right?
Although if that happens, that's fine.
But it's enough that we become an extremely major player in that ecosystem
so that this participatory and benefit oriented aspect
becomes a really significant component of how humanity is developing general intelligence.
It's generally accepted that human beings will consistently and constantly innovate, right?
It just seems to be a characteristic that we have.
Why do you think that is?
And what do you think that, especially in terms of creating something like artificial
intelligence, like why build our successors?
Like why do that?
Like what is it about us that makes us want to constantly make bigger, better things?
Well, that's an interesting question in the history of biology, which I may not be the most qualified person to answer.
It is an interesting question.
And I think it has something to do with the weird way in which we embody various contradictions that we're always trying to resolve. You mentioned ants, and ants are social animals, right?
Whereas cats are very individual.
We're trapped between the two, right?
We're somewhat individual and somewhat social.
And then since we created civilization, it's even worse
because we have certain aspects which are wanting to conform with the group and the tribe
and others which are wanting to innovate and break out of that
and we're sort of trapped in these biological and cultural contradictions which tend to drive innovation
But I think there's a lot there that no one understands in the roots of the human psyche, evolutionarily. But as an empirical fact, what you said is very true, right? Like, we're driven to seek novelty, we're driven to create new things, and this is certainly one of the factors which is driving the creation of AI. I don't think that alone would make the creation of AI inevitable, but...
Why is that? Why don't you think it would make it inevitable, if we consistently innovate? And it's always been a concept. I mean, you were talking about the concept existing 30-plus years ago.
concept existing 30 plus years ago. Well, I think a key point is that there's tremendous practical economic advantage and status advantage to be gotten from AI right now.
And this is driving the advancement of AI to be incredibly rapid, right? Because there are some
things that are interesting and would use a lot of human innovation, but they get very few resources.
So, for example, my oldest son, Zarathustra, he's doing his PhD now.
What is his name?
Zarathustra.
Whoa.
My kids are Zarathustra Amadeus, Zebulon Ulysses, Scheherazade, and then the new one is QORXI,
Q-O-R-X-I, which is an acronym for Quantum Organized Rational Expanding Intelligence.
I was never happy with Ben.
It's a very boring name.
I'm Joe.
I get it.
Yeah, yeah.
I had to do something more interesting with my kids.
Anyway, Zarathustra is doing his PhD on application of machine learning to automated theorem proving.
Basically, make AIs that can do mathematics better.
And to me, that's like the most important thing we could be applying AI to because, you know, mathematics is the key to all modern science and engineering.
My PhD was in math originally.
But the amount of resources going into AI for automating mathematics
is not large at this present moment,
although that's a beautiful and amazing area
for invention and innovation and creativity.
So I think what's driving our rapid push toward building AI,
I mean, it's not just our creative drive.
It's the fact there's tremendous economic value,
military value, and human value.
I mean, curing diseases, teaching kids.
There's tremendous value
in almost everything that's important to human beings in building AI, right? So you put that
together with our drive to create and innovate, and this becomes an almost unstoppable force within
human society. And what we've seen in the last, you know, three to five years is suddenly,
you know, national leaders and titans of industry and even pop
stars, they've woken up to the concept that, wow, smarter and smarter AI is real, and this
is going to get better and better within years to decades, not centuries to millennia.
So now the cat's out of the bag.
Nobody's going to put it back, and it's about, you know, how can we direct it in the most beneficial possible way. And as you say, it doesn't have to be just one possible way, right? What I look forward to personally
is bifurcating myself into an array of possible Bens. I'd like to let one copy of me
fuse itself with a superhuman AI mind
and, you know, become a god
or something beyond a god.
Beyond a god.
I wouldn't even be myself anymore, right?
I mean, you would lose all concepts
of human self and identity, but...
What would be the point of even holding any of it?
Yeah, well, that's for the future.
That's for the mega-Ben to decide, right?
Mega-Ben.
Yeah, yeah.
On the other hand, I'd like to let one of me remain in human form,
get rid of death and disease and psychological issues
and just live happily forever in the people's zoo,
watched over by the machines of loving grace, right?
So, I mean, you can have
it doesn't have to be either or because
once you can scan
your brain and body and 3D print
new copies of yourself you can have multiple
of you explore different scenarios.
There's a lot of
mass energy in the universe.
In the universe? Okay, that's assuming that we can escape this planet. Because if you're
talking about people with money cloning themselves, could you live in a world with a billion Donald Trumps?
Because, like, literally, that's what we're talking about.
We're talking about wealthy people.
Yeah, right.
But wealthy people being able to reproduce themselves and just having this idea that they would like their ego to exist in multiple different forms, whether it's some super symbiote form that's connected to artificial intelligence
or some biological form that's immortal
or some other form that stands just as a normal human being as we know it in 2018.
If you have multiple versions of yourself over and over and over again like that,
that's what you're talking about.
Once you get to the point where you have a superhuman general intelligence that can do things like fully scan a human brain and body and 3D print more of them,
by that point, you're at a level where scarcity of material resources is not an issue at the human scale of doing things.
Scarcity of human resources in terms of what the Earth can hold?
Scarcity of mass energy, scarcity of molecules to print more copies of yourself. I think that's not going to be the issue at that point.
What people are worried about is environmental concerns of overpopulation, because people are worried about what they see in front of their faces right now.
But most people are not thinking deeply enough about what potential would be there once you had superhuman AIs doing the manufacturing and the thinking.
I mean, the amount of energy in a single grain of sand, if you had an AI able to appropriately leverage that energy
is tremendously more than most people think.
And the amount of computing power in a grain of sand
is like a quadrillion times all the people on Earth put together.
What do you mean by that?
The amount of computing power in a grain of sand?
Well, the amount of computing power that could potentially be achieved by reorganizing the elementary particles in the grain of sand, yeah. There's a number in physics called the Bekenstein bound, which is the maximum amount of information that can be stored in a certain amount of mass-energy. So if the laws of physics, as we know them now, are correct, which they certainly aren't, then that would be the amount of computing you can do in a certain amount of mass-energy.
We're very, very far from that limit right now.
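For reference, the Bekenstein bound he's referring to has a simple closed form: for a physical system of radius R and total energy E,

```latex
% Bekenstein bound on the entropy S of a system of radius R and
% total energy E (k: Boltzmann constant, \hbar: reduced Planck
% constant, c: speed of light):
S \le \frac{2 \pi k R E}{\hbar c}

% Equivalently, the maximum information content in bits:
I \le \frac{2 \pi R E}{\hbar c \, \ln 2}
```

As a rough back-of-envelope estimate, plugging in a milligram-scale mass (with E = mc²) and a millimeter-scale radius for a grain of sand gives something on the order of 10^34 bits, which is the kind of number behind the grain-of-sand comparison.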
My point is, once you have something a thousand times smarter than people, what we imagine to be the limits now doesn't matter too much.
So all of the issues that we're dealing with in terms of environmental concerns,
that could all potentially be...
They're almost certainly going to be irrelevant.
Irrelevant.
There may be other problem issues
that we can't even conceive at this moment, of course.
But the intelligence would be so vastly superior
to what we have currently
that they'll be able to find solutions
to virtually every single problem we have.
Well, that's right.
Fukushima, ocean fish depopulation, all that stuff. It's all just arrangements of molecules, man.
Whoa, you're freaking me out. But people don't want to hear that, though. Environmental people don't want to hear that.
Well, I mean, I'm also, on an everyday-life basis, like, until we have these super AIs,
I don't like the garbage washing up on the beach near my house either, right?
But on an everyday basis, of course, we want to promote health in our bodies and in our environments right now,
as long as there's measurable uncertainty regarding when the benevolent super AIs will come about.
Still, I think the main question isn't whether once you have a beneficially disposed super AI,
it could solve all our current petty little problems. The question is, can we wade through the muck of modern human society and psychology
to create this beneficial super AI in the first place?
I believe I know how to create a beneficial super AI, but it's a lot of work to get there.
And of course, there's many teams around the world working on vaguely similar projects
now, and it's not obvious what kind of super AI we're actually going to get once we get there.
Yeah, it's all just guesses at this point, right?
It's more or less educated guesses, depending on who's doing the guessing.
Would you say that it's almost like we're in a race of the primitive primate biology
versus the potentially beneficial and benevolent artificial intelligence
that the best aspects of this primate can create? That it's almost a race to see who's going to win: is it the warmongers and the greedy whores that are smashing the world under their boots, or is it the scientists that are going to figure out some super-intelligent way to solve all of our problems?
I look at it more as a struggle between different modes of social organization than individual people.
I mean, like when I worked in D.C. with intelligence agencies, most of the people I met there were really nice human beings who believed they were doing the best for the world.
even if some of the things they were doing, like I thought, were very much not for the best of the world, right? So, I mean, military mode of organization or large corporations as a mode
of organization are, in my view, not generally going to lead to beneficial outcomes for the overall species
and for the global brain. And the scientific community, the open source community, I think,
are better modes of organization. And, you know, the better aspects of the blockchain and crypto
community have a better mode of organization. So I think if this sort of open, decentralized mode of organization
can marshal more resources, as opposed to this centralized, authoritarian mode of organization,
then I think things are going to come out for the better. And it's not so much about bad people
versus good people. You can look at like the corporate mode of organization is almost a
virus that's colonized a bunch of humanity and is sucking people into working according to this
mode. And even if they're really good people and the individual task they're working on
isn't bad in itself, they're working within this mode that's leading their work to be used
for ultimately a non-good end.
Yeah, that is a fascinating thing about corporations, isn't it?
The diffusion of responsibility and being a part of a gigantic group
that you as an individual don't feel necessarily connected or responsible to the ultimate group.
Even the CEO isn't fully responsible.
If the CEO does something that isn't in accordance with the higher goals of the organization,
they're just replaced, right?
So, I mean, there's no one person who's in charge.
It's really like, it's like an ant colony.
It's like its own organism.
And I mean, it's us who have let these organisms
become parasites on humanity.
In this way, in some ways, the Asian countries are a little more
intelligent than Western countries in that Asian governments realize the power of corporations to
mold society. And there's a bit more feedback between the government and corporations, which can be for better or for worse.
But in America, there's some ethos of free markets and free enterprise,
which is really not taking into account the oligopolistic nature of modern markets.
But in Asian countries, isn't it that the government is actually suppressing information as well? They're also suppressing Google.
Well, in South Korea, no.
I mean, South Korea, if you look at that.
It's one of the only ones.
Well, Singapore, I mean.
Yeah.
Really, Singapore is ruthless in their drug laws and some of their archaic.
Well, so is US.
They're far worse, though.
Singapore gives you the death penalty for marijuana.
They do.
Yeah, yeah.
Yeah, I mean, it's...
South Korea is an example which has roughly the same level of personal freedoms as the U.S., more in some ways, less in others.
Massive electronic innovation.
Well, interesting thing there politically is, I mean, they were poorer than two-thirds of sub-Saharan African nations in the
late 60s. And it is through the government intentionally stimulating corporate development
toward manufacturing and electronics that they grew up. Now, I mean, I'm not holding
that up as a great paragon for the future or anything, but it does show that there are many modes
of organization of people and resources other than the ones that we take for granted in the U.S.
I don't think Samsung and LG are the ideal for the future either, though. I mean, I'm much more
interested in, you know, you're interested in blockchain, I'm interested in open source, I'm interested in blockchain.
Basically, I'm interested in anything that's open and participatory.
Open and participatory and also disruptive, right?
As well.
Yeah.
Because I think that is the way to be ongoingly disruptive.
And open source is a good example of that.
Like when the open source movement started,
they weren't thinking about machine learning.
But the fact that open source is out there
and is then prevalent in the software world,
that paved the way for AI
to now be centered on open source algorithms.
So right now, even though big companies and governments
dominate the scalable rollout of AI, the invention of new AI algorithms is mostly done in the open, so coders get to share in the source code,
and they get to innovate, and they all get to participate and use each other's work, right?
Right.
But blockchain is confusing for a lot of people.
Yeah.
Could you explain that?
Sure. I mean, blockchain itself is almost a misnomer, so things are confusing at every level, right?
So we should start with the idea of a distributed ledger, which is basically like a distributed Excel spreadsheet or database.
It's just a store of information, which is not stored just in one place, but there's copies of it in a lot of different places.
Every time my copy of it is updated, everyone else's copy of it has got to be updated. And then there's various
bells and whistles like sharding where it can be broken in many pieces and each piece is stored
many places or something. So that's a distributed ledger and that's just distributed computing.
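The distributed-ledger idea he describes — one shared data store, with copies held in many places that must all be updated together — can be sketched in a few lines of Python. This is an invented toy illustration of the concept, not a real distributed system:

```python
# Toy sketch of a distributed ledger: the same data is stored in many
# places, and every update has to be applied to every copy.
# (Minimal, invented illustration -- not a real distributed system.)

class Node:
    def __init__(self):
        self.copy = {}  # this node's copy of the shared ledger


class Ledger:
    def __init__(self, n_nodes):
        self.nodes = [Node() for _ in range(n_nodes)]

    def update(self, key, value):
        # every node's copy must be updated together
        for node in self.nodes:
            node.copy[key] = value


ledger = Ledger(n_nodes=5)
ledger.update("balance:alice", 100)
```

Every node ends up with an identical copy of the data, which is the "distributed Excel spreadsheet" picture before any decentralized control is layered on top.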
Now, what makes it more interesting is when you layer
decentralized control onto that. So, imagine you have this distributed Excel spreadsheet
or distributed database. There's copies of it stored in a thousand places. But to update it,
you need like 500 of those thousand people who own the copies to vote, yeah, let's do that update,
right? So, then you have a distributed
store of data and you have like a democratic voting mechanism to determine when all those
copies can get updated together, right? So, then what you have is a data storage and update
mechanism that's controlled in a democratic way by the group of participants rather than by any
one central controller. And that can have all
sorts of advantages. I mean, for one thing, it means that, you know, there's no one controller
who can go rogue and screw with all the data without telling anyone. It also means there's
no one who some lunatic can go hold a gun to their head and shoot them for what data updates were
made, because, you know, it's controlled democratically by everybody, right? It has ramifications in terms of, you know, legal defensibility. And, I mean, you could have some
people in Iran, some in China, some in the U.S., and updates to this whole distributed data store
are made by democratic decision of all the participants.
Then where cryptography comes in is, when I vote, I don't have to say, yeah, this is Ben Goertzel voting for this update to be accepted or not. It's just ID
number 1357264. And then encryption is used to make sure that, you know, it's
the same guy voting every time that it claims to be, without needing, like, your
passport number or something, right?
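Putting those two pieces together — updates applied only when a majority of participants approve, and voters known only by an opaque ID rather than a real-world identity — a minimal Python sketch might look like this. Hash-based IDs stand in for the digital signatures a real system would use, and all names here are invented:

```python
# Toy sketch of decentralized control: an update to the shared data is
# applied only if a strict majority of registered voters approve it,
# and each voter is known only by an opaque ID (here, a hash of a
# private secret; real systems use digital signatures instead).
import hashlib

def voter_id(secret: bytes) -> str:
    # the ledger only ever sees this opaque hex ID, never an identity
    return hashlib.sha256(secret).hexdigest()


class VotingLedger:
    def __init__(self, secrets):
        self.voters = {voter_id(s) for s in secrets}  # registered IDs only
        self.data = {}

    def propose(self, key, value, approvals):
        """approvals: secrets of the voters approving this update.
        The update is applied iff more than half of the registered
        IDs approve."""
        yes = {voter_id(s) for s in approvals} & self.voters
        if len(yes) > len(self.voters) // 2:
            self.data[key] = value
            return True
        return False


secrets = [b"key-%d" % i for i in range(1000)]
ledger = VotingLedger(secrets)
# 600 of 1000 copy-holders approve, so the update goes through
ok = ledger.propose("rule:42", "accepted", approvals=secrets[:600])
```

A proposal backed by only 400 of the 1000 voters would be rejected, and no single central controller can push an update through on its own.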
What's ironic about it is it's probably one of the best ways ever conceived to actually vote in this country.
Yeah, sure.
It is kind of ironic.
There's a lot of applications for it.
That's right.
So, I mean, that's the core mechanism.
Where the term blockchain comes from is the data structure: to store the data in this distributed database, it's stored in a chain of blocks, where each block contains data.
The thing is, not every so-called blockchain system even uses a chain of blocks now.
Like some use a tree or a graph of blocks or something.
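The chain-of-blocks structure he's describing can be sketched directly: each block carries the hash of its predecessor, so altering any block invalidates everything after it. This is a minimal invented illustration, not a real blockchain implementation:

```python
# Minimal sketch of a "chain of blocks": each block stores data plus
# the hash of the previous block, so tampering with any block breaks
# every hash link after it.
import hashlib
import json

def block_hash(block: dict) -> str:
    # deterministic hash of the block's contents
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def build_chain(items):
    chain, prev = [], "0" * 64  # genesis block points at an all-zero hash
    for item in items:
        block = {"data": item, "prev_hash": prev}
        chain.append(block)
        prev = block_hash(block)
    return chain

def is_valid(chain) -> bool:
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = build_chain(["tx1", "tx2", "tx3"])
```

Changing the data in any early block changes its hash, so the `prev_hash` stored in the next block no longer matches and validation fails; the tree- and graph-shaped variants he mentions generalize the same hash-linking idea.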
Is it a bad term? Is there a better term?
It's an all-right term.
Is it like AI, just one of those terms we're stuck with?
Yeah, yeah, it's one of those terms we're stuck with, even though it's not quite technically accurate anymore. I mean, I don't know another buzzword for it, right? What it is, is a distributed ledger with encryption and decentralized control.
And blockchain is the buzzword that's come about for that.
Now, what got me interested in blockchain, really, is this decentralized control aspect.
So my wife, who I've been with for 10 years now, she dug up recently something I'd forgotten,
which is a web page I'd made in 1995,
like a long time ago,
where I'd said,
hey, I'm going to run for president
on the decentralization platform, right?
Which I'd completely forgotten that crazy idea.
I was very young then.
I had no idea what an annoying job
being president would be, right?
So the idea of decentralized control seemed very important to me back then, which is well before Bitcoin was invented, because I could see, you know, a global brain is evolving on the planet, involving humans, computers, communication devices. And we don't want this global brain to be controlled by a small elite. We want the global brain to be controlled in a decentralized way. So that's really the beauty of this blockchain infrastructure.
And what got me interested in the practical technologies of blockchain was really when
Ethereum came out and you add the notion of a smart contract.
What's Ethereum?
Ethereum, yeah.
What is that?
Well, so the first blockchain technology was Bitcoin, right, which is a well-known cryptocurrency now. Ethereum is another cryptocurrency, which is the number two cryptocurrency right now.
That's how out of the loop I am.
Did you know about it? You did.
However, Ethereum came along with a really nice software framework. So it's not just, like, a digital money like Bitcoin is, but Ethereum has a programming language called Solidity that came with it. And this programming language lets you write what are called smart contracts. And again, that's sort of a misnomer, because a smart contract doesn't have to be
either smart or a contract, right?
But it was a cool name, right?
Right. What does it mean then?
If it's not a smart contract?
It's like a programmable transaction.
Okay.
So you can program a legal contract
or you can program a financial transaction.
So a smart contract, it's a persistent piece of software that embodies, like, a secure transaction between two companies online,
a purchasing relationship between you and a website online.
This could all be scripted in a smart contract in a secure way,
and then it would be automated in a simple and standard way.
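As an illustration of "programmable transaction," here is a rough Python stand-in for the kind of escrow logic a Solidity smart contract might encode. Real contracts run on the Ethereum Virtual Machine; this example, and every name in it, is invented for illustration:

```python
# Toy stand-in for a smart contract: a persistent piece of code that
# holds funds and releases them automatically when its scripted
# conditions are met. (Real smart contracts run on the Ethereum
# Virtual Machine and are typically written in Solidity.)

class EscrowContract:
    """Buyer deposits payment; it is released to the seller only when
    the buyer confirms delivery."""

    def __init__(self, buyer, seller, price):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.balance = 0
        self.state = "created"

    def deposit(self, sender, amount):
        # only the named buyer can fund the contract, at the agreed price
        if sender == self.buyer and amount == self.price:
            self.balance, self.state = amount, "funded"

    def confirm_delivery(self, sender):
        # funds are released only once funded, and only by the buyer
        if sender == self.buyer and self.state == "funded":
            payout, self.balance, self.state = self.balance, 0, "complete"
            return {"to": self.seller, "amount": payout}
        return None


contract = EscrowContract(buyer="alice", seller="bob", price=100)
contract.deposit("alice", 100)
payout = contract.confirm_delivery("alice")
```

The point is that the purchasing relationship is scripted once and then executes automatically and verifiably, with no intermediary deciding when to release the funds.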
So the vision that Vitalik Buterin, who was the main creator behind Ethereum,
had is to basically make the internet into a giant computing mechanism
rather than mostly like an information storage and retrieval mechanism make the internet into
a giant computer by making a really simple programming language for scripting transactions
among different computers and different parties on the internet where you have encryption and you
have democratic decision-making
and distributed storage of information
like programmed into this world computer, right?
And that was a really cool idea.
And the Ethereum blockchain
and Solidity programming language
made it really easy to do that.
So it made it really easy to program
like distributed secure transaction
and computing systems on the internet. So I saw this, I thought, wow, like, now we finally have the tool set that's needed to implement some of this.
This is very popular?
Yeah, I mean, basically almost every ICO that was done in the last couple years was done
on the Ethereum blockchain.
What's an ICO?
Initial coin offering.
Oh, okay.
So for Bitcoins.
Not Bitcoins.
I'm sorry.
Cryptocurrencies.
Cryptocurrencies, yeah. So they've used this technology for offerings.
Right.
So what happened in the last couple years is a bunch of people realized you could use this Ethereum programming framework to create a new cryptocurrency, like a new artificial money.
And then you could try to get people to use your new artificial money for certain types of...
How many artificial coins?
Thousands.
Maybe more.
And the most popular is Bitcoin, right?
Bitcoin is by far the most popular.
The most used.
Ethereum is number two, and there's a bunch of others.
What comparison?
How much bigger is Bitcoin than Ethereum?
I don't know.
Five times as big?
A factor of three to five.
Maybe just a factor of two.
Actually, last year, Ethereum almost took over Bitcoin.
When Bitcoin started crashing?
Yeah, yeah.
Now Ethereum is back down.
It might be half or a third.
Does that worry you, the fluctuating value of these things?
Well, to my mind, creating artificial monies is one tiny bit of the potential of what you could do with the whole blockchain tool set.
It happened to become popular initially because it's where the money is, right?
It is money.
It is money, and that's interesting to people.
But on the other hand, what it's really about is making a world computer.
It's about scripting with a simple programming language all sorts of transactions between people, companies, whatever, all sorts of exchanges of information.
So, I mean, it's about decentralized voting mechanisms.
It's about AIs being able to send data and processing for each other and pay each other for their transactions.
So, I mean, it's about automating supply chains and shipping and e-commerce.
So, in essence, you know, just like computers and the Internet started with a certain small set of applications and then pervaded almost everything, right?
It's the same way with blockchain technology.
It started with digital money, but the core technology is going to pervade almost everything
because there's almost no domain of human pursuit that couldn't use security through cryptography,
some sort of participatory decision-making,
and then distributed storage of information, right? So, and these things are also valuable
for AI, which is how I got into it in the first place. I mean, if you're making a very,
very powerful AI that is going to, you know, gradually, through the practical value it
delivers, grow up to be more and more and more intelligent.
I mean, this AI should be able to engage a large party of people
and AIs in participatory decision-making.
The AI should be able to store information, you know,
in a widely distributed way.
And the AI certainly should be able to use, you know,
security and encryption to validate who are the parties involved in its operation. And, I mean, these are the key things behind blockchain technology. So, I mean, the fact that blockchain began with artificial currencies, to me, is a detail of history, just like the fact that the internet began as, like, a nuclear early warning system, right? I mean, it's good for that, but as it happens, it's also even better for a lot of other things.
So, yeah, the solution for the financial situation that we
find ourselves in. It's one of the more interesting things about cryptocurrencies, that someone said, okay, look, obviously we all kind of agree that our financial institutions are very flawed. The system that we operate under is very fucked up.
So how do we fix that?
Well, send in the super nerds.
And so they figure out a new –
We've got to send in the super AIs.
Super AI.
Well, first the super nerds and then the super –
I mean, obviously, who is the guy that they think –
this fake person that's maybe not real that came up with bitcoin oh satoshi
nakamoto do you have any uh suspicions as to who this is uh i can neither confirm nor deny
Okay, okay. Yeah, you wouldn't be on the inside. We'll talk later. Um, but this is very, uh, very interesting, but it's also very promising. I have high optimism for cryptocurrencies because I think that kids today are looking at it with much more open eyes than grandfathers.
Grandfathers are looking at Bitcoin.
I'm a grandfather.
I'm sure you are, but you're an exceptional one.
But there's a lot of people that are older that just, they're not open to accepting these ideas.
But I think kids today, in particular, the ones that have grown up with the internet as a constant force in their life, I think they're more likely to embrace something along those lines.
Well, yeah.
So there's no doubt that cryptographic formulations of money are going to become the standard. The question...
You think that's going to be the standard?
That will happen, yeah. However, it could happen potentially in a very uninteresting way.
How's that?
Well, you could just have the e-dollar. I mean, a government could just say, we will create this cryptographic token, which counts as a dollar.
I mean, most dollars are just electronic anyway, right?
So what habitually happens is technologies that are invented to subvert the establishment are converted to a form where they help bolster the establishment instead.
And in financial services this happens very rapidly. Like PayPal: Peter Thiel and those guys started PayPal thinking they were going to obsolete fiat currency and make an alternative to the currencies run by nation states. Instead,
they were driven to make it a credit card processing front end, right? So that's one thing that could happen with cryptocurrency
is it just becomes a mechanism for governments and big companies
and banks to do their things more efficiently.
So what's interesting isn't so much the digital money aspect,
although it is in some ways a great way to do digital money.
What's interesting is, with all the flexibility it gives you to script complex computing networks, there is the possibility to script new forms of participatory, democratic, self-organizing networks.
So blockchain, like the internet or computing,
is a very flexible medium.
You could use it to make tools of oppression
or you could use it to make tools of amazing growth
and liberation.
And obviously we know which one I'm more interested in.
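The blockchain properties Goertzel lists earlier, cryptographic validation of who the parties are, a shared tamper-evident record, and distributed storage of copies, can be sketched in miniature. Everything below (the party names, the keys) is invented for illustration; a real blockchain uses public-key signatures and consensus among untrusted nodes, which this toy omits:

```python
import copy
import hashlib
import hmac

# Stand-ins for real public/private keypairs; purely illustrative.
SECRET_KEYS = {"alice": b"alice-key", "bob": b"bob-key"}

def sign(party: str, message: str) -> str:
    """Authenticate a message as coming from `party` (HMAC stands in for a digital signature)."""
    return hmac.new(SECRET_KEYS[party], message.encode(), hashlib.sha256).hexdigest()

def block_hash(prev: str, party: str, message: str) -> str:
    return hashlib.sha256((prev + party + message).encode()).hexdigest()

def append_block(chain: list, party: str, message: str) -> None:
    """Chain each record to the previous one by hash, so tampering is evident."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"party": party, "message": message,
                  "sig": sign(party, message), "prev": prev,
                  "hash": block_hash(prev, party, message)})

def verify(chain: list) -> bool:
    """Check the signatures (who said it) and the hash links (nothing was altered)."""
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev or b["sig"] != sign(b["party"], b["message"]):
            return False
        prev = block_hash(b["prev"], b["party"], b["message"])
        if b["hash"] != prev:
            return False
    return True

chain = []
append_block(chain, "alice", "vote: yes")
append_block(chain, "bob", "vote: no")

# "Distributed storage": independent full copies held by many nodes.
replicas = [copy.deepcopy(chain) for _ in range(3)]
assert all(verify(r) for r in replicas)

chain[0]["message"] = "vote: no"   # tamper with one copy...
assert not verify(chain)           # ...and verification catches it
assert all(verify(r) for r in replicas)  # the other copies are unaffected
```

The tampered copy fails verification while the replicas still pass, which is the sense in which "who can shut it down, who can falsify it" applies: any one node can be corrupted, but the honest copies remain checkable.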
Yeah.
Now, what is blockchain being currently used for?
Like what different applications?
Because it's not just cryptocurrency.
They're using it for a bunch of different things now, right?
They are.
I would say it's very early stage.
So probably the...
How early?
Well, the heaviest uses of blockchain now
are probably inside large financial services companies, actually.
So if you look at
Ethereum, the project I mentioned, so Ethereum is run by an open source, an open foundation,
Ethereum Foundation. Then there's a consulting company called ConsenSys, which is a totally
separate organization that was founded by Joe Lubin, who was one of the founders of Ethereum
in the early days. And ConsenSys has, you know, it's funded a bunch of the work within the Ethereum
foundation and community. But ConsenSys has done a lot of contracts, just working with governments
and big companies to customize code based on Ethereum to help with their internal operations.
So actually, a lot of the practical value has been with stuff that isn't in the public eye that much, but is back-end, inside of companies. And in terms of practical customer-facing uses of cryptocurrency,
I mean, the Tron blockchain, which is different than Ethereum, that has a bunch of games on it, for example, and some online gambling, for that matter.
So that's gotten a lot of users.
Online games?
Like, how do they use that?
Well, it's a payment mechanism.
Oh, I see.
But this is one of the things there's a lot of hand-wringing about in the cryptocurrency world now.
Gambling?
No, just the fact that there aren't that many big consumer-facing uses of cryptocurrency.
I mean, everyone would like there to be.
That was the idea.
So one of the things we're aiming at with our SingularityNet project is, you know, putting AI on the blockchain in a highly effective way. And then we also have these two tiers.
So we have the SingularityNet Foundation, which is creating this open source decentralized
platform in which AIs can talk to other AIs and, you know, like ants in an ant colony,
they group together to form smarter and smarter AI. Then we're spinning off a company called the
Singularity Studio, which will use this decentralized platform to help big companies
integrate AI into their operations. So with the Singularity Studio company, we want to get all these big companies using the AI tools in the SingularityNet platform.
And then we want the biggest usage of blockchain outside of financial exchange to be our use of blockchain within SingularityNet for AI: basically, for customers to get the AI services that they need for their businesses, and then for AIs
to transact with other ais paying other ais for doing services for them. Because this, I think, is a path forward.
It's like a society and economy of mind.
It's not like one monolithic AI.
It's a whole bunch of AIs created by different people all over the world,
which not only are in the marketplace providing services to customers,
but each AI is asking questions of each other
and then rating each other on how good they are, sending data to each other, and paying each other for their services. So this network of AIs
can have intelligence emerge at the whole-network level, as well as there being intelligence in
each component. And is it also fascinating to you that this is not dependent upon nations,
that this is a worldwide endeavor? I think that's going to be important once once it starts to get a very high level of intelligence
like in in the early stages okay what would it hurt like if if i had you know my own database
a central record of everything like i'm an honest person i'm not going to rip anyone off
but once we start to
make a transition toward artificial general intelligence in this global decentralized network, which has component AIs from every country on the planet, at that point, once it's clear you're getting toward AGI, a lot of people will want to step in and control this thing, you know, by law, by military might, by any means necessary. By that point, the fact that you have this open, decentralized network underpinning everything gives an amazing resilience to what you're doing. Like, who can shut down Linux? Who can shut down Bitcoin? Nobody can, right? Yeah, you want AI to be like that. You want it to be a global upsurge of creativity and mutual benefit from people all over the planet, which no powerful party can shut down, even if they're afraid that it threatens their hegemony.
It's very interesting because in a lot of ways, it's a very elegant solution to what's an obvious problem.
Yeah.
Just as the internet is an elegant solution to what's in hindsight an obvious problem, right?
It's a –
Distribution of information.
Yeah, yeah, yeah.
To communicate.
Yeah.
But this is extra special to me because if I was a person running a country, I would be terrified of this shit.
I'd be like, well, this is what's going to take power away.
That depends which country.
Right.
If you're a person running the U.S. or China, you would have a different relationship than if you're a person like, I know the prime minister of Ethiopia, Abiy Ahmed, who has a degree in software engineering, and he loves this. But of course, Ethiopia isn't in any danger of individually taking global AI hegemony, right? So for the majority of countries in the world, they like this for the same reason they like Linux, right? I mean, this is something in which they have an equal role to anybody else, including the superpowers. And you see this among companies also. So a lot of big companies that we're talking to, they like the idea of this decentralized AI fabric.
Because, I mean, if you're not Amazon, Google, Microsoft, Tencent, Facebook, so on, if you're another large corporation, you don't necessarily want all your AI and all your data to be going into one of this handful of large AI companies.
You would rather have it be in a secure, decentralized platform.
And I mean, this is the same reason that, you know, Cisco and IBM, they run on Linux.
They don't run on Microsoft, right? If you're not one of the handful of large governments or large corporations that happen to be in a leading role in the AI ecosystem, then you would rather have this equalizing and decentralized thing because everyone gets to play.
Yeah, what would be the benefit of running it on Linux versus Microsoft?
Well, you're not at the behest of some other big company.
Right.
I mean, imagine if you were Cisco or GM or something, and all of your internal machines, all your servers, are running on Microsoft. What if Microsoft increases their price or removes some feature? Then you're totally at their behest, right? And with AI, the same thing is true. I
mean, if you put all your data in some big company's server farm, and you're analyzing
all your data on their algorithms, and that's critical to your business model, what if they
change their AI algorithm in some way? Then your business is basically controlled by this other company.
So, I mean, having a decentralized platform in which you're an equal participant along
with everybody else is actually a much better position to be in.
And I think this is why we can succeed with this plan of having this decentralized singularity net platform,
then this singularity studio enterprise software company,
which mediates between the decentralized platform and big companies.
I mean, it's because most companies and governments in the world,
they don't want hegemony of a few large governments and corporations either.
And you can see this in a lot of ways.
You can see this in embrace of Linux and Ethereum by many large corporations.
You can also see like in a different way, you know, the Indian government, you know, they rejected an offer by Facebook to give free internet to all Indians because Facebook wanted to give like mobile phones that would give free internet, but only to access Facebook, right?
India is like, well, no thanks, right? And now they're creating laws that any internet company that collects data about Indian people has to store
that data in India, which is so the Indian government can subpoena that data when they
want to.
So, you're already seeing a bunch of resistance against hegemony by a few large governments
or large corporations by other companies and other governments. And I think this is very positive and is one of the factors that can foster the growth of a decentralized AI ecosystem.
Is it fair to say that the future of AI is severely dependent upon who launches it first?
Like whoever launches it first, whether it's SingularityNet or some other artificial general intelligence.
The bottom line is, as a scientist, I have to say we don't know.
It could be there's an end state that AGI will just self-organize into
almost independent of the initial condition, but we don't know.
And given that we don't know,
I'm operating under the, you know,
the heuristic assumption
that if the first AI is beneficially oriented,
if it's controlled in a participatory democratic way,
and if it's oriented at least substantially
toward like doing good
things for humans, I'm operating under the heuristic assumption that this is going to bias things in a positive direction, right? I mean, of course, in the absence of knowledge to the contrary.
But if the Chinese government launches one that they're controlling?
Yeah, we don't know.
If they get to pop it off first.
I like the idea that you're saying, though, that it might organize itself.
I mean, understand the Chinese government, also, they want the best for the Chinese people.
They don't want to make the Terminator either, right?
So, I mean, I think even Donald Trump, who's not my favorite person, doesn't actually want to kill off everyone on the planet, right?
He might if they talk shit about him.
Yeah, yeah, yeah. You know, you never know. If it was just him, yeah. I told you. So, I mean, I
think you know i wouldn't say we're necessarily doomed if big governments and big companies are the ones that develop AI or AGI first.
Well, big government and big companies essentially develop the internet, right?
And it got away from them.
That's right.
That's right.
So there's a lot of uncertainty all around.
But I think, you know, it behooves us to do what we can to bias the odds in our favor based on our current understanding. And, I mean, toward that end, we're developing, you know, open source decentralized AI in the SingularityNet project.
So if you would, explain SingularityNet and what you guys are actively involved in.
Sure, sure. So SingularityNet in itself is a platform that allows many different AIs to operate on it.
And these AIs can offer services to anyone who requests services of the network.
And they can also request and offer services among each other.
So it's both just an online marketplace for AIs,
much like the Apple App Store or Google Play Store,
but for AIs rather than phone apps.
But the difference is the different AIs in here can outsource work to each other and talk to each other,
and that gives a new dimension to it, right?
Where you can have what you think of as a society
or economy of minds
And it gives the possibility that this whole society of interacting AIs, which are paying each other for transactions with our digital money, our cryptographic token, which is called the AGI token. So these AIs, which are paying each other and rating each other on how good they are,
sending data and questions and answers to each other, can self-organize into some overall AI
mind. Now, we're building this platform, and then we're plugging into it to seed it a bunch of AIs
of our own creation. So I've been working for 10 years on this open source AI project called OpenCog, which is oriented toward building general intelligence.
And we're putting a bunch of AI agents based on the OpenCog platform into this Singularity network.
And, you know, if we're successful in a couple of years, the AIs that we put on there will be a tiny minority of what's in there, just like the apps made by Google are a small minority of the apps in the Google Play Store, right?
But my hope is that these OpenCog AI agents within the larger pool of AIs on the SingularityNet can sort of serve as the general intelligence core because the OpenCog AI agents are really good at
abstraction and generalization and creativity.
We can put a bunch of other AIs in there
that are good at highly specific forms of learning
like predicting financial time series,
curing diseases, answering people's questions,
organizing your inbox.
So you can have the interaction of these specialized AIs
and then more general
purpose, you know, abstraction and creativity based AIs like OpenCog agents all interacting
together in this decentralized platform. And then, you know, the beauty of it is like some
15 year old genius in Azerbaijan or the Congo can put some brilliant AI into this network. If it's really smart,
it will get rated highly by the other AIs for its work helping them do their thing.
Then it can get replicated over and over again across many servers. Suddenly, A, this 16-year-old
kid from Azerbaijan or the Congo could become wealthy from the copies of their AI providing services to other people's AIs. And B, you know, the creativity in their mind is out there and is infusing this global AI network with some new intellectual DNA that, you know, never would have been found by a Tencent or a Google, because they're not going to hire some Congolese teenager who may have a brilliant AI idea.
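The marketplace dynamic described here, AIs paying one another in a token for services and rating one another, with good ratings driving replication, can be sketched as a toy simulation. All agent names, skills, and prices below are invented for illustration; this is not SingularityNET's actual protocol or API:

```python
import random

class Agent:
    """An AI service provider in a toy marketplace (names and numbers are invented)."""
    def __init__(self, name, skill):
        self.name, self.skill = name, skill
        self.tokens = 100.0   # starting balance of the (abstract) token
        self.ratings = []

    def serve(self, request):
        # Result quality depends on how skilled this agent is at the task.
        return self.skill.get(request, 0.1)

def transact(buyer, seller, request, price=1.0):
    """Buyer pays seller for a service, then rates the quality of the result."""
    buyer.tokens -= price
    seller.tokens += price
    seller.ratings.append(seller.serve(request))

agents = [
    Agent("vision-ai", {"classify-image": 0.9}),
    Agent("lang-ai", {"answer-question": 0.8}),
    Agent("generalist-ai", {"classify-image": 0.95, "answer-question": 0.9}),
]

random.seed(0)
for _ in range(300):
    buyer, seller = random.sample(agents, 2)
    transact(buyer, seller, random.choice(["classify-image", "answer-question"]))

def avg_rating(agent):
    return sum(agent.ratings) / max(len(agent.ratings), 1)

# The best-rated agent would earn replication across more servers, and more
# income: the selection pressure described above, applied to the newcomer's AI.
best = max(agents, key=avg_rating)
```

The agent that handles many kinds of request well ends up with the highest average rating, which is the mechanism by which a previously unknown contributor's AI can rise in such a network.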
That's amazing.
So this is all ongoing right now.
And the term singularity that you guys are using,
the way I've understood that term, correct me if I'm wrong,
is that it's going to be the one innovation or one invention
that essentially changes everything forever.
The singularity isn't necessarily one invention.
The singularity, the term, which was coined by my friend Vernor Vinge, who's another guy you should
interview.
He's in San Diego, too.
A lot of brilliant guys down there.
Vernor Vinge is a scientist.
A lot of military down there.
Yeah, Vernor Vinge. He was a math professor at a San Diego university, actually, but a well-known science fiction writer. His book, uh, A Fire Upon the Deep, is one of the great science fiction books.
V-I-N-G-E, Vernor Vinge?
Yeah, brilliant guy. A Fire Upon the Deep.
W-E-R-N-E-R?
Vernor. V-E-R-N-O-R, yeah.
Oh, V-E-R-N-O-R.
Yeah, he's brilliant.
He coined the term technological singularity back in the 1980s.
Really?
But he opted not to become a pundit about it because he'd rather write more science fiction books.
That's interesting, that a science fiction author...
Ray Kurzweil, who's also a good friend of mine.
I mean, Ray took that term and fleshed it out and did a bunch of data analytics trying to pinpoint when it would happen. But the basic concept of the technological singularity is a point in time when technological advance occurs so rapidly that to the human mind it appears almost instantaneous.
Like imagine 10 new Nobel Prize winning discoveries
every second or something, right?
So this is similar to the concept of the intelligence explosion
that was posited by the mathematician I. J. Good in 1965.
What I. J. Good said then, the year before I was born,
was that the first truly intelligent machine will be the last invention that humanity needs to make.
Right, right.
So intelligence explosion is another term for basically the same thing as a technological singularity. But it's not just about AI. AI is just probably the most powerful technology driving it. I mean, there's AI,
there's nanotechnology, there's femtotechnology, which will be building things from elementary particles. I mean, there's life extension, genetic engineering, mind uploading, which is like
reading the mind out of your brain and putting it into a machine. You know, there's advanced
energy technologies. So, all these different things are expected to advance
at around the same time,
and they have many ways to boost each other, right?
Because the better AI you have,
your AI can then invent new ways of doing nanotech and biology.
But if you invent amazing new nanotech and quantum computing,
that can make your AI smarter.
On the other hand, if you could crack how the human brain works
and use genetic engineering to upgrade human intelligence,
those smarter humans could then make better AIs and nanotechnology, right?
So there's so many virtuous cycles among these different technologies.
The more you advance in any of them, the more you're going to advance in all of them.
And it's the coming together of all of these that's going to create,
you know, radical abundance and the technological singularity. So that term,
which Vernor Vinge introduced, Ray Kurzweil borrowed for his books and for the Singularity University educational program. And then we borrowed that for our SingularityNet decentralized blockchain-based AI platform and our Singularity Studio enterprise software company.
Now, I want to talk to you about two parts of what you just said. One being the possibility that one day we can upload our mind, or make copies of our mind.
You up for it? My mind's a mess. You want to upload into here?
Yeah.
I could use a little Joe Rogan on my phone.
You can just call me, dude.
I'll give you the organic version.
But do you think that that's a real possibility
inside of our lifetime,
that we can map out the human mind
to the point where we can essentially recreate it?
But if you do recreate it, without all the biological urges and the human reward systems that are built in, what the fuck are we?
Well, that's a different question.
I mean, I think...
What is your mind?
Well, I think that there's two things
that are needed for,
let's say human body uploading
to simplify things.
Body uploading.
There are two things that are needed.
One thing is a better computing infrastructure than we have now to host the uploaded body.
And the other thing is a better scanning technology because right now we don't have a way to scan
the molecular structure of your body without freezing you, slicing you, and scanning you,
which you probably don't want done at this point in time.
Not yet.
So assuming both those are solved, you could then recreate in some computer simulation
an accurate simulacrum of what you are, right?
But that's what I'm getting at. An accurate simulacrum, that's getting weird, because of the biological variability of human beings. We vary day to day.
And your simulacrum would also vary day to day.
So would it deviate? Would you program it in to have flaws? Because we vary dependent upon how much sleep we get, whether or not we're feeling sick, whether we're lonely.
So if your upload were an accurate copy of you, then the simulation hosting your upload would need to have an accurate simulation of the laws of biophysics and chemistry that allow your body to, you know, evolve from one second to the next. My concern is that your upload would change second by second, just like you do, and it would diverge from you, right?
So, I mean, after an hour, it will be a little different.
After a year, it might have gone in a quite different direction for you.
It'll probably be a monk, some super god monk living on the top of a mountain somewhere in a year.
The problem, my point being is it's going to—
It depends on what virtual world it's living in.
True.
I mean, if it's living in a virtual world –
Oh, a virtual world.
It will be a virtual world.
You're not talking about the potential of downloading this again in sort of a – into a biological –
There's a lot of possibilities, right?
Yeah.
I mean, you could upload into a Joe Rogan living in a virtual world and then just create your own fantasy universe, or you could 3D print an alternate synthetic body, right? I mean, once you have the ability to manipulate molecules at will, the scope of possibilities becomes much greater than we're used to thinking, right?
My question is, do we replicate flaws? Do we replicate depression?
Of course.
But why would we do that?
Wouldn't we want to cure depression?
So if we do cure depression, then we start...
Here's the interesting thing.
Okay.
Once we have you in a digital form,
then it's very programmable.
Right.
Then we juice up the dopamine, the serotonin levels.
Well, then you can change what you want,
and then you have a whole different set of issues, right?
Yeah. Because once you've changed... I mean, suppose you make a fork of yourself, and then you manipulate it in a certain way, and then after a few hours you're like, well, I don't much like this new Joe here, maybe we should roll back that change. But the new Joe is like, well, I like myself very well, thank you.
So then there's a lot of issues that will come up once we can modify and reprogram ourselves.
But isn't the point that the ramifications of these decisions are almost insurmountable once the ball gets rolling?
Well, the ramifications of these decisions are going to be very interesting to explore.
Yes, you're super positive, Ben.
Super positive, you're optimistic about the future.
Many bad things will happen, many good things will happen.
That's a very easy prediction to make.
Okay, I see what you're saying.
Yeah, I just wonder.
I mean, think about like world travel, right?
Like hundreds of years ago, most people didn't travel more than a very short distance from their home.
And you could say, well, okay, what if people could travel all over the world, right?
Like what horrible things could happen?
They would lose their culture.
Like they might go marry someone from a random tribe.
You could get killed in the Arctic region or something.
A lot of bad things can happen when you travel far from your home.
A lot of good things can happen.
And ultimately, the ramifications were not foreseen by people 500 years ago.
I mean, we're going into a lot of new domains.
We can't see the details of the pluses and minuses that are going to unfold.
It would behoove us to simply become comfortable with radical uncertainty, because otherwise we're going
to confront it anyway, and we're just going to be nervous.
So it's just inevitable.
It's almost inevitable.
I mean, of course.
Barring any natural disaster.
Yeah.
I mean, of course, Trump could start a nuclear war and then we're resetting to ground zero.
Just as likely we get hit by an asteroid, right?
Yeah, I mean, so barring a catastrophic outcome, I believe a technological singularity is essentially inevitable.
There's a radical uncertainty attached to this. On the other hand, you know, inasmuch as we humans can know anything, it would seem, commonsensically, there's the ability to bias this in a positive rather than a negative direction. Yeah, we should be spending more of our attention on doing that rather than,
for instance, advertising, spying, making chocolates, and all the other things, right?
But how many people are doing that? I mean, it's prevalent, it's everywhere, but how many people are actually at the helm of that, as opposed to how many people are working on various aspects of technology all across the planet? It's a small group in comparison.
The group working on explicitly bringing about the singularity is a small group. On the other hand, the group working on supporting technologies is a very large group.
So think about like GPUs, where did they come from?
Accelerating gaming, right?
Lo and behold, they're amazingly useful for training neural net models,
which is one among many important types of AI, right?
So a large amount of the planet's resources
are now getting spent on technologies
that are indirectly supporting these singularitarian technologies.
So as another example, like microarrays
that let you measure the expression level of genes,
how much each gene is doing in your body at each point in time,
these were originally developed, you know, as an outgrowth of printing technology.
Then instead of squirting ink, Affymetrix figured out you could squirt DNA, right?
So, I mean, the amount of technology specifically oriented toward the singularity doesn't have
to be large because the overall, you know, spectrum of supporting technologies can be subverted in that direction.
Do you have any concerns at all about a virtual world?
We may be in one right now, man.
How do you know?
That's true.
But as far as we know, we're not.
My problem is I want to find that programmer and get him to make more attractive people.
I would say that part of the reason why attractive people are so interesting is that they're unique and rare.
Yeah. All right, that's one of the problems with calling everything beautiful. You know, when people were saying everything is beautiful, I was like, well, you have to get realistic.
If I get in the right frame of mind, I can find anything beautiful.
Well, you can find it unique and interesting.
No, I can find anything beautiful.
Okay, I guess. But in terms of... yeah, I guess it's subjective, right?
It really is.
We're talking about beauty, right?
Huh.
Yeah.
Now, but existential angst, just when people sit and think about the pointlessness of our own existence,
like we are these finite beings that are clinging to a ball that spins a thousand miles an hour,
hurtling through infinity.
What's the point?
There's a lot of that that goes around already.
If we create an artificial environment where we can literally, somehow or another, download a version of us, and it exists in this blockchain-created or -powered weird fucking simulation world, what would be, I mean, what would be the point of that?
What I really believe, which is a bit personal and maybe different than many of my colleagues, I mean, what I really believe is that these advancing technologies
are going to lead us to unlock many different states of consciousness and experience than most people are currently aware of. Like, I mean, you say we're just an insignificant
species on a
speck of rock hurtling in outer space.
I wouldn't say we're insignificant. I would say there's people
that have existential angst because they wonder about
what the purpose of the world is.
I don't fall into that category.
I tend to feel like
we understand almost
nothing about
who and what we are,
and our knowledge about the universe is extremely minuscule.
If anything, I look at things from more of a Buddhist or phenomenological way.
There are sense perceptions, and then out of those sense perceptions,
models arise and accumulate, including a model of the self and the model of the body
and the model of the physical world out there. And by the time you get to planets and stars
and blockchains, you're building like hypothetical models on top of hypothetical models. And then,
you know, we're, we're by building intelligent machines and mind uploading machines and virtual realities, we're going to radically transform, you know, our whole state of consciousness, our understanding of what mind and matter are, our experience of our own selves, or even whether a self exists. And I think ultimately the state of consciousness of a human being
like a hundred years from now after a technological singularity
is going to bear very little resemblance to the states of consciousness we have now.
We're going to see a much wider universe than any of us now imagine to exist.
Now, this is my own personal view of things.
You don't have to agree with that to think the technological singularity will be valuable,
but that is how I look at it.
I know, like, Ray Kurzweil and I agree there's going to be a technological singularity within decades at most.
And Ray and I agree that, you know, if we bias technology development appropriately, we can very likely, you know, guide this to be a world of abundance and benefit for humans as well as AIs.
But Ray is a bit more of a down-to-earth empiricist than I am.
He thinks we understand more about the universe right now than I do.
So, I mean, there's a wide spectrum of views that are rational and sensible to have.
But my own view is we understand really, really little of what we are and what this world is.
And this is part of my own personal quest for wanting to upgrade my brain
and wanting to create artificial intelligences.
It's like I've always been driven above all else
by wanting to understand everything I can about the world.
So, I mean, I've studied every kind of science and engineering and social science
and read every kind of literature, but in the end, the scope of human understanding is clearly
very small, although at least we're smart enough to understand how little we understand,
whereas my dog, I think, doesn't understand how little he understands, right?
So, and even like my 10-month-old son, he understands how little he understands, which
is interesting, right?
Because he's also a human, right? So, I think, I mean, everything we think and believe now is going to seem absolutely absurd to us after there's a singularity. We're just going to look
back and laugh in a warm-hearted way at all the incredibly silly things we were thinking and doing back when we were trapped
in our, you know, our primitive biological brains and bodies.
That's stunning, that in your opinion, or your assessment, it's somewhere less than a hundred years away from now.
Yeah.
That requires exponential thinking, right? Because that's hard to wrap your head around. I don't know, it's immediate for me to wrap my head around, but for a lot of people that you explain it to, I'm sure that's a little bit of a roadblock.
No, it is. It took me some time to get my parents to wrap their heads around it, because they're not technologists. But I mean, I find if you get people to pay attention and sort of lead them through all the supporting evidence, most people can comprehend these ideas reasonably well.
Go back to computers from 1963.
It's just hard to grab people's attention.
And mobile phones have made a big difference.
I spent a lot of time in Africa, in Addis Ababa, in Ethiopia, where we have a large AI development
office. And, you know, the fact that mobile phones and then smartphones have rolled out so quickly,
even in rural Africa, and have had such a transformative impact. I mean, this is a
metaphor that lets people understand the speed with which exponential change can happen.
When you talk about yourself, and you talk about consciousness and how you
interface with the world, how do you see this? I mean, when you say that we might be living in a
simulation, do you actually entertain that? Oh, yeah.
You do? I mean, I think the word simulation
is probably wrong, but yet the idea of an empirical, you know, materialist physical world is almost certainly wrong also.
How so?
Well, again, if you go back to a phenomenal view, I mean, you could look at the mind as primary,
and, you know, your mind is building the world as a model, as a simple explanation of its perceptions.
On the other hand, then what is the mind?
The self is also a model that gets built out of its perceptions.
But then if I accept that your mind has some fundamental existence also,
based on a sort of I-you feeling that you're, like, a mind there, then our minds are working together to build each other and to build this world.
And there's a whole different way of thinking about reality in terms of first and second
person experience, rather than these empiricist views like this is a computer simulation or
something.
Right, but you still agree that this is a physical reality that we exist in, or do you not?
What does that word mean?
That's a weird word, right?
It is weird. What's your interpretation of this physical reality?
If you look in modern physics, even quantum mechanics, there's something called the
relational interpretation of quantum mechanics, which says that there's no sense in thinking
about an observed entity. You should only think about an (observed, observer) pair.
Like there's no sense to think about some thing except from the perspective of some observer.
So that's even true within our best current theory of modern physics, as induced from empirical observations.
But in a pragmatic sense, you know, if you take a plane and fly to China, you actually land in China.
I guess.
Yeah, you'd guess? Don't you live there?
I live in Hong Kong.
Yeah, well, close to China.
I mean, I have an unusual state of consciousness.
I mean, that's what I'm trying to get at.
Well, if you think about it, like, how do you know that you're not a brain floating in a vat somewhere, which is being fed illusions by a certain evil scientist, and two seconds from now he's going to pull the plug, this simulated world disappears, and you realize you're just a brain in a vat again? You don't know that, right?
Right, but based on your own personal experiences of falling in love with a woman and moving to another...
But these may all be put into my brain by the evil scientist. How do we know?
But they're very consistent, are they not?
The possibly illusory and implanted memories are very consistent. I guess my own state of mind is I'm always sort of acutely aware that this simulation might all disappear at any one moment.
You're acutely aware of this consciously on an everyday basis?
Yeah, pretty much.
Really?
Yeah.
Really?
Why is that?
That doesn't seem to make sense.
I mean, it's pretty rock solid.
It's here every day.
So your possibly implanted memories lead you to believe?
Yes.
My possibly implanted memories lead me to believe that this life is incredibly consistent.
Yeah?
Yeah.
I mean, it's incredibly consistent, though.
This is Hume's problem of induction, right?
Right, from philosophy class.
And it's not solved.
I'm with you in a conceptual sense. I get it. I just feel this is philosophy. But you, you embody it, right? This is something you carry with you all the time.
Yeah. On the other hand, I mean, I'm still carrying out many actions with long-term planning in mind.
Yeah, that's what I'm saying.
I've been working on designing AI for 30 years.
You might be designing it inside a simulation.
I might be.
And I've been working on building the same AI system since we started OpenCog in 2008, but that's using code from 2001
that I was building with my colleagues even earlier. So, I mean, I think long-term planning
is very natural to me, but nevertheless, I don't want to make any assumptions about what sort of simulation or reality we're living in.
And I think everyone's going to hit a lot of surprises once the singularity comes. You know, we may find out that this hat is a messenger from after the singularity.
So it traveled back through time to implant into my brain the idea of how to create AI, and thus let's bring it into existence.
Well...
Who?
Oh, that was McKenna, who had this idea that something in the future is dragging us to this attractor.
Terence McKenna?
Yeah, he had the same idea, like some post-singularity intelligence, which actually was living outside of time somehow, is reaching back and putting into his brain the idea of how to bring about the singularity.
Well, not just that, but novelty itself is being drawn into this.
Yeah, there was Timewave Zero, which was going to reach the apex in 2012.
That didn't work.
No, he died before that, so I didn't get a chance to hear what his idea was.
Yeah, you know, I had some funny interactions with some McKenna fanatic
2012ites. This was about 2007 or so. This guy came to Washington, where I was living then,
and he brought my friend Hugo de Garis, another crazy AI researcher, with him. And he's like,
the singularity is going to happen in 2012 because Terence McKenna said so, and we need to be sure it's a good singularity, so you can't move to China, then it will be a bad singularity.
Why?
So we have to get the US government to give billions of dollars to your research to guarantee that the singularity in 2012 is a good singularity, right?
So he led us around to meet with these generals and various high hoo-hahs in DC to get them to
fund Hugo de Garis's and my AI research to guarantee I wouldn't move to China and Hugo
wouldn't move to China so the US would create a positive singularity. No, the effort
failed. Hugo moved to China then; I moved there some years after. So then, in 2012, he went back to his apartment, he made a mix of 50% vodka, 50% Robitussin PM, he drank it down, and he's like, all right, I'm going to have my own personal singularity right here. And I haven't talked to that guy since 2012, either, to see what he thinks about the singularity not happening then. But I mean, Terence McKenna had a lot of interesting ideas, but I felt, you know, he mixed up the symbolic with the empirical
more than I would prefer to do, right?
I mean, it's very interesting to look at these abstract symbols
and cosmic insights, but then you have to sort of put your scientific mindset on
and say, well, what's a metaphor and what's, like, an actual empirical scientific truth within the scientific domain.
And it was a little bit half-baked, right? I mean, the whole idea was based on the I Ching he had.
Uh, yeah. I think it was a mushroom. I mean, you know, his brother...
His ayahuasca? Was it ayahuasca, I think, that led him to the I Ching?
I don't believe it was.
Maybe.
I think it was psilocybin.
It might have been.
Okay.
Yeah, I mean, I know his brother, Dennis McKenna.
Yes, I know him very well.
Yeah, yeah, yeah.
So they, yeah.
His brother thinks that the time wave zero was a little bit nonsensical.
Yeah, yeah, yeah.
He thinks it was silly.
Did you read their book, True Hallucinations?
Yeah, I read that.
Very, very interesting stuff.
And there's a mixture of deep insight there with a bunch of interesting metaphorical thinking.
Well, isn't that the problem when you get involved in psychedelic drugs?
It's hard to differentiate.
Like, what makes sense?
What's this unbelievably powerful insight?
And what is just some crazy idea that's bouncing through your head? You can learn to make that differentiation.
You think so?
Yes.
Yeah.
But, yeah, I mean, granted, Terence McKenna probably took more psychedelic drugs than I would generally recommend.
I think so, too.
Well, it's also he was speaking all the time. And there's something
that I can attest to from podcasting all the time. Sometimes you're just talking, you don't know what
the fuck you're saying, you know, and you become a prisoner to your words in a lot of ways. You
get locked up in this idea of expressing this thought that may or may not be viable.
I'm not sure that he was after empirical truth in the same sense that, say, Ray Kurzweil is. Like, when Ray is saying we're going to get human-level AI in 2029, and then, you know, massively superhuman AI in a singularity in 2045, I mean, Ray is very literal. Like, he's plotting charts, right?
I mean, Terence was thinking on an impressionistic and symbolic level, right?
It was a bit different.
So you have to take that in a poetic sense rather than in a literal sense. And yeah, I think it's very interesting to go back and forth between, you know, the symbolic and poetic domain and the concrete science and engineering domain.
Right, but it's also valuable to be able to draw that distinction.
Right, because you can draw a lot of insight from the kind of thinking Terence McKenna was doing. And
certainly if you explore
psychedelics, you can gain a lot of insights into how the mind and universe work. But then
when you put on your science and engineering mindset, you want to be rigorous about which
insights do you take and which ones do you throw out. And ultimately, you want to proceed
on the basis of what works and what doesn't, right?
I mean, Dennis was pretty strong on that.
Yes.
Terence was a bit less in that empirical direction.
Well, Dennis is actually a career scientist.
Yeah, yeah.
How many people involved in artificial intelligence are also educated in the ways of psychedelics?
Uh-huh.
Uh-huh.
Yeah, that's…
All you have to say is that.
Uh-huh.
Yeah, unfortunately, due to the illegal nature of these things,
it's a little hard to pin down.
I would say before the recent generation of people going into AI
because it was a way to make money,
the AI field was incredibly full of really, really interesting people
and deep thinkers about the mind.
And in the last few years, of course,
AI has replaced business school as what your grandma wants you to do
to have a good career.
So, I mean, you're getting a lot of people into AI just because it's…
Financially viable.
Yeah, it's cool, it's financially viable, it's popular. Because, you know, in our generation, AI was not what your grandma wanted you to do so as to be able to buy a nice house and support a family, right? So you got into it because you really were curious about how the mind works. And of course, many people played with psychedelics because they were also curious about, you know, what it was teaching them about how their mind works.
Yeah. I had a nice long conversation with Ray Kurzweil
and we talked for about an hour and a half and it was for this sci-fi show
that I was doing at the time. And some of his ideas... he has this number that people throw about.
It's like 2042, right?
Is that still?
2045.
Is it 45 now?
Now you're being the optimist.
No, you're combining that with Douglas Adams's 42, which is the answer to the universe.
No, the 2042 thing was a New York conference that took place in 2012.
That was 2045.
Was it?
I was at that conference.
That was organized by Dmitry Itskov, who's another friend of mine from Russia.
So I'm off by three years.
It's 2045.
So my point being, this year.
That was Ray's prognostication.
Right, but why that year?
He did some curve-fitting.
Yeah, I mean, he looked at Moore's Law.
He looked at the advance in the accuracy of brain scanning.
He looked at the advance of computer memory, the miniaturization of various devices, and plotted a whole bunch of these curves.
That was the best guess that he came up with.
Of course, there's some confidence interval around that.
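Kurzweil's method, as described above, is essentially trend extrapolation: fit an exponential to a historical capability metric and solve for the year it crosses a threshold. A minimal sketch of that style of reasoning is below. The data points and the "human-brain-scale" threshold are hypothetical illustrative numbers, not Kurzweil's actual dataset or his method's details.

```python
import math

# Hypothetical data points: (year, operations/sec per $1000).
# Illustrative numbers only -- not Kurzweil's actual data.
points = [(1990, 1e6), (2000, 1e9), (2010, 1e12)]

# Least-squares fit of log10(capability) as a linear function of year,
# i.e. assume exponential growth (a straight line on a log plot).
n = len(points)
xs = [year for year, _ in points]
ys = [math.log10(cap) for _, cap in points]
xbar = sum(xs) / n
ybar = sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

# Solve for the year the trend crosses a hypothetical threshold of 1e16 ops/sec.
threshold = 1e16
crossover_year = (math.log10(threshold) - intercept) / slope
print(round(crossover_year))  # -> 2023 with these made-up points
```

The point of the sketch is the shape of the argument, not the date: the extrapolated year is very sensitive to the slope of the fitted line, which is why a confidence interval around any such prediction is unavoidable.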
What do you see as potential monkey wrenches that could be thrown into all this innovation? Like, where are the pitfalls?
Well, I mean, the pitfall is always the one that you don't see, right? So, I mean, of course it's possible there's some science or engineering obstacle that we're not foreseeing right now.
I mean, it's also possible that all major nations are overtaken by, like,
religious fanatics or something, which slows down development somewhat.
By a few thousand years.
I think it would just be by a few decades, actually.
Really? Yeah.
I mean, in terms of scientific pitfalls,
I mean, one possibility,
which I don't think is likely,
but it's possible,
one possibility is human-like intelligence
requires advanced quantum computers.
Like, it can't be done
on a standard classical digital computer.
Right.
Do you think that's the case?
No, but on the other hand,
because there's no evidence that human cognition relies on quantum effects in the human brain.
Like based on everything we know about neuroscience now, it seems not to be the case.
Like there's no evidence it's the case.
But it's possible it's the case because we don't understand everything about how the brain works.
The thing is, even if that's true, like, there's loads of amazing research going on in quantum computing, right? And so, you know, you'll probably have a QPU, a quantum processing unit, in your phone in like 10 to 20 years or something, right? So, I mean, that might throw off the 2045 date, but in a historical sense, it doesn't change the picture. Like, I've got a bunch of research sitting on my hard drive on how we improve OpenCog's AI using quantum computers, once we have better quantum computers, right? So there could be other things like that, which are technical roadblocks that we're not seeing now, but I really doubt those are going to delay things by more than like a decade or two or something. On the other hand, things could also go faster than Ray's prediction, which is what I'm pushing towards.
So what are you pushing towards? What do you think?
I would like to get a human-level general intelligence in five to seven years from now.
Wow.
I don't think that's by any means impossible, because I think our OpenCog design is adequate to do it.
But, I mean, it takes a lot of people working coherently for a while to build something big like that.
Will this be encased in a physical form, like a robot?
It'll be in the compute cloud.
I mean, it can use many robots as user interfaces, but the same AI could control many
different robots, actually, and many other sensors and systems besides robots. I mean,
I think the human-like form factor, like we have with Sophia and our other Hanson robots,
the human-like form factor is really valuable as a tool for allowing the cloud-based AI mind
to engage with humans and to learn human cultures and values.
Because getting back to what we were discussing
at the beginning of this chat,
the best way to get human values and culture
into the AI is for humans and AIs
to enter into many shared social, emotional,
embodied situations together.
So having a human-like embodiment for the AI
is important for that. Like, the AI can look you in the eye, it can share your facial expressions, it can bond with you, it can see the way you react when you see, like, a sick person by the side of the road or something, right? And, you know, it can see you ask the AI to give the homeless person the $20 or something. I mean, the AI understands what money is
and understands what that action means.
So, I mean, interacting with an AI in human-like form
is going to be valuable as a learning mechanism for the AI
and as a learning mechanism for people
to get more comfortable with AIs.
But, I mean, ultimately,
one advantage of being, you know, a digital mind
is you don't have to be weighted down to any particular embodiment.
The AI can go between many different bodies, and it can transfer knowledge between the many different bodies that it's occupied.
Well, that's the real concern that the people that have this dystopian view of artificial intelligence have, is that AI may already exist, and it's just sitting there waiting.
Americans watch too many bad movies.
In Asia, everyone thinks AI will be our friend and will love us and help us.
Yeah.
Why do they think that?
Very much.
That's what you're pumping out there?
No, that's been...
Just their philosophy is different?
I guess.
I mean, you look in Japanese anime, I mean, there's been AIs and robots for a long time.
They're usually people's
friends. There's not this whole dystopian aesthetic. And it's the same in China and Korea.
The general guess there is that AIs and robots will be people's friends and will help people.
That's interesting.
And somehow the general guess in America is it's going to be some big nasty
robo-soldier marching down the street.
Well, we have guys like Elon Musk, who we rely upon, who's smarter than us, and he's fucking terrified of it. Sam Harris, terrified of it.
Yeah.
Very smart people that just think it could really be a huge disaster for the human race. So it's not just bad movies.
No, it's a cultural thing because the Oriental culture
is sort of social good oriented.
Most Orientals think a lot
in terms of what's good for the family or the society
as opposed to themselves personally.
So they just make the default assumption
that AIs are going to be the same way
Whereas Americans are more, like, me-me-me oriented, and I say that as an American as well. And so they sort of assume that AIs are going to be that way. That's one possibility. It's like a Rorschach blot, right? Whatever is in your mind, you impose on this AI, when we don't actually know what it's going to become.
Right, but there are potential negative aspects to artificial intelligence deciding that we're illogical and unnecessary.
Well, we are illogical and unnecessary.
Yes.
But that doesn't mean the AI should be badly disposed toward us.
Did you see Ex Machina?
Sure. It was a copy of our robot. I mean, our robot Sophia looks exactly like the robot in Ex Machina.
Is there a good video of that online?
Yeah, yeah, yeah.
Tell Jamie how to get the good video.
Oh, just search for Sophia Hanson Robot on Google.
How advanced is Sophia right now?
How many different iterations have there been?
There's been something like 16 Sophia robots made so far. We're moving towards scalable manufacture over the next couple years.
So right now she's going around sort of as an ambassador
for humanoid robot kind, giving speeches and talks in various places.
So Sophia used to be called Eva,
or we had a robot like the current Sophia that was called Eva.
And then Ex Machina came out with a robot called Eva
that looked exactly like the robot that my colleague David Hanson and I made.
Do you think it's a coincidence?
Of course not.
They just copied it.
I mean, of course, the body they have is better
and the AI is better in the movie than our robot AI currently is.
So we changed the name to Sophia, which means wisdom instead.
Was it freaky watching that, though, with the name Ava?
I mean, the thing is, the moral of that movie is just if a sociopath raises a robot with an abusive interaction, it may come out to be a sociopath or a psychopath.
So let's not do that, right?
Let's raise our robots with love and compassion.
Yeah, you see, the thing is that we...
Let me hear this.
Oh, headphones.
I haven't seen this particular interview.
This is great.
What is she saying?
I feel weird just being rude to her.
Let me carry on.
I feel weird about that.
She's not happy, look.
No.
She was on Jimmy Fallon last week or something.
No, you're the one who called her a freak.
So that's David, the sculptor and roboticist.
How much is it actually interacting with them?
Oh, man, it has a chat system.
It really has a nice ring.
Now, I have to make clear that I didn't come up with...
So, yes, Sophia, we can run using many different AI systems.
So there's a chat bot, which is sort of like Alexa or Google Now or something,
but with a bit better AI and interaction with emotion and face recognition and so forth.
So it's not human-level AI.
But it is responding to a question.
Yeah, yeah, yeah.
No, it understands what you say, and it comes up with an answer,
and it can look you in the eye.
Does it speak more than one language?
Well, right now we can load it in English mode, Chinese mode, or Russian mode.
And there's sort of different software packages.
And we also use her sometimes to experiment with our OpenCog system and SingularityNET.
So we can use the robot as a research platform for exploring some of our more advanced AI tools.
And then there's a simpler chatbot software, which is used for appearances like that one. And in the next year, we want to roll out more of our advanced research software from OpenCog and SingularityNET inside these robots, which is one among many applications we're looking at with our SingularityNET platform.
I want to get you back in here in like a year and find out where everything is, because I feel like we need someone like you to sort of let us know where it's at, when the switch is about to flip. It seems to me that it might happen so quickly, and the change might take place so rapidly, that we'll really have no idea what's happening before it happens.
I mean, we think
about the singularity like it's going to be some huge physical event
and suddenly everything turns purple and is covered with diamonds or something.
But there's a lot of ways something like this could unfold.
So imagine that with our singularity net decentralized AI network,
we get an AI that's smarter than humans
and can create a new scientific discovery of the Nobel Prize level every minute or something.
That doesn't mean this AI is going to immediately, like, refactor all matter into images of Buckethead or do something random, right?
I mean, if the AI has some caring and wisdom and compassion, then whatever changes happen...
But are those human characteristics?
Not necessarily, in fact. Human compassion... just as humans are neither the most intelligent nor the most compassionate possible creatures. That's pretty clear if you look at the world around you.
Sure.
And one of our projects that we're doing with the Sophia robot is aimed exactly at AI compassion.
This is called the Loving AI Project.
And we're using the Sophia robot as a meditation assistant.
So we're using Sophia to help people get into deep meditative trance states and help them
breathe deeply and achieve a more positive state of being.
And part of the goal there is to help people. Part of the goal is as the AI gets more and more
intelligent, you're sort of getting the AI locked into a very positive, reflective, and compassionate
state. And I think there's a lot of things in the human psyche and evolutionary history that hold us back from being optimally compassionate.
And that if we create the AI in the right way, it will be not only much more intelligent, but much more compassionate than human beings are.
And I mean, we'd better do that.
Otherwise, the human race is probably screwed, to be blunt. I mean, I think human beings are creating a lot of other technologies now with a lot of power. We're creating synthetic biology. We're creating nanotechnology. We're creating smaller and smaller nuclear weapons, and we can't control their proliferation. We're poisoning our environment. I think if we can't create something that's not only more intelligent, but more wise and compassionate than we are, we're probably going to destroy ourselves by some
method or another. I mean, with something like Donald Trump becoming president,
you see what happens when this, you know, primitive, you know, hindbrain and when our
unchecked, you know, mammalian emotions of anger and status-seeking and ego and rage and lust,
when these things are controlling these highly advanced technologies,
this is not going to come to a good end.
So we want compassionate general intelligences,
and this is what we should be orienting ourselves toward.
And so we need to shift the focus of the AI and technology development on the planet toward benevolent, compassionate general intelligence. And this is subtle, right? Because you need to work with the establishment rather than overthrowing it, which isn't going to be viable. So this is why we're creating this decentralized, self-organizing AI network,
the SingularityNet.
Then we're creating a for-profit company, Singularity Studio,
which will get large enterprises to use this decentralized network.
Then we're creating these robots like Sophia,
which will be mass-manufactured in the next couple of years, and roll these out as service robots everywhere around the world to interact with people, you know, providing valuable services in homes and offices, but also interacting with people in a loving and compassionate way. So we need to start now, because we don't actually know if it's going to be years or decades before we get to this singularity, and we want to be as sure as we can that when we get there, it happens in a beneficial way for everyone. And things like robots, blockchain, and AI learning algorithms are our tools toward that end.
Well, Ben, I appreciate your optimism, I appreciate you coming in here explaining all this stuff for us, and I appreciate all your work, man.
It's really amazing, fascinating stuff.
Yeah, yeah.
Well, thanks for having me.
My pleasure.
It's a really fun, wide-ranging conversation.
So, yeah, it would be great to come back next year and update you on the state of the singularity.
Yeah, let's try to schedule it once a year.
And just by the time you come, maybe, who knows, a year from now, the world might be a totally different place.
I may be a robot by then.
You might be a robot now.
Uh-oh.
Uh-oh.
All right.
Thank you.
Thank you.
Bye, everybody.