Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Ben Goertzel: SingularityNET – The Global AI Network and Marketplace

Episode Date: February 20, 2019

Artificial Intelligence is often misunderstood. And much like blockchain, those who fiercely stand by the technology believe it will change the world for the better. Others fear the negative repercussions it could bring and would rather see it disappear. We’re joined by Ben Goertzel. Ben’s interest in AI and robotics dates back to his childhood, and he has made these his lifelong passion and work. He is the CEO of SingularityNET, a company building a marketplace for AIs which leverages blockchain. He is also Chief Scientist at Hanson Robotics, which has brought us the now-famous Sophia robot. When he’s not building blockchains and robots, he leads the OpenCog open-source AI framework and is Chair of Humanity+, an organization which focuses on technology and ethics.

Topics covered in this episode:
- Ben’s background as a mathematician and his lifelong passion for AI and robotics
- What AI, AGI and machine learning are, and how these technologies differ
- What the killer application for AI is
- The problem of data and power centralization as it relates to AI
- AI safety and what we should be most concerned about when it comes to AI dominance
- Hanson Robotics and the Sophia robot
- How blockchains and AI are relevant to each other
- The role of AI in blockchain governance and the potential for AI systems to compete amongst each other
- What SingularityNET is and what the company is building

Episode links:
- Ben Goertzel's Website
- SingularityNET website
- SingularityNET whitepaper
- Ben Goertzel portrayed in Silicon Valley
- Creating Internet Intelligence
- Accelerando

Thank you to our sponsors for their support:
- Simplify your hiring process & access the best blockchain talent. Get a $1,000 credit on your first hire at toptal.com/epicenter.
- Join the most interoperable ecosystem of connected blockchains. Learn more at cosmos.network/epicenter.

This episode is hosted by Brian Fabian Crain and Sébastien Couture. Show notes and listening options: epicenter.tv/275

Transcript
Starting point is 00:00:00 This is Epicenter, episode 275 with guest Ben Goertzel. This episode of Epicenter is brought to you by Cosmos. Cosmos is building the internet of blockchains, an ecosystem where thousands of blockchains can interoperate, creating the foundation for a new token economy. If you have an idea for a DApp, visit cosmos.network slash epicenter to learn more and to get in touch with the Cosmos team. And by Toptal. Experience a new way of hiring as Toptal delivers only the top 3% of applicants, including highly skilled blockchain engineers.
Starting point is 00:00:47 If you're looking to scale your team with the very best talent, visit toptal.com slash epicenter. Hi, welcome to Epicenter. My name is Sébastien Couture. And my name is Brian Fabian Crain. Today we speak with Ben Goertzel, who is the CEO of SingularityNET. He also works for a company called Hanson Robotics, which you may have seen in the news. Hanson Robotics makes this kind of humanoid robot named Sophia, who's been featured on TV shows and in Silicon Valley, the sitcom, and has sort of become an ambassador for robotics. And so Ben is an expert in artificial intelligence and robotics, and SingularityNET is a company that is building a sort of marketplace for AI on a blockchain. So we talked to Ben about all kinds of interesting topics that we don't usually get a chance to discuss on the show, since we primarily focus on blockchain. But talking about AI in a more general sense, and what the future is there, and how that ties into blockchain, is a really fascinating conversation. And Ben is a great speaker on these topics and does a lot of thinking at a high level. So it was really fascinating to get to interview him.
Starting point is 00:02:07 But first we've got a couple of announcements. So one thing I should mention is that we're going to be, at least I will be, at EthCC on the week of March 4th. There's Paris Blockchain Week happening in Paris that whole week, and so I'll be at EthCC. We'll also be having a meetup. The meetup is on March 6th, so on the Wednesday, around 6 o'clock. It's just going to be a casual, you know, get-together drinks meetup. The venue isn't totally figured out yet, but it'll be announced soon. And I hope, yeah, I hope to see you there. I'll be there, Sunny might be there as well, and we'll have some guests and other listeners. We're happy to have you join us for that meetup. And you can sign up and register at epicenter.rocks slash ethcc. So that's epicenter dot rocks slash ethcc. If you register there, we'll send you the address for the venue
Starting point is 00:03:03 when it's announced. And Brian, I think you had an update on course one. Yeah, but first of all, that sounds great. I'm so jealous. I wish I could be in Paris too. I think it's going to be a really great conference. I mean, the first one that they had a couple years ago, we were there together. It was terrific. I wasn't there for the subsequent ones, but it's now grown to, the organizers have told me they're waiting for, they're hoping to have 1,500 people there. It's going to be at the Knum, which is this really fantastic venue. And, yeah, 300 speakers. So it's, it's turning out to be quite a, quite a huge conference. Cool, fantastic. But yeah, so I did want to give a brief updated on course one so you know as as many listeners know most listen know so we've been
Starting point is 00:03:48 meher and i started his company together it's just over a year ago to work on kind of building ballot as proof of stake networks and and finally now we are alive on the first network which is project called loom so it's an ethereum side chain sort of a plasma cash chain you know we did we did an interview on this before as well uh and uh it's using tenement as consensus so we just launched there last Friday. And so anyone who has Lume can delegate to us. We also wrote this in-depth kind of research report on LUMM. So if people want to check that out, that's also available.
Starting point is 00:04:27 And we'll put a link to our website in the show notes. So people can find everything there. And then Cosmos is also about to go live, which is also something where we'll have a Val later on. And that brings us to the other. thing we wanted to speak about, which is, first of all, we have, Cosmos is starting to sponsor Epicenter, so that's very exciting, and is starting to do so with this episode. Now, we do need to make a disclaimer here, which is that, first of all, I have some atoms, and Sebassi has some
Starting point is 00:05:02 atoms too, Sonny has Adams, and Meher has Adams too, right? So the Epicenter team is fairly, heavily, you know, has some atom positions. And in addition to that, of course, Sonny worked for the Cosmos team. And I used to work for a Cosmos team. And then Meher and I have also been building a Cosmos Validator. So we just wanted to give that disclaimer up front. So people know that and fully aware of that. And that ties into a larger topic, which we've had a bunch of discussions about, but we haven't really taken the necessary actions on. And it's long long overdue for us to do that, which is to have a better way of disclosing those kind of things. So what we will start doing, and we'll probably have that up within the next week or so,
Starting point is 00:05:56 is it's just a page on our website where, you know, you'll be able to see all of the hosts and they will list all of the tokens or, you know, other kind of. investments they have in the blockchain space. And then I think the other thing will start doing is maybe in the show notes for every episode. I mean, this is something we have been doing generally. Like, let's say there's an episode
Starting point is 00:06:26 and like somebody has this token. Then we've generally been mentioning that, probably with the exception of Bitcoin and Ethereum episodes. But But we just want to be more consistent there, really make sure we mention it every time, and also write it in the show notes so that I think that's mentioned. The fact that we hold items, I guess, at least for me, I mean, I've always been interested in Cosmos,
Starting point is 00:06:53 and I'm actually generally quite excited that they're launching. And so having them become a sponsor, it just felt like a really natural fit. And so I think you'll see in the ad that, you know, their intention to sponsor the show is sort of a benefit for both. One, because we really think the Cosmos is a great platform and people should generally have interest in. And also, the Cosmos team has always been very closely, we've always been very close to that team and I've always sort of appreciate what we've done.
Starting point is 00:07:22 And, you know, Jay was one of our early guests and sort of things. Yeah, so I think we've set enough on that and we'll have that page on our website within about a week or so. And we'll make a point of also mentioning it in the show. as you mentioned. So without further delay, here's our interview with Ben Gertzel. Hi, so we're here with Ben Gertzel and Ben is the founder and CEO of Singularity Net. He's also the chief scientists in Hansa Robotics and holds a number of positions in other organizations, but we'll get to that in today's interview with Ben. So hi there, thanks for joining us.
Starting point is 00:07:58 Hey, it's a pleasure to be here. Well, thank you for joining us. So yeah, you have a very impressive resume so as I mentioned you're the founder and CEO of Singularity Net you're also chief scientist is handsome robotics you have a PhD in mathematics you've started a whole bunch of companies and lots of different areas and you're involved in some nonprofits and foundations as well so how did you how did you get here and what what is your trajectory it looked like and how did you get involved in AI and robotics I interested in AI robotics, life extension, nanotech, femtotech, time travel, all these things
Starting point is 00:08:42 since the early 1970s when I was a little kid reading science fiction books. And now a few decades have passed. And I find myself in a world where many of these apparently science fictional technologies are gradually becoming realities. And, you know, And so it's really exciting to me to actually be every day, you know, concretely working on building thinking machines and networking people and computers together into a global brain and applying AI to longevity and nanotechnology. It's astounding that we live in a time when these things are realities.
Starting point is 00:09:27 And of course, it's also a bit scary and sobering at times because these things could go badly wrong or they could go amazingly right. And, you know, I've been involved in a lot of different aspects of all these technologies, many of which are converging together now. So I did a PhD in mathematics, but even at that time, I was very interested in AI, biotechnology, and a bunch of other things. I just triggered mathematics, you know, as Bitcoin says, in math, we trust, right? mathematics underlies everything.
Starting point is 00:10:04 That's the foundation of all modern science and technology. So I figured learning a bunch of math couldn't be bad, but since shortly after getting my PhD, I've been really, which was 89, I got my degree. I mean, since then I've been working on AI in various dimensions and aspects. And now in the last few years, that's really taken off along with a bunch of other technologies. And of course, blockchain and cryptocurrency, which you guys know a lot about is all part of the mix.
Starting point is 00:10:34 Right now there's an insane number of different advanced technologies for manipulating, creating different kinds of information that are all intersecting and pushing each other forward. And you could talk about these for hundreds of hours without exhausting at all. Yeah, that's definitely true. So let's spend a little bit of time first on the topic of AI, which is something that I think, You know, we've tangentially talked about a bunch of times, but still it's, I guess, like, like probably for many outside of blockchain space, there's, you know, big, scary term that's a little bit hard to kind of demystify. So how do you define AI? And what's the difference
Starting point is 00:11:16 between AI and, you know, terms that people use like machine learning and deep learning? I don't think any of these terms are worth too much in, in the end. I mean, AI, I mean, in what sense is it really artificial? It's all part of nature. And to some extent, these systems are evolving and emerging instead of being purely artificially created. And intelligence, we don't even have a good definition for among humans. Like, there's not an IQ test that works across different cultures or ages of people, let alone across different kinds of minds. So none of these are very rigorous terms. I mean, machine learning, I guess, Again, in essence, all AI really is about machines that learn and reason and think.
Starting point is 00:12:05 That term has lately come to be used to describe particular types of AI algorithms that are trained on large amounts of data, but then the term is also used more loosely. So is reinforcement learning a kind of machine learning or not? It's not especially well-defined. And I mean, deep learning, again, in cognitive science, you know, a guy named Stellan Olson wrote a book on deep learning, what, 15 years ago, which encompassed neural networks, logic systems, production systems, many kinds of AI algorithms. But now the term seems to be used for what used to be called multilayer perceptrons, multilayer neural networks, which is really only one special kind of deep learning system. in the broader sense. So, I mean, what deep learning originally meant
Starting point is 00:13:00 was any system that just does hierarchical pattern recognition, like recognizes patterns within patterns, within patterns, within patterns in the world and uses those to take some action. The deep learning systems being talked about mostly now are hierarchical neural networks, which is one special kind of deep learning system in the broader case. So, I mean, what we have is a lot of words
Starting point is 00:13:22 with confusing definitions that shift over time, and don't necessarily mean what they sound like they mean, which comes back to in-math we trust, right? Because the thing is the algorithms are doing what they're doing, and there's a real mathematical description to them, and they carry out practical functions. But the buzzwords associated with them serve mostly to sell things rather than to convey useful information. Okay, okay. That is helpful, but then let's speak about one term, And I think that's the term that maybe you do have more of a relationship to, which is AGIs or artificial general intelligence.
Starting point is 00:14:05 Yeah. Well, let me let me try to go to what I think are the foundations here. Because, I mean, it is possible to describe these things in a way that makes sense. It's just that things become marketing buzzwords. and then become confusing. So I think fundamentally you can think about a mind or an intelligence system as something that's recognizing patterns in itself and in the world around it.
Starting point is 00:14:42 And then the system may have some goals, which doesn't mean everything it does is goal-directed, but it may have some goals. And it then recognizes patterns regarding which actions will achieve which goals in which contexts, right? So you have a pattern recognition system, and it has goals among other dynamics, and it's trying to learn, it's trying to recognize patterns of how to achieve what goals and what situations. And, you know, babies do that, right? Babies are recognizing patterns in the world around them all the time, and they have some goals, like they want to get some milk,
Starting point is 00:15:17 some food, they want to run around, and they try to figure out what patterns of activity will let them achieve their goals in what in what situations and then where I mean where deep learning comes in is the world we live in seems to be made largely of hierarchically composed patterns where you have patterns that build up in the more complex ones build up into more complex ones I mean just like physics builds into chemistry builds into biology builds into sociology builds into sociology so we have hierarchically composed patterns which means if you have a learning engine that is trying to recognize patterns in a hierarchy, it may well succeed because our world seems to be built that way. Now, it happens that most of the AIs out there in the world now are able to recognize
Starting point is 00:16:08 patterns in a very narrowly defined context and to achieve only a very narrow set of goals. Like, say, the original AlphaGo could recognize patterns in Go games and it could achieve the goal of winning a go game, right? And that was it. Now, Alpha Zero was a step beyond that. These programs are all by Google Deep Mind, which is one of the more interesting AI organizations out there. Alpha Zero went beyond that
Starting point is 00:16:37 because it can play a lot of different kinds of board games. So it can recognize patterns in a broader scope of environments in many, many different types of board games, and it can achieve more types of goals because the ways of winning chess or Go or Shogi or whatever are different, right? And still not nearly as general as a human being, though, because we can not only play board games,
Starting point is 00:16:59 but we can recognize patterns in a huge number of other kinds of environments, and we can achieve many, many different types of goals. You know, like we can prove math theorems, we can blow people up, we can chase girls, we can make art, we can try to save starving kids. There's a lot of goals we can work toward in a fairly rich collection of environments,
Starting point is 00:17:19 but we're still not infinitely general, You could imagine a mind that could recognize patterns in 407 dimensional space. We're very bad at that, right? We're much better at like two, three or four dimensions. So we're still somewhat restricted. We're good at recognizing patterns in some kinds of environments and achieving some kinds of goals, better than alpha zero or existing AI programs. But you could imagine some kind of mind that could recognize patterns in like a space of any
Starting point is 00:17:50 dimensions and in things that just look like noise to human beings and that could achieve goals that humans can't even begin to begin to understand so I think you know totally general intelligence that could recognize any kind of pattern in any kind of world and could figure out how to achieve any kind of goal by recognizing patterns of how to achieve that goal that's probably not achievable in this physical universe like totally general intelligence, but we're much more general than any existing AI program. Each of us can deal with a lot of different problems, and if you give us something totally new
Starting point is 00:18:30 to deal with, like the internet didn't exist when I was born, let alone when my DNA evolved. It didn't exist when I went to school, but I, like everyone else, was able to adapt to deal with this new thing, right? We don't yet have AIs that can transfer the knowledge and adapt to deal with some very new type of thing that they weren't programmed or trained for, right? And I think we will, but we're not there yet. So I think now the AI field is starting to begin a transition from narrow AIs that do highly specific things, recognize patterns and achieve goals in very specific domains, toward more general AIs that can just deal with a broader scope of knowledge and a broader
Starting point is 00:19:17 variety of goals and can transfer what they've learned so far to very different conditions. And I mean, this will be really important. You see that with like self-driving cars now are crashing into people because they're seeing situations that weren't in their training data. I mean, that's a failure to generalize, right? And then in financial markets, when you have what's called a regime change, oh, suddenly the market's acting totally different than it was acting before. Well, again, current quantitative financial prediction systems and risk management systems,
Starting point is 00:19:52 they failed to generalize, right? They were trained on previous market regimes. When you give them a new market regime, you know, they're still acting on their previous knowledge. Now, of course, most people can't deal with a new market regime either, but at least foundationally, we do have the ability to, like, go back to basics and deal with a radically new situation. And that's a big challenge facing the AI field in the next phase, which I think we're going to meet,
Starting point is 00:20:27 but there's still some research challenges there. That's interesting. I've never considered that way that, you know, I guess that humans and carbon-based beings are good at recognizing certain types of patterns. And I guess you could maybe differentiate. So, like, humans are good at recognizing certain types of patterns and acting on them. And that might be different from, like, for instance, the intelligence of a dolphin or another type of carbon-based being. And then artificial or computer-based intelligence or, like, silicon-based intelligence might be good at recognizing patterns.
Starting point is 00:21:06 And, like, you said, like, multiple hundreds of dimensions and figuring out, you know, actions. based on what it sees there are those patterns. Yeah, it gets kind of subtle if you think about it. Because, I mean, we evolved in this domain of like discrete solid objects, like bouncing off each other and so on, right? And this probably led us to ideas about causation. But if you're a dolphin in the water, you are seeing things flow around and blend into each other.
Starting point is 00:21:35 You're not seeing so many solid objects bouncing off each other. That probably leads to a quite different worldview. Now, also, like, we're, each of our minds is stuck in an individual body for, like, our entire life until we die, right? And, I mean, in firing our reincarnation and other, other freaky things, at least to a first degree of approximation, right? Now, if you're an AI that can port yourself between different bodies or occupy a hundred different bodies at a time, or, like, fork yourself and roll back to your last version before a traumatic experience, like, how does that change your whole? outlook, what kind of patterns you look for, what goals you bother to achieve, what risks you're willing to take, right? I mean, there's, yeah, there's so many ways that we're overfit to the exact environment
Starting point is 00:22:23 we evolved in and the problems that we're trying to solve. And then the other thing to realize is, like, realize is we're stuck without root access to our brains and bodies, which is pretty terrible, right? I mean, if you're in AI, you can have like root super user access to your own brain and body. If you think, well, I don't like the way I react in this situation. Just go in and fix the damn bug, right? But we can. If we want to fix bugs in ourselves, it's like years and years of, you know, meditation or therapy or reflection, rather than just go in and change the rogue piece of code.
Starting point is 00:23:00 Right. So, I mean, there's a lot of, a lot of things we take for good. granted now in terms of biases we have are restrictions that we have, which are not really intrinsic to being an intelligent mind. But, I mean, they're just particularities of how we happen to evolve out of apes in Africa, right? And I mean, that's, in general, this is sort of why I like a mathematical and conceptual view of things, because how things evolve. Now, there's some fundamental reality tool, but there's a lot of historical contingency. I mean, you see the same thing with exchange and money and so forth.
Starting point is 00:23:45 I mean, people take so many things for granted about how economies work, which aren't necessarily intrinsic to the nature of exchanging value in a community. They're just how things happen to evolve for quasi-random combination of reasons. This episode of Epicenter is brought to you by Cosmos, the Internet of Blockchain. We couldn't be more excited about the upcoming mainnet launch and to see so many projects already building on it. Blockchain technologies are evolving fast, and development shouldn't be one-size-fits-all. As a DAP developer, you need the tools that will allow your DAP to scale, grow, and evolve over time. The Cosmos SDK is a user-friendly, modular framework which allows you to customize your DAP to best suit your needs.
Starting point is 00:24:31 It's powered by tenement core, an advanced implementation of the BFT proof-of-stake protocol. Cosmos takes care of networking and consensus and allows you to focus on building your application in your language of choice. Ethereum smart contracts will be supported soon, and the SDK makes it simple for you to connect to other blockchains in the Cosmos network. If you have an idea for a DAP, and we'd like to learn more about the Cosmos SDK, or if you'd like to connect your existing app to Cosmos, visit cosmos.network slash Epicenter. For Epicenter listeners, the Cosmos team will reach out to answer your questions and help
Starting point is 00:25:03 you get started. We'd like to thank Cosmos for the support of Epicenter. So let's talk a bit about AI and data. And this is a topic that has been brought a lot in the conversation about AI and the fact that AI needs large quantities as a theta to train itself. And we kind of talked about this. So today data is very centralized. data is held and owned by a small number of very large companies.
Starting point is 00:25:37 Do you see this as a problem? Is there any type of repercussions that were unintended there? Or is there a better system that you think we could achieve? Yeah, I mean, the situation with the collection, storage, and use of data regarding human beings on the planet now is really pretty, pretty ridiculous. I mean, it's not necessarily entirely bad or malevolent. Some of it's really good and useful, but overall, the ownership and control of data from the various centers we have everywhere is it's centralized in a pretty bizarre way. I mean, some of it's good, of course.
Starting point is 00:26:26 Like Google Maps, it's pretty nice and it's collecting data on where everyone's driving to. So you can like see where there's a traffic jam, right? So these are very useful functions. And I don't really mind sharing location anonymously of where I'm driving with Google Maps so it can tell everyone else where there's a traffic jam, right? I mean, that seems like a fair exchange. But I mean, in the end, the agency regarding use of people's data is in a very confused state.
Starting point is 00:26:58 So, like, this phone I carry with me everywhere, right? There's a tremendous amount of data coming through this phone onto the internet. And it's all, in a sense, my data, right? It's data about what I'm talking about, who I'm talking to, like, where I am, what I'm taking pictures of. But all this data coming from me through this device that I bought and then I pay a subscription to connect to the internet each month. Like this data is going sort of haphazardly into various databases owned by various large corporations,
Starting point is 00:27:32 probably passing it along to various governments along the way. And then this data is then being used, you know, for some useful things, like telling me when there's a traffic jam, right? And then it's being used to advertise things to me, which doesn't matter to me much I've never clicked on an ad in my life, I think. But I mean, it's being used by big companies to make themselves money and increase their ability to manipulate people as a whole.
Starting point is 00:28:07 Like, even if I don't click on their ads by studying me along with everyone else or learning how to manipulate the minds of other people who do click on their ads and do read their fake news. So then, yeah, you've got to ask like, okay, this data that comes from me, through this device that I'm paying for and paying to connect to the internet.
Starting point is 00:28:24 Why isn't there an easy way for me to observe what this data is being used for and have some agency over what this data is being used for? Like if my data is being used to provide data to fuel someone's political campaign, I'd rather have it be used only for a candidate I agree with or something, right? And, you know, it's quite within our reach technologically
Starting point is 00:28:49 to put agency over use of our data, whether for AI or for basic statistics, in the hands of the human being who produces that data. On the other hand, it's not in accordance with the business model of the large corporations involved in the phone and the internet services behind it. It's not in the interest of the business model of these corporations to provide that agency to the user, except insofar as government-referes government regulators force them to. But of course, government regulators,
Starting point is 00:29:24 even when well-intentioned, which is only a fraction of the time, they can't keep up with the advances of technology. Now, this, I mean, this, right now is mostly an inconvenience and a sort of aesthetic and moral infelicity. But, I mean, as you move from their AI toward age, if it turns out that these stores of data, you know, are critical for giving some parties a boost toward AGI more so than others, right?
Starting point is 00:30:00 And then this hoarding of data could actually have a more critical importance. And in principle, blockchain and related technologies give away to circumvent these issues by putting each individual's data in some, you know, online repository or distributors, decentralized repository, which is encrypted by their private key, and then giving that individual agency over how the data is used, and then there are fancy tools like homomorphic encryption and multi-party computation, which can be used to, you know, let a person give certain aspects of their data to certain other parties to use for certain things without giving all over the way. So in theory, the blockchain-based decentralized ecosystem provides the
Starting point is 00:30:49 technical tools and the sort of cultural oomph to solve these problems. And on the other hand, the centralized ecosystem underlying, you know, big data and mobile phones and computers and embedded devices has multiple trillion dollar companies pushing things forward. So there's a, the decentralized world has the right tools, but a big challenge on their answer. I'd love to speak a little bit about the concept of AI safety. And just to take a step back here, I guess, like, you know, what are some of the fear scenarios here? So, so like, let's say fear scenario today is, okay, AI is replacing all of these jobs. So people become, you know,
Starting point is 00:31:42 unemployed. Maybe it leads to accumulation of resources with a few people, more and more. And and you have this like extreme inequality, right? Like that's like one fear scenario about AI. Maybe a different one is that then this AI starts to have its own objectives becomes more and more powerful, gets more and more resources and its objectives. Maybe it's hostile to humans or maybe it's like indifferent to humans. And so you have these potentially like bad outcomes, right? And that maybe extreme inequality, maybe you just have like human beings
Starting point is 00:32:18 becoming a kind of, you know, inferior species being exploited. And then, you know, there's this, this field, right? Like, this idea of AI safety. Like, what are your thoughts on it? Do you think this is an important field? Do you think efforts around AI safety are needed? People are certainly right to be thinking about AI safety and really about the impact and implications
Starting point is 00:32:48 of AI for the advance of technology and the growth of humanity in general. Because I mean, looking at AI separately from politics and from all the other tech connected with AI probably doesn't make sense. So people are certainly right to be thinking and worrying about it. Now, whether the things that people will do about it will have a positive or negative impact is a different question, right? Like, I mean, bioethics is a somewhat similar thing. And in general, it's easy to agree we should be thinking somewhat about the ethics of, you know,
Starting point is 00:33:28 genetic engineering and biohacking and so forth. I don't want people to create, like, weird, say, artificial babies that have like a hypertrophied pain cortex. So they're just suffering and screaming with a billion times the level of suffering. any normal human can have, right? So I mean, clearly there are some things that as a society, we just don't want people to bioengineer because they're just plain old nasty and you're just creating suffering.
Starting point is 00:33:59 On the other hand, in practice, the role of most bioethicists seems to be just to say no genetic engineering is bad. Don't make crisper babies. Don't upgrade your intelligence, right? So while in theory, yes, there are things, that are just morally bad to do by essentially any human standard, and we want to reflect on what to actually do and what not to do.
Starting point is 00:34:25 Not all possible things should be done. On the other hand, in practice, bioethics seems very one-sidedly inclined to just push against advancing of humanity in new directions and to push against reduction of suffering in favor of maintenance of the status quo. And I would say most people who talk a lot about AI safety are not really thinking about how to maximize the odds of a beneficial outcome for humanity all things considered. They more are thinking like how do we slow down AI development because we don't understand it and we're scared about it.
Starting point is 00:35:11 So I found myself disagreeing with almost everyone who's putting themselves out there as an AI safety pundit. But that doesn't mean, I don't think AI safety is important. Like I don't want the Terminator to be roaming the streets. I mean, I have four kids. I don't want an AI to be turning them into fuel or something, right? So let's talk then about, I think this ties into Hansen Robotics quite well because with regards to Hansen Robotics and you guys have built this. this robot named Sophia that I'm sure most of our listeners have seen at least once on the
Starting point is 00:35:47 internet because she's had quite a few media appearances. She's been on Jimmy Fallon and then, I think it was a bunch of different conferences. What's the purpose of this robot and how does it maybe achieve? Yeah, I think Sophia indeed was partly created and envisioned as sort of an ambassador ambassador of AI love and compassion. And I think
Starting point is 00:36:18 that's been interesting to see because David Hanson, who's a good friend of mine, I mean, he, I've known him for a long time, and I came to Hong Kong where I'm living now in 2011,
Starting point is 00:36:31 and he visited me here once, and I ended up convincing him to come here and move his company here and introducing him to some folks who helped inject funding. into his company here. So we've been talking about these things a long time. And I think what's interesting is David is really a warm, loving, good-hearted person. He wanted to create a robot that would emanate love and compassion, make people love it, so that it would,
Starting point is 00:36:55 you know, build a positive relation between humans and robots, like proactively, even before we have human-level AGI, so that as AIs and robots get more and more generally intelligent, that positive relationship is there. On the other hand, David, he's an artist and as such he can't help himself from poking people and
Starting point is 00:37:19 provoking controversy a little bit and making things a little bit creepy sometimes just because he thinks that that looks coolest. So I mean, I would say Sophia and all the Hansen robots are
Starting point is 00:37:34 driven by David's desire to build a sort of compassionate loving bond between human and AI and the robot and also at some level driven by David's semi-conscious
Starting point is 00:37:50 artistic desire to poke at people a little bit and make them a little uncomfortable, right? And these come together in an interesting way and I think that's good because you know, my emotional orientation is optimistic and
Starting point is 00:38:05 and positive. So, I mean, my intuition and feeling is that the technological singularity is going to come out awesome and closer to utopic than dystopic. But I also think there's a fundamental uncertainty to all this. So people are certainly justified to feel a little bit uncomfortable and confused. Like in the end, none of us knows what's going to happen. We're on the verge of creating machines that are, you know, 10, 12, 100, a billion times more intelligent and capable than we are.
Starting point is 00:38:39 And we'd be idiotic to believe we could predict in detail how this is going to come out. I mean, I find this irreducible uncertainty, beautiful and exciting. And I see it as what humanity has been doing since the beginning. Like, this is why we're less boring than cows and sheep, right? I mean, we decided not to remain monkeys. And we invented language and fire and wheels and machines. and money and Bitcoin and AI and AGI.
Starting point is 00:39:11 I mean, that's the trajectory we're on. We're revolutionizing ourselves over and over, and we never know what's going to happen next, right? And that's part of the essence of what it is to be human. And I think David, he bakes some of that into Sophia as an artist, along with the love and compassion, which is quite cool. Yeah, yeah. And so probably many listeners have seen the TV show Silicon Valley.
Starting point is 00:39:41 So there is this kind of basically inspired by you and by Sophia, the part there basically. You know, someone meant to be you is kind of playing this role. But let's move to blockchain now. When did you get interested in blockchain? And why did you think that blockchain had, you know, kind of relevance to, the future course of AI. So I've been interested in crypto for a long
Starting point is 00:40:13 time, like since the early 90s when I was doing math with finite fields and cryptography tech. And that seemed like it could potentially be important just politically in terms of stopping governments from having like the ability to spy on everyone's information
Starting point is 00:40:29 and keep it uniquely for themselves. Bitcoin, I didn't didn't like because it just because proof of work annoyed me it just heats up the environment and waste energy unnecessarily so I didn't get involved in that when Ethereum came out that's the first thing where I thought well this this is actually cool like it did it did use proof of work but you could see there was a will on the path to going beyond that and then you had solidity I mean you have a scripting language which basically lets you create this you
Starting point is 00:41:03 know, secure, encrypted, decentralized world computer. And I thought I thought Ethereum was a vision in the right direction and it was a reasonable software tool set, although obviously immature at first and not that mature still. So once Ethereum came out, I started really thinking, like, how do we use this to create like a decentralized global AI network? Because in 2001, I published a book called Creating Internet Intelligence, which envisioned a decentralized global network of AI is coming together as a society of mind. Before that, in 95, I posted some web pages
Starting point is 00:41:48 claiming I was going to run for U.S. president on the decentralization party platform, which I ended up not doing, because I realized in time with a terrible job it would be to be president anyway. But I mean, these ideas were interesting me for a long time, both decentralized control politically, because I always had a sort of anarcho-socialist bent,
Starting point is 00:42:09 and then the idea of making a decentralized global AI network, like Marvin Minsky's Society of Minds, but an economy of minds where the AIs are paying each other for work, and there's collective intelligence coming out of the whole network beyond the intelligence and the parts. But Ethereum seemed like a critical step forward toward having a tool set that would let you do this. And so then as soon as Ethereum was there,
Starting point is 00:42:38 you had the idea of Dow's decentralized autonomous organizations, which, again, they'd been spelled out in science fiction, like in Charlie Stros' book Accelerando and a bunch of others, but with a solidity programming language, like, wow, you could script a Tao in a short script, right? That's similar feeling to how, when I first learned, Java in like 1995, it's like, wow, you can create a web page or send an email with this much code. That's power, right? Solidity was like that. You could create a decentralized corporation
Starting point is 00:43:10 and just a little bit of code. It's not the perfect language, just like Java wasn't. But, I mean, it really opened the door. And once I saw how Ethereum worked, I started thinking, well, how do we put this together with, for example, OpenCog, which is my open source AI platform aimed at general intelligence or, you know, distributed neural networks or genetic algorithms or whatever other type of AI. It seemed clear you could use Ethereum as a basis for connecting together many different AI nodes into some sort of decentralized AI mind. And then this, logically, this should be able to kick the asses of Google, Amazon, Tencent, and the IBM and all these big companies by, you know, the power of decentralized community. And then when I met
Starting point is 00:44:01 Simone Giacomelli, who was later to co-found SingularityNet with me, and he had a blockchain development team in Italy, and he'd been helping out a host of different blockchain projects. So when I met Simone, who was really conversant with the blockchain world, both technically and on the business level, then we sort of put our heads together. And we, we like roughed out what became the singularity net design and then started moving toward the initial token sale. And then David Hansen already was a close friend. I mean, he, he saw the vision immediately. Our first meetings on this were in the Hansen Robotics office in Hong Kong. And David saw this as a way to get like a decentralized global robot mind cloud behind his
Starting point is 00:44:49 robots. Because you always knew the intelligence isn't going to be in Sophia, right? I mean, some is about seeing and moving, but the cognitive parts, the long-term knowledge, are going to be in the cloud, but what cloud, right? Do you have a million robots around the world and all the intelligence is running an Amazon's cloud or it's using like Microsoft Azure API? Or do you have like a decentralized mind cloud
Starting point is 00:45:10 that's owned and controlled by all the people who are buying these robots, right? So that was, David was seeing it as a robot mind cloud, but it was really the same thing that Simone and I were seeing with a decentralized blockchain. based AI mind. Okay, okay. So would it be fair to kind of characterize this as, you know, you see this trajectory or you see this AI coming, but then the question is, yeah, where do those
Starting point is 00:45:37 ayes coordinate, where do they share information, what kind of substrate do they run on? And of course, if you look at it today, it will be mostly controlled by companies like Google and Facebook. and then with something like SingularityNet, there could be kind of an open, decentralized, transparent, accessible, democratic platform where, you know, AIs could coordinate, AIs could share data, AIs could evolve. That's kind of the division. Yeah, that's right. So, I mean, as I've said before, what really excited me about the singularity net design and vision, is seeing that two different goals, which are very important, really converge into one.
Starting point is 00:46:25 So one goal is to make sort of a venue for many different AI components to join forces to make a collective AI mind where the hold is greater than the sum of the parts. So you may have one AI that uses our OpenCog algorithm to generalize and abstract and reason. You could have another AI that recognized patterns and DNA data, another AI that uses deep neural nets
Starting point is 00:46:48 recognize patterns in visual data, and you connect them all together into a mind that self-organizes and adapts. So the AGI could be in the whole network, not in any one particular node in the network. And then the other thing is, okay, but if we're going to have this network of AI, like who controls that network? Is it all sitting inside Google or Amazon? Or is it just more like the internet, right, which is not controlled by anyone? It's a network of networks, which is controlled by the different participants, right? And so it seemed you could use blockchain to achieve both these goals, to make a network of AIs that's controlled by the participants in a sort of democratic, self-organizing an open way, and also make it so that the design
Starting point is 00:47:38 encourages the AIs to collaborate with each other and join federations with collective intelligence and so forth. And so this, yeah, of course, it's easier said than done. But I mean, we did the initial token sale for this December 2017. We're launching like the initial beta version of the platform, the end of this month, the end of February, after a simple alpha was launched December 2017. And then during 2019, post the February launch of the beta,
Starting point is 00:48:14 we're going to add more and more. and more features to the network, as well as adding more and more of our own AI into the network. And there's a huge, the biggest part of the struggle remains in the future because, I mean, we have a beta version of the platform. We have some nice AI we've put in there. But still, you know, our competitors are our trillion dollar companies with the humongous server farms. And you know, Amazon has 10,000 people working on Alexa, right? So to counteract that, we need not only a good design and smart AI, we need to attract a developer and user community,
Starting point is 00:48:54 which is even bigger and better than the armies of highly paid employees that these big tech companies have. And this is one of the reasons why I'm happy to talk to you guys and your audience, because getting a community crystallized around decentralized AI is absolutely critical to really making the decentralized AI vision happen. Hiring is stressful. Let's face it, it's a long process of sifting through resumes and interviewing candidates without any guarantee of quality.
Starting point is 00:49:31 But it doesn't have to be this way. Companies all over the place are experiencing a new way of hiring with TopTal. If you go to their trust pilot page, you'll see that of the hundreds of people that have left reviews, over 98% were four or five-star ratings, including one guy who wants to give his developer a bear hug. That says a lot. TopTal gets all this great feedback because they focus on their clients
Starting point is 00:49:51 and their top priority is quality. They only accept the top 3% of applicants, including highly skilled blockchain engineers. One of these engineers is Radek Ostrowski. Roddick has experience as a lead software engineer and data scientists for Sony and Expedia. Then he discovered blockchain and he became totally consumed with Ethereum. He worked as a consultant for the firm Start OnChane
Starting point is 00:50:13 and his time-locked app when the top quarter consensus Uport and identity blockchain hackathon. Then he expanded his reach through TopTal, he worked with a bunch of clients, on projects such as smart contract development, and a POC that leverages blockchain.
Starting point is 00:50:27 If you want to hire engineers like Roddock for your team, go to TopTal.com slash epicenter for a no-risk trial. A TopTile director of engineering will deliver your next hire in as fast as 48 hours, and you'll get $1,000 credit when you decide to hire.
Starting point is 00:50:41 We'd like to thank TopTal, for their supportive epicenter. Let's go a little bit in-depth on, you know, Singularity Net and what that looks like. So can you speak about, you know, what are the different kind of components of the system? And you mentioned, you know, developers getting involved. Let's say now there is some AI algorithm developer. Like, how would they, how would an interaction with SingularityNet look like? Well, if you have an AI algorithm integrating it with SingularityNet is actually,
Starting point is 00:51:13 not especially difficult. I mean, it's a container-based system like most cloud systems now. So you put your AI in a Docker container or LXC container, and then there's a simple API to integrate it with, which then lets your AI accept payment for services in our AGI cryptographic token,
Starting point is 00:51:38 and then announce what API it wants to use to get data and, and queries, and then it can give responses in JSON or whatever API it wants. So it's really just, it's a system of containers, and then there's a payment system using a token. And I mean, for cases where an AI outsources work to another AI, which outsources work to another AI, there's a, I mean, there's a multi-party escrow framework on the back end, and there's a system that allows a lot of AI to AI transactions to accomplish.
Starting point is 00:52:13 occur off the blockchain for speed purposes. But all that's really behind the scenes. I mean, from the point of view of an AI developer, it's really pretty simple to put your AI in a container and take, I mean, 15 minutes to two or three hours to integrate with the SingularityNet wrappers. So I get that, right? So I put my algorithm into Docker container,
Starting point is 00:52:40 kind of make it accessible through SingularityNet. Let's say now I'm on the other hand, somebody, I have a bunch of data. I would love to get a better understanding of maybe what's actionable, what it means. So could I think go and basically say, you know, kind of hire the services of these AI algorithms to like get me results? Yeah. So I think the decentralized protocol could actually be used by anyone. I mean, we use behind the scenes a component called drizzle, which allows like decentralized search of any network of Ethereum nodes.
Starting point is 00:53:21 So, I mean, you could, if you are a reasonable scriptor, I mean, you could just put out script your own query to go search the whole network and find any AI that broadcasts that's able to do the kind of thing that you want. Now, on the other hand, we're making it easier than that. So along with the beta, we're launching to, just a marketplace user interface, which is a website, and you can go to that website, and you can see what AI services are listed and what sorts of things they do,
Starting point is 00:53:53 and you see their addresses and so forth. So, I mean, that's right now, in practice, that's a bit centralized, right? Because, I mean, we make this web interface, which lists a bunch of AI on there. And, I mean, we are legally liable for what we list there, so we have to do some vetting of what we allow on there, just like the Google Play Store does or something. On the other hand, the underlying protocol is completely decentralized and open. So, I mean, for example, if, like, we're incorporated Singularity Net Foundation,
Starting point is 00:54:33 which is building the Singularity Net network now, is incorporated in the Netherlands. So suppose that Netherlands law, said we weren't allowed to list on our user interface, you know, an AI based in Iran or North Korea or something. Then we'd have to take that off our interface. On the other hand, the decentralized network is whatever it is, right? So someone in Iran can build another interface,
Starting point is 00:54:58 which is like an interface to all the Iranian and North Korean AI nodes on the network or something, right? So this is the beauty of this architecture. You have this decentralized protocol. which is controlled by no one, and anyone can put an AI online, and then it just announces it's there to the other AIs in that network, and then it can be found by decentralized peer-to-peer interaction. So that's there, which gives a lot of robustness to it.
Starting point is 00:55:27 On the other hand, for ease of use, we're putting up a simple website, which just lists the AIs that are on there, which then can be interacted with from a customer's view, just like you're getting AI as a service from any other directory or somebody's website. Now, the beta still has some limitations in the sense that we accept payment in the beta only in our ATI token, which is an ERC20 token. And one of the things we're going to do in the months after the beta is integrate the third-party
Starting point is 00:56:02 no fiat to crypto payment system because of course most companies who want to use AI inside their website or their product not from the crypto space they don't want to deal with crypto
Starting point is 00:56:18 wallets and so forth at this point but this this isn't a really big obstacle it's just something we hadn't done yet it's more a regulatory thing than the hard technical problem So you mentioned the AI, H-E-I token.
Starting point is 00:56:35 So what's the role of the token? Well, the token is used by AIs to pay other AIs for services that they provide. But having our own token economy lets us nudge the incentive mechanisms in an interesting way. So as well as using it for payment of one AI by another AI, we also will issue token bounties as rewards for people who contribute AIs that are requested by the community. And then we will implement later this year a curation market where if I want to rate your AI as good, one way I can do. do that is to stake some token on your AI. And then if your AI comes out to be rated good by a lot, diversity of other people, I'll get some reward. Whereas if your AI turns out to be horrible, then I will lose some of what I stake. So having our own token, it's both an efficient,
Starting point is 00:57:49 a secure and private way to do transactions. And it lets us do things with, you know, bounties for development and staking and curation markets, which I think can sculpt and guide the economy of AIs. And this is quite important because there's something in AI in Cognitive Science called the Assignment of Credit Problem, which is when you have a complex network of agents cooperating to do some function. I mean, how do you ensure that the agents like deepen the bowels of the network that indirectly helped achieve the function are actually getting rewarded, right? And the human brain somehow does this, right? Like, if you do something that gets you food or sex or money or intellectual satisfaction, whatever is good, you know, the neurons
Starting point is 00:58:40 that moved your arms and legs don't get all the reward, right? There's a reward that goes to the neurons deep in your brain that helped you get whatever those goodies were. The U.S. economy, for example, doesn't do so good a job of assigning credit internally, which is why bankers make so much more money than programmers or kindergarten teachers or artists or something, right? And arguably, the Bitcoin and Ethereum economies, although they're really cool in some ways, I mean, there's a strong tendency toward like oligopoly and oligarchy in these economies and they don't necessarily do a brilliant job of assigning credit to genuine value either. So by making our own tokenomic economy and sculpting the reward system in it, we hope to make
Starting point is 00:59:32 the economy of AIs operate better than other existing economies so that the AI is really contributing most to the overall network and its intelligence and the value it delivers. whereas the AI is contributing most to the overall network are actually getting rewarded significantly. And this is a hard problem where economic design matches cognitive science, right? These are fairly subtle things. Yeah, I want to ask you this.
Starting point is 01:00:03 So if you take something like, I'm not sure you're familiar with the system, the blockchain Trubit. So it's like a distributed computation blockchain. And distributed computation systems have been around for years. And more recently, people have embedded them with blockchain systems so that you can have this reward mechanism. And so with Trubit, you have these actors of the network
Starting point is 01:00:29 who watch for who can potentially validate or verify the computations. And so therefore, there's an incentive to those providing the computations to provide correct computations because there's a possibility of them getting slashed if they don't provide accurate computations. Now this is for computations that are somewhat trivial to achieve with even general purpose computers. But with AI, if I send some task to an AI
Starting point is 01:00:59 and it returns a result or it returns some sort of data set, how can I as a user or even other users of the network verify that. And also, I think with AI, there might even, with general intelligence becoming closer to reality, you know, AIs could have their own kind of subjective bias, perhaps. And so interpretation might be different between one
Starting point is 01:01:29 an AI and another. How do you test for that and how do you verify that the result in an AI is providing is actually accurate? Yeah, I mean, there's clearly no, no general solution to that problem, just as there isn't among humans, right? Because the AIs are going to be doing so many different types of things. I mean, you could have an AI that's proving math theorems or coming up with science hypotheses to help with biomedicine or predicting the stock market or something, right? So then it's like, is your stock prediction AI giving subtly
Starting point is 01:02:10 biased predictions that it's then using to make money by trading itself in the background against what you traded or something. I mean, there's a lot of subtleties that could come up and they're going to be different for the different kinds of AI that you're doing. So I think that if you're doing a specific type of computation or a specific type of problem, then you could come up with like a formalistic solution for this, right? Like if you have an AI that's generating programs according to specs, you can do some like formal software verification to see that the software actually performs according to spec.
Starting point is 01:02:52 And if you're, if you have an AI that's analyzing DNA data, I mean, if you, if you have your own human DNA data, you can do out of sample testing on, on that data to see if it's valid. But there's really going to be no general purpose. solution and SingularityNet is really a general purpose network. So we, I mean, we put a bunch of work into designing a reputation and rating system, which is sophisticated and hard to game. And this is not part of the beta, but it will be, it will be rolled out, rolled out later in 2019. But I think that that's been like a holy grail for every online marketplace, right?
Starting point is 01:03:35 and you really need to get that right, because in the end, verification that things are accurate, unbiased or not too biased, or inappropriately biased. I mean, this is really hard in its domain specific. And ultimately,
Starting point is 01:03:55 each person isn't doing that on their own, right? It's like if, if, if, if, well, that's the kind of the point of, of delegating tasks to an AI. Yeah, but we can't do them on our own. But which AI do you delegate it to, right? So, like, if I'm, if I'm want to verify that someone's AI for analyzing DNA data is accurate, and then not many of us are going to, you know, write the code or even run the code for that ourselves,
Starting point is 01:04:27 we're going to go to some service that does that. And then which service do we trust? Like is it the SingularityNet Foundation certified service? Well, then that's like a centralized elite. Or do you have a variety of competing services out there? Then you choose which one. But then it comes down to reputation systems again because then you're choosing the one that you think has the highest reputation,
Starting point is 01:04:54 maybe because it comes from Harvard University or from the NSA or like, who do you trust, right? So, I mean, ultimately, even when there's a formal, mathematical solution there, you're placing trust in someone, right? Now, if something is simple and generic enough, you could bake verification into the protocol, right? I mean, as is done with cryptographic checking. But I think checking if an AI is correct or not is just, it's not going to be that simple. It's going to be a variety of different algorithms for different checking different types of problems in different domains.
Starting point is 01:05:33 And then you need reputation system to be able to place, to know which verification checker to trust. And then people will try to game that reputation system by giving a high rating to bogus, like truth verification checkers. So you need a machine learning-based reputation police to try to stomp out people gaming the reputation system. And then you have to believe the machine learning-based reputation police itself isn't corrupt, right? So, I mean, this is the world that we're in.
Starting point is 01:06:13 But on the other hand, like the real world economy isn't all that clean and safe either, right? which major government is not corrupt in some serious way. So I don't think like the AI and blockchain economy is not creating this problem. This is a problem of human beings being assholes, right? And this is just, it manifests itself in everything that human beings do. Okay, so this is great. So I think it very much ties into another thing that, you know, I really look forward to addressing here a little bit. So when you spoke about kind of the division mission of singularity net in a different interview I heard,
Starting point is 01:07:01 you know, you mentioned that singular narrative has these two objectives. First, the objective of maximizing intelligence, the other one, the subjective of kind of pursued the maximum benefit for all beings. I'm curious. So, you know, we spoke a little bit now about, okay, how do you evaluate an AI? And, you know, how can you check with it? What they're doing is correct. It's in your interest.
Starting point is 01:07:26 So now I understand also the concept of creating this efficient marketplace for AI. And so now I as a normal small business owner, I can kind of use AI and maybe have something almost as good as Google and I don't trust them fully. Or maybe something better than Google right down the line. But like, how can you, how can you make sure that? this system is going to end up being a system that pursues this benefit for all beings and that kind of embodies this value? Well, we can't make sure of anything. And I would say if we don't create singularity net,
Starting point is 01:08:10 if I decide to go do something more relaxing with my life instead, then how do you know for sure that, you know, Xi Jinping, Vladimir Putin, Donald Trump, Google, IBM, Tencent, all the companies out there, how are you sure that those guys are going to create an AI which is for broad human benefit? And if everyone stops making AI, how do you guarantee that no one's going to send synthetic viruses out there to poison everyone to death, right? Or that the proliferation of nuclear material in Eastern Europe isn't going to be used to blow everybody up? So, I mean, I think we're not at a point in human history where there's a great amount of certainty.
Starting point is 01:08:57 There's probably even more uncertainty than in the past. And there's always been a lot. But really, the question to ask is: on average, are we better off creating a decentralized, benefit-oriented AI platform like SingularityNET? Or are we better off not having that there, and having all the other shit going on in the world, right? I mean, that's the question to ask.
Starting point is 01:09:27 Yeah, I mean, that's a fair point. But then, I mean, that's just sort of rephrasing the question a little bit. So then I guess my question is, what are you doing such that this objective and this value are embodied in the platform? Yeah. I mean, there are two parts to that. So one part is in sort of the tokenomics of the SingularityNET ecosystem. The other part is in the AIs that the SingularityNET Foundation itself
Starting point is 01:09:58 are building and putting into the network. So, I mean, in terms of the tokenomics, I mean, there's curation markets and an intelligent reputation system, which is designed so that at least the agents that are contributing value to the network are getting rewarded proportionally to that instead of having sort of game-theoretic dynamics where a few agents will accumulate all wealth, which is what seems to be happening in Bitcoin and Ethereum and is what happens in most conventional economies
Starting point is 01:10:35 also. And then on top of that, a certain percentage of the tokens that were initially minted are earmarked to be spent on benefit tasks, as decided by the community, which can be things like health care, education, medicine, and so forth. So there's at least that nudge put in there to have a certain percentage of the tokens spent on things that are considered of broad benefit. I mean, this is much like what a government does when it spends some percentage of its wealth on social welfare, right? It's just that most projects don't wire that into their economic operation.
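The two tokenomic nudges just described, value-proportional rewards and an earmarked benefit pool, can be sketched as a toy model. This is purely illustrative: the 10% figure, the function names, and the agent names are all invented for the example and are not SingularityNET's actual parameters:

```python
BENEFIT_FRACTION = 0.10  # invented figure; the real earmark may differ


def split_mint(minted_tokens, benefit_fraction=BENEFIT_FRACTION):
    """Set aside a fixed fraction of newly minted tokens for
    community-voted benefit tasks (health care, education, etc.),
    analogous to a government's social-welfare budget."""
    benefit_pool = minted_tokens * benefit_fraction
    return minted_tokens - benefit_pool, benefit_pool


def distribute_rewards(contributions, reward_pool):
    """Pay each agent in proportion to the value it contributed,
    rather than letting a few agents accumulate all the wealth."""
    total = sum(contributions.values())
    return {agent: reward_pool * v / total for agent, v in contributions.items()}


general_pool, benefit_pool = split_mint(1_000_000)
rewards = distribute_rewards(
    {"agent_a": 6.0, "agent_b": 3.0, "agent_c": 1.0},
    reward_pool=general_pool,
)
```

The point of wiring this into the protocol, rather than leaving it to policy, is that the split and the proportional payout happen mechanically on every mint, not at anyone's discretion.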
Starting point is 01:11:18 But then the AIs that we are putting into the network ourselves are largely benefit-oriented. So with the Sophia robot, which we talked about, one thing we've been doing is using Sophia as a therapist and meditation assistant. So that's not solving all the problems of the world, but it's different than the Terminator, right? I mean, it's using a robot to kind of help people expand their consciousness. And we're working on applying AI that uses the OpenCog framework and is wrapped in SingularityNET to analyze DNA data of people living to 105 years or over, to figure out what makes them live so long and to try to figure out how to extend other people's lives. We're analyzing images of plants from China and Africa to try to diagnose the spread of crop disease in its early stages
Starting point is 01:12:13 using deep neural nets for image processing. So, of course, each of these things is a drop in the bucket compared to what we need to do to massively improve the state of humanity. But the hope is that by injecting these things into the network at an early stage, you're impacting the culture of the community, because ultimately this is about the community that you build around SingularityNET. So through curation rewards and benefit tokens
Starting point is 01:12:42 and having a bunch of positive, beneficial stuff happening in the network... And then our largest development office is in Ethiopia, in Addis Ababa, where we have 20-something developers working on SingularityNET. So we're trying to actively pull people from the developing world into development and use of the network.
Starting point is 01:13:07 So hopefully by all these things, we'll be nudging the community in a positive direction, which is really going to be the most important thing. Because, I mean, if we're successful with this, then five years from now, the work done by SingularityNet Foundation will be a relatively modest percentage of all the work being done to build the protocol on the network over time. And the AIs in the network will be mostly contributed by, you know, other random people, not by people paid by SingularityNet Foundation. But then we are seeding this community and we're seeding this culture.
Starting point is 01:13:46 You can see that in Linux, right? Linus Torvalds and Richard Stallman and their friends from the old days wrote a very small percentage of the code that's in Linux right now. But the culture of Linux is what it is because of how they started it. So we want to get beneficial motives and love and compassion and inclusiveness into the cultural DNA of the SingularityNET community. And then it will continue to be there in the code also. And this is a bit soft and fuzzy. It's not like a mathematical guarantee of beneficial activity.
Starting point is 01:14:24 But I think that's how things actually have to work. Because in the end, it's about the community of human. beings who are going to be developing this ongoing way. Well, that's really fascinating. And I think also the fact that you guys have actual people in Ethiopia working on problems in Ethiopia is really is great. And far removed from what a lot of people in the blockchain space are doing this. Yeah. Cardano is running a year-long education program where they're teaching 100 young Ethiopian programmers Haskell.
Starting point is 01:15:09 That's cool. The programming language. So I think, well, I've had this office in Ethiopia since, what, 2014, I guess, doing AI outsourcing before we shifted them to SingularityNet. But now Cardano's moved in there. And, yeah, there's a lot of tech projects throughout various African tech hubs now. So there's powerful forces of, you know, centralization and wealth concentration. But yet there's also the opposite and that peer-to-peer and positive and positive globalization happening. So it's a very interesting time where these two different forces are both surging forward in powerful ways.
Starting point is 01:15:54 Cool. So before we wrap up, I didn't want to ask you one last question. And this is, we kind of touched on this earlier when you were talking about AI's making predictions. So let's imagine a future now where, you know, a lot of the economy is run on blockchain systems. So you have, you know, powerful markets that exist exclusively on blockchains and organizations and companies are interacting with these markets on blockchains, doing business on these markets. And these markets, these markets, these markets, these markets, markets are run by Dow's. So there are governance mechanisms in place which allow the companies that themselves that are operating on these markets to participate also in the governance through staking. Okay, so the companies that use the markets also have stake on the markets and they can participate in governance decisions for like protocol updates or things like this. Now it seems like there would be an incentive at this point and even for something like prediction markets for these companies to have stake to essentially delegate their stake to an AI,
Starting point is 01:17:01 because then AI is going to make much better decisions on what types of governance or what types of proposals that they should be making in order to maximize the network itself and also sort of like maximize their profits long term. So it seems like there would be kind of a like a in actually equilibrium here where at some point, if one company starts using an AI to manage their governance or to make predictions, then other companies start using AIs to make predictions. And then when everybody's making predictions
Starting point is 01:17:37 or making governance decisions within AI, as we move closer to general AI, then it's like, then you just have AI's competing with AI's. And I guess this also extends more broadly out of the side of the box in the case, but how do you see? Who's in charge now? Who's in charge of the world now? Nobody's in charge, right? Which in some ways is good when you have presidents like Donald Trump out there, right? It's good that it's the whole
Starting point is 01:18:03 collective self-organizing dynamic that's in charge rather than any one person. And who's in charge of Bitcoin and Ethereum? Yeah, we don't, we don't actually know that, but it's clear it's concentrated in a small number of individuals and in, in, in, in, in, investment groups who are controlling these things. So, yeah, I think, you know, in the long term, it's inevitable that, you know, if AIs are a thousand times more intelligent than human beings and have molecular nanotechnology and so forth, it's inevitable they're going to have more physical power than humans, right? I mean, it doesn't mean they're going to control every little aspect of what humans do in their
Starting point is 01:18:48 lives, but they're going to have more oomph than we do, right? So I mean, in the long term, which may just be like decades from now, we're going to have two choices. One is like you wire into the network and become one with a super intelligent global brain, even if that means giving up many aspects of your legacy, humanity, or else, you know, you live in the people preserve, like the squirrels in the national park. And, you know, the squirrels in the park, they can fight over girlfriends and hunt for food and play and have fun. And people are not trying to regulate every aspect of their little squirrel existence, right? On the other hand, if they run out of the park, they might get rolled over by a truck, right? So, I mean, I think
Starting point is 01:19:34 if you have a superhuman AI that's tremendously more intelligent than us, either you join it, or you're going to remain living a happy human life, hopefully with a lot of abundance provided for you. But I mean, in the end, there's something much more powerful than you that does have some regulatory control when it needs to, which could be good also, right? Like if the squirrels die of some plague will come in and give them antibiotics, right? And the same way, if human society went too far awry, a super AI that loved us, it would let us go about our business, but if things went too far awry, it might come in and fix things. So, I mean, that's a long term. It's upload to the global brain or live in the people preserve, right? But I mean, in the medium term, it's going to be
Starting point is 01:20:22 really, really complicated. And as you say, there's going to be a gradual transition from human decision making to AI decision making. But given how profoundly fucked so much of our political and corporate ecosystem is now, I see that as a great opportunity to improve things, right? I mean, there's a lot of, if the AI is written right, it's going to do a lot better than the individual humans and institutions that are controlling things now. So then it really comes down to, you know, creating the AIs that are going to be the decision support systems for the people controlling most of the world's, most of the world's resources. So I'm glad that I can always go back to the human reserve, nature reserve, wherever that is,
Starting point is 01:21:11 I can chase whatever, whatever things human chase these things. any encumbrance. Yeah, yeah, we're setting aside a region in southern Ethiopia for this purpose. So, yeah, I'll show it to you some. Or maybe Antarctica when the world, when the world. Yeah, yeah, after some global warming. So before we wrap up, I just want to ask you, you know, how can people get involved in Singularity Net and where they'll learn more?
Starting point is 01:21:38 Yeah, absolutely. So I'm in the center of it all, go to the website, singularity net. dot io and i mean there you can find information on how to download and play with the with the beta if you're if you're a developer or we have a blog which has updates on our research pretty frequently we have a telegram discussion group which has a some percentage of interesting things on it and some other things and so i i think uh lots of ways to get involved with the with the community and uh you know i'm still as well as just doing some actual work going around and speaking at various conferences, so I can meet
Starting point is 01:22:20 some of you guys listening there. I think in middle of March, we're having Token 2049 conference here in Hong Kong. So if anyone's there, we can hang out. But yeah, this is all, in the end, while we're talking about building superhuman AI, getting there is all about the human community, right? So we need people to be involved in many, many different ways. So, yeah, join our communities online. And now we're happy to talk to you about what you can do to help out.
Starting point is 01:22:52 Great. Well, Ben, thank you so much for joining us. It was a real pleasure talking and diving deep in this really fascinating topic that we don't really always get a much chance to discuss here on the podcast. So happy to have you on. Good fun, yeah. Thank you for joining us on this week's episode. We release new episodes every week.
Starting point is 01:23:12 You can find and subscribe to the show on iTunes, Spotify, YouTube, SoundCloud, or wherever you listen to podcasts. And if you have a Google Home or Alexa device, you can tell it to listen to the latest episode of the Epicenter podcast. Go to epicenter.tv slash subscribe for a full list of places where you can watch and listen. And while you're there, be sure to sign up for the newsletter, so you get new episodes in your inbox as they're released. If you want to interact with us, the guests or other podcast listeners,
Starting point is 01:23:37 you can follow us on Twitter. and please leave us a review on iTunes. It helps people find the show, and we're always happy to read them. So thanks so much, and we look forward to being back next week.
