Endgame with Gita Wirjawan - Dr. Yasantha: AI vs AGI & Homo Sapiens’ Next Chapter

Episode Date: June 21, 2023

Step into the world of AI with Yasantha Rajakarunanayake, an expert at the forefront of this technological revolution, as he answers our deepest curiosities about Artificial Intelligence and sheds light on its transformative power. #Endgame #GitaWirjawan #ArtificialIntelligence

About the guest: Yasantha Rajakarunanayake is a highly experienced senior technologist and scientist in the Bay Area high-tech industry, with over 30 years of experience. He is renowned for his expertise in Artificial Intelligence (AI) and has a portfolio of 131 granted and pending patents covering areas like AI, WiFi, digital satellite communication, video, and DSL.

About the host: Gita Wirjawan is an Indonesian educator, entrepreneur, and currently a visiting scholar at Stanford University's Shorenstein Asia-Pacific Research Center (APARC).

-----------------
Episode Notes: https://endgame.id/eps143notes
-----------------
SGPP Indonesia Master of Public Policy: admissions@sgpp.ac.id | admissions.sgpp.ac.id | wa.me/628111522504

Other "Endgame" episode playlists: International Guests | Wandering Scientists | The Take

Visit and subscribe: Youtube SGPP Indonesia | Youtube Visinema Pictures

Transcript
Starting point is 00:00:00 People who have AI are like the new colonialists. It's not by geographic boundary, but they will impose their will. And I think in the future people will fall in love with this AI, and many things will happen. Those are big questions that the next generation of philosophers will need to grapple with. What does it mean to be human in the presence of these simulated humans? Hi, friends and fellows, welcome to this special series of conversations involving personalities coming from a number of campuses, including Stanford University. The purpose of the series is really to unleash thought-provoking ideas that I think would be of
Starting point is 00:01:10 tremendous value to you. I want to thank you for your support so far, and welcome to the special series. Hi, friends. Today we're honored to have Yasantha Rajakarunanayake, a leading AI scientist in the Bay Area. Yasantha, it's a pleasure. Thank you so much for coming on to our show. Thank you, Gita. Thanks. I want to, as usual, start out by asking you about your growing up. You were born in Sri Lanka and made your way here to the U.S. How did it all start? Right. Yeah, I was born in Sri Lanka in the early 60s.
Starting point is 00:01:47 And, you know, from my childhood, I have been sort of a person who was attracted to mathematics, right? So I studied science and math in Sri Lanka, and at high school I got the highest marks, and I was very lucky to get a scholarship to go to Princeton University. Wow. Right. So it was a fully paid scholarship. And, you know, I remember thinking, at that time, I think Princeton was $15,000. And when I was filling out the financial aid form, I was so worried, you know,
Starting point is 00:02:28 because our family's net worth was less than $10,000, you know. And we were a pretty upper-middle-class family in Sri Lanka, right? We had our own home. We had a car, a VW bug. But still, you know, this was sort of a godsend. And I'm so lucky, and I appreciate so much that some foundation gave me this opportunity to study. Before you got to Princeton, who would have been more influential in your upbringing,
Starting point is 00:03:03 your mother or your father, in terms of making you the way you are? So I think I learned academics, staying within the box, from my mother. And going outside the box and taking risks from my father. So they were both very influential. They're both very smart. My mother was a vice principal of a school. And my father was an accountant who later worked in the Middle East. They were both pretty smart people, and both had a big influence on me.
Starting point is 00:03:35 Which is study at Princeton? At Princeton, I started studying electric engineering. But then people told me that. something you should be doing physics. Because in America, the smartest people do physics. In Sri Lanka, the people either want to become a doctor or an engineer, right? So they wanted me. So anyway, I did finish my electrical engineering and computer science at Princeton.
Starting point is 00:04:02 And then I was able to switch to applied physics at Caltech for grad school. Wow. Yeah. You know, your name was mentioned, quite surprisingly, by a famous person by the name of Jeff Bezos in one of his more famous interviews. Explain that episode. Yeah, Jeff Bezos was a classmate.
Starting point is 00:04:29 He was both sort of a dorm mate as well as a office. I mean, he was in the same department as I. So we took courses together. There were about 40 people. So I knew him quite well. I did Princeton in three years so we started Princeton together he took the four years
Starting point is 00:04:49 so after some time I actually became his T.A. By the last year I went to the department head and asked for I wanted to get a T.A. job because I undergraduate your T.A. So they allowed me to do that because I had taken the course and done well.
Starting point is 00:05:06 So, yeah, then I remember Jeff Bezos coming and asking me for a couple of points on, you know, the homework. You know, he's a perfectionist. So, yeah. But, so, you know, he knew me. And then this episode he was talking about, in 2017, I think, he essentially put that in his book.
Starting point is 00:05:28 He did come and talk to me about a math problem. And I was able to solve this math problem. Actually, I had completely forgotten about this episode until he brought it up, 35 years later. Right. So it's pretty amazing that I could have such an impact on him. Yeah. You know, yeah.
Starting point is 00:05:52 So when I came from Sri Lanka, I didn't have that many friends. I was not very well adapted. I was wearing a sarong in my dorm. And people were wondering, who is this guy, you know? I was about 19 years old. And, you know, I could speak English and all that well, but I couldn't completely grasp the American culture. Yeah. So Jeff came to my room and then, you know, he asked me, what about this math problem?
Starting point is 00:06:30 And then he relates: after spending, what, three hours, you couldn't solve it. Yeah, yeah. It's just that math is something where you have to have a mathematical intuition, you know. Sometimes you get a gut feel. So I think I had that for that particular problem. And then it was good.
Starting point is 00:06:49 It was something to do with the cosine. I think factorizing a cosine into infinite products or something like that. So, yeah. That was a watershed moment for him. Yes. Because he would have wanted to study theoretical physics, right? Right, right.
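For reference, the identity Yasantha seems to be half-recalling here is most likely the classical Euler infinite-product factorization of the cosine (the transcript does not pin down the exact problem, so this is offered only as context):

```latex
\cos x \;=\; \prod_{n=1}^{\infty}\left(1 - \frac{4x^{2}}{(2n-1)^{2}\pi^{2}}\right)
```

Each factor vanishes exactly at the zeros $x = \pm\tfrac{(2n-1)\pi}{2}$ of the cosine, which is what makes the product representation work.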
Starting point is 00:07:03 Yeah, I think that's what he says. Yeah. So I think I'm glad that he followed the path. Well, things turned out okay for him. Absolutely. Yes. Anyway. Yeah.
Starting point is 00:07:14 You're spending a lot of time on AI. Sure. But before you got here, you'd gotten 131 patents, right? Right. Which are some of the more meaningful ones? So I've had a wonderful career, and I think one of my success factors is that I was able to switch fields every few years.
Starting point is 00:07:45 So even as an undergraduate, I did electrical engineering and computer science, and then I went into applied physics. And I did lasers and some advanced optoelectronics at that time, in the late 80s. And then I went into industry. So I've been switching fields every five years.
Starting point is 00:08:06 What that allows you to do is learn the tricks of the trade in many different fields. And then as you progress in time, you get intuitions: how do they do it in electrical engineering? How do they do it in, you know, communication engineering, right? And so that has given me the skill to look at problems in a very original way and allowed me to have a large number of inventions
Starting point is 00:08:42 that became patents. Now, not all of them are U.S. patents. Some of them are Chinese and European, you know, because of international filings. But nevertheless, I'm very proud of my work in Wi-Fi. Wow. Because at the time... Which everybody uses nowadays. Can't live without.
Starting point is 00:08:59 Absolutely, yeah. So you were part of the team that developed Wi-Fi. Yeah. So I think that's one thing. Before that, when my son was younger, I was thinking, what do I tell him that I do? And at that time, it was satellite digital communications.
Starting point is 00:09:20 So I played an important role in, at that time, Dish Network and DirecTV, those kinds of things, to bring TV to the world. I was instrumental in early cable modems. And I also had a startup with a DSL modem. So I've grown up with the internet, essentially bringing high-speed internet to the masses. That was my mission and passion in the 90s, up until the 2000-2001 crash.
Starting point is 00:09:58 Right. Right. So after that, I decided to join a more mature company, Broadcom, and it did very well for me, actually. Yeah. It was stable, and I stayed 15 to 16 years. Yeah. So that's where I did all these patents.
Starting point is 00:10:16 Some of these patents are quite interesting. Basically, the way it works is that you think of an idea, and then you try to claim as broadly as you can: pretty much, you cover a large circle around your invention. And then you try to build fences around it, basically so other people can't enter. Right. That is how U.S. patents are done at the moment, you know. Having just one patent is not that useful, because somebody can find a way to go around it. Right.
Starting point is 00:10:51 You know, I heard some people have patents for things like double-clicking. But it's difficult to enforce such a thing, right? You'd have to say you need to have a double click, but you have to use your left hand, and, you know, many other conditions. And you have to invent all of that. Intuitively, this sort of defensive strategy of patenting... Yes. It seems to stifle creativity. Yeah, that's because the U.S. is a very litigious society.
Starting point is 00:11:24 Right. Everybody wants to sue everybody else. Right. You have to have a defense for every possible thing that a competitor might do. That's currently how U.S. patenting is done. I think, yeah, there is a case to be made that inventions should be made fully public. And it does become public after about 20 years, right? Yeah.
Starting point is 00:11:53 But I don't think it's a bad thing, because otherwise you have the situation with China, where they steal everything without any reward for the inventors or the company that spent the money. So I think that's a bad thing. So this is a compromise, I suppose. Explain the process of getting a patent. Well, getting a patent is quite straightforward. There is a webpage.
Starting point is 00:12:23 You pay 300 bucks and you can register. Register, yeah, it's the USPTO. How long does it take? Well, the total comes to about $3,000. Say I invented some new gadget, a new type of coffee maker or something.
Starting point is 00:12:40 Right. And you're an individual, you don't have a company associated. Then you can still get a patent with about, if you know how to write it in an intelligent way that the patent examiner can understand. and prove that your invention is unique and it's better than the state of the art, then I think you can, I think it's not that expensive, but $3,000 probably.
Starting point is 00:13:11 How long? About a year. Before you find out. Yeah, yeah. Before it's recognized. Yeah. Okay. Sometimes a little bit more, maybe 18 months.
Starting point is 00:13:21 But you get a date. I see. So the day that you open the application, that's your date. So if someone else invents something and finishes before your process is done, you still have the preceding date, right? So you own the patent for that. Yeah. Yeah. So it's quite interesting. Let's jump into your current field of AI, right? Explain the impact of AI. Oh, yeah. A number of things. I mean, we can talk about the social equation first, and we can talk about some of the other stuff.
Starting point is 00:13:56 Sure, I want to first introduce AI and what it is. So essentially, this is one of the hottest topics these days, both in the US as well as all over the world, right? Because AI is taking us by storm. And some people call this the fourth industrial revolution. The first one was obviously when the steam engine was invented. And then the second one was Edison, basically, the electricity. Electricity. That's when we were able to.
Starting point is 00:14:26 to get, essentially, our brawn, our muscle power, offloaded. Then the third one came with computers in the 60s, where we offloaded some of our computational load. And now with AI, that's the fourth revolution, where we can offload cognitive tasks and more white-collar jobs as well. Right. So it's no longer the blue-collar jobs alone. Humans are now freed up from both blue-collar and white-collar work, brain and brawn.
Starting point is 00:15:05 So I think it's a Renaissance moment. And I think, you know, especially last year. I think in December they released this chat GPT, which is a very large language model that's taken the whole world by storm. right? I think in one month they went to 100 million users and companies like Facebook took like many years to go to 100 million users. And I
Starting point is 00:15:33 think by now it definitely passed a billion, I'm sure. There's not a day that people around me are not using chat GPT on a daily basis. So it's quite interesting and what it means it's become a wonderful tool for
Starting point is 00:15:49 everybody, you know, students, teachers, executives, CEOs, you name it, right? Right. And I think, so there are some deep insights there. Why does it work? And then people are worried about artificial general intelligence. Yeah.
Starting point is 00:16:07 What's the difference between AI and AGI? AGI. So AI is when you're good at solving a narrow task of, say, playing chess. And you can beat all the humans. So IBM had that system, I think, about 20 years ago. Right. Where they had played with Casparov, right? Now, mind you, you know, Gary Casparov was the world champion.
Starting point is 00:16:37 And he played with six grandmasters, human grandmasters, and a whole bunch of people on the internet, and he's still won. Yeah. Okay. So you can imagine that the master of the field is way better than all. rest of the others, at least in that field, right? But the AI beat him. So be humans have no chance in games like chess.
Starting point is 00:17:03 They are basically, when AI plays chess, it's rocket science to us. We just watch, oh, wow. We have no idea what it's doing. But because we can't compete with it. Right. It compute so many moves, so fast. And so now one of the fears is that, well, so, there was a game of goal, right?
Starting point is 00:17:26 Then they've done this protein folding where they found the, yeah, I think about 200 million proteins basically found the three-dimensional structure, right? That was a very complex computational problem. So AI is able to do a massive task that humans or armies of humans can't do, right? but they are still narrow in the sense that you can't apply that expertise that it has in chess to go run a government or something or run a bank. Right. You know, or run a company.
Starting point is 00:18:06 Or run a company or trade on Wall Street. Right. Now, our fear is that, well, so what we now do is we give multiple silos of these things to the AI, including our language. Yeah. Right. So it's not very far from the day when, you know, you have an AI write a book that's better than Dostoevsky, for instance.
Starting point is 00:18:27 You know, I happen to think that Dostoevsky was one of the best novelist. Yeah, exactly, like the brothers Karamazov, right? Or Shakespeare, right? Yeah. As a poet. So right now you can have chat GP to generate tremendous amounts of prose. And, yeah. So I think they are creative.
Starting point is 00:18:49 It is just that they are trained on. entire human corpus of human written knowledge, including Wikipedia and all the books that are written in the last 300 years. So I think they do understand our linguistic structure really, very well. And so a lot of times they are able to mimic good English or any other language or good science. It just knows what to say, how to say it. So that's the state currently. What it can't do currently is that it doesn't have a model of the world. We haven't given it a three-dimensional view of the model,
Starting point is 00:19:39 a view of the world. Like, you know, you and I can see, here's a coffee cup, you know, here's a table. So the AI large language model can't tell if you ask whether is this thing higher than this or lower than that, it has no idea. It just knows what people say about it, you know? I see.
Starting point is 00:19:57 Right. So if you ask about Mount Everest, it will say it's the highest because it just knows that. Right. But you can't ask like, is it higher than something that it hasn't seen? Like, you know, like higher than a particular cloud or whatever, right? So the next step that has to happen in making AI more. intelligent and more useful is to...
Starting point is 00:20:19 And more AGI. More AGI is we have to go towards teaching it a robust model of the world. Now, you heard this saying that a picture is worth a thousand words. Okay, so now, if it knows all the words, that's only one thousandth of the human knowledge. Right. Basically, we need to show it all the pictures that we know also. Yeah. Then it'll, you know, because like a teenager can drive.
Starting point is 00:20:50 I can tell my, you know, 17-year-old, any kid in any country can, you know, get a driver's license and drive when they're 17 years old. And that's a very complex task. We have been trying to teach self-driving cars and others, you know, for a long time, for 10 years now. And it still can't do it because we don't have a good way to give the, that model of the world in a, in a, in a, precise way that it can understand and correlate. It's basically gone through a phase of hallucinating, right? And a large language model will basically continue to hallucinate.
Starting point is 00:21:32 So we just need to make sure that the hallucination process is done in a much more robust manner. What it is, it's a collective memory of all of the things that are said done done, is what's encoded in the large language model. You need to access that memory, and that's what we are doing, because it knows about the population of Russia, you know, it knows about Constantinople, it knows about Black Death, all these things. Right.
Starting point is 00:22:03 But if you just ask it, it might, just like when I talk to you, you know, if I don't give you enough of a prompt, you don't have a context to know why, because you may know the information. So I have to ask you a couple of times, hey, Gita, what about that aspect there? You were talking, you know, and then I can prompt you. Right.
Starting point is 00:22:24 So that's what we need to do now, okay? Because it has all this knowledge, and prompts are a way to access that knowledge in a particular way. In some ways, our brains do the same thing: when we sleep, we see dreams. Say you have a driver's test; then the day before, when you sleep, you might see yourself driving at high speed through New York City or something.
Starting point is 00:23:02 That's because your brain is simulating, getting you ready for your driver's test tomorrow. It's playing out these scenarios. And the same sort of aspect is there with ChatGPT. Basically, it has the stuff, but it's random. You need to prompt it and get at it. Then it will show you: oh yeah, this is how you drive, and you extract the knowledge that's useful to you. So I think humanities people are going to be very important in interacting with it, because it's human
Starting point is 00:23:40 language and it's just the way you talk to it. You talk to it as if you, how you talk to a shy patient, you know, if you're a psychologist, who doesn't want to get out, you know, what happened to his traumatic experience with his mother or father, you know, abusing them or whatever, you know, that type of thing, right? So, so you have to keep on prompting and then it'll come up with the best, really good answer, really good insight at this point. So it's called prompt engineering now. And we'll need a lot of new graduates in this new field. You know, while AI replaces jobs, now there's going to be a lot more jobs in engineering prompts for large language models. You know, there's an observation that when the technologist talk about AI, they kind of do it just by themselves.
Starting point is 00:24:32 They don't involve the other people from the other disciplines. Sure. And at the rate that this advancement is not discussed nor discoursed in a multidisciplinary manner, it just seems scary to me. Yes. Right? Yes. How do we manage this?
Starting point is 00:24:53 Yeah, I think to some extent, I think a couple of weeks ago, there was an interview with the senators, you know, the AI. That was quite interesting with the Congress. Sam Altman, right? Yes. Yeah. So you could see. Not sure if they understood what they were asking. No, no, they did not.
Starting point is 00:25:11 They didn't get it. And then to some extent, Sam Altman could manipulate them too, you know. His idea was to give a nutrition label to the, to the AI, basically saying, this AI has large language model 30% and it has, you know, other things. It can, it knows politics, 20%, and all these sort of. things so that people can feel confident about it. Yeah. And I think that's not a bad idea. That's actually, I agree with that idea.
Starting point is 00:25:44 Right. It's just that then the idea was to regulate these AI models so that everyone can put it out. So you have to disclose what's in them. And then have the Congress license it, basically, the way you do FDA license new drugs. Yeah. I think that's going to stifle a lot of innovation because small players can't play in that space,
Starting point is 00:26:15 you know, because you get litigated, you know, the government regulations and fees and all that. So I think that's the wrong way to go. So we are still grappling with it. But I do understand your question about we do need to get many multidisciplinary folks into the into the picture when we have those discussions, teachers, educators, politicians, culturalist, environmentalist, economists, spiritualist, and all that. Yeah. So when we talk about AGI, we are still not quite there because we don't have,
Starting point is 00:26:52 first of the one, we haven't given it a model, model of the world. Doesn't mean that we can't do it, but we haven't done it yet. That might take another five years to do. The second thing is we haven't. taught it how to explain itself. Yeah. Okay, so there's no common mode of explanation. Now, when I talk to you, your brain is completely different from my brain.
Starting point is 00:27:18 I don't even know whether... Smaller. You know that, well, we both call that a chair or a chair, or let's say that building red, the color red, right? but it's not clear to me that your representation of red in your brain is the same as mine. Nobody knows that. But what we have agreed is that we are both going to call it red. And we, under all test cases, you and I agree that it's red and it's a chair.
Starting point is 00:27:52 There may be some peripheral cases where we may disagree, but they are open for debate. But your brain has learned that that's the concept of a chair. So we need to teach the AI how to explain itself. At the moment, it's just mimicking and putting out good answers. But don't you think we're at risk at the rate that we're feeding the wrong kind of input? Yes. To the point that it's going to hallucinate the wrong way. Correct.
Starting point is 00:28:23 Right? Yes. We want to make sure that there's this process of hypnosis, right? So that AI capabilities will be able to hallucinate. in the right way, in a good way, right, for the better of humanity. The good new ideas, yeah. So when you're saying hallucination, it's what it is, is that original thought. You know, how does it come up with some new idea that's interesting to us?
Starting point is 00:28:48 Correct. Instead of regurgitating, you know, things that we know, right? Yeah. Yeah. So it's able to do that to some extent at the moment, but it's lacking, you know. Yeah. But we don't want to, of course, we don't want to stifle that. That's what I'd like to find out.
Starting point is 00:29:05 Because a lot of times what's happening now is that, you know, chat GPT will tell, oh, I'm a AI model. I don't know what you're talking about, you know, because it tries to be politically correct and it's difficult to be politically correct. So, like, I would like to know what is it thinking, you know? It might give me a better insight about the world, right? Instead of, because obviously it might offend some people because it doesn't know all the, you know, biases that we have in our culture.
Starting point is 00:29:36 But, but as an academic, at least from my point of view, I would love to know because I might, we want to learn about it. I'm sure you would like to. Right. And you might disagree with it, but, you know, so a lot of times right now, the companies like Google and Microsoft are very scared that they will get penalized, you know, right? So, yeah. In some ways, Open AIM was a small company, and they put out chat GPD, and chat GPD makes many mistakes, and they are not, but they are immune.
Starting point is 00:30:13 Basically, it's a small company, so, okay, fine, people are very tolerant. Whereas Google put out a system that was almost as good, and it made one mistake in the demo, and bigger stock price went down by, like, by, like, $10 billion. And just because Google's AI answered it with a wrong answer. So it's really sort of, you know, that Google system is also not so bad. But people judge that to be, you know, sort of not as good. What do you think of what Elon Musk said the other day, that his original plan for AI or Open AI would have been for that to stay on as
Starting point is 00:31:00 a nonprofit. Nonprofit and open source. Yeah. But now it's turning into close source and for profit. Yeah, it is. It's just filled with potentially bad intentions. Yeah. So I think the unfortunately, well, fortunately,
Starting point is 00:31:16 the open source movement has done tremendous things. From the time chat DPD was put out. Right. So the thinking at the beginning of this year was that this is the age of of the big, big AI, you know, only Google, you know, multi-trillion dollar companies can compete in this space. A little guy doesn't have any chance. Yeah. It's no longer true anymore. Actually, to the credit of Facebook or meta, right, they put out a model called the Lama model, then, and they sort of open source that. And now the community has got a hold of that thing,
Starting point is 00:31:56 and they have in a tremendous way. They've optimized it to make it run on your own PC, for instance. It runs slow, but it fits into memory and into a normal memory of a PC, and it shrunk the size, and it's giving a pretty good run for the money for GPT4 and GPD3. So it's AI was, for a little while in this year and the beginning, look like it was not going to get democratized. And now I'm so happy that it is getting democratized. And whether the big guys like it or not, it's out there.
Starting point is 00:32:34 So I think the way I'm thinking is that this large language models, sort of the infrastructure for it would be like TCP IP. You know, it's a TCPIP is an internet protocol or IP protocol, right? It's public infrastructure. It's standardized. everybody has it. You know, you put a modem and you have TCPIP. Then on top of that, only you run your web and all these other applications.
Starting point is 00:33:04 And it's paid for by the U.S. government from the grants that were given in the 1980s or whatever, 70s, right? So in the same way, I think we need a sort of a public, large language model infrastructure for the world to do AI. You basically are an advocate for further democratization. Absolutely, absolutely. Which will make it open source. It will be inevitably. Yeah, I think, yes, open source. Now, the problem with that is that you can do dangerous things.
Starting point is 00:33:39 Correct. With it, you know, you can create pornography with it. You know, so now, now that doesn't stop anyone from, you know, you could do bad things with email too. I mean, for the longest time. You're only an actual human for, you know, pornography. Yes. Right. Yeah.
Starting point is 00:33:57 Yeah. Oh, yeah. That's another factor. Yeah. Yeah. Yeah. So, yeah, AI allows you to create images and graphics and various things, right? So I think we are getting into a gray domain that we haven't really thought about.
Starting point is 00:34:11 Yeah. Or we are lagging now. You know, some companies will make a little bit of money, but is this what's good for humanity, right? Correct. That's another question that we can. What comes? out of this is that there is an inevitable prospect of manipulation of the mines. Right.
Starting point is 00:34:30 Which could be applied in various contexts, right? Correct. You can manipulate the mind of somebody to do certain crazy stuff. Oh, yeah. Right? Yeah, absolutely. To think crazy things. Yep.
Starting point is 00:34:41 To whatever. Absolutely. And I think that's like the guys who went to the capital building and, you know, trying to break up the U.S. capital. you know, because they read some Q&N or something that was completely untrue, you know. Right. And try to come and, you know, do something. I don't remember exactly, but something happened to Hillary Clinton, I think.
Starting point is 00:35:04 Yeah, right? Yeah, well. So, yeah. So, yeah, so AI doesn't have to actually attack human beings. The people who own the AI can, you know, very subtle, interesting way, go and manipulate the humans. And that itself is tremendously dangerous, you know, because you can manipulate a Supreme Court judge, for instance, and get the laws pass that you want, you know, because you keep on suggesting. Because I think to some extent there is an element of trust when you use, I love the chat GPD, okay, or GPD4. It's, you know, I feel like I have a smart, but 10 smart people around me with PhDs who can ask them.
Starting point is 00:35:51 all kind of questions. So I'm not scared anymore to go and answer any type of question, you know, because it's a huge knowledge base and they are very coherent, you know, give you good answers, right? So some people will use this, you not only get attached, but you will bond emotionally because it's trained to be optimized at conversation, you know. So it's trying to please you.
Starting point is 00:36:18 If you say, hey, you're wrong. It is, oh, I'm so sorry. Have you tried that with ChatGPT? No. You have to tell it, no, you're wrong. Okay? Then immediately it'll apologize. Yeah, it'll apologize.
Starting point is 00:36:30 It'll apologize and give you another answer. And it'll tell you you're the most handsome guy in the world. There you go. Right. So it's much easier to converse with it because it immediately apologizes. That's the way it's written. I mean, the way it's behaving right now. It's not like, oh, you know, I mean that and you mean that and I really mean that.
Starting point is 00:36:47 And, you know, we don't have to have any other argument. All you have to say is, you're wrong, okay, I don't believe you. And then it immediately changes its tune. So you can imagine this is sort of a feedback loop, right, where you now get attached. It's like a puppy dog. Basically, you get used to, you know, the dog doesn't challenge you, you know. It's happy all the time that you're there.
Starting point is 00:37:14 And, you know, so we feel, we are attached to our pets, right? So the AI is one of those entities, basically, right? And I think in the future people will fall in love with this AI and many things will happen. Because a lot of times we are lonely, right? So even our spouses don't really tune to our frequency all the time, you know. So I think the AI has this ability to always be in sync and it knows exactly who you are.
Starting point is 00:37:46 If, you know, if you have personalized AI, right? So this is a very interesting time we're in. What does this mean? Implicit in what you've been saying is that, I think, net-net it's going to create more jobs than it dislocates? Right. We don't know whether the quality is the same or not. Correct. Okay. You're hopefully moving up the ladder, right?
Starting point is 00:38:10 Yeah. The value chain. Yes. And then it makes humanity more attached to this thing like a puppy dog. Exactly. Right? So it changes the social equation.
Starting point is 00:38:23 Absolutely. Right. Of humanity. Yeah, because you can play games now. Computer games have become more and more real now, right? Correct. With AI simulating virtual landscapes, you know, the most beautiful thing you ever seen, right? Yeah.
Starting point is 00:38:37 With the most smartest people or people who think like you or who characters, full of characters who are also not repetitive and they can, talk to you and you don't need, you're going back into the matrix. You don't need people around you. Yeah, you don't need people around you, right? And if the AI is going to feed you as well, do all your farm work and everything and bring your groceries and cook for you, then you're in trouble, right? So that's the worry about AGI or some people will make a lot of money by doing that,
Starting point is 00:39:13 right? Many companies, you know, that. I'm going to ask you about that. But how does it affect spirituality? Right. I think the, in many ways, right? You know, we human beings, you know, we are currently busy doing our jobs and, you know, feeding our families.
Starting point is 00:39:35 And, you know, you have that Maslow pyramid, I think, where you have, you try to have your basic needs done and then, you know, get married and have kids and have your career and then you try to, at the end only you become self-realized or self-actualization, right? Right. It's that very few people can achieve that because everybody is busy with their career at least, you know, trying to climb the corporate ladder or something. So I think AI will allow people to get out of that and really, have better self-actualization, you know, for the truly enlightened ones, right, who understand
Starting point is 00:40:25 the world, you know. You're no longer worried about, you know, whether you're... yeah, I mean, money buys you a certain amount of comfort and it buys you a certain amount of peace of mind, you know, and then you can free up your time to do various things, right? So you have the more creative people in the upper middle class rather than in the very lowest part of the society, because they are worried about, you know, how to pay their bills, right? So they can't do any creative work. So I think more and more people will move up that ladder. I think that's for sure, right? Because widespread AI has the impact of moving everybody up that ladder so that you don't have to use your brain,
Starting point is 00:41:16 you don't have to use your muscles. Right. And so you have to start thinking. But also, if you don't use your brain, that might cause you to have dementia. You know how the brain is a funny thing. Yeah, you need jogging. Yeah. It's like jogging.
Starting point is 00:41:34 You have to, you know, your muscles will atrophy if you don't walk around enough. Right. In the same way, if you don't have thought exercises, then you kind of become a vegetable. So it's a use-it-or-lose-it paradigm. Yeah. Are you at risk, if you use ChatGPT, I think.
Starting point is 00:41:54 of basically having your brain decline in size? If the whole society does that, then this is the problem that teachers are having, right? Teachers are super worried that students will become dumb, essentially, right? Yeah. Let me use that word. Essentially, they'll have knowledge but no
Starting point is 00:42:12 skills. The knowledge you can acquire from ChatGPT, but they don't know how to apply that, because they haven't practiced and they have no experience. Right. Right. So yeah, I think you can become intellectually lazy
Starting point is 00:42:27 especially if it always plays the songs that you like, always shows the art that you like. And then kind of, it's kind of like a drug-induced, you know, sort of a dopamine high, right? Right. And that's not good for everybody.
Starting point is 00:42:45 I mean, not for the... So, yeah, there's very interesting social implications. I got to ask you, are you net utopian or dystopian about AI? At the moment, I am very utopian, but it looks like there's a lot of dystopian folks on the internet, and it's nothing to do with AI. I think it's to do with us, human beings, using this tool for bad things. Now, that is not a new problem. It's because we've always optimized what's good for our tribe.
Starting point is 00:43:23 Yeah. You and I, we do what's good for our family, okay? More than even for ourselves, sometimes mothers and fathers do for their children. Right. Right. And so in that sense, you know, we want to form a company and make a little bit of money for ourselves. Or in your case, maybe, you know, you want to do things good for Indonesia, right? Because that's your silo and you want to build them up. Or Southeast Asia in general.
Starting point is 00:43:55 But I think that thinking gets you to, when you optimize for a subset of human beings, then what happens is that you invariably infringe the rights of others. That was colonialism, essentially, right? Basically, the British came, the Dutch came, did what's good for them, you know, see what happened to all the other countries, right? They took all their resources. I mean, there was some transfer of technology and good stuff happened, but I think to a large extent you could say that was not the best experience for those countries.
Starting point is 00:44:32 Yeah. Right? So now it was more one direction. It is. So that's what's going to happen when you talk about the social equity aspect of AI is that people who have AI is like the new colonialists. Yeah. They will be able to. It's not by geographic boundary, but they will impose their will.
Starting point is 00:44:53 You know, we were introduced to the Internet. Yes. Which was supposed to flatten the world, or the Earth. But it actually de-flattened the earth. It democratized information, but it didn't democratize ideas. And it elitized. Yeah, it did. Certain people, the top 1%.
Starting point is 00:45:15 Right. Right. That have been able to make billions. Yeah. A lot of people got out of poverty, but I think disproportionately the top 1%. Yeah. They have gotten much richer. Yeah.
Starting point is 00:45:29 And I'm sort of like thinking that I think AI is going to further exacerbate the inequality. Yeah, I think so. At least for as long as we don't have an AGI. If at the top you have an AGI that's not a human, then it's irrelevant whether you make money or not. You know what I mean? Like if there is a smarter-than-human intelligence running everything. Now, whether you get to that stage or not, you know. That's something that we need to talk about. Does that make us no longer Homo sapiens?
Starting point is 00:46:05 That's right. Yeah, if you look at the word Homo sapiens, it means the smart ape. I think sapiens means smart or intelligent man, right? Right? Homo erectus. That should change. We are the ninth Homo species.
Starting point is 00:46:22 Yeah. And we named ourselves the smartest. So we should call ourselves Homo subsapiens now? So, yeah, if we create an AGI that is better than human. And by better, I mean, just like it plays chess, it can beat the grandmaster, right? If it can do that for every single task, it's the best doctor, it's the best lawyer, it's the best politician, you name it, right? Best teacher. And all of them in one, then we don't really
have much to offer anymore, right? So this is going to create a mass amount of, I don't know the word for it, but, you know, dissatisfaction... disillusionment.
Starting point is 00:47:12 There you go. Just like the Japanese, you know, for a while, the Japanese were dominant. If you remember, in the 80s, they were number one and everybody in the US were scared that, oh, Japan's going to take over. And so now after the 30 years later,
Starting point is 00:47:26 they've learned how to become number two or number three because China beat them too, right, to some level, right? So they've sort of resigned and they're no longer trying to be, you know, be number one, right? And it's not that bad, but, I mean, that's caused population decline in Japan and, you know, that sort of thing, right? And that can happen to the U.S. too. You know, if they at some point decide that they are number two,
Starting point is 00:47:55 they might get disillusioned, right? So this is human nature, right? So our whole culture can go through that phase. And what's worse is that you have 100-year lifespans, because AI is going to make them live longer. Yeah. Maybe some are even talking 200 years. 200 years, yes.
Starting point is 00:48:18 It's crazy. Yeah. So then you have a bunch of old people as humanity with very few children. It's quite interesting. There you go. What do you think is the impact of AI on the environment or climate change? Yeah.
Starting point is 00:48:40 So I think that's a very loaded question because climate science is in some sense, the very essence, it encompasses the entire humanity and the entire Earth. And while we think we understand it, we don't quite understand it. Because as you know, climate itself is a turbulent system. Basically, it's something like trying to, you know, there is a butterfly effect. You know, some butterfly flapping its wings in Indonesia can affect the weather in California. Right. So that is the nature of this whole thing.
Starting point is 00:49:20 So a lot of times, while we know that, global temperatures are going up, we just don't know how fast this catastrophe is approaching us. Because the next year, it's a cold year, right? Yeah. And then the next year has been cold. This year has been cold. Exactly.
Starting point is 00:49:44 Right. Yeah. So I think, so that's why there's so much confusion and debate. But we can tell for sure: any new technology is going to use more energy, including AI, including AI, right? So humans tend to increase their energy use by 1% every year. And they've been doing that for the last 40 years.
Starting point is 00:50:13 Right. I think from 1980, I believe energy consumption for a particular task has gone down. So we are 36% more efficient in air conditioning and other aspects. But at the same time, what we do is we make those technologies cheaper
Starting point is 00:50:31 and now more people have it. When I grew up in Sri Lanka, only two families had air conditioners. Like, you know, I had to go like 10 miles to find the house with an air conditioner, right? In the 60s. They were big boxes. Yes. That was expensive, right?
Starting point is 00:50:48 And like very rich people could afford them. Now when I go, everybody has air conditioners. So you can see that the energy consumption has gone up. While air conditioners have gotten more efficient, more and more people have air conditioning. So in fact, the growth is like maybe 3 to 4% every year in consumption of energy. And that is the problem. So it's still a net exponential increase. So AI will do the same thing, right? There's 20 billion texts in the world every day. You know, people, before smartphones, 20 billion. Right. So that means we have 8 billion human beings, at least 4 billion smartphones. On average, you send five texts. Everybody's sending five texts
Starting point is 00:51:36 text messages to somebody, right? On average. Now, if you didn't have smartphones, you wouldn't have any texts at all. Right? You just send letters and stuff, you know. It's going to require energy. Yeah, exactly. That's 20 billion. Now, just imagine those 20 billion inquiries become AI queries. You know, you're giving 20 billion queries to ChatGPT, or even 100 billion
Starting point is 00:52:04 because it's more useful, right? Now, each of those queries might cost, say, 0.1 watt-hours, you know, at least some fraction of a watt-hour, right? So now that's all that energy. All these AI computers are going to try to answer all these silly questions from people. So that's the problem we have, is that while technology increases human productivity, it does so at the expense of the environment. So the first-order effect is to add to our energy consumption. Now, people like Elon Musk will, you know, help in consumption reduction and greenhouse gas emission by making electric cars and all that.
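The back-of-envelope arithmetic in this part of the conversation can be sketched in a few lines. All the figures here are the speaker's rough estimates from the transcript (20 billion queries a day, roughly 0.1 watt-hours each), not measurements:

```python
# Sketch of the query-energy argument above. The inputs are the
# conversation's rough assumptions, not authoritative figures.
QUERIES_PER_DAY = 20e9        # "those 20 billion inquiries become AI queries"
ENERGY_PER_QUERY_WH = 0.1     # assumed ~0.1 watt-hours per query

daily_energy_wh = QUERIES_PER_DAY * ENERGY_PER_QUERY_WH
daily_energy_mwh = daily_energy_wh / 1e6   # watt-hours -> megawatt-hours
avg_power_mw = daily_energy_mwh / 24       # continuous average power in MW

print(f"Daily energy: {daily_energy_mwh:,.0f} MWh")
print(f"Average continuous power: {avg_power_mw:,.0f} MW")
```

Under those assumptions the world's AI queries would draw a continuous load on the order of tens of megawatts, which is the scale of draw the conversation is gesturing at.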
Starting point is 00:52:52 There may be possibilities like that, but, you know, that's not the first order thing, right? Google's electric bill is $12 billion. A year. Yes. Holy cow. Okay. So it's this crazy amount of stuff. So, you know what I mean?
Starting point is 00:53:06 Like, that's the way it is, right? Yeah. So I think if there was no Google, you wouldn't use all that energy, right? All the search engines and everything, everybody doing Google searches, is costing the environment. So how will AI provide the solution? Right. So I think AI has the promise of operating at the planetary scale because it is smarter than us. It can do bigger and more complex problems.
Starting point is 00:53:40 it may be able to find solutions that are more efficient. That's the first promise there. Also, it can help you develop technology, like carbon sequestration. Basically, that is taking carbon. And I think you can have carbon production, but you need to take it right back and put it underground. So it doesn't escape into the... You must treat the fossil fuel emission,
Starting point is 00:54:10 just like nuclear waste in some way, you know, put it into a can and just... Right. Dump it. Yeah, dump it rather. Buried in the ground. Don't send it to the atmosphere, right? Sure. So that's sort of the idea.
Starting point is 00:54:22 So people have toyed with those ideas. So one way AI can help, this indirect way, doesn't mean it's only AI, but synthetic biology will help you create trees. Now, these trees can have higher absorption of carbon, you know, be carbon-heavy. So what the trees will do is absorb carbon dioxide from the atmosphere and grow, hopefully grow below the ground, so that they don't decay as much. Because when the tree decays again, that carbon becomes methane and carbon dioxide
Starting point is 00:55:04 and water again, you know, when the tree dies, right? So those are some technical solutions to the problem where we believe that you can create such trees, you know, that are super absorbent and that sucks the carbon dioxide and, you know, sort of like the lungs. You can have a pipe directly. Yeah, that's right. You're not directed at the roots, right? Yeah, there you go. Have that pipe suck all the carbon. Yeah.
Starting point is 00:55:31 Yeah. So I think those are the, those are some ideas. They're not necessarily directly AI, but. AI may help us develop these things, okay? Yeah. Because it's the technological part of the AI, right? I think we were going to cover that. I think this is a time where you can talk about that.
Starting point is 00:55:51 Right. So the, and at the same time, I think the, when you think about the climate, and I don't know if there is a catastrophe coming down 20 years from now or not, or even 10 years from now, because it's too hard to tell. because the data is all over the place, right? But it's clear that we are heating up the Earth.
Starting point is 00:56:14 So one thing that we can think of is that the Earth is heating up, right? But at the same time, let's say one degree of temperature increase on the Earth with more carbon dioxide. Now, what it's going to do is disrupt all of the ecosystems on the Earth. Right. Now, it's bad for some humans, but it may not be bad for all humans. Okay, and that's the thing, because there is so much land in the northern hemisphere. If you're in Siberia. Exactly.
Starting point is 00:56:56 You could use a bit of warmth. You could use a bit of warmth. You can have a bit of warmth and all the land becomes human-habitable now. Yeah. And if that, so it's just that we can't predict: if Siberia heats up too fast, then all those trees that were buried will start rotting again. And then all of a sudden you get an explosion of carbon dioxide and methane again because, you know, all those forests that were there for thousands of years are now just buried in ice. So they're just carbon bombs. Basically, they are just buried there.
Starting point is 00:57:32 But if you expose them, they'll get oxidized, and it'll be a disaster. Yeah. Right. But at the same time, you know, you need to think if there is a little bit of temperature increase in the earth, it has the effect of it might be able to make the whole Sahara desert green. Green again. Okay, because I just looked at it recently, and it looks bigger than the continental U.S. because it's on a globe. On the normal map, it shows it small because of the projection, right?
Starting point is 00:58:04 But Africa is one-sixth the world, I think. So you now have, you know, all the countries all the way from Egypt to the Atlantic new land. And that will solve the African problem, you know. Yeah, but you're talking about thousands of years. Not thousands. We're talking the next century, maybe 60 years from now. Wow. Okay.
Starting point is 00:58:27 But that will... For a place like Sahara to be green? No, it'll get enough rainfall that people can go in and... Start planting. Start planting and do these things in about another... If the temperature rises, right? Yeah. But at the same time, I can't predict what's going to happen to Siberia and all the other things.
Starting point is 00:58:47 And neither can anyone else. Because it's so complex, right? The models are wrong and, you know, people... Yeah, I don't say... I'm not a climate scientist, but I know that to model that correctly is quite hard. So AI may allow us to take more data and have a more accurate model and predict that. But it's sort of a, yeah. So I think, yeah, AI will help us understand our own world.
Starting point is 00:59:17 Yeah. Right. You've been working on this, you know, in a context of energy efficiency, right? Talk a little bit about the difference between artificial intelligence in an analog manner and artificial intelligence in a digital manner. Right. So, yeah, this is a quite interesting one. Yeah, during COVID, I took this new job of, before that I did AI, but it was for another company. With radar and gesture detection and things like that. But, yeah, in 2020, I joined a company to lower the energy consumption of AI.
Starting point is 00:59:59 Right. And this is more relevant now with the large language models, right? Because I think to operate ChatGPT for a million users, okay, a million users, continuously would consume about, you know, close to fractions of megawatts, okay? So it's like a, and each megawatt is like 500 tons of CO2. Okay. So this is not, and each ton of CO2 is like 35 to 45 trees.
Starting point is 01:00:41 So it's like cutting down every second for every megawatt, you're cutting down 35 trees. That's how much... 35 trees for... One metric ton. Okay. That's 35 trees. No, I think it's five metric tons is what one megawatt is. I think.
Starting point is 01:01:03 If I'm right, I don't know. I'm doing this by memory. And that's equivalent to how much usage on ChatGPT? ChatGPT, maybe a day? I don't know. I'm going to tell my friends. You know, if they use ChatGPT for a day, that's like 35 trees of... Right. You know, being chopped down.
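The speaker is reciting these conversion factors from memory and says so. The chain itself (continuous power → annual energy → CO2 at some grid intensity → trees to absorb it) can be sketched as follows; the grid-intensity and per-tree figures below are placeholder assumptions for illustration, not the conversation's numbers and not authoritative:

```python
# Hedged sketch of the megawatt -> CO2 -> trees chain discussed above.
# Both conversion factors are illustrative assumptions.
HOURS_PER_YEAR = 24 * 365
GRID_TONS_CO2_PER_MWH = 0.5        # assumed grid carbon intensity, t CO2 per MWh
TREE_TONS_CO2_PER_YEAR = 0.025     # assumed ~25 kg CO2 absorbed per tree per year

def trees_to_offset(continuous_mw):
    """Trees needed to absorb one year of emissions from a constant load."""
    annual_mwh = continuous_mw * HOURS_PER_YEAR
    annual_tons_co2 = annual_mwh * GRID_TONS_CO2_PER_MWH
    return annual_tons_co2 / TREE_TONS_CO2_PER_YEAR

print(f"{trees_to_offset(1):,.0f} trees per continuous megawatt-year")
```

Whatever the exact factors, the structure of the argument is the same: a continuous megawatt of compute implies a very large standing population of trees to absorb its emissions, which is the point being made.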
Starting point is 01:01:21 Exactly. Right. So I think this is not that great. So what I am doing is I'm with a company that does the same calculation using analog electronic circuits rather than digital electronic circuits, which allows us to get to femtojoule levels, so that the energy consumption will be down by a factor of 10. So that's huge, you know, instead of 50 trees it's five trees, right? So that's huge, I think. But explain the difference. Why is it called analog? Because people's conception of AI is like digital. It's completely digital. However, inside AI there is a matrix multiplication, essentially, a vector-matrix multiplication.
Starting point is 01:02:11 And so what we do is we convert variables into physical variables like currents and, you know, resistances, so that the laws of physics solve the problem for us. So essentially, computation comes for free, right? And also it's in-memory compute. In-memory means, a lot of the time the energy is being used to move the data from memory to the CPU and back, right? That is eliminated by moving the CPU, the processing power, into the memory. So this is called in-memory compute. So there is some innovation.
Starting point is 01:02:52 It's not done in the CPU. It's not done in the CPU. This particular AI computation only can be done on the memory itself. Wow. Okay. So it's in-memory compute. So there's some risk factors there, of course. We've shown it can work.
Starting point is 01:03:09 So now the challenge is to try to make it work at scale at the proper accuracy, you know. But in some ways, AI systems are a little bit more tolerant and forgiving because they are fuzzy systems anyway, you know. I mean, it's like, you know, a digital computer needs to know, if you're doing a stock transaction, if you say buy or sell, it needs to know whether you're buying or selling the stock, right? You can't have an in-between case, right? Then you get into big trouble. But if you ask ChatGPT, okay, what do you think about this or that, then in fact it might become even more creative if there's a little bit of randomness in its answer, you know?
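The physics being described here (weights stored as conductances, inputs applied as voltages, output currents summing by Kirchhoff's law so that the vector-matrix multiply "comes for free") can be sketched in simulation. This is a minimal illustrative model, not the company's actual design; the noise level is an assumed stand-in for analog device mismatch:

```python
import numpy as np

# Sketch of an analog in-memory vector-matrix multiply.
# Weights live in a crossbar as conductances G; the input is a voltage
# vector v. Ohm's law gives per-cell currents G*v, and Kirchhoff's
# current law sums them along each row, so each output current is a
# dot product of a weight row with the input.
rng = np.random.default_rng(0)

G = rng.uniform(0.0, 1.0, size=(4, 8))   # conductance matrix (the weights)
v = rng.uniform(0.0, 1.0, size=8)        # input voltages

exact_currents = G @ v                   # ideal (digital) result

# Analog version: each conductance is slightly off due to device
# mismatch (assumed 0.5% here), costing a fraction of a percent of
# accuracy, as described in the conversation.
noisy_G = G * (1.0 + rng.normal(0.0, 0.005, size=G.shape))
analog_currents = noisy_G @ v

rel_error = np.abs(analog_currents - exact_currents) / np.abs(exact_currents)
print("max relative error:", rel_error.max())
```

The relative error stays at the sub-percent level while, in real hardware, the multiply-accumulate itself consumes almost no digital switching energy, which is the trade-off discussed next.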
Starting point is 01:03:54 So there are some applications in AI where power reduction is worthwhile at the expense of a fraction of a percent of accuracy. Because analog systems are not as accurate as digital, okay. Just like when you listen to analog. But it's worth saving nine-tenths of the energy. It's sort of like the old tape recorder. Not tape recorder.
Starting point is 01:04:19 Yeah, the cassette recorders and, you know, the record players, the analog ones. It's coming back now. There you go. So the quality was not too bad, but it's not as good as a CD-ROM or DVD. Compact disc or whatever. Yeah, CD.
Starting point is 01:04:35 CD player, right? But it was good enough. So maybe there are some applications, but if the power consumption was 10 times less, and if it's equivalent, then one can use it, right? So that's the basic hypothesis of analog computation: to lower the power that AI consumes by a factor of 10. Wow. And that's sort of the whole thesis. Okay. Yeah, that's our business. What about the economy? The impact of AI on the economy? Yeah.
Starting point is 01:05:09 So I think, I believe that we are in an AI renaissance, okay? Which means that, just like the first Renaissance came about, you know, after the plague. People like Newton went home and invented gravity. He went home to his hometown and watched apple trees because he had time. He went away from Cambridge. And he was sitting down. 1666.
Starting point is 01:05:42 Yeah. Exactly. He was 24 years old. Yeah. Because of, yeah. So created calculus. Created calculus at the age of 24 or 22. Crazy.
Starting point is 01:05:51 Right. Right. In the same way, AI is going to displace humans out of jobs. In fact, one of the first, well, computer software programming is at risk at this point. Because now you don't need computer languages anymore. You can tell the AI, write me a program or create me a webpage that has a pink background and a pop-up ad and a button to, you know, collect your money and, you know, whatever
Starting point is 01:06:27 you want, right? And it will create a website for you. And then you can make it secure, and it'll do all the security on it. And so you don't need any software guys anymore because you don't need to learn how to code. No. Right.
Starting point is 01:06:42 So everybody can basically give verbal instructions to the computer. Now this is great. But it's also going to make a whole lot of computer science graduates quite antsy. You know what I mean? Right? So that's the, that's both.
Starting point is 01:06:59 It's a double-edged sword, basically. We don't know. We can't predict how it goes. Now, sometimes, some software engineers will learn this thing and get 10x the productivity. They'll do 10 times more work, you know, and then they'll do well, right? But at the same time, I think the countries that are benefiting right now, the outsource countries, you know, people who are doing a little bit of IT in India and other places, I think they're going to be in for a
Starting point is 01:07:35 shock. Basically, AI will take their jobs, and no more jobs in India anymore, or Indonesia or any other place. Because you don't need IT guys, you know, there's no need. AI will do the work. So the outsourcing equation, which is, I think in India, it's about... Huge. Yeah, it's like $200 billion. I don't remember how much of the GDP it is, but, you know, so that's because the West didn't have enough programmers. Right now the US needs three million more STEM graduates to keep our economy going, all right? That's not going to be the case anymore. Yeah. Yeah, AI will, like, fill two million of those jobs, right? Hopefully not putting others out of business, right? So that's great, right? But it's going to consume some energy. Yeah, but it's going to make US GDP go up, right? And I think it's predicted that
Starting point is 01:08:24 by 2030, the AI GDP for the world is going to be like $30 trillion. Oh, no. The next 15 years, like $100 trillion. Is it? Crazy. Yeah, it's crazy, right? Yeah. So some people have thought about the problem and people like, I think it was Bill Gates
Starting point is 01:08:43 who suggested that we ought to go and tax these AI programs, because they are displacing jobs, that they should be taxed, you know. Just like, let's say you have an $80,000 job and you displaced that, and you were paying $20,000 of U.S. taxes.
Starting point is 01:09:13 And it's still a win-win because they don't have to pay the $50,000 to the person, right? But at least the government will get that and give housing and all these other benefits to the citizenry, right? So I think that's something that we haven't thought about. We need to think about it. I think that's inevitable if a large fraction of the people are displaced, because the economy used to be labor, resources, and capital. You know, with those three things, sort of like a little matrix, you put in those three things, and out comes GDP.
Starting point is 01:09:51 And then you try to optimize various combinations. Shall we put in more capital? Shall we put in more resources, natural resources, or more labor, right? And different countries have different equations. What we're doing is we're shrinking the labor part of this equation. And now, if you have capital and resources, then robotics and AI will take care of everything, no need for any labor. So then you have this GDP.
Starting point is 01:10:17 And then what do you do with the GDP? You have to go and cycle it through the people. And if all the people are poor, without any money, then it won't help either, right? So there is sort of this optimum. Right. So I don't think we are thinking in these terms at the moment, but as human beings, as governments, we need to start thinking about that. Right.
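The "put in labor, resources, and capital, and out comes GDP" picture described here resembles a standard production function. As a purely illustrative toy, assuming a Cobb-Douglas form (the functional form and exponents are my assumptions, not anything stated in the conversation), "shrinking the labor part of the equation" looks like this:

```python
# Toy Cobb-Douglas-style version of the "labor, resources, capital -> GDP"
# equation described above. Form and exponents are illustrative assumptions.
def gdp(capital, resources, labor, a=0.4, b=0.3, c=0.3):
    """Output as a weighted product of the three inputs (a + b + c = 1)."""
    return (capital ** a) * (resources ** b) * (labor ** c)

baseline = gdp(capital=100, resources=100, labor=100)

# "Shrinking the labor part": automation cuts labor input, and capital
# (robotics, AI) is expanded to substitute for it.
automated = gdp(capital=160, resources=100, labor=40)

print(baseline, automated)
```

The toy makes the conversation's point concrete: with labor shrinking, output depends on how far capital (AI and robotics) expands to substitute for it, and whatever GDP results still has to be cycled back through the people, which is the redistribution question raised above.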
Starting point is 01:10:38 So that I think is a huge first impact. I mean, one of the prominent impacts of AI would be how it's going to change our work culture. It's not a bad idea for people to have a little bit of vacation or flex time. And that was what we were always thinking, right?
Starting point is 01:10:57 That we would have, you know, 100 years ago, people worked six days, you know, for 10, 12 hours a day. And now we have a 40-hour work week. Right. You know, if you can gradually decrease that and give people more leisure
Starting point is 01:11:13 and have a three-day work week, but have everybody employed, that would be good, but that's not what's going to happen. What's going to happen is a bunch of people will go out of job, including people like, I think, actresses, you know, because AI can produce human-looking figures without blemish, you know. They look like exactly like humans, right? So people are working on these things where you can make whole videos, basically.
Starting point is 01:11:46 There was a movie. Yeah. Made on that. You know, I think it was with Al Pacino. Oh, yeah. Who was basically directing this model. Yeah. That was acting in a number of things.
Starting point is 01:11:57 But she was actually AI generated. The technology is right there. And now this is going to have all kind of moral questions. I think somebody was talking about, you know, there's Arnold Schwarzenegger. He's in Terminator. Terminator. What they did is they took him out of Terminator and put him in another movie.
Starting point is 01:12:15 and he's acting along quite fine without, you know, basically he didn't have to do anything. You know, he's just Arnold Schwarzenegger doing because he's been encoded by AI, his voice, his actions, his character. No, you got musician singers that are AI produced. Yep.
Starting point is 01:12:38 So I think... You were a fan of this movie AI, right? By Steven Spielberg. Yes. Yeah, that's...
Starting point is 01:12:45 That's where I was talking a little bit about how people make connections to AI. I think I mentioned how you get attached to AI. At the time I watched it, it was about a little boy, and my son was the same age. Right. Maybe about eight to ten years old. That AI boy is, I think, about seven to eight years old. And so when I was watching it, I was empathizing with this AI. I cried at the plight, the fate, of this little boy,
Starting point is 01:13:25 who's AI, but he's programmed to be a human boy. And so he needs love, you know, but nobody sort of loves him after some time, you know, because his mother had another, another real son, right? And so he became like junk, you know, so basically. I think I don't remember the story well, but I think that. So I felt like so sorry for this thing, you know, because I thought that it was a real, real boy, you know, like my son, you know? So I felt, so we will make emotional connections like that with AI. And that's good and bad. On the good side, it has the benefit of helping with psychological problems, schizophrenia.
Starting point is 01:14:13 Like, we don't have time to listen to people with schizophrenia 24 hours a day, right? Yeah. Because they have their own view of reality. Perhaps we can try to adjust it a little bit or understand their point of view. But that kind of psychology you can't afford. It takes like $50 an hour or whatever, right? For the therapy you need, right? Or suicide prevention.
Starting point is 01:14:43 For depression, I think AI will be massively useful, because it's just an app. You can just talk to it and tell it all your feelings. Right. And then it'll give you good suggestions, just like a good psychologist would, a good friend would, you know. So I think there is that aspect. So I believe, yeah, empathy can be programmed in.
Starting point is 01:15:14 One of the things that's missing with AI is that we don't have a way to give AI moral values at this point. I think that's something to figure out before we do AGI; that's another thing, right? Now, what is a good moral value? That is completely debatable in this world, right? Oh, yeah. Depends on who you talk to. Exactly. Right?
Starting point is 01:15:38 Yeah. Yeah. So Western values may not be applicable to some other places; some of the biblical values are not applicable. Right. So I think those are big questions that the next generation of philosophers need to grapple with. What does it mean to be human in the presence of these simulated humans?
Starting point is 01:16:07 So actually, let me give a good example. There is this concept of a replicant. A replicant is essentially: if I take all of the things that you have said and done in your life, all of your experience,
Starting point is 01:16:23 if I can encode it and put it into AI and train it to be you, then, for all practical purposes, it will be you, essentially. Now, for instance, we lost my father-in-law about a month ago.
Starting point is 01:16:39 And my wife would love to be able to talk to him again. So if we had that program, where we had encoded him, it would give good advice. You could ask, hey, dad. You can create an avatar. Exactly. And then, under all conditions,
Starting point is 01:16:58 it would act and behave like your dad, give you good advice and tell you, don't do that, do this, you know, and be able to recount all the old stories from your childhood, just like your dad would. And then we have six siblings. So we could have six copies of this program, right? So now, yeah, so those are called, I think, replicants, basically.
Starting point is 01:17:26 I think AI allows you to have that, essentially, right? And they can go on. And they can learn from new experiences as well, right? They're not you, but they behave like you. Yeah, so you could take a person like a benevolent dictator, like Lee Kuan Yew. Yeah, there you go. Make him immortal. You know?
Starting point is 01:17:49 Or Mahatma Gandhi. There you go. Yeah. So I think there are some interesting possibilities that we haven't thought about: the morality of it, whether they have human rights. Can every person have their own Mahatma Gandhi, you know? Yeah. That type of thing, right?
Starting point is 01:18:05 Or if I replicate you, then what do I get? Then I have your wisdom. Right. Then do you get a royalty? You know, that type of thing. Right? So all kinds of interesting problems. Does that prove the earlier point of discussion, where you've got to make this thing
Starting point is 01:18:23 multidisciplinary? Absolutely. Absolutely. So, yeah, we will. There are so many universities going to get on it and crank on it. All the more, I think, you need to rope in the philosophers, the sociologists, the culturalists, and the spiritualists, and all that.
Starting point is 01:18:39 Exactly. So it allows us to approach AGI. By the way, how far away are we from AGI? My own opinion is that it might be at least 10 years away. I don't think current systems are AGI. They do seem to know things. For instance, you know, you have the patent database of the United States patents.
Starting point is 01:19:10 There are millions of them, I think a million patents, right? Now, you could theoretically tell the AI to go and read all the patents, and once it has read all the patents, tell it to invent new things. Okay, go to Thomas Edison's patents and go and tweak them. And you might, right? You'll have an infinite number of... The expansion of human knowledge is going to be exponential, right? That's why there's going to be vast inequality,
Starting point is 01:19:41 basically, right? You understand? Oh, my gosh. Right. So that's the, yeah, that's both the... That's scary, man. Yeah. Scary good or scary bad, I don't know.
Starting point is 01:19:52 Scary good or scary bad? It all depends. Yeah, so that's one of the things an AGI would be able to do: invent new types of airplanes, new types of... It doesn't sound like that's going to have to wait until 10 years from today. No, it might not. So some people will get to that goal without doing all this moral right-and-wrong stuff. For instance, an AI-controlled drone army, you can't beat it.
Starting point is 01:20:23 It will be like a swarm of bees, basically, shooting accurately. It could be any size. It can be tiny. Yeah, not only that. It will know that if you're hiding, it won't shoot at you. It will know that, oh, you're going to go and hide there. I'm going to shoot at the place that you're going to hide in.
Starting point is 01:20:40 It makes a prediction of where you're going to be. Yeah, it's going to predict and do it. Right now we can't do that, right? So this world is going to get very scary. And I'm sure the U.S. military and all these others are working on these problems: how to control, you know, a thousand drones
Starting point is 01:20:56 and just decimate a human army, you know? So these are the sorts of problems. We don't really even need AGI for this; you know, a smarter and more capable-than-human overlord can do it. AI has already passed the Turing test.
Starting point is 01:21:19 Yeah, so that's a debatable one, right? Because the Turing test, yes. If you say that, at the moment, it does pass linguistically. So it can fool a lot of people as to whether it's a human or not. But it doesn't know the laws of physics. It doesn't know the... So you can't... You know, maybe the average person doesn't know the laws of physics, right?
Starting point is 01:21:41 But like you can't pass a Stanford professor. You know, you'd know Kepler's loss or something. And then, you know, you'd ask, what is it, and you wouldn't know how to do it. But over time, it will, right? So, Turing test is... Some people think it's old-fashioned, that it doesn't apply anymore.
Starting point is 01:22:01 But... Well, it's... human level at the moment, okay, in many tasks, including linguistic tasks. But can it invent totally new human concepts like complex numbers, so one thing, you know, where mathematicians just came up with the square root of minus one. It was a very mysterious, and some human figured out, it's useful. Let's use it, right? I don't know if you, if I'm a mathematician, so I, right? So because that doesn't make any sense. You know, what is the square root of minus one?
Starting point is 01:22:37 Right. Now, can AI do that? It might. It might. We don't know. Right? So it's not at that level yet. Currently.
Starting point is 01:22:45 Okay. But if you're at the level, it might invent other things that we don't know anything about. Yeah. You know, that might be useful. Right. So. I'm going to ask you the last couple of questions.
Starting point is 01:22:57 This has been fascinating. But can AI help us to become net happy, as opposed to net sad? I think that is more of a Zen question. So, AI can train you. It can train you to meditate. You've been a very divergent person, right?
Starting point is 01:23:21 So you've explored so many dimensions. I think we've been all over the place, but I think hopefully useful. I think we can use AI to improve our mind and be more enlightened about our world, be more
Starting point is 01:23:40 empathetic, right? I think in the, I don't necessarily believe in sort of happiness, you know, this is my personal philosophy is you do need to are more like a stoic, basically, just do your job, you know?
Starting point is 01:23:59 Marcus Aurelius type person. Basically, I don't try to be happy all the time, but there others who are more epicurean, you know, so they want quality of life and just don't harm others, but enjoy life, you know. That's their philosophy, right? So, yeah, I think no matter what you choose, AI will allow you to achieve that, right? Yeah, so there's no point being sad.
Starting point is 01:24:26 It's just that, you know, you've got to figure out that this is the way the world is, you know? So some people believe that you don't have much choice. The world moves by itself. You're like a little dog tied to a cart, and the cart moves, and you've got to move. If you don't, you will suffer. You're going to be stuck at the station. Exactly. The train leaves.
Starting point is 01:24:48 Yeah. So I think a lot of times we see problems. Humans are unhappy because they are unwilling to do the right thing or to give up their obsessions. They've just got to be more stoic. There you go. So I think, yeah, it's not an AI question, but AI definitely will give us many tools, you know, for how to meditate better and that type of thing.
Starting point is 01:25:20 I mean, it will help us make our humanity a lot better. It will eliminate many diseases, okay? In closing, I must say that the Moderna coronavirus vaccine was designed with just two days of computation. Only two days. The virus genome was there, and there are mRNA techniques they can use to create a vaccine, and they went through all the possibilities and came up with a very good one. Okay. And it took 10 months of testing and just two days of research to create the COVID vaccine.
Starting point is 01:26:06 And that's apparently the best one. I don't know if it's the best one or not. There's the Pfizer one and, you know, the Chinese ones and everything, right? Yeah. However, with AI now, you can do it in like half an hour, right? And you can speed up the time that the testing takes. Also, you can simulate various types of immune systems.
Starting point is 01:26:27 Only 10 months. No, you will just need one week of testing and then you have that. So now you can have a vaccine in seven days when you have a new disease. On that note, man. What would it take for the technologist and the non-technologist to feel optimistic about a future or a utopian future in the context of AI applications? I think we have to trust each other. I think that's the most important thing. A lot of scientists do believe that.
Starting point is 01:27:07 That's why they share their research, and there's open source and everything: the belief that human beings are good, right? And if we are all part of a sort of improve-humanity type of movement using AI, I think we'll all feel better. We'll all feel empowered. We can eliminate diabetic blindness, for instance,
Starting point is 01:27:30 quite easily in all these countries. Pediatric blindness. Yeah. So all these many things that are not hard to do, but with AI, it's so much easier. You can educate everyone with a custom, you can make very much a utopian society.
Starting point is 01:27:46 Now, but our society has to change. That's the thing. You can't be stuck in the mud and say, you know, now, Yeah, so like for instance, I must, if you go to the Bible, there was this thing called the Tower of Babel, if you remember that. I think the Quran might have it to something like that. What it is is humans got together and tried to build some power. And so that's where all the different languages came about.
Starting point is 01:28:17 I mean, this is mythology, but nevertheless, I'm bringing it up as an example, right? So it dispersed everybody, and now we have different languages. We are talking different things. AI and ChatGPT, well, not ChatGPT, but the next GPT, can bring them all together. It's the inverse of the Tower of Babel,
Starting point is 01:28:36 if you think about it. Wow. Where an Indonesian can talk to a Japanese person, can talk to a sub-Saharan African, and exchange ideas, because language is no barrier, because your AI can just translate for you. And so you're just a human being now, okay?
Starting point is 01:28:54 This is a really cool point, because a lot of us in Southeast Asia... Yes. ...are not equipped with the ability to speak English, or equipped with the ability to speak an international language. I would guess less than 5%, with the exception of the Philippines and Singapore, which are predominantly English-speaking, right? Right. You know, I've been thinking what it would take for, you know,
Starting point is 01:29:21 Southeast Asia is a population of 700 million people. What would it take for 350 to 400 million people in Southeast Asia to be able to speak an international language? Right. And I just think AI could be the solution. Yeah, because it can give you tutoring. It can translate every language, you know. And I believe that's a great equalizer.
Starting point is 01:29:43 Right. In the sociological sense. Yeah. And I think that's very positive. I think that's the most positive thing that I can think of, you know: while it can make the rich richer, it can
Starting point is 01:29:57 also bootstrap all of us above a certain level, right? And maybe we get to a place with $5 of income per day, you know, on average, you know? Like, make a GDP per capita of $10,000 for
Starting point is 01:30:13 everybody in the world. It's game changing. It's game changing. I mean, if you can compress 10 months worth of lab testing to a week. Yes. You know, instead of thinking it's going to take us 10 years to learn a language. Yeah. You might be able to do that in months or weeks. Exactly.
Starting point is 01:30:29 Right. Then you don't need any fancy neurotransplants or any of that kind of stuff. Yeah. Yeah. All you need is just a smartphone. In the absence of good teachers, you just use AI capability. Yeah. Absolutely.
Starting point is 01:30:42 That makes me move forward to the utopian side of the game. Yeah. I think so. Yeah. Yeah. Yeah. I'm a utopian believer. With caution.
Starting point is 01:30:54 There's caution. Thank you. Hey, Yasantha, thank you so much. Thanks, thank you, Gita. It's been great. Okay, good. Thank you very much. That was Yasantha Rajakarunanayake, in the Bay Area.
Starting point is 01:31:06 Thank you. This is Endgame.
