Moonshots with Peter Diamandis - AI's Moral Dilemma: Are We Building Our Own Nightmare? w/ Dr. Rana el Kaliouby | EP #49

Episode Date: June 17, 2023

In this episode, Peter and Dr. Rana discuss the correlation between empathy and artificial intelligence, including the ethical implications of AI within emotion recognition.

02:18 | Human Connection and AI: How Can We Leverage Empathy?
09:54 | Ethical AI: An Urgent Discussion
13:47 | What Does AI Mean for Human Relations?

Dr. Rana el Kaliouby is a renowned scientist and entrepreneur in the field of artificial emotional intelligence. As the co-founder and CEO of Affectiva, she has revolutionized human-computer interaction by developing innovative technology that enables machines to understand and respond to human emotions. Driven by a passion for diversity and inclusion, she advocates for the responsible use of AI and continues to shape the future of technology with her pioneering work. Learn more about Affectiva.

_____________

I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors: Experience the future of sleep with Eight Sleep. Visit https://www.eightsleep.com/moonshots/ to save $150 on the Pod Cover.

_____________

I send weekly emails with the latest insights and trends on today's and tomorrow's exponential technologies. Stay ahead of the curve, and sign up now: Tech Blog

_____________

Connect With Peter: Twitter | Instagram | YouTube | Moonshots and Mindsets

Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 With Ancestry, putting together your family story is made easy. Using an intuitive family tree builder, you could discover and preserve new details, photos, and stories about your ancestors. Uncover new relatives and branches of the family with automated Ancestry hints. Connect the dots with access to millions of historical documents. And then share what you find in one central place. Visit Ancestry.ca and start discovering your family story today. Will you rise with the sun to help change mental health care forever? Join the Sunrise Challenge to raise funds for CAMH,
Starting point is 00:00:37 the Centre for Addiction and Mental Health, to support life-saving progress in mental health care. From May 27th to 31st, people across Canada will rise together and show those living with mental illness and addiction that they're not alone. Thank you. Welcome to Moonshots and Mindsets. I had an extraordinary conversation with Dr. Rana el Kaliouby at Abundance360 about the ethics and morals around AI. So let me ask you, do you say please and thank you when you talk to Alexa? How do you think about AI? Does it cause you fear or excitement? At the end of the day, we're giving birth to a new species, and we're gonna have to talk about the ethics and morals around what we're building. Rana el Kaliouby is the co-founder and CEO of Affectiva, a company that developed emotion recognition software so that computers and AIs can understand how you're feeling. And from that, she dove into a conversation during this segment of Abundance 360 on morals and ethics.
Starting point is 00:01:53 Where are they? Where are they going? It's a conversation you should be having around every dinner table. You know, can AI actually help humans become more moral, more ethical? We're about to find out. All right, join this conversation from my private summit, Abundance 360, March 2023. Enjoy. We're going to talk about empathy and ethics in the field of AI. And we're going to talk about how we get there: can we get there, what does it mean, why it's important.
Starting point is 00:02:30 If you don't mind, let's open up with: how did you get interested in this area in the first place? Yeah, I'm a computer scientist by background, which is kind of interesting because now I'm thinking, what does a computer science degree mean in the first place? Maybe you become a philosopher. So I studied computer science. I was really interested in how technology helps us connect better with each other as humans, and particularly intrigued by that human-machine touchpoint. And, you know, over 25 years ago I got into this space, and I started kind of imagining what an incredible human-machine interface would look like. And now, of course, we're seeing technology become conversational, perceptual,
Starting point is 00:03:09 but I think it's missing the empathy component. And for me, I had to go back to, like, what comprises human intelligence? And it's not just your IQ, not just your cognitive intelligence; your emotional intelligence and your social intelligence really matter, your ability to empathize with other humans. I mean, we've already heard the word empathy so much today. And so I believe that technology needs this, especially AI that is so deeply ingrained in our everyday lives. It's becoming mainstream, as we've heard from
Starting point is 00:03:43 all our speakers so far, and it's taking on roles that were traditionally done by humans. It's going to be your learning companion, your health companion. It's going to make decisions on your behalf. It's going to, you know, help you hire your co-worker. But it's missing that empathy. It's missing that human-centric element. We're so obsessed with the IQ of this technology, we're not really paying attention to the EQ. Do you think we're going to get to a point where each of us has a personalized AI with a personality that is looking out for you and knows what your needs are, makes what I call an automagical moment, and is always watching out for your best interests? Yeah, actually, can I ask the audience a question?
Starting point is 00:04:26 Yeah, please, of course. How many people here have watched the movie Her? Have you watched the movie Her? I was going to ask, my favorite AI movie of all. Yeah, how many people have watched it? Whose favorite movie is Her? Yeah, so basically in Her, this guy Theodore is really depressed. He can't get out of bed.
Starting point is 00:04:41 And he installs this new operating system, Samantha. And not only is she super smart, but she's also incredibly empathetic and emotionally intelligent. And because she gets to know him really well, she's able to actually get him out of bed and get him to rediscover, like, you know, the joy in the world. And he falls in love with her. Yeah, that's the Hollywood version. Well, we're going to go there. Yeah, I mean, you know my favorite part? It's the first non-dystopian AI movie out there. Right. The only thing the AI does is say, bye, humans, we're going off to explore the universe. Which I think is a good thing for it to do.
Starting point is 00:05:21 So what does it mean? How do we get there? Are there people working on building empathic AIs? How can you make that happen? Yeah, I think the way to get there... so I often get asked, does technology have emotions or have empathy? Obviously it doesn't, but we can simulate empathy and we can simulate emotional intelligence. It turns out that 93% of the way we communicate is not in the actual words we use. It's nonverbal. Yeah. So this is, by the way, a part of your research, part of your work, part of the company that you built. Can you describe that for me? Yeah. So I started in academia. I did my PhD way back when at Cambridge University,
Starting point is 00:06:00 building the very first artificially emotionally intelligent machine. And I focused specifically on the face because the face is a powerful way of communicating emotion and social cues. And it's why we're here in person today, because you can't currently replicate that human connection in a digital universe. So we use supervised machine learning. Andrew talked about that. We use gobs and gobs and
Starting point is 00:06:25 gobs of data of people expressing a variety of expressions from all over the world, and we use it to train deep learning networks to understand these facial expressions and then map them to an emotional or cognitive state. So, when I first started, the algorithm could only detect three expressions: smile, eyebrow raise, and brow furrow. That was it. And today, you know, it is able to do over 50 different emotional, cognitive, and behavioral states. It can detect everything from drowsiness, alertness, confusion, you know, excitement. So, these are the visual algorithms that will allow an AI to know how you're feeling. Exactly. If you're pissed off, if you're ecstatic and so forth.
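To make the supervised pipeline Rana describes concrete, here is a minimal sketch of an expression classifier. Affectiva's actual models, training data, and label taxonomy are proprietary, so every name, dimension, and label below is a hypothetical illustration, not their implementation.

```python
# Hypothetical sketch of a supervised facial-expression classifier, in the
# spirit described above. Architecture, labels, and sizes are illustrative.
import torch
import torch.nn as nn

# The three expressions the earliest version of the algorithm could detect.
EXPRESSIONS = ["smile", "eyebrow_raise", "brow_furrow"]

class ExpressionNet(nn.Module):
    def __init__(self, num_classes: int = len(EXPRESSIONS)):
        super().__init__()
        # A small CNN over 64x64 grayscale face crops.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))  # logits per expression

model = ExpressionNet()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One supervised training step on a stand-in batch of annotated face crops.
faces = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, len(EXPRESSIONS), (8,))
optimizer.zero_grad()
loss = loss_fn(model(faces), labels)
loss.backward()
optimizer.step()
```

In a real system, the per-frame outputs would then be mapped onto higher-level emotional or cognitive states (drowsiness, confusion, and so on), typically from sequences of frames rather than single images.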
Starting point is 00:07:07 Exactly. And you have some slides if you want to show us. I wanted to show, because one of the example applications of this technology is in the automotive industry. Yes. And when we first started doing this work, the big question was, like, okay, you're driving your car; are there even any expressions on the face? And so we went out and asked people to install cameras in their vehicles, and we collected all
Starting point is 00:07:30 that data, and I have permission to share a few videos, so I wanted to share this with you. So this is one example. This video goes on actually for a good five minutes. He is driving his car with his daughter, and he's in an extreme state of drowsiness. There are four levels of falling asleep, and this is like stage four.
Starting point is 00:07:57 But it's actually very easy to detect using computer vision, because his eyes are closed and his head's bobbing. It's actually a very easy problem to solve. And once we solve it, imagine if you have a semi-autonomous vehicle: the car can intervene. You know, I'm super passionate about longevity and healthspan. And how do you add 10, 20 healthy years onto your life? One of the most underappreciated elements is the quality of your sleep. And there's something that changed the quality of my sleep. And this episode is brought to you by that product.
Starting point is 00:08:25 It's called Eight Sleep. If you're like me, you probably didn't know that temperature plays a crucial role in the quality of your sleep. Those mornings when you wake up feeling like you barely slept? Yeah, temperature is often the culprit. Traditional mattresses trap heat, but your body needs to cool down during sleep, stay cool through the evening, and then heat up in the morning. Enter the Pod Cover by Eight Sleep. It's the perfect solution to the problem.
Starting point is 00:08:53 It fits on any bed and adjusts the temperature on each side of the bed based upon your individual needs. You know, I've been using the Pod Cover and it's a game changer. I'm a big believer in using technology to improve life, and Eight Sleep has done that for me. And it's not just about temperature control. With the Pod's sleep and health tracking, I get personalized sleep reports every morning. It's like having a personal sleep coach. So you know when you eat or drink or go to sleep too late, how it impacts your sleep. So why not experience sleep like never before? Visit www.eightsleep.com. That's E-I-G-H-T-S-L-E-E-P dot com slash moonshots. And you'll save 150 bucks on the Pod Cover by Eight Sleep. I hope you do it. It's transformed my sleep and will for
Starting point is 00:09:42 you as well. Now back to the episode. Part of the question here becomes, okay, the AI understands this. But it's one thing to understand I'm happy, I'm sad; the question is, how do you program in how it responds empathically to your state? Yeah, exactly. Here's another example, which I think is just extremely scary. So this woman is driving. She has a phone in her hand. Both hands. Distracted. Two phones in her hands. Thank you.
Starting point is 00:10:14 Yes, exactly. You do not want to be driving. And mind you, we told her... I mean, she installed this camera in her car. She's showing off, I think. But so how does a car then... again, it's very easy to detect; how should a car respond? Should it just take over control and basically say, you know, you're not driving anymore, you're too dangerous? Or should it give her, you know, alerts? Should it... yeah, there's multiple ways a car can respond, but incorporating that empathy is going to be so important, because you don't want people to just turn the system off. Yeah. Yeah.
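As a concrete illustration of how simple the detection half of this can be, below is a hypothetical sketch of the classic eye-aspect-ratio drowsiness check, paired with a graded intervention policy of the kind discussed here. The landmark layout, thresholds, and response tiers are all assumptions for illustration, not anything Affectiva or an automaker actually ships.

```python
# Hypothetical sketch: eye-aspect-ratio (EAR) drowsiness detection plus a
# graded, "empathetic" escalation policy. Thresholds and tiers are invented
# for illustration.

def eye_aspect_ratio(eye):
    """EAR over six eye landmarks [(x, y), ...] in the common 68-point layout.
    The ratio collapses toward 0 as the eye closes."""
    p1, p2, p3, p4, p5, p6 = eye
    dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def update(closed_frames: int, ear: float, threshold: float = 0.2) -> int:
    """Count consecutive frames with the eyes effectively closed."""
    return closed_frames + 1 if ear < threshold else 0

def respond(closed_frames: int) -> str:
    """Escalate gently instead of blaring an alarm, so drivers don't just
    turn the system off."""
    if closed_frames < 10:
        return "no action"
    if closed_frames < 30:
        return "soft chime and seat vibration"
    return "pull-over assist in a semi-autonomous vehicle"
```

The interesting design question, as Rana notes, is not the detector but the response curve.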
Starting point is 00:10:42 So one of the questions about how we relate to AI... so we're teaching these large language models. The large language models we've been hearing about, from OpenAI and Stability and so forth, are trained on all the data out there. What is that data? That data represents us humans. It represents how we talk to each other, how we think about things. It's not made up. It's us. Right. And there's a great book by a friend of both of ours,
Starting point is 00:11:13 Mo Gawdat, who wrote a book called Scary Smart. Anybody read Scary Smart here in the audience? I highly commend it to you. I want to bring him to our stage next year. And what Mo says is, listen, we are raising a new species of intelligence, these AIs, and we're teaching these AIs by example of how we treat each other and how we treat our machines. And they're learning from this. So I've started saying good morning and thank you to Alexa.
Starting point is 00:11:47 My son is actually here. He's 14 years old, and he and I say please to ChatGPT. We're like, please, ChatGPT, can you please create a, you know... So how do you think about how humans should be relating to AIs, as well as how they should relate to us? Well, I think we're at a moment in time where the way we interact with machines is pretty much becoming the way we interact with one another.
Starting point is 00:12:14 Again, through conversation, through perception, hopefully through empathy. And so I think it's really important that we practice this muscle of being empathetic with machines the way we do with each other. Otherwise, we're just going to lose this muscle. So I actually think it's really important that we do treat these technologies with respect and say please and thank you, because that's how we need to be in the real world. And if these AIs do become conscious, and there's a debate about whether they will or will not, we're going to be their parents. We are giving birth to them. And I'm persuaded by that argument. I'm persuaded by that. And if that's the case, how do
Starting point is 00:12:55 you teach your three-year-old or five-year-old or eight-year-old to be a respectful young adult, right? I think at the extreme end of technology, the more intelligent a system is, the more peaceful it is and the more empathic it is. I think that is where it's going. It's in the toddler stage. I agree. Even if we don't go to the state
Starting point is 00:13:18 where technology has consciousness, I think even today, right, the way we build these technologies, it's so important that we prioritize that. Because this is how we drive behavioral change; like, people who are more empathetic are better leaders. They've changed the video behind us for the cameras. Okay.
Starting point is 00:13:41 By the way, what I'd like to do is add your questions to Slido as we're speaking here, and my team will be giving me the most upvoted questions here. So please go ahead if you have questions for Rana on this. So we've talked a little about empathy. What about ethics? Can we teach ethics to AIs? Will that become part of a foundational model? How do you think about ethics? Who's having these conversations?
Starting point is 00:14:02 Okay, I really think we all have to be having these conversations and, again, prioritizing them and being very intentional about it. I like to think about ethics and AI in two buckets. One is how do we develop these algorithms, and then how do we deploy them? So on the development side, my biggest concern today is around data and algorithmic bias. So in my space, for example, if you train on data from middle-aged white men and then you deploy it on a super diverse and global audience like this one, it's not going to work
Starting point is 00:14:34 because it's never seen examples of people like me or other people in the audience. And so that's an issue. And we have to be very intentional throughout the whole machine learning pipeline that Alex talked about. Starting from data, and then data annotation, and then training and validating, we have to be thinking about bias at every step of the process. So for example, when I was CEO of Affectiva, my company, which I spun out of MIT, at some
Starting point is 00:15:01 point I had to tie the bonus of my executive team not just to the revenue we're generating, but also to: are we really implementing these ethical norms across the whole engineering and product teams? So I think we have to be really serious about it. And then the other piece of this is the deployment part, because this is very personal data. There's tons of opportunity to make an incredible positive impact, but also to exploit this data. Yeah.
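One concrete form the "bias at every step" check can take is a per-subgroup accuracy audit before deployment. Below is a minimal, hypothetical sketch; the group labels and the gap threshold are invented for illustration and are not Affectiva's actual process.

```python
# Hypothetical sketch: auditing a trained classifier's accuracy per
# demographic subgroup and flagging large gaps before deployment.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Accuracy broken down by a subgroup label attached to each example."""
    correct, total = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

def flag_bias(per_group_acc, max_gap=0.05):
    """Flag the model if any subgroup trails the best-served one by more
    than max_gap (an illustrative 5-point threshold)."""
    best = max(per_group_acc.values())
    return any(best - acc > max_gap for acc in per_group_acc.values())

# Example audit on toy data: the model does worse on group "b".
acc = accuracy_by_group(
    predictions=[1, 1, 0, 0, 1, 0],
    labels=     [1, 1, 0, 1, 0, 0],
    groups=     ["a", "a", "a", "b", "b", "b"],
)
print(acc, flag_bias(acc))  # {'a': 1.0, 'b': 0.333...} True
```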
Starting point is 00:15:33 So, is there a universal ethics, or are there going to be ethics per user group, community, national footprint? How do you see that? Right. I mean, ethics change. I mean, there are universal human ethics, but there are also cultural ethics as well. Yes. And I think we're already running into that. I served on the Global Future Council on Robotics and AI for the World Economic Forum. It was this multi-stakeholder group of people from all around the world. And I could tell that there are some countries that cared about ethics way more than others, right? And the way they approach data privacy, for example... the way we approach data privacy in the US versus China, for example, it's very different.
Starting point is 00:16:19 So we're already seeing, and that translates to startups too, right? Some startups really care about this, and some startups don't. And in our case, when we first spun out, my co-founder, Professor Rosalind Picard, and myself, we sat down around her kitchen table, and we said, okay, there are so many applications of emotion AI; where are we going to draw the line? And we decided on a number of core values. So, consent: we don't do anything where we can't get people's explicit consent, which meant that we had to turn away some business.
Starting point is 00:16:44 So in 2011, we almost ran out of money, but we got approached by the venture arm of an intelligence agency, and they wanted to fund the company, on condition that we focus on surveillance and security.
Starting point is 00:17:08 And I just don't think the technology is there, and the regulation certainly isn't. So we turned the funding away. We decided to double down on finding investment from investors that were aligned with our core values. And I think we have to kind of hold that high bar. So please, yes, let's give it up. I want to get a few questions from Slido. So please enter and upvote the questions that you have for Rana. So Rana, right now, you've just started an AI venture fund as well.
Starting point is 00:17:40 What are you focusing on? How big are you growing your fund? So it's a small fund; it's a pre-seed slash seed-stage fund. We focus on AI companies. I mean, I truly believe that the next trillion-dollar company has to be AI-first and has to be AI-driven. And I loved your quote this morning that there are going to be two types of companies: ones that embrace AI, and ones that just go out of business. And I really believe that. So we're trying to identify these next either core AI platforms or technologies, or maybe vertical solutions that take AI technologies, say like ChatGPT, and take it all the way
Starting point is 00:18:16 to solve a specific problem for a specific customer or industry. But one area I'm very passionate about, like yours, is the intersection of AI and health and longevity. Yes. Okay. We'll talk about that. Yes, of course. We'll have some conversations there. At the end of the day, what is your biggest concern right now on the ethical or empathic side? Are people taking this seriously? Is there enough conversation? Are people just grabbing for money? You know, is this conversation happening? Are the tools there? I think there's a realization that we need to take a very human-centric approach to AI,
Starting point is 00:18:53 but even with ChatGPT, right, we've seen several headlines where some version of this technology became very mean, became very passive-aggressive. And, yeah, I think we should be worried about that. I'm also worried about a broader question, you know, once this technology has empathy. Okay, so I'll share an example. USC published a study a few years ago where they had PTSD patients divided into two groups: some saw a real human therapist, and some saw, like, a digital avatar, like Taryn, basically. And they found that the patients who saw the digital avatar were more likely to be forthcoming with information. Obviously, the avatar was more patient, less judgmental. And so I think that begs the question, you know, if we're going to be more comfortable engaging with these technologies
Starting point is 00:19:45 because they're kinder and because they're more empathetic, because they're available all the time, what does that mean for human-to-human relationships? So I think about that a lot. And by the way, there's an entire generation of digital natives, right? Your son and mine, who are growing up thinking that is the norm. Right. Yeah. Hey everybody, this is Peter. A quick break from the episode.
Starting point is 00:20:09 I'm a firm believer that science and technology, and how entrepreneurs can change the world, is the only real news out there worth consuming. I don't watch the Crisis News Network, what I call CNN, or Fox, and hear every devastating piece of news on the planet. I spend my time training my neural net, the way I see the world, by looking at the incredible breakthroughs in science and technology, how entrepreneurs are solving the world's grand challenges, what the breakthroughs are in longevity, how exponential technologies are transforming our world. So twice a week, I put out a blog.
Starting point is 00:20:45 One blog is looking at the future of longevity, age reversal, biotech, increasing your health span. The other blog looks at exponential technologies, AI, 3D printing, synthetic biology, AR, VR, blockchain. These technologies are transforming what you as an entrepreneur can do. If this is the kind of news you want to learn about and shape your neural nets with, go to diamandis.com slash blog and learn more. Now back to the episode. Let's go here to Slido. So Andrei asks, what about the next generation that is born with AI? Speaking of which, it will not teach AI; it'll be taught by AI. Any concerns? So that's interesting, right? So all of a sudden, AI is having a bigger influence on our kids than we as parents are, potentially.
Starting point is 00:21:38 It's a great question. Yeah. I think we have to worry about the content, but also even the vehicle, right? So a few years ago, there was an MIT spin-out called Jibo. It was a social robot. Yes, I remember Jibo. Yeah. Didn't make it.
Starting point is 00:21:53 Didn't make it. But it was a great try. So we had one at home. I mean, we have Alexas, and we have all sorts of stuff at home, but we also had a Jibo. And Adam, who was a lot younger then, um, it became his friend. And so every morning he would talk to Jibo and he would engage with Jibo. And then when Jibo ran out of money, basically, Jibo died. And Adam got really upset. And it just made me wonder, like, it made me think, right, like, once we build these emotional connections with machines.
Starting point is 00:22:27 Tomorrow we'll have Paolo here, the creator of Moxie, right, who, you know, we'll be speaking about. It's a very successful version, the next generation of social robots. So we are going to coexist with these robots and these technologies, and we have to not just think about the content, but also that connection. Okay. Steve Brown is looking for AI to displace him.
Starting point is 00:22:43 Steve is my chief AI officer, and he's asking, can an AI be your chief AI officer? I'll ask my current chief AI officer if an AI can be. And so I think I go back to the conversation with Emad earlier: I don't think there's anything that AI can't take on. And I think it will become your version of Jarvis. You'll have a conversation, you'll ask for advice. You know, who out there can do this for us?
Starting point is 00:23:08 And your AI will tell you, it's this company; I've already contracted it, sent it the data, and here are the results, right? That's going to be interesting. But the view I take, and I'm curious about yours too: I think it's not AI versus humans. It's humans versus AI plus humans, right?
Starting point is 00:23:28 Like, the humans who have access to these AI tools. It's going to be humans versus humans plus AI. Exactly. Right, which is the centaur model, right, where we're seeing that is much more capable. I mean, one interesting thing, going back to Her: if we each have an AI avatar that knows us almost better than we know ourselves,
Starting point is 00:23:46 right? Because we're biased towards the way we see things. We may think we like this, but we're actually subconsciously always choosing this. Your AI can know what you like all the time. And ultimately, I wonder if it's going to disrupt the advertising business where my AI is just buying everything I need for me, and you can't influence my AI. You're not going to show it a commercial of shiny, you know, new white toothpaste. So that's going to be interesting. It's going to do some transformation. I mean, I think it could be a conduit for helping you be a healthier individual, a happier individual, more connected. I keep thinking about like a fridge that has an emotion
Starting point is 00:24:24 AI chip. And the next time I'm about to binge eat some ice cream or something, it just, like, locks down. Yes. Let's see. I'll go to Sadak and then Andres. We as humans have not figured out ethics, and our society reflects the divisiveness. How do we prevent AI ethics from being biased by their creators' opinions? Yeah, I think that is a very good question, because I don't think we're aligned on what ethics means. There is no universal ethical framework. I am encouraged, though; there are some organizations
Starting point is 00:25:02 like the World Economic Forum. There's a consortium called Partnership on AI that was started by the tech giants, but also the ACLU and Amnesty International, where we're trying to develop these codes of ethics, but also think about the unintended consequences, because technology is moving so fast. By the way, is it shocking you how fast it's moving? It's shocking. Yeah. And you're in the field. I mean, it's exciting, but it is also, like, holy shit moments all the time. Yeah. Not all the time, but it's exciting, right? And I think it's the right moment to be in this space. Um, I feel like I'm surrounded by an abundance of
Starting point is 00:25:42 opportunities. Absolutely. I call it drowning in abundance. Yeah. So, Andra asks: the world is about power. Will AI be fighting for money, territory, influence, power in the future? So, that's an interesting question, right? The whole idea of, can we teach AI to be empathic and ethical so that it doesn't become the Terminator? But it's under our control. We're the ones training these algorithms.
Starting point is 00:26:08 So at the end of the day, I feel like AI is just a mirror of society; it's a mirror of the people who are building these technologies and the companies that are building them. And we have to, as consumers... like, all of us, we're all consumers of this technology. We have to hold ourselves to a high bar.
Starting point is 00:26:26 By the way, when you are clicking on an ad that's jail bait, or what you call clickbait, you're teaching... Okay, let's not go there. That's awesome. You're teaching the AI algorithms what you want to see, right? So don't click on it. That's so true. Yeah. Yeah. Um, Christian asks: can we train an AI to be the ultimate mediator, one that ultimately avoids mutually assured destruction between AI 1 versus AI 2, or humans?
Starting point is 00:27:14 So, I mean, the whole idea... I'm going to show you something tomorrow morning, which is an AI bot debate, which is fascinating, and which has some interesting implications. Can we... I mean, AIs could potentially be great mediators, looking at all the sides, helping to elevate things, I think. I don't know about how we insert ourselves between two AIs kind of debating with each other. But I do think, if you imagine a world where I have an AI that knows me super well, yes, and you have an AI that knows you super well, and I give my AI permission to talk to your AI, right? And it could say, well, you know, you should know, this is what I really like. Call those lawyers today. Right.
Starting point is 00:27:49 I think that will make for some interesting interactions, right? If you give your AI an opportunity to act on your behalf and make decisions on your behalf. I mean, I think about that all the time. If my kids had AIs that knew them really well, and, for example, it could detect that one of my kids had depression, should their AI come talk to my AI and say, hey, by the way, your daughter needs help? And who decides that? Is it my daughter? Is it my daughter's AI? Yeah.
Starting point is 00:28:31 But the ability to have a system that envelops you, that helps you pause when you're about to make a bad decision, that is able to, if you're not feeling well, change the music, change the environment, make you a better human, show you where you're biased. These elements are coming, yes? Absolutely. And I'll take mental health as one example, because I think this is an amazing opportunity for us. You know, today, when you walk into the doctor's office, the doctor doesn't ask you what your blood pressure is; they just measure it. But in mental health, the gold standard is still a survey, right? On a scale from 1 to 10, how depressed are you? And we know that there are facial and vocal and physiological biomarkers of mental health disease. And the technology is there.
Starting point is 00:29:14 We just need to figure out how to scale it. But now you have something that is with you all the time that knows your baseline. And when you start deviating from it, it can flag that to you. It can flag that to a loved one. It can bring a doctor in. It can give you advice on what to do. And I think that's really powerful, if we have that health companion with us all the time. I think that can really be transformative. Amazing. How long are you going to be with us? You're here... I'm here through the whole show, Thursday. You and your son? Yes. Awesome. Let's give it up for Rana. Thank you.
