Young and Profiting with Hala Taha - Fei-Fei Li: The “Godmother of AI”, Keeping Humanity at the Heart of the AI Revolution | Artificial Intelligence E285

Episode Date: April 22, 2024

At 15, Fei-Fei Li transitioned from a middle-class life in China to poverty in America. Despite the pressures of her family's financial situation and her mother's ailing health, her knack for physics never wavered. She went from learning English as a second language to attending and working at prestigious institutions like Princeton and Stanford. Today, she is among a handful of scientists behind the impressive advances of artificial intelligence in recent times. In this episode, she breaks down her human-centered approach to AI and explores the future of the technology.

Dr. Fei-Fei Li is a professor of Computer Science at Stanford University and the co-director of the Stanford Institute for Human-Centered AI. She is the creator of ImageNet and the ImageNet Challenge, a key catalyst of the latest developments in deep learning and AI. Sometimes called the "Godmother of AI," she is a pioneer in early computer vision research. With over 20 years at the forefront of the field, Dr. Li is focused on AI research, education, and policy to improve the human condition. She is the author of The Worlds I See, one of Barack Obama's recommended books on AI. Her work has been featured in various publications, including the New York Times, Wall Street Journal, Fortune Magazine, Science, and Wired Magazine.

In this episode, Hala and Fei-Fei will discuss:
- The current capabilities of AI
- The difference between machine learning and AI
- The training process for AI models
- The gaps in our knowledge about how AI learns
- Why ChatGPT fails at higher-level reasoning like math
- The biological inspiration for vision in computers
- Fears and hopes associated with AI
- The human element of jobs AI can't replace
- Augmentation of human capabilities through AI
- The three pillars of her human-centered AI framework
- Responsible development and use of AI
- The roadblocks to be aware of when using AI
- Her advice to young entrepreneurs navigating the AI world
- And other topics…

Sponsored By:
Shopify - Sign up for a one-dollar-per-month trial period at youngandprofiting.co/shopify
Indeed - Get a $75 job credit at indeed.com/profiting
Yahoo Finance - For comprehensive financial news and analysis, visit YahooFinance.com

Active Deals - youngandprofiting.com/deals

Key YAP Links
Reviews - ratethispodcast.com/yap
Youtube - youtube.com/c/YoungandProfiting
LinkedIn - linkedin.com/in/htaha/
Instagram - instagram.com/yapwithhala/
Social + Podcast Services: yapmedia.com
Transcripts - youngandprofiting.com/episodes-new

Entrepreneurship, entrepreneurship podcast, Business, Business podcast, Self-Improvement, Personal development, Starting a business, Strategy, Investing, Sales, Selling, Psychology, Productivity, Entrepreneurs, AI, Artificial Intelligence, Technology, Marketing, Negotiation, Money, Finance, Side hustle, Startup, Mental health, Career, Leadership, Mindset, Health, Growth mindset, ChatGPT, AI Marketing, Prompt, AI in Action, AI in Business, Generative AI, AI for Entrepreneurs, Future of Work, AI Podcast

Learn more about YAP Media's Services - yapmedia.io/

Transcript
Starting point is 00:00:00 There's a quote from the 1970s about AI. It's true today. It says that the most advanced computer AI algorithm will still play a good chess move when the room is on fire. It's a quote to show that machines are programmed to do tasks. But unlike humans, we have contextual situational awareness, and that is not what AI is today. The inaugural Sequoia Professor in the Computer Science Department at Stanford.
Starting point is 00:00:33 And co-director of Stanford's Human-Centered AI Institute. She's published more than 300 scientific articles, was vice president at Google and chief scientist of AI/ML at Google Cloud. Human-centered AI is a framework for developing and using AI that puts human values, human dignity at the center, so that we're not developing technology that's harmful to humans. I'm not naive. I know technology is a double-edged sword.
Starting point is 00:01:03 I know that our civilization, our species, is always defined by the struggle of dark and light and by the struggle of good and bad. So from that point of view, any hope I have for AI is not about AI. It's about humans. And the hope is that when we create machines that resemble our intelligence, we should…
Starting point is 00:01:39 YAP fam, welcome to the show, and I'm super pumped to be digging even deeper today into the field of artificial intelligence and how it might impact all of our lives in the years to come. Today we have a special guest who has played a big role in the development of AI and will likely play an even bigger role
Starting point is 00:01:54 in how it gets used in the future. Dr. Fei-Fei Li is a professor of computer science at Stanford University as well as the co-director of the Stanford Institute for Human-Centered AI. Her work focuses on advancing AI research, education, and policy to improve the human condition.
Starting point is 00:02:12 Dr. Li's new book is called The Worlds I See, and it weaves together her personal narrative with the history and development of AI. Today we're going to talk about her human-centered approach to AI. We're going to discuss how she's creating eyes for AI with computer vision, and we'll also learn what the future holds for the promising yet sometimes scary technology of AI. Dr. Li, welcome to Young and Profiting Podcast. Thank you, Hala. I'm very excited to join this show. Likewise, I'm so honored to talk to somebody like you,
Starting point is 00:02:42 given all your credentials. In fact, Wired named you one of a tiny group of scientists, perhaps small enough to fit around a kitchen table, who are responsible for AI's recent remarkable advances. So it feels like AI is changing every day. There are new developments all the time. So my first question to you is, can you walk us through the development of AI?
Starting point is 00:03:04 What can it currently do now? And what can't it do right now? Yeah. Great question. It's true. Even as an AI scientist, I feel that I can hardly catch up with the progress of AI, right? It is a young field of around 70 years old,
Starting point is 00:03:21 but it's progressing really, really fast. So what can AI do right now? First of all, it's already everywhere. It's around us. Another name for AI that is a little less of a hype name is machine learning. It's really just mathematical models built by computer programs so that the program can iterate and learn to make the model predict or decide on data better. So it's fundamentally machine learning.
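Her one-sentence definition, a mathematical model that a program iterates so it predicts data better, can be sketched in a few lines. This is the editor's toy illustration, not code from the episode; the data points, learning rate, and loop count are all invented:

```python
# Toy machine learning: fit y = w * x to data by iterating on mistakes.
data = [(1, 2), (2, 4), (3, 6)]  # inputs paired with desired outputs (y = 2x)
w = 0.0                          # the model's single parameter, an initial guess

for _ in range(200):             # iterate...
    for x, y in data:
        pred = w * x             # ...predict...
        error = pred - y         # ...measure how wrong the prediction was...
        w -= 0.05 * error * x    # ...and nudge the parameter to shrink the error

print(round(w, 2))  # prints 2.0: the model has learned the pattern in the data
```

Scaled up to billions of parameters, this same iterate-on-mistakes idea underlies the recommendation and routing systems she lists next.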
Starting point is 00:03:52 For example, if we shop on the Amazon app, the kind of recommendations we get is through machine learning or AI. If you go from place A to place B, the algorithm that gets you through the roads to map out the path is machine learning. If you go to Netflix, there is a recommendation system; that's machine learning. If you watch a movie, there is a lot of machine learning, computer vision, computer graphics to make special effects, to make animations; that's machine learning. So machine learning and AI are already everywhere. What can it not do? Well, no machines today can help me fold my laundry or cook my omelette.
Starting point is 00:04:38 It cannot take on complex human reasoning. It cannot create in the way humans create, in the combination of reasoning and logic, but also beauty and emotion. There's a quote from the 1970s about AI, and I think that quote still is true today. It says the most advanced computer AI algorithm will still play a good chess move when the room is on fire. It's a quote to show that machines are programmed to do tasks. But unlike humans, we have a much more fluid, organic, contextual, situational awareness of our own thinking, our own emotion, as well as the surroundings. And that is not what AI is today. So insightful. And I love that you said that it's like an evolution of machine learning,
Starting point is 00:05:37 because I always wonder, well, what's the difference between machine learning and AI? It sounds pretty similar. So machine learning was almost like the basics of AI, the tool of AI. Think about physics. In Newtonian times, the most important tool of physics was calculus. Yet we call the field physics. So artificial intelligence is a scientific field that is researching and developing technology
Starting point is 00:06:06 to make machines think like humans. But the tools we use, the mathematical computer science tools, are dominated by machine learning, especially neural network algorithms. So good. So AI is actually fresh on my mind because two days ago,
Starting point is 00:06:23 I interviewed Dr. Stephen Wolfram, and we talked about ChatGPT and how ChatGPT works. And he was explaining to me that when they were developing ChatGPT, what was surprising is that they found out that these simple rules would create all this complexity, that they could give ChatGPT simple rules and then it could write like a human.
Starting point is 00:06:44 And it turns out that we actually still don't really understand how AI learns, which to me is like mind-boggling. How did we create something and yet we don't even know how it really works? Can you elaborate on that a bit? Really, at the end of the day, there are things we understand, there are things we don't. So it's neither white box nor black box. I would call it a gray box.
Starting point is 00:07:08 And depending on your understanding of the AI technology, it's either darker gray or lighter gray. So the things we know is that it is a neural network algorithm that is behind, say, a ChatGPT model or a large language model. Of course, you hear the names of transformer models, sequence-to-sequence, and all that. At the end of the day, these models take data, like document data, and learn how the words and sometimes even subwords,
Starting point is 00:07:43 right, parts of words, are connected with each other. There are patterns to see, right? If you see the word "how," it tends to be followed by "are," and then that tends to be followed by "you." So "how are you" is a frequently occurring sequence. So that pattern is learned. And once you learn enough in a big, huge neural network, your ability to predict the next word when you're given a word is really quite amazingly high, to the point that it can converse more or less like a human. And because in the training data it has so much knowledge, whether it's chemistry or movie reviews or geopolitical facts, it has memorized all of them. So it can give out very, very good answers. So those are the things we know. We know how the algorithm works.
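The "how are you" example boils down to next-word statistics. As an editor's illustration (a hypothetical toy corpus and a simple bigram counter, far simpler than the neural networks she describes, but the same predict-the-next-word idea):

```python
from collections import Counter, defaultdict

# Tiny "training corpus": count which word tends to follow which.
corpus = "how are you . how are they . how is he . how are you".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1  # learn the sequence pattern

def predict_next(word):
    # Predict the successor seen most often after `word` during training.
    return following[word].most_common(1)[0][0]

print(predict_next("how"))  # prints are (seen after "how" more often than "is")
print(predict_next("are"))  # prints you
```

A large language model replaces these raw counts with a huge neural network trained on vastly more text, but the training signal, predicting the next token, is the same.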
Starting point is 00:08:41 We know it needs training. We know that it's learning and predicting patterns. What we don't know is, because these models are huge, there are billions and billions, hundreds of billions of parameters. And then inside these models, there are these little nodes. Each one of them has a little mathematical function that connects to the others. So how do we know exactly how these billions and billions of parameters learn the pattern, and where the pattern is stored, and why sometimes it hallucinates a pattern versus
Starting point is 00:09:20 it gives out a correct answer? There's not yet a precise mathematical explanation. We don't know. There's no equation that can tell us, oh, I know exactly why at this moment ChatGPT gives you the words "how are you" versus "how is he," you know? So that's where the grayness comes from. These are large models with behaviors that are not precisely explained mathematically. Talk to us about how AI models are trained. How does AI learn, typically? Typically, an AI model is given a vast amount of data. And then some of the data are labeled with human supervision. Like if I give AI models millions and millions of images, some are labeled cats, dogs, microwaves, chairs, and all that. And they learn to associate the pattern with the labels. Sometimes, in recent years, especially in the language
Starting point is 00:10:27 domain, we use what we call self-supervision. You give it millions and millions, trillions of documents, and it just keeps learning to predict the next syllable, the next word, because all the training data is showing it all these sequences of words. And there, you don't have to give additional labels. You just give the documents, and that's called self-supervised learning. So whether it's supervised with additional labels or self-supervised without additional labels, it starts with data. Now, data goes into the algorithm, and the algorithm has to have an objective to learn. Typically, in a language model, the objective is to predict the next syllable as accurately as the training data shows. In the case of images with cat labels, for example,
Starting point is 00:11:25 the objective is to predict, for an image that has a cat, the right label, cat, instead of the wrong label, microwave. And then, because it has this objective, during training, if it makes a mistake, if it didn't predict the next word right or if it labeled the cat wrong, it goes back and iterates and updates its parameters based on the mistake.
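The loop she describes, an objective, a mistake, a parameter update, and a stopping point, can be made concrete with a toy supervised learner. This sketch is the editor's illustration, not code from the episode; the features, labels, and update rule are all invented:

```python
# Toy supervised training loop: learn to separate two labeled classes
# (stand-ins for "cat" vs. "microwave") from two made-up numeric features.
examples = [((1.0, 0.2), 1), ((0.9, 0.1), 1),   # label 1 = "cat"
            ((0.1, 1.0), 0), ((0.2, 0.9), 0)]   # label 0 = "microwave"

w = [0.0, 0.0]  # the parameters the model will learn

for _ in range(50):  # keep iterating until the stop criterion (here: 50 passes)
    for (x1, x2), label in examples:
        pred = 1 if w[0] * x1 + w[1] * x2 > 0 else 0  # the model's guess
        mistake = label - pred                        # 0 if right, +/-1 if wrong
        # Go back and update the parameters based on the mistake.
        w[0] += mistake * x1
        w[1] += mistake * x2

# After training, the learned parameters label a new, unseen input.
print(1 if w[0] * 0.95 + w[1] * 0.15 > 0 else 0)  # prints 1 ("cat"-like features)
```

Real models do the same thing with gradient-based learning rules and billions of parameters rather than two.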
Starting point is 00:11:50 It has some mathematical rules, or learning rules, to update. And then it just keeps doing that till humans ask it to stop, or it no longer updates, whatever the stopping criterion is. And then you're left with a ginormous neural network that's already been trained by a ginormous amount of data. And in that neural network, it has all the parameters, the mathematical parameters, that it has already learned. Now you can take this, and now you have a new sentence coming in. And then it goes through this model, and because it has all the parameters it has learned, it predicts
Starting point is 00:12:29 what it should say given a new sentence. Like, "Hello, Hala, how is your breakfast today?" And it would predict, "I had a great breakfast today," or whatever. So that's how it's going to be used. So interesting. ChatGPT, it's just predicting the next word and the next word and the next word based on all the different patterns and trying to figure out what makes sense to come next. So that's super clear. What I don't understand with something like ChatGPT is that it's so
Starting point is 00:12:58 good at writing human language, but it's known to make simple math mistakes. How is it possible that it's good at doing human language, but then on math, for example, it's known to make stupid mistakes? It's because math, the way we do math in the human mind, is different from the way we do language. Language has a very clear pattern of sequence to sequence. Like if I say the word "how," the words "are" and "you" typically follow, but sometimes they don't, right? So I have to learn these patterns. But if I say the words "one plus," it's not like "five" typically follows or "two" typically follows, right? There is actually a deeper rule that one plus two equals three. Of course, when it has seen enough of that, it should predict "three" for today's language
Starting point is 00:13:51 model. And actually, it does. This is too simple an example. But the point is that math takes a higher level of reasoning than just following statistical patterns, and large language models by and large follow statistical patterns. So some of the mathematical reasoning is lacking. Totally makes sense. So you've got a new book.
Starting point is 00:14:13 It's called The Worlds I See. And you say that the worlds you see are in different dimensions. So can you talk to us about why you titled the book this way? Yeah. This title came about after I finished writing the book. And I realized the journey of writing the book is really peeling into different experiences. There is the world of AI that I experience as a scientist. The book is a coming of age of a young scientist,
Starting point is 00:14:46 so I experience the world of science in different stages. But there is also the world as an immigrant. I go through life in different parts of the world, and how do I handle or go through that? And then there is a more subtle but profound world, like learning to be a human. I know this sounds silly, but especially in the context of an AI scientist,
Starting point is 00:15:12 it's really important. Part of the book is exploring my journey of living with and taking care of my ailing parents, and how that experience built my own character, how we helped each other, supported each other, and, towards the end of the book, how that experience made me see my science in a different light
Starting point is 00:15:41 compared to maybe other scientists who haven't had this very profound human experience. So it really is different worlds that I experienced, and it's blended into the book. I love that, and I love how you call it a science memoir. And so you say that you're involved in the science of AI, but you're also involved in the social aspect of AI. So what do you mean by the social aspect exactly? I started in AI as a very personal journey. It's just a young science nerd who loves an obscure, niche, nobody-knows-about-it field. But I'm just fascinated, in a private way, by how do we make machines think? How do we make machines see? And with that, I was happy. And I would have been content with that through the rest of my life, honestly.
Starting point is 00:16:30 even if nobody in the world had heard of AI, I would be happily in my lab being a scientist. But what really changed is around 2017, 2018, I felt like I as a scientist, and the tech world, woke up and realized, oh, wow, this technology has come to a maturation point where it is impacting society. And because it's AI, it's inspired by human thinking, it's inspired by human behavior. It has so much human implication at the individual level as well as the society level.
Starting point is 00:17:10 So as a scientist, I feel I was thrust into a messier reality that I never really realized. Now, I have a choice. A lot of my fellow scientists would just continue to stay in the lab, which I think is very admirable and respected, and stay just focused on the science. But my other choice is to recognize that as a scientist, as an educator, as a citizen, I have social responsibility. My responsibility is most focused on how I educate young people. And while I can teach them equations and coding and all that, I also want to share with them what the social implications of this science are, because it's my responsibility.
Starting point is 00:18:00 I also have a responsibility to communicate with the world, because even starting quite a few years ago, and now it's even worse because of the large language models, there's just so much public discourse about AI, and much of it is ill-informed, and that's dangerous. That's unfair. That's dangerous. It tends to harm people who are not in a position of power, and I have a responsibility to communicate. And then third, I also feel Stanford, especially as one of America's higher education institutions, has a responsibility to help make the world better, to help our policymakers, to help civil society, to help companies, to help entrepreneurs, to educate, to inform, and to give insights.
Starting point is 00:18:52 And that is the messiness of meeting the real world. And I feel I shouldn't shy away from that. I should take on that responsibility. Yeah, for sure. You're one of the most knowledgeable people about AI. We need you to tell us the roadblocks that we need to look out for, and how we can make sure that we use AI for good and not for bad, and take the steps to do that.
Starting point is 00:19:15 So let's talk about computer vision next. So you are a computer vision AI scientist. So what first got you interested in this, and what is computer vision AI? Well, in one sentence, computer vision AI is the specific part of AI that makes computers see and understand what they see. And this is very profound. When humans open our eyes, we see the world not only in colors and shades, we see it in meaning, right? Like I'm looking at my messy desk right now.
Starting point is 00:19:49 It has cell phones. It has a cup, it has a monitor, it has my allergy medicine, and it has a lot of meaning. And more than that, we can also construct. Even if we're not the best artists, humans since the dawn of civilization have been drawing the world, have been sculpting the world, have been building bridges and monuments, and have created the visual world. The ability to see and visually create and understand is so innate in humans. And wouldn't it be great if computers had that ability?
Starting point is 00:20:32 And that is what computer vision is. At YAP, we have a super unique company culture. We're all about obsessive excellence. We even call ourselves scrappy hustlers. And I'm really picky when it comes to my employees. My team is growing every day. We're 60 people all over the world. And when it comes to hiring, I no
Starting point is 00:20:52 longer feel overwhelmed by finding that perfect candidate, even though I'm so picky, because when it comes to hiring, Indeed is all you need. Stop struggling to get your job post noticed. Indeed Sponsored Jobs help you stand out and hire fast by boosting your post for relevant candidates. Sponsored Jobs on Indeed get 45% more applications than non-sponsored ones, according to Indeed data worldwide. I'm so glad I found Indeed when I did because hiring is so much easier now. In fact, in the minute we've been talking, 23 hires were made on Indeed, according to Indeed data worldwide. Plus, there are no subscriptions or long-term contracts. You literally just pay for your results. You pay for the people that you hire. There's no need to wait any longer. Speed up your hiring
Starting point is 00:21:31 right now with Indeed. And listeners of this show will get a $75 sponsored job credit to get your jobs more visibility at Indeed.com slash profiting. Just go to Indeed.com slash profiting right now and support our show by saying you heard about Indeed on this podcast. Indeed.com slash profiting. Terms and conditions apply. Hiring? Indeed is all you need. Happy New Year, YAP gang. I just love the unique energy of the new year. It's all about fresh starts. And fresh starts not only feel possible, but also feel encouraged.
Starting point is 00:22:03 And if you've been thinking about starting a business, this is your sign. There's no better time than right now. 2026 can be the year that you build something that is truly yours, the year where you take control over your career. And it starts with Shopify. I've built plenty of my own businesses on Shopify, including my LinkedIn Secrets Masterclass. It's a two-day workshop, and people buy their tickets on Shopify. And then my mastermind subscription is also on Shopify.
Starting point is 00:22:28 I built my site quickly in just a couple of days, payments were set up super easily, and none of the technical stuff slowed me down like it usually does, because Shopify is just so intuitive. And this choice of using Shopify helped me scale my masterclass to over $500,000 in revenue in our first year. And I'm launching some new podcast courses and can't wait to launch them on Shopify. Shopify gives you everything you need
Starting point is 00:22:56 to sell online and in person, just like the millions of entrepreneurs that they power. You can build your dream store using hundreds of beautiful templates and get set up fast with built-in AI tools that help you write product descriptions and edit photos. Plus, marketing is built in so you can create email and social campaigns easily. And as you grow, Shopify can scale right along with your business. In 2026, stop waiting and start selling with Shopify. Sign up for your $1 per month trial and start selling today at Shopify.com slash profiting. Go to Shopify.com slash profiting. That's Shopify.com slash profiting.
Starting point is 00:23:27 YAP fam, here's to a fresh start this new year with Shopify by your side. Young and Profiters, I know there's so many people tuning in right now that end their workday wondering why certain tasks take forever, why they're procrastinating certain things, why they don't feel confident in their work,
Starting point is 00:23:46 why they feel drained and frustrated and unfulfilled. But here's the thing you need to know. It's not a character flaw that you're feeling this way. It's actually your natural wiring. And here's the thing. When it comes to burnout, it's really about the type of work that you're doing. Some work gives you energy and some work simply drains you. So it's key to understand your six types of working genius.
Starting point is 00:24:10 The Working Genius assessment, or the Six Types of Working Genius framework, was created by Patrick Lencioni, and he is a business influencer and author. And the Working Genius framework helps you identify what you're actually built for and the work that you're not. Now, let me tell you a story. Before I uncovered my working genius, which is galvanizing and invention, so I like to rally people and I like to invent new things, I used to feel really ashamed and had a lot of guilt around the fact that I didn't like enablement, which is one of my working frustrations. So I actually don't like to support people one-on-one.
Starting point is 00:24:42 I don't like it when people slow me down. I don't like handholding. I like to move fast, invent, rally people, inspire. But what I do need to do is ensure that somebody else can fill the enablement role, which I do have, Kate, on my team. So working genius helps you uncover these genius gaps, helps you work better with your team, helps you reduce friction, helps you collaborate better, understand why people are the way that they are. It's helped me restructure my team, put people in the spots that they're going to really excel,
Starting point is 00:25:07 and it's also helped me in hiring. Working Genius is absolutely amazing. I'm obsessed with this model. So if you guys want to take the Working Genius assessment and get 20% off, you can use code profiting. Go to WorkingGenius.com. Again, that's workinggenius.com. Stop guessing. Start working in your genius.
Starting point is 00:25:23 So interesting. When I think about consciousness, everything that has consciousness has eyes. This always freaked me out. Bugs have eyes. Fish have eyes. Fish eyes look like our eyes. And that's so scary, weird, the fact that all these living things have eyes. If AI starts to have eyes, wouldn't it just be that they're living and sentient at that point? So first of all, you touched on something really, really profound. Visual sensing is one of the oldest senses, evolutionarily speaking. So 540 million years ago, animals started to develop eyes. It began as a pinhole that collects light, but evolved into the kinds of eyes the fish, the octopus, the elephant, and we have. So you actually touch on something
Starting point is 00:26:16 really profound. This is extremely innate, embedded into the development of our intelligence. And of course, you also ask a philosophically really profound question: does everything that has eyes have consciousness? Actually, a neuroscientist or neurophilosopher, you should invite one to debate with you. For example, does a tiny shrimp using its eyes to do things have consciousness, or does it just have perception? I don't have an answer, honestly. How do you measure consciousness? Just because the shrimp can see the rock and climb around it, does that mean it's just a sensory reflex, or does it have a deeper consciousness? I don't know. So just because machines have eyes, do they develop consciousness? It's a topic we can talk about, but I just want to make sure that we are
Starting point is 00:27:16 at least on the same page, that just seeing itself doesn't mean it has consciousness. But the kind of visual intelligence we have, like I just described, to understand, to create, to build, to represent a world with such visual complexity, at least in humans, does take consciousness. Everything that you're saying is just so interesting. Even that shrimp example: even though it's navigating, swimming around rocks and whatever, it doesn't mean that it's actually conscious. It could be, to your point,
Starting point is 00:27:48 just all like reflexes. And that makes it a little less scary if machines end up having eyes. So how are you replicating biological processes, like vision in computers now? I think a lot of computer vision is biologically inspired, and it's inspiring in at least two areas.
Starting point is 00:28:07 One is the algorithm itself. So the whole neural network algorithm, in fact, back in the 1950s and 60s, computer scientists were inspired by vision neuroscientists. When the neuroscientists were studying the cat's visual system, a mammalian visual system, they discovered this kind of hierarchy of neurons. And it's because of that, it inspired computer scientists to build neural network algorithms. So the animal visual structure in the brain is very much the foundational inspiration for today's AI technology.
Starting point is 00:28:45 So that's one area. The second inspiration comes from functionality, right? The ability to see. What do we see? Humans are not that good at seeing color, for example. We see color richly enough, but the truth is, there are infinite wavelengths that define infinite colors, yet we probably only perceive dozens of colors.
Starting point is 00:29:08 So clearly we're not seeing colors in the same way a machine registers wavelengths. On the other hand, we see meaning, we see emotion, we see all these things. And it's just incredibly inspiring that we can build this functionality into machines. And that is another part of the biological inspiration. It's the functional inspiration. And with that, I think there is a lot to imagine. For example, first of all, visually impaired patients:
Starting point is 00:29:42 If we help them with an artificial visual system to understand the rich world we see, it would be tremendously helpful. Machines, right? I don't know, do you have a Roomba in your house? Yeah, yeah. Right. So it almost is kind of seeing.
Starting point is 00:29:59 It's not seeing the same way we are, but it's kind of seeing and mapping. But one day, I hope I not only have a Roomba, I also have a cleaning robot, right? Then it needs to see my house in a much more complex way. And then the most important, right, for example, rescue robots. There are so many situations that put humans in danger, or humans are already in danger and you want to rescue them, but you don't want to put more humans in danger. Think about the Fukushima nuclear leak incident. People had to really sacrifice to go in there to stop the leak and all that. It would be amazing if robots could do that.
Starting point is 00:30:41 And that needs seeing. It needs visual intelligence in much deeper ways. That's so interesting. And it's helpful for you to say that, because my first reaction is like, why are we giving robots this much power? Like, losing our power as humans. But to your point, it can help humans. And I know that's a whole thing, like what you talk about is human-centered AI, right?
Starting point is 00:31:03 Yes. Can you define what human-centered AI is in your own words? Yeah. Human-centered AI is a framework for developing and using AI. And that framework puts human values, human dignity at the center, so that we're not developing technology that's harmful to humans. So it's really a way to see technology or use technology in a benevolent way. Now, I'm not naive.
Starting point is 00:31:34 I know technology is a double-edged sword. I know that double-edged sword can be used intentionally or unintentionally in bad ways. So human-centered AI is really trying to underscore that we have a collective responsibility to focus on the good development and good use of AI. And it was really inspired by my time in industry when I was on sabbatical as a professor, seeing the incredible business opportunities that were already opening the floodgates of AI back in 2018 and knowing that when business starts to use AI,
Starting point is 00:32:17 it impacts the lives of every individual. So I went back to Stanford and together with my colleagues, we realized that as a thought leadership institution, given the role America's higher education plays in educating the next generation of students, we should really have a point of view to develop and stay at the forefront of the development of this technology. This is how we formulated the human-centered AI framework. And one of the biggest fears that people have with AI is that AI is going to replace all of our jobs. Now, AI is probably going to create a lot of jobs, and I've talked a lot about that
Starting point is 00:33:00 with other guests on the podcast. But how do you suggest that we handle jobs, taking into consideration making sure that AI doesn't take all the jobs? Several things, Hala. First of all, why do we have jobs? It's really important to think about it. I think jobs are part of human prosperity, because we need them to translate into financial rewards so that we have the prosperity that we and our families need. It also is part of human dignity.
Starting point is 00:33:32 It's beyond just money. For many people, it's the meaning of life and self-respect. So from that point of view, I think we have to recognize that jobs shift throughout human history; technology, and also other factors, create, destroy, morph, and transform jobs. But what doesn't change is the need for human prosperity and human dignity. So I think when we think about AI and its impact on jobs, it's important to go to the very core of what jobs are and mean, and what technology can do. So when it comes to, say, human dignity, for example, I do a lot of healthcare research with AI. And it's so clear to me that many of the jobs that our clinicians and healthcare workers do are part of humans caring for humans.
Starting point is 00:34:32 And that emotional bond, that dignity, that respect can never be replaced. What is also clear to me is that American healthcare workers, especially nurses, are over-fatigued, overworked, and technology can be a positive force to help them take care of patients better, to reduce their workload, especially some of the repetitive, thankless work like constant charting or walking miles and miles a day to fetch pharmacy medicines and all that. If those parts of the job, the tasks, can be augmented by machines, it is really, truly intended to protect human prosperity and dignity, and augment human capabilities. So from that point of view, I think there is a lot of opportunity for AI to play a positive
Starting point is 00:35:29 role. But again, it depends on, first of all, it depends on how we design AI. In my lab, we did a very interesting piece of research. We were trying to create a big robotics project to do a thousand everyday human tasks. But at the beginning of this project, it was very important to us that we were creating robots to do the tasks that humans want help with. For example, buying a wedding ring. Even if you have the best robot in the world,
Starting point is 00:36:03 who wants a robot to choose a wedding ring or opening Christmas gift? It's not that hard to open a box, but the human emotion, the joy, the family, bond the moment is not about opening a silly box. So we actually ask people to rank for us thousands and thousands of tasks and tell us which tasks they want robots help. For example, like cleaning toilet. Everybody wants robots help. So we focus on those tasks that humans prefer robotic help rather than those tasks that humans care and want to do themselves. And that
Starting point is 00:36:43 is a way of thinking about human-centered AI. How do we create technology that is beneficial and welcomed by humans, rather than just going in and telling you, I'm using a robot to replace everything you care about?
Starting point is 00:36:59 Another layer, just to finish this topic, is the policy layer. Economic and social well-being is so important, and technologists don't know it all, and we shouldn't feel we know it all. We should be collaborating with civil society, the legal world, the policy world, and economists to try to understand the nuance and the profoundness of jobs and tasks and AI's impact.
Starting point is 00:37:27 And this is also why our Human-Centered AI Institute at Stanford has a digital economy lab. We work with policymakers and think about these issues. We try to inform them and provide information to help move these topics forward in a positive way. You have three aspects to your human-centered AI framework, right? So AI is interdisciplinary, AI needs to protect human dignity and be used for human good. And then there's also one about intelligence.
Starting point is 00:38:00 Can you break down the three pillars of your human-centered AI framework? The three pillars of the human-centered AI framework are really about thought leadership in AI and focusing on what higher education institutions like Stanford can do. One we talked about is recognizing the interdisciplinary nature of AI, welcoming the multi-stakeholder studies, research, education, and policy outreach to make sure that AI is embedded in the fabric of our society today and tomorrow in a benevolent way. The second one is what you said, focusing on augmenting humans, creating technology that enhances human capability and human well-being and human dignity rather than
Starting point is 00:38:46 taking away. The third one is about continue to be inspired by human intelligence and develop AI technology that is compatible with humans because human intelligence is very complex, it's very rich. We talked a lot about emotion, intention, compassion, and today's AI lacks most of that. It's pretty far from that. Being inspired by this can help us to create. And also, by the way, there's another thing about today's AI that is far worse than humans.
Starting point is 00:39:22 It draws a lot of energy. Humans, our brain works around 20 watts. That is like dimmer than the dimest light bulb in your house. Yet we can do so many things. We can create the pyramid. We can, you know, come up. with E equals MC square, we can write beautiful music and all that. AI today is very, very energy consuming.
Starting point is 00:39:49 It's bulky. It's huge. So there's a lot in human intelligence that can inspire the next generation AI to do better. What's up, young and profitors? I remember when I first started Yap, I used to dread missing important calls. I remember I lost a huge potential partnership because the follow-up thread got completely lost in my messy communication system. Well, this year, I'm focused on not missing any opportunities, and that starts with your business communications. A missed call is money and growth out the door.
Starting point is 00:40:23 That's why today's episode is brought to you by Quo, spelled QUO, the smarter way to run your business communications. Quo is the number one rated business phone system on G2, and it works right from an app on your phone or computer. The way Quo works is magic for team alignment. Your whole team can handle calls and text from one shared number, and everyone sees the full conversation. It's like having access to a shared email inbox but on a phone. And also, Quo's AI can even qualify leads or respond after hours, ensuring your business stays responsive, even when you finally logged off. It makes doing business so much easier. Make this the year where no opportunity and no customer slips away. Try Quo for free plus get 20% off your first six months
Starting point is 00:41:02 when you go to Quo.com slash profiting. That's QUO.com slash profiting. Quo. No missed calls, no missed customers. Hello, young improfitors. Running my own business has been one of the most rewarding things I've ever done, but I won't lie to you. In those early days of setting it up, I feel like I was jumping on a cliff with no parachute. I'm not really good at that kind of stuff. I'm really good at marketing, sales, growing a business, offers.
Starting point is 00:41:29 But I had so many questions and zero idea where to find the answers when it came to starting an official business. I wish I had known about Northwest Registered Agent back when I was starting YAP media. And if you're an entrepreneur, you need to know what Northwest Registered agent is. They've been helping small business owners launch and grow businesses for nearly 30 years. They literally make life easy for entrepreneurs. They don't just help you form your business. They give you the free tools you need after you form it, like operating agreements and
Starting point is 00:41:57 thousands of how-to guides that explain the complicated ins and outs of running a business. And guys, it can get really complicated, but Northwest Registered Agent just makes it all easy and breaks it down for you. So when you want more for your business, more privacy, more guidance, more free resources, Northwest Registered Agent is where you should go. Don't wait and protect your privacy, build your brand, and get your complete business identity in just 10 clicks and 10 minutes. Visit Northwest Registeredagent.com slash Yapfree and start building something amazing. Get more with Northwest Registered Agent at Northwest Registeredagent.com slash yapfrey. Hey young improfitors. As an entrepreneur, I know firsthand that getting a huge expense off your books
Starting point is 00:42:42 is the best possible feeling. It gives you peace of mind and it lets you focus on the big picture and invest in other things that move your business forward. Now imagine if you got free business internet for life. You never had to pay for business internet again. How good would that feel? Well, now you don't even have to imagine because spectrum business is doing exactly that. They get it that if you aren't connected, you can't make transactions, you can't move your business forward. They support all types of businesses, from restaurants to dry cleaners to content creators like me and everybody in between. They offer things like internet, advanced Wi-Fi, phone TV, and mobile services. Now, for my business-owning friends out there, I want you to listen up.
Starting point is 00:43:20 If you want reliable internet connection with no contracts and no added fees, Spectrum is now offering free business internet advantage forever when you simply add four or more mobile lines. This isn't just a deal. It's a smart way to cut your monthly overhead and stay connected. Yeah, bam, you should definitely take advantage of this offer. It's free business internet forever. Visit spectrum.com slash free for life to learn how you can get business internet free forever. Restrictions apply. Services not available in all areas.
Starting point is 00:43:49 Every time I have an AI episode, I feel like I learned so much that I didn't really realize before. We've had conversations with other people on the show about how a lot of people are scared of AI getting apex intelligence, that it's going to be so much smarter than humans, going to take over the world, is going to control humans. Do you have any fears around that? I do have fears. I think who lives in 20, 24 and don't have fears. And as a citizen of the world, I think our civilization, our species is always defined
Starting point is 00:44:22 by the struggle of dark and light and by the struggle and good and bad. We have incredible benevolence in our DNA, but we also have incredible badness. in our DNA, and AI as a technology can be used by the badness. So from that point of view, I do have fear. The way I cope with fear is try to be constructively helpful, is try to advocate for the benevolent use of this technology and to use this technology to combat the badness. At the end of the day, any hope I have for AI is not about AI, it's about humans. To paraphrase Dr. King, the arc of history is long, but it does bend towards justice and benevolence in general. But to come down from that abstract thinking, I think we have work to do.
Starting point is 00:45:18 Because if AI is in the hands of bad actors, if AI is concentrated in only in a few powerful people's hand, it can go very wrong. We don't need to wait for sensual. AI. Even today's car, imagine there is a bad person who is in charge of building 50% of America's car and that person just wants to make all the car brakes malfunction or add a sensor and say if you see a pedestrian run it over. Actually, today's technology can do that. You don't need sensor AI. But the fact that we don't have that dystopian scenario is for. first of all, human nature is buying large goods. You know, our car factory workers, our business leaders in building cars, nobody thinks
Starting point is 00:46:13 about doing that. We also have laws. If someone is trying to do harm, we have societal constraints. We also try to educate the population towards good things, right? So all this is hard work, and we need that hard work in AI to ensure it doesn't do bad. I just want to give an example that when I was talking to Stephen Wolfram, because the interview is fresh in my head. And he said something that made me feel a little bit at ease with AI and the fact that it could get really smart. He said, we're living in AI.
Starting point is 00:46:46 We live in nature. Nature is so complex. We can't control it. It has simple processes that are really, really complex. We can predict it all we want, but we'll never really know what nature is going to do. And already we live in a world where we're interacting with nature every day and we have to just do. with the fact that we don't control it and it's smarter than us to a degree. And he's like, that's what maybe AI will be like in the future. It will be there. It will be its own system.
Starting point is 00:47:12 What are your thoughts on that? That's a very interesting way to put it. Okay, first time I heard that I like his way of saying that humans in the face of complexity and powerful things, that we still have a way to cohabitat with it. I don't agree nature's AI in the sense that nature is not programmable. And I don't think nature has a collective intention. It's not like the earth wants to be a bigger earth or bluer earth or, you know, so from that point of view, it's very, very different. But I appreciate the way he says that. And I also think using his analogy, we also live with other humans. And there are humans who are more stronger than us, smarter than us, do better, whatever, than us.
Starting point is 00:48:02 But yet, by in large, our world is not everyone killing each other, by in large. Now, this is where we do see the darkness, and this has nothing to do with AI. Human nature has darkness, and we harm each other. And the hope is, it's not just the hope, the work is that when we create machines that resemble our intelligence, we should prevent it to do similar harms to us, to each other, and try to bring out the better part of ourselves. As we wrap up this interview, I wanted to ask you a couple of questions. So first off, you're talking to a lot of young entrepreneurs right now and people who want to be entrepreneurs. What's your advice to them about how to embrace this AI world?
Starting point is 00:48:50 So first of all, I hope you read my book, The Worlds I See, because the book is written to young people for young people. It's a coming of age of a scientist, but the true theme of the book is finding your North Star, is finding your passion and believing in that against all odds and chase after the North Star. And that is the core of what entrepreneurship is about, is that you believe. leaving, bringing something to the world, and against all odds, you want to make it happen. And that should be your North Star. In terms of AI, it's an incredibly powerful tool. So it depends on what business and products you're making. It either can empower you, or it's an essential part of your core product, or it keeps you competitive. It's so horizontal that for most entrepreneurs out there, if you don't know anything about AI,
Starting point is 00:49:50 it is important to educate yourself because it's possible that AI will play either in your favor or in your competitors' favor. So knowing that is important. I'm just going to ask you one last question. And this is really about visioning. Let's vision a world 10 years from now, 2034, where there's human-centered AI. And let's also try to visualize a world 10 years from now where maybe it's not human-centered AI. Maybe it got in the bad hands of some folks. Let's talk about those two worlds, and then we'll close it out. The world that's human-centered AI, I think it's not too far from at least the North America
Starting point is 00:50:35 world we live in, even though I know we're not perfect, is that we still have a strong democracy. We still believe in individual dignity and, by large, free market capitalism, that we are allowed as individual to pursue our happiness and prosperity and respect each other. And AI helps us to do better scientific discovery, to have self-driving cars, to help people who can drive or, you know, reduce traffic to make life easier, to make education more personalized, to empower our teachers and healthcare workers to discover a cure for diseases, to alleviate our aging population problems, to make agriculture more effective, to find climate solutions. There is so much AI can do in the world that we still have the good
Starting point is 00:51:35 foundation. Now, the dystopia world is, AI can be used as a bad tool to topple democracy. Disinformation is an incredibly harmful way of harming democracy and the civil life we have right now. If it's completely concentrated in power, whether it's state power or individual power, it makes the rest of the society much more subject to the will and possibly wrath of that power, whether it's AI or not. We have seen in human history that concentrated power is always bad. And concentrated power using powerful technology is not a recipe for good. Well, Dr. Lee, I'm so happy we have somebody like you who's helping us to navigate the AI world, who's also helping to shape the AI world
Starting point is 00:52:31 in a way that hopefully is going to be good for humans. Please let us know where we can learn more about you and everything that you do. Thank you, Hala. Thank you for promoting my book and please constantly checking with Stanford Human Center, AI Institute, newsletter and website. Amazing.
Starting point is 00:52:48 We'll stick all those things in the show notes. Dr. Lee, thank you for joining us on Young Improfiting Podcast. Thank you, Hala. Yeah, bam, it is clear that AI has the potential to be a powerful tool. But it's also important to keep things in perspective. Remember the quote that Dr. Lee shared.
Starting point is 00:53:09 The most advanced computer AI algorithm will still play a good chess move when the room is on fire. For the time being, we humans have much more fluid, organic, and contextual understanding of ourselves and our own thoughts and emotions. And AI still cannot create in the way that humans can create. There's so much potential for good. when it comes to AI, like those rescue robots and cleaning robots that Dr. Lee described. Not to mention the way that some overworked professions like nursing, for example, could be greatly improved by technology that could help them. And if used well, AI has the capacity to
Starting point is 00:53:48 bolster not only our capabilities, but also our prosperity and our dignity. But this will depend in large part on us and whether we can encourage and foster what Dr. Lee calls human-centered AI. The biggest risk regarding AI is likely not AI turning evil, but AI being deployed by bad people. But if you think about it, cars, guns, and other things like that can all be abused and misused, and they are. But that's why we have laws and policies and social norms that help guard against those things. And hopefully, that will be the same with AI. We'll have regulations around it to help protect us from the bad things that can happen from bad people who use AI. And that will especially be more likely to happen if Dr. Lee and others like her can carry the day.
Starting point is 00:54:40 And if they can, then perhaps, and hopefully, we will live in that first alternative future universe that Dr. Lee described, the one where AI helps us improve scientific discovery, develop amazing self-driving cars, personalized education, and ultimately help us lead more comfortable and fulfilling lives. Thanks for listening to this episode of Young and Profiting Podcast. Every time you listen and enjoy an episode of this podcast, share it with your friends and family. Maybe someday an AI bot will be able to do that for you, but until then, we really do depend on you to share this podcast by word of mouth. And if you did enjoy this show and you learned something new, then please drop us a five-star review on Apple Podcasts. I read these reviews every single morning. It makes my day. So if you want to make
Starting point is 00:55:31 my day, go take two minutes and write a positive five-star review on Apple Podcasts. And maybe I'll shout you out on an upcoming episode. And if you prefer to watch your podcasts as videos, you can also find all of our episodes uploaded to YouTube, just look up Young and Profiting. You can also find me on Instagram at Yap with Hala or LinkedIn. My name is Hala Taha. You can just search for my name. And before we wrap, I always have to give a big thank you to my incredible Yap production team. You guys are so hardworking. You're so talented. There's too many of you to shout out now. The team is growing so fast. But thank you for all that you do on my podcast, on the other network podcasts. You guys are amazing. Thank you so much. And this is your host,
Starting point is 00:56:17 Halitaha, aka the podcast princess, signing off.
