TED Talks Daily - How a worm could save humanity from bad AI | Ramin Hasani
Episode Date: October 23, 2024

What if AI could think and adapt like a real brain? TED Fellow and AI scientist Ramin Hasani shares how liquid neural networks — a new, more flexible AI technology inspired by physics and living brains — could transform how we solve complex problems.
 Transcript
From the TED Audio Collective: a TED Fellows film, adapted for podcast, just for our TED Talks Daily listeners.
                                         
                                         TED's fellowship supports a network of global innovators, and we're so excited to share their
                                         
work with you. Today, we'd like you to meet AI scientist Ramin Hasani. We talk about AI a lot
                                         
                                         here at TED because it's a world-changing technological development that's fast improving
                                         
                                         and risky, but it's hard for a layperson like me to really grasp
                                         
                                         how it functions. Ramin's new AI system, which he co-invented, addresses that issue head-on.
                                         
                                         His system gives us a lot more control and visibility into the mechanics behind the tech,
                                         
                                         making it safer and more trustworthy. After we hear from Ramin, stick around for his conversation
                                         
    
with TED Fellows Program Director Lily James Olds, all coming up after the break.

Support for this show comes from Airbnb. If you know me, you know I love staying in Airbnbs when I travel. They make my family feel most at home when we're away from home. As we settled down at our Airbnb during a recent vacation to Palm Springs,
                                         
                                         I pictured my own home sitting empty. Wouldn't it be smart and better put to use welcoming a family like mine by hosting it on Airbnb? It feels like the practical thing to do,
                                         
                                         and with the extra income, I could save up for renovations to make the space even more inviting
                                         
                                         for ourselves and for future guests. Your home might be worth more than you think.
                                         
                                         Find out how much at airbnb.ca slash host. And now our TED Talk of the day.
                                         
                                         My wildest dream is to design artificial intelligence that is our friend. You know,
                                         
                                         if you have an AI system that can help us understand mathematics, you can solve the
                                         
economy of the world.
                                         
    
                                         If you have an AI system that can understand humanitarian sciences,
                                         
                                         we can actually solve all of our conflicts.
                                         
                                         I want this system to, given Einstein's and Maxwell's equations,
                                         
take them and solve new physics, you know?
                                         
                                         If you understand physics, you can solve the energy problem.
                                         
So you can actually design ways for humans to be better
                                         
versions of themselves. I'm Ramin Hasani. I'm the co-founder and CEO of Liquid AI. Liquid AI is an
                                         
                                         AI company built on top of a technology that I invented back at MIT. It's called Liquid Neural
                                         
    
                                         Networks. These are a form of flexible intelligence as opposed to today's AI systems that are fixed, basically.
                                         
                                         So think about your brain.
                                         
                                         You can change your thoughts.
                                         
                                         When somebody talks to you, you can completely change the way you respond.
                                         
                                         You always have a mechanism that we call feedback in your system.
                                         
So basically, when you receive information from someone as an input, you
                                         
                                         process that information and then you reply. For liquid neural networks, we simply got those
                                         
                                         feedback mechanisms and we added that to the system. So that means it has the ability of
                                         
    
                                         thinking. That property is inspired by nature. We looked into brains of animals, and in particular, a very, very tiny worm called C. elegans.
                                         
The fascinating fact about this worm is that it shares 75% of its genome with humans.
                                         
                                         We have the entire genome mapped.
                                         
                                         So we understand a whole lot about the functionality of its nervous system as well. So if you understand the properties of cells in the worm,
                                         
                                         maybe we can build intelligent systems
                                         
                                         that are as good as the worm
                                         
                                         and then evolve them into systems
                                         
                                         that are better than even humans.
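To make the feedback idea concrete in code, here is a minimal sketch in Python, loosely following the liquid time-constant formulation Hasani and colleagues have published; the parameter names and values are illustrative assumptions, not Liquid AI's actual implementation.

```python
import numpy as np

def ltc_neuron_step(x, inp, dt=0.01, tau=1.0, A=1.0, w=0.5, b=0.0):
    """One Euler step of a liquid time-constant style neuron.

    The feedback idea: the cell's effective time constant depends on
    the input it is currently processing, so the same neuron responds
    differently as the input stream changes.
    """
    f = 1.0 / (1.0 + np.exp(-(w * inp + b)))  # input-dependent gate
    dxdt = -(1.0 / tau + f) * x + f * A       # state relaxes toward A, faster for strong inputs
    return x + dt * dxdt

# Feed the neuron a changing input stream and watch the state adapt.
x = 0.0
for step, inp in enumerate([0.0, 1.0, 1.0, -1.0, 0.0]):
    x = ltc_neuron_step(x, inp)
    print(step, round(float(x), 4))
```

The point is only the shape of the equation: the state x feeds back on itself through the gate f, which is what adding feedback mechanisms to the network amounts to here.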
                                         
    
                                         The reason why we are studying nature
                                         
is that it gives us a shortcut through all the possible algorithms that you could design.

You can look into nature, and that gives you a shortcut to get to efficient algorithms faster.
                                         
                                         Nature has done a lot of search, billions of years of evolution, right?
                                         
    
                                         So we learned so much from those principles.
                                         
                                         I just brought a tiny principle from the worm into artificial neural networks and now they
                                         
                                         are flexible and they can solve problems
                                         
                                         in an explainable way
                                         
                                         that was not possible before.
                                         
                                         AI is becoming very capable, right?
                                         
                                         The reason why AI is hard to regulate
                                         
                                         is because we cannot understand the system.
                                         
    
                                         Even the people who design the systems,
                                         
                                         we don't understand those systems.
                                         
                                         They are black boxes.
                                         
                                         With Liquid,
                                         
                                         because we are fundamentally
                                         
                                         using mathematics that are
                                         
                                         understandable, we have tools to really understand and pinpoint which part of the system is responsible
                                         
for what. We're designing white box systems. So if you have systems that you can understand
                                         
    
                                         their behavior, that means even if you scale them into something very, very intelligent,
                                         
                                         you can always have a lot of control over that system
                                         
                                         because you understand it.
                                         
                                         You can never let it go rogue.
                                         
                                         So all of the crises we're dealing with right now,
                                         
                                         you know, doomsday kind of scenarios,
                                         
are all about scaling a technology that we don't understand.
                                         
At Liquid, our purpose is to really calm people down
                                         
    
                                         and show people that, hey, you can have very powerful systems
                                         
                                         that you have a lot
                                         
of control and visibility into their working mechanisms. The gift of having something

superintelligent is massive, and it can enable a lot of things for us. But at the same time,
                                         
                                         we need to have control over that technology because this is the first time that we're
                                         
                                         going to have a technology that's going to be better than all of humanity combined.
                                         
That was Ramin Hasani, a 2024 TED Fellow.
                                         
                                         Stick around after the break to hear Ramin go deeper into his work.
                                         
    
And now a special conversation between TED Fellow Ramin Hasani and TED Fellows Program Director Lily James Olds.
                                         
                                         Hi, Ramin. It's so great to have you with us today.
                                         
                                         Thanks for having me.
                                         
                                         So does this mean we can all stop panicking about AI?
                                         
                                         Well, a little bit, yes. So we are moving in that direction. We're opening the black box.
                                         
                                         We are trying to improve the control that we have
                                         
as designers of AI systems, in a way that you have a lot more control over the outcomes,
                                         
                                         on the outputs of an AI system. And you can put boundaries around what you want them to do,
                                         
    
you know, and that controllability is something that we want to create for AI
                                         
                                         and build systems that are fundamentally and inherently understandable.
                                         
                                         Okay, I'm going to come back to that because I have a lot of questions on that.
                                         
                                         But just to start, so you say that looking into nature helped you and your team invent these liquid neural networks, and in particular, one specific worm that surprisingly shares a lot in common with humans.
                                         
                                         Now, this is totally wild to
                                         
                                         me. I had no idea that I was so closely related to a worm. Can you tell me a bit more about how
                                         
                                         this worm's brain inspired your discovery of liquid neural networks? The worm is called C.
                                         
elegans. This is the first animal to have its entire nervous system mapped. You know,
                                         
    
neuroscientists anatomically mapped all the connections that

exist in the brain of the worm, all 302 neurons. The scientists who did this work, they
                                         
                                         won Nobel Prizes. And the reason for that is just the fascinating fact that in the tree of evolution,
                                         
600 million years ago, we split from this worm. So it shares 75% genetic similarity with humans. The fact that our nervous

systems, our brains, share their origins with this kind of worm made me think that this
                                         
                                         would be a very good place to get started. Also, you should know that the body of the worm is
                                         
transparent. You can see inside how things happen: when you look at the worm under a microscope, you can actually see the neurons flash. So you can see how the neurons behave

while you record the brain activity of the worm. So you have a lot of data. So it becomes a
                                         
                                         very good model organism. So I started looking into this. I thought that, okay, so neurons and
                                         
                                         synapses are the same, almost the same in terms of functionality in this worm and in humans.
                                         
So if we can understand, in this worm, how things work from mathematical principles, how behavior emerges from a set of neural activities and the mathematics involved,

then we can take that and evolve it into better versions of itself, the way evolution produced the human brain. And maybe
                                         
                                         we can also evolve artificial intelligence that way. That's so crazy that that discovery came
                                         
                                         from nature so directly. So back to where you started this conversation. Right now,
                                         
                                         we don't have the transparency into how current AI systems work. As you said, it's a black box.
                                         
    
                                         And you said that this is the problem and why we don't
                                         
                                         have control over these systems. I guess my first question is just how did we get to this point?
                                         
You know, why weren't these AI systems built with transparency as a core tenet?
                                         
                                         The thing is, like, the AI systems were transparent and they are still traceable. You know,
                                         
                                         the problem that we have with these AI systems is the scale of these
                                         
AI systems today. So we started by taking this very simple mathematics, you know, a simple if condition:

if something happens, a neuron gets activated; if it doesn't, the neuron turns

off. Then we took this function and we scaled this technology. We scaled it into billions, and now we are getting into trillions, of parameters.
                                         
    
You know, so now imagine a system with a trillion knobs that you have to turn. If you want to go and reverse engineer what these trillion knobs are actually doing, that becomes an intractable process.

So you wouldn't be able to really say what each one of these trillion knobs is actually doing, and what its function is in the overall behavior that the generative AI system produces.
                                         
                                         That's why we call them black boxes.
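A toy illustration of that scaling point, as a sketch: the per-neuron math really is a trivial threshold, but stacking enough of them produces a parameter count no one can audit knob by knob. The layer sizes below are illustrative toy values, not a real model.

```python
import numpy as np

def neuron_layer(x, w, b):
    """The 'simple if condition': each unit fires if its weighted
    input clears a threshold, and stays off otherwise (ReLU here)."""
    return np.maximum(0.0, x @ w + b)

rng = np.random.default_rng(0)
# Four small layers; every weight and bias is one 'knob'.
layers = [(0.1 * rng.standard_normal((64, 64)), np.zeros(64)) for _ in range(4)]

x = rng.standard_normal(64)
for w, b in layers:
    x = neuron_layer(x, w, b)

n_knobs = sum(w.size + b.size for w, b in layers)
print(f"toy network: {n_knobs} knobs")  # frontier models: on the order of 1e12
```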
                                         
                                         You know, when we scaled the models,
                                         
                                         we saw that much, much better
                                         
                                         and smarter behavior emerged from these AI systems.
                                         
That's the excitement that we're moving towards, right? We always want to design systems that are
                                         
    
                                         more fascinating, you know, getting closer, getting smarter than humans. And then that
                                         
                                         excitement sometimes prevents us from looking into the socio-technical challenges that these
                                         
                                         AI systems can bring, right? And that is something that we have to control.
                                         
                                         So how are the liquid neural networks different?
                                         
                                         So why are they more trustworthy?
                                         
                                         And why do we have more control over them at scale?
                                         
                                         That's a great question.
                                         
                                         So think about it like this.
                                         
    
                                         When you're sitting on an airplane, you know, as a passenger,
                                         
then the pilot turns on the autopilot. You as a passenger completely trust that autopilot. Even though you don't understand that system, how is it that we trust that autopilot

in action in such a safety-critical task? The reason you trust it is that the engineers

who designed that whole system completely understand how the mathematics works. They go through

multiple rounds of testing before anything gets into this kind of safety-critical system.
                                         
                                         That's the best type of explainability that you want to have. You know, you want the engineers
                                         
                                         who design the systems understand fully how the technology works. Now with liquid neural networks,
                                         
the core mathematics is something that is tractable. That's why we engineers and scientists are able to actually get inside these systems.
                                         
    
                                         And we have a lot of tools to really steer and put controls on top of this.
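As a rough illustration of what tractable mathematics buys you, continuing the toy liquid neuron sketched earlier (again an assumption based on the published liquid time-constant formulation, not Liquid AI's production code): because the dynamics are a small closed-form equation, you can read off how any input shifts the neuron's effective time constant, instead of probing a black box from the outside.

```python
import numpy as np

def gate(inp, w=0.5, b=0.0):
    """The same input-dependent gate used in the earlier sketch."""
    return 1.0 / (1.0 + np.exp(-(w * inp + b)))

tau = 1.0
for inp in [-2.0, 0.0, 2.0]:
    f = gate(inp)
    tau_eff = 1.0 / (1.0 / tau + f)  # effective time constant of the state
    print(f"input={inp:+.1f}  gate={f:.3f}  effective tau={tau_eff:.3f}")
```

Being able to point at a named quantity like the effective time constant and say which input moved it is the kind of pinpointing described here.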
                                         
                                         Something that's been on my mind and many people's minds a lot is how can we make sure that AI systems are built on ethical frameworks and inclusive data?
                                         
                                         Data representation is one aspect.
                                         
                                         The ability of a human to understand also what happens inside a model is another aspect of it, right?
                                         
                                         Then these two together, data representation plus us being able to explain models,
                                         
                                         that's the road towards achieving safe artificial intelligence.
                                         
                                         So fascinating. I have to say this conversation does make me feel a little bit more at ease. So
                                         
    
                                         thank you for taking the time to talk to us today. My last question is, if someone listening is
                                         
                                         interested in diving deeper into this topic, what resources would you recommend to them in terms of
                                         
                                         a book, a podcast or something else?
                                         
                                         I've given a lot of talks about liquid neural networks online,
                                         
but for more concentrated material,
                                         
                                         you can find it on our website.
                                         
                                         We started the company around liquid neural networks
                                         
                                         and taking these technologies to the next level
                                         
    
and providing them to society
                                         
                                         for developing safe AI.
                                         
                                         And this is liquid.ai.
                                         
                                         So this is where you can find all sorts of information.
                                         
There are blog posts on the research papers, talks, products, and everything around the topic.
                                         
                                         Amazing. Well, thank you so much, Ramin.
                                         
                                         Absolutely. Thank you.
                                         
                                         Support for this show comes from Airbnb. If you know me, you know I love staying in Airbnbs when I travel. They make my family feel most at home when we're away from home. As we settled down at our Airbnb during a recent vacation to Palm Springs, I
                                         
    
                                         pictured my own home sitting empty. Wouldn't it be smart and better put to use welcoming a family
                                         
                                         like mine by hosting it on Airbnb? It feels like the practical thing to do, and with the extra
                                         
                                         income, I could save up for renovations to make the space even more inviting for ourselves and for future guests. Your home might be worth more than you think.
                                         
                                         Find out how much at Airbnb.ca slash host.
                                         
                                         To learn more about the TED Fellows program and watch all the TED Fellows films, go to
                                         
                                         fellows.ted.com. And that's it for today.
                                         
                                         TED Talks Daily is part of the TED Audio Collective. This episode was produced and edited by our team,
                                         
                                         Martha Estefanos, Oliver Friedman, Brian Green, Autumn Thompson, and Alejandra Salazar.
                                         
    
                                         It was mixed by Christopher Fazi-Bogan. Additional support from Emma Taubner and
                                         
                                         Daniela Balarezo. I'm Elise Hu. I'll be back tomorrow with a fresh idea for your feed.
                                         
                                         Thanks for listening.
                                         
                                         Looking for a fun challenge to share with your friends and family?
                                         
TED now has games designed to keep your mind sharp while having fun.
                                         
                                         Visit TED.com slash games to explore the joy and wonder of TED games.
                                         
