TED Talks Daily - OpenAI's Sam Altman talks the future of AI, safety and power — live at TED2025 | Sam Altman

Episode Date: April 15, 2025

The AI revolution is here to stay, says Sam Altman, the CEO of OpenAI. In a probing, live conversation with head of TED Chris Anderson, Altman discusses the astonishing growth of AI and shows how models like ChatGPT could soon become extensions of ourselves. He also addresses questions of safety, power and moral authority, reflecting on the world he envisions — where AI will almost certainly outpace human intelligence. (Recorded on April 11, 2025) Hosted on Acast. See acast.com/privacy for more information.

Transcript
Starting point is 00:00:00 I used to say, I just feel stuck. Stuck where I don't want to be. Stuck trying to get to where I really need to be. But then I discovered lifelong learning. Learning that gave me the skills to move up, move beyond, gain that edge, drive my curiosity, prepare me for what is inevitably next. The University of Toronto School of Continuing Studies, lifelong learning to stay forever unstuck. Support for this episode comes from Airbnb. Every year I travel to Vancouver for the TED conference, a week filled with big ideas, inspiring speakers, and late-night conversations. But while I'm away, my home just sits empty. I've been thinking, why not list it on Airbnb? Hosting could help cover some of my travel costs and maybe even let me stay an extra day in Vancouver to soak in the city's beauty. Instead of rushing to the airport, I could take one more walk
Starting point is 00:01:00 along the seawall, grab another amazing meal, or relax at the spa after a busy week filled with inspiration. Hosting on Airbnb feels like the practical thing to do, and Airbnb makes it easy to get started. Your home might be worth more than you think. Find out how much at airbnb.ca slash host. You're listening to TED Talks Daily, where we bring you new ideas and conversations to spark your curiosity every day. I'm your host, Elise Hue. The transformative power of artificial intelligence is a topic we talk a lot about here on this show and for good reason.
Starting point is 00:01:45 At TED 2025, Sam Altman, the CEO of OpenAI, sat down with head of TED, Chris Anderson, for a conversation about the fast growing field, its global consequences, and where it's going next. That's coming up. Sam, welcome to TED. Thank you so much for coming. Thank you. It, welcome to TED. Thank you so much for coming. Thank you. It's an honor. Your company has been releasing crazy and insane new models pretty much every other
Starting point is 00:02:15 week it feels like. Yeah, the new image generation model is part of GPT-40, so it's got all of the intelligence in there. And I think that's one of the reasons it's been able to do these things that people really love. CAOKEY I mean, if I'm a management consultant and I'm playing with some of this stuff, I'm thinking, uh-oh, what does my future look like?
Starting point is 00:02:35 I mean, I think there are sort of two views you can take. You can say, oh man, it's doing everything I do, what's going to happen to me? Or you can say, like through, it's doing everything I do, what's going to happen to me? Or you can say, like through every other technological revolution in history, OK, now there's this new tool, I can do a lot more. What am I going to be able to do?
Starting point is 00:02:55 It is true that the expectation of what we'll have for someone in a particular job increases, but the capabilities will increase so dramatically that I think it'll be easy to rise to that occasion. CAOKEY NELSON-MARTINOVICH The writing quality of some of the new models, not just here but in detail, is really going to a new level. This is an incredible meta-answer,
Starting point is 00:03:15 but there's really no way to know if it is thinking that or just saw that a lot of times in the training set. And of course, if you can't tell the difference, how much do you care? CA So that's really interesting. We don't know. Isn't there, though, at first glance, this looks like IP theft?
Starting point is 00:03:35 CA I will say that I think the creative spirit of humanity is an incredibly important thing, and we want to build tools that lift that up, that make it so that new people can create better art, better content, write better novels that we all enjoy. I believe very deeply that humans will be at the center of that. I also believe that we probably do need to figure out
Starting point is 00:04:00 some sort of new model around the economics of creative output. I think people have been building on the creativity of others for a long time. People take inspiration for a long time. But as the access to creativity gets incredibly democratized and people are building off of each other's ideas all the time, I think there are incredible new business models that we and others are
Starting point is 00:04:25 excited to explore exactly what that's going to look like. I'm not sure, like, clearly there's some cut and dry stuff, like you can't copy someone else's work, but how much inspiration can you take? If you say, I want to generate art in the style of these seven people, all of whom have consented to that, how do you like divvy up how much money goes to each one? These are like big questions. But every time throughout history, we have put better and more powerful technology in the hands of creators, I think we collectively get better creative output, and people do just more amazing stuff.
Starting point is 00:04:55 CAOKEY NIGHTSWIFT And an even bigger question is when they haven't consented to it. In our opening session, Carol Cadwallader showed chat GPT, give a talk in the style of Carol Cadwallader, and sure enough, it gave a talk that wasn't quite as good as the talk she gave, but it was pretty impressive. And she said, OK, it's great, but I did not consent to this. How are we going to navigate this?
Starting point is 00:05:19 Like, isn't there a way, should it just be people who consented, or shouldn't there be a model that somehow says that any named individual in a prompt whose work is then used, they should get something for that? So right now, if you use our image-gen thing and say, you know, I want something in the style of a living artist, it won't do that. But if you say, I want it in the style of this particular kind of vibe,
Starting point is 00:05:43 or this studio or this art, or this art movement, or whatever it will, I, and obviously, if you're like, you know, I'll put a song that is like a copy of the song, I won't do that. The question of where that line should be and how people say, like, this is too much, we sorted that out before with copyright law and kind of what fair use looks like.
Starting point is 00:06:07 Again, I think in the world of AI, there will be a new model that we figure out. But from the point of view, I mean, the world is full of creative people are some of the angriest people right now, or the most scared people about AI. And the difference between feeling your work is being stolen from you and your future is being stolen from you and feeling your work is being amplified and can be amplified,
Starting point is 00:06:30 those are such different feelings. And if we could shift to the other one, to the second one, I think that really changes how much humanity as a whole embraces all this. Well, again, I would say some creative people are very upset. Some creatives are like, this is the most amazing tool ever, I would say some creative people are very upset. Some creatives are like, this is the most amazing tool ever, I'm doing incredible new work.
Starting point is 00:06:48 But, you know, like, it's definitely a change, and I have a lot of, like, empathy toward people who are just like, I wish this change weren't happening. I liked the way things were before, I liked the way... CA But in principle, you can calculate from any given prompt how much... There should be some way of being able to calculate
Starting point is 00:07:12 what percentage of a subscription revenue or whatever goes towards each answer. In principle, it should be possible, if one could get the rest of the rules figured out, and it's obviously complicated, you could calculate some kind of revenue share. CA If you're a musician and you spend your whole life, your whole childhood, whatever, listening to music, and then you get an idea and you go compose a song
Starting point is 00:07:31 that is inspired by what you've heard before, but a new direction, it'd be very hard for you to say, like, this much was from this song I heard when I was 11, this much from when I was something else. But we're talking here about the situation where someone specifically in a prompt names someone. Yeah, so I... Well, again, right now, if you try to go where someone specifically in a prompt names someone. Yeah, so I... Well, again, right now, if you try to go generate an image in a name style,
Starting point is 00:07:49 we just say, that are slimy, we don't do it. But I think it would be cool to figure out a new model, where if you say, I want to do it in the name of this artist, and they opt in, there's a revenue model there. I think that's a good thing to explore. So I think the world should help you figure out that model quickly, and I think it will make a huge difference, actually. I want to switch topics quickly.
Starting point is 00:08:09 The battle between your model and open source, how much were you shaken up by the arrival of DeepSeq? I think open source has an important place. We actually just last night hosted our first community session to kind of decide the parameters of our open source model and how we want to shape it. We're going to do a very powerful open source model. I think this is important.
Starting point is 00:08:38 We're going to do something near the frontier, I think better than any current open source model out there. This will not be all. There will be people who use this in ways that some people in this room, maybe you or I, don't like. But there is going to be an important place for open-source models as part of the constellation here. And I think we were late to act on that, but we're going to do it really well now.
Starting point is 00:09:02 CAOKEY NYE-JONES-LOUO I mean, your spending, it seems, like an order, or even orders of magnitude more than DeepSeek allegedly spent, although I know there's controversy around that. Are you confident that the actual better model is going to be recognized, or you actually, like, isn't this in some ways life-threatening to the notion that, yeah, by going to massive scale, tens of billions of dollars investment, we can maintain an incredible lead.
Starting point is 00:09:30 All day long, I call people and beg them to give us their GPUs. We are so incredibly constrained. Our growth is going like this. DeepSeq launched and it didn't seem to impact it. There's other stuff that's happening. Tell us about the growth, actually. You gave me a shocking number backstage there. I have never seen growth in any company,
Starting point is 00:09:51 one that I've been involved with or not, like this. Like the growth of ChachiPT. It's really fun. I feel like great, deeply honored. But it is crazy to live through, and our teams are exhausted and stressed, and we're trying to keep things up. CAOKEY NIGHTSHAW, Ph.D. How many users do you have now?
Starting point is 00:10:12 I think the last time we said was 500 million weekly actives, and it is growing very rapidly. CAOKEY NIGHTSHAW, Ph.D. So you told me that it doubled in just a few weeks, in terms of compute or in terms of... I can send that privately, but I guess... CAOKEY NIGHTSHAW, Ph.D. Oh... I can send that privately, but I guess... Oh. I misremembered, Sam. I'm sorry, we can edit that out of the thing if you really want to.
Starting point is 00:10:32 No one here would treat it. It's growing very fast. Um... Oh. So you're confident, you're seeing it grow, take off like a rocket ship, you're really seeing incredible new models all the time. What are you seeing in your best internal models right now that you haven't yet shared with the world,
Starting point is 00:10:52 but you would love to hear on this stage? So first of all, you asked about, are we worried about this model or that model? There will be a lot of intelligent models in the world. Very smart models will be commoditized to some degree. I think we'll have the best, and for some use, you'll want that. But honestly, the models are now so smart that for most of the things most people want to do, they're good enough. I hope that'll change over time, because people will raise their expectations, but if
Starting point is 00:11:16 you're kind of using ChatGPT as a standard user, the model capability is very smart. But we have to build a great product, not just a great model. And so there will be a lot of people with great models, and we will try to build the best product. And people want their image gen, you know, so some Sora examples for video earlier. They want to integrate it with all their stuff.
Starting point is 00:11:40 We just launched a new feature called, well, still called memory, but it's way better than the Memory before, where this model will get to know you over the course of your lifetime. And we have a lot more stuff to come to build this great integrated product. And I think people will stick with that. So there will be many models, but I think we will, I hope, continue to focus on building the best defining product in the space. After I saw your announcement yesterday
Starting point is 00:12:06 that you've now, ChatGPT will know all of your query history, I entered. Tell me about me, ChatGPT, from what you know. And my jaw dropped, so it was shocking. I knew who I was and all these sort of interests that hopefully mostly were pretty much appropriate and shareable. But it was astonishing, and I felt the sense of real excitement, a little bit queasy,
Starting point is 00:12:33 but mainly excitement, actually, at how much more that would allow it to be useful to me. CAOKEY SILVEIRAEUS One of our researchers tweeted, kind of like yesterday or this morning, that the upload happens bit by bit. It's not that you plug your brain in one day, but you will talk to ChatGPT over the course of your life. Maybe if you want, it'll be listening to you throughout the day
Starting point is 00:12:56 and sort of observing what you're doing. And it'll get to know you, and it'll become this extension of yourself, this companion, this thing that just tries to help you be the best, do the best you can. CA.M. In the movie, her, the AI basically announces that she's read all of his emails and decided he's a great writer and persuades a publisher to publish him.
Starting point is 00:13:20 Is that... that might be coming sooner than we think. I don't think it'll happen exactly like that, that might be coming sooner than we think. I don't think it'll happen exactly like that, but yeah, I think something in the direction where AI, you don't have to just like go to ChatGPT or whatever and say, I have a question, give me an answer, but you're getting like proactively pushed things that help you, that make you better, whatever. That does seem like it's soon.
Starting point is 00:13:45 So what have you seen that make you better, whatever, that does seem like it soon. CAOKEY SUTHERLAND So what have you seen that's coming up internally that you think is going to blow people's minds? Give us at least a hint of what the next big jaw-dropper is. The thing that I'm personally most excited about is AI for science at this point. I think I am a big believer that the most important driver of the world
Starting point is 00:14:08 and people's lives getting better and better is new scientific discovery. We can do more things with less. We sort of push back the frontier of what's possible. We're starting to hear a lot from scientists with our latest models that they're actually just more productive than they were before. That it's actually mattering to what they can discover. CAESAR What's the plausible near-term discovery,
Starting point is 00:14:29 like room temperature... STUART Superconductors, that would be a great one. CAESAR Superconducting, yeah. Is that possible? STUART Yeah, I don't think that's prevented by the laws of physics, so it should be possible. But we don't know for sure. I think you'll start to see some meaningful progress against disease with AI-assisted tools.
Starting point is 00:14:56 You know, physics maybe takes a little bit longer, but I hope for it. So that's like one direction. Another that I think is big is starting pretty soon, like in the coming months. Software development has already been pretty transformed. Like, it's quite amazing how different the process of creating software is now than it was two years ago. But I expect, like, another move that big in the coming months as agentic software engineering really starts to happen.
Starting point is 00:15:26 I've heard engineers say that they've had almost religious-like moments with some of the new models, where suddenly they can do it in an afternoon, what would have taken two years? CA Yeah, it's like mine. It really, like, that's been one of my big field AGI moments. CA But talk about what is the scariest thing that you've seen. Because outside, a lot of people picture you as, you know,
Starting point is 00:15:45 you have access to this stuff, and we hear all these rumors coming out of AI, and it's like, oh my god, they've seen consciousness, or they've seen AGI, or they've seen some kind of apocalypse coming. Have you seen, has there been a scary moment when you've seen something internally and thought, uh-oh, we need to pay attention to this?
Starting point is 00:16:05 There have been moments of awe, and I think with that is always like, how far is this going to go, what is this going to be? But there's no like, we don't secretly have, we're not secretly sitting on a conscious model or something that's capable of self-improvement or anything like that. People have very different views of what the big AI risks are going to be. But I continue to believe there will come very powerful models that people can misuse in big ways. People talk a lot about the potential for new kinds of bioterror models
Starting point is 00:16:33 that can prevent the potential for new kinds of bioterror models. And I think that's a very important point. I think that's a very important point. I think that's a very important point. I think that's a very important point. I think that's come very powerful models that people can misuse in big ways. People talk a lot about the potential for new kinds of bioterror, models that can present a real cybersecurity challenge, models that are capable of self-improvement
Starting point is 00:16:55 in a way that leads to some sort of loss of control. So I think there are big risks there. And then there's a lot of other stuff, which honestly is kind of what I think many people mean, where people talk about disinformation or models saying things that they don't like or things like that. CAO. Sticking with the first of those, do you check for that internally before release?
Starting point is 00:17:15 Of course, yes. So we have this preparedness framework that outlines how we do that. I mean, you've had some departures from your safety team. How many people have departed? Why have they left? We have... I don't know the exact number, but there are clearly different views about AI safety systems. I would really point to our track record. There are people who will say all sorts of things.
Starting point is 00:17:38 You know, something like 10 percent of the world uses our systems now a lot. And we are very proud of the world uses our systems now a lot. And we are very proud of the safety track record. But track record isn't the issue in a way, because what we're talking about, we're talking about an exponentially growing power where we fear that we may wake up one day and the world is ending.
Starting point is 00:18:00 So it's really not about track record. It's about plausibly saying that the pieces are in place to shut things down quickly if we see a danger. Oh yeah, yeah, no, of course, of course that's important. You can't, you don't like wake up one day and say, hey, we didn't have any safety process in place, now we think the model is really smart, so now we have to care about safety. You have to care about it all along this exponential curve. Of course, the stakes increase, and there are big challenges. But the way we learn how to build safe systems
Starting point is 00:18:33 is this iterative process of deploying them to the world, getting feedback, while the stakes are relatively low, learning about, like, hey, this is something we have to address. And I think as we move into these agentic systems, there's a whole big category of new things we have to learn to address. CAOKEY So let's talk about agentic systems and the relationship between that and AGI. I think there's confusion out there. I'm confused.
Starting point is 00:18:55 So artificial general intelligence, it feels like chat GPT is already a general intelligence. I can ask it about anything, and it comes back with an intelligent answer. Why isn't that AGI? It doesn't... First of all, you can't ask it anything. It's very nice of you to say it,
Starting point is 00:19:13 but there's a lot of things it's still embarrassingly bad at. But even if we fix those, which hopefully we will, it doesn't continuously learn and improve. It can't go get better at something that it's currently weak at. It can't go discover new science and update its understanding and do that. And it also kind of can't, even if we lower the bar, it can't just sort of do any knowledge work you could do in front of a computer. I actually, even without the sort of ability to get better
Starting point is 00:19:45 at something it doesn't know yet, I might accept that as a definition of AGI. But the current systems, you can't say, like, hey, go do this task for my job, and it goes off and clicks around the internet and calls someone and looks at your files and does it. And without that, it feels definitely short of it. CAOKEY NYEIHOLAEKWUHUG
Starting point is 00:20:04 Do you guys have internally a clear definition of what AGI is? And when do you think that we may be there? It's one of these, it's like the joke, if you've got 10 open-air researchers in a room and ask to define AGI, you'd get 14 definitions. And... CAO That's worrying, though, isn't it? Because that has been the mission initially.
Starting point is 00:20:23 We're going to be the first to get to AGI, we'll do so safely, but we don't have a clear definition of what it is. I was going to finish the answer. Sorry. What I think matters, though, and what people want to know, is not where is this one magic moment of we finished, but given that what looks like is going to happen is that the models are just going to get smarter and more capable
Starting point is 00:20:51 and smarter and more capable and smarter and more capable, on this long exponential, different people will call it AGI at different points, but we all agree it's going to go way, way past that, you know, to whatever you want to call these systems that get much more capable than we are. The thing that matters is how do we talk about a system that is safe through all of these
Starting point is 00:21:15 steps and beyond as the system gets more capable than we are, as the system can do things that we don't totally understand. And I think more important than a when is AGI coming and what's the definition of it, it's recognizing that we are in this unbelievable exponential curve. And you can say, this is what I think AGI is, you can say, you think this is what you think AGI is, someone else can say superintelligence is out here, but we're going to have to contend and get wonderful benefits
Starting point is 00:21:45 from this incredible system. And so I think we should shift the conversation away from, what's the AGI moment, to a recognition that this thing is not going to stop. It's going to go way beyond what any of us would call AGI, and we have to build a society to get the tremendous benefits of this and figure out how to make it safe. CAOKEY NELSON Well-SCHEHAUSER
Starting point is 00:22:05 Well, one of the conversations this week has been that the real change moment is... I mean, AGI is a fuzzy thing, but what is clear is agentic AI, when AI is set free to pursue projects on its own and to put the pieces together. You've actually got a thing called operator, which starts to do this.
Starting point is 00:22:27 And I tried it out. You know, I wanted to book a restaurant, and it's kind of incredible. It kind of can go ahead and do it, but this is what it said. There's a... You know, it was an intriguing process. And, you know, give me your credit card and everything else, and... and, you know, give me your credit card and everything else, and I declined on this case to go forward. But I think this is the challenge that people are going to have.
Starting point is 00:22:52 It's kind of like, it's an incredible superpower. It's a little bit scary, and Yoshua Bengio, when he spoke here, said that agentic AI is the thing to pay attention to. This is when everything could go wrong, as we give power to AI to go out onto the internet when he spoke here, said that agentic AI is the thing to pay attention to. This is when everything could go wrong, as we give power to AI to go out onto the internet to do stuff. I mean, going out onto the internet was always, in the sci-fi stories,
Starting point is 00:23:14 the moment where escape happened and potential things could go horribly wrong. How do you both release agentic AI and have guardrails in place so that it doesn't go too far? First of all, obviously you can choose not to do this and say, I don't want this, I'm going to call the restaurant and read them my credit card over the phone.
Starting point is 00:23:36 CAO I could choose, but someone else might say, oh, go out, chat GBT onto the internet at large and rewrite the internet to make it better for humans. CAO The point I was going to make is just with any new technology, it takes a while for people to get comfortable. I remember when I wouldn't put my credit card on the internet because my parents had convinced me someone was going to read the number and you had to fill out the form and then call them,
Starting point is 00:23:59 and then we kind of all said, OK, we'll build anti-fraud systems and we can get comfortable with this. I think people are going to be slow to get comfortable with agentic AI in many ways, but I also really agree with what you said, which is that even if some people are comfortable with it and some aren't, we are going to have AI systems clicking around the internet. And this is, I think, the most interesting and consequential safety challenge we have yet faced,
Starting point is 00:24:23 because AI that you give access to your systems, your information, the ability to click around on your computer, now, those, you know, when AI makes a mistake, it's much higher stakes. It is the gate on, so we talked earlier about safety and capability. I kind of think they're increasingly becoming one-dimensional. Like, a good product is a safe product. You will not use our agents if you do not trust that they're not gonna like empty your bank account or delete your data or who knows what else. And so people want to use agents that
Starting point is 00:25:00 they can really trust that are really safe and I think we think we are gated on our ability to make progress, on our ability to do that, but it's a fundamental part of the product. CAO in a world where agency is out there and say that maybe open models are widely distributed and someone says, OK, AGI, I want you to go out onto the internet and spread a meme however you can
Starting point is 00:25:30 that X people are evil or whatever it is. It doesn't have to be an individual choice. A single person could let that agent out there, and the agent could decide, well, in order to execute on that function, I've got to copy myself everywhere and, you know. Like, are there red lines that you have clearly drawn internally, where you know what the danger moments are
Starting point is 00:25:55 and that we cannot put out something that could go beyond this? CAOKEY Yeah, so this is the purpose of our preparedness framework. And we'll update that over time, but we've tried to outline where we think the most important danger moments are, or what the categories are, how we measure that and how we would mitigate something before releasing it. I could tell from the conversation you wish AI. You're not a big AI fan.
Starting point is 00:26:22 CAOKEY MCLARENOE Actually, on the contrary, I use it every day. I'm awed by it. I think this is an incredible time to be alive. I wouldn't be alive any other time, and I cannot wait to see where it goes. But we've been holding... I think it's essential to hold... You can't divide people into those camps.
Starting point is 00:26:39 You have to hold a passionate belief in the possibility, but not be over-seduced by it, because things could go horribly wrong. hold a passionate belief in the possibility, but not be over-seduced by it, because things could go horribly wrong. No, no, I... No. I was going to say is, I totally understand that. I totally understand looking at this and saying,
Starting point is 00:26:57 this is an unbelievable change coming to the world, and maybe I don't want this. Or maybe I love parts of it. Maybe I love talking to Chad GPT, but I worry about what's going to happen to art, and I worry about the pace of change, and I worry about these agents clicking here, clicking around the internet,
Starting point is 00:27:14 and maybe on balance, I wish this weren't happening, or maybe I wish it were happening a little slower, or maybe I wish it were happening in a way where I could pick and choose what parts of progress were going to happen. And I think the fear is totally rational, the anxiety is totally rational. We all have a lot of it, too.
Starting point is 00:27:35 But A, there will be tremendous upside. Obviously, you know, you use it every day, you like it. B, I really believe that society figures out, you know, you use it every day, you like it. B, I really believe that society figures out, over time, with some big mistakes along the way, how to get technology right. And C, this is going to happen. This is like a discovery of fundamental physics
Starting point is 00:27:59 that the world now knows about, and it's going to be part of our world. And I think this conversation is really important. I think talking about these areas of danger are really important, new economic models are really important. But we have to embrace this with caution but not fear, or we will get run by with other people that use AI to do better things. CAOKEY You've actually been one of the most eloquent proponents of safety.
Starting point is 00:28:25 You testified in the Senate. You've been one of the most eloquent proponents of safety. You testified in the Senate. I think you said basically that we should form a new safety agency that licenses any effort, i.e. it will refuse to license certain efforts. Do you still believe in that policy proposal? I have learned more about how the government works. I don't think this is quite the right policy proposal? I have learned more about how the government works. I don't think this is quite the right policy proposal.
Starting point is 00:28:53 But I do think the idea that as these systems get more advanced and have legitimate global impact, we need some way, you know, maybe the companies themselves put together the right framework or the right sort of model for this, but we need some way that very advanced models have external safety testing and we understand when we get close to some of these danger zones, I very much still believe in that.
Starting point is 00:29:18 CAOKEY MCLARENOE It struck me as ironic that a safety agency might be what we want, and yet agency is the very thing that is unsafe. There's something odd about the language there, but anyway. I asked... Yes, please. I do think this concept of... we need to define rigorous testing for models,
Starting point is 00:29:40 understand what the threats that we collectively, society, most want to focus on and make sure that as models are getting more capable, we have a system where we all get to understand what's being released in the world. I think this is really important, and I think we're not far away from models that are going to be of great public interest in that sense.
Starting point is 00:30:00 CAOKEY NELSON So Sam, I asked your O1 Pro reasoning model, which is incredibly important. SAM SACCONE Thank you for the $200. CAOKEY NEL. $200 a month. It's a bargain at the price. I said, what is the single most penetrating question I could ask you? It thought about it for two minutes. Two minutes.
Starting point is 00:30:17 You want to see the question? I do. Sam, given that you're helping create technology that could reshape the destiny of our entire species, who granted you or anyone the moral authority to do that? And how are you personally responsible? Accountable if you're wrong. No, it was so that was impressive. You've been asking me versions of this for the last half hour What do you think I? But what I would say is this.
Starting point is 00:30:44 Here's my version of that question. But no answer? What was your question for me? Yeah, how would you answer that one? In your shoes? Yeah. Or what do you, as an outsider? I don't know.
Starting point is 00:30:54 I am puzzled by you. I'm kind of awed by you, because you've built one of the most astonishing things out there. There are two narratives about you out there. One is, you are the one who's been in the room for the longest time, and you, because you've built one of the most astonishing things out there. There are two narratives about you out there. One is you are this incredible visionary who's done the impossible, and you shock the world.
Starting point is 00:31:16 With far fewer people than Google, you came out with something that was much more powerful than anything you've done. It is amazing what you've built. But the other narrative is that you have shifted ground, that you've shifted from being open AI, this open thing, to the allure of building something super powerful. And you know, you've lost some of your key people. There's a narrative out there.
Starting point is 00:31:41 Some people believe that you're not to be trusted in this space. I would love to know who you are. What is your narrative about yourself? What are your core values, Sam, that can give us the world confidence that someone with so much power here is entitled to it? Look, I think like anyone else,
Starting point is 00:32:02 I'm a nuanced character that doesn't reduce well to one dimension here, so probably some of the good things are true and probably some of the criticism is true. I, in terms of OpenAI, our goal is to make AGI and distribute it, make it safe for the broad benefit of humanity. I think by all accounts, we have done a lot in that direction. Clearly our tactics have shifted over time.
Starting point is 00:32:24 I think we didn't really know what we were going to be when we grew up. We didn't think we would have to build a company around this. We learned a lot about how it goes and the realities of what these systems were going to take from capital. But I think we've been, in terms of putting incredibly capable AI with a high degree of safety in the hands of a lot of people and giving them tools to sort of do whatever amazing things they're going to do. I think it would be hard to give us a bad grade on that.
Starting point is 00:32:52 I do think it's fair that we should be open sourcing more. I think it was reasonable for all of the reasons that you asked earlier as we weren't sure about the impact these systems were going to have and how to make them safe that we acted with precaution. I think a lot of your questions earlier would suggest at least some sympathy to the fact that we've operated that way. But now I think we have a better understanding as a world, and it is time for us to put very capable open systems out into the world.
Starting point is 00:33:21 If you invite me back next year, you will probably yell at me for somebody who has misused these open-source systems and say, why did you do that? That was bad. You should have not gone back to your open roots. But there's trade-offs in everything we do, and we are one player in this one voice in this AI revolution, trying to do the best we can
Starting point is 00:33:43 and kind of steward this technology into the world in a responsible way. We've definitely made mistakes, we'll definitely make more in the future. On the whole, I think we have, over the last almost decade, it's been a long time now, it's, you know, we have mostly done the thing we've set out to do, we have a long way to go in front of us. Our tactics will shift more in the future, but adherence to this sort of mission and what we're trying to do,
Starting point is 00:34:09 I think, is very strong. (*Applause*) You posted this... Well, OK, so here's the ring of power from Lord of the Rings. Your rival, I will say, not your best friend at the moment, Elon Musk, claimed that he thought that you'd been corrupted by the ring of power. An allegation that, by the way...
Starting point is 00:34:36 A.S.P. An allegation that could be applied to Elon as well, you know, to be fair. But I'm curious, sir. People, you have... I might respond. I'm thinking about it. I might say something. I...
Starting point is 00:34:53 It's in everyone's mind, as we see technology, CEOs get more powerful, get richer, is can they handle it, or does it become irresistible? Does the power and the wealth make it impossible to sometimes do the right thing, and you just have to cling tightly to that ring? What do you think? I mean, do you feel that ring sometimes?
Starting point is 00:35:16 CA How do you think I'm doing, relative to other CEOs that have gotten a lot of power and changed how they act or done a bunch of stuff in the world? Like, how do you think? (*Applause*) You have a beautiful... You are not a rude, angry person who comes out and says aggressive things to other people.
Starting point is 00:35:43 Sometimes I do that. That's my single vice. But you know... No, but... No, I think in the way that you personally conduct yourself, it's impressive. I mean, the question some people ask is, is that the real you, or is there something else going on? But I'm just...
Starting point is 00:35:59 No, I'll take the feedback. We all have seen CEOs... You put up the Soren Ring of Power, or whatever that thing is. So I'll take the feedback. What is something I have done where you think I've been corrupted by power? I think the fear is that just the transition of open AI to a for-profit model is, you know, some people say, well, there you go, you got corrupted by the desire for wealth.
Starting point is 00:36:20 You know, at one point, there was going to be no equity in it. It'll make you fabulously wealthy. By the way, I don't think that is your motivation personally. the desire for wealth. You know, at one point there was going to be no equity in it. It'll make you fabulously wealthy. By the way, I don't think that is your motivation personally. I think you want to build stuff that is insanely cool, and what I worry about is the competitive feeling, that you see other people doing it, and it makes it impossible to develop at the right pace.
Starting point is 00:36:40 But you tell me, if you don't feel that, like, so few people in the world have the kind of capability and potential you have, we don't know what it feels like. What does it feel like? Shockingly the same as before. I think you can get used to anything step by step. I think if I were, like, transported from 10 years ago to right now all at once, it would feel very disorienting.
Starting point is 00:37:05 But anything does become sort of the new normal, so it doesn't feel any different. And it's strange to be sitting here talking about this, but like, you know, the monotony of day-to-day life, which I mean in the best possible way, feels exactly the same. CA You're the same person.
Starting point is 00:37:27 I mean, I'm sure I'm not in all sorts of ways, but I don't feel any different. CA This was a beautiful thing you posted. Your son. That last thing you said that, I've never felt love like this. I think any parent in the room so knows that feeling, that wild biological feeling that humans have and AIs never will, of your holding your kid. so knows that feeling, that wild biological feeling that humans have and AIs never will,
Starting point is 00:37:48 of your holding your kid. And I'm wondering whether that's changed how you think about things. Like, say, here's a black box with a red button on it. You can press that button, and you give your son likely the most unbelievable life, but also you inject a 10 percent chance that he gets destroyed. Do you press that button? In the literal case, no.
Starting point is 00:38:22 If the question is, do I feel like I'm doing that with my work, the answer is I also don't feel like that. Having a kid changed a lot of things, and by far the most amazing thing that has ever happened to me. Like, everything everybody says is true. The thing my co-founder, Ilya, said once is, I don't know, this is a paraphrase, something like, I don't know what the meaning of life is, but for sure it has something to do with babies.
Starting point is 00:38:45 And it's, like, unbelievably accurate. It changed how much I'm willing to, like, spend time on certain things, and, like, the kind of cost of not being with my kid is just, like, crazily high. And I... But, you know, I really cared about not destroying the world before. I really care about it now.
Starting point is 00:39:09 I didn't need a kid for that part. I mean, I definitely think more about what the future will be like for him in particular, but I feel like a responsibility is the best thing I can for the future of everybody. Tristan Harris gave a very powerful talk here this week in which he said that the key problem, in his view, was that you and your peers in these other models
Starting point is 00:39:35 all feel basically that the development of advanced AI is inevitable, that the race is on and that there is no choice but to try and win that race and to do so as responsibly as you can. And maybe there's a scenario where your super-intelligent AI can act as a brake on everyone else's or something like that. But the very fact that everyone believes it is inevitable means that that is a pathway to serious risk and instability.
Starting point is 00:40:08 Do you think that you and your peers do feel that it's inevitable, and can you see any pathway out of that, where we could collectively agree to just slow things down a bit, have society as a whole weigh in a bit, and say, no, we don't want this to happen quite as fast, it's too disruptive? First of all, I think people slow things down all the time because the technology is not ready,
Starting point is 00:40:38 because something's not safe enough, because something doesn't work. There are, I think, all of the efforts. Hold on things, pause on things, delay on things, don't release certain capabilities. So I think this happens. And again, like, this is where I think the track record does matter. If we were rushing things out and there were all sorts of problems, either the product didn't work as people wanted it to or there were real safety issues or other things there, and I will come back to a change we made, I think you could do that. There is communication between most of the efforts.
Starting point is 00:41:08 With one exception, I think all of the efforts care a lot about AI safety. And I think that people... I'm obviously not going to say. And I think that there's really... deep care to get this right. I think the caricature of this is just like this crazy race or sprint or whatever, misses the nuance of people are trying to put out models quickly and make great products for people, but people feel the impact of this so incredibly
Starting point is 00:41:48 that I think if you could go sit in a meeting in OpenAI or other companies, you'd be like, oh, these people are really kind of caring about this. Now, we did make a change recently to how we think about one part of what's traditionally been understood as safety, which is with our new image model,
Starting point is 00:42:12 we've given users much more freedom on what we would traditionally think about as speech harms. You know, if you try to get offended by the model, will the model let you be offended? And in the past, we've had much tighter guardrails on this. But I think part of model alignment is following what the user of a model wants it to do within the very broad bounds of what society decides.
Starting point is 00:42:42 So if you ask the model to depict a bunch of violence or something like that, or to sort of reinforce some stereotype, there's a question of whether or not it should do that. We're taking a much more permissive stance. Now, there's a place where that starts to interact with real-world harms that we have to figure out how to draw the line for, but I think there will be cases where a company says, OK, we've heard the feedback from society,
Starting point is 00:43:04 people really don't want models to censor them in ways that they don't think make sense. That's a fair safety negotiation. But to the extent that this is a problem of collective belief, the solution to those kinds of problems is to bring people together and meet at one point and make a different agreement. If there was a group of people, say here or out there in the world, who were willing to host a summit of the best ethicists, technologists, but not too many people, small, but and you and your peers,
Starting point is 00:43:38 to try to crack what agreed safety lines could be across the world, would you be willing to attend? Would you urge your colleagues to come? Of course, but I'm much more interested in what our hundreds of millions of users want as a whole. I think a lot of the room has historically been decided in small elite summits. One of the cool new things about AI
Starting point is 00:43:58 is our AI can talk to everybody on Earth, and we can learn the collective value preference of what everybody wants, rather than have a bunch of people who are blessed by society to sit in a room and make these decisions. I think that's very cool. And... ... and I think you will see us do more in that direction.
Starting point is 00:44:17 And when we've gotten things wrong, because the elites in the room had a different opinion about what people wanted for the guardrails on ImageGen than what people actually wanted, and we couldn't point to real-world harm, so we made that change. I'm proud of that. I mean, there is a long track record of unintended consequences coming out of the actions of hundreds of millions of people.
Starting point is 00:44:37 And there are people... Also 100 people in a room making a decision for this. And the hundreds of millions of people don't have control over... They don't necessarily see what the next step could lead to. I am hopeful that that is totally accurate and totally right. I am hopeful that AI can help us be wiser, make better decisions, can talk to us, and if we say, hey, I want thing X,
Starting point is 00:45:01 rather than, like, the crowd spin that up, AI can say, hey, totally understand that's what you want. If that's what you want at the end of this conversation, you're in control, you pick, but have you considered it from this person's perspective or the impact it will have on this people? I think AI can help us be wiser and make better collective governance decisions than we could before.
Starting point is 00:45:21 CAOKE Well, we're well out of time. Sam, I'll give you the last word. What kind of world do you believe all things considered your son will grow up into? I remember... It's so long ago now. I don't know when the first iPad came out. Is it like 15 years, something like that?
Starting point is 00:45:40 I remember watching a YouTube video at the time of a little toddler sitting in a doctor's office waiting room or something, and there was a magazine, one of those old glossy cover magazines, and the toddler had his hand on it and was going like this and kind of angry. And to that toddler, it was like a broken iPad. And she never thought of a world that didn't have touchscreens in them. And to all the adults watching this, it was this amazing thing,
Starting point is 00:46:11 because it was like, it's so new, it's so amazing, it's a miracle. Of course, magazines are the way the world works. My kid, my kids, hopefully, will never be smarter than AI. They will never grow up in a world where products and services are not incredibly smart, incredibly capable. They will never grow up in a world where computers don't just kind of understand you and do, you know, for some definition of whatever you can imagine,
Starting point is 00:46:43 whatever you can imagine. It'll be a world of incredible material abundance. do, you know, for some definition of whatever you can imagine, whatever you can imagine. It'll be a world of incredible material abundance. It'll be a world where the rate of change is incredibly fast and amazing new things are happening. And it'll be a world where, like, individual ability, impact, whatever, is just so far beyond what a person can do today. I hope that my kids and all of your kids will look back at us
Starting point is 00:47:11 with some pity and nostalgia and be like, they lived such horrible lives, they were so limited. The world sucked so much. I think that's great. Sam. (*Applause*) It's incredible what you've built. It really is. It's incredible what you've built. It really is.
Starting point is 00:47:28 It's unbelievable. I think over the next few years, you're going to have some of the biggest opportunities, the biggest moral challenges, the biggest decisions to make of perhaps any human in history, pretty much. You should know that everyone here will be cheering you on to do the right thing. We will do our best. Thank you very much. Thank you for coming to TED. Thank you.
Starting point is 00:47:49 Sam. Thank you. Thank you very much. Enjoyed it. That was Sam Altman in conversation with Chris Anderson at TED 2025. If you're curious about TED's curation, find out more at TED.com slash curation guidelines. And that's it for today's show. TED Talks Daily is part of the TED Audio Collective. This episode was produced and edited by our team, Martha Estefanos, Oliver Friedman, Brian Green, Lucy Little, Alejandra Salazar, and Tansika Sangmanivon. It was mixed by Christopher Fazy-Bogan.
Starting point is 00:48:27 Additional support from Emma Tobner and Daniela Ballarezzo. I'm Elise Hu. I'll be back tomorrow with a fresh idea for your feed. Thanks for listening. I used to say, I just feel stuck. But then I discovered lifelong learning. It gave me the skills to move up, gain an edge, and prepare for what's next. The University of Toronto School of Continuing Studies. Lifelong learning to stay forever, unstuck.
