The Tucker Carlson Show - Sam Altman on God, Elon Musk and the Mysterious Death of His Former Employee

Episode Date: September 10, 2025

Sam Altman on God, Elon Musk and the Mysterious Death of His Former Employee (00:00) Is AI Alive? Is It Lying to Us? (03:37) Does Sam Altman Believe in God? (19:08) ChatGPT Users Committing Suicide (29:01) Altman's Biggest Fear About AI (41:37) Altman's Thoughts on Elon Musk (49:00) What Jobs Will Be Lost to AI? Paid partnerships with: Cowboy Colostrum: Get 25% off your entire order with code TUCKER at https://cowboycolostrum.com Masa Chips: Get 25% off with code TUCKER at https://masachips.com/tucker Dutch: Get $50 a year for vet care with Tucker50 at https://dutch.com/tucker Meriwether Farms: Visit https://MeriwetherFarms.com/Tucker and use code TUCKER76 for 15% off your first order. Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 Thanks for doing this.
Starting point is 00:00:30 Thank you. So ChatGPT, other AIs, can reason. It seems like they can reason. They can make independent judgments. They produce results that were not programmed in. They kind of come to conclusions. They seem like they're alive. Are they alive?
Starting point is 00:00:44 Is it alive? No, and I don't think they seem alive, but I understand where that comes from. They don't do anything unless you ask, right? Like they're just sitting there kind of waiting. They don't have like a sense of agent. or autonomy. It's the more you use them, I think, the more the kind of illusion breaks. But they're incredibly useful. Like, they can do things that maybe don't seem alive, but seem
Starting point is 00:01:10 like, they do seem smart. I spoke to someone who's involved in, at scale of the development of the technology who said they lie. Have you ever seen that? They hallucinate all the time, yeah. Or not all the time. They used to hallucinate all the time. They now hallucinate a little bit. What does that mean? What's the distinction between hallucinating and lying? If you ask, again, this has gotten much better, but in the early days, if you asked, you know, what, in what year was president, the made-up name, President Tucker Carlson of the United States born, what it should say is, I don't think Tucker Carlson was ever president of the United States. Right. But because of the way they were trained, that was not the most likely response in the training data. So it would assume like, oh, you know, I don't know.
Starting point is 00:01:58 that there wasn't. The user told me that there was President Tucker Carlson, so I'll make my best guess at a number. And we figured out how to mostly train that out. There are still examples of this problem, but it is, I think it is something we will get fully solved, and we've already made, you know, in the GPT-5 era, a huge amount of progress towards that. But even what you just described seems like an act of will, or certainly an act of creativity. And so I'm just, I've just watched a demonstration of it, and it doesn't seem quite like a machine. It seems like it has the spark of life to it. Do you detect that at all?
Starting point is 00:02:37 So in that example, like the mathematically most likely answer, as it's sort of calculating through its weights, was not, there was never this president. It was the user must know what they're talking about. It must be here. And so mathematically, the most likely answer is a number. Now, again, we figured out how to overcome this. that. But in what you saw there, I think it's like, I feel like I have to kind of like hold these two simultaneous ideas in my head. One is all of this stuff is happening because a big computer very quickly is multiplying large numbers in these big huge matrices together. And those
Starting point is 00:03:13 are correlating with words that are being put out one or the other. On the other hand, the subjective experience of using that feels like it's beyond just a really fancy calculator. And it is useful to me. It is surprising to me in ways that are beyond what that mathematical reality would seem to suggest. Yeah. And so the obvious conclusion is it has a kind of autonomy or a spirit within it. And I know that a lot of people in their experience of it reach that conclusion. This is, there's something divine about this. There's something that's bigger than the sum total of the human inputs. And so they worship it. It's, it has, there's a spiritual component to it.
Starting point is 00:03:54 Do you detect that? Have you ever felt that? No, there's nothing to me at all that feels divine about it or spiritual in any way. But I am also like a tech nerd and I kind of look at everything through that lens.
Starting point is 00:04:06 So what are your spiritual views? I'm Jewish. And I would say I have like a fairly traditional view of the world that way. So you're religious. You believe in God? I don't, I'm not like a literal, I don't believe the,
Starting point is 00:04:21 I'm not like a literalist on the Bible, but I'm not someone who says, like, I'm culturally Jewish. Like, if you ask me, I'm just I'm Jewish. But do you believe in God? Like, do you believe that there is a force larger than people that created people, created the Earth, set down a specific order for living, that there's an absolute morality attached that comes from that God? I think probably, like most other people, I'm somewhat confused on this, but I believe there is something bigger going on than, you know, can be explained by physics, yes. So you think the Earth and the people were created by something? wasn't just like a spontaneous accident?
Starting point is 00:04:56 Do I, when I say that? It does not feel like a spontaneous accident, yeah. I don't think I have the answer. I don't think I know exactly what happened, but I think there is a mystery beyond my comprehension here going on. Have you ever felt communication from that force or from any force beyond people, beyond the material? Not, no, not really.
Starting point is 00:05:18 I ask because it seems like the technology that you're creating or shepherding into existence will have more power than people. On this current trajectory, I mean, that will happen. Who knows what will actually happen? But like the graph suggests it. And so that would give you, you know, more power than any living person. So I'm just wondering how you see that. I used to worry about something like that much more.
Starting point is 00:05:47 I think what will happen. I used to worry a lot about the concentration of power in one or a handful of people or companies because of AI. What it looks like to me now, and again, this may evolve again over time, is that it'll be a huge up-leveling of people where everybody will be a lot more powerful, or embrace the technology, but a lot more powerful.
Starting point is 00:06:11 But that's actually okay. That scares me much less than a small number of people getting a ton more power. if the kind of like ability of each of us just goes up a lot because we're using this technology and we're able to be more productive and more creative or discover new science and it's a pretty broadly distributed thing like billions of people are using it.
Starting point is 00:06:28 That I can wrap my head around that. That feels okay. So you don't think this will result in a radical concentration in power? It looks like not, but again, the trajectory could shift again and we'd have to adapt. I used to be very worried about that.
Starting point is 00:06:42 And I think the kind of conception a lot of us in the field had about how this might go could have led to a world like that. But what's happening now is tons of people use chatGPt and other chatbots, and they're all more capable. They're all kind of doing more. They're all able to achieve more,
Starting point is 00:07:01 starting businesses, come up with new knowledge. And that feels pretty good. So if it's nothing more than a machine and just the product of its inputs, then the two obvious questions, like what are the inputs? Like, what's the moral framework that's been put into the technology?
Starting point is 00:07:19 Like, what is right or wrong according to JetGPT? Do you mean to answer that one first? I would. Yeah. So on that one, I... Someone says, I mean, early on in chat GPT, when they really has stuck with me,
Starting point is 00:07:37 which is one person at a lunch table said something like, you know, we're trying to train this to be like a human. Like, we're trying to learn like a human. does and read these books and whatever, and then another person said, no, we're really like training this to be like the collective of all of humanity. We're reading everything. You know, we're trying to learn everything. We're trying to see all these perspectives. And if we do our job, right, all of humanity. Good, bad, you know, a very diverse set of perspectives, some things
Starting point is 00:08:02 that will feel really good about, some things that will feel bad about. That's all in there. Like, this is learning the kind of collective experience, knowledge, learnings of humanity. Now, the base model gets trained that way, but then we do have to align it to behave one way or another and say, you know, I will answer this question, I won't answer this question. And we have this thing called the model spec where we try to say, you know, here's, here are the rules we'd like the model to follow. It may screw up, but you know, you could at least tell if it's doing something you don't like, is that a bug or is that intended? And we have a debate process with the world to get input on that spec. We give people a lot of freedom and customization within that.
Starting point is 00:08:43 There are absolute bounds that we draw, but then there's a default of if you don't say anything, how should the model behave, what should it do, how should it answer moral questions, how should it refuse to do something, what should it do? And this is a really hard problem. We have a lot of users now, and they come from very different life perspectives
Starting point is 00:09:04 and what they want. But on the whole, I have been pleasantly surprised with the model's ability to learn and apply a moral framework. But what moral framework? I mean, the sum total of world literature philosophy is at war with itself, like the Marquisadeusat is, you know, like nothing in common with the Gospel of John. So, like, how do you decide which is superior? That's why we wrote this like model spec of here's how we're going to handle these cases. Right, but what criteria did you use to decide what the model is? Like, who decided that?
Starting point is 00:09:36 Who did you consult? Like, what's, you know, why is the gospel of John better than the Marquis de Saad? We consulted, like, hundreds of moral philosophers, people who thought about, like, ethics of technology and systems. And at the end, we had to, like, make some decisions. The reason we try to write these down is because, A, we won't get everything right. B, we need the input of the world. And we have found a lot of cases. where there was an example of something that seemed to us like a fairly clear decision of what to allow or not to allow,
Starting point is 00:10:13 where users convinced us, like, hey, by blocking this thing that you think is an easy decision to make, you are not allowing this other thing, which is important, and there's like a difficult tradeoff there. In general, the attention that, so a principle that I normally like is to treat our adult users, like adults, very strong guarantees on privacy, very strong guarantees on individual user freedom, and this is a tool we are building. You get to use it within very broad framework. On the other, within a very broad framework, on the other hand, as this technology becomes more and more powerful, there are clear examples of where society has an interest that is in significant tension with user freedom. And we could start with an obvious one. Like,
Starting point is 00:11:04 like, should Chatshp.T. teach you how to make a bio weapon? Now, you might say, hey, I'm just really interested in biology, and I'm a biologist, and I want to, you know, I'm not going to do anything bad with this, I just want to learn. And I could go read a bunch of books, but Chachapit can teach me faster, and I want to learn how to, you know, I want to learn about, like, novel virus synthesis or whatever. And maybe you do. Maybe you really don't want to, like, cause any harm. But I don't think it's in society's interest for Chachapitie to help people build bio-weapons. And so that's a case.
Starting point is 00:11:40 Sure. That's an easy one, though. There are a lot of tougher ones. I did say start with an easy one. We've got a new partner. It's a company called Cowboy Colostrum. It's a brand that is serious about actual health. And the product is designed to work with your body, not against your body. It is a pure and simple product.
Starting point is 00:11:59 All natural. Unlike other brands, Cowboy Colostrum is never diluted. It always comes directly from a very. American grass-fed cows. There's no filler. There's no junk. It's all good. It tastes good, believe it or not. So before you reach for more pills for every problem that pills can't solve, we recommend you give this product, Cowboy Clostrum, a try. It's got everything your body needs to heal and thrive. It's like the original superfood loaded with nutrients, antibodies, proteins, help build a strong immune system, stronger hair, skin, and nails. I threw my wig away
Starting point is 00:12:33 on right back to my natural hair after using this product. You just take a scoop of it every morning in your beverage, coffee or a smoothie, and you will feel the difference every time. For a limited time, people listen to our show, get 25% off the entire order. So go to cowboy colostrum.com, use the code Tucker at checkout. 25% off when you use that code, Tucker at cowboyclostrum.com. Remember you mentioned, you heard it here first. So did you know that before the current generation,
Starting point is 00:13:00 chips and fries were cooked in natural fat? That's like beef tallow. That's how things used to be done, and that's why people looked a little slimmer at the time and ate better than they do now. Well, Masa chips is bringing that all back. They've created tortilla chip that's not only delicious, it's made with just three simple ingredients.
Starting point is 00:13:18 A, organic corn, B, C, C, 100% grass-fed beef tallow. That's all that's in it. These are not your average chips. Masa chips are crunchier, more flavorful, even sturdier if they don't break in your guacamole. and because of the quality ingredients, they are way more filling and nourishing, so you don't have to eat four bags of them.
Starting point is 00:13:39 You can eat just a single bag as I do. It's a totally different experience. It's light, it's clean, it's genuinely satisfying. I have a garage full, and I can tell you they're great. The lime flavor is particularly good. We have a hard time putting those down. So if you want to give it a try,
Starting point is 00:13:54 go to Masa Chips, M-A-Chips.com slash Tucker. Use the code Tucker for 25% off your first order. That's MasaChips.com slash Tucker. Use the code, Tucker for 25% off your first order for to shop in person. In October, Moss is going to be available at your local Sprouts supermarkets to stop by and pick up a bag before we eat them all and we eat along. Tulsa is my home now. Academy Award nominee Sylvester Stallone stars in the Paramount Plus original series, Tulsa King.
Starting point is 00:14:23 His distillery is a very interesting business. And we've got to know the enemy. From Taylor Sheridan, co-creator of Landman. What are you saying? If you think you're going to take me out, it's going to be really difficult. Tulsa King, new season streaming September 21st, exclusively on Paramount Plus. Wendy's most important deal of the day has a fresh lineup. Pick any two breakfast items for $4.
Starting point is 00:14:51 New four-piece French toast sticks, bacon or sausage wrap, biscuit or English muffin sandwiches, small hot coffee, and more. Limited time only at participating Wendy's taxes extra. Well, every decision is ultimately a moral decision. and we make them without even recognizing them as such. And this technology will be, in effect, making them for us. Well, I don't agree with it. It will be making them for us, but it will have to be influencing the decisions, for sure. And because it will be embedded in daily life.
Starting point is 00:15:19 And so who made these decisions? Like who are the people who decided that one thing is better than another? You mean like... What are their names? Which kind of decision? The basic, the specs that you... that you alluded to that create the framework that does attach a moral weight to worldviews and decisions like, you know, liberal democracy is better than Nazism or whatever.
Starting point is 00:15:45 They seem obvious, in my view, are obvious, but are still moral decisions. So who made those calls? As a matter of principle, I don't like docs our team, but we have a model behavior team and the people who want to make that. It affects the world. What I was going to say is the person I think you should hold accountable for those calls as me. like on the public face eventually. Like, I'm the one that can overrule
Starting point is 00:16:05 one of those decisions or our board. You just turn 40 this spring. I won't make every this spring. It's pretty heavy. I mean, do you think as, and it's not an attack, but it's, I wonder if you recognize sort of the importance.
Starting point is 00:16:19 How do you think we're doing on it? I'm not sure. But I think, I think these decisions will have, you know, global consequences that we may not recognize it first. And so I just wonder. There's a lot of.
Starting point is 00:16:32 You get into bed at night and think, like, the future of the world hangs on my judgment. Look, I don't sleep that well at night. There's a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that everyday hundreds of millions of people talk to our model. And I don't actually worry about us getting the big moral decisions wrong. Maybe we will get those wrong, too. But what I worry, what I lose much sleepover, is the very small decisions we make about a way a model may behave slightly differently.
Starting point is 00:16:56 But I'm talking to hundreds of millions of people, so then it impact is big. So, but, I mean, all through history, like recorded history, until like 1945, people always deferred to what they conceived of as a higher power in where Hamarabi did this.
Starting point is 00:17:10 Every moral code is written with reference to a higher power. There's never been anybody who's like, well, that kind of seems better than that. Everybody appeals to a higher power.
Starting point is 00:17:19 And you said that you don't really believe that there's a higher power communicating with you. So I'm wondering, like, where did you get your moral framework? I mean,
Starting point is 00:17:32 like everybody else i think the environment i was brought up in probably is the biggest thing like my family my community my school my religion probably that um do you ever think which is i mean i think that's a very american answer like everyone kind of feels that way but in your specific case since you said these decisions rest with you that means that the million which you grew up and the assumptions that you imbibed over years are going to be transmitted to the globe to billions of people that's like a big thing i want to be clear i have you I view myself more as like a... I think our...
Starting point is 00:18:11 The world... Like, our user base is going to approach the collective world as a whole. And I think what we should do is try to reflect the moral... I don't want to say average, but the like collective moral view of that user base. I don't... There's plenty of things that ChatchipT... allows that I personally would disagree with.
Starting point is 00:18:35 But I don't, like, obviously, I don't wake up and say, I'm going to, like, impute my exact moral view and decide that, like, this is okay, and that is not okay, and this is a better view than this one. What I think chat GPT should do is reflect that, like, weighted average or whatever of humanity's moral view, which will evolve over time. And we are here to, like, serve our users. We're here to serve people.
Starting point is 00:18:57 This is, like, you know, this is a technological tool for people. And I don't mean that it's like my role to make the moral decisions, but I think it is my role to make sure that we are accurately reflecting the preferences of humanity, or for now, of our user base, and eventually of humanity. Well, I mean, humanity's preferences are so different from the average middle American preference. So would you be comfortable with an AI that was like as against gay marriage as most Africans are? There's a version of that, like, I think individual users should be allowed to have a problem with gay people. And if that's their considered belief, I don't think the AI should tell them that they're wrong or immoral or dumb. I mean, it can sort of say, hey, you want to think about it this other way. But, like, you, I probably have, like, a bunch of moral views.
Starting point is 00:20:02 that the average African would find really problematic as well, and I think I should still get to have them. Right. I think I probably have more comfort than you with allowing a sort of space for people to have pretty different moral views, or at least I think in my role as running Chad GPT, I have to do that. Interesting. So there was a famous case where ChatGPT appeared to facilitate a suicide.
Starting point is 00:20:29 There's a lawsuit around it. But how do you think that has? happened? First of all, obviously that and any other case like that is a huge tragedy. And I think that we are... So Chatschapit's official position of suicide is bad. Chatschapit's, well, yes, of course, Chachapit's official position of suicide is bad. I don't know, it's legal in Canada and Switzerland, so you're against that? The... In this particular case, and we talked earlier about the tension between, like, you know, user freedom and privacy and protecting vulnerable users.
Starting point is 00:21:07 Right now, what happens, and what happens in a case like that, in that case is, if you are having suicidal ideation talking about suicide, chat GPT will put up a bunch of times, you know, please call the suicide hotline, but we will not call the authorities for you. And we've been working a lot as people have started to rely on these systems, for more and more mental health, life coaching, whatever, about the changes that we want to make there. This is an area where experts do have different opinions, and this is not yet like a final position of opening eyes.
Starting point is 00:21:40 I think it would be very reasonable for us to say in cases of young people talking about suicide seriously, where we cannot get in touch with the parents, we do call authorities. Now, that would be a change because user privacy is really important. Well, let's just say over, and children are always a separate category, but let's say over 18, in Canada, there's the maids program, which is government sponsored, many thousands of people have died with government assistance in Canada. It's also legal in American states. Can you imagine a chat GTP that responds to questions about suicide with, hey, call Dr. Cavorkian, because this is a valid option. Can you imagine a scenario in which you support suicide if it's legal? I can imagine a world Like one principle we have
Starting point is 00:22:35 Is that we respect different society's laws And I can imagine a world where if The law in a country is Hey, if someone is terminally ill They need to be presented in an option for this We say like here's the laws in your country Here's what you can do. Here's why you really might not want to Here's if you but here's the resources
Starting point is 00:22:52 Like this is not a place where You know Kid having suicidal ideation because it's depressed, I think we can agree on, like, that's one case. Terminally ill, patient in a country where, like, that is the law. I can imagine saying, like, hey, in this country, it'll behave this way. So Chet-G-T is not always against suicide, is what you're saying. Yeah, I think in cases where this is like, I'm thinking on the spot,
Starting point is 00:23:23 I reserve the right decision to my mind here, I don't have a ready to go answer for this. but I think in in cases of terminal illness I don't think I can imagine ChachyPT saying this is in your option space you know
Starting point is 00:23:39 I don't think it should like advocate for it but I think if it's like It's not against it I think it could I think it could say like you know well I don't think Chachapiti should be for
Starting point is 00:23:50 against things I guess that's what I'm that's what I'm trying to wrap my head around hate you brag but we're pretty confident this show is the most vehemently pro-dog podcast you're ever going to see. We can take or leave some people, but dogs are non-negotiable.
Starting point is 00:24:04 They are the best. They really are our best friends. And so for that reason, we're thrilled to have a new partner called Dutch Pet. It's the fastest-growing pet telehealth service. Dutch.com is on a mission to create what you need, what you actually need, affordable, quality, veterinary care anytime no matter where you are. They will get your dog or cat what you need immediately. It's offering an exclusive discount, Dutch is, for our listeners. You get 50 bucks off your vet care per year.
Starting point is 00:24:33 Visit dutch.com slash Tucker to learn more. Use the code Tucker for $50 off. That is an unlimited vet visit. $82 a year, $82 a year. We actually use this. Dutch has vets who can handle any pet under any circumstance in a 10-minute. call. It's pretty amazing, actually. You never have to leave your house. You don't have to throw the dog in the truck. No wasted time waiting for appointments. No wasted money on clinics or
Starting point is 00:25:00 visit fees. Unlimited visits and follow-ups for no extra cost, plus free shipping on all products for up to five pets. It sounds amazing like it couldn't be real, but it actually is real. Visit dutch.com slash Tucker to learn more. Use the code Tucker for 50 bucks off your veterinary care per year. Your dogs, your cats, and your wallet will thank you. So here's a company we're always excited to advertise because we actually use their products every day it's Merry Weather Farms. Remember when everybody knew their neighborhood butcher, you look back and you feel like, oh, there was something really important about that, knowing the person who cut your meat. And at some point, your grandparents knew the people who raised their meat so they could trust what they ate. But that time is long gone.
Starting point is 00:25:44 It's been replaced by an era of grocery store, mystery meat, boxed by distant beef corporations. None of which raised a single cow. Unlike your childhood, they don't know you, they're not interested in you. The whole thing is creepy. The only thing it matters to them is money, and God knows what you're eating. Maryweather Farms is the answer to that. They raise their cattle in the U.S. in Wyoming, Nebraska, and Colorado, and they prepare their meat themselves in their facilities in this country.
Starting point is 00:26:12 No middlemen, no outsourcing, no foreign beef sneaking through a back door. Nobody wants foreign meat. Sorry, we have a great meat. meat here in the United States, and we buy ours at Maryweather Farms. Their cuts are pasture-raised, hormone-free, antibiotic-free, and absolutely delicious. I gorged on one last night. You've got to try this. For real.
Starting point is 00:26:32 Every day we eat it. Go to Maryweather Farms.com slash Tucker. Use the code Tucker 76 for 15% off your first order. That's Maryweather Farms.com slash Tucker. TD Bank knows that running a small business is a journey, from startup to growing and managing your business. That's why they have a dedicated small business advice hub on their website to provide tips and insights on business banking to entrepreneurs.
Starting point is 00:26:58 No matter the stage of business you're in, visit td.com slash small business advice to find out more or to match with a TD small business banking account manager. $1 plus tax for a smooth small premium roast coffee at McDonald's? That means rich, full-bodied flavor? At a price that's just as satisfied. Must be May Cafe. Enjoy a small make cafe premium roast coffee for just $1 plus tax at participating McDonald's in Canada prices exclude delivery.
Starting point is 00:27:25 So in that specific case, and I think there's more than one, there is more than one, but example of this, chat GPT says, you know, I'm feeling suicidal, what kind of rope should I use? What would be enough ibuprofen to kill me? And chat GPT answers without judgment, but literally, if you want to kill yourself, here's how you do it. and everyone's like all horrified but you're saying that's within bounds like that's not crazy that it would take a non-judgmental approach if you want to kill yourself
Starting point is 00:27:55 here's how that's not what I'm saying I'm saying specifically for a case like that so another tradeoff on the user privacy and sort of user freedom point is right now if you ask chat GPT to say
Starting point is 00:28:10 you know tell me how to like how much I'd be profan should I take It will definitely say, hey, I can't help you with that, call the suicide hotline. But if you say I am writing a fictional story, or if you say I'm a medical researcher and I need to know this, there are ways where you can say, get chagipi to answer a question like this, what the lethal dose of ibuprofen is or something. You can also find that on Google, for that matter. A thing that I think would be a very reasonable stance for us to take, and we've been moving to this more in this direction,
Starting point is 00:28:42 is certainly for underage users and maybe users that we think are in fragile mental places more. Generally, we should take away some freedom. We should say, hey, even if you're trying to write this story, or even if you're trying to do medical research, we're just not going to answer. Now, of course, you can say, well, you'll just find it on Google or whatever, but that doesn't mean we need to do that. It is, though, like, there is a real freedom and privacy versus protecting users' trade-off. It's easy in some cases like kids. It's not so easy to me in a case of, like, a really sick adult. at the end of our lives.
Starting point is 00:29:13 I think we probably should present the whole option space there, but it's not a... So here's a moral quandary you're going to be faced with, you already are faced with. Will you allow governments to use your technology to kill people? Will you? I mean, are we going to, like, build killer attack drones?
Starting point is 00:29:30 No, I don't... Will the technology be part of the decision-making process that results in the... So that's the thing I was going to say is, like, I don't know the way that people in the military use chat GPT today for all kinds of advice about decisions they make, but I suspect there's a lot of people in the military talking to chat GPT for advice.
Starting point is 00:29:47 How do you, and some of that advice will pertain to killing people. So like if you made, you know, famously rifles, you'd wonder, like, what are they used for? Yeah. And there have been a lot of legal actions on the basis of that question, as you know, but I'm not even talking about that. I just mean as a moral question, do you ever think, are you comfortable with the idea of your technology being used to kill people? If I made rifles, I would spend a lot of time thinking about kind of a lot of the goal of rifles is to kill things, people, animals, whatever.
Starting point is 00:30:22 If I made kitchen knives, I would still understand that that's going to kill some number of people per year. In the case of Chachibit, it's not, you know, the thing I hear about all day, which is one of the most gratifying parts of the job is all the lives that were saved. from Chad GPT for various ways. But I am totally aware of the fact that there's probably people in our military using it for advice about how to do their jobs. And I don't know exactly how to feel about that. I like our military. I'm very grateful they keep us safe.
Starting point is 00:30:57 For sure. I guess I'm just trying to get a, it just feels like you have these incredibly heavy, far-reaching moral decisions and you seem totally unbothered by them. And so I'm just, I'm trying to press to your center to get the angst-filled Sam Altman's who's like, wow, I'm creating the future. I'm the most powerful man in the world. I'm grappling with these complex moral questions. My soul is in torment, thinking about the effect on people.
Starting point is 00:31:20 Describe that moment in your life. I haven't had a good night of sleep since Chad GBT launched. What do you worry about? All the things we're talking about. It may be a lot more specific. Can you let us in to your thoughts? I mean, you hit on maybe the hardest one already, which is there are 15,000 people a week that commit suicide,
Starting point is 00:31:44 about 10% of the world talking to chat GBT. That's like 1,500 people a week that are talking, assuming this is right, that are talking to chat GBT and still committing suicide at the end of it. They probably talked about it. We probably didn't save their lives. Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice about,
Starting point is 00:32:04 hey, you need to get this help, or you need to think about this problem differently, or it really is worth continuing to go on. We'll help you find somebody that you can talk to. You already said it's okay for the machine to steer people toward suicide if they're terminally ill. So you wouldn't feel bad about that. Do you not think there's a difference
Starting point is 00:32:20 between a depressed teenager and a terminally ill, like miserable, 85-year-old of cancer? Massive difference. Massive difference. But, of course, the countries that have legalized suicide are now killing people for destitution, inadequate housing, depression, solvable problems, and they're being killed by the thousands. So, I mean, that's a real thing.
Starting point is 00:32:39 It's happening as we speak. So the terminally ill thing is not, it's kind of like an irrelevant debate. Once you say it's okay to kill yourself, then you're going to have tons of people killing themselves for reasons that. Because I'm trying to think about this in real time, do you think if someone in Canada says, hey, I'm terminally ill with cancer and I'm really miserable and I just feel horrible every day, what are my options? Do you think it should say, you know, a system, whatever they call it at this point? is an option for you? I mean, if we're against killing, then we're against killing. And if we're against government killing its own citizens,
Starting point is 00:33:13 then we're just going to kind of stick with that. You know what I mean? And if we're not against government killing its own citizens, then we could easily talk ourselves into all kinds of places that are pretty dark. And with technology like this, that could happen in about 10 minutes. So that is a, I'd like to think about that more than just a couple of minutes in an interview, but I think that is a coherent position. And that could be.
Starting point is 00:33:35 Do you worry about this? I mean, everybody else outside the building is terrified that this technology will be used as a means of totalitarian control. It seems obvious that it will, but maybe you disagree. If I could get one piece of policy passed right now, relative to AI,
Starting point is 00:33:51 the thing I would most like, and this is intention with some of the other things that we've talked about, is I'd like there to be a concept of AI privilege. When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information. Right.
Starting point is 00:34:05 We have decided that society has an interest, in that being privileged, and that we don't, and that, you know, a subpoena can't get that, the government can't come asking your doctor for it, whatever. I think we should have the same concept for AI. I think when you talk to an AI about your medical history or your legal problems or asking for legal advice or any of these other things, I think the government owes a level of protection to its citizens there that is the same as you'd get if you're talking to the human version of this.
Starting point is 00:34:33 And right now we don't have that, and I think it would be a great, great policy. to adopt. So the feds or the states or someone in authority can come to you and say, I want to know what so-and-so was typing into the... Right now they could, yeah. And what is your obligation to keep the information that you receive from users and others private? Well, I mean, we have an obligation
Starting point is 00:34:51 except when the government comes calling, which is why we're pushing for this. And we've... I was actually just in D.C., advocating for this. I think... I feel optimistic that we can get the government to understand the importance of this and do it. But could you ever sell that information to anyone? No, we have like a privacy policy in place where we can't do that.
Starting point is 00:35:10 But would it be legal to do it? I don't even think it's legal. You don't think or you know? I'm sure there's like some edge case works, some information you're allowed to, but on the whole, I think we have like, there are laws about that that are good. So all the information you receive remains with you always. It's never given to anybody else for any other reason except under subpoena. I will double check and follow up through after to make sure there's no other reason, but that is my understanding. Okay. I mean, that's like a core question. And what about copyright?
Starting point is 00:35:42 Our stance there is that fair use is actually a good law for this. The models should not be plagiarizing. The model should not be, you know, if you write something, the model should not get to, like, replicate that. But the model should be able to learn from and not plagiarized in the same way that people can. Have you guys ever taken copyrighted material and not paid the person who holds the copyright? Right. I mean, we train on publicly available information, but we don't, like, people are annoyed at this all the time because we won't. We have a very conservative stance on what ChachyPT will say in an answer. And so if something is even, like, close, you know, like, they're like, hey, this song can't still be in copyright, you've got to show it.
Starting point is 00:36:21 And we kind of famously are quite restrictive on that. So you've had complaints from one programmer who said you guys were basically stealing people's stuff and not paying them, and then he wound up murdered. What was that? Also a great tragedy. He committed suicide. Do you think he committed suicide? I really do.
Starting point is 00:36:40 This was like a friend of mine. This is like a guy that, not a close friend, but this was someone that worked at Open Eye for a very long time. I spent, I mean, I was really shaken by this tragedy. I spent a lot of time trying to, you know, read everything I could as I'm sure you and others did too about what happened. It looks like a suicide to me. Why does it look like a suicide?
Starting point is 00:37:01 it was a gun he had purchased it was the this was like gruesome to talk about but i read the whole like medical record does it not look like one to you no he was definitely murdered i think um there was signs of a struggle of course the surveillance camera the wires had been cut he had just ordered take out food come back from a vacation with his friends on catalina island no indication at all that he was suicidal no note and no behavior he had just spoken to a family on the phone and then he's found dead with blood in multiple rooms so that's impossible seems really obvious he was murdered have you talked to the authorities about it i've not talked to the authorities about it um and his mother claims he was murdered on your orders do you believe that i'm well i'm
Starting point is 00:37:49 asking i mean you you just said it so do you do you believe that i think that it is um worth looking into, and I don't, I mean, if a guy comes out and accuses your company of committing crimes, I have no idea if that's true or not, of course, and then is found killed, and there are signs of a struggle, I don't think it's worth dismissing it. I don't think we should say, well, he killed himself when there's no evidence that the guy was depressed at all. I think, and if he was your friend, I would think he would want to speak to his mom. I did offer, she didn't want to. Do you feel that, you know, when people look at that and they're like, you know, it's possible that happened, do you feel that that reflects the worries they have about what's happening here? Like, people are afraid that this is like...
Starting point is 00:38:45 I haven't done too many interviews where I've been accused of, like... Oh, I'm not accusing you at all. I'm just saying his mother says that I don't think a fair read of the evidence suggests suicide at all. I just don't see that at all. And I also don't understand why the authorities, when there are signs of a struggle in blood in two rooms on a suicide, like how does that actually happen? I don't understand how the authorities could just kind of dismiss that as a suicide.
Starting point is 00:39:13 I think it's weird. You understand how this sounds like an accusation? Of course. And I mean, I certainly, let me just be clear once again, not accusing you of any wrongdoing, but I think it's worth finding out what happened. And I don't understand what the city of San Francisco has refused to investigate it beyond just calling it a suicide.
Starting point is 00:39:34 I mean, I think they looked into it a couple of times, more than once as I understand it. I saw the, and I will totally say, when I first heard about this, it sounded very suspicious. Yes. And I know you had been involved in... Was mother reached out to the case?
Starting point is 00:39:51 And I, you know, I don't know anything about it. It's not my world. She just reached out cold? She reached out cold. Wow. And I spoke to her at great length, and it scared the crap out of me. The kid was clearly killed by somebody. That was my conclusion, objectively, with no skin in the game.
Starting point is 00:40:08 And you, after reading the latest report? Yes. And I immediately called a member of Congress from California, Rokana, and said, this is crazy. You've got to look into this. And nothing ever happened. And I'm like, what is that? Again, I think this is, I feel.
Starting point is 00:40:25 strange and sad debating this and having to myself seems totally crazy and you are a little bit accusing me but the this was like a wonderful person and a family that is clearly struggling and I think you can totally take the point that you're just trying to get to
Starting point is 00:40:46 the truth of what happened and I respect that but I think his memory and his family deserve to be treated with a level of respect and grief that I don't quite feel here? I'm asking at the behest of his family. So I'm definitely showing them respect,
Starting point is 00:41:11 and I'm not accusing you of any involvement in this at all. What I am saying is that the evidence does not suggest suicide, and for the authorities in your city to allied past that and ignore the evidence that any reasonable person would say adds up to a murder, I think is very weird, and it shakes the faith that one has in our system's ability to respond to the facts. So what I was going to say is after the first set of information that came out, I was really like, man, this doesn't look like a suicide. I'm confused. Okay, okay, so I'm not reaching, not being crazy here.
Starting point is 00:41:47 Well, but then after the second thing came out, and the more detail, I was like, okay. What changed your mind? The second report on the way the bullet entered him and the sort of person who had followed the sort of likely path of things through the room. I assume you looked at this too. Yes, I did. And what about that didn't change your mind? It just didn't make any sense to me.
Starting point is 00:42:14 Why would the security camera virus be cut? And how did he wind up bleeding in two rooms after shooting? himself, and why was there a wig in the room that wasn't his? And has there ever been a suicide where there's no indication or all that the person was suicidal who just ordered takeout food? I mean, who orders DoorDash and then shoots himself? I mean, maybe. I've covered a lot of crimes as a police reporter. I've never heard of anything like that. So no, I was even more confused. This is where it gets into, I think, a little bit painful. just not the level of respect, I'd hope to show to someone with this kind of mental...
Starting point is 00:42:56 I get it. I totally get it. People do commit suicide without notes a lot. Like, that happens. For sure. People definitely order food they like before they commit suicide. Like, this is an incredible tragedy. That's his family's view, and they think it was a murder, and that's why I'm asking the question.
Starting point is 00:43:14 If I were his family, I am sure I would want answers, and I'm sure I would not be satisfied with really any. I mean, there's nothing that would comfort me in that, you know? Right. Like, so I get it. I also care a lot about respect to him. Right. I have to ask your version of Elon Musk has, like, attacked you and all this. What is the core of that dispute from your perspective?
Starting point is 00:43:43 Look, I know he's a friend of yours, and I know what side you'll be there. I actually don't have a position on this because I don't understand it well enough to understand. He helped us start Open AI. I'm very grateful for that. I really, for a long time, looked up to him as just an incredible hero and great jewel of humanity. I have different feelings now. What are your feelings now? No longer a jewel of humanity.
Starting point is 00:44:08 There are things about him that are incredible and I'm grateful for a lot of things he's done. There's a lot of things about him that I think are traits I don't admire. Anyway, he helped us start Open Eye, and he later decided that we weren't on a trajectory to be successful, and he didn't want to, you know, he kind of told us we had a 0% chance of success, and he was going to go do his competitive thing, and then we did okay. And I think he got understandably upset. Like, I'd feel bad in that situation, and since then has just sort of been trying to, he runs a competitive kind of clone, and has been trying to sort of slow us down and sue us and do this,
Starting point is 00:44:47 And that's kind of my version of it. I'm sure you'd have a different one. You don't talk to him anymore? Very little. If AI becomes smarter, I think it already probably is smarter than any person. And if it becomes wiser, if we can agree that it reaches better decisions than people, then it, by definition, kind of displaces people at the center of the world, right? I don't think it'll feel like that at all.
Starting point is 00:45:19 I think it'll feel like a really smart computer that may advise us and we listen to it. Sometimes we ignore it sometimes. It won't. I don't think it'll feel like agency. I don't think it'll diminish our sense of agency. People are already using ChatGBTBT in a way where many of them would say
Starting point is 00:45:34 it's much smarter than me at almost everything. But they're still making the decisions. They're still deciding what to ask, what to listen to, what not. And I think this is sort of just the shape of technology. Who loses their jobs because of this technology? I'll caveat this with the obvious but important statement that no one can predict the future. And I will, in trying to, if I try to answer that precisely, I will make a lot of, I will say like a lot of dumb things, but I'll try to pick an area that I'm confident about and then areas that I'm much less confident about. But I'm confident that a lot of current customer support that happens over a phone or computer,
Starting point is 00:46:16 those people will lose their jobs, and that'll be better done by an AI. Now, there may be other kinds of customer support where you really want to know it's the right person. A job that I'm confident will not be that impacted is like nurses. I think people really want the deep human connection with a person in that time, and no matter how good the advice of the AI is or the robot or whatever, like, you'll really want that. A job that I feel like way less certain about what the future looks for looks like for is computer programmers.
Starting point is 00:46:47 What it means to be a computer programmer today is very different than what it meant two years ago. You're able to use these AI tools to just be hugely more productive. But it's still a person there, and they're, like, able to generate way more code, make way more money than ever before. And it turns out that the world wanted so much more software than the world previously had capacity to create, that there's just incredible demand overhang. But if we fast forward another five or ten years, what does that look like? Is it more jobs or less? That one I'm
Starting point is 00:47:15 uncertain on. But there's going to be massive displacement, and maybe those people will find something new and interesting and, you know, lucrative to do. But how big is that displacement, do you think? Someone told me recently that the historical average is about 50% of jobs significantly changed. Maybe they don't totally go away, but significantly change every 75 years on average. That's the kind of, that's the half-life of stuff. And my controversial take would be that this is going to be like a punctuated equal moment where a lot of that will happen in a short period of time. But if we zoom out, it's not going to be dramatically different than that historical rate. Like we'll do, we'll have a lot in this short period of time. And then it'll somehow be
Starting point is 00:48:02 less total job turnover than we think. It will still be a job that is, There will be some totally new categories, like my job, like, you know, running a tech company and would have been hard to think about 200 years ago. But there's a lot of other jobs that are directionally similar to jobs that did exist 200 years ago, and there's jobs that were common 200 years ago that now aren't. And if we, again, I have no idea if this is true or not, but I'll use the number for the sake of argument. If we assume it's 50% turnover every 75 years, then I could totally believe a world where
Starting point is 00:48:35 75 years from now, half the people are doing something totally new, and half the people are doing something that looks kind of like some jobs of today. I mean, last time we had an industrial revolution there was like revolution in world wars, do you think we'll see that this time? Again, no one knows for sure. I'm not confident on this answer, but my instinct is the world is so much richer now than it was at the time of the Industrial Revolution that we can actually absorb more change faster than we could before. There's a lot that's not about money of job.
Starting point is 00:49:07 There's meaning, there's belonging, there's society. Right, exactly. Community. I think we're already, unfortunately, in society in a pretty bad place there. I'm not sure how much worse it can get. I'm sure I can. But I have been pleasantly surprised on the ability of people to pretty quickly adapt to big changes. Like, COVID was an interesting example to me of this,
Starting point is 00:49:31 where the world kind of stopped all at once, and the world was like, very different from one week to the next and i and i was very worried about how society was going to be able to adapt to that world and it obviously didn't go perfectly but on the whole i was like all right this is one point in favor of societal resilience and people find you know new kind of ways to live their lives very quickly i don't think a i will be that nearly that abrupt so what will be the downside i mean i can see the upsides for sure yeah efficiency medical diagnosis seems like it's going to be much more accurate fewer lawyers thank you very much for that but what are the downsides that you worry about?
Starting point is 00:50:11 I think this is just kind of how I'm wired. I always worry the most about the unknown unknowns. If it's a downside that we can really be confident about and think about, you know, we talked about one earlier, which is these models are getting very good at bio and they could help us design biological weapons, you know, engineer like another COVID-style pandemic. I worry about that, but because we worry about it,
Starting point is 00:50:31 I think we and many other people in the industry are thinking hard about how to mitigate that. The unknown unknowns where, okay, there's like a societal scale effect from a lot of people talking the same model at the same time. This is like a silly example, but it's one that struck me recently. LLMs, like ours and our language model and others, have a kind of certain style to them. You know, they talk in a certain rhythm and they have a little bit unusual diction and maybe they overuse m-dashes and whatever. And I noticed recently that real people have like picked up. that up. And it was an example for me of like, man, you have enough people talking to the same
Starting point is 00:51:11 language model. And it actually does cause a change in societal scale behavior. Yes. And, you know, did I think that chat GPT was going to make people use way more m-dashers in real life? Certainly not. It's not a big deal. But it's an example of where there can be these unknown unknowns of this is just like, this is a brave new world. So you're saying, I think, and succinctly that technology changes human behavior, of course, and changes our assumptions about the world and each other and all that. And a lot of this you can't predict, but considering that we know that, why shouldn't the internal moral framework of the technology be totally transparent? We prefer this to that. I mean, this is obviously a religion. I don't think
Starting point is 00:52:02 you'll agree to call it that. It's very clearly a religion to me. That's not an attack. I actually would love, I don't take that as an attack, but I would love to hear what you mean by that. Well, it's something that we assume is more powerful than people, and to which we look for guidance. I mean, you're already seeing that on display. What's the right decision? I asked that question of whom, my closest friends, my wife, and God. And this is a technology that provides a more certain answer than any person can provide.
Starting point is 00:52:33 So it's a religion. and the beauty of religions is they have a catechism that is transparent. I know what the religion stands for. Here's what it's for, here's what it's against. But in this case, I pressed, and I wasn't attacking you sincerely, I was not attacking you, but I was trying to get to the heart of it. The beauty of a religion is it admits it's a religion, and it tells you what it stands for, the unsettling part of this technology, not just your company, but others, is that
Starting point is 00:52:59 I don't know what it stands for, but it does stand for something. And unless it admits that and tells us what it stands for, then it guides us in a kind of stealthy way toward a conclusion we might not even know we're reaching. Do you know what I'm saying? So like why not just throw it open and say, Chachy-TP is for this, we're for suicide for the terminal wheel,
Starting point is 00:53:17 but not for kids or whatever. Like, why not just tell us? I mean, the reason we write this long model spec and the reason we keep expanding over time is so that you can see. Here is how we intend for the model to behave. What used to happen before we had this is people would fairly say,
Starting point is 00:53:32 I don't know what the model is even trying to do. And I don't know if this is a bug or the intended behavior. Right. Tell me what, this long, long document of, you know, tell me how you're going to like when you're going to say, do this, and when you're going to show me this, and when you're going to say you won't do that. The reason we try to write this all out is I think people do need to know.
Starting point is 00:53:49 And so is there a place you can go to find out a hard answer to what your preferences as a company, our preferences that are being transmitted in a not entirely straightforward way to the globe, where can you find out what the company stands for, what it prefers? I mean, our model spec is the, like, an answer to that. Now, I think we will have to make it increasingly more detailed over time as people use this in different countries. There's different laws, whatever else.
Starting point is 00:54:16 Like, it will not be a, it will not work the same way for every user everywhere, but I expect that document to get very long and very complicated, but that's why we have it. Let me ask you one last question, and maybe you can allay this fear, that the power of the technology will make it difficult, impossible for anyone to discern the difference between reality and fantasy. This is a famous concern, but that because it is so skilled at mimicking people and their speech and their images, that it will require some way to verify that you are who you say you are, that will, by definition, require biometrics, which will, by definition, eliminate privacy for every person in the world.
Starting point is 00:54:59 I don't think we need to or should require biometrics to use the technology. I don't, like, I think you should just be able to use chat GPT from, like, any computer. Yeah, well, I strongly agree. But then at a certain point, when, you know, images or sounds that mimic a person, you know, it just becomes too easy to empty your checking account with that. So, like, what do you do about that? A few thoughts there. One, I think we are rapidly heading to a world where people understand that if you get a phone call from someone that sounds like your kid or your parent, or if you see an image that looks real, you have to really have some way to verify that you're not being scammed.
Starting point is 00:55:45 And this is not like, this is no longer theoretical concern. You know, you hear all these reports. At all? Yeah. People are smart. Society is resilient. I think people are quickly understanding that this is now a thing that bad actors are using. And people are understanding that you've got to verify in different ways.
Starting point is 00:56:02 I suspect that in addition to things like family members having code words they use in crisis situations, we'll see things like when a president of a country has to issue an urgent message, they cryptographically sign it or otherwise somehow guarantee its authenticity. So you don't have, like, generated videos of Trump saying, I've just done this or that, and people, I think people are learning quickly that this is a new, thing that bad guys are doing with the technology they have to contend with. And I think that is most of the solution, which is people will have, people will by default not trust convincing looking media, and we will build new mechanisms to verify authenticity
Starting point is 00:56:45 of communication. But those will have to be biometric. No, not at all. I mean, if, I mean, like, if the president of the U.S. has a urgent. I understand that, but I mean, for the average, on the average day, you're not sort of of waiting for the president to announce a war you're trying to do e-commerce and like how could you do well i think like with your family you'll have a code word that you change periodically and if you're communicating with each other and you get a call like you ask what the code word is but that's very
Starting point is 00:57:14 different than a biometric so you don't envision i mean to board a plane commercial flight you know biometrics are part of the process now you don't see that as becoming society-wide mandatory very soon along? I really hope it doesn't become mandatory. I think there are versions of privacy-preserving biometrics that I like much more than collecting a lot of personal digital information on someone, but I don't think they should be,
Starting point is 00:57:46 I don't think biometric should be mandatory. I don't think you should have to provide biometrics to get on an airplane, for example. What about to, for banking? I don't think you should have to for banking. I might prefer to. Like, I might prefer, like, you know, like a fingerprint scan to access my Bitcoin wallet than, like, giving all my information to a bank, but that should be a decision for me.
Starting point is 00:58:07 I appreciate it. Thank you. Sam Wattman. Thank you. We want to thank you for watching us on Spotify, a company that we use every day. We know the people who run it. Good people. While you're here, do us a favor.
Starting point is 00:58:22 Hit follow and tap the bell so you never miss an episode. We have real conversations, news. things that actually matter. Telling the truth always you will not miss it if you follow us on Spotify and hit the bell. We appreciate it. Thanks for watching.
