On with Kara Swisher - OpenAI CEO Sam Altman on GPT-4 & the A.I. Arms Race

Episode Date: March 23, 2023

We’re on the cusp of an artificial intelligence arms race that has venture capitalists drooling, regulators petrified and competitors from Google to Microsoft to Elon Musk racing to get their products out the door. Kara talks to Sam Altman, CEO of OpenAI and the man who’s led the launches of ChatGPT and GPT-4. They discuss the hallucinations of ChatGPT+, why OpenAI moved from an open-source nonprofit to a closed-source “capped profit” company and why Altman doesn’t believe artificial intelligence developers should enjoy Section 230 immunity. Afterwards, Kara and Nayeema break down the interview and the promises and perils of an unknowable A.I.-powered future. Questions? Comments? Email us at on@voxmedia.com or find us on Twitter @karaswisher and @nayeema. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Transcript
Starting point is 00:00:55 Hi, everyone, from New York Magazine and the Vox Media Podcast Network. This is nonprofit OpenAI, which is now very much for-profit and 100% scarier. Just kidding. Actually, I'm not kidding. This is On with Kara Swisher, and I'm Kara Swisher.
Starting point is 00:01:38 And I'm Naima Raza. It's amazing how an open-source nonprofit has moved to being a closed-source private company with the big deal with Microsoft. Are you shocked? No, not even slightly. It's a huge opportunity. I'm in San Francisco now, and it's really jumping with AI. Crypto didn't quite work out. Anyway, those people moved to Miami. And so it's very AI-oriented right now. Everybody's thinking about a startup in AI. Are you more bullish on AI than Web3? Well, that's kind of a low bar. So yeah, I've always been bullish on AI. I've talked about it a lot over the years. And this is just a version of it as it becomes more and more sophisticated and useful to people. So I've always thought it was important. And I think most of the key technologists in Silicon Valley have always thought it was important. many things that are not AI are being billed as AI tech companies now, and they're really not AI. They might have like a large learning model, but they're not quite AI. But last episode,
Starting point is 00:02:29 we had Reid Hoffman on talking about what was possible with AI. And now we have one of Reid's many mentees, Sam Altman. Sam is the CEO of OpenAI, and he leads the team that has given us ChatGPT and GPT-4. He actually burst onto the scene as a young Stanford dropout, I think in 2005, with the startup Looped, right? Is that when you met him? Yes, when he had Looped. I visited him in his small. He was a little startup, and it didn't do very well. It was a location-based kind of thing.
Starting point is 00:02:57 I don't even remember. Social network, right? Like GeoSocial network. You know, it was not Facebook, let's just say. So he was one of these many, many startup people that sort of were all over the valley. Very smart, but the company didn't quite work out. Yeah, it kind of went bust, I think, not many years later. But he became super important in the valley, especially in my generation.
Starting point is 00:03:15 He's about my age because of Y Combinator. He led the startup accelerator that has incubated and launched Stripe, Airbnb, Coinbase. Yeah, he got there later. It was working before he got there, but he really led it to new heights, I think, in a lot of ways. It was very good. He came in in 2014? Yes, I don't remember.
Starting point is 00:03:31 I remember when he took over, but he really invigorated it and was very involved in the startup scene. It was a great role for him. He was a great cheerleader, and he's good at eyeing good startups. Do you see him as kind of one of the Elon Musk, Peter Thiel, Reid Hoffman's of his generation? Kind of, yeah. There's a lot of really smart people, but yeah, he's definitely special. And he really did, you know, he had a bigger mentality, more like Reid than the others, although they had it initially, not Peter Thiel, but he was thinking of big things with the
Starting point is 00:04:02 startups at AI. And I really like him. I've gotten to know him pretty well over the years. And so I've always enjoyed talking to him. He's very thoughtful. He's got a lot of interesting takes on things. And this is a really big deal now that he's sort of landed on taking open AI to these heights. Yeah, he has. Like you, he once entertained the notion of running for office in California. He thought about running for governor, something I think you've talked to him about. Yeah, we've talked about it. But he went on to revolutionize AI. So you think that's better or worse for humanity? I don't know. We'll see. You know, California is probably easier to fix than what we're going to do about AI once it gets fully deployed. Although, you know, the whole issue is there's lots of great things and there's lots of bad things. And so we want to focus on
Starting point is 00:04:40 both because it's, like I say, it's like when the internet started, we didn't know what it was going to be. I think a lot of people are being very creative around what this could be and what problems it could solve. And at the same time, the problems it could create. Do you think that the fear is overblown like this? Our jobs are at risk. AI is going to, you know. On those stories, yes. Yes. It's like saying, what is, you know, the car done for us or lights or something like that. You know, things will change, as they always do. And so I've always thought most of the fears are overblown. But as I say in the book I'm working on right now, which is why I'm in San Francisco,
Starting point is 00:05:14 is everything that can be digitized will be digitized. That's just inevitable. And that's where it's going. So this will soon be two bots talking to each other? No, no. But search is so antiquated when you think about it, typing words into a box. It's really Neanderthal in many ways, and this is an upright homo sapien. Well, it's been interesting because critics have kind of swarmed about ChatGPT earlier on, and Sam was coming back on Twitter saying, just wait for the next iteration, which we now have in GPT-4.
Starting point is 00:05:38 We couldn't book the interview with him until GPT-4 was out. But the model still has many issues, and he himself has noted this. He tweeted that it's still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it. This was about GPT-4. Yeah, I would agree. But that's a very interesting thing because the fact that it's more impressive on first blush than it is after you use it is part of the problem because I've been using my GPT plus and it pulls up all kinds of interesting like write me a research paper and then it will it will look really good and it will have a bunch of false information on it so this can compound
Starting point is 00:06:15 the misinformation problem when something looks slick well but isn't informed right well data in data out crappy crap in crap out I mean it's just the the same. That's a very simplistic way of saying it. But I think, you know, it's like the early internet really sucked, too. And now it kind of doesn't and sort of does. And there's great things about it. But if you looked at early Yahoo or Google or Google was much later, but early Yahoo and others, it was a lot of bubble gum and bailing wire. All right. Well, let's see what Sam Altman has to say and if he feels confident in the choice of having done open AI versus running for governor of California.
Starting point is 00:06:47 We'll take a quick break and we'll be back with the interview. Fox Creative. This is advertiser content from Zelle. When you picture an online scammer, what do you see? For the longest time, we have these images of somebody sitting crouched over their computer with a hoodie on, just kind of typing away in the middle of the night. And honestly, that's not what it is anymore. That's Ian Mitchell, a banker turned fraud fighter.
Starting point is 00:07:20 These days, online scams look more like crime syndicates than individual con artists. And they're making bank. Last year, scammers made off with more than $10 billion. It's mind-blowing to see the kind of infrastructure that's been built to facilitate scamming at scale. There are hundreds, if not thousands, of scam centers all around the world. These are very savvy business people. These are organized criminal rings. And so once we understand the magnitude of this problem,
Starting point is 00:07:50 we can protect people better. One challenge that fraud fighters like Ian face is that scam victims sometimes feel too ashamed to discuss what happened to them. But Ian says one of our best defenses is simple. We need to talk to each other. We need to have those awkward conversations around what do you do if you have text messages you don't recognize? What do you do if you start getting asked to send information that's more sensitive? Even my own father fell victim to a, thank goodness, a smaller dollar scam, but he fell
Starting point is 00:08:21 victim and we have these conversations all the time. So we are all at risk and we all need to work together to protect each other. Learn more about how to protect yourself at Vox.com slash Zelle. And when using digital payment platforms, remember to only send money to people you know and trust. Support for this show comes from Indeed. If you need to hire, you may need Indeed. Indeed is a matching and hiring platform with over 350 million global monthly visitors, according to Indeed data, and a matching engine that helps you find quality candidates fast. Listeners of this show can get a $75 sponsored job credit to get your jobs more visibility at Indeed.com slash podcast. Just go to Indeed.com slash podcast right now and say you heard about Indeed on this podcast.
Starting point is 00:09:19 It is on. Sam, it's great to be in San Francisco, rainy San Francisco, to talk to you in person. We need the rain. It's good. I know. This atmospheric river is not kidding. A lot of them. I got soaked on the way here. I miss San Francisco. I'm here for a couple weeks. I'm trying to convince my wife. We're having a moment here. I agree. It's time to come back. I love San Francisco. I've never really left in my heart.
Starting point is 00:09:44 You started Looped. That's where I met you. Explain what it was. It was a location-based social app for mobile phones. Right. So what happened? The market wasn't there, I'd say, is the number one thing. Yeah. Because? Well, I think you can't force a market. You can have an idea about what people are going to like. As a startup, part of your job is to be ahead of it. And sometimes you're right about that and sometimes you're not.
Starting point is 00:10:09 You know, sometimes you make loops. Sometimes you make open AI. Yeah, right, right. Exactly. But you started in 2015 after being at Y Combinator. And late last year, you launched ChatGPT. Talk about that transition. You had been, you reinvigorated Y Combinator in a lot of ways.
Starting point is 00:10:25 I was handed such an easy task with Y Combinator. I mean, I don't know if I reinvigorated it. It was sort of a super great thing by the time I took over. No, what I mean is I think it got more prominence. You changed things around. I don't scaled it more and we sort of took on longer term, more ambitious projects. OpenAI actually sort of got, that was like something I helped start while at YC. And we did, we funded other companies, some of which I'm very closely involved with, like Helion, the nuclear fusion company. They were going to take a long time. So I definitely like had a thing that I was passionate about and we did more of it. But I kind of just tried to like keep PG and Jessica's vision going there. This is Paul Graham and Jessica.
Starting point is 00:11:10 You had shifted, though, to open AI. Why was that? When you're in this position, which is a high-profile position in Silicon Valley, sort of king of startups, essentially, why go off? Is it you wanted to be an entrepreneur again? No, I didn't. You had started as a nonprofit. I didn't. I am not a natural fit for a CEO. Like an investor
Starting point is 00:11:28 really, I think suits me very well. I got convinced that AGI was going to happen and be the most important thing I could ever work on. I think it is going to like transform our society in many ways. And, you know, I won't pretend that as soon as we started opening, I was sure it was going to work, And, you know, I won't pretend that as soon as we started opening, I was sure it was going to work. But it became clear over the intervening years and certainly by 2018, 2019, that we had a real chance here. What was it that made you think that? A number of things. It's hard to point to just a single one.
Starting point is 00:12:04 But by the time we made GPT-2, which was still weak in a lot of ways, but you could look at the scaling laws and see what was going to happen. I was like, hmm, this can go very, very far. And I got super excited about it. I've never stopped being super excited about it. Was there something you saw that it just scaled or what was the... Yeah, it was the like looking at the data of how predictably better we could make the system with more compute, with more data. And there'd already been a lot of stuff going on at Google with DeepMind. They had bought that earlier, right? Or around that. Yeah, there had been a bunch of stuff, but somehow like it wasn't quite the trajectory that has turned out to be the one that really works.
Starting point is 00:12:36 But in 2015, you wrote that superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. Explain. I still think so. Okay. All right. We're going to get into that. Why did you write that then?
Starting point is 00:12:48 And yet you also called it the greatest technology ever. I still believe both of those things. I think at this point, more of the world would agree on that. At the time, it was considered a very extremely crazy position. So explain, roll it out that you wrote it was probably the greatest threat to continued existing humanity and also one of the greatest technologies that could improve humanity. Roll those two things out. Well, I think we're seeing finally little previews of this with chat GPT and especially when I put GPT-4 out.
Starting point is 00:13:15 And people can see this vision where, just to pick one example out of the thousands we could talk about, everyone in the world can have an amazing AI tutor on their phone with them all the time for anything they want to learn. That's really, we need that. I mean, that's wonderful. That'll make the world much better. The creative enhancement that people are able to get from using these tools to do whatever their creative work is, that's fantastic. The economic empowerment, all of these things. And again, we're seeing this only in the most limited primitive larval way. But at some point, it's like, well, now we can use these things to cure disease. So what is the threat? Because when I try to explain it to regular people who don't quite understand.
Starting point is 00:13:53 I'm not a regular person? No, you're not. You're not a regular person. I'm so offended. I'm not a regular person. But when the Internet started, nobody knew what it was going to do. When you thought superhuman machine intelligence was probably the greatest threat, what did you mean by that? I think there's levels of threats. So today we can look at these systems and say, all right, no imagination required. We can see how this can contribute to computer security exploits or disinformation or other things that can destabilize society.
Starting point is 00:14:21 Certainly there's going to be economic transition. And those are not in the future. those are things we can look at now in the medium term i think we can imagine these systems get much much more powerful now what happens if a really bad actor gets to use them and tries to like figure out how much havoc they can wreck on the world or harm they can inflict. And then we can go further to all of the sort of traditional sci-fi, what happens with the kind of runaway AGI scenarios or anything like that. Now, the reason we're doing this work is because we want to minimize those downsides while still letting society get the big upsides. And we think it's very possible to do that, but it requires, in our
Starting point is 00:15:05 belief, this continual deployment in the world where you let people gradually get used to this technology, where you give institutions, regulators, policymakers time to react to it, where you let people feel it, find the exploits, find the creative energy of the world will come up with use cases we and all the red teamers we could hire would never imagine. And so we want to see all of the good and the bad and figure out how to continually minimize the bad and improve the benefits. And you can't do that in the lab. And this idea that we have that we have an obligation and society will be better off
Starting point is 00:15:43 for us to build in public, even if it means making some mistakes along the way. I think that's really important. When people critique chat GPT, you essentially said, wait for GPT-4. Now that it's out, has it met expectations? A lot of people seem really happy with it. There's plenty of things that's still bad. Yeah, I'm proud of it. Again, very long way to go, but as a step forward, I'm proud of it. So you tweeted that at first glance that GPT-4 seems, quote, more impressive than it actually is. Yeah, I'm proud of it. Again, very long way to go, but as a step forward, I'm proud of it. So you tweeted that at first glance that GPT-4 seems, quote, more impressive than it actually is.
Starting point is 00:16:10 Why is that? Well, I think that's been an issue with every version of these systems, not particularly GPT-4. You find these like flashes of brilliance before you find the problems. And so a thing that someone used to say about GPT-3 that has really stuck with me is it is the world's greatest demo creator. Because you can tolerate a lot of mistakes there. But if you need a lot of reliability for a production system, it wasn't as good at that. Now, GPT-4 makes less mistakes. It's more reliable. It's more robust.
Starting point is 00:16:37 But still a long way to go. One of the issues is hallucinations. They're called hallucinations, which is kind of a creepy word, I have to say. What do you think we should call it instead? Mistakes. Mistakes. Mistakes. Or something like hallucinations feels like it's sentient. It's interesting. Hallucinations, that word doesn't trigger for me as sentient, but I really try to make sure we're picking words that are in the tools camp, not the creatures camp, because I think it's tempting to anthropomorphize this in a very bad way.
Starting point is 00:17:01 That's correct. And as you know, there were a series of reporters wanting to date GPT-3. But anyway, sometimes a bot just makes things up out of thin air and that's hallucinations happen. Now it'll cite research papers or news articles that don't exist. You said GPT-4 does this less than GPT-3. We should all give them actual names, but it still happens. No, that would be anthropomorphic. I think it's good that it's letters plus a number. Not like Barbara. Anyway, but it still happens. Why is that? So these systems are trained to do something, which is predict the next word in a sequence. Right. And so it's trying to just complete a pattern.
Starting point is 00:17:35 And given its training set, this is the most likely completion. That said, the decrease from 3 to 3.5 to 4, I think, is very promising. We track this internally. And every week, we're able to get the number lower and lower and lower. I think it'll require combinations of model scale, new ideas, a lot of user feedback. Model scale is more data. Not necessarily more data, but more compute thrown at the problem. Human feedback, people flagging the errors for us, developing new techniques so the model can tell when it's about to go off the rails.
Starting point is 00:18:04 Real people just saying this is a mistake. Yeah. flagging the errors for us, developing new techniques so the model can tell when it's about to kind of go off the rails. Real people just saying this is a mistake. Yeah. One of the issues is that it obviously compounds a very serious misinformation problem. Yeah. So we pay experts to flag, to go through and label the data for us. Not just bounties, but we employ people. We have contractors.
Starting point is 00:18:20 We work with external firms. We say we need experts in this area to help us go through and improve things. You don't just want to rely totally on random users doing whatever, trying to troll you or anything like that. So humans, more compute, what else? To reduce the... Yeah. I think that there is going to be a big new algorithmic idea that a different way that we train or use or tweak these models, um, different architecture perhaps. So I think we'll find that at some point.
Starting point is 00:18:50 Meaning what? For the non-techie, the different architecture. Oh, it will, it could be a lot of things, but you could say like a different algorithm, but just some different idea of the way that we create or use these models. Mm-hmm. these models that encourages during training or inference time when you're when you're using it that encourages the the models to really ground themselves in truth be able to cite sources microsoft has done some good work there we're working on some things so talk about the next steps how does this move forward i think we're sort of on this very long-term exponential. And I don't mean that just for AI, although AI too. I mean that is like cumulative human technological progress. And it's very hard to calibrate on that. And we keep adjusting our expectations. I think if we told you five years ago we'd have GPT-4 today, you'd maybe be impressed.
Starting point is 00:19:46 Mm-hmm. But if we told you four months ago after you used ChatGPT we'd have GPT-4 today, you're probably not that impressed. And yet it's the same continued exponential. So maybe where we get to a year from now, you're like, eh, you know, it's better, but sort of the new iPhone's always a little better too. Right. But if you look at where we'll be in 10 years, then I think you'd be pretty impressed. Right, right. Actually, the old iPhones were not as impressive as the new ones. For sure, but it's been such a gradual process.
Starting point is 00:20:14 That's correct. That unless you hold that original one and this one back to back. Right, right. I just found mine the other day, actually. Interestingly enough, that's a very good comparison. You're getting criticism for being secretive, and you said competition and safety require that you do that. Critics say that's a very good comparison uh you're getting criticism for being secretive and you said competition and safety require that you do that um critics say that's a cop-out it's just about competition what's your response i mean it's clearly not that that we we make no secret of like
Starting point is 00:20:37 we would like to be a successful effort like that and i think that's fine and good and we try to be clear but also we have made many decisions over the years in the name of safety that have been widely ridiculed at the time that are later, you know, people come to appreciate. Even in the early versions of GPT, when we talked about not releasing model weights or releasing them gradually because we wanted people to have time to adapt, We got ridiculed for that, and I totally stand by that decision. Would you like us to push a button and open source GPT-4 and drop those weights into the world? Probably not. Probably not. One of the excuses that tech always uses is you don't understand it. We need to keep it in the back books.
Starting point is 00:21:16 It's often about competition. Well, for us, it's the opposite. I mean, we've said all along, and this is different than what most other AGI efforts have thought is everybody needs to know about this like AGI should not go be built in a secret lab with only the people who are like you know privileged and smart enough to understand it part of the reason that we deploy this is I think we need the input of the world and the world needs familiarity with what is in the process of happening the ability to weigh in to shape this together like we want that we need that input and people people deserve it so i think we're like not the secretive company we're quite the opposite
Starting point is 00:21:56 like we put this we put the most advanced ai in the world in an API that anybody can use. I don't think that if we hadn't started doing that a few years ago, Google or anybody else would be doing it now. They would just be using it secretly to make Google search better. Secretly to themselves. So you think you're forcing it out. But you are in competition. And let me go back to someone who was one of your original funders, Elon Musk. He's been openly critical of OpenAI, especially as it's gone to profits. He said, OpenAI was created as an open source, which is why I named it OpenAI, nonprofit company to serve as a counterweight to Google, but now has become closed source, maximum profit company, effectively controlled by Microsoft. Not what I intended at all. We're talking about open source versus closed, but what about his critique that you're too close to the big guys?
Starting point is 00:22:46 I mean, most of that is not true. And I think Elon knows that. We're not controlled by Microsoft. Microsoft doesn't even have a board seat on us. We are an independent company. We have an unusual structure where we can make very different decisions than what most companies do. I think a fair part of that is we don't open source everything anymore. We've been clear about why we think we were wrong there originally. We still do open source a lot of stuff. You know, open sourcing clip was something that kicked off this whole generative image world. We recently open sourced Whisper.
Starting point is 00:23:19 We open sourced Tools. We'll open source more stuff in the future. But I don't think it would be good right now for us to open source GPT-4, for example. I think that would cause some degree of havoc in the world, or at least there's a chance of that. We can't be certain that it wouldn't. And by putting it out behind an API, we are able to get many, not all, many of the benefits we want of broad access to this society, being able to understand it, not all, many of the benefits we want of broad access to this society, being able to understand it, update and think about it.
Starting point is 00:23:49 But when we find some of the scarier downsides, we're able to then fix them. How do you respond to when he's saying you're a closed source maximum profit company? I'll leave out the control by Microsoft, but in strong partnership with Microsoft, which was against what he said. I remember years ago when he talked about this. This was something he talked about a lot. Was what part? Oh, we don't want these big companies to run it.
Starting point is 00:24:11 If they run it, we're doomed. You know, he was much more dramatic than most people. So we're a capped profit company. Yeah. We invented this new thing where we started as a nonprofit. Explain that. Explain what a capped profit is. We, our shareholders can make us, which is our employees and our investors, can make a certain return.
Starting point is 00:24:30 Like their shares have a certain price that they can get to. But if OpenAI goes and becomes a multi-trillion dollar company, whatever, almost all of that flows to the nonprofit that controls us. Not like people hit a cap and then they don't get any more. What is the cap? It continues to vary as we have to raise more money. profit that controls us not like people hit a cap and then they don't what is the cap it continues to vary as we have to raise more money um but it's like much much much and will remain much smaller than like any tech company what in terms of like a number i truly don't know but it's not a significant the non-profit gets the significant chunk of the revenue well well it gets no it gets
Starting point is 00:25:01 everything over a certain amount so if we're're not very successful, the nonprofit might not, or gets a little bit along the way, but it won't get any appreciable amount. The goal of the capped profit is in the world where we do succeed at making AGI, and we have a significant lead over everybody else, and that could become much more valuable, I think, than maybe any company out there today. That's when you want almost all of it to flow to a nonprofit, I think. I want to get back to what Elon was talking about. He was very adamant at the time, and again, overly dramatic, that Google and Microsoft and Amazon were going to kill us. I think he had those kind of words. There needed to be an alternative. What changed in your estimation to do that, to change from that idea? Oh, it was very simple. Like when we realized the level of capital we were going to need
Starting point is 00:25:52 to do this, scaling turned out to be far more important than we thought. And we even thought it was going to be important. Then we tried for a while to raise, to find a path to that level of capital as a nonprofit. There was no one that was willing to do it. So we didn't want to become a fully for-profit company. We wanted to find something that would let us get the access to and the power of capitalism to finance what we needed to do, but still be able to fulfill and be governed by the nonprofit mission. So having this nonprofit that governs this capped profit LLC, given the playing field that we saw at the time,
Starting point is 00:26:35 and I still think that we see now, was the way to get to the best of all worlds we could see. In a really well-functioning society, I think this would have been a government project. That's correct. I was just going to make that point. The government would have been your funder. We talked to them. That was not, it wouldn't have not just been that they would have been our funder, but they would have started the project.
Starting point is 00:26:58 We've done things like this before in this country. Right, sure. But the answer is not to just say oh well the government doesn't do stuff like this anymore so we're just going to sit around and you know let other countries run by us and get an agi and do whatever they want to us it's we're going to like look at what's possible on this playing field right so elon used to be the co-chair and you have a lot of respect for him sure you thought deeply about his critiques have Have you spoken to him directly? Was there a break or what? You two were very close, as I recall.
Starting point is 00:27:28 We've spoken directly recently. Yeah. And what do you make of the critiques? When you hear them from him, I mean, it can be quite in your face about this. He's got his style. Yeah. To say the positive thing about Elon,
Starting point is 00:27:42 I think he really does care about a good future with AGI. That is correct. And he's a jerk, whatever else you want to say about him. He has a style that is not a style that I'd want to have for myself. He's changed. But I think he does really care and he is feeling very stressed about what the future is going to look like for humanity. Yeah, he did apply that both to when we did an interview about Tesla. He's like, if this doesn't work, we're all doomed, which was sort of centered on his car. But nonetheless, he was correct.
Starting point is 00:28:16 And the same thing with this. And this was something he talked about almost incessantly, the idea of either AI taking over and killing us, or maybe it doesn't really care. Then he decided it was like ant hills. Do you remember that? I don't know the ant hills part. He said, we're like, you know how we think when we're building a highway, ant hills are there, and we just go over them without thinking about it. So they don't, it doesn't really care. And then he said, we're like a cat, and maybe they'll feed us and bell us, but they don't really care about us. It went on and on. It changed and iterated over time. But I think the critique of his that I would most agree with is that
Starting point is 00:28:49 these big companies would control this and there couldn't be innovation in the space. Well, I would say we're evidence against that. Except Microsoft. They're like a big investor, but again, not even a board member. So when you think... Like true full independence from them. So you think you are a startup in comparison with a giant partner? Yeah, I think we're a startup with a giant...
Starting point is 00:29:10 I mean, we're a big startup at this point. And there was no way to be a nonprofit that would work? I mean, if someone wants to give us tens of billions of dollars of nonprofit capital, we can go make that work. Or the government, which they're not. We tried. Now, he and others are working on different things. He has an anti-woke AI play.
Starting point is 00:29:29 Greg Brockman also said you guys made a mistake by creating AI with a left-leaning political bias. What do you think of the substance of those critiques? Well, I think that... This was your co-founder. Yeah, yeah. I think that the reinforcement learning from human feedback on the first version of ChatGPT was pretty left biased, but that is now no longer true. It's just become an internet meme. There are people, some people who are intellectually honest about this. If you go look at like GPT-4 and test it on this, it's relatively neutral.
Starting point is 00:30:04 Not to say we don't have more work to do. The main thing though, is I don't think you ever get to two people agreeing that any one system is unbiased on every topic. And so giving users more control and also teaching people about like how these systems work, that there is some randomness in a response, that the worst screenshot you see on Twitter is not representative of what these things do. I think it's important. So when you said it had a left-leaning bias, what did that mean to you? And of course, they will run with that.
Starting point is 00:30:32 They'll run with that quite far. People would give it these tests that score you on the political spectrum in America or whatever. And one would be all the way on the right, 10 would be all the way on the left. It would get like a 10 on all of those tests, the first version. Why? Because of, it was a number of reasons, but largely because of the reinforcement learning from human feedback step. We'll be back in a minute.
Starting point is 00:32:11 What do you think the most viable threat to OpenAI is? I hear you're watching Claude very carefully. This is the bot from Anthropic, a company that's founded by former OpenAI folks
Starting point is 00:33:00 and backed by Alphabet. Is that it? We're recording this on Tuesday. Bard launched today. I'm sure you've been discussing it internally. Talk about those two to start. Honestly, I mean, I try to pay some attention to what's happening with all these other things. It's going to be an unbelievably competitive space. I think this is the first new technological platform
Starting point is 00:33:20 in a long period of time. The thing I worry about the most is not any of those, uh, because I think, you know, there's room for a lot of people, and also I think we'll just continue to offer the best product. The thing I worry about the most is that we're somehow missing a better approach, and that this idea, like, everyone's chasing us right now on large language models kind of trained in the same way. I don't worry about them. I worry about the person that has some very different idea about how to make a more useful system. Like a Facebook 2. Probably not Facebook, to be honest.
Starting point is 00:33:53 Like a Facebook 2. Oh, oh, oh. No, not like Facebook. Not Facebook. No, Facebook's not going to come up with anything unless Snapchat does, and then they'll copy it. I'm teasing, sort of. But you don't feel like these other efforts, that they're sort of in your same lane, you're all competing. So it's the one that is not, that's what I would worry about more. Yeah, like the people that are trying to do exactly what we're doing, but, you know,
Starting point is 00:34:15 scrambling, muscling it, like. But is there one that you're watching more carefully? Uh, not especially, really. I kind of don't believe you, but really? I mean, no, the things I was going to say, the things that I pay the most attention to are not like language model startup number 217. It's when I hear about someone, it's like, these are like three smart people in a garage with some very different theory of how to build AGI. And that's when I pay attention. Is there one that you're paying attention to now? There is one I don't want to say. Okay. You really don't want to say?
Starting point is 00:34:51 I really don't want to say. Okay. What's the plan for making money? So we're sort of like, we have a platform, which is this API that anybody can use to access the model. And then we have like a consumer product on top of it. Right. And the consumer product, 20 bucks a month for the sort of premium version. And the API, you just pay us per token, like basically like a meter. Businesses would do that depending on what they're
Starting point is 00:35:09 using it for. If they decide to deploy it in a hotel or wherever. The more you use it, the more you pay. The more you use it, the more you pay. One of the things that someone said to me that I thought was very smart is if the original internet started on a more pay subscriber basis rather than an advertising basis, it wouldn't be quite so evil. I am excited to see if we can really do a mass scale subscription funded, not ad funded business here. Do you see ads funding this?
Starting point is 00:35:35 That to me is the original sin of the internet. We've made the bet not to do that. Right. I'm not opposed to it. What would it look like? I don't know. We haven't thought like
Starting point is 00:35:42 it's going great with our current model. We're happy about it. You've been also competing against Microsoft for clients. They're trying to sell your software through their Azure cloud businesses
Starting point is 00:35:50 as an add-on. Actually, that, I don't. Like, that's fine. I don't care about that. That's fine. But you're also trying to sell directly sometimes the same clients.
Starting point is 00:35:57 You don't care about that. I don't care about that. You don't care. How does it work? Does it affect your bottom line that way? Again, we're like an unusual company here.
Starting point is 00:36:06 We're not, like, we don't need to squeeze out every dollar. Former Googler Tristan Harris, who's become a critic of how tech is sloppily developed, presented to a group of regulators in DC. I was there. Among the points he made is that you've essentially kicked off an AI arms race. I think that's what struck me the most. Meta, Microsoft, Google, Baidu are rushing to ship generative AI bots when the tech industry is shedding jobs. Microsoft recently laid off the ethics and society team within its AI org. That's not your issue. But are you worried about a profit-driven arms race? I do think we need regulation and we need industry norms about this. I am disappointed to see people, like we spent many, many months and actually really the years
Starting point is 00:36:49 that it's taken us to get good at making these models, getting them ready before we put them out. You know, people, it obviously became somewhat of an open secret in Silicon Valley that we had GPT-4 done for a long time.
Starting point is 00:36:59 And there were a lot of people who were like, you got to release this now. You're holding this back from society. You know, this is your closed AI, whatever. But, like, we just wanted to take the time to get it right, and there's a lot to learn here, and it's hard. And in fact, we try to release things to help people get it right, even competitors. I am nervous about the shortcuts that other companies now seem like they want to take, such as, oh, just rushing out these models without all the safety features built.
Starting point is 00:37:26 Without safety. So they're just, this is an arms race, they want to get in here and get ahead of you because you've had the front seat. Maybe they do, maybe they don't. They're certainly making some noise, like, you know, they're going to. So when you say worried, what can you do about it? Nothing. Well, we can, and we do try to talk to them and explain, hey, here's some pitfalls. Here's some things we think you need to get right. We can continue to push for regulation.
Starting point is 00:37:51 We can try to set industry norms. We can release things that we think help other people get towards safer systems faster. Can you prevent that? Let me read you this passage from a story about Stanford doing it. They did one of their own models. $600, I think it cost them to put up. They trained a model for $600? Yeah, yeah, yeah, they did.
Starting point is 00:38:08 It's called Stanford Alpaca, just so you know. That's a cute name. It is, it's a cute name. I'll send you the story. But so what's to stop basically anyone from creating their own pet AI now for a hundred bucks or so and training it however they choose?
Starting point is 00:38:20 Well, OpenAI's terms of service say you may not use output from the services to develop models that compete with OpenAI. And Meta says it's only letting academic researchers use LLaMA under a non-commercial license at this stage. Although that's a moot point since the entire LLaMA model was leaked onto 4chan. Within hours or something. Yeah, and this is a $600 version of yours. One of the other reasons that we want to talk to the world about these things now is this is coming.
Starting point is 00:38:44 This is totally unstoppable. Yeah. And there are going to be a lot of very good open source versions of this in coming years, and it's going to come with, you know, wonderful benefits and some problems. By getting people used to this now, by getting regulators to begin to take this seriously and think about it now, um, I think that's our best path forward. All right. Two things I want to talk about, societal impact and regulation. You've said, as I told you, this will be the greatest technology humanity has ever developed. In almost every interview you do, you're asked about the dangers of releasing AI products, and you say it's better to test AI gradually in the open, quote, while the stakes are relatively
Starting point is 00:39:20 low. Can you expand on that? Why are the stakes low now? Why aren't they high right now? Relatively is the key word there. Right. Okay. What happens to the stakes if it's not controlled now? Well, these systems are now much more powerful than they were a few years ago. And we are much more cautious than we were a few years ago in terms of how we deploy them. We've tried to learn what we can learn. We've made some improvements. We've found ways that people want to use this. In this interview, and I totally get why in many of these topics, I think we're mostly talking about all of the downsides.
Starting point is 00:39:53 No, I'm going to ask you about the upsides. Okay. But we've also found ways to improve the upsides by learning too. So mitigate downsides, maximize upsides. That sounds good. And it's not that the stakes are that low anymore. In fact, I think we're in a different world than we were a few years ago. I still think they are relatively low to where we'll be a few years from now. These systems are still, they have classes of problems, but there's things that are totally out of reach that we know they'll be capable of. And the learnings we have now, the feedback we get now, seeing the ways people hack, jailbreak, whatever, that's super valuable.
Starting point is 00:40:33 I'm curious how you think we're doing. I know you're- I think you're saying the right things. You're absolutely saying- Not from saying, like how you think we're doing as you look at the trajectory of our releases. I think the reason people are so worried, and I think it's a legitimate worry, is because the way the early internet rolled out, it was gee whiz almost the whole time. Almost up and to the right, gee whiz, look at these rich guys. Isn't this great? Doesn't this help you? And they missed every single consequence. Never thought of them. It was, I remember seeing Facebook live and I mentioned, I said, well, what about, you know, people who kill each other on it? What about, you know, murderers? What about suicides? And they called me a bummer.
Starting point is 00:41:09 A bummer? A bummer in this room. And I'm like, yeah, I'm a bummer. I'm like, I don't know. I just noticed that when people get ahead of tools, they tend, and you know, this is Brad Smith's thing. It's a tool or a weapon. Weapons seem to come up a lot. And so I always think the same thing happened with the Google founders when they were trying to buy Yahoo many years ago. And I said, at least Microsoft knew they were thugs. And they called me and they said, that's really hurtful, we're really nice. I said, I'm not worried about you. I'm worried about the next guy. Like, I don't know who runs your company in 20 years with all that information on everybody. And so I think, you know, I am a bummer. And so if you don't know
Starting point is 00:41:43 what it's going to be, while you can think of all the amazing things it's going to do, and it'll probably be a net positive for society, net positive isn't so great either sometimes, right? It's a net positive. The internet's a net positive, like electricity's a net positive. But every time, it's a famous quote, when you invent electricity, you invent the electric chair, when you invent this and that.
Starting point is 00:42:04 And so what would be the thing here that would be the greatest thing? Does it outweigh some of the dangers? I think that's going to be the fundamental tension that we face, that we have to wrestle with, that the field as a whole has to wrestle with, society has to wrestle with. Especially in this world we live in now, which I think we can all agree has not gone forward. It's spinning backwards a little bit in terms of authoritarians using this, you know, I am super nervous about. Yeah. What is the greatest thing you can think of now? You're not, you and I are not creative enough to think of all the things. We are not going to go, not even, what, from your perspective. And, uh, you know, don't do term papers, don't do dad jokes. What do you think?
Starting point is 00:42:40 That's fine. Is that what you thought I would say for the greatest thing? Not at all, but I'm getting tired of that. I don't care that it can write a press release. I don't care. Fine. Sounds fantastic. I don't read them anyway. What I am personally most excited about is helping us greatly expand our scientific knowledge. I am a believer that a lot of our forward progress comes from increasing scientific discovery over a long period of time.
Starting point is 00:43:03 In any area? All the areas. I think that's just what's driven humanity forward. And if these systems can help us in many different ways, greatly increase the rate of scientific understanding, you know, curing disease is an obvious example. There's so many other things we can do with better knowledge and better understanding of science. AI has already moved in that area, folding proteins and things like that. So that's the one that I'm personally most excited about.
Starting point is 00:43:26 Is science. Yeah. But there will be many other wonderful things too. You just, you asked me what my one was and. Is there one unusual thing that you think will be great that you've seen already that you're like, that's pretty cool. Using some of these new AI tutor like applications is like, I wish I had this when I was growing up.
Starting point is 00:43:44 I could have learned so much and so much better and faster. And when I think about what kids today will be like by the time they're finished with their formal education and how much smarter and more capable and better educated they can be than us today, I'm excited for that. Using these tools. I would say health information to people who can't afford it is probably the one I think is most promising. That's going to be transformative. We've seen, even for people who can afford it, this in some ways will just be better. Yeah, exactly.
Starting point is 00:44:13 It's 100% broken. And the work we're seeing there from a bunch of early companies on the platform, I think it's remarkable. So the last thing is regulation, because one of the things that's happened is the Internet was never regulated by anybody, really, except maybe in Europe. But in this country, absolutely not. There's not a privacy bill. There's not an antitrust bill, et cetera. It goes on and on. They did nothing.
Starting point is 00:44:32 But the EU is considering labeling ChatGPT high risk. If it happens, it will lead to significant restrictions on its use, and Microsoft and Google are lobbying against it. What do you think should happen? With AI regulation in general? This one, the high risk one. I have followed the development of the EU's AI Act, but it has changed. It's, you know, obviously still in development. I don't know enough about the current version of it to say if I think this way, like this definition of what high risk is and this way of classifying it, and this is what
Starting point is 00:45:02 you have to do, I don't know if I would say that's like good or bad. I think like totally banning this stuff is not the right answer. And I think that not regulating this stuff at all. I mean, you're not TikTok, but go ahead. And I think not regulating this stuff at all is not the right answer either. And so the question is like,
Starting point is 00:45:19 is that gonna end in the right balance? Like, if the EU is saying, you know, no one in Europe gets to use ChatGPT, probably not what I would do. But if the EU is saying, here's the restrictions on ChatGPT and any service like it, there's plenty of versions that I could imagine being like, all right, super sensible. All right. So after the Silicon Valley non-bailout bailout, you tweeted, we need more regulation on banks.
Starting point is 00:45:40 But what sort of regulation? I know. And then someone tweeted at you, now he's going to say, we need them on AI. And you said, we need them on AI. But I mean, I do think that SVB was an unusually bad case. But also, if the regulators aren't catching that, what are they doing? They did catch it, actually.
Starting point is 00:45:57 They were giving warnings. They were giving warnings. But like there's often in an audit, you know, this thing is not quite like that's different than saying. They were giving them pretty significant. You need to do something. They just didn't do anything. Well, they could have. I mean, the regulators could have taken over like six months ago.
Starting point is 00:46:10 So this is what happens a lot of the time, even in well-regulated areas, which banks are compared to the Internet. What sort of regulations does AI need in America? Lay them out. I know you've been meeting with regulators and lawmakers. I haven't done that many. Well, they call me when you do. They want to say they've seen you, I guess. What do they say? Well, you're like the guy now. So they like to say I was with Sam and lawmakers. I haven't done that many. Well, they call me when you do. They want to say they've seen you, I guess. What do they say?
Starting point is 00:46:26 Well, you're like the guy now. So they like to say I was with Sam Altman. Oh, I did one. He seems nice. I go, he is nice. I don't know what to tell you. I did like a three-day trip to DC earlier this year. So tell me what you think the regulations were
Starting point is 00:46:38 and what are you telling them? And do you find them savvy as a group? I think they're savvier than people think. Some of them are quite exceptional, yeah. I think the thing that I would like to see happen immediately is just much more insight into what companies like ours are doing. Companies that are training above a certain level of capability at a minimum, like a thing that I think could happen now, is the government should just have insight into the capabilities of our latest stuff, released or not, what our internal audit procedures and external audits we use look like, how we collect our data, how we're red-teaming these systems, what we expect to happen, which we may be totally wrong about, we could hit a wall anytime. But like
Starting point is 00:47:20 our internal roadmap documents when we start a big training run, I think there could be government insight into that. And then if that can start now, I do think good regulation takes a long time to develop. It's a real process. They can figure out how they want to have oversight. Reid Hoffman has suggested a blue ribbon panel so they'd learn up on this stuff. I mean, panels are fine. We could do that too. But what I mean is like government auditors sitting in our buildings.
Starting point is 00:47:47 Congressman Ted Lieu said there needs to be an agency dedicated specifically to regulating AI. Is that a good idea? I think there's two things you want to do. This is way out of my area of expertise, but you're asking, so I'll try. I think people like us that are creating these very powerful systems that could become something properly called AGI at some point. Explain what that is. Artificial General Intelligence. But what people mean is just like above some threshold where it's really good. Right. Those efforts probably do need a new regulatory effort.
Starting point is 00:48:20 And I think it needs to be a global body, a new regulatory body. And then people that are using AI, like we talked about the medical advisor. I think the FDA can probably give very great medical regulation, but they'll have to update it for the inclusion of AI. But I would say like creation of the systems and having something like an IAEA that regulates that is one thing. And then having existing industry regulators still do their regulation. So people do react badly to that because the information bureaus, that's always been a real problem in Washington. Yeah, not everyone. Who should head that agency in the U.S.?
Starting point is 00:48:58 I don't know. Okay. All right. So one of the things that's going to happen, though, is the less intelligent ones, of which there are many, are going to seize on things like they've done with TikTok, possibly deservedly, but other things like Snap released a chatbot powered by GPT that reportedly told a 15-year-old how to mask the smell of weed and alcohol and a 13-year-old how to set the mood for sex with an adult. They're going to seize on this stuff. And the question is, who's liable if this is true when a teen uses those instructions? And Section 230 doesn't seem to cover generative AI. Is that a problem? I think we will need a new law for use of this stuff. And I think the liability will need to have a few different frameworks. If someone's tweaking the models themselves, I think it's going to have to be the last person that touches it has the liability. But there should be liability.
Starting point is 00:49:48 It's not full immunity that the platforms give. I don't think we should have full immunity. Now, that said, I understand why you want limits on it, why you do want companies to be able to experiment with this. You want users to be able to get the experience they want. But the idea of, like, no one having any limits for generative AI, for AI in general, that feels super wrong. Um, last thing. Trying to quantify the impact you personally will have on society as one of the leading developers of this technology, do you think about that? Do you think about your impact? You mean me, OpenAI, or me, Sam? You, Sam. Uh, I mean, hopefully I'll have a positive impact. Like
Starting point is 00:50:24 do you think about the impact on humanity, the level of power that also comes with it? Yeah, I don't. I think about what OpenAI is going to do a lot and the impact OpenAI will have. Do you think it's out of your hands? No. No. But it is very much, like, the responsibility is with me at some level, but it's very much a team effort. And so when you think about the impact, what is your greatest hope and what's your greatest worry?
Starting point is 00:50:53 My greatest hope is that we are, we create this thing. We are one of many people that is going to contribute to this movement. We'll create an AI, other people will create an AI, and we will be a participant in this technological revolution that I believe will be far greater in terms of impact and benefit than any before. My view of the world is it's this one big, long technological revolution, not a bunch of smaller ones, but we'll play our part.
Starting point is 00:51:22 We will be one of several in this moment. And that this is going to be really wonderful. This is going to elevate humanity in ways we still can't fully envision. And our children, our children's children are going to be far better off than the best of anyone from this time. And we're just going to be in a radically improved world. We will live healthier, more interesting, more fulfilling lives. We'll have material abundance for people. And, you know, we will be a contributor, we'll put in our part of that. You do sound alarmingly like the people I met 25 years ago, I have to say. I don't know how old you are, but you were young. You were probably very young.
Starting point is 00:52:06 37. Yeah, so you were 12. And they did talk like this. Many of them did, and some of them continued to be that way. A lot of them didn't, unfortunately. And then the greed seeped in, the money seeped in, the power seeped in, and it got a little more complex, I would say. Not totally. And again, because net, it's better.
Starting point is 00:52:24 But I want to focus on you on my last question. There seem to be two caricatures of you. One that I've seen in the press is a boyish genius who will help defeat Google and usher in utopia. The other is that you're an irresponsible, woke tech overlord Icarus that will lead us to our demise. I have to pick one? No. Is it? No, I don't think.
Starting point is 00:52:41 How old do I have to be before I can drop the boyish qualifier? Oh, you can be boyish. Tom Hanks is still boyish. Yeah. And what was the second one? Uh, you know, Icarus, overlord, tech overlord, woke. Woke something. Yeah, yeah. Well, whatever the Icarus part is. I like boyish, that I'm, I think we feel like adults now. You may be adults, but boyish always gets put on you. I don't ever call you boys. I think you're adults. Um, adults. Icarus meaning like we are messing around with something that we don't fully understand. Well, we are messing around with something we don't fully understand. Yeah.
Starting point is 00:53:11 And we are trying to do our part in contributing to the responsible path through it. All right. But I don't think either of those characters. You're not either of those characters. So describe yourself then. Describe what you are. Technology brother. Oh, wow. You're going to go for tech.
Starting point is 00:53:28 No, I'm kidding. I just think that's such a funny meme. I don't know how to describe myself. I think that's what you would call me. No, I wouldn't. No? 100%. All right. Because it's an insult now. It's become an insult.
Starting point is 00:53:38 I'd call you a technology sister. I'll take that. We leave it on that note? Let's leave on that note. I do have one more quick question. Last time we talked, you were thinking of running for governor. I was thinking of running for mayor. I'm not going to be running for mayor. Are you going to still run for governor?
Starting point is 00:53:50 No. No. I think I am doing the most amazing thing I can imagine. I really don't want to do anything else. You don't want to do anything else. It's tiring, but I love it. Yeah. Okay.
Starting point is 00:54:00 Sam Altman, thank you so much. Thank you. Sam Altman, thank you so much. Thank you. You said he sounded a lot like a lot of founders a generation before him. Yes. What are the lessons you would impart to Sam as someone who has so much impact on humanity? You know, I think what I said is that they were hopeful and they had great ideas. And one of the things that I think people get wrong is to be a tech critic means you love tech.
Starting point is 00:54:27 Like, you know, you really love it. You do. Yeah, of course. And you don't want it to fail. You want it to create betterment for humanity. And if that's your goal, when you see it being warped and misused, it's really sad and disappointing. And I think one of the things early Internet people had all these amazing ideas, the world talking to each other, we'll get along with Russia, we'll be able to communicate over vast distances. And again, just like I talked about with Reid Hoffman, it's a Star Trek vision
Starting point is 00:54:53 of the universe. And that's what it was. And boy, the money and the power and the bad people that came in were really significantly shifted it, Not completely by any means. I love my Netflix. You know, I just do. But the unintended or intended consequences ultimately are very hard to bear, even if it's a net positive. So it's just the money and the power that's corrupting is what you're saying. It's inevitable? No, not inevitable. But often, often. Often, yeah. Well, not him. Not a lot of people. But let's see this standing the test of time, right? You're saying about Reid Hoffman and Max Levchin versus, say, Peter Thiel and Elon Musk. estimation, he's been very consistent in how he looks at the world, which is not a particularly positive light. I think that a lot of them do stay the same, and they do stay true to what they're like. And I don't know why that is over certain people, and others get sucked into it in a way
Starting point is 00:55:57 that's really, I'm thinking about this a lot, because that's what my book's about. Yeah, of course. How people change and why, and whether that's a good thing or a bad thing, because, you know, Yeah, of course. for in every kind stranger there is one who would break you, though I keep this from my children. I'm trying to sell them the world. Any decent realtor walking through a real shithole chirps on about good bones. This place could be beautiful, right? You could make this place beautiful. And that's how I feel about this. They could make this place beautiful. And I think Sam thinks that too. Yeah. It's not just a lie you tell your children, right? Well, no. But it is. You can't tell them terrible things all the time. They would be like just lying on the ground. Yeah. It's not just a lie you tell your children, right? Well, no. But it is. You can't tell them terrible things all the time. They would be like just lying on the ground. Yeah. But sometimes it's so idealistic. Like when he said global regulatory body to regulate AI, I'm like, oh, man, we're fucked. That's never going to happen.
Starting point is 00:56:56 Like when was the last good global regulatory body? I know, but it could work. It could work. This has to be global. This has to be global. But there's no infrastructure to set up a sustainable, like, global society. Yes, there is. In medicine, there is. What, you think the World Health Organization has been effective in medicine?
Starting point is 00:57:11 No, I think there's stuff around cloning, around all kinds of stuff. It's never going to be perfect, but boy, there's a lot of people that hew to those ethics. I mean, I think it depends how bought in state governments are, including China. But the regulation thing is particularly tricky because it can also become a moat, right, for incumbents. Like Facebook's like, regulate them. It's like, well, you can afford the regulation in a way that new competitors maybe can't. I think the governments can play a lot of roles here.
Starting point is 00:57:35 They do it in nuclear nonproliferation. It's never perfect, but we still haven't set one off, have we? I think that's largely the deterrent power and not because of any effective regulation. I am a great believer in nuclear nonproliferation. Yeah, I do too. And so I think there's lots of examples of it work. And I think the most significant thing that he said here was about the government's role, the U.S. government's role. It shouldn't give this all over to the private sector. It should have been the one to give them money
Starting point is 00:58:02 and to fund them. And that is 100%. We've talked to Mariana Mazzucato about that. And many people, that to me is the big shame is the government abrogating its role in really important things that are important globally and important for the US. But even when the government has played that kind of like, let's call it kindling role for industry, whether it be Elon Musk's loan for Tesla, whether it be what DARPA was doing that became, you know, parts of Siri and Echo and whatnot. The government here is bad at retaining like a windfall from that that would be reinvested into taxpayers. But it didn't used to. It used to just do it because it was the right thing to do,
Starting point is 00:58:38 that we would research and investment by the government. You know, highway system seems to have worked out pretty good. The telephone system seems good. You know, I mean, we always tend to, like, talk about what they do wrong. But there's so much stuff that the government contributed to that matters today. It used to be a culture also if people would want to go into government and civil service. My father was in that generation. Like, you know, and I think that it's interesting to hear Sam say, no, he won't run for governor. And, in fact, you think sometimes, well, it would be so great if some of these bright minds went into—
Starting point is 00:59:08 Except he's more effective where he is. Why would he do that when he's more effective where he is? Arguably the right regulator for this is a person who could have built it. Yeah. Or conceived building it. Maybe. Did you find his answers to the moderation questions and this idea of hallucination and overly impressive at first glance. Did you find those satisfying? Yeah, I thought he doesn't have answers. I think one of the things I like
Starting point is 00:59:29 about Sam is if he doesn't have an answer, I don't think he's hiding it. I don't think he knows. And I think one of the strengths of certain entrepreneurs is I don't really know. And I think a lot around AI right now, anyone that's going to give you a certainty is lying to you. Well, they had experimented with using these low-wage workers in Africa through SAMA and outsource. Well, I think it was that it was exposed. They were paying them less than $2 an hour and training them to build up what was reported a content moderation AI layer, which is ironic when you think about it. So there were workers in Africa being paid less than $2 an hour to train machines to replace them for that
Starting point is 01:00:05 job. Well, have you been to an Amazon warehouse lately? There's a lot of machines doing everything. That's the way it's going. That's like you're telling me something that happens in every other industry. Yeah, I know. And yet we're going to grow smarter. Do you think that's true? AI tutors, everyone's going to be smarter? I do. I think we do a lot of rote, idiotic work that we shouldn't be doing. And we have to be more creative of what the greatest use of our time is. My great hope for AI is actually that it takes out the rote bits and all of a sudden creative industry flourishes because those are the parts that can't be replicated. And though I think, you know, a sad reality of technology in the last generation has been that kids maybe don't read as well or as much or as fast or as early
Starting point is 01:00:43 as used to, but they make video. Right. What if they're spoken to smarter? Like the idea of education on these things or information or healthcare in an easy way is really, these phones are just getting started and they will not just be phones. They will be wrapped around us in more good information you get and the more communication you get. That's a good thing. They might just be getting started, but we are ending. Do you want to read us our credits today? Yes. Remember, you can make this place beautiful or ugly, depending. It's got good bones.
Starting point is 01:01:12 It's got good bones. It's got good bones. Today's show was produced by Neha Miraza, Blake Nishik, Kristen Castro-Rossell, and Rafaela Seward. Special thanks to Haley Milliken. Our engineers are Fernando Arrudo and Rick Kwan. Our theme music is by Trackademics. If you're already following the show, you get the red pill. If not, Rick Deckard is coming after you. Go wherever you listen to podcasts, search for On with Kara Swisher and hit follow. Thanks for listening to On with Kara Swisher from New
Starting point is 01:01:41 York Magazine, the Vox Media Podcast Network and us. We'll be back on Friday, that's tomorrow, with a special bonus episode.