The Journal. - A Conversation with OpenAI’s Sam Altman and Mira Murati

Episode Date: October 20, 2023

Two of the creators of ChatGPT discuss job disruption, data and the 'person-ness' of AI chatbots with WSJ's Joanna Stern.

Further Listening:
- The Hidden Workforce That Helped Filter Violence and Abuse Out of ChatGPT
- The Company Behind ChatGPT

Further Reading:
- 3 Things I Learned About What's Next in AI
- You Can Now Talk With ChatGPT and It Sounds Like a Human (Pretty Much)

Transcript
Starting point is 00:00:00 Nine years ago, technology entrepreneur Sam Altman made a prediction. He said it would take a long time before humans had to worry about losing their jobs to artificial intelligence. Here he is speaking at a Wall Street Journal conference at the time. Computers and humans are very good at very different things. So a computer doctor will outcrunch the numbers and do a better job than a human at looking at a massive amount of data. But on cases that require judgment or creativity or empathy, we are nowhere near any computer system that is any good at this. But now, almost a decade later, Altman and the company he co-founded, OpenAI, have released an AI chatbot called ChatGPT that can do a lot of things humans can.
Starting point is 00:01:01 It can write emails, business plans, even computer code, things that nine years ago didn't seem possible. And this week, Altman returned to that same conference, the Wall Street Journal's Tech Live, and spoke with our colleague Joanna Stern, who asked him about that prediction. Does 2023 Sam agree? Partially right and partially wrong. Okay, yeah. Could have been worse. Could have been worse? What's your outlook now?
Starting point is 00:01:21 Welcome to The Journal, our show about money, business, and power. I'm Kate Linebaugh. It's Friday, October 20th. Coming up on the show, a conversation with the leaders of OpenAI.
Starting point is 00:02:04 Earlier this week, our colleague Joanna Stern sat down with two of OpenAI's executives, CEO Sam Altman and CTO Mira Murati. Sam, Mira, thank you so much. Thanks for having us. Joanna started by asking Altman about the company's ultimate goal,
Starting point is 00:02:42 creating Artificial General Intelligence, or AGI, a type of AI that's as smart as humans. And Sam, why is AGI the goal? I think AGI will be the best tool humanity has yet created. With it, we will be able to solve all sorts of problems. We'll be able to express ourselves in new creative ways. We'll make just incredible things for each other, for ourselves, for the world, for kind of this unfolding human story. And, you know, it's new, and anything new comes with change,
Starting point is 00:03:09 and change is not always easy. But I think this will be just absolutely tremendous upside. And in nine more years, if you're nice enough to invite me back, you'll roll this question, and people will say, like, how could we have thought we didn't want this? When will it be here? And how will we know it's here? We kind of define AGI
Starting point is 00:03:33 as like the thing we don't have quite yet. So we've moved, I mean, there were a lot of people who would have 10 years ago said, all right, if you could make something like GPT-4, GPT-5 maybe, that would have been an AGI. And now people are like, well, you know, it's like a nice little chatbot or whatever.
Starting point is 00:03:46 And I think that's wonderful. I think it's great that the goalposts keep getting moved. It makes us work harder. But I think we're getting close enough to whatever that AGI threshold is going to be that we no longer get to hand wave at it, and the definition is going to matter. OpenAI has released several versions of GPT, each more powerful than the last.
Starting point is 00:04:09 Its latest version, which came out earlier this year, is called GPT-4. But there's a lot of anticipation about what the company is working on next. Joanna asked Mira Murati, OpenAI's CTO, whether the company is working on GPT-5. Mira, how's GPT-5 going? We're not there yet, but it's kind of a need-to-know basis. I'll let you know. That's such a diplomatic answer. I'm going to make Mira do all of these now. I would have just said, oh yeah, here's what's happening. That's great. We'll leave it there.
Starting point is 00:04:45 No, no, no. We're not sending him back here. Who paired these two? Whose idea was this? You're working on it. You're training it. We're always working on the next thing. One of the things OpenAI has been working on is trying to reduce the number of so-called hallucinations.
Starting point is 00:05:02 This is when the chatbot generates a totally false answer. Will GPT-5 solve the hallucination problem? Well, I mean, actually, maybe. Let's see. We've made a ton of progress on the hallucination issue with GPT-4, but we're not where we need to be. We're sort of on the right track, but it's unknown. It's research. It could be that continuing on this path of reinforcement learning with human feedback, we can get all the way to really reliable outputs. And we're also adding other elements, like retrieval and search, so you have the ability to provide more factual answers or to get more factual outputs from
Starting point is 00:05:53 the model. So there is a combination of technologies that we're putting together to reduce the hallucination issue. Another problem OpenAI has faced is lawsuits, from writers like George R.R. Martin and John Grisham to the comedian Sarah Silverman. They're alleging copyright infringement because their copyrighted work was used to train the company's AI models. One lawsuit calls ChatGPT, quote, systematic theft on a mass scale. OpenAI has said it trains its AI models on publicly available information. The company has also said that it respects the rights of creators and authors and that many creative professionals use ChatGPT. Sam, I'll ask you about the data, the training data.
Starting point is 00:06:43 Obviously, there are maybe some people in this audience who may not be thrilled about some of the data that you guys have used to train some of your models. Not too far from here in Hollywood, people have not been thrilled, publishers among them. As you're now working towards these next models, what are the conversations you're having around the data? So we obviously only want to use data that people are excited about us using. We want the model of this new world to work for everyone. And we want to find ways to make people say, you know what? I see why this is great. It may be a new way that we think about some of these
Starting point is 00:07:29 issues around data ownership and how economic flows work. But we want to get to something that everybody feels really excited about. But one of the challenges has been that different kinds of data owners have very different pictures. So we're just experimenting with a lot of things. We're doing partnerships of different shapes. And we think that with any new field, we'll find something that just becomes a new standard. Also, I think as these models get smarter and more capable,
Starting point is 00:08:00 we will need less training data. So I think there's this view right now that models are going to have to train on every word humanity has ever produced, or whatever. And, technically speaking, I don't think that's what's going to be the long-term path here.
Starting point is 00:08:16 We have existence proof with humans that that's not the only way to become intelligent. And so I think the conversation gets a little bit led astray by this, because what will really matter in the future is particularly valuable data. People trust the Wall Street Journal, and they want to see content from it. And the Wall Street Journal wants that too. And we'll find new models to make that work. But I think the conversation about data and the shape of all of this,
Starting point is 00:08:45 because of the technological progress we're making, it's about to shift. Over the past few months, OpenAI has introduced a lot more personalization to GPT-4. Users can give it information about themselves, which ChatGPT uses to provide custom responses. It also now has its own voice, which Joanna demonstrated on stage by asking it a question. I've got a big interview with OpenAI executives, and I'm nervous. Any advice? Absolutely. Given your background as a technology columnist, you're well equipped for this. Just be yourself, do your homework on OpenAI's latest developments, and trust in your expertise. Remember, they'll likely appreciate thoughtful and well-researched questions.
Starting point is 00:09:36 And hey, everyone gets nervous. Take a deep breath and rock it. Am I rocking it? Definitely. I'm rocking it. All right. But what's so striking about that, first of all, that's the voice of ChatGPT. It's one of five. Yeah. I mean, that is the voice. It sounds so human. It sounds so natural. It knows about me because I've already put it into custom instructions. I'm a tech journalist. It also knows I'm allergic to avocado. It's always putting that in there. Don't eat avocado.
Starting point is 00:10:06 I'm like, I'm not asking about avocado. We got some work to do. Is there a future, and this is what you're maybe trying to build here, where we have deep relationships with this type of bot? It's going to be a significant relationship, right? Because, you know, we're building these systems that are going to be everywhere in your home, in your educational environment,
Starting point is 00:10:31 in your work environment, and maybe, you know, when you're having fun. And so that's why it's actually so important to get it right. That's a big responsibility, though. And you guys will be in sort of control of people's friends. Maybe it gets to being people's lovers. How do you guys think about that control? First of all, I think there's... We're not going to be the only player here. Like, there's going to be many people.
Starting point is 00:10:56 So we have... We get to put, like, our nudge on the trajectory of this technological development. And we've got some opinions. But A, we really think that the decisions belong to humanity, society as a whole, whatever you want to call it. And B, we will be one of many actors building sophisticated systems here. So it's going to be a society-wide discussion. And there are going to be all the normal forces. There will be competing products that offer different things. There will be different kinds of societal embraces and pushbacks.
Starting point is 00:11:26 There will be regulatory stuff. It's going to be like the same complicated mess that any new technological birthing process goes through. And then we pretty soon will turn around and we'll all feel like we had smart AI in our lives forever. And, you know, that's just the way of progress. And I think that's awesome. I personally have deep misgivings about this vision of the future
Starting point is 00:11:50 where everyone is super close to AI friends, more so than to human friends, or whatever. I personally don't want that. I accept that other people are going to want that. And some people are going to build that. And if that's what the world wants and what we decide makes sense, we're going to get that. I personally think that personalization is great.
Starting point is 00:12:12 Personality is great. But it's important that it's not like person-ness. And at least that you know when you're talking to an AI and when you're not. You know, there's a long story behind that, but we named it ChatGPT and not a person's name very intentionally. And we do a bunch of subtle things in the way you use it to make it clear that you're not talking to a person.
Starting point is 00:12:35 And I think what's going to happen is that in the same way that people have a lot of relationships with people, they're going to keep doing that. And then there'll also be these AIs in the world, but you kind of know they're just a different thing. After the break, how to prevent the AI apocalypse.
How do we go from the chatbot we just heard that told me to rock it to one that, I don't know, can rock the world and end the world? Well, I don't think we're going to have a chatbot that ends the world.
Starting point is 00:14:16 But how do we go from this idea of, we've got simple chatbots, they're not simple, they're advanced, what you guys are doing, to this fear that is now pervading everywhere? If we are right about the trajectory things are going to stay on, and if we are right about not only the scaling of the GPTs but also new techniques that we're interested in that could help generate new knowledge, and someone with access to a system like this can say, help me hack into this computer system, or help me design a new biological pathogen that's much worse than COVID, or any number of other things,
Starting point is 00:14:56 it seems to us like it doesn't take much imagination to think about scenarios that deserve great caution. And again, we all come and do this because we're so excited about the tremendous upside and the incredibly positive impact. And I think it would be a moral failing not to go pursue that for humanity. But we've got to address, and this happens with many other technologies,
Starting point is 00:15:22 we've got to address the downsides that come along with this. And it doesn't mean you don't do it. It doesn't mean you just say, this AI thing, we're going to go full Dune and blow up and not have computers or whatever. But it means that you are thoughtful about the risks. You try to measure what the capabilities are. And you try to build your own technology in a way that mitigates those risks. Another risk is what AI could do to the workforce
Starting point is 00:15:53 and how quickly that could happen. Every technological revolution affects the job market. And over human history, maybe every 100 years, you can pick different numbers for this, 150 years, half the kinds of jobs go away or totally change, whatever. I'm not afraid of that at all. In fact, I think that's good. I think that's the way of progress. And we'll find new and better jobs. The thing that I think we do need to confront as a society is the speed at which this is going to happen. It seems like over, you know, two, maximum three, probably two generations, we can adapt,
Starting point is 00:16:27 society can adapt to almost any amount of job market change. But a lot of people like their jobs or they dislike change. And going to someone and saying, hey, the future will be better, I promise you, and society is going to win, but you're going to lose here. That doesn't work. That's not cool. So we're going to keep finding things to do. And
Starting point is 00:16:46 we'll probably think some of the things the people in the future do are very silly and not real work, in the same way that a hunter-gatherer probably wouldn't think this is real work either. You know, we're just trying to entertain ourselves with some silly status game. That's fine with me. That's how it goes. But we are going to have to really do something about this transition. It is not enough to just give people a universal basic income. People need to have agency, the ability to influence this. We need to sort of jointly be architects of the future. And one of the reasons that we feel so strongly about deploying this technology as we do, as you said, not everybody's in these discussions,
Starting point is 00:17:25 but more and more are every year. And by putting this out in people's hands and making this super widely available and getting billions of people to use ChatGPT, not only do people have the opportunity to think about what's coming and participate in that conversation, but people use the tool to push the future forward.
Starting point is 00:17:42 And that's really important to us. What is your biggest fear about the future? And what is your biggest hope with this technology? I think the future is going to be amazingly great. We wouldn't come work so hard on this if we didn't. I think this is one of the most significant inventions humanity has yet done. So I'm super excited to see it all play out. I think things can get so much better for people than they are right now.
Starting point is 00:18:15 And I feel very hopeful about that. We covered a lot of the fears. Again, we're clearly dealing with something very powerful that's going to impact all of us in ways we can't perfectly foresee yet. But what a time to be alive and get to witness this. I mean, the hope is definitely to push our civilization ahead by augmenting our collective intelligence. And the fears, we talked a lot about the fears, but we've got this opportunity right now, and you've got summers and winters in AI and so on.
Starting point is 00:18:54 But when we look back 10 years from now, I hope that we get this right. And I think there are many ways to mess it up. And we've seen that with many technologies, so I hope we get it right. You're not so fearful, but I was going to actually ask this, but I'll ask it now. Do you have a bunker?
Starting point is 00:19:15 I have, like, structures, but I wouldn't say, like, a bunker. Structures? None of this is going to help if AGI goes wrong. Really? It's a ridiculous question, to be honest. Okay, good, good, good. That's all for today, Friday, October 20th. The Journal is a co-production of Spotify and The Wall Street Journal. Enrique Perez de la Rosa, Sarah Platt, Alan Rodriguez Espinosa, Heather Rogers,
Starting point is 00:20:05 Jonathan Sanders, Pierce Singey, Jivika Verma, Lisa Wang, Catherine Whalen, and me, Kate Linebaugh. Our engineers are Griffin Tanner, Nathan Singapak,
Starting point is 00:20:16 and Peter Leonard. Our theme music is by So Wiley. Additional music this week from Catherine Anderson, Peter Leonard, Bobby Lord, Emma Munger, Nathan Singapak, Griffin Tanner, So Wiley, and Blue Dot Sessions. Fact-checking this week by Kate Gallagher and Sophie Hurwitz.
Starting point is 00:20:38 Thanks for listening. See you Monday.
