The Decibel - The Canadian scientist at the centre of the OpenAI drama

Episode Date: November 24, 2023

In the span of a week, OpenAI went from being Silicon Valley’s dominant artificial intelligence company, to teetering on the brink of collapse, to a total board overhaul. And at the centre of the drama were two men: Sam Altman, its CEO, and Ilya Sutskever, its Chief Scientist. Report on Business journalist Joe Castaldo explains who Ilya Sutskever is, what his role was in the past week’s chaotic chain of events, and why he is driven to build even smarter AI, despite the risks. Questions? Comments? Ideas? Email us at thedecibel@globeandmail.com

Transcript
Starting point is 00:00:00 By now, you've probably heard all about the drama that has been unfolding at the world's most well-known AI company, OpenAI. But here's a quick recap, just in case. In the last week, the CEO, a man named Sam Altman, was fired by the board for pretty vague reasons. Then the vast majority of employees of the company revolted and signed a letter threatening to quit if he wasn't reinstated. There was a late Sunday meeting between Altman and the board, possibly to see if there was a way to bring him back as CEO. But instead, Altman was hired by Microsoft, and OpenAI announced an interim CEO.
Starting point is 00:00:45 Since then, more staff threatened to quit, one board member involved in Altman's firing professed his regrets, and then, very late on Tuesday, Altman was reinstated as OpenAI's CEO. He left behind his role at Microsoft, the interim CEO at OpenAI stepped aside, and the board that fired Altman in the first place is now also mostly gone. It's a lot. At the center of this news hurricane is a Canadian AI scientist named Ilya Sutskever. Today, Report on Business journalist Joe Castaldo is on the show to tell us who Ilya Sutskever is, his role in the wild news events of the past week, and his hopes for the future of artificial intelligence.
Starting point is 00:01:33 I'm Menaka Raman-Wilms, and this is The Decibel from The Globe and Mail. Joe, thank you so much for being here today. Thanks for having me. So we are talking around noon on Thursday, and I think that's important to note because, you know, every day this past week there's been some new development in this saga. So, you know, things are changing fast here. But I think to start, let's just go through some of the main characters, so to speak. So Sam Altman and Ilya Sutskever. Let's begin with Ilya, who is probably lesser known to most people in general.
Starting point is 00:02:08 Who exactly is he? Yeah, so Ilya Sutskever is one of the co-founders of OpenAI and the chief scientist over there. So for the past eight years, he's been directing the research at OpenAI and has played a really big role in a number of AI breakthroughs. He's widely regarded as a brilliant thinker. Someone I spoke to described him as a bit of a prophet who has a mystical take on things. And AI has been his whole life, essentially. He's well known within the AI community, but not exactly a household name. Interesting. Yeah. And then let's talk about Sam Altman, because he is a little more of a household name, really. Yeah. Sam is sort of in the category of celebrity CEOs these days.
Starting point is 00:03:13 He's really made himself not just the face of OpenAI, but I would say this era of AI in general. He went on a bit of a world tour, like visiting multiple cities to meet with developers and speak about AI at a time when governments are really motivated to regulate it. He came to Toronto back in May, and I had about six minutes to chat with him after a talk he gave. And just a little anecdote to sort of show his popularity. You know, we were chatting sort of next to an elevator, and there was this hallway next to us. And at the other end of the hallway was a reception area where everyone went after the talk to mingle. And everybody was just staring at Sam. It was palpable that they were waiting to mob this guy. And then Sam, you know, ducked out.
Starting point is 00:04:00 And this woman came up to me and, like, she had this really plaintive look on her face. And she's like, do you know if Sam is coming back? And I had to say, no, I'm sorry. Sam has left the building. Wow. Okay. So yeah, when you say celebrity CEO, he's got fans. He does.
Starting point is 00:04:13 And a ton of supporters in Silicon Valley specifically. Okay. So we've got these two men who are really integral to this company that's been caught up in all kinds of corporate drama recently. Joe, I guess beyond the intrigue around all of this, why is this power struggle worth paying attention to? So OpenAI is probably the most important AI institution right now. It's been kind of setting the standard in terms of generative AI. You know, its flagship product, ChatGPT, which was released around this time last year, just kicked off this whole generative AI boom.
Starting point is 00:04:51 And so it's this really pivotal moment for AI. You know, a lot of people say this moment is like when the World Wide Web launched or when the iPhone launched. You know, AI is set to change how we do everything. So it's this crucial moment for AI, and OpenAI specifically. So let's come back to Ilya Sutskever. We know he played a role in Sam Altman's firing last week. What exactly was his role in all of that, though? Well, it seems like he played a key role, at least according to one account.
Starting point is 00:05:34 He texted Sam Altman on, I guess, November 16th and invited him to a meeting the next day. And Sam hops on this Google Meet call. And most of the board is there, including Ilya, because he's a board member. And he reads a statement to Sam, effectively telling him that he's fired as CEO. That same day, Ilya reaches out to Greg Brockman, who's another OpenAI co-founder and the chair of the board. And in a similar meeting, he's kicked off the board. And so the statement goes out and says, you know, Sam is no longer CEO because he hasn't been consistently candid with the board. And Greg quits the company entirely. And chaos ensues. Wow. Yeah. I guess I wonder about Ilya's role here. Do we have any sense of why Ilya would have been the person to deliver this message?
Starting point is 00:06:28 Greg is the chair of the board, and it's ordinarily the chair that might do these sorts of things. But Greg was being pushed out at the same time, so somebody else had to step in. And Ilya is the only board member who officially works at OpenAI. And he's known Sam for a very long time. All right. So now that Sam Altman is back, do we know what Ilya thinks of all of that? He seems to be quite thrilled that Sam is back.
Starting point is 00:06:57 He tweeted, there's no sentence in any language that exists that can describe just how happy he is. So he's really enthusiastic. But let's remember, he fired the guy a few days before, right? So this is kind of weird. He delivered the message that Sam was fired, and then he reversed course completely. So after the weekend, after Sam was let go, on the Monday, this open letter comes out from OpenAI employees demanding that Sam be reinstated as CEO and the board resign. And it was signed by Ilya, who himself is a board member. So he's effectively demanding his own resignation.
Starting point is 00:07:42 Wow. Or he'll quit, right? Like almost all 800 of them were ready to leave if the board didn't resign and if Sam didn't return. And that same day, Ilya said, again on X, that he deeply regrets his participation in the board's actions. So whatever Sam did to get fired apparently was not worth imploding OpenAI for Ilya. So just to be clear here, now Sam Altman is back as CEO.
Starting point is 00:08:16 Did Ilya quit after that happened? Ilya is no longer on the board. So there is an entirely new board that is quite corporate. It's all men at the moment. You know, it's been described as the initial board, which indicates they're going to bring on some more people. This is all hard to square, though, where Ilya's kind of, you know, been on one side of things and then switched to the other, I guess. How do we make sense of his reversals here? We don't know. I mean, there's been some reporting in the U.S. that, you know, personal appeals were made to Ilya to reconsider. People I spoke to who know him say, you know, he has opinions, he can be strong-willed, but he ultimately does care about people. He's a principled individual who wants to do the right thing as he
Starting point is 00:09:11 sees it. And, you know, it looks like nobody on the original board anticipated all of the fallout. But we don't really know, because he hasn't said much. We'll be back in a minute. So we've sorted out kind of the main events of last week, Joe. I think we should maybe get a deeper picture now of Ilya Sutskever. So let's go back a little bit into his past and his history. How did he actually get involved in AI research? So he has said that he's been interested in AI essentially since he was a kid. So he was born in Russia, grew up in Israel, and his family moved to Canada when he was 16. And he said that as a kid, you know, he could remember thinking about this notion that, like, he is a conscious being with his own thoughts and feelings and experience. And, you know, other people are their own beings with their own consciousness.
Starting point is 00:10:13 And this idea really intrigued him, and he wanted to understand it. When he came to Canada, he was able to enroll at the University of Toronto when he was effectively in the equivalent of grade 11, owing to his education in Israel. So he starts in mathematics, but he's really interested in AI. And I think when he's still an undergrad, around 2002, 2003, he seeks out Geoffrey Hinton. He's, of course, known as one of the godfathers of AI, right? And he was also at U of T at that time. Hinton is an icon in AI research today. He wasn't necessarily back then. He had been sort of toiling away for decades, working on what are called neural networks, an approach to AI
Starting point is 00:11:07 that everybody else thought was a dead end. And Ilya himself has said, kind of with the flair of a Russian novelist, that when he started in AI, it was a field of desolation and despair, right? Nobody was making any progress. And his goal was to make just one meaningful contribution to advance this research forward. He ended up making, you know, more than one. And, you know, in particular, he worked very closely with Geoffrey Hinton for years. And Hinton was his Ph.D. advisor. And they did go into business together quite briefly. You mentioned neural networks. Can you just remind us, what does that mean? It's not a perfect sort of comparison, but a lot of people compare it to the human brain. Like, Hinton was a cognitive psychologist interested in how the brain works. And so that analogy made sense to him. So similar to how the brain has billions of
Starting point is 00:12:07 neurons that trade signals back and forth, an artificial neural network has a lot of digital neurons that trade signals back and forth. And his thinking was, if you could feed some data into an artificial neural network, it can learn. It can learn to recognize patterns. So this is kind of like a type of machine learning, right? That's kind of like a computer brain in a way? Yes, that's a very easy way to think about it. A lot of people will quibble with that, but I think for lay people it makes a kind of sense.
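For readers who want to see the idea rather than just hear the analogy, here is a minimal sketch of what Joe describes: a tiny artificial neural network whose digital neurons pass signals forward, and pass error signals backward, until it picks up a simple pattern. This is an illustration only, not anything from OpenAI's systems; it assumes just Python and NumPy, and the task (XOR), layer sizes, and learning rate are arbitrary choices for the demo.

```python
# A toy neural network in plain NumPy: layers of "digital neurons" pass
# signals forward, and error signals flow backward to adjust the connections.
# The XOR task, layer sizes, and learning rate are arbitrary demo choices.
import numpy as np

rng = np.random.default_rng(0)

# Four input patterns and the XOR labels the network should learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Connection strengths (weights) and biases between the layers of neurons.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass: signals flow from the inputs through the hidden neurons.
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)

    # Backward pass: the error signal flows back and nudges every connection.
    grad_out = (out - y) * out * (1 - out)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_hidden
    b1 -= 0.5 * grad_hidden.sum(axis=0)

# After enough nudging, the outputs should sit near the XOR pattern 0, 1, 1, 0.
print(out.round(2))
```

Scaled up from a few dozen connections to billions, that same loop of passing signals forward and nudging weights backward is, broadly speaking, the approach Hinton and his students kept betting on.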
Starting point is 00:12:39 But everybody thought this was a dead end. Hinton did not. Ilya did not. And eventually they were vindicated in their approach. Interesting. So that actually gives us a good idea of how Ilya started to get into this space. How did he eventually get involved with OpenAI? So he was at Google in 2015, along with Geoffrey Hinton, and he had a reputation as being one of the top minds in AI research. And he gets an email from Sam Altman, who at that time was a bit of a Silicon Valley entrepreneur running an incubator,
Starting point is 00:13:23 investing in startups. And Sam invites him out for a dinner. And Elon Musk is there. Greg Brockman is there. Reid Hoffman, founder of LinkedIn, is there. And they start talking about AI. Wow, this is a powerful table. A very powerful table, yes.
Starting point is 00:13:40 These are heavy hitters in the Valley. And it's from that conversation that they went on to found OpenAI in December of 2015. And it's important to note that they didn't start a company. They started a nonprofit. Their goal was to pursue AI for the benefit of all humanity, free from sort of commercial and financial pressures. And that nonprofit ethos has eroded over the years, you could say. But back then, everybody had rose-colored glasses and the best of intentions. And specifically, they wanted to work on artificial general intelligence or AGI, which is about AI systems that are smarter than us.
Starting point is 00:14:33 Okay, so let's talk a little bit more about this because this kind of sounds like where things are headed. So we have ChatGPT now, which has been in the public consciousness for, I guess, about a year. What do we know about what Ilya thinks AI's future is? So Ilya has been obsessed, someone told me, with AGI for a very long time. It's all-consuming. I spoke to somebody who knows Ilya, and he told this anecdote about how they hadn't seen each other in a couple of years,
Starting point is 00:15:06 and they go out to dinner, and the first thing Ilya asks is like, so how long do you think before AGI gets here? And there's this anecdote that has emerged recently that at an OpenAI event last year, he leads employees in this chant, feel the AGI. What exactly is AGI? Well, that's the thing. It's going to mean different things to different people, but it's just AI systems that are as smart as us or even smarter than us at a very broad level. But how you measure that, how you define intelligence, it gets really messy. To him,
Starting point is 00:15:46 this is the goal. The goal is to build something that is smarter than us, because maybe it can teach us something about ourselves, or it can improve our lives in ways that we can't really foresee today. And do we know how close we might be to such a thing, to AGI? Again, everybody will have their own timelines. Geoffrey Hinton previously thought that it would take, you know, 30 to 100 years to get there. But owing to the recent advancements, including ChatGPT, he's revised that to like five to 20 years. Wow. And that's reason for concern. Okay. So if we were to create AGI, how would we contain it? An artificial intelligence where part of the definition is that it's smarter than us.
Starting point is 00:16:37 So there's this concept in AI research called alignment, which just means, very broadly, ensuring that AI systems do what they're supposed to do. OpenAI has come out with a concept that they call superalignment, which is, I guess, alignment on steroids, right? It's about keeping AGI systems in check. So the more powerful an AI system you have, the more damage it can do if it's misaligned or if it goes rogue, that kind of thing. That has been Ilya's focus the past few months. Because to be clear, Ilya and some other people at OpenAI believe that a superintelligent AI threatens humanity, right? Let's get into the actual scale of this, though, Joe. When we're talking about these risks that pose a threat to humanity, what are we saying here? Are we talking about a worry that we could
Starting point is 00:17:37 actually be wiped out as a species? For some people, like Hinton and Yoshua Bengio, who's another very prominent Canadian AI researcher, they do mean that literally. They view it as a remote possibility, but even if there's, like, you know, a 0.0001% chance, it's worth considering. The AI community likes open letters. So earlier this year, there was an open letter that consisted of a single sentence about how AI could pose an existential risk to humanity, and we need to deal with that the same way that we deal with climate change. I spoke to some people earlier this year who signed the letter, and there was a bit of backtracking.
Starting point is 00:18:30 Like, oh, extinction's a pretty strong word. Or, I agree with the sentiment of it. There was a lot of fear about the pace of AI development back then, and some people really wanted to help motivate governments to take it seriously. So they were looking at it from that perspective. Other people foresee bad outcomes if AI isn't developed properly, such as mass job loss, or a bad actor using super powerful AI to develop bioweapons, or do sophisticated hacks into critical infrastructure, or spread misinformation and destabilize democracy. All those kinds of... I mean, all of those sound pretty bad too, right? Yeah, sort of dystopic scenarios that people foresee. So, you know, they fall short of extinction, but they're not great.
Starting point is 00:19:20 Yeah, we're still not in a great place if any of those do happen, right? So, I mean, it sounds like there are a lot of significant concerns here, Joe. I guess I wonder, why risk it? Why keep building this at all? Like, you know, if we stop development, somebody else isn't going to. And there is a sentiment that it could be really beneficial to society, and the costs of not pursuing it sort of outweigh the risks that come with it. So it is worth doing. So one of the examples that Ilya was talking about recently, he gave a TED Talk just a month ago about AGI. He seemed really enthusiastic about healthcare applications, and he was talking about an AGI doctor that's trained on all the medical literature in the world and has tens of thousands of hours of lab experience, or the equivalent of that. You know, someday we're going to look back at health care as we have it today, and it'll be the equivalent of, like, dentistry in the 16th century. I guess to end here, Joe, we should just maybe circle back to what we started talking about, which is all the drama from the last week.
Starting point is 00:20:44 I guess, what should we take away from all of that, Joe? Overall, it's not a good look for OpenAI, or even sort of corporate development of AI in general. Like, if AI is as dangerous as a lot of people believe it to be, and there are real documented risks, for sure, then it needs to be developed responsibly, transparently, equitably, so that it benefits us all. And if the people developing it can't get their act together in terms of just running the organization, that's kind of disconcerting.
Starting point is 00:21:26 So it raises a lot of questions about, okay, well, how do we regulate this? How do we govern this? How concerning is it that AI is being developed by really just a handful of private corporations? Is that level of concentration a good thing? How do we address that? So I think like all great corporate dramas, this comes back to proper regulation and corporate governance, which is dull but true.
Starting point is 00:21:58 Joe, thank you so much for taking the time to be here today. Thank you. That's it for today. I'm Menaka Raman-Wilms. Our producers are Madeline White, Cheryl Sutherland, and Rachel Levy-McLaughlin.
Starting point is 00:22:15 David Crosby edits the show. Adrian Chung is our senior producer, and Angela Pacienza is our executive editor. Thanks so much for listening, and I'll talk to you next week.
