Behind The Tech with Kevin Scott - Sam Altman: Entrepreneurial prodigy, Y Combinator President and OpenAI CEO

Episode Date: August 22, 2019

Sam started his first company at 19 – and has launched many more since then. From Y Combinator to OpenAI, his insights and determination spark inspiration. Hear Sam’s ideas about career motivation..., and why he thinks the human brain may be replicable in silicon. 

Transcript
Starting point is 00:00:00 You know, the two strategies to succeed in life are you either go super deep in one field of knowledge or you go extremely broad. And I've always been go extremely broad and find the connections and sort of be good at the intersection. Hi, everyone. Welcome to Behind the Tech. I'm your host, Kevin Scott, Chief Technology Officer for Microsoft. In this podcast, we're going to get behind the tech. We'll talk with some of the people who made our modern tech world possible and understand what motivated them to create what they did. So join me to maybe learn a little bit about the history of computing and get a few behind-the-scenes insights into what's happening today. Stick around. Hello, and welcome to Behind the Tech.
Starting point is 00:00:53 I'm Christina Warren, Senior Cloud Advocate at Microsoft. And I'm Kevin Scott. Today, our guest is Sam Altman. Sam is an entrepreneur, an investor. Sam was, for a while, the president of Y Combinator, which is the most successful startup incubator, I think, without argument in the entire world. And recently, Sam became the chief executive officer of an organization called OpenAI that is seeking to build general artificial intelligence inside of a nonprofit structure so that the value that AGI creates accrues to the public good. Yeah, that's right. And this is one of the rare times where we have a guest on that I actually know, that I actually have known before he was Sam Altman. Yeah, so fantastic. Tell us a little bit about that.
Starting point is 00:01:34 I didn't know that. Yeah. So when Sam was doing Loops, his first startup, I was a reporter at Mashable and I used to cover startups all the time. And Loops was actually one of my favorites, not so much because I thought that what they were doing was the most revolutionary thing in the world, but because Sam was so incredibly smart. He was always three or four steps ahead of what the whole industry was doing. And although that ended up not necessarily working out in Loops' favor, I actually remember I sent him an email when Loops made its exit that said,
Starting point is 00:02:08 you know, you might not love what's happening now, but I have no doubt that whatever you go on to do in the future, you're going to be amazing. And that's, I'm going to be honest, that's probably the only time I've ever sent an email like that. I mean, and it's really true. Sam is a super, super impressive guy. Like, not just in the sense that he's, like, really bright,
Starting point is 00:02:32 but that he's very determined to, like, make big things happen. And in a whole bunch of ways, like, Loop was an interesting company in that it was, like, sort of ahead of its time. It was. It was doing the location-based thing before the Foursquares and the Goalas and Facebook, you know, were a thing. And a lot of the stuff that he was imagining, like, now has become just sort of a standard feature set that any modern mobile application is more or less built on top of.
Starting point is 00:03:01 So, like, he predicted he predicted like this whole big thing that was happening. It was just like timing was like less than great. Well, that was the whole thing. Every time I would get on the phone with him or we'd meet in person and I would talk, I would just walk away and think, this is the most impressive founder I've ever met.
Starting point is 00:03:18 And so although he's been successful beyond what I ever could have expected, I also have to say I'm not in the slightest surprised. Yeah, no, some of the stuff that he has been doing with entrepreneurship and, like, trying to help, like, really smart, motivated entrepreneurs, like, find their way to having impact, like, has been amazing. And the stuff that he's doing right now with OpenAI, again, like, getting a bunch of, like, very, very bright individuals sort of rallied around this, like, very interesting cause, like, also super impressive. Well, I can't wait to hear what he's up to and to hear your conversation. Yeah, I'm excited to chat with Sam.
Starting point is 00:04:01 So let's do it. So next up, we'll meet with Sam Altman. Sam is an entrepreneurial prodigy. I believe he started his first company when he was 19 years old. And that was where you and I first met. Since then, you have gone on to become an enormously successful investor, president of Y Combinator through one of its most interesting runs in its history. And most recently, you become CEO of OpenAI, which, you know, like obviously we just did a partnership with you all, but that partnership notwithstanding, OpenAI is unquestionably doing some of the most interesting things in contemporary artificial intelligence. So, welcome to the show. Thanks very much.
Starting point is 00:04:56 So, I think we met the first time when you were at Loop. Yeah. I was head of engineering at another mobile startup at the same time. And, you know, and that was sort of an interesting, like, crazy time. Like, one of those things where you sort of, I guess, both of us in, like, in our own way were, like, experiencing the explosion of a brand new platform and ecosystem. And so, you did that for eight years, and then you took this year off, and then... And I took over YC.
Starting point is 00:05:31 Yeah. So you went to Stanford. Yeah. And what did you choose to major in there? Computer science. I actually took mostly non-computer science classes, which at the time sort of felt indulgent. And looking backwards, that was all the super valuable stuff.
Starting point is 00:05:52 So the time that I spent like taking writing classes or studying history or particularly studying science, like hard science, didn't have a big impact sort of for the next 10 years. But after that, those were all the most valuable classes. I was going to learn to program no matter what, and I was going to be good at it no matter what. And you were, how good a programmer were you by the time you got to Stanford? Like, were the programming assignments were easy, hard? They were easy.
Starting point is 00:06:17 The first freshman year was easy, and then it got hard. Okay. You know, I think as you're thinking about how we educate our kids, like, that's a great luxury to, like, have, by the time you get to college, you've already got a reasonably good skill, and then you can sort of do this exploration. Like, that's an incredibly beneficial thing. Yeah. Like, I'm sure you do, too. Like, I think the education system in general is just not nearly ambitious enough. But I think, like, I was incredibly lucky
Starting point is 00:06:45 to go to an amazing high school. And I learned a lot of the sort of basic skills and knowledge. Certainly, I had learned how to learn. And so by the time I got to college, I could just pursue stuff. I didn't have to, like, think really hard about just making sure I got everything done for my major. Yep.
Starting point is 00:07:03 I'd already done a lot of that. Yep. So how did done a lot of that. Yep. So how did you get started in tech? There was this period of time that I was sort of born smack in the middle of kids who hit the computer revolution exactly right. Yeah. Where the computer started easy enough to work with, where we could figure it out on our own, and then they kind of got powerful at the right time for us.
Starting point is 00:07:26 But I was born in a very lucky time from that. And a lot of people who have gone on to start important companies or be technology investors were born in this relatively short window, it seems like. I mean, I say this a lot. I feel the same thing. So I got lucky to be an 11-year-old right when the personal computing boom was taking off, right when the personal computers started showing up, hooked up to little 13-inch TVs and department stores. That was when I was developing as a little human being. A very interesting question is what are the sort of 7 to 12-year-olds now, like what is that technological revolution going to be that they're going to grow up with? And it's super, like I've got a, like I have a 9-year-old and 11-year-old right now. So do you have a guess?
Starting point is 00:08:16 Do you have a guess then what it'll be? I don't know. It's really hard to say. And I don't know whether I would have had a guess back then when I was right in the middle of it. I certainly would not have. I think it's sort of hard to say. And I don't know whether I would have had a guess back then when I was right in the middle of it. I certainly would not have. I think it's sort of hard to tell. I know that their expectations are fundamentally different than mine were. So, like, they expect a world where you can talk to computers and where you touch them and they don't understand people programming content for you that you sort of have to consume based on their,
Starting point is 00:08:46 you know, sort of abstract understanding of your preferences. Like, they just sort of watch what they want and read what they want whenever they want. I mean, it's very, very different than we were when we were little kids. For sure. But, like, I don't know what the technology thing is that's going to captivate their interest. One of the things that was magic about computers is you could go very far in terms of what you can do with them, but you could start easily as a kid. Like, maybe synthetic bio is going to be the thing,
Starting point is 00:09:13 but, like, we're not going to have, like, seven-year-olds playing in the lab making new organisms. I don't think. Maybe we will. Yeah, I don't know. Computers were just easy to start with. Yes. Synthetic bio, like, you would sort of hope that that would be a thing because, like, the benefit to humanity if you could have a whole generation who were as, you know,
Starting point is 00:09:34 sort of enthused by that as we were with computers, like, I think that would be beneficial. And, you know, maybe if you can get a bunch of that stuff in a simulation environment where the cost of doing an experiment wasn't so high. Yeah, something like that. I think it has to be something. I mean, like your point about the ease of use. It's always something. It always starts looking kind of like a toy. And then it just keeps going.
Starting point is 00:09:59 Yeah. So, I don't know what it is right now, which is, I think, a curious thing. As you said, we didn't know what it was when it was the computers in the first place. Yeah. Well, what I'm – yeah, I certainly didn't. And I'm confident what is going to happen is that they're going to be the ones who figure it out. Yeah, for sure. Yeah. And so, you took all of this amazing, I mean, it almost sounds like a liberal education in a way.
Starting point is 00:10:30 I think the two strategies to succeed in life are you either go super deep in one field of knowledge or you go extremely broad. And I've always been go extremely broad and find the connections and sort of be good at the intersection. So, what was the most interesting non-computer science thing you took when you were at Stanford? Like the most intellectually satisfying thing ever is like physics. The thing that's surprisingly most relevant one was creative writing. Yep. And so what- There's like nothing that's more fun than a great physics class, right?
Starting point is 00:11:03 That's just the most intellectually stimulating but what was your favorite physics class uh well I'll answer my favorite physics book but it's related to my favorite physics class
Starting point is 00:11:18 um quantum electrodynamics I think is the best science book ever written and this is Murray Gellman's book? Richard Feynman. Okay. And it's like a series of four lectures. But everyone always wants to focus on the parts of physics we don't perfectly understand.
Starting point is 00:11:37 And then there's a few areas that are incredibly beautiful, and we don't understand what's happening in an easily moddable level, but the math we understand perfectly. Right. And that was this example. And there was a class I took that was basically teaching this of, like, wow. Like, there's this, like, big piece of reality that we actually just perfectly understand. Or we understand well enough to work with and model.
Starting point is 00:11:59 And, you know, it's amazing. Yeah, and so quantum electrodynamics, just for the audience, who is also pretty broad. So this is the study of the very, very small-scale interactions. Basically everything but gravity. Yeah. But all of the other forces between particles. Yeah. It is fascinating stuff.
Starting point is 00:12:19 I highly recommend the book. It's a short read. There's no math in it. It's really fun. So why not become a physicist? Well, physics has been a bad field to go into as a career for a long time now. And I remember there was this thing where all of the kids that were studying physics at Stanford ended up going to work in finance. Which I almost briefly got tempted into doing, too.
Starting point is 00:12:44 Actually accepted an offer to be an intern. And then I realized I really didn't want to do that. But there was clearly something wrong with physics as a career path at the time I was there. Maybe it's better now. Just in the sense that it was going to be hard to get a job. All the really smart physics kids weren't going to do physics after they graduated. Gotcha.
Starting point is 00:12:59 And all the smart computer science kids were going to do some sort of programming. And so it was like I think think maybe physics just like got too hard or the problems got too trivial or something. But it was very hard to see what I was going to do. I still studied out of interest, but it was like I could sort of sense at the time it was not the right career trajectory. And so let's talk about this creative writing thing. Like in what ways is that useful to you now? Because I actually agree with your assertion
Starting point is 00:13:29 that it's fabulously useful. Well, when I was at YC, certainly the highest leverage on time thing I could ever do was write startup advice. Yep. Like the not secret secret to YC is that we started because PG is incredible at writing essays and was able to sort
Starting point is 00:13:46 of create a brand and a community and a nexus just from his essays. No one else will be as good at writing as PG, but I was over the bar where I was able to continue that. Like I was well aware it was worse, but it was good enough to like keep funnel going. And you can write something in a couple of hours and get hundreds of thousands of people to read it, and many of them come apply to YC or later do or come work at a YC company. And so that was like one of the important jobs, I think, of the person running YC is to be able to write reasonably well about startups. And was it important as CEO of Looped? No, not at all.
Starting point is 00:14:32 Not at all. Yeah. So again, there were all these like things that I studied at college that didn't, that I like went heads down on one project for like eight years and seven years, whatever it was. And they kind of were just sat in the back, but then later came to be super valuable. So one of the things that I did after, I ran it for like, yeah, seven or eight years. And then one of the best decisions I made, and a thing that I think a lot of people could do and don't, is I took an entire year off.
Starting point is 00:14:58 And like most people who are, it's an incredible luxury, but most people who have worked as an engineer in the tech industry for a number of years and don't have, you know, are young enough or free enough, they don't have familial obligations, they could save up and do this. Like, my cost of living, because I was, like, living in hostels in cheap parts of the world,
Starting point is 00:15:20 was, like, a tiny fraction of the rent of a San Francisco apartment. And I, like, just studied stuff I was interested in. I read like stacks and stacks of textbooks. I talked to people that were working on problems that I was interested in. And it was the time I came back to AI finally because that happened to be the year AI started to work. Deep learning started to work. I got really into nuclear energy and ended up becoming the chairman of two nuclear energy companies that year
Starting point is 00:15:43 that are now doing super well. And a whole bunch of other areas that I pursued that then became important investment areas for YC. But they started because I had some enough background knowledge from college to be conversant and stuff like biology. And then they kind of bloomed because I had this year to just, really follow things I was interested in without the press of a job. And let's talk a little bit about this, like, talking to people who are working on the things that you're interested in. Because Nathan Myhrvold, who was Microsoft's first CTO. I love that dude. Yeah.
Starting point is 00:16:19 He is a super interesting cat. Like, he was, you know, Stephen Hawking's postdoc, I think, for a while. Like, also, like, you know, sort of his training was in physics and math. He's, like, a very good archaeologist. He's, you know, like, he wrote these modernist cuisine books about cooking. Like, he was on a world champion barbecue. I mean, so he's, like, he's also very, very broad. And if you talk to him, like one of the things that he says that is sort of a superpower is like, you can sort of read the books and then, but like being able to go talk
Starting point is 00:16:56 to the people who are doing the work and having enough of a foundation where you can engage in a conversation with them is like incredibly powerful. Yeah, that has certainly been the thing that has worked for me. I do learn pretty well by reading, but I learn much better by like talking to the experts. And one of the like big secrets of life, this is not a small one, is if you're kind of like around the edges,
Starting point is 00:17:18 if like the interesting work is usually happening in sort of like at the edges of where everyone's paying attention. And so those people like are not usually the ones who are so busy they won't respond to your email. So I have almost always found, even at the time when I was completely unknown to anyone, that if you just like send a thoughtful email that shows you're serious and have done some work to somebody working at the edge of some field on something interesting, they will probably meet you. Yep.
Starting point is 00:17:44 And I was just like, I had no obligation. So I would just like, if someone said, sure, I can talk to you tomorrow, and they were in London, I would say, okay, but I'll come meet you in person. I was just like, go to the airport. And that was great. And one of the things I've noticed that stops people from doing that, beyond it being like actual hard work to write the thoughtful email it's just getting up the courage to do it and like being worried about your own ego because
Starting point is 00:18:10 you are thrusting yourself into a situation where you're not going to be nearly as expert at this thing as someone else and like you could be embarrassed by some gap in your knowledge which is uh everyone gets over that in their own way. This is one of the advantages of starting a startup at a young age is you really get good at dealing with a lot of rejection. Yeah, sort of a humility engine. It really is just beaten into you quickly. So I'm thankful for that. But I got that dispatched quickly. The biggest problem with being afraid of having your ego bruised, by the way,
Starting point is 00:18:51 is that it makes you, it makes it hard for other people to give you feedback. This is a thing that I was horrific at, at the beginning of my career, is like if anyone told me I was doing anything other than a great job, I would just completely shut down. And because I took it as like an ego bruise. And the most important professional skill that I learned the hard way, and I wish I had learned it earlier, was being willing to take very hard feedback and not just shut down when hearing it. And that's relevant to a whole bunch of other stuff, like being willing to email someone and say, I'm not going to meet you. But that's even a harder version of it.
Starting point is 00:19:23 And once you can get over that, none of the rest of the rejection stings that bad. Yeah. So it's super interesting. You're at Stanford for, what, two years? Two years. And then you, like, how do you decide to go start a company? I had been hacking on this program as, like, a side project. I was sort of very into mobile phones.
Starting point is 00:19:44 And this is before iPhone, right? Way before. Yeah. Yeah. Three years before. I had a Palm Trio at the time to date it. Yep. Which was a good device.
Starting point is 00:19:56 Yeah. And I was really cool for having one. Let me tell you, because most people had flip phones. I had a Trio 650 on Sprint. It was like the, it was like a big deal. Yeah. I had one too. It was a good phone. Yeah. I had one too. It was a good phone.
Starting point is 00:20:06 Yeah. I wasn't cool, but like I had one. You know, I probably wasn't cool either, but I felt in my heart, I felt cool. Anyway,
Starting point is 00:20:14 so I started like hacking on this with some friends. I had accepted this offer to go be an intern at a bank in New York and I was, I knew it was a mistake
Starting point is 00:20:20 as soon as I had said yes. And then I had like followed Paul Graham online for some time. And he announced this thing called the Summer Founders Program, which is what became YC. It was going to be a program of YC, and then it became all of YC. And this is when it was in Boston, right? In Boston, yeah. And so I sort of applied with like an hour to go.
Starting point is 00:20:40 And, you know, still plan to go back to school at the end of the summer. But I was just like, well, work on this. They're going to give us $12,000. And we got in and did it, and then it just kept going. And so, what were some of the interesting things you learned through that experience? Which is sort of a broad question because you were super early. Yeah. So, I always think it's like,
Starting point is 00:21:08 it didn't, that company did not go nearly as well as we were hoping. It went fine, and I'm grateful for it and thankful it gave me enough sort of money to do everything else, but it was not the outcome we were hoping for. And I always think it's tempting to learn too much from failure, and it's better to learn from success.
Starting point is 00:21:24 But one of the things that we success. But one of the things that we did learn, one of the things that was fun at the time, is we, like the collective tech industry, had not quite figured out startups. Like, at this point, they're kind of well understood. And like, there's a playbook, and you can follow it. And you can like pick an enterprise vertical and like build some software and build a sales team and like wash, rinse, repeat. And at the time, none of this stuff was canon. And so the thing that was fun was
Starting point is 00:21:51 all of us together, a lot of the people that I went through YC with or went through shortly after, super close friends, people that I work with, invest with, still all the time. And all of us figuring out together how to make startups work at scale, figure out how to mass produce startups, that felt like a frontier.
Starting point is 00:22:10 And now it's really fun. So this is sort of interesting. I totally agree with you. And you obviously know better than I do. But it seems that the playbook for doing startups is a lot clearer now than it was 10 or 15 years ago. But like, there's, strikes me that there's still some flavors of startups that we don't know how to do
Starting point is 00:22:30 or maybe we've forgotten how to do. We don't know how to do hard tech startups that well. This was one of my areas of passion at YC, thing that directly led to OpenAI. That is still a frontier and that had always been pulling me. Yeah. And what do you mean by hard tech?
Starting point is 00:22:47 Like rockets, nuclear fusion, AGI, stuff like that. Yeah. And so, like, there are— Stuff where the risk is science risk more than it is market or engineering risk. And capital intensive. And capital intensive. By the way, I think there is a magic moment for it, because although this stuff is very capital intensive. And capital intensive. By the way, I think there is a magic moment for it because although this stuff is very capital intensive, so much money has fled into venture in the last 10 years. And I think software startups were unusual in the returns that they offered, but so much money was like, wow, these are incredible returns. There's now this huge overhang of capital,
Starting point is 00:23:28 desperate to find good opportunities and willing to accept lower returns, which, to be perfectly honest, the hard tech startups sometimes are at least higher risk. And so there is this magic moment that I don't know how long it's going to last. But right now, not only do I think it's possible to start a hard tech startup, I think it's actually easier to start a hard startup than it is to start an easy startup. I think it's actually easier to start a hard startup than it is to start an easy startup. People are quite tired of enterprise software startup number 1,422. And if you're doing something that sort of makes people's eyes glass over, it is hard to hire, it is hard to get the press to care, it is hard to do anything except get capital. It's hard to concentrate talent.
Starting point is 00:24:07 And if it's a startup that like really matters that people like want to help on organically, there's this incredible tailwind for those companies. Yeah. Certainly we feel that OpenAI. Yeah. So why don't we see more of these things? I mean, because I think you're right.
Starting point is 00:24:21 There's OpenAI, there's SpaceX, there's Tesla, there's Tesla. Why we don't see more, I'm very interested in this question because I spent a lot of time trying to convince people to start these companies. There is a feeling, like my general belief about the hierarchy that people go through in terms of career motivations is it starts with money. Then it starts with like power in the weak sense, which is like I want to manage people and be able to control them and do whatever. And then it goes to status, like I don't care what people think about me.
Starting point is 00:24:55 And then it goes to impact, like I want to do this company, it's really going to matter. And then finally, either people end up like really going after self-actualization, which is like the last level of infinite Tetris and you can just get better and better, or they end up like really going after self-actualization, which is like the last level of infinite Tetris and you can just get better and better. Yeah.
Starting point is 00:25:06 Or they end up chasing enlightenment and meditate and do whatever else all the time. But that's like the trajectory. And I've studied this hard and looked at a lot of people. Yes, it sounds like sort of like Maslow's hierarchy for entrepreneurs. And the issue is most people want to get those first few levels checked off as fast as they can. And it's very hard to, like, play at a different level when you're truly internally stuck at this lower one. And so a lot of people are like, well, I really do want to go start this company that's going to, like, take on climate change.
Starting point is 00:25:38 But first, I just, like, want to not think about money anymore. So I'm going to, like, do this enterprise software company. And I am sympathetic to it. I understand the drive there. But I think that's why people don't do more. The problem and why it usually doesn't work is if you're starting a company you don't actually care about, where you're just trying to like make a three-year exit so you can go start your climate change company, you never quite make it work. And probably those people, if they were willing to make a 10-year commitment to the enterprise software company would do fine.
Starting point is 00:26:02 And probably if they just jumped into the climate company, it might work too. But this whole, like, this thing is a detour so that I can go solve the problem that I really want to solve, that's hard to make work. So, like, I'm really, really interested in these transitions. Like, how did you decide to stop doing looped to the extent that that was your decision? I had sort of run out of ideas about how to make it work. Just sort of like it was clear and getting clearer that it was not going to be the company that I had hoped.
Starting point is 00:26:32 And what did Loop do? We made location-based software for cell phones. And yeah, I think I had just, and my co-founders too, had just run out of ideas. And it was like, all right, this is like, you know, we could let this drag on for a while or we could say, you know what, there's like enough of a win here.
Starting point is 00:26:47 Let's call it and move on. Yep. And then in terms of- Was that hard? Of course, of course. But there was like a big sense of relief when it was done. In terms of like going to YC, so one of the things that I had done
Starting point is 00:26:59 in kind of the year off was I raised a small venture fund. This was before everybody was doing it. This was also like, this felt like a frontier too, which was cool. And I thought I was going to like it. I was going to love investing. And I did not at all.
Starting point is 00:27:16 It was like, it felt deeply unfulfilling. In what way? I'm really curious about this. Like basically the story of being a seed investor, even more now but then too, was trying to find really good founders that didn't need your help and convince them to take your money and not someone else's and at a lower price. And I was like, I cannot delude myself into thinking I'm creating value here. These are companies that would exist, the good ones anyway, whether or not I invested. Like, sure, maybe I can help them a little bit, but like the best founders don't need that much help. And again, these are companies by the time I'm seeing them that are already going to be fine.
Starting point is 00:27:53 And I'm just like trying to squeeze some capital in. It was not my thing. And so when PG first talked to me about running YC, I was like, no, I tried that. I don't want to do that. And it was over a process where I was sort of like studying a bunch of other things that I was like, well, YC is this sort of singular force in the ecosystem. And even if it's the current version, it's not quite what I want to do because I've had this experience. Like, I could take it in any direction. And like, we could make YC the platform for hard tech and funding research and doing later stage investment and like making a growth program. Because one of the things I had learned is that YC was really good at teaching how to start a company, but there was nothing that was teaching you how to scale a company. And so I was like, okay, like I didn't like running a seed fund, but like running YC,
Starting point is 00:28:39 that's more like running a company, which I do like. And certainly my job at YC felt much more like being a CEO than an investor. So that was cool. And do you know why? I mean, PG at the point, like it was smaller, Y Combinator was certainly smaller than it is now, but it was also like, it was already like the most successful startup incubator that had ever existed. Like, he could have done anything, literally. But, like, why do you think you were the one that he chose? Honestly, I think the biggest thing was, like, he wanted to retire. Yep. He could have done anything, but he didn't.
Starting point is 00:29:18 He had done it for a while. He had done it for, yeah, like the same period of that point. It was like nine years because it was my year off too. And PG and I and Jessica too were – been super close for a long time. And he kind of had a good mental model of me. And we thought about – he cared about YC being what he wanted to be. And he sort of knew that we thought the same. So, a lot of it was like he knew you had sort of the basics down in terms of like how to operate it but it was like it was sort of a culture thing yeah it was mostly that i mean yeah like so he had this clear
Starting point is 00:29:52 idea in his head of like this is the thing i want yc to be doing we have been talking about for nine years that you've been talking about for nine years and like this is the way that i want to i mean because it always struck me that he had this very deeply personal reason for – and I've never – like, I don't know him at all. So, this is just me sort of reading tea leaves. But it struck me that he had such an interesting experience on his own starting his company that, like, he wanted to create this thing where he treated founders in a fundamentally different way. He cares about that so deeply. And it's such a moral issue for him. And like watching
Starting point is 00:30:31 it always used to drive me crazy watching people like attack him on Twitter. It's like if you knew this guy and like how much he cared and like how morally driven he is on this point, you would never say these things. He truly, truly is. So yeah, that was like a deep thing for him. And he also just loves a great hack.
Starting point is 00:30:47 And this was like, this is a great hack. Like this is a way at scale to unlock a huge amount of talent and potential in the world. Yep. That's awesome. And so what was your favorite thing about running YC? It was super high leverage. Super high leverage. Like, you know, we could...
Starting point is 00:31:09 YC is a, at this point, extremely powerful force in the ecosystem. Probably the most powerful force in the startup ecosystem. And so the ability to change norms or sort of the Overton window of the kinds of companies you can invest in or help a lot of people at once
Starting point is 00:31:24 was cool. And the network is so powerful. Like the central learning of my career so far has been you should almost always scale things up more. And there are all these weird emergent properties of scale. Like this AI is something I talk about all the time. But you see this everywhere. And so I had this theory that was PG's theory.
Starting point is 00:31:44 That's why I say PG had a theory that I believed in, which is that if you scale YC because it is fundamentally a network effect business, not only is that required because advice and capital are both going to get commoditized, and the only thing left will be the network and the brand. So you have to scale it for that reason. But if you scale it, there will be all of these difficult to predict emergent effects from a very large network of all of the best startups. So what's an example of one of these? The classic one that sort of investors just can't believe they missed, other investors,
Starting point is 00:32:17 is that at this point, people feel like a lot of affinity towards YC, founders do. And so they will try to buy other YC founders' products. And so at this point, if you are an enterprise company, you can get to like series B scale with only other YC companies as your customers. Yep. That's an amazing thing. And that's because of breadth
Starting point is 00:32:37 and because like you've got this head of companies that are actually really quite big now, right? Yeah, and they feel a lot of allegiance to each other. Yeah. And so they will preferentially work with any other YC company. Another example is that, like, sort of investors can't mistreat YC companies. And, like, a lot of companies get killed because investors mistreat them. But people know that, like, the network talks.
Starting point is 00:32:59 In fact, on this point, we have, like, special software just to, like, you can look up how other investors have treated YC companies. And that makes investors treat companies well. So, when did you start thinking about AI? Well, as an undergrad, when I was 18, I made this list of things I wanted to work on. And AI was the top. But I took the AI classes at Stanford, and it was clearly not working. And why when you were 18? So, you were 18 when?
Starting point is 00:33:25 Like, this was 2003? I was born in 85. Okay. So AI in 2003 was not what it is now. Well, I think everybody, like most, everyone who grew up reading sci-fi like wanted to make AI.
Starting point is 00:33:38 Like this is kind of, it just feels like we're all on this inevitable path and that's where it's going and it's like the most interesting thing to work on. But it just didn't feel like there was an attack factor. And then in 2012, it started to work.
Starting point is 00:33:48 And then in 2015, which is when we started talking about creating open AI, which we started in early 2016, it felt like not only was it going to work, but it might work much better and much faster than we thought because there had been this one trend of just scale things up that kept working. And again, this has been like, I mentioned it's been like the central learning of my career. The asterisk to that, though, is that humans have not apparently evolved well to guess
Starting point is 00:34:19 how exponential curves are going to play out. Yeah. And so when you scale these things up, if they're getting like, you know, doubling every year in the case of AI, maybe 8x in every year, we don't have good intuition for that. And so people are never bullish enough if the curve is going to continue.
Starting point is 00:34:35 Yeah. And so I was like, huh, maybe this is really going to work. But AI is like a tricky thing, you know, in the sense that the term artificial intelligence, like, wasn't really coined until the Dartmouth workshop in, what, 55, 56? Something like that. And they thought they were going to get it done that summer. Oh, yeah. They were completely convinced.
Starting point is 00:34:56 Like, if you read those documents, like, they had this list of things, and they were just sort of convinced that the progress was going to be much faster than it actually was. And like we have had a couple of booms and busts now, you know, where you can actually go to Wikipedia and look up AI winter and like the bust has a name. So, you know, one of the things and I'm just for what it's worth, like I am I'm in the optimist's column here. Booms and busts are the way of the world. Like, you know, we talked earlier about startups. Like, we had a lot of booms and busts there. But the curve, though it squiggles, if you zoom out enough, goes up and to the right.
Starting point is 00:35:34 Yep. And the curve of computers getting smarter does, too. Now, how much further we have to go when we're going to get there, very hard to say. What I can say with confidence is maybe the current trends don't get us all the way to general intelligence, but they're going to get us surprisingly far. They're going to change the world in very meaningful ways. Yep. And maybe they go all the way. Yep. And so, like, I'm interested to go back to this whole creative writing thing, because, like, I think the storytelling around AI is around AI is one of the really, really interesting things right now.
Starting point is 00:36:08 Like getting, because you guys, so OpenAI is a nonprofit organization that is committed to realizing artificial general intelligence and for having the value that AGI creates sort of accrue to the public good. To be clear, we have not figured out the storytelling yet. I agree it's really important.
Starting point is 00:36:29 I think about this stuff all day. I can barely in my own head think clearly about what the world really does look like if AGI happens. You know, all of the stories I can tell are either like too mundane or too grandiose. Yep. It's like either like, oh, medicine gets better, or it's like sentient beings colonize the universe until the heat death. And sort of neither of those are quite feel right.
Starting point is 00:36:53 And people get really, really, you know, I know one of the things that you've said is, you know, something about, you know, the light cone of all. People don't like that. Yeah. And like people get really upset about, you know, the grandiose things, which sort of makes them miss all of the, like, really concretely useful things that this stuff is going to do with 100% predictability over the next few years. If you're doing anything interesting, you're going to have a lot of haters. And you may as well, like well say the thing you actually believe.
Starting point is 00:37:28 So I could try to sort of figure out exactly how to calibrate this somewhat dishonest version of what I believe the future's going to look like, or I could just say here's what I actually think. It might be wrong, but here's what I genuinely think,
Starting point is 00:37:43 and not try to under or oversell it. And that's what I actually think. It might be wrong, but here's what I genuinely think. And not try to under or oversell it. And that's what I actually think. So, why do you think that? It is possible that there is a very... Actually, I don't even think it's that unlikely. I think there is a reasonable chance that there is something very deep about consciousness that we don't understand. Or we're all Boltzmann brains and none of this is real or whatever. But if not, if physics as we understand it works and everything is just sort of an emergent property in our brains of this very large computer we have, then I think that will be replicatable in silicon.
Starting point is 00:38:26 And I don't like, I still think that's the more likely outcome. Yeah. Which I like, honestly, I think that's reasonable. And like where people sort of seem to be getting wrapped around the axle is like what the architecture of that silicon looks like and what the time scale is. I hate that argument though, because like there's like the number of that silicon looks like and what the time scale is. I hate that argument, though, because, like, there's, like, the number of people who kind of get really mad because they're, like, you know, you people who say AGI is, like, 10 years away. It's, like, it's more like 30. It's, like, okay, this is, like, the most important technological development in human history.
Starting point is 00:38:59 It is in the blink of an eye on the scales of humanity. And you're going to, like, sit here and get in a fight because it's 10 or 30 years. Either way, what an important moment to be in. I think the timescale argument is quite dumb. The silicon architecture argument, that's intellectually more interesting at least. That's more
Starting point is 00:39:17 practical, more fun. But the work we're doing with you guys, we're making incredible progress and the future looks really exciting. I think the computers work we're doing with you guys like we're making incredible progress and the future looks really exciting I think the computers that we're going to have
Starting point is 00:39:29 in five years are going to be mind blowing yes and I think the interesting thing for me is I don't even have
Starting point is 00:39:38 a prediction in my head for like when I think AGI might happen but like what I do know is that the push that we've had for the past seven years,
Starting point is 00:39:49 basically since deep learning has started really working on perception and a little bit on language and a little bit on game playing, is you've got this really fantastically interesting two exponentials. So you've got an explosion fantastically interesting two exponentials. So like you've got an explosion of data and you have an explosion of compute power. Like one of the things. There's a third one, which I think is more, you know,
Starting point is 00:40:11 we talked about what the sort of ambitious young people are working on. You have an explosion of talent. Yes. Like this is the thing that every smart 18-year-old that goes to college now and studying computer science wants to focus on. Almost every. And those, yeah, I mean, you can totally see it. Like just look at how many people go to NeurIPS now.
Starting point is 00:40:25 So this is the big, you know, deep learning conference. It's it's sort of like SIGGRAPH was when I was in grad school. So SIGGRAPH is a big graphics conference. And it was it's I mean, it still is like a huge event. But like NeurIPS is like, I mean, of all things, like has turned into like this occasion. Totally. And like, I think you're totally right. All three of those things are super exciting.
Starting point is 00:40:50 And I think whenever you have an opportunity to invest in things that have these exponential forces pushing on their progress, you're going to get something interesting. Whether it's exactly the thing that you're aiming at, something interesting is going to get something interesting, like whether it's exactly the thing that you're aiming at, like you're going, something interesting is going to happen. And like one of the interesting things that's happening right now with these, you know, computers that we're building to train very big models is that we are, like computer architecture is all of a sudden interesting again. And it hasn't been for, you know, 20 years, maybe 15, like a while. Yeah.
Starting point is 00:41:24 That's cool because there's only people that really want to work on that. They've had nothing to work on, which means we can get incredible talent focused on this. Yeah, we've got all of these people who did high-performance computing in the 90s who, you know, and like I was not an important person working on high-performance computing in the 90s, but like I was a compiler person. And like I thought that none of the stuff that I learned in graduate school was ever going to be directly useful again. And, like, here it is. Here we are. It's cool. It's really cool.
Starting point is 00:41:49 It is really cool. And just a reminder of, like, how cyclical not just technology is but history. I mean, like, how much do you think about, like, the historical corollaries for the disruption that we're going through? Like, industrial revolution, you through, like industrial revolution. You know, like I think the steam engines are really fascinating. That's a great one. Example. Like, do you have any others that, because I know you've thought a lot about this. Yeah, I mean, I think the analogs are the agricultural revolution, industrial revolution, the computer revolution.
Starting point is 00:42:20 And I think the AI revolution will be bigger than any of those three or bigger than all three of them together. I love reading sort of firsthand accounts of people at the time as they were kind of going through those. There's this great book called Pandemonium, which is all primary source material of the Industrial Revolution as it was arriving. And many of the things that people say in that book could be said now about how people feel about AI. There's no jobs. It's going to take over. The machines are going to kill us. Like, the future is going to be terrible.
Starting point is 00:42:50 Or, like, it's going to be utopia. It's like, this is so amazing. Like, there's nothing these machines can't do. And the reality was some complicated thing in the middle. And we always figure out something new to do. Like, the rate of, for instance, so one of the common themes in that book was, like, what are we all going to work on? The rate of job turnover is something like 50% of the jobs every 75 years. And this is held remarkably constant. You know, it has like fits and spurts, but that's held constant for hundreds of years.
Starting point is 00:43:27 And like technology changes, whole classes of jobs go away, and we find new ones, and they're difficult to predict what they're going to be. But like, I think the jobs this time will change a lot, but we're going to find things to do, I'm pretty sure. Yeah. Well, and the thing that I, I think one of the hardest things to do right now, and it's why I think we really need to get a lot of folks who aren't computer scientists thinking about these problems, is like we're sort of thinking about a bunch of things sort of superficially right now. You know, so for instance, like there's been some knee-jerk things that people have said that like, oh, well, if robots are going to take all the jobs, then we should tax the robots. And if you really look at the – Bill Gates said that, I think. I think Bill Gates did say that.
Starting point is 00:44:07 And look, Bill Gates is not a superficial thinker in any shape, form, or fashion. Bill is one of the deepest guys. It's super impressive. It's intimidatingly impressive. But, you know, one of the things that – one of the things, if you look at the data that you will see, is that we are in manufacturing at this sort of efficiency equilibrium right now where a relatively fixed percentage of the population for many, many years has been responsible for producing all of the manufactured goods that the rest of us consume. And it's a tiny little percentage. And it had sort of, it's like had fallen, like this sort of percentage of the population
Starting point is 00:44:49 working in manufacturing had gone sort of straight down since almost the very beginning of the industrial revolution. So, almost as soon as we invented the notion of manufacturing, like we started to get more and more efficient. And one of the things that happens when you get to these equilibria is that you have very few things that you can do to like create more opportunity and growth. Like one trick you can play is global labor market arbitrage. So like you can sort of like try to find people who are less expensive than the people that you've got in like your particular geography or
Starting point is 00:45:25 consolidation is another good trick. So you can like take a bunch of small things and turn them into big things. And I think, you know, one of the things that happens with advanced automation and AI and manufacturing is that a CNC mill that you put in St. Louis is cost about the same and is about as productive as one that's in Shenzhen. And so, you actually don't want to tax the robots, I think, because – or, like, not at least in a, like, very broad way. Like, maybe there are some targeted, like, robot taxes that you want to put in place. But if you think very broadly about it, what this equalization of productivity does is going to sort of undo some of the- It should be the anti-globalization effect.
Starting point is 00:46:13 Correct. And it should help actually with consolidation as well, because there's sort of a Moore's law of manufacturing automation equipment that means that for the machines get cheaper and more powerful, like over time, very quickly, like AI will accelerate that. Like I've got anecdotal examples where I grew up in rural central Virginia. Like I have friends who like work for these companies now that are like running, you know, very successful like manufacturing businesses that, you know, just wouldn't exist without all of this, you know, advanced automation.
Starting point is 00:46:47 If you like tax the machines that they were buying, then like the work that they are repatriating from overseas and creating jobs would like all of a sudden become less competitive again. Yeah, I think like, I personally think that's a very silly idea. I think we need radically new taxing systems for this kind of a world. But I think like taxing the robots, with my air quotes there, is not the answer. And if it is, so the thing that I've been trying to encourage people to do is like I would love more non-computer scientists, more non-engineers, like thinking very, very deeply about the set of problems. Because the very broad solutions that we're painting for some of these things are probably not going to be the things that are needed.
Starting point is 00:47:37 And we should have lots of people talking about it right now, not just, like, a few of us dorks here in Silicon Valley. For sure. Like, if we don't get a very broad set of people thinking about these issues soon, I think we're very unlikely to get to the right answers in time. Yeah. So, what is the most exciting thing that you think is going to happen in AI over the next few years that you can talk about?
Starting point is 00:48:04 Well, I'll give a few, because I think the interesting thing is the breadth of things that are going to happen. I think we'll have language models where we can interact with computers with natural language in an amazing way that feels unimaginable now, that's going to feel like intelligence. I think we'll have robots that can do human dexterity levels of manipulation, and that's going to be a huge impact on the world. I think computer games are going to get really good, really fun to play. It's a sort of small sample. Yeah. So it's exciting.
Starting point is 00:48:42 Totally. It's amazing. And none of those things, so this is sort of to my point, like none of those things is like Commander Data from Star Trek The Next Generation walking around and still useful stuff will happen. Right. So that's the thing that makes me like super, super excited. Totally. And if we get Commander Data, like I'm excited about that as well. Might happen.
Starting point is 00:49:05 Probably not in the next couple years. Probably not. So, we're sort of running out of time here, but you do some crazy, interesting things in your spare time. Like, you personally fund
Starting point is 00:49:22 some interesting physics things. Yeah. So like what's the most fun like non-work thing that you've done over the past few years that's sort of just wild and interesting? I'm very thankful that there's so many things I could say here. One thing that has been surprisingly great over the last year is a lot of long meditations. And finding a group of people who have been nice enough to spend time with me and teach me. And that's been sort of a significantly changed my perspective on the world.
Starting point is 00:50:05 In what way? I think I just like I'm a very different person now. I think I'm so much more content and grateful and happier and calm. And it's something that I just really wouldn't have expected me to get into. I know that a few years ago, I think, so I don't meditate, but like a bunch of these sort of Buddhist practices around, you know, sort of compassion and mindfulness are like really helpful. Like the thing that I've latched on to that's been really useful is just gratitude. Totally. Like trying to find in as many moments, in as many days as possible, something to be truly grateful for. And like, I surprised myself because I'm a, yeah, I think engineers are sort of pessimistic and,
Starting point is 00:50:52 you know, like a little bit cynical by nature, like, but, you know, by the, you're sort of wired a little bit to sort of see all of the problems in the world, because like, that's part of what motivates you to go out and like, you know, change them and make them better. But it is, like, a sort of a jaundiced way of, you know, looking at the world sometime. But, like, I've just been shocked at how many things I've, I can find to be grateful for every day and, like, how much, like, calmer that makes me. Totally. You know, I think that's a, I had tried all of these practices before settling into my current one. That was a good one. But I had done like a lot of this sort of like mindfulness stuff in the 15 minutes a day of meditation.
Starting point is 00:51:30 And the thing that actually has worked for me is less frequent but very long meditations, like hour and a half or two hours just sitting and doing nothing. Not focused on a mantra, not focused on breath necessarily, but just like sitting in calmness and gratitude to the universe with my eyes shut for long periods of time. Yeah. And that's hard to do. I mean, especially like... It gets easy fast. It gets great.
Starting point is 00:51:54 But yeah, it's hard the first few times. But like, especially like when you live in a world where you've got this, you know, like little dopamine triggers sitting in your back pocket called a smartphone. I'll tell you, one of the things that you think about the first few times is just how far that's gone. Yeah.
Starting point is 00:52:09 Yeah. Which is, it's an interesting. So, before we go, like, you were telling me about this physics experiment that you actually got someone to, like, build an experiment to verify this very bizarre quantum mechanical phenomenon. I think the... I probably don't have time to get into it in detail, but the quantum eraser experiment and sort of all the derivatives thereon,
Starting point is 00:52:37 I needed to see that. I actually want to make a series of videos about that. You should totally do it. I think it's one thing to read about it, and then it's another thing to, like, it's always, like, when you do the math yourself, you understand it in a way you don't when you read about it. Yep.
Starting point is 00:52:53 And this thing just, like, so broke my conception of, like, how I thought the world worked, even though I believed it and understood it, like, I wanted to, like, go through the motions. And, yeah, I do want to make videos of it. But it was like one of the more mind-blowing experiences in my life to like know it was going to work, but still just like see the interference pattern and then see it go away. So that's the teaser for everyone. So like go right now, look up quantum eraser experiment on the internet. If you really want to blow your
Starting point is 00:53:19 mind, look up the delayed choice quantum eraser experiment. Delayed choice quantum eraser experiment, and then wait for Sam Altman's video series on the subject. And so with that, thank you very, very much for coming in and chatting with us. Thanks for having me. Awesome. That was Sam Altman, the CEO of OpenAI. And Kevin, that was such an interesting conversation. You guys really went over a lot of different areas.
Starting point is 00:53:45 But one of the things that kind of stuck out to me was your conversation towards the end about how you can overcome maybe some of the fears around not just AI, but anytime there's a big change in how things are done. So these systems that we build, like whatever technology you're talking about that has sort of disruptive impact, whether it's a steam engine or a bunch of industrialization for agriculture or it's personal computers or AI, you have to remember that there are people who are building them. And like we all get to collectively decide what good purposes that they are people who are building them and like we all get to collectively
Starting point is 00:54:25 decide what good purposes that they are put to. And so, you know, the interesting thing that I think we are facing right now with AI is like can we all sort of get this vision of what it is that this positive impact of AI is going to be. And like, what are the safeguards we can put in place to make sure that, you know, that we're sort of focusing all of the collective efforts of everyone working on it on like creating that good. Like OpenAI in and of itself is like structurally an interesting thing for trying to focus it
Starting point is 00:55:02 on good. Like they could have done a bunch of different things to, like, try to accomplish the same end effect. Like, they could have just created a normal for-profit company and tried to run it that way. But instead, like, they decided, like, we're going to create this nonprofit. Like, we're going to arrange things in a way where, like, in success, the bulk of the value of this company is going to go to the public good.
Starting point is 00:55:27 And they've also chosen to focus on safety. Like, they really, really take it very seriously, the obligation that they have to make sure that as these technologies develop, that they are released in a way that's safe and that they create more good than harm. This can't be just a thing that a bunch of folks in Silicon Valley and in, you know, in Seattle and like the other places that tech has done at scale are sort of debating like in this insular way amongst themselves and, you know, then making a bunch of decisions. Like we need a lot of people participating in this conversation overall. And, you know, we chatted about, like, you know, taxation as a mechanism. Right. But let me just ask then, how do we get, I mean, this might be a loaded question, but how do we get those people who are not the engineers, who are not living in
Starting point is 00:56:22 Silicon Valley, who are not, you know, in Cambridge, who aren't in Seattle? How do we get them involved in this process? I think it's like everybody has to say that this is what we want to do. So, it means for the folks in tech, you have to slow down enough to try to better explain the things that you're doing because a lot of this stuff is very complicated and it moves very, very fast. And like having enough time in what it is that you're doing to actually, you know, sort of engage in a public conversation about it in a real way, like not this sort of, you know, trivial way where you sort of say, oh, you know, trust us, you know, like,
Starting point is 00:57:10 it'll turn out okay. Like, that is, that's not enough engagement. And, you know, I think, you know, on the flip side, like, we need folks who, like, are willing to, like, get themselves to be a little bit uncomfortable in, like, sort of poking into some stuff that is, like, genuinely complicated and where the answers for, like, what we need to do to sort of influence this direction as a society, like, isn't the, you know, the easy soundbite-y sorts of things that, like, we sort of seem to be inclined to want, like, as part of our, you know, Twitter-oriented public discourse. Makes sense. Makes sense. Makes sense. I hope that we can get there and hope we continue to open up those channels of communications across the different groups. Yeah, I think so.
Starting point is 00:57:52 Like, I'm seeing promising signs. Like, we've got, we have a bunch of good engagement. Like, I see policymakers, like, asking great questions and engaging. I see more willingness from the tech folks to participate in the dialogue. I think the journalists are getting super smart about this stuff. And it's getting better.
Starting point is 00:58:15 I have hope, as usual. I love that you have hope. All right, well, that does it for us for this episode of Behind the Tech. If you like this episode, please give us a rating on Apple Podcasts. That really helps us out tell all of your friends, whether they're techie or not, because I think these conversations have a lot of value. And actually, one of the
Starting point is 00:58:36 things that Sam and Kevin talked about was the idea of what is going to be the tool or the platform that this generation of kids uses to build the next big thing. And so if you have any ideas as to what that is, send us an email at behindthetechatmicrosoft.com. Awesome. Thanks, Christina. And we'll see you all next time.
