The Knowledge Project with Shane Parrish - Garry Tan: How Y Combinator Turns Ambitious Misfits Into Billion-Dollar Founders

Episode Date: April 29, 2025

Most accelerators fund ideas. Y Combinator funds founders—and transforms them. With a 1% acceptance rate and alumni behind 60% of the past decade's unicorns, YC knows what separates the founders who break through from those who burn out. It's not the flashiest résumé or the boldest pitch but something President Garry Tan says is far rarer: earnestness. In this conversation, Garry reveals why this is the key to success, and how it can make or break a startup. We also dive into how AI is reshaping the whole landscape of venture capital and what the future might look like when everyone has intelligence on tap. If you care about innovation, agency, or the future of work, don't miss this episode.

Approximate timestamps (subject to variation due to dynamically inserted ads):
(00:02:39) The Success of Y Combinator
(00:04:25) The Y Combinator Program
(00:08:25) The Application Process
(00:09:58) The Interview Process
(00:16:16) The Challenge of Early Stage Investment
(00:22:53) The Role of San Francisco in Innovation
(00:28:32) The Ideal Founder
(00:36:27) The Importance of Earnestness
(00:42:17) The Changing Landscape of AI Companies
(00:45:26) The Impact of Cloud Computing
(00:50:11) Dysfunction with Silicon Valley
(00:52:24) Forecast for the Tech Market
(00:54:40) The Regulation of AI
(00:55:56) The Need for Agency in Education
(01:01:40) AI in Biotech and Manufacturing
(01:07:24) The Issue of Data Access and The Legal Aspects of AI Outputs
(01:13:34) The Role of Meta in AI Development
(01:28:07) The Potential of AI in Decision Making
(01:40:33) Defining AGI
(01:42:03) The Use of AI and Prompting
(01:47:09) AI Model Reasoning
(01:49:48) The Competitive Advantage in AI
(01:52:42) Investing in Big Tech Companies
(01:55:47) The Role of Microsoft and Meta in AI
(01:57:00) Learning from MrBeast: YouTube Channel Optimization
(02:05:58) The Perception of Founders
(02:08:23) The Reality of Startup Success Rates
(02:09:34) The Impact of OpenAI
(02:11:46) The Golden Age of Building

MOMENTOUS: Head to livemomentous.com and use code KNOWLEDGEPROJECT for 35% off your first subscription.
Newsletter - The Brain Food newsletter delivers actionable insights and thoughtful ideas every Sunday. It takes 5 minutes to read, and it's completely free. Learn more and sign up at fs.blog/newsletter
Upgrade - If you want to hear my thoughts and reflections at the end of the episode, join our membership: fs.blog/membership and get your own private feed.
Watch on YouTube: @tkppodcast
Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 The world is full of problems. Like, why are people sort of retired in place, pulling down, you know, insane, by average American standards, absolutely insane salaries to build software that doesn't change, doesn't get better? You know, sometimes I sit there and I run into a bug, whether it's a Google product or an Apple product or, you know, Facebook or whatever. I'm like, this is an obvious bug. And I know that there are teams out there, there are people getting paid millions of dollars a year to make some of the worst software. And it will never get fixed, because people don't care. No one's paying attention. That's just one symptom out of a great many that is, you know, the result of basically treating people like, you know, hoarded resources. The world is full of problems. Let's go solve those things.
Starting point is 00:00:55 Welcome to the Knowledge Project. I'm your host, Shane Parrish. In a world where knowledge is power, this podcast is your toolkit for mastering the best of what other people have already figured out. If you want to take your learning to the next level, consider joining our membership program at fs.blog slash membership. As a member, you'll get my personal reflections at the end of every episode, early access to episodes, no ads, including this one, exclusive content, hand-edited transcripts, and so much more. Check out the link in the show notes for more. Today, we're pulling back the curtain on one of the most powerful forces in the tech and venture capital world, Y Combinator. With less than a 1% acceptance
Starting point is 00:01:38 rate and a track record that includes 60% of the last decade's unicorn startups, YC has shaped the startup world as we know it. Gary Tan, president of Y Combinator, joins us to break down what separates transformative founders from the rest and why so many ambitious entrepreneurs still get wrong. We'll explore the traits that matter the most, the numbers behind billion-dollar companies, and why earnestness often beats raw ambition. But there's a seismic shift happening in venture capital, and AI is at the center of it. We'll dig into how artificial intelligence is reshaping startups from idea generation to regulation and what it means for the next wave of innovation. If you're curious about Silicon Valley's secrets, the present and the future of AI,
Starting point is 00:02:26 or how true innovation gets funded, this conversation is for you. It's time to listen and learn. I want to start with what makes Y Combinator so successful. I guess I can't talk about YC without talking about Paul Graham and Jessica Livingston. I mean, it started because they're remarkable people. And, you know, Paul, when he started his company, company. I don't think he ever had the idea that he would ever become someone who created a thing like YC. He was just trying to help people and sort of follow his own interests, I think. He just
Starting point is 00:03:13 said, I know how to make products and make software and make them in a way that people can use them and then after he actually sold that company via web was one of the first you know to say we have shopify via web was sort of like the very first version of it he actually basically created the first web browser based program so he was one of the first people to hook up a web request to an actual program in unix you know today we call it cg i bin or you know all these different things but you know he was so early on the web that um you know it was a new idea to make software for uh the web that didn't require like some desktop thing that you had to use to configure the website and so i think he's just always been um an autodidact uh a really great
Starting point is 00:04:14 engineer and then just a polymath so i think that that's what really made yc i mean he wrote essays, he sort of attracted all the people in the world who wanted to do the thing that he wanted to do. And so I think Paul Graham and his essays became a shelling point for people who, this new thing that could really happen in the world. And, you know, that started very early. I mean, I think it started literally with the web itself. And, you know, that's why in 2005 he was able to get hundreds to thousands of really amazing applications from people who wanted to do
Starting point is 00:04:53 what he did. And then the magic is it's only a 10-week program. I think he had only a dozen people in that very first program in 2005. And then out of that very first program, Sam Altman went through it. And Sam,
Starting point is 00:05:09 I guess it's interesting. I mean, if you have a draw that is very profound, it will draw out of the world. the people who, you know, that speaks to those people. And so you end up needing in society these like sort of shelling points for certain ideas. And then that, you know, the idea that someone could sit down in front of a computer and create a piece of software that a billion people could use turned out to be very contrarian and very right.
Starting point is 00:05:40 And so, you know, today I think of YC as really, it's actually, software events and media and I think you've had Nival Ravikant on before and I think I remember distinctly Nival talking about like those are the few forms of extreme leverage you have in the world and so I think Wicombinator is this crazy thing it's like when people realize
Starting point is 00:06:11 they could start a startup they went on Google and they searched and they found Paul's essays and then through at his essays he they found Y Combinator and then YC started funding people like you know Steve Huffman who ended up creating Reddit in that very first batch and selling that to Condé Nast and um you know drop box then Airbnb then you know today you know coinbase um your door dash there are just so many companies that you know are incredible I mean Airbnb is this insane marketplace that houses way more people on any given night than, you know, the biggest hotel chains in the world. And it's like, on the one hand, unimaginable, on the other hand,
Starting point is 00:06:56 like, that's the kind of thing that you can do. Like, you can just, you know, do things, which is wild. And so I think that that's why it works. It's, we attract people who want to create those things, and then we give them money. And then more importantly, I think the know-how is, we give it away for free, actually. Go deeper on that. Yeah. Earlier just now, we were chatting about this podcast setup, but we spend a lot of time writing essays and putting out content on our YouTube channels and just trying to teach people, how do you actually do this stuff? There's like a lot of mechanical knowledge about how do you incorporate or how do you raise money for the first time, and all of that is out there for free.
Starting point is 00:07:45 And, you know, on the other hand, I think of doing YC, being in the program, it's a 10-week program, we make everyone come to San Francisco now. At the end of it, it culminates in people raising, you know, sort of the median raise is about a million to a million and a half bucks for, you know, sometimes teams that are two or three people just an idea starting at, you know, at the beginning of that. And that's the demo day. Yeah. And yeah, we have, you know, I think we have about a billion dollars a year in your funding that comes into YC companies. And that's because the acceptance rate to get into YC is only 1%. So let me get this straight. You have, I think I read somewhere 40,000 applications a year. Yeah, I think it's close to 70,000 80,000 at this point. How do you filter those? Well, we ourselves use software, but we also have 13,000, general partners who actually read applications and we watch the one-minute video you post. And the most important thing to me is that I want us to try the products, right? You know, sure, we can use the resume and, you know, people's careers and where they went to school. You know, we're not going to throw that out.
Starting point is 00:09:04 Like, it's a factor in anything. But the most important thing to me is not necessarily the biography. It's actually, you know, what have you? build? What can you build? Go deeper on the software thing. I don't think I've heard that before that you guys, obviously you have to use software, but what does the software do? How does it filter? Yeah, I mean, ultimately the best thing that we can do is actually brute force read. And on average, I think a group partner will read something like a thousand to fifteen
Starting point is 00:09:34 hundred applications for that cycle that they're working. So the best thing we can do is like not, it is basically like humans trying to make decisions, you know, which is maybe a little antithetical to, you know, the broader thing right now. And now it's, you know, let's just use AI for everything. But I think that the human element is still very important. Most mornings, I start my day with a smoothie. It's a secret recipe the kids and I call the Tom Brady. I actually shared the full recipe in episode 191 with Dr. Ronda Patrick. One thing that hasn't changed since then, protein is a must. These days I build my foundation
Starting point is 00:10:15 around what Momentus calls the Momentus 3, protein, creatine, and omega-3s. I take them daily because they support everything, focus, energy, recovery, and long-term health. Most people don't get enough of any of these things through diet alone. What makes Momentus different is their quality. Their way protein isolate is grass-fed. Their creatine uses Creepure. The pure form available, and their omega-3s are sourced for maximum bioavailability, so your body actually uses what you take. No fillers, no artificial ingredients, just what your body needs, backed by science. Head to livemomentus.com and use code Knowledge Project for 35% off your first subscription. That's code Knowledge Project at livemometus.com for 35% off your first subscription.
Starting point is 00:11:04 With Amex Platinum, access to exclusive Amex pre-sale tickets can score you a spot trackside. So being a fan for life turns into the trip of a lifetime. That's the powerful backing of Amex. Pre-sale tickets for future events subject to availability and vary by race. Terms and conditions apply. Learn more at amex.com. And then at the end, you sort of like, I guess the last filters, like this 10-minute interview used. So what do you ask in 10 minutes to determine if somebody's going to be part of what,
Starting point is 00:11:34 culminator. I guess the surprising thing that has worked over and over again, ultimately, is in those 10 minutes, either you learn a lot about both the founders and the market, or you don't. So we're looking for incredibly crisp communication. So I want to know, what is it? And often the first thing I ask is not just what is it, but why are you working on it? I want to sort of understand where did this come from. Did you just read about it on the internet? Or a much better answer is, you know, well, I spent a year working on this and I got all the way to the edge of, you know, what people know about this thing. And, you know, what's cool about, you know, the biographical is that then it invites more questions, right? It's the best interviews in 10 minutes. Like you learn about an entire market, you learn about a set of people that, you know, normally you might not ever hear. hear of. It's like you're traveling. It's like you're traveling the idea maze with the people
Starting point is 00:12:41 you're talking to. This is all over Zoom. And at the end of those 10 minutes, like sometimes the 10 minutes becomes 15. You want to talk to people longer because that's what a great interview feels like to me. It feels like I'm a cat and I see a little yarn and I'm just pulling on the yarn. I'm just pulling on the thread because it's like there's something here. This person understands something about the world that, you know, actually makes sense to me. And I think what we're looking for is actual signal that there's a there. There's a real problem to be solved. There are people on that end who are willing to pay.
Starting point is 00:13:22 And then, you're working backwards, what a great startup ultimately is, is something real that people are willing to pay for that probably has durable. moats that, you know, it doesn't mean that, you know, it means that that company could actually become much bigger than, you don't want to start a restaurant, for instance, because there's infinite competition for restaurants, but you do want to start, you know, something like Airbnb that has network effects or, um, that can really scale. Exactly. Or, you know, in AI today, one of the more important things is, you know, are people willing to pay? And, uh, today, because,
Starting point is 00:14:04 people are not selling software, they're increasingly actually selling intelligence. They're like it or not. These are things that you could not buy before. Like probably the most vulnerable things in the world today are things that you could farm out to an overseas call center. That's sort of like the low-hanging fruit today. Basically, how do you find things that people want and how do you actually provide it them and the remarkable thing is that you know in that's why it only has to be 10 minutes um you know one of the things i feel like i learned from paul graham interviewing alongside him so many years was that sometimes i'd go through and this person would come in they had an incredible
Starting point is 00:14:50 resume you know they're like had a phd or they studied under this famous person or you know they worked at uh google or facebook or all these really famous places uh they had an impressive resume um or they had the credentials of someone who I felt like, you know, should be successful, but then they had a mess of an interview. Like, we didn't get any signal from it. We didn't understand. Or like, it just, it seemed garbled. Or, you know, at the end of it, sometimes they're asking like, oh, we just, you know, 10 minutes is too short. We need more time. And one of the things I feel like I learned from Paul was that if in 10 minutes you cannot actually understand what's going on. It means the person on the other end doesn't actually understand what's going on and there
Starting point is 00:15:35 isn't anything to understand, which is surprising. That's a really good point. I bet you that holds true. Do you look at people that you've been successful with that don't work out and then people that you've filtered out that do become maybe successful and try to learn from that? Oh, definitely all the time. I mean, I think that's the trickiest thing. I think the system itself will always produce you know, both false positives and false negatives because it is only 10 minutes. But you have the highest batting average. Like, Wycombinator, my understanding is
Starting point is 00:16:08 it's like 5% of the companies become billion-dollar companies. Yeah, about 2.5% end up becoming decacorns sooner or later. But that would be the highest batting average of any VC firm, maybe with Sequoia being the exception. What's interesting to me is most of the people that I know in that space are doing, hundreds of hours of work per company. And you guys can't do that because you have 80,000 people applying. And you're still the most, or at least top tier in terms of success.
Starting point is 00:16:39 Yeah, I mean, what's great is I don't want to compete with Sequoia or Benchmark or Andrewsson Horowitz or, you know, they're our friends, honestly. Done right. Like, we're much earlier than everyone else because we want to actually give them half a million dollars when they have just an idea or, Maybe they don't even know their co-fender yet. That's what makes it more incredible. It's because the batting average should be way lower based on where you're at in the stock in terms of funding.
Starting point is 00:17:07 Yeah. You know what it is, though. I spent seven years, actually, away from YC before coming back a couple years ago. So I ended up, I think, in the top 10 of the Forbes Midas list as my final year before coming back to YC. And why haven't other people... We ask this all the time.
Starting point is 00:17:26 Why haven't other people come for us? I think there are lots of people who are doing various things that might work. And I guess so far, people sort of lose interest or float off and go do higher status things. Working with founders when they're just right at the beginning and just an idea is actually relatively low status work because it's very high status to work with a company that, is worth $50 or $100 billion now. But guess what? That's 10 years from now,
Starting point is 00:18:04 or sometimes 15 or 20 years from now. It all starts out very low status and all the way in the weeds. You're answering sort of relatively simple questions, and you're giving relatively small amounts of money. Well, you were giving 20 at the start, right? Now you give $500? Yeah.
Starting point is 00:18:23 Half a million dollars today, yeah. Has that changed the ratio of success? I think some of it is, well, we find out in 10 years. If anything, I think that the unicorn rate has gone up over time. You know, 10, 15 years ago, I think it was closer to maybe 3.5 to 4%. And now we're around 5.5%. Some batches from maybe 2017, 2018 are pushing 8 to 10%. Oh, wow.
Starting point is 00:18:52 Some of those companies in that area, in that vintage, about 50% of companies end up raising what looks like a Series A. And then the wild thing about it is it actually takes a long time for people to get there. So I think the YC has actually flipped a lot of the, I guess, myths of venture. One of the myths of venture maybe 10, 15 years ago was that within nine months of funding a company, you will know whether or not that company was good or bad. And going back to that stat, about half of companies that go through YC will end up raising a Series A.
Starting point is 00:19:35 That's much higher than any other pre-seed or seed sort of situation that I know of. But about a quarter of those who raise the Series A, they do it in year five or later. And that's a function of like, We're funding 22-year-olds, you know, 19-year-olds, 24-year-olds. I mean, we're funding people who are so young that sometimes they've never shipped software before. Sometimes, you know, they're fresh off of an internship.
Starting point is 00:20:05 You know, it takes three to five years to mature, to learn how to iterate on software, how to deliver really high-quality software, how to manage people, how to manage people effectively, give feedback. And so the wild thing is, I mean, sometimes it takes five years for those things to come together. In my head, and correct me if I'm wrong here, there's a bit of, like, misfit, geek, people have told me this won't work or won't be successful. And then when I get to Y Combinator, I'm around a whole bunch of other people who are exactly like me for the first time in my life. And they're super ambitious. To what extent do you think that that environment just creates better success or better,
Starting point is 00:20:49 comes oh that was definitely true for me i mean um without that i feel like what my i mean i had a good a really great community at the end of the day like it was um you know my fellow stanford grads but i guess the weird thing to say is that like being around people who are really earnestly trying to build uh helps you know 10x war um the the default startup scenario out there is not about signal it's about the noise like you're playing for these other things like how much money can I raise and what you know high status investor like you know some people sort of float off and they become scenesters they're like oh let me try to get a lot of followers on Twitter that's the most important thing and then what we really try to do at YC during the batch and then afterwards
Starting point is 00:21:41 and you know in our office hours working with companies is like when we spot that kind of stuff it's like oh no no like maybe don't do that like you know let's go back to product market actually building and then iterating on that getting customers uh you know long-term retention all of those things are the fundamentals and everything else is like the trappings of success or and those will always feel i what's funny is like in other communities uh all of those things will always feel more present to hand and they're easier like you can just get it like you're you know on stage keynoting or you know even doing the podcast game i feel like guilty you know like it's kind of funny um we see that in people and then sometimes you know often that will kill their
Starting point is 00:22:29 startup like they take their eye off the ball you know angel investing if you're a startup founder and uh suddenly some you know people have heard of you and uh people try to add you as a scout. People kill their startups all the time by that just by taking their eye off the ball. Go deeper on that a little bit in terms of focus and how people sort of lose their way unintentionally. And then do they catch it before it starts to go off the rail? Or does it, it sort of just crashes and then there's no coming back from it? I mean, it crashes. And then, you know, sometimes you have to go and do your next startup or, you know, or I don't know, sometimes people just go off and become VCs after that. And that's okay, too. Is that the difference between somebody who wants to run a company and start a company versus somebody who wants to be seen as running a company and starting a company?
Starting point is 00:23:19 I think that that's probably the biggest danger to people who want to be founders. I mean, I think I've seen Peter Thiel talk about this. He doesn't really want people who want to start startups. From my perspective, it's certainly much better to find people who have a problem in the world that they feel like they can solve and they can use technology. to solve and that's like sort of a more earnest way to look at it and uh if it if you look at the histories of some of the things that are the biggest in the world they actually start like that you know there are lots of interviews with steve jobs and steve wasniak saying um you know i
Starting point is 00:23:59 never meant to start a company or ever wanted to make money all i wanted to do was uh make a computer for me and my friends and so you know many many more people kept coming to me saying can you build me a computer? And they just, you know, like a cat, we're pulling on this thread. It's like the company was a reluctant side effect on the list. In history, it seems like a lot of innovation comes from great concentration of people together, whether it's a city or the Industrial Revolution or all these things tends to be localized and then spread over the world, if I understand it correctly.
Starting point is 00:24:34 Why Silicon Valley, why San Francisco, and why haven't other countries been able to replicate that success inside? Well, at YC what we hope is that people actually come to San Francisco and we do strongly advocate that they stay but it's no requirement and then what we hope is that if they do
Starting point is 00:24:56 leave they end up bringing the networks and know how and culture and frankly vibes and they bring it back to all the other startup hubs in the world and I think that that's some of the stuff that has actually come about i mean um monzo was started by now my partner tom blomfield
Starting point is 00:25:18 uh he's a partner at yc now but he started you know multiple startups and a few of them you know multiple unicorns actually and both of them are some of the biggest companies in london for instance so what we hope is that uh san francisco becomes sort of really athens or rome in antiquity you know send us your best in the brightest you know ideally you stay here. One thing we spotted is that the teams that come to San Francisco and then stay in San Francisco or the Bay Area, they actually double their chance of becoming a unicorn. Oh, wow. So if it's one thing that you could do, it's be around people and be in the place where making
Starting point is 00:26:00 something brand new is in the water. So hypothetically, you created a new country tomorrow and you wanted to spur on innovation, And what sort of policy, you've got to compete with San Francisco. Oh, yeah. What sort of policies would you think about? Like, how would you think about setting that up to attract capital, to attract the right mindset of people, to attract and retain these people? I think what I want for San Francisco, for instance, is I think the rent should be lower.
Starting point is 00:26:30 And so rather than subsidizing demand, we actually need to increase supply, like, fairly radically, actually. And that just hasn't happened. I think I was looking at it for the entire last calendar year. I think maybe Scott Weiner had just posted this on X that literally there were no new housing starts in all of San Francisco proper for the last year. So how are we supposed to actually bring down the rents and make this place actually livable? If San Francisco is the microcosm where people build the few. future. And it is sort of the siren song for, you know, 150 IQ people who are very, very ambitious and have our, you know, techno-optimistic ideology. And it's also where they are most
Starting point is 00:27:23 likely to succeed. Society and certainly, you know, America is not serving society the right way if we're getting in the way of these smart people trying to solve these problems, trying and build the future. But just continuing on the Y combinator theme for a second, are there ideas that you've said no to, but you think they're going to be successful, they just scare you? And you're like, no, that's too scary.
Starting point is 00:27:48 I mean, if it's scary, but might or probably will be good, I think we want to fund them. And certainly there are things that would be bad for society, but are likely to make money. And, you know, the history is our partners are, everyone's independent. We have a process that is very predicated on, you know, if you're a general partner at YC, you know, you pretty much can fund what you want. You know, we run it by each other
Starting point is 00:28:17 to make sure, you know, sort of double check like the thinking. But I think we're pretty aligned there. Like there are lots of examples of, you know, maybe five or six years ago. There was a rash of telehealth companies that are focused on, for instance, ADHD meds. And I distinctly remember one of our partners, Gustav Allstromer, he met that team and he said, you know what, we're not going to fund these guys. You know, it's going to make money, but I don't want to live in a world where it is that easy to get, you know, people on these drugs. Like, they're ultimately methamphetamines and, you know, these are controlled substances and
Starting point is 00:28:58 this is the wrong vibe. Like, we didn't not like the vibe that we got from the founders of that company. So, you know, I hope that YC continues that way, and I think it will. Ultimately, we want people who are, I mean, ultimately trying to be benevolent at least, you know. How would you think about just the idea of spiffballing if I were to come to you and be like, I'm starting a cyber weapons company? I guess some of it is like, are you only going to sell to five eyes? Because, you know, I really liked what MIT put out recently. they were very clear.
Starting point is 00:29:36 They said, you know, MIT is an institution, and that institution is an American institution. And so being very clear about that, I thought was totally the right move for MIT. And, you know, I think that YC needs to be a similar, you know, an institution of similar character. I like that. What do you wish founders knew about sales coming in?
Starting point is 00:30:00 Oh, how hard it is. And I mean, you know, like it or not, you know, the ideal founder is someone who has lived like 20 lifetimes and has the skills of 20 people. And the thing is, you know, you can't get that. And so probably the first conference that we had, the first mini conference we have when we welcome the batch in is the sales mini conference. And essentially, it is don't run away from the no. Spencer Skates of Amplitude has this great analogy that he told, you know, some companies when he came by to speak recently that I've been thinking a lot about, which is sales is about, you know, having a hundred boxes in front of you and maybe five or six of those boxes has a gold nugget in them. And if you haven't done sales before, you think I really, I'm going to gingerly, in a very gingerly way, open that first box and hope, hope, hope that, you know, I have a gold nugget. And then, you know, I almost don't want to know that there isn't a golden nugget in there. Like, I'm so afraid of rejection.
Starting point is 00:31:09 It's sort of remarkable how often high school and family and, you know, the 10,000 hours of human training people get from their childhoods comes up in Paul Graham's essays. I always think about that because I think that most people's backgrounds just don't prepare them for sale it's a very unnatural thing to do sales but then the sooner that you acquire those skills like the more free you become and what spencer says about those hundred boxes is instead of like being incredibly afraid of you know getting an f or you know nothing's going to happen to you just like flip open all hundred boxes immediately and then you know you should aggressively try to get to a no and um you know you'd rather get a no so you can spend less time on that
Starting point is 00:31:58 lead, and you can get onto the next one. I mean, I think that's a very interesting example of the mindset shift that you can read about, but it takes a village: you sort of need to be around lots and lots of people for whom that has been true. And I think that's actually one of the reasons why YC startups are much more successful. Other people give as much money, or, as you said, VC firms tend to give a lot more money. I mean, there are clones of YC right now that give twice as much money, for instance. But I don't think that they're going to see this level of success, because they're not going to have people as earnest, who become as formidable, around you. Like, it's actually a process.
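Spencer's hundred-boxes analogy can even be run as a toy expected-value simulation. This is purely illustrative: the time costs, the box counts, and the `nuggets_found` function are invented for the sketch, not real sales data.

```python
import random

def nuggets_found(strategy, n_leads=100, n_nuggets=5, budget=150, seed=0):
    """Count gold nuggets found within a fixed time budget.

    'timid'   -- spend 10 units courting every lead before hearing yes/no.
    'qualify' -- spend 1 unit pushing for a fast no, then 9 more units
                 working only the leads that said yes.
    """
    rng = random.Random(seed)
    boxes = [True] * n_nuggets + [False] * (n_leads - n_nuggets)
    rng.shuffle(boxes)
    time_left, found = budget, 0
    for has_nugget in boxes:
        cost = 10 if strategy == "timid" else (10 if has_nugget else 1)
        if time_left < cost:
            break
        time_left -= cost
        found += has_nugget
    return found

# The timid rep burns the whole budget on the first 15 boxes; the
# qualifying rep flips open all 100 and works every nugget it finds.
print("timid:  ", nuggets_found("timid"))
print("qualify:", nuggets_found("qualify"))
```

At this budget the fast-no strategy finds every nugget, while the timid one never gets past the first fifteen boxes, which is the whole point of aggressively trying to get to a no.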
Starting point is 00:32:49 It's so interesting to me, because as you're saying that, something strikes me about the simplicity of what you're doing. And then also, like, Berkshire Hathaway. Everybody's tried to replicate Berkshire Hathaway, but they can't. Yeah. Because they can't maintain the simplicity. They can't maintain the focus. They can't do the secret sauce, which obviously has a lot to do with Charlie Munger and
Starting point is 00:33:11 Warren Buffett. And with you guys, it has a lot to do with the founders that you attract and you can bring together. But you have billions of dollars effectively trying to replicate it. Nobody's able to do that. I think that's really interesting. And it's not like you're doing something that's super complicated. It doesn't sound like it unless I'm missing something.
Starting point is 00:33:28 It's a very simple sort of process to bring the people together. And obviously there's filtering, and you guys are really good at doing that. I mean, my hope is, I feel like when Paul and Jessica created YC, I went through the program myself in 2008, and I came out transformed. And that's very explicitly what I want to happen for people who go through the batch today. It isn't just show up to a bunch of dinners and network with some people. It's much deeper than that. Like, I want people to come in, maybe with the default worldview, and then I want
Starting point is 00:34:11 them to come out with a very radically different worldview. I want someone who is much more earnest, someone who is not necessarily trying to sort of hack the hack. And I think this mirrors what you're saying, what the late Charlie Munger talked about and what Warren Buffett talks about: in the short term, all of these things are popularity contests. But in the end, all that matters is the weighing machine. So you can raise your Series A, you can throw amazing parties, TechCrunch can write about you,
Starting point is 00:34:51 all these Twitter anons can fete you as like the next greatest thing and you could get you know hundreds of thousands of followers on X or whatever but you know at the end of the day you look down and did you create something of great value like did you with your hands
Starting point is 00:35:09 and you know did you assemble people and capital and you know create something that you know when all is said and done solve some real problem, put people together, is there real enterprise value? And that's the weighing machine.
Starting point is 00:35:27 And the way that YC makes money, the way that the founders make money, it's all aligned at that point. Yeah, there's a way to hack the hack. And I don't really know what the end game is on the other stuff. It's just very short term. Whereas on a 5, 10, 15, year basis. Like, if you are nose to the grindstone earnestly working on the thing, you
Starting point is 00:35:56 know, you will succeed. Like, I think that that's what Paul Graham's essay about being a cockroach actually is. And, you know, that's why 25% of the people who reach some form of product market fit at YC do it in year five or later. It's like they don't quit year one. They don't quit year two. Like, you know, they are learning and growing. I have one other really crazy stat that, like, I I'm thinking about all the time right now. There's a VC, actually. His name is Ali Tamasab. He works at Data Collective.
Starting point is 00:36:27 He wrote a book called Super Founders. And I get this email from him out of the blue. He says, did you know that about 40% of the unicorns from the last 10 years in the world were started by multi-time serial founders? And it was like, okay, that's a cool stat. Like, makes sense. Like, multi-time founders are, you know, they know a lot more. They have networks, they have access to capital.
Starting point is 00:36:51 Like, that's not a surprising stat. You know, if anything, it's a little surprising that it's only 40%. Like, you would have guessed maybe that was 80, but the thing he said after that really shocked me. He said, did you know that 40, you know, of those 40%, 60% of those people, the people who created unicorns the last 10 years, are YC alumni. Oh, wow.
Starting point is 00:37:15 So I'm like, that's crazy. Like, I'm really glad that YC exists now. Because, you know, even if, you know, YC today is basically a thing that is for first timers, you know, we do have second timers apply. We have, we do accept them. But, you know, we primarily think of the half a million dollars. You know, it really is for people who are starting out. And it's kind of hilarious. Like, I have no product right now for people who are, you know, for my YC alums. And maybe that's okay, you know. It's, you know, that's our gift to the rest of San Diego. Hill Road because, you know, they're the ones who are going to be the fund returners for all of the rest of Sand Hill Road. Would you say, like, in terms of personal characteristics, it sounded like determination was definitely one of the most important outside of the company or venture. What are the other personal sort of skills or behaviors or characteristics that people have that you say you would think correlate to the, not only the successful first time, but second, third,
Starting point is 00:38:17 for yeah i mean the the number one thing that i want um that comes to mind for me is uh i mean maybe it's even surprising because that's not a word that you might associate with silcom valley founders i think of the word earnest so what does ernest mean like incredibly sincere i think basically what you see is what you get like you're not trying to be something else it's like authentic but like you know even humble in that respect right like i'm trying to trying to do this thing. The opposite, I mean, and it's surprising because, you know, I don't know if people associate that with Silicon Valley startups, but I see that in the founders that are the most successful and most durable. I see it in Brian Armstrong at Coinbase, like, and which
Starting point is 00:39:04 is fascinating because that's definitely not the trait that you would apply to most crypto founders. And, you know, I would use Sam Bankman-Fried as sort of the opposite of that. Like, you Brian Armstrong is an incredibly earnest founder who literally read the Satoshi Nakamoto white paper and said this is going to be the future and let me work backwards from that future. When you talk to him, the reason why he wanted these things comes directly out of his own experience. I mean, at Airbnb, they were dealing with the financial systems of, you know, myriad countries. And it's like international, just sending money from one kind of. country to another was totally fraught and totally not, you know, something that was accessible
Starting point is 00:39:52 to normal people. Like remittance is this crazy scam. It's insane, like how many fees that people have to pay just to like send money home or do cross-border commerce, right? So this is something that was incredibly earnest of Brian Armstrong to do. He said, here's a thing that is broken in the world that, you know, he saw personally. I think he spent time in, you know, Buenos Aires and Argentina. And he saw hyperinflation and he said, you know, this is a technology that solves real problems that I have seen hurt people.
Starting point is 00:40:25 And I know that this technology can solve it. And then after that, he's just like nose to the grindstone working backwards from that thing that he wants to create in the world. And, you know, it's no surprise to me. I mean, there were many years in there that I think our whole community were looking at, we were looking at someone like Sam Bankin-Fried and just wondering, like, what's going on over there. He speed ran this sort of money power fame game to an extreme degree, so much so that he stole customer funds to do it. And that was the answer.
Starting point is 00:40:57 Like that's anti-Ernest. That is the definition of he was a crook. He's in jail now. And, you know, my hope is that people who look, you know, if you just look at Brian Armstrong versus SBF, I'm hoping that, you know, young people listening to this right, now take that to heart it's like the things that actually win you know i mean i and going back to buffet i you know i went to um their uh you know sort of conclave in omaha oh you went to the woodstock
Starting point is 00:41:29 for capital yeah yeah it was uh i mean amazing and uh i think those guys are by definition extremely earnest you know i don't think it's an affectation i think it's like it's like legit and serious like those guys did everything you know What is it, it's their thing, right? It's, you know, work on high class problems with high class people. Like, I mean, that's very, very simple. You know, just do it the right way, right? Yeah.
Starting point is 00:41:59 And so that's what I want. I think that if YC is the shelling point for earnest, friendly, ambitious nerds to steal something from, you know, I have a friend on Twitter who goes by Visa, VisaCon. And, you know, he has a whole book on it. I think it's called Friendly Ambitious Nerd, if you look it up. I mean, I think that that's what YC, by definition, should be attracting. And, you know, Brian Armstrong is like the best found, one of the best founders I've ever met and gotten the chance to work with and fund.
Starting point is 00:42:36 And I think the world desperately needs more people like that. Where, you know, in the background, just like consistent doing the right thing, trying to attract the right people like chop wood carry water that's it he also took a big stand before it became popular that the workplace is
Starting point is 00:42:57 like a performance place it's not you don't bring all of your politics and all that stuff in but he did that at a time when it was courageous like it was really he was one of the first people out of the gate and he took so much flack for that yeah
Starting point is 00:43:12 and I remember vindicated now I know but I remember reading like his thing and I was like oh this is great but like why are we why are we even pointing this out you know like and then he got like I read the stuff online and I was like this is crazy that's the media environment right I thought it was interesting anyway that he came out and did that and I think where it relates to the earnestness is only somebody who's really comfortable with themselves and like trying to do good in the world could really come out and take that stand at that point in time yeah that's true leadership yeah what's the biggest
Starting point is 00:43:45 unexpected change you've seen in building companies in the AI world I think the biggest thing that is increasingly true and we're seeing a lot of examples of it in the last year is blitzscaling for AI might not be a thing what's blitz scaling so I think Reid Hoffman wrote a whole book about it it was definitely true in the time of Uber so you know that was sort of a moment when interest rates were descending And then these sort of international, increasingly international marketplaces, these sort of, you know, offline to online marketplaces like Uber in cars or delivery or you could say Instacard, DoorDash, you could throw in, you know, lift.
Starting point is 00:44:30 There was sort of this whole wave of, you know, sort of the top startups were marketplace startups. But also in software, too, this idea that, you know, scale could be used as a bludgeon, that the network effects grow, you know, sort of exponentially, and then because you could have access to more and more capital, whoever raised more money would have won. And I feel like that was extremely true in that era, sort of the 2010s. And then in the 2020s, especially by, you know, we're in the mid-2020s now.
Starting point is 00:45:04 I think that we are seeing incredible revenue growth with way fewer people. And that's very remarkable. We have companies basically, you know, going from zero to six million dollars in revenue in six months. We have companies going from zero to $12 million a year in revenue in 12 months, right? And with under a dozen people, like usually five or six people. And so that's brand new. Like this is the result of large language models and intelligence on tap. And so that's a big change.
Starting point is 00:45:42 I think we are seeing companies that in the next year or two will get $250 million a year in revenue really with maybe 10 people, maybe 15 people tops. And so that was relatively rare, and my prediction would be this becomes quite common. And my hope is that's actually a really good thing. This is sort of the silver lining to, you know, you know, what has been really a decade of big tech, right?
Starting point is 00:46:16 Like, it's more and more centralized power. You know, what might happen here is that, you know, and what we're actively trying to do at YC is we hope that there, you know, are thousands of companies that each can make hundreds of millions to billions of dollars and give consumers an incredible amount of choice. And we hope that that will be very different than sort of this, the opposite, I think, was increasingly true. We have fewer and fewer choices in operating systems,
Starting point is 00:46:46 in web browsers, and across the board, just more and more concentration of power in tech. Two thoughts here. One, like, how much do you think that cloud computing plays into that? Because now I don't have to buy $6 billion in infrastructure to be that five-person company. I can rent it based on demand. So that's enabled me not to compete on.
Starting point is 00:47:09 a capital basis. Yeah, that was true. That was even why Y Combinator in 2005 could exist. You know, I remember working at a startup in 1999, 2000 or at like internet consulting firms. And these were like million dollar projects because you had to actually pay $100,000 or hundreds of thousands of dollars to Oracle. You had to pay hundreds of thousands of dollars to your Kolo, to like rack real servers. So the cost of even starting a company was just huge. Yeah, I mean, I remember Jeff Bezos actually launched AWS at a YC startup school at Stanford campus in 2008, right? When I was starting my first company. So I think, you know, cloud really opened it up and that, you know, that's part of the reason why startups could be successful.
Starting point is 00:48:01 You know, you didn't need to raise $5, $10 million just to rack your server. And, you know, that's the other big shift. I think in the past, it was very, very common to have, you know, Stanford MBAs or Harvard MBAs be the CEO, and then you would have to go get your hacker in a cage. You had to, you know, get your CTO. And, you know, there was sort of that split. And then now what we're seeing is, you know what, like the CEO of the majority of YC companies, they are technical. Is this the first revolution, like technological revolution where the incumbents have a huge advantage? You know, I think they have an advantage, but it's not clear to me that they are conscious and aware and like at the wheel enough to take real advantage of it because they have too many people.
Starting point is 00:48:59 Right. And then it's all, I mean, I think this is what founder mode is actually about. So last year we had a conference with Brian Chesky, we invited our top YC alums there. We brought Paul and Jessica back from England, and we had this one talk that wasn't even on the agenda, but I managed to text Brian Chesky of Airbnb, and I got him to come and speak very openly and honestly in front of a crowd of about 200 of our absolute top alumni founders. and he spoke very eloquently and in a raw way about how your company ends up not quite being your own unless you are very explicit. Like, you know, I, this is actually my company.
Starting point is 00:49:50 I am actually going to have a hand and a role to play in all the different parts of this company. I'm not going to, you know, basically the classic advice for management is hire the best people. people you possibly can and then give them as much rope as you possibly can. And then somehow that's going to result in, you know, good outcomes. And then I think in practice, and this is sort of the reaction that is turning out to create a lot of value across our community, certainly. But I think the memes are out there and it's actually changing the way people are running businesses. It's sort of a shade of what you were saying earlier with Brian Armstrong. like you know you can sit back and allow your executives to sort of run amok and you know if the founder and
Starting point is 00:50:39 the CEO does not exercise agency, then it's actually a political game, and you have fiefdoms that are fighting it out with one another, and the leader is not there. Then you enter the situation where neither the leader nor the executives have power or control or agency. Everyone's disempowered, everyone is making the wrong choice, retention is down, you're wasting money, you have lots and lots of people who are working either against each other or not working at all. And that's, I think, a pretty crazy dysfunction that took hold across arguably every Silicon Valley company, period. And it still mainly holds at quite a few of those companies, actually. Though I think
Starting point is 00:51:34 people are aware now that that's not the way to run your company. Are the bigger companies sort of like shaping up or not? The way that I think about this analogy is sort of like if I'm the young skinny kid and I'm competing against the fat, bloated company, I want to run upstairs. It's going to suck for me, but it's going to suck way more for them. Right. I think this is maybe a function of um you know blitz scaling and uh using capital as a bludgeon like gone wrong um you know you can look at um you know almost any of these companies um they probably hired way too many people and at some point they were viewing smart people as uh you know maybe a hoarded resource that you know if you were playing um some sort of adversarial uh you know starcraft and you didn't want
Starting point is 00:52:26 You know, the ironic thing is like they themselves were not using the resources properly either, right? They just didn't want somebody else to have them. Exactly. I guess it felt like a little bit of a prisoner's dilemma because I think the result is that, you know, tech progress itself decelerated. You have like the smartest people of a generation basically retired in place, working at places that, you know, the world is actually full of problems. like why are people sort of retired in place pulling down, you know, insane, by average American standards, absolutely insane salaries to build software that, you know, doesn't change, doesn't get better. You know, I mean, sometimes I sit there and I run into a bug into, you know, whether
Starting point is 00:53:16 it's a Google product or an Apple product or, you know, Facebook or whatever. I'm like, this is an obvious bug and i know that there are teams out there there are people getting paid millions of dollars a year to make some of the worst software and it will never get fixed because there's no way like you know people don't care no one's paying attention yeah that's just one symptom out of a great many that is you know the result of uh yeah i don't know basically treating people like you know hoarded resources instead of like they should you know the world is full of problems let's go solve those things. When it comes to AI, the raw inputs, I guess if you think about it that way, are sort of the LLM, then you have power, you sort of have compute, you have data. Where do you think incumbents
Starting point is 00:54:04 have an advantage and where do you think startups can successfully compete? Yeah, I mean, we had a little bit of a scare, I think, last year with AI regulation that was potentially premature. Sure. So, you know, there was sort of a moment maybe a year or two ago. And you sort of see it in the shades of it did make it into, say, Biden's EO. These sort of, you know, passed a certain amount of, you know, mathematical operations. Like that's banned or not banned, but, you know, we require all of this extra regulation. You have to report to the state. Like, you better get a license. You know, it's, that felt like the early versions of potentially regulatory. capture where, you know, they wanted to restrict open source, they wanted to restrict, you know, the number of different players, you know, sitting here a year after a lot of those attempts, I feel pretty good because it feels like there are five, maybe six labs, all of whom are competing in a fair market trying to deliver models that, you know, honestly any startup, anyone you know any of us could just you know pick and choose and you know there's no um monopoly danger
Starting point is 00:55:21 there's no uh you know crazy pricing power that one person one entity uh wields over the whole market and so i think that that's actually really really good um i think it's a much fair playing field today and then i think it's interesting because it's an interesting moment i think that um you know Basically, there's a new Google-style sort of oligopoly that's emerging around who provides the AI models. But because it probably won't be a monopoly, that's probably the best thing for the consumer and for actually every citizen of the world. Because you're going to have choice. Let's go deeper on the regulation and then come back to sort of competition. how would you regulate AI
Starting point is 00:56:12 or how do you think it should be regulated or do you think it should be regulated? It's a great question. I guess there are a bunch of different models that I could see happening. You know, I think what's emerging for me is that the two things that I think, I think the first wave of people
Starting point is 00:56:30 who are really worried about AI safety, not to be flippant, but like my concern is that they basically watch Terminator 2. And I'm like, I like that movie too, but you're right now, you know, there's sort of that moment in the movie where they say suddenly the AI becomes self-aware
Starting point is 00:56:51 and it becomes, you know, it takes agency, right? And I think the funny thing, at least as of today, you know, these systems are, it's just matrix math. And there is no agency yet. Like, there's basically, they're equivalent to incredibly smart toasters. And some people are actually kind of disappointed in that. And personally, I'm very relieved, and I hope it stays that way. Because that means that there's still going to be a clear role for humans in the coming decades.
Starting point is 00:57:29 And, you know, I think it takes the form of two very important things. One is agency. I mean, people often ask, like, what should we be teaching our kids? And, you know, the ironic thing is we send them to a school system that is not designed for agency. It is literally designed to take agency away from our children. And maybe that's a bad thing, right? Like, we should be trying to find ways to give our children as much agency as possible. That's why I'm actually personally pretty pro-screens and pro-Mindcraft and Roblox.
Starting point is 00:58:05 and giving children like this sort of playground where they can exercise their own agency. Have you tried synthesis tutor? Oh, yeah, yeah, yeah. I'm a small personal investor in them. And I think that we're just scratching the surface on how education will actually change. But that's a great example.
Starting point is 00:58:24 Like those synthesis is designed around trying to help people help children actively be in these games that increase instead of decrease agency. And it's crazy, so it teaches the kids' math. And my understanding just from reading it a little bit is El Salvador just replaced the K-5 math with synthesis tutor, and the results are, like, astounding.
Starting point is 00:58:48 Incredible. Yeah, it's way better. I mean, the kids get involved, and they're obviously invested in it. The regulation question is really interesting, too, because it begs the question of it's a worldwide industry. Yeah. And so regulating something in one country, be it the United States or another country, doesn't change what people can do in other countries,
Starting point is 00:59:08 and yet you're competing on this global level. Yeah, I think the biggest question around it is, of course, I mean, the existential fear is, like, where are all the jobs going to go? And then my hope is that it's actually two things. One is, like, I think that robotics will play a big key role here, where I think that if we can actually provide robots to people that do real work for people,
Starting point is 00:59:38 that will actually change people's sort of standards of living in like fairly real ways. So I think universal basic robot is relatively important. You know, I think some of the studies coming back about UVI have not, you know, universal basic income where you just give money to people. It's just not really resulting in a different... I think they've never read a psychology textbook.
Starting point is 01:00:01 I mean, just going away from the economics of it, people need to feel like they're part of something larger than themselves and if they don't feel like they're part of larger than something like they're contributing to something they're part of a team they're bigger than what they are as a person then it leads to all these problems yeah exactly and then I think that we really need to actually give
Starting point is 01:00:26 everyone you know on the planet some real reason why this stuff is actually good for them right like i think if if there is only sort of a realignment without a material increase in people's day-to-day livelihoods and you know their quality of life like maybe we're doing something wrong actually and left to its own devices like it's you know it's possible so i don't know what the specific things are but i think that that's what it would look like you know if if regulation were come to come into play or there was some sort of of realignment in reaction to, you know, the nature of work changing, that would be the outcome
Starting point is 01:01:11 that, you know, the majority of people, if not all people, like, see the benefit in some sort of direct way. And if we don't do that, then there will be unrest. I think that that's one of the criteria, you know, I don't have the answer, but I think that that's sort of one of the things I'd be on the lookout for. Tim's new scrambled egg loaded croissant, or is it croissant? No matter how you say it. Start your day with freshly cracked scrambled eggs loaded on a buttery, flaky croissant. Try it with maple brown butter today at Timms, at participating restaurants in Canada for a limited time. At what point do you think the models start replacing the humans in terms of developing the models? So like at what point of the models doing the work of the humans in Open AI right now?
Starting point is 01:01:54 And they're actually better than the humans at improving the model. Yeah, we're not there yet. So there's some of evidence that synthetic data is working. And so, some people believe that synthetic data is, you know, where the models are like sort of self-bootstrapping. So just to explain to people, synthetic data is when the model creates data that it trains itself on? That's right. And so I guess the other really big shift is actually test time compute. Like literally, O1 Pro is this thing that you can pay $200 a month for. And it actually just spends more time at the sort of query level.
Starting point is 01:02:31 It might come back, you know, five minutes, ten minutes. later. But it will be much more correct than sort of the, you know, predict next token version that you might get out of, you know, standard chat GPT. Yeah, from what I can tell, that's where a lot of the wilder things might come out. You know, level four AGI as defined by OpenAI is innovators. So we have, you know, lots of startups, both YC and not YC, that are trying to test that out right now. They're trying to apply the latest reasoning models from Open AI that are about to come out, you know, like 03 and 03 mini. And they're trying to apply them to actually, you know, scientific and engineering use cases. So, you know, there's a
Starting point is 01:03:23 cancer vaccine biotech company called Helix that did YC a great many years ago. But what they've figured out is they can actually hook up some of these models to actual wet lab tests. And that's something that I'd be keeping track of over the next couple years. If only by applying dollars to energy that then goes into these models, will there be real breakthroughs in biological sciences, like being able to do new processes or come to a deeper understanding of, you know, whether it's, you know, cancer or cancer treatment or, you know, anything in biotech, you're the first experiments of those, of that sort that's happening in the next year. Even, you know, in computer-aided design and manufacturing, I mean, there's a YC
Starting point is 01:04:20 company called CAMFER that is trying to apply. They actually were one of the winners of the recent YC-O-1 hackathon we hosted with OpenAI and their winning entry was literally hooking up O1 to airfoil design so being able to increase the sort of lift ratio just by applying you know you spend more time thinking about this and it's able to create a better and better airfoil
Starting point is 01:04:50 given a certain number of constraints so you know obviously these are like relatively early in toy examples, but I think it's a real sort of optimistic point around how do we increase the standard of living and push out like sort of the light cone of all human knowledge, right? Like, you know, that is like a fundamental good
Starting point is 01:05:15 for AI, you know, between that and the inroads it might make in education. These are like some real, you know, white pill things that I think are going to happen over the next 10 years. And these are the ways that AI becomes not, you know, sort of Terminator 2, but instead like, you know, sort of the age of intelligence, as you know, Sam pointed out in a recent essay. Like I think that if we can create abundance, if we can increase the amount of knowledge and know-how and science and technology in the world that solves real problems. And, you know, I don't think it's going to happen
Starting point is 01:05:54 on its own like you know each of these examples are there's you know frankly a YC startup like right there on the edge trying to take these models and then apply them to domains that you know it's kind of like you know Google probably could have done what Airbnb did but it didn't because Google's Google right and so in the same way I think that whether it's open AI or anthropic or meta's lab or deep seek or some other lab that wins like I think that we're going to have a bunch of different labs and they're going to serve a certain role like pushing forward human knowledge that way and then you know my white pill version of what the world I want to live in is one where you know our kids or really any kid um with agency can get access to a world class
Starting point is 01:06:44 education, can get all the way to the edge of, you know, what humans know about and are able to do, are able to, like, sort of affect, and then, you know, sort of empowered by these agents, empowered by ChatGPT or Perplexity or, you know, whatever agent. You know, it's going to look like Her from the movie, right? Like, we're going to have these, you know, basically superintelligent entities that we talk to. Um, I'm hoping that they don't have that much agency. You know, I'm hoping that actually they are just, like, sort of these inert entities that are your helpers. And if that's true, like, that's actually a great scenario to be in. You know, that's the future I want to be in.
Starting point is 01:07:26 Like, I don't want to be, I don't think anyone wants to be, sort of, you know, to borrow a term from Venkatesh Rao, like, I don't think any of us want to be under the API line of, you know, these AIs, right? Like, and I think that really passes through agency. The minute a robot can do laundry, I'm in, I'll be the first customer. There are YC companies and many startups out there that are actively trying to build that right now. My intuition is that immediate progress could come from just ingesting all of the academic papers that have been done on a certain topic and either disproving ones that people think are still correct, and thus cutting off research on top of something that's not likely to lead to anything, or making connections, because nobody can read
Starting point is 01:08:18 all these papers and make the connections and make maybe the next leap, right? Like not the quantum leap, but like the next logical step. Who's doing that? I mean, that's inevitable. And then someone listening here might want to do it. And then in which case they should apply to YC. And maybe you should, we should do a joint request for startup for this next YC badge. I like it.
Starting point is 01:08:39 I want equity, though. All right. But it's also interesting because then you think about that and you're like, if I'm a I'm funding research. That research should all be public because I want people to be able to take it, ingest it, and make connections that we haven't made. And it seems like a lot of that research these days is under lock and key. So you get this data advantage in the LLMs where some LLMs buy access or steel access or whatever have access to it and then some don't. How do you think about that from a data access LLM quality point of view? Hmm, it's a good question. I mean, yeah, it's a bit of a gray area these days. I mean, I'm not all the way in. I don't actually run an AI lab, even though, you know, and I was not actually at one way.
Starting point is 01:09:22 You run the meta AI lab. Yeah, that's right. Not the meta, AI lab. Not meta the company, but like meta as in all of them. Yeah. That's a good question. I guess the funniest thing, my main response to all of that around like provenance of the data, is, at some point, like, it feels like it actually is fair use, though.
Starting point is 01:09:46 I mean, that's all the way into case law. Yeah. Well, here's another interesting twist on this then. Like, so the airfoil, they design this new airfoil. Is that patentable? I mean, at least in terms of, like, generated images, my understanding is generated images are not copyrightable. But if AI generates not only the science behind it, maybe,
Starting point is 01:10:07 like, we're at a point where, you know, maybe in the next couple years, is doing more science than we've done. Right. Is that going to be copyrightable or patentable or sort of like withheld or is that public access, public knowledge now? Well, my intuition would say people are just going to take the outputs of, you know, these AI systems. And as far as I know, you can submit a patent and there's not a checkbox yet that says, like,
Starting point is 01:10:34 was this, did you use AI as a part of it? Why wouldn't, here's another startup idea for anybody listening that we both want on why wouldn't somebody just read all the patent filings in the US and be like make the next logical step for me and patent that like attempt to just patent it yeah like a one person company could literally like ingest the US patent database and be like okay here's the innovation in this what's the next quantum leap or the next even the next step that's patentable okay automatically file and you're funded I'm in I got two ideas there I love those I don't know I think these are all totally open and fair game and then I guess maybe going back to regulation that's
Starting point is 01:11:16 one of the stranger things that is happening right now you know one of the pieces of discourse out there during the AI safety debates like in the last year for instance are about bioterror and you know the wild thing is you know basically possessing um instruments of creating bio weapons is already illegal. So do you really need special laws for a scenario that are already covered by laws that exist? I mean, that's just like my sort of rhetorical question back when people are really, really worried about bioterror. You know, I think there's a funny example where AI safety think tanks were in Congress and they were sort of, you know, going to chat GPT and you know, typing in sort of a doomsday example
Starting point is 01:12:08 and it spits out this, you know, kind of like an instruction manual on like, well, you'd need to do this, you'd have to acquire this, you know, here's this thing you would do in the lab. And, you know, of course, like those steps are illegal. And then I think a cooler head prevailed in that, you know, the rebuttal was someone next went to Google, entered the same thing
Starting point is 01:12:30 and got exactly the same response. So, you know, yes, like I've seen terminate or two as well, you know, am I worried about it? You know, my P-Doom score is 1%. Like, I'm not totally, you know, unworied, right? It would be a mistake to completely dismiss all worries. It would also potentially be worse to prematurely optimize and basically make a bunch of worthless laws that slow down the rate of progress and prevent things like better
Starting point is 01:13:04 cancer vaccines or better airfoils or, you know, frankly, like, you know, nuclear fusion or like clean energy or better solar panels or engineering manufacturing methods that are better than what we have today. I mean, there's so many things that technology could do. Like, why are we going to stand in the way of it until we have a very clear sense? Like, that is actually what we need to do. What does scare you about AI? I mean, it's brand new, right? So the risk is always there. It's so funny though, I mean, I'm not unafraid. On the other hand, like, you know, this principle of you can just do things still applies to computers, right? Like, if the system becomes so onerous, like, maybe you would go and, like, let's shut down the power systems.
Starting point is 01:13:55 Let's shut down the data centers themselves. Like, why wouldn't people try to do that, right? And they might do that. And, you know, I think that... People try to do that every day now. Right. Before AI. Right.
Starting point is 01:14:07 If it became that bad, like, you know, I'm sure there would be some sort of human solution to try to fix this. But, you know, just because I read about the Butlerian jihad in the Dune series doesn't mean that I need to live like that's what's going to happen. So you don't believe there's going to be one winner that dominates, like Open AI or Anthropic or, it might still happen right um you know i think that there are lots of reasons why it won't happen right now but you know who's to say everything is moving so quickly like i think that you know these questions are the right questions to ask i just don't have the answers to them like i know but you're the person to ask it's like asking like uh i guess will windows or mac win or you know we're just literally living through that time where very very smart people are you know fighting over the marbles
Starting point is 01:14:59 right now totally and then to me though, like working backwards, the best scenario is actually one where we have lots of marble vendors and you get choice and nobody has sort of too much control or, you know, a cornering of all the resources. What's your read on Facebook almost doing a public good here and spending, you know, I think it's over 50 billion at this point and just releasing everything open source? Yeah, I think that, you know, what Zuck and Ahmad and the team over there are doing is, frankly god's work i think it's great that they're doing what they're doing um and i hope they continue what would you guess is the strategy behind that it's kind of funny because uh my critique on meta would be you know um they very openly make everyone they put in everyone's faces right like you can't use facebook or instagram without or even WhatsApp without seeing like hey meta has AI now
Starting point is 01:15:57 but um the funniest thing is like i'm very surprised that they don't think about sort of like the basic product part of it. Like I went to Facebook Blue App recently and I was going to Vietnam and I just wanted to say, okay, meta-AI, you're so smart. Tell me my friends in Vietnam. And it didn't know anything about me. I'm like, this is some basic rag stuff. Like, I get it.
Starting point is 01:16:18 Like, you're already spending billions of dollars on training these things. How about, like, you know, spend a little bit of money on like the most basic type of, you know, retrieval augmented generation for me and my, you know, like, it's, They're just sort of sprinkling it in, and it's a little bit of a checkbox. So, you know, I'm a little bit mystified, right? Like, if they were very unified about it, I would really get it, right? Like, clearly, the way that we're going to interface with computers is totally going to change. What Anthropic is doing with computer use is, you know, I think that, you know,
Starting point is 01:16:51 what I've heard is basically every major lab is probably going to need to release something like that, whether it's an API the way Anthropic has or literally built-in, to the you know the um you know runtime that you run on your computer like there's going to be a layer of intelligence like you can sort of see the shade of the very very dumb version of it from apple and apple intelligence it's like sort of sprinkling in uh intelligence into notifications and things like that but i think it's virtually guaranteed that the way we interface with computers will totally change in the next few years um you know the rate of improvement in the models you know as of today all the smartest things that you might want to do
Starting point is 01:17:38 there's still actually things that you have to go to the cloud for and then that opens a whole can of worms but there's some evidence that you know in the frontier research of you know the best AI labs it's pretty clear that there's sort of parent models and child models and so there's distillation happening from the frontier very largest models with the most data and the most intelligence down into smarter and smarter tiny models. There's a claim this morning that a 1.5 billion parameter model, I think, got 84% on the AIME math test, which is like 1.5 billion parameters is like so small that it could fit on anyone's phone. Yeah. So, and that was like deep seek R1 just got released this morning. So hasn't been verified yet, but I think it's super
Starting point is 01:18:32 interesting. We are literally day to day, week to week, learning more that these intelligent models are going to be on our desktops, in our phones, and we're right at that moment. So is the model better? Is the LLM better? What makes that model so successful with so few parameters? Oh, I don't know. I haven't tried it yet. But, you know, I mean, some of it is you can be very specific about what parts of the domain you keep. Okay. And then, you know, I guess, you know, math might be one of those things that just isn't, you know, it doesn't require, you know, 1.5 trillion parameters.
Starting point is 01:19:11 It takes $1.5 billion to do an 84% job of it, which is pretty wild. I mean, that's another weird thing of AI regulation. You know, I think Biden, for instance, his last E.O was sort of this export ban and deep seek is a Chinese company releasing these models open source and I believe that they only have access to last generation Nvidia chips and so you know some of it's like why are we doing these like measures that like may not actually even matter it's interesting right because you think of constraint being one of the key contributors to innovation yeah by limiting them you also maybe enable them to be better because now they have to work around these constraints
Starting point is 01:19:56 or presumably I have to work around them. I doubt they're actually sort of working around. That sounds right. I mean, I think the awkward thing about AI regulation is there's something like $4 billion of money sloshing around think tanks and AI safety organizations. And, you know, someone was telling me recently, like, if you looked at on LinkedIn for some of the people in these sort of giant NGO morass of think tanks, sorry if people are part of that and getting mad at me right now hearing this but uh you know there's a lot of people who went
Starting point is 01:20:31 from you know bioterror safety experts to like you know one one entry right you know right above that in the last even six or nine months they've become AI bioterror safety experts and i'm not saying that's a bad thing but it's just you know very telling right like any time you have billions of dollars going into you know a thing maybe prematurely um you know people have to justify what they're doing day to day and I get it so many rent seekers I want to foster an environment of more competition within sort of like general safety constraints but I don't I don't think we're pushing up against those safety constraints to the point where it would be concerning but we also operate in a worldwide environment where other people
Starting point is 01:21:16 might not think the same way about safety that we do and then it's almost irrelevant what we think in a world where other people aren't thinking that way and it can be used against us I think we're going into a very interesting moment right now with, you know, the AI czar is Sri Ram Krishna, who, you know, used to be a general partner at Andresnehorowitz. And I think that that's a very, very good thing. Like, we want people who have the networks into people who have built things, who have built things themselves, you know, as close to that as possible. And, you know, I think that it is actually a real concern that the space is moving so quickly. quickly that, you know, if it takes legislation two years to make it through, that might be too slow. And so it's sort of even more important that the people who are close to the president and the people who are in the executive branch, at least in the United States, like they should be able to respond quickly, whether it's through an EO or other means. I don't know what it's like in the States, but in Canada, I was looking at the Senate the other day and I was just trying to,
Starting point is 01:22:20 like, is there anybody under like 60 in the Senate kind of thing? Does anybody under, like, does anybody understand technology or they all grow up in the world where, you know, Google became a thing after they were already adults. And it strikes me that there's a difference, you know, the pace of technology improvement versus the pace of law, but also, or regulation, but also the people that are enacting those laws don't tend to, they have a different pace as well, right? Like, our kids are in a different world. Like, my kids don't know what a world without AI looks like. Neither do yours. Yeah. But we do. because we're similar age
Starting point is 01:22:55 and then our parents have this other thing where it's like well we used to have landline phones and like all of these other things and it strikes me that those people should maybe not be regulating you know AI that sounds right I mean I think it's more profound now than ever before
Starting point is 01:23:11 I mean the other thing that's really wild to think about is it's I what comes to mind is that meme on the internet where like there's the guy at this dance it's like you know that everyone else is dancing and they're in the corner and it's like
Starting point is 01:23:26 they don't know if you go almost anywhere in the world people maybe have heard of chat GPT they definitely haven't heard of anthropic or clod it just hasn't touched their lives yet and then meanwhile the first thing they do is they look at their
Starting point is 01:23:45 smartphone and they're using Google and they're addicted to TikTok and things like that so do you think we get to a point where And this is very, like, Ender's game, if I remember correctly, in the movie, where, you know, you pull up an article on a major news site, and I pull up an article on a major news site. And at the base, it's sort of like the same article, but now it's catered to you and catered to me based on our political leanings or what we've clicked on or what we watched before. Well, my hope is that there's such a flowering of choice that, you know, it's going to be your choice, actually. I mean, the difficulty is like, well, then you have a filter bubble,
Starting point is 01:24:24 but that exists today with social media today. Okay, so here's a white pill that I don't know if it's going to happen, but I hope it happens. You know, one of the reasons why it's so opaque today is literally that, you know, X has, you know, or X or, you know, before it was called Twitter, and Twitter had thousands of people working at that place and you needed thousands of people maybe right or I guess the tricky thing is like Elon came in
Starting point is 01:25:00 and quickly acts like 80 or 90% of the people and it turns out you didn't need 80 or 90% of the people so that's like another form of founder mode taking hold but like it or not I can't go into Twitter today and tool around with my 4U like my 4U is written for me right it's in some server someplace and there's a whole infrastructure thing yeah you don't control it but it's conceivable you know today with code gen you know today engineers are basically you know writing code about 5 or 10x faster than they would before
Starting point is 01:25:37 and that sort of capability is only getting faster and better like it's sort of conceivable that you should be able to just write your own algorithm and maybe you'll be able to just write your own algorithm and maybe you'll be able to, you know, run it on your own, you know, and you'll want choice. And so, you know, the kind of regulation that I would hope for is actually open systems, right? Like, I would want to actually write my own version of that. Like, I don't want, the best version of that is actually, like, I want to see an, you know, I maybe want to see my 4U algo, like, very plainly. And then I want to be able to see if I can convert that into the one that I want. or I can choose from 20 different ones.
Starting point is 01:26:20 Two ideas here, you know, as you're mentioning, that one, like, your list could be your default. Like, I want this list to be up. But the other one is, like, maybe there's just 20 parameters, and you get to control those parameters. And it could be, you know, you could consider it political as one parameter from left to right. Right. You could be, like, happy, sad.
Starting point is 01:26:39 Like, you could sort of filter in that way. I know that'd be super interesting. So, I mean, if regulation is coming, like, give me open systems an open choice, and that's, you know, sort of the path towards liberty and, you know, sort of human flourishing. And then the opposite is clearly what's been happening, right? Like Apple, you know, closing off the iMessage protocol so that, you know, it's literally a moat. Like, oh, no, like that person has an Android, so they're going to turn our really cool blue chat into a green chat. We don't talk to those people do. Yeah, right. I know, right? I mean, that's just a pure example
Starting point is 01:27:16 of, you know, Apple, even today, still, you know, they're opening it up a little bit more with RCS, but, you know, it's, those are actually in reaction to the work of Jonathan Cantor and the DOJ. Yeah. So there are efforts out there that are very, very much worth our attention around reigning in big tech and reining in the ways in which, like, these sort of subtle product decisions only make money for big tech. and they reduce choice and ultimately reduce liberty. It would be super interesting to be able to have an advantage if you're big tech and you're a company and you come up with this,
Starting point is 01:27:58 but have that advantage erode automatically over time in the sense that you might have a 12-month lead. But what you're really trying to do is foster continuous. Like if you're a government and you're trying to regulate, it's like I don't want to give you a golden ticket. I want you to have to earn it and you can't be complacent. so you have to earn it every day. And so, yeah, maybe you have like a two-year window on this blue bubbles
Starting point is 01:28:21 and then you have to open it up. But now you've got to come up with the next thing. You've got to push forward instead of just coasting. Like, Apple really hasn't come up with a ton lately. Yeah. And then I think the reason why it's so broken is actually that government ultimately is, you know, very manipulatable by money. Yeah.
Starting point is 01:28:44 And, you know, that's sort of the world we live in. Do you think that'll be different under Trump? I don't tend to get into politics here, but so many people in the administration are already incredibly wealthy. Oh, yeah. That's the hope. I mean, we're friends with a great many people who are in the administration. We're very hopeful and we're, you know, wishing them.
Starting point is 01:29:02 We're hoping that really great things come back. And, you know, in full transparency, like, I think I was too naive and didn't understand how anything worked in 2016. That's not what I was saying in 2016. I was fully an NPC in the system. But also, that being said, I'm a San Francisco Democrat, so I really have very, very little special knowledge about how the new administration is going to run,
Starting point is 01:29:31 except that I really am rooting for them. I'm hoping that they are able to be successful and to make America truly great. Like I am 100%, even though I didn't vote for Trump, I am 110% down for making America truly awesome. What do you believe about AI that few people would agree with you on? It might be that point that I just gave you. I think that a lot of people are hoping that the AI becomes self-aware or have agency.
Starting point is 01:30:05 And from here, the kind of world we live in will be very different if somehow the, you know, literally AI entities are given, you know, maybe the line is actually, will we have an AI CEO? Like, will we have a company that just like literally gives in to, you know, whatever the central entity says, like, that's what we're going to do. Every problem, you know, it's sort of the exact extreme opposite of founder mode. It's like AI mode. Like, will we live in a world in the future where, you know, corporations decide, like, you know, what, a human is messy and kind of dumb and doesn't have a trillion token context window and won't be able to do what we wanted to do. So we would trust in AI and an LLM-based
Starting point is 01:30:56 consciousness more than a human being. I'd be worried about that. I was thinking about this last night, watching the football game actually. And I was like, why are humans still calling players? Like, yes, for coaching. But calling players in the game, an AI, I feel like at this point with like 01 Pro or something. We'd be ahead of where we are as human. I'm wondering if a team should try that. That'd be super interesting. Oh, that's going to be the next level of money ball then.
Starting point is 01:31:24 We'll just try it in preseason, right? Or try it in a regular season game. I don't know, but it strikes me that like they would know who's on the field, who's moving slower than normal. Like all these a million more variables than we can even comprehend or compute or end historical data. You know, the last 16 weeks this team has played. You know, when you run to the.
Starting point is 01:31:44 right after they just subbed or something like they can see these correlations that we would never pick up on not causation but correlation and it'd be super fascinating yeah i mean what's funny about it is um i think in in those sort of scenarios you might just see a crazy speed up because um of human effects i mean when you look at organizations and how they make decisions um so many of them, you know, there's sort of like a Straussian reading of them. There's sort of like at the surface level, you're like, I want to do X. But like right below that is actually something that is not about X. You know, for a corporation, it has to be like we have a fiduciary duty to our shareholders and we need to maximize profit, for instance. And then right
Starting point is 01:32:31 below that, you know, corporations or, you know, entities of any set of people, like they do all sorts of things not for reason X on the top it's actually like oh actually um you know the people who are really in power uh you know don't like that person or you know they rub them the wrong way or or human yeah exactly right it's like these are like extremely influential yeah uh systems your idea might be best but i'm going to disagree because it's your idea not my idea right and then i think that's why in general we really hate politics inside um companies um because, you know, it sort of works against the collective. Do you think we'd ever see a city, like a mayor, then first, before even a CEO, as like an AI mayor?
Starting point is 01:33:18 You know, I guess, like, now that we're sitting here thinking about it, it's like sort of conceivable, but, you know, in sort of all of these cases, I would much rather there be a real human being. Kind of like a plane, right? Like, we want a physical pilot, even though the plane is probably better off by itself. Yeah, that's right. And that might be what. what ends up happening. Even if 90% of the time you're using the autopilot, you always need a human in the loop. And I'd be curious if that turns out to be one of the things that society learns. One of the crazier ideas I've been talking to people about that I feel like
Starting point is 01:33:55 would be a fun sci-fi book. It would be just speculation playing out on how this interacts with nation states. like China obviously is run by a central committee and arguably Xi Jinping you know seemingly if you had ASI you would only want you know sort of the central committee to have it and so that might turn into like a very specific form of that you know China might end up having one ASI that is totally centrally controlled and then everything else about it you know sort of
Starting point is 01:34:34 comes out of that and then you might end up with you i mean controversially like i think often they're trying to be benevolent right like if you spend time in china it's incredibly clean it's you know i'm sure there's all sorts of crazy stuff that happens that is quite unjust you know i have no idea it's not really even my place to like uh argue one way or another um what what it's like to be in china but um that's an interesting idea it's like you know that society probably, you know, unless there's other changes there, like, that's, you can sort of count on a single artificial superintelligence, like sort of setting the, how everything works over there. I mean, probably internal to the Politburo itself, you know, they're going to have to have
Starting point is 01:35:21 all these discussions about what do we do with this ASI and who gets to, you know, where does the agency, the ultimate agency of that nation come from? Going back to something you said earlier, I think the ultimate combination of least for right now, is human and machine intelligence working in concert where the machine intelligence might be the default and then the human ops out. And that's exercising judgment. It's like, no, we're not going to. And when you look at chess, that tends to be the case where the best players are using computers, but they know when, oh, there's something the computer can't see here or there's an opportunity that it just doesn't recognize. And I think it was Tyler Cowan,
Starting point is 01:35:59 you said that. He had a word for it mixing the technology. fascinating yeah and then yeah the question is like well what how does america approach it like potentially it's much more laissez-faire and then in that case like my argument would be like the most american version of it is that like you know you and i have our own asi and like each you know each citizen should be you know issued an asi and be taught how to get the most out of it and you know maybe it needs to be embodied with a robot like we should all you know we should all be superman in that in that sense and that would be like the most um empowered version of a society that of like free and uh you know free people created equal right and then
Starting point is 01:36:41 you know there might be other versions and you're i mean i'd be curious like you know what's the european version of it maybe that version has you know all the check marks and like oh is you know every decision has to be uh you know was this AI assisted or not and like let's check the provenance on like yeah you know how that that AI was like trained and i mean i don't know there There are all of these different, there's like a billion different ways all of these different governments are going to sort of approach this technology. What are the smartest people at the leading edge of AI talking about right now? I mean, you know, the hard part is like I spend most of my time not with those people.
Starting point is 01:37:20 I spend most of my time with people who are commercializing it. So the very, very smartest people are clearly the people who are in the AI labs actually actively doing you're sort of creating these models um but you know sort of the uh the people who I know who are in those rooms I mean sounds like test time compute is really it um you know that's the reasoning models are sort of the thing that will really come to come to bear this year like we're sort of under you know understanding that right now um you know for now it sounds like pre-training might have hit some sort of scaling limit, the nature of which I don't understand yet.
Starting point is 01:38:03 There's a lot of debate about it. Will there be new 4-0-style models that have more data or more compute? And seemingly, there's just rumors of training runs gone awry that basically the scaling laws may have petered out, but I don't know. So we have sort of like the LLM, we have the reasoning,
Starting point is 01:38:25 the LLM and the reasoning model are different, correct? The way OpenAI talks about 01, they're sort of connected, but like different steps. Okay. And so we have progress there. Then we have progress with the data, and then we have progress with inference. Yep. Well, we just don't have enough GPUs, really.
Starting point is 01:38:44 Like, you know, I think what's funny is, like, I'm still pretty bull on Nvidia and that they more or less have the monopoly on, you know, sort of the best price performance. So you think this is going to continue? Well, the demand for trillions of dollars of investments in AI. Basically, I think you can live in two different worlds. One world says, like, all of this is hype. We've seen AI hype before. Like, it's not going to pan out. And then I think the world that we're spending a lot of time in,
Starting point is 01:39:19 where the world really wants intelligence. And then the scary version of this is that, yes, some of it actually is labor displacement. In the past, what tech would do is sell you hardware: a computer on every desk, a smartphone for everyone. Or we'd sell you Microsoft Office, packaged software, Oracle, SQL Server. Or we'd sell you SaaS apps like Salesforce at $10,000 per seat per year. Or, classically, Palantir was selling
Starting point is 01:40:00 million dollar or $10 million ACV, you know, very specific vertical apps, right? And so all of those things are selling software or hardware, and that's like selling technology. And so increasingly what we're starting to see is like, you know, especially the bleeding edge is probably customer support and all of the things that you would use for a call center. Those are sort of the things that are already so well defined and specified. And there's a whole training process for people in, you know, usually overseas to do these jobs.
Starting point is 01:40:40 And AI now is just coming in. The speech-to-text and text-to-speech are indistinguishable from human beings now. And you can train these things: the evals are good, the prompting is good. Going back to what we were saying earlier, what we're seeing, like it or not, is AI actually replacing labor. Has anybody created an AI call center from scratch that is now taking on customers?
Starting point is 01:41:16 Yes. I funded a company in this very current batch called Leaping AI. They are working with some of the biggest wine merchants in Germany, which is fascinating. And that's another fascinating thing: these things speak all the top human languages very, very well and are indistinguishable. I think 80% of the ordering volume for some of their customers is entirely no-human-in-the-loop. I would love to see government call centers go to this. A, it would scale so much better. I was on hold for like three hours the other day for a 15-minute question I needed answered.
Starting point is 01:42:02 And it could be done so much quicker by something that's not a human, and probably more securely, more reliably, and more consistently, regardless of who's on the other end or how they're talking. Okay. How would you define AGI? I guess the funniest thing is that Microsoft, I think, is defining it as when it gets its $100 billion back. But I'm sort of skeptical of that, because then I think basically only Elon Musk would qualify as a human general intelligence. The thing is, in a lot of domains it feels like AGI is here, actually. Can it have a conversation with someone, give incredibly good wine pairing recommendations, have an interaction that's perfectly fine, indistinguishable from a real human or even better than human, and also take orders for very expensive wine and have that just work? Yeah.
Starting point is 01:43:06 Yes. That's happening right now. Yeah. So I think it's here in a lot of domains. This is sort of the year where maybe five or ten percent of things are hitting the Turing test and really satisfying it. But maybe this is the year it goes from 10 to 30 percent, and the year after that it doubles again. The next few years are actually the golden age of building AI. Totally. I'm super optimistic, at least for the next five years, about the things we'll discover, the progress we'll make, and the impact we'll have on humanity and a lot of the things that plague us.
Starting point is 01:43:45 I want to get into how you use AI a little bit. What do you know about prompting that most people miss? I mean, I'm mainly a user, but I spend a lot of time with people who spend a lot of time in prompts. Probably the person I would most point people to is Jake Heller. He's the founder of Casetext. He was one of the first people to get access to GPT-4, and we think of him at YC as the first man on the moon, in that he was the first to successfully commercialize GPT-4 in the legal space. What he said was that they had access to GPT-3.5, and it basically hallucinated too much to be used for actual legal work. Lawyers would see one wrong thing and say, oh, I can't trust this. GPT-4,
Starting point is 01:44:38 he found, with good evals, could be programmed into a system that would actually work. What he figured out was that if GPT-4 started hallucinating on them, they were doing too much work in one prompt. They needed to take the thing they asked GPT-4 to do and break it up into smaller steps, and they found they could get deterministic output from GPT-4, like a human, if they broke it down into steps. Oh, interesting. And what he needed to do is sort of equivalent to Taylorist time-and-motion studies in factories; it feels like that's what he did for what a lawyer does. Let's say you have to put together a chronology
Starting point is 01:45:37 of what happened in a case. And he's a real-life lawyer, which is sort of unusually perfect for figuring out this prompting step. He realized that he needed to look at what a real lawyer would do and literally
Starting point is 01:45:53 replicate that, Taylorist time-and-motion style, in the process and prompts and workflow. So for instance, to do this type of summarization, a lawyer would have to go through and read all the materials. This is apparently why lawyers have
Starting point is 01:46:12 many, many different colored little flags and highlighters and things like that: they get very good at doing a read-through, paragraph by paragraph, sentence by sentence, pulling out the things that are relevant and then synthesizing them. Early versions of Casetext, and I think a lot of it today, are still just doing that: take a specific thing that a human does and break it down into the very specific steps that a real human would take. And then, basically, if it breaks, you're asking that step to do too many things, so break it down into even smaller steps. And somehow that worked. Basically, this is the blueprint that I think a lot of YC companies and AI vertical SaaS
Starting point is 01:46:57 startups are doing across the whole industry right now. They literally model out what a human would do in knowledge work, break it down into steps, and then have evaluations for each of those prompts. Then, as the models get better, because you have what we call the golden evals, you just run the golden evals against the newest model. GPT-4o comes out, Claude 3.5 comes out, DeepSeek comes out: you have evals, which are basically a test set of prompt, context window, data, and output. And what's funny is that it's even fuzzy that way. You can even use LLMs in the evals themselves to score them and figure out,
Starting point is 01:47:45 you know, does it make sense? Can you give us an example of an eval, to make it tangible for people? Oh, yeah. It's really straightforward. It's just a test case, right? Given this prompt and this data, evaluate the output, and it usually maps directly to something that is true/false or yes/no, something that is pretty clear. Let's say there's a deposition and someone makes a certain statement. You might have a prompt that is like:
Starting point is 01:48:17 is what this person said in conflict with any of the other witnesses? I don't know, I'm totally making this example up, but this is the kind of thing you can do at a very granular level. You might have thousands of these, and that's how Jake Heller figured out he could create something that would basically do the work of hundreds of lawyers and paralegals, and take a day or an afternoon instead of three months of discovery. That's fascinating. How do you use AI with your kids? Oh, I love making stories with them. What I find is that o1 Pro is actually extra good now. So yeah, actually, there's an interesting thing happening right now, and I saw it up close
Starting point is 01:49:10 and personal this morning, looking at some blog posts about DeepSeek R1, which is DeepSeek's reasoning model. I was reading Simon Willison's blog post about getting DeepSeek R1 running; it's one of the first open-source versions of the reasoning models. And what we just described, how Jake Heller broke things down into chains of thought to make Casetext work, it turns out that maps to basically how the reasoning stuff works. So the difference between what Jake did with GPT-4 when it first came out, and what o1 and o1 Pro are maybe doing, and what DeepSeek R1 is clearly doing, because it's open source and you can see it, is that those steps,
Starting point is 01:50:02 breaking it down into steps and the sort of metacognition of whether it makes sense at all of those micro-steps, that's where, in theory, this reasoning is actually happening. That's happening in the background for o1 and o3, and if you use ChatGPT you'll see the steps, but it's like a summary of it, right? I only saw it this morning; this is such new stuff. I was hoping that someone would do an open-source reasoning model just so we could see it. And that's what it was.
Starting point is 01:50:40 I think Simon's blog post this morning showed: here's a prompt, and then he could actually see, I think he said, pages and pages of the model talking to itself. Literally: does this make sense? Can I break it down into steps? So what we just described as a totally manual process, something a really good prompt-engineer CEO like Jake Heller did,
Starting point is 01:51:05 and he sold his company Casetext for almost half a billion dollars to Thomson Reuters, is actually very similar to what the model is capable of doing on its own in a reasoning model. That's what it's doing with test-time compute: it's just spending more time thinking before it spits out the final answer. So how do you create a competitive advantage in a world like that, where perhaps that company had an advantage for a year or two and now, all of a sudden, it's built into the model for free? Yeah, I mean, I think ultimately the model itself is not the moat.
Starting point is 01:51:47 I think the evals themselves are the moat. I don't have the answer yet; basically, for now, maybe it's a toss-up. If you're a very, very good prompt engineer, you will have far better golden evals, and the outcomes will be much better than what o3 or DeepSeek R1 can do on their own, because it's specific to your data and it's much more in the details. But that remains to be seen. The classic thing Sam Altman has told YC companies, and most startups, period, is that you should count on the models getting better. So if that's true, this might be a durable moat for this year, but it might not last past it. We haven't even seen o3 yet, and the results seem fairly magical, so it's possible that advantage goes away as soon as this year. But all the other advantages still apply. One thing that a lot of our founders who are getting to $5 to $10 million a year in revenue with five people in a single year are saying is: yes, there's prompting, there's evals, there's a lot of magic that is sort of mind-blowing, but what doesn't go away is building a good user experience,
Starting point is 01:53:10 building something that a human being who does that for a job, sees that, knows that's for me, understands how to start, knows what to click on, how to get the data in. And so, you know, one of the funnier quips is that, you know, the second best, software in the world for everything is using chat GPT because you can basically copy and paste almost any workflow or any data and it's like the general purpose thing that you know you can just drop data into it and it's the second best because the first best will be a really great UI made by a really good product designer who's a great engineer who's a prompt engineer who actually creates software that doesn't require copy paste. It's just like link this,
Starting point is 01:54:01 link that. Okay, now this thing is now working. And so I think that those are the moats are not different actually at the end of the day. You still have to build good software. You still have to be able to sell. You have to retain customers. You have to, but you just don't need like a thousand people for it anymore you might only need six people okay i want to play a game i'm gonna you have 100% of your network you have to invest it in three three companies oh god okay and so uh the first company you have to invest half and then 30 and then 20 so altogether 100% which companies out of the big tech companies how would you allocate that between here's here's my biggest bet my second biggest bet my third from today going forward okay i guess you know is it
Starting point is 01:54:51 cheating to say I'd put even more money into the YC funds that I already run, but that's a cop-out. That's a cop-out. That goes without saying. I think that it's very unusual just because, you know, we end up, like this is the commercialization arm of every AI lab is what I realize. But short of that, I mean, maybe Nvidia, Microsoft meta. In that order? Probably. Why? I mean, Nvidia just, you know, has an out and out. Like, for now, they're just so far ahead of everyone else. I mean, it can't last forever, but I think that, you know,
Starting point is 01:55:28 the demand for building the infrastructure for intelligence in society is going to be absolutely massive and maybe on the order of the Manhattan Project and we just haven't really thought about it enough, right? Like, it's entirely conceivable. Like, if, say, like, level four, innovators turns out to work like you know it's sort of the meta project because then it's like the Manhattan project of instantiating more Manhattan projects yeah actually like you know you could imagine if we can if if more test time compute or you know you could do the work of you
Starting point is 01:56:08 know 10,200 IQ Einstein's working on bringing us you know basically unleashing limited clean energy. Yeah. Like, that alone will, I mean, if anything, like, that's probably the bigger problem right now. Like, we know that the models will continue to get better. We know that, you know, the demand for intelligence will be unending. And then, you know, even going back to the robotics question, it's like if we end up making
Starting point is 01:56:40 universal basic robotics, you know, the limit will still actually be, you know, sort of the climate crisis and the available energy available to human beings, right? And, you know, maybe solar can do it, but maybe there are lots of other sort of solves. But, you know, I think energy and access to energy is sort of the defining question at that point. Like, everything else you could solve, like, and everything else you could sort of either, you know, if it's in the realm of science and engineering, like, you know, in theory, between robots and more and more intelligence
Starting point is 01:57:23 we could figure these things out, but not if we run out of energy. Okay, why Microsoft and why Meta? I mean, I think Microsoft just has really, really deep access to OpenAI. You said public companies, so I think there's a non-zero,
Starting point is 01:57:43 pretty large percentage of the market cap of Microsoft that is predicated on Sam Altman and the team at OpenAI continuing to be successful. Totally. And then why Meta? I mean, I think Meta is sort of the dark horse,
Starting point is 01:57:58 because they are amassing talent, and they have crazy distribution. And I would just never count Zuck out. It's super smart that he's always thinking about what is
Starting point is 01:58:17 the next version of computing, like so much so that he'd probably put more money than he should have into AR, and that was maybe premature. He might still end up being right there, but, you know, AI for a fraction of what he's put into AR is likely to push forward all of humanity and, you know, and accelerate technological progress in a really profound way. I want to switch subjects a little bit. A few years ago, you met with Mr. Beast. Oh, yeah. And talked about YouTube. What did you learn?
Starting point is 01:58:50 Because your channel changed. Oh, yeah, he was great. I mean, he was very brusque with me. He said: look, man, your titles suck and your thumbnails are even worse. I think he spent so much time trying to understand the YouTube algorithm and what people want that he just loaded it completely into his brain. What makes a good title? I think it's clickbait,
Starting point is 01:59:14 unfortunately and this is the thing when you're trying to make smart content it's actually kind of tricky because you don't want necessarily more clicks you want more clicks from people who are smart we title our episodes
Starting point is 01:59:32 differently on YouTube usually than on the actual audio feed because if you want YouTube to pay attention you have to almost be more provocative intentionally that sounds right yeah like we could call this you know, AI ends the world or something. Yeah, that's right. Give people to watch, but that's not actually what we're talking about at all.
Starting point is 01:59:51 What makes a good thumbnail? What did you learn about thumbnails? Oh, usually a person looking into the camera seems to help a lot. Okay. And then you want it to be relatively recognizable: you want some sort of style to it when someone sees it. Basically, what I was doing at the time was just taking whatever frame was kind of representative and throwing it in there. But when you train someone
Starting point is 02:00:21 to look at YouTube back to back to back, every time your video shows up you want to be highly recognizable. So you want a distinct thumbnail. Like yours, with the red overlay. Yeah. But once I stopped posting so regularly, it sort of didn't matter as much anymore. If you're going to post very regularly, though, that's pretty important. So yeah, unfortunately, it's clickbait. And then there's an interesting interaction: yes, you can optimize thumbnails and titles for click-through, but if they have absolutely nothing to do with the actual body, as you mentioned, you will not get watch time, and then YouTube will say, oh, people aren't watching this, so we're not
Starting point is 02:01:08 going to promote it because the big thing about youtube is discovery yeah and like we noticed this all the time where it's sort of like you just get this audience but you don't get to keep the audience as a creator which is really interesting well you do if you are uh regular and then the other hack is uh be very shameless about asking for subs and then the funniest thing is like subs do very little actually um there's no guarantee that you show up in um people's feeds if someone subs it like helps a little bit um liking helps more watch time helps the most and then And the extreme, like, you know, over-the-top hack that, you know, probably you should do here is you should ask for the like, subscribe, and hit the bell icon. Because if you hit the bell icon and they have notifications on, that's the only thing that is almost as good as having their email address and emailing them.
Starting point is 02:02:06 You heard it here, people. Garry just told you: you've got to click like, subscribe, and hit the bell icon, because you want knowledge. You want to be smart, and this is the place to get it. Oh, I love that. Thank you. Good advertising. Yeah. I want to ask just a couple of random questions to wrap up here. What are some of the lessons you learned from Paul Graham that you apply or try to keep in mind all the time? I think the number one thing, which is very hard, and you can see it and read it in his essays, is to be plain-spoken and to be hyper-aware of artifice, of bullshit, basically. Don't let bullshit creep in. It creeps in here and there. Sometimes I'm in danger of caring too much about the number of followers I have and things like that, whereas actually I shouldn't be worried about that.
Starting point is 02:03:09 what I should be worried about is, and I spend a lot of time with our YouTube team and our media team at YC talking about this. It's like, if we get too focused on just view count, we're liable to just, yeah, like optimize for the wrong audience. If we're not being authentic to ourselves or, you know, if we're just trying to like follow trends or, you know, do things that get clicks, it's like, that's not helpful to them either. Like, then we're just on this treadmill, right? yeah basically like trying to be very very high signal to noise ratio you know the thing that I probably struggle with most and you know I don't know maybe some of the listeners here might feel this it's like sometimes I think out loud and then you know really really great
Starting point is 02:03:56 ideas are not like thinking out loud; they're actually figuring out a very complex concept and then trying to say it in as few words as possible. The amount of time Paul spends on his essays is fascinating. It's sometimes days, sometimes weeks. He'll just iterate and iterate and send it out to people for comment. And the amount of time he spends whittling down the words, trying to combine concepts and say the most
Starting point is 02:04:33 with the least number of words, would shock you. And also: writing is thinking. One of the more surprising things that we do a lot of at YC is help people spend time thinking about their two-sentence pitch. You would think that's startup 101: helping people with their pitch sounds so basic, and sure, that's what an incubator would do. But the reason it's so important is that it's actually almost like a mantra. It's like a special
Starting point is 02:05:11 incantation. You believe something that nobody else believes, and you need to be able to breathe that belief into other people, and you need to do it in as few words as possible. The joke is: oh,
Starting point is 02:05:27 yeah, what's your elevator pitch? But you might run into someone who could be your CTO, who could introduce you to your lead investor, who could be your very best customer, and you will literally only have that much time; you will only have time to get two sentences in. And even then, I guess it's kind of fractal. That's what I love about a really great interview.
Starting point is 02:05:49 Someone comes in and I'm like: oh, yes, I get it. I know what it is, I know why it's important, and I know why I should spend more time with you. That's what a great two-sentence pitch does. And knowing what it is is very hard; that's all of Paul Graham's editing down and whittling down in a nutshell. People do really complex things; how do you say what you do in one sentence? That's very hard, actually. And then the second sentence is: why is it important? Why is it interesting? And that may well change with the person you're talking to. So yeah, to the degree that clear communication is
Starting point is 02:06:32 clear thinking, you know, one of the things I did when I first joined YC, I had no intention of ever becoming an investor, ever being a partner, let alone running the place. Like, I was just a designer in residence. And what I did was I did 30-minute, 45-minute office hours with companies in the YC Winter 11 batch sitting in back then as an interaction designer. I used an Omnigraffle a lot. And so we just sat there and designed their homepage. And it's like, this is what the call to action should say.
Starting point is 02:07:04 Put the logo here. Here's the tagline. Maybe you have a video here, or right below it a "how it works." And what's funny is that some people would take the designs we did in those 30-to-45-minute sessions, and that would be their whole startup,
Starting point is 02:07:22 and they'd sell those companies for hundreds of millions of dollars years later, which is just fascinating to think about. Clear communication, great design, creating experiences for other people: all of those exercise the same skill. And that's what a founder really is. A founder, to me, is a little bit different from what you might expect. It's not: oh, this is someone with a firm handshake who looks a certain way and bends the will of the people.
Starting point is 02:07:51 Like you might think of an SBF that's like, that's all artifice. Like, think about that guy. Like that guy was like full of shakes and like the guy was like on meth, right? Like the guy was, you know, everything about it was an affectation. Right. Like, he was a caricature of, like, an autist, right? Like, we see very autistic, incredibly smart engineers all the time. But, you know, for him, it was like, that was part of the act.
Starting point is 02:08:17 Yeah. I remember he did a YouTube video with Nas Daily. And Nas is great, I love Nas Daily, but I couldn't believe the video that SBF went on. It was just full of basically bullshit, right? The exact opposite of Brian Armstrong. Yeah, we're always on the lookout for that. He wasn't trying to fool you.
Starting point is 02:08:39 What's that? Oh, yeah, I guess so. I mean, he was fooling the world. Because you know, right? It's hard to fool somebody who knows versus somebody who doesn't, and he wasn't trying to appeal to you. He was trying to appeal to other people who didn't know.
Starting point is 02:08:54 It's the same as going back to Buffett, just tying a few of these conversations together, right? Everybody repeats what Buffett says. But the people who actually invest for a living, or know Warren or Charlie, or have spent time with them, can recognize the frauds, because the frauds can't go a level deeper. They can't actually go into the weeds, whereas those guys can go from the one-inch level to the 30,000-foot level and everything in between, and they don't get frustrated if you don't
Starting point is 02:09:19 understand. Whereas with a lot of the fraudsters, one of the tells is that they can't traverse the levels, and they tend to get defensive or angry with you for not understanding what they're saying, which is really interesting. And I just want to tie the writing back to what you said: if you can't get it clear in two sentences, you might miss an opportunity. That goes to the 10-minute interview, right? Maybe it's not the perfect pitch you're looking for, but you want that level of clarity with people. And it's really the work of producing it that helps you hone your own ideas and discover new ones.
Starting point is 02:09:57 Yeah. I feel like we're in the idea fire hose, just hearing about all kinds of things that are very promising. And then the most unusual thing that I'm still getting used to, in full transparency, is that the median YC startup still fails, right? YC might be one of the most successful institutions of its sort that has ever existed, inclusive of venture capital firms, on the one hand. On the other hand, the failure rate is absolutely insane. It is still a very small percentage
Starting point is 02:10:39 of the teams actually do go on and you know create these you know companies worth 50 or 100 billion dollars but the remarkable thing is not that you know it's that low the remarkable thing is that it happens at all like it's just unbelievable that um i think you have the coolest job in the world or at least like warm that if i had to pick like the top 10 like you'd be up there i agree i mean it's uh especially to you know i pinch myself every day on the regular like in the morning i wake up and it's like oh this ai thing is happening and then somehow i'm filling the shoes of the person who like i mean sam altman probably brought forward the future by you know five years 10 years at least 10 years maybe like all of the things that you know him
Starting point is 02:11:31 him and Greg Brockman and all the researches he brought on, like, we're working on. That happened, that was going to happen, right? Like, I think there's a lot of the Sam Altman haters or the Open AI haters out there love to point out, like, oh, you know what? Like, the transformer was made by all these teams. I mean, some of them was like, these teams absolutely did incredible things. Like, you can't take away from that, right? The researchers did, you know, Demis did incredible things.
Starting point is 02:11:58 But at the same time, they believed a thing that nobody else believed, and they brought the resources to bear. Recently, Sam Altman came back to speak at our AI conference this past weekend, and I couldn't think of another way to start that conference than to have him. And a bunch of his old colleagues: we had Bob McGrew there.
Starting point is 02:12:25 We had Evan Morikawa, who was the eng manager who released ChatGPT. Bob McGrew actually worked with me at Palantir back in the day; he's OpenAI's outgoing chief research officer. Jason Kwon was there; he actually worked in YC's legal team before leaving to run a lot of things at OpenAI. And so I had them all stand up. And we had a room full of
Starting point is 02:12:48 290 founders, all of whom were working on things that happened essentially because OpenAI existed. And there was a standing ovation. Oh, that's awesome. And Sam, to his credit, said: not just us; these researchers did so many things as well. But all that being said, we're in the middle of the revolution. Oh, totally.
Starting point is 02:13:13 I mean, it's not even the middle. I think it's just after the first pitch of the first inning of what is about to be a great, great time for humanity and for technology. I'm with you. I'm so excited to be alive right now, so lucky, so blessed to be a witness to this. I think we're going to make so much progress on so many things. And to go back to the haters: there are always people pulling you down, but they're never people who are in the trenches doing anything. I've rarely seen people who are working on the same problem attacking their competition like that, or undermining them. On our end, we're just hoping to lift up the people who want to build, and this is the golden age of building. Amazing. I want to end with the same question we always ask: what is success for you? I think, looking back, growing up,
Starting point is 02:14:10 I always just looked up to the people who made the things that I loved. And Steve Jobs, Bill Gates, like the people who really created something from nothing. And I just think of Steve saying, you know, we want to put a dent in the universe, and ultimately that's what I want. Like, that's, you know, success to me is how do we bring forward? You know, actually, this is actually when Paul Graham came to recruit me to come back to YC, I had actually left and started my own VC firm, you know, got to $3 billion under management.
Starting point is 02:14:50 Yeah, you guys did Coinbase. Yeah, totally. I mean, returned $650 million on that investment alone. Um, you know, I was sort of right at the pinnacle of my investing, you know, as I was running my own VC firm, and Paul and Jessica came to me and said, Garry, we need you to come back and run YC. And it was really, really hard to walk away from that. Luckily, I had very great partners. Brett Gibson, my partner, my multi-time co-founder, went through YC with me. He actually built a bunch of the software with me at YC,
Starting point is 02:15:26 see, you know, before we left. He runs it now. They're off to the races and still doing great work. And, you know, I sat down with Paul and, you know, right after we shook hands and, you know, he's like, Gary, do you understand what this means? It means that, you know, if we do this right, we, you know, kind of like, I think, what Sam did with Open AI with, you know, pulling forward large language models and AI and bringing about AGI sooner like YC is sort of one of the defining institutions that is going to pull forward the future. And it's not more complicated than how do we get in front of optimistic, smart people
Starting point is 02:16:11 who, you know, have benevolent, you know, sort of goals for themselves and the people around them. How do we give them, you know, a small amount of money and a whole lot of know-how and a whole lot of access to networks and, you know, a 10-week program that hopefully reprograms them to be more formidable while simultaneously being more earnest. And then the rest sort of takes care of itself. Like, you know, this thing has never existed before like this. And it deserves to grow. Like, it deserves to, you know, if we could find more people and fund them and have them be successful at even you know the same rate we would do that all day i mean and i think what are the alternatives right like i think of all the people who you know they're locked away in companies
Starting point is 02:17:04 they're locked away in academia you know or heck like you know these days the wild thing about intelligence is like intelligence is on tap now right like all of the impediments to being able to all the impediments to fully realizing what you want to do in the world are starting to fall away like you know there's always going to be something that stands in the way of any given person and i'm not saying like those things are equal but they you know through technology and through access to technology those things are coming down like if there's the will if there's the agency if there's the taste like that's what i want for society And I want them to achieve that. In a lot of ways, we have more a quality of opportunity now than we've ever had in the history of the world, but not a quality of outcome. That's right. Yeah. And that's sort of the quandary, right? You have to choose. Do you want the outcomes to be equal, or do you want a rising tide to raise all boats? I'm a huge fan in equal opportunity, but unequal outcome. I'm with you.
Starting point is 02:18:13 Thank you for listening and learning with me. If you've enjoyed this episode, consider leaving a five-star rating or review. It's a small action on your part that helps us reach more curious minds. You can stay connected with Farnam Street on social media and explore more insights at fs.blog, where you'll find past episodes, our mental models, and thought-provoking articles. While you're there, check out my book, Clear Thinking. Through engaging stories and actionable mental models, it helps you bridge the gap between intention and action.
Starting point is 02:18:45 So your best decisions become your default decisions. Until next time.