a16z Podcast - Marc Andreessen on Why This Is the Most Important Moment in Tech History

Episode Date: January 29, 2026

Recently, Marc Andreessen joined Lenny Rachitsky on Lenny's Podcast. They talked about why 2025 may be the most significant year in tech history, how AI is reshaping the future of product managers, designers, and engineers, and what founders need to understand about building in this moment: from where moats actually exist in AI, to what the most AI-native companies are doing differently, to the skills Marc is teaching his own kids to thrive in what comes next.

Resources:
Follow Marc Andreessen on X: https://twitter.com/pmarca
Follow Lenny Rachitsky on X: https://twitter.com/lennysan
Check out Lenny's Podcast: https://www.lennysnewsletter.com/podcast

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Transcript
Starting point is 00:00:00 If we didn't have AI, we'd be in a panic right now about what's going to happen to the economy. Because what we'd be staring at is a future of depopulation, and, like, depopulation without new technology would just mean that the economy shrinks. My friend Larry Summers used to tell people, he said, the key for planning is, don't be fungible. He is an economist, and so that was the economist's way of saying it. That means essentially don't be replaceable. We're going to have AI coders that are actually better coders than the best human coders. I think we're going to have AI doctors that are better than the best human doctors.
Starting point is 00:00:25 I think we're going to have AI lawyers that are better than the best human lawyers. I think we're used to living in a world where we just don't understand how good good can get, because we've been capped by our own biology. And we're going to get to experience what it's like when you have the capability at your fingertips that's actually better than human in these domains. What if the most consequential technology shift in human history is happening right now, and most people are still debating whether it's real? In November 1989, when the Berlin Wall fell, few understood they were watching the end of one world order
Starting point is 00:00:52 and the beginning of another. The Cold War had lasted 44 years. Its collapse took weeks. Within a decade, a young programmer would find himself at the center of the next transformation, building a browser that brought the internet to everyone. Three decades later, that same person believes 2025 rivals those moments in magnitude. AI models have crossed from creative parlor tricks into genuine reasoning, solving problems in medicine, law, and science that seemed impossible just 18 months ago. But here's what's unsettling.
Starting point is 00:01:20 We don't yet know what this means for the people who build software. Product managers, engineers, designers. The roles that defined the last 30 years of tech face fundamental questions about their future. The optimistic view and the pessimistic view can't both be right, yet both have evidence. This conversation examines what's actually changing, what skills matter now, and how the most AI-native founders are building differently. Today, we're sharing a conversation between Lenny Rachitsky and Marc Andreessen from a recent episode of Lenny's Podcast. Marc Andreessen, thank you so much for being here. And welcome to the podcast.
Starting point is 00:01:55 Awesome, Lenny. Thank you. It's great to be here. I want to start with just a big picture question. I have a billion directions I want to go, but I think this is going to give us a little bit of a frame of reference. How big of a deal is the moment in time that we're living through right now? This is a very, very historic time. I think 2025 was maybe the most interesting year in my entire career and probably life. And I would expect 2026 to exceed that. Wow. That says a lot. Yeah, I've seen some stuff. So it feels like two things are happening. One is the trust that a lot of people have had in kind of what you could describe as kind of legacy
Starting point is 00:02:28 institutions around the world is, I think, in kind of full-scale collapse right now. By the way, there's a lot of data to support that. And so I think there's just, there's, like, a lot of structures and orders and institutions that people have just relied on for a long time that have just proven to not be up for the challenge. And then kind of corresponding with that is the national and global conversation has become, let's say, liberated. And so, you know, this sort of incredible revolution that we have in kind of, you know,
Starting point is 00:02:56 what I would describe as freedom of speech, freedom of thought, the ability for people to hopefully discuss things that maybe they couldn't discuss even a few years ago, you know, is just dramatically expanded. And I think that's now on a one-way train for just a much broader range of discourse. And then, you know, there's also just these, like, incredibly massive geopolitical shifts that are happening. And obviously, the U.S. is changing a lot. Europe is changing a lot. China's changing a lot. Latin America, by the way, is changing a lot. There are very dramatic, you know, events playing out down there right now. You know, kind of all over the world, like, I think a lot of assumptions are being pulled out into the daylight and re-examined.
Starting point is 00:03:28 And then it's kind of the fact that all these things are happening at the same time, right? And so you've got all of these countries and industries, you know, where things are kind of increasingly in upheaval, but you have AI as this kind of new technology that's going to really affect things. And then you've got, you know, people, you know, citizens, being able to fully participate, being able to argue things out. And so it's kind of like those three kind of big mega things are kind of all colliding at the same time. And I think we're probably at just the very beginning of all three of those. And those all feel like kind of, you know, historical, you know, moment shifts,
Starting point is 00:03:56 comparable in magnitude to maybe the fall of the Berlin Wall in 1989, you know, maybe the end of World War II, you know, kind of moments like that. It certainly feels like that. Good God. What a time to be alive. Yeah. In terms of the AI piece, which is where a lot of people are trying to figure out what to do,
Starting point is 00:04:15 what do you think isn't being priced in yet in terms of the impact AI is going to have on, say, the world or just people listening? I think at this point, it's pretty clear with, you know, our technology hats on that, like, this stuff is really working out, right? And so there was this, you know, the ChatGPT moment, you know, three years ago. It was only, by the way, only three years ago, right, was the ChatGPT moment. And the big question, the big question was, all right, this is like, we have machines now that can compose Shakespearean sonnets and rap lyrics and, like, you know, this is amazing. But then
Starting point is 00:04:45 there was, you know, there's a very big question, like, can you harness this technology for reasoning and for, you know, problem solving in domains that, like, really matter, you know, medicine and science and law and so forth. And, you know, it turns out the answer to that is yes, right? And, you know, the last 12 months, and especially even just the last three months, have really proven that, like, AI can really do this. Like, you know, you're seeing it all now. You know, AI is now developing new math theorems. You know, over the holiday break, it feels like
Starting point is 00:05:14 the AI coding thing, you know, really hit critical mass. And the world's best, you know, the world's best programmers, right, including, like, Linus Torvalds, you know, for the first time over the holiday break basically said, yeah, AI is now coding better than we can. And so that, you know, that's incredibly, incredibly powerful. And I think we all, you know, kind of assume that AI now is going to get really good at reasoning in any domain in which there are verifiable answers. And so, you know, that's going to include, like, many very important domains. So, like, the technology feels like it's moving fast and it's going to be working really well. I think the thing that
Starting point is 00:05:46 is not well understood. I think a lot of people in the industry have kind of what I would describe as this one-dimensional view, which is, okay, as a result of the technology now working, AI just kind of sweeps the world and changes everything. And I think that's kind of the wrong framework. I think it's based in an incomplete understanding of the world that we live in, or the world that we've been living in for the last, you know, 80 years. And I'd point to two things in particular. So one is, I think it's felt to us, like, in the U.S. and the West for the last, you know, whatever, 30 years or 50 years, it's felt like we've been in a time of great technological change. But actually,
Starting point is 00:06:21 if you look for actual evidence of that, like statistical evidence of that, analytical evidence of that, you basically can't find it. And in particular, economists have a way of measuring the rate of technological change in the economy, that is, productivity growth,
Starting point is 00:06:35 which we could talk about what that means, but basically it's sort of the mathematical expression of the impact of technology on the economy. And productivity growth for the last 50 years has actually been very low, not very high. So we all feel like it's been very high. There's been lots of technological change. What's actually happening is,
Starting point is 00:06:51 It's been very low. And in fact, the pace of productivity growth, like in the U.S., in my lifetime, in our lifetimes, has been running at about a half the pace that it ran between 1940 and 1970. And it's been running at about a third the pace that it ran between about 1870 and about 1940. And so statistically, in the U.S., in the West, technology progress in the economy, technology impact on the economy, has actually slowed way down. And so, you know, the AI thing is going to hit, but it's hitting an environment in which we have actually had almost no technological progress in the actual economy for a very long time. So we could talk about that. And then there's this other, like, just incredible thing that's happening, which is, you know, sort of the demographic collapse, right? It's a Western phenomenon, an increasingly global phenomenon, which is, you know, the rate of reproduction of the human species is in rapid decline. And, you know, there are many countries, you know, including the U.S., where, you know, the rate of reproduction is, you know, under two, meaning that many, many countries around the world, by the way, including China, which is a really
Starting point is 00:07:53 big deal, are actually going to depopulate over the next century. And so you have this kind of precondition that says there's actually been very little technological progress happening in the world, and the world is going to depopulate. And so AI is going to enter a world in which those two things are true. And I think this is incredibly important, because we actually need AI to work in order to get productivity growth up, which is what we need to get economic growth up. And we actually need AI to work because we're going to need machines to do all the jobs that we're not going to have people to do, because we're literally going to depopulate the planet over the next 100 years. And so I think the interplay of these factors is going to be much more interesting and, frankly,
Starting point is 00:08:31 more complex than a lot of people have been thinking. I'm going to follow this thread about kids. I know you have a kid. And one of my, my favorite lenses into how people think and what they value is what they're teaching their kids, what they're steering their kids towards. Are there specific skills or, I don't know, even careers that
Starting point is 00:08:51 and you know, yeah, we have a 10-year-old, and so, you know, we actually homeschool, and so we think a lot about this. So I think the way to think about the impact of AI on people, on specifically people as individuals, I think it's actually, you know, a lot of people just focus on kind of this, you know, kind of very, I would say, straightforward
Starting point is 00:09:09 and overly simplistic view of just literally job gains, you know, job losses, which we can talk about. But there's two specific things at the level of, like, an individual person or an individual kid. So I think it's pretty clear that AI is going to take people who are good at doing things, and it's going to make them very good at doing things, right? And so it's going to be a tool that's going to sort of raise the average kind of across the board. And, you know, look, you see that playing out already.
Starting point is 00:09:31 You know, anybody who's in a position where they need to, you know, write something or design something or write code or whatever, if they're pretty good at it today, they use AI and all of a sudden they're very good at it. And so there's sort of that aspect to it. And I think the way the education system writ large is going to teach AI is going to be based, you know, hopefully a lot on that. But then there's this other thing that's happening, which we're also starting to see, and we're really seeing it particularly in coding right now,
Starting point is 00:09:56 where the really great people are becoming, like, spectacularly great, right? And so, you know, use the term, think about, like, the super-empowered individual, right? So the individual who is, like, really good at coding, or really good at making movies, or really good at making songs, or really good at designing, you know, making art, or whatever those things are, or, you know, podcasting or, you know, hopefully venture capital. You know, if you're very good at it
Starting point is 00:10:24 and you can really harness AI, you can become spectacularly great. And, like, super productive, right? And, you know, I'm sure you have a lot of friends in this category as well, but, like, you know, the really, really good coders are experiencing this right now. My friends who are really good coders are like, oh, my God, all of a sudden, I'm not twice as good as I used to be, I'm like 10 times as good as I used to be. And so I think at the unit of, like, n equals one, of, like, an individual kid,
Starting point is 00:10:49 I think the question is kind of how do you get them into a position where they're kind of this super-empowered individual, such that they're going to be really kind of deep in whatever it is they're going to do, but they're going to be deep in a way that's going to let them fully use the power of AI to be not just great, but to be, like, spectacularly great. And I think that's going to be the real opportunity. And, you know, at least that's what we're shooting for. And what I heard there is essentially agency.
Starting point is 00:11:13 A word that we see on Twitter all the time: them not waiting for someone to tell them what to do, figuring out what to do. Yeah, yeah. So this term agency has become very, very, you know, very popular, certainly in California for the last couple of years. It's really interesting, because I had a lot of trouble with this early on. I'm like, agency, what are they talking about? And what they're kind of talking about is, like, you know, initiative, you know, willingness
Starting point is 00:11:37 to, you know, just do things. You know, what is it? Samo Burja has the great term live player. You know, you can be, like, a primary participant in events. And at first I was like, well, yeah, like, that's kind of obvious, right? Like, of course. And then I'm like, oh, actually, it's not so obvious anymore, because to your point, I think so much of our society is based on, like, there are all these rules.
Starting point is 00:12:02 And everybody gets taught kind of by default. You're supposed to follow all these rules, right? And then everybody, if you like break the rules, like everybody gets freaked out. It's like, oh, my God, he broke the rules. And so, like, we have somehow worked our way, our way kind of, you know, I don't know, psychologically, you know, kind of into a state in which I guess the natural assumption for a lot of people is, you know, the thing that you want to train kids to do is like follow all the rules.
Starting point is 00:12:24 And, you know, you could argue that, kind of, you know, for example, the school system, the K through 12 school system or whatever, has gotten kind of more and more focused on that over time. And it's like, yeah, no, you should actually, and again, especially at the unit of n equals one, like, of your kid. And look, there's something to be said here. I just had this conversation with my 10-year-old last night, actually. I rolled out the concept of, you know, in order to lead, you must first learn to obey, right? In order to,
Starting point is 00:12:49 you know, issue orders, you must learn how to follow orders. And, you know, you kind of try to keep him with some level of structure in his life, and not just pure agency. But yeah, I mean, so look, you know, some rules are important and so forth. But yeah, no, look, there is just a huge premium in life on being somebody who is able to, like, fully take responsibility for things, fully take charge, run an organization, lead a project, create something new. And, you know, maybe, yeah, that has been maybe a little bit diminished in our culture over the last 30 years. It's healthy, you know, that there's now a term for that, and that it's coming back into vogue.
Starting point is 00:13:25 And again, that's how I view AI for kids. It's like, okay, AI should be the ultimate lever on the world for a kid with agency, to be able to say, okay, I can actually be a primary contributor, right? Whether that's, I can be a primary contributor in everything from, you know, developing new areas of physics, to writing code, to being an artist, you know, to writing novels, like, you know, whatever that thing is, I can fully participate in the world. I can really change things. And the combination of that idea with this technology feels very healthy to me. What is that quote about give me a lever and I'll move the world? And I'll move the world. Yeah, that's exactly right. Well, so it's actually funny you mention that. So the early kind of scientists, including like Isaac Newton, were super obsessed
Starting point is 00:14:05 with, you know, this concept of alchemy, right? You know, Newton, he developed Newtonian physics, and he developed, like, calculus and all these things. But the thing he was really obsessed with was alchemy, which was the thing he could never get to work, right? And alchemy was the transmutation of something that was very common, which was lead, into something that was very rare and valuable, which was gold. And, you know, he spent, you know, decades trying to figure out this thing called the philosopher's stone, which would be basically the machine or the process that would be able to transmute the common thing into the rare thing, lead into gold. And he never figured it out. It's incredibly frustrating. Nobody ever figured that out. And now we literally have, with AI, a technology that transmutes sand into thought. Right? You just blew my mind.
Starting point is 00:14:50 Right. The most common thing in the world, which is sand, converted into the most rare thing in the world, which is thought. Right? And so AI is the philosopher's stone. Like, it is that. It actually is that. And it's just this incredibly powerful tool. And that's where I get so excited.
Starting point is 00:15:07 I mean, and again, this is what we're doing with our 10-year-old. Like, all right, a primary thing that we want to make sure to do is to make sure that he knows fully how to leverage and get benefit out of the philosopher's stone, right? Which is, you know, which is to say AI. And, you know, that's certainly essential to everything we're teaching him. You know, there's this meme going around that, you know, Silicon Valley people don't let their kids use computers.
Starting point is 00:15:26 And I just, there may be a handful of people who are like that, I don't, you know, I don't know. I think it's more, honestly, the other way around, which is that, you know, the more you're kind of plugged into stuff in Silicon Valley, the more important it is to make sure that your kids actually fully understand this and know how to use it. And that's certainly the mode that we're in. And that's certainly the mode that I would encourage parents to think about. I did not know your kid was homeschooled.
Starting point is 00:15:46 That is super interesting. It's almost a statement on, you know, education in today's day. Maybe, are there any thoughts there? Just for folks that maybe aren't in your tax bracket that want to help their kids be successful, maybe homeschool, maybe not. What advice would you have? This is the challenge. And again, this kind of goes to, you know, kind of your original question,
Starting point is 00:16:04 which is education, there's two completely different ways to talk about and think about education. The way that's usually thought about and talked about is kind of at the level of like a nation, right? So, you know, it's like a national level issue or maybe a state level issue in the U.S., which is basically like how do you educate all the kids? And of course, that's incredibly important. And of course, you're going to need like some level of large-scale system, like the national K through 12 school system or something like that, you know, in order to do that. But then there's this other question, which is like n equals one, for an individual kid, like, what can you do with,
Starting point is 00:16:37 with an individual kid. And so I'll just give you kind of the ultimate, you know, kind of the ultimate answer to that question, which is it's been known for centuries that the ideal way to teach a kid at the unit of n equals one, by far the ideal way to do it is with one-on-one tutoring. Like if you just have an individual kid and the goal is to maximize an individual kid,
Starting point is 00:16:57 by far you get the best results with one-on-one tutoring. And this is something that, like, every royal family in history knew. It's something that every aristocratic class in history knew. There's all these amazing examples. Alexander the Great was tutored by Aristotle. He took over the world, right? Like, you know, many of the great kings and queens, you know, royal families and aristocrats and so forth, you know, over the course of centuries,
Starting point is 00:17:19 you know, kind of always had this approach. There's actually also statistical evidence, analytical evidence, that this is correct. There's this, you know, massive question in the field of education, which is how do you improve educational outcomes? And basically, it turns out it's very hard to improve educational outcomes, except there's one method that always does it, which is called the Bloom two-sigma effect: there's one method of education that routinely raises student outcomes by two standard deviations, and will take a kid from the 50th percentile to the 99th percentile,
Starting point is 00:17:47 and that's one-on-one tutoring. So again, if you go back to, like, n equals one, you have a kid and a tutor, and they're in this, like, you know, very tight loop with each other, you know, where the kid is able to constantly kind of be on the leading edge of what they're capable of doing, and they can, you know, they can move incredibly fast, and they get kind of correction in real time, and you get these better outcomes. But to your question, like, it's never been economically feasible for anybody other than the richest people in society
Starting point is 00:18:08 to be able to provide one-on-one tutoring to kids. AI provides the very real prospect of being able to do that, right? Because obviously now, right, if you have a kid that's, like, super interested in something, they can talk to, you know, an LLM about it, and they can ask an infinite number of questions and get instantaneous feedback. And in fact, you can even tell an LLM,
Starting point is 00:18:27 it's like, you know, teach me how to do the following. And you can say, you know, wow, I don't quite understand what you're saying, like, dumb it down for me a little bit. Then, okay, now quiz me, you know, do I actually understand this? Like, people can just do this today, right? And so I think there's this, like, massive opportunity for parents, you know, in many walks of life, to be able, you know, with a little bit of time and focus, to say, okay, you know, my kid's probably still going to go through a traditional education system,
Starting point is 00:18:51 but I'm going to augment this with AI tutoring. And of course, there's going to be tons of startups, right? And there already are, that are going to try to build all the products and services for this. Khan Academy, you know, on the nonprofit side, has a big push to do this. And so, you know, I think the broad answer might be a hybrid approach, with schools plus one-on-one tutoring through AI. There's also, you may have heard, this great new private school system called Alpha, in which everything I just described is kind of the basis of their philosophy, which is, you know, it's a combination of in-person schools and teachers, but it's also, you know, heavily based on AI and AI tutoring. So I think there is a magic
Starting point is 00:19:25 formula in here that I think is going to apply much more broadly. And really, for parents interested in this, now would be a great time to really start to think hard about that and to look at the options. It's interesting, because there's all this concern that for young people, jobs are not going to be there for them, AI is replacing them. On the flip side, there's what you're describing here. It feels like people coming in and learning today are going to move so fast and learn so much more. Where do you sit on this divide of young people are in big trouble, or they're actually going to be the ones winning in the end? Yeah, so the job substitution, job loss thing is just very reductive. I think it's an overly simplistic model. And again, it goes back to what I said at the very beginning, which is we've actually been in a regime for 50 years of very slow technological change in the economy. And so, again, like I said, it's at a half the rate of the previous era, and then a third the rate of, like, 100 years ago.
Starting point is 00:20:14 And so we're coming out of this kind of phase where we've had, like, almost no technological progress in the economy. We've had remarkably little job churn as a result of that, relative to any historical period. And so even if AI ticks up, even if AI triples productivity growth in the economy, which would, like, be a massively big deal, it would take us back to the same level of job churn that was happening between 1870 and 1930. And if you go back and you read accounts of 1870 and 1930, people just thought the world was awash with opportunity. At that rate of technological transformation, kids were able to develop new careers in new areas of the economy, building new kinds of products and services. I mean, you know, a huge part of everything in our modern world today was kind of invented and proliferated during that period. And so even if AI, like, triples the pace of economic change in the economy, it's going
Starting point is 00:21:00 to just translate to a much higher rate of economic growth, which is going to translate to a much higher rate of job growth. And, you know, there will be some level of, like, task-level and job-level substitution that will take place, but that will be swamped by the macro effects of economic growth and innovation that will happen. And then, corresponding to that, there will be, you know, hiring booms, you know, quite honestly, I think all over the place. And then again, go back to the other thing, which is, like, this is all happening in the face of declining population growth and, increasingly, population shrinkage. And so human workers in many, many countries over the next, you know, 10, 20, 30 years
Starting point is 00:21:34 are going to be at more and more of a premium, literally because you're going to have shrinking population levels. You know, we don't really want to get into, you know, politics particularly, but it does feel like the world broadly is going to reverse course on the rates of immigration we've had for the last 50 years. It seems to be kind of a broad-based, you know, kind of thing happening, you know, kind of rise of nationalism, you know, concerns about the rate of immigration. And immigration historically in countries like the U.S., you know, has kind of ebbed and
Starting point is 00:21:58 flowed over time based on kind of how the national mood shifts. And so if you sort of combine, in a country like the U.S. or any country in Europe, if you combine declining population with less immigration, the remaining human workers are going to be at a premium, not at a discount. And so I think that combination of kind of faster productivity growth,
Starting point is 00:22:18 faster economic growth, and then slower population growth and less immigration, actually means there's going to be much less of this kind of dystopian, you know, no-jobs thing. I just think it's probably totally off base. That is extremely interesting. So what I'm hearing is you're not super worried about job loss. Is the key here that the timing kind of just works out?
Starting point is 00:22:36 Does population decrease? You know, like all these kind of have to line up for there not to be this massive job loss with AI? Yeah, well, look, if we didn't have AI, we'd be in a panic right now about what's going to happen to the economy. Right? Because what we would be staring at is a future of depopulation. And like, depopulation without new technology would just mean that the economy shrinks. Right. So it would mean that the economy kind of itself kind of shrinks over time.
Starting point is 00:22:59 Opportunity diminishes. There are no new jobs. There are no new fields. There's no new source of consumer demand for spending on things. And so you would be very worried about going into a period of severe decline and stagnation. And, you know, essentially you'd be looking at these very dystopian scenarios of an economy kind of self-euthanizing itself over time. And we'd be very worried about the opposite of what everybody thinks that they're worried about. The only reason we're not worried about that is because we now know that we have the technology that can substitute for the lack of population growth and then, you know, also for the lack of immigration that is likely. And so, you know, I would say the timing has worked out
Starting point is 00:23:37 miraculously well in the sense that we're going to have AI and robots precisely when we actually need them to keep the economy from actually shrinking. And I just think that's just a fundamentally good news story. To get to the mass job loss thing that people are worried about on the other side of things, you know, you'd have to look at like far, far higher rates of productivity growth. You'd have to look at rates of productivity growth that are 10, 20, 30, 50% a year, something like that, which are orders of magnitude
Starting point is 00:24:05 higher than we've ever had in any economy in the history of the planet. You know, it's possible that we get that. I mean, look, I have my utopian kind of, you know, temptations along with everybody else. If AI, like, radically transforms everything overnight, then maybe, you know, let's play out
Starting point is 00:24:21 the kind of utopian scenario. You get to a much higher level of productivity growth. You get to a much higher level of technological change. Corresponding to that, you'll have a massive economic boom. You'll have a massive growth in the economy. And then corresponding with that, you'll have a collapse in prices. And so the price of goods and services that are sort of, you know, whatever we're going to call it, affected by or commoditized by AI,
Starting point is 00:24:44 the prices of those goods and services will collapse, right? It'll be price deflation. And then as a consequence of price deflation, everything that people are buying today gets a lot cheaper. And that's the equivalent of a gigantic increase in wealth, right, across the society. Right. Think of it this way. This is actually worth talking about because people, I think, get kind of sideways on this issue. So if AI is going to transform the economy as much as the, you know, whatever, utopians or dystopians kind of think that it will, the necessary economic calculation of what happens is massive productivity growth. What massive productivity growth literally means mechanically is more output requiring less input, right? So you get more economic output for less input, right? So you're substituting in AI for human workers or whatever, and as a consequence, you get this massive boom in output with much lower input costs. The result of that is you get gluts of goods and services in all those affected sectors. The result of those gluts is you get collapsing prices, right? The
Starting point is 00:25:40 collapsing prices mean that the thing today that costs you $100 now costs you $10, and then costs you $1. That's the equivalent of giving everybody a giant raise, right, because now they have all this additional spending power. That additional spending power then translates to economic growth, the development of new fields. Everybody's like materially much better off very quickly. And then, by the way, to the extent that you do have unemployment coming out the other side of that, it's now much cheaper to provide the kind of social safety net to prevent people from being immiserated, right? Because the prices of all the goods and services that a welfare program has to pay for, they're all collapsing. Right. And so the price of health care collapses,
Starting point is 00:26:15 the price of housing collapses, the price of education collapses, the price of everything else collapses because of this incredible impact that AI is having. And so in this kind of, you know, utopian-dystopian scenario that people have, there's no scenario in which everybody's just poor. In fact, it's quite the opposite, which is everybody gets a lot richer because prices collapsed. And then it's actually much easier to pay for the social safety net for the people who, you know, for some reason can't find a job. And so, like, maybe we end up in that scenario. I mean, the kind of optimistic part of me says, yeah, maybe AI is that powerful and maybe the rest of the economy can actually change to accommodate that and maybe that'll happen. But the result of that is
Starting point is 00:26:50 going to be a much better news story than people think it's going to be. And again, everything I've just described, by the way, is just a very straightforward extrapolation of very basic economics. I'm not making any bold predictions in what I just said. This is just a straightforward mechanical process that plays itself out if you have higher rates
Starting point is 00:27:05 of productivity growth, which are necessarily the result of higher rates of technological growth. And so, to be clear, I think we're looking at a world that's not, like, radically transformed the way that maybe the utopians think that it will be
Starting point is 00:27:17 or the dystopians think it will be. I think it'll be more incremental, for reasons we can discuss. But I think that incremental process is going to be a good news process. And then even if it's much faster,
Starting point is 00:27:28 it's also going to be a good news process. It'll just be a good news process in the other way that I described. I love hearing optimism and good news. I will also add that I was researching you ahead of this chat, and you've been right so many times about where the world is heading.
Starting point is 00:27:42 That's why I'm especially excited to talk to you. I'll give you a short list. I imagine there are many more things. Okay, so one, you were right about the web and web browsers becoming important. You were right about software eating the world. Check. In 2011, you said that in 10 years we're going to have 5 billion people using smartphones, and I believe the actual number ended up being 6 billion.
Starting point is 00:28:06 You also have this debate with Peter Thiel that I came across, where you were debating whether technology has stopped progressing or if new technology will continue to emerge, and you were arguing progress will continue. And he was like, no, I think we're done with cool technology. You were right. I imagine there are many more things you were right about. So again, I just love hearing your predictions
Starting point is 00:28:31 because I feel like they're actually going to turn out to be correct. So I should start by saying I've been wrong about tons of things, but, you know, I buried those out back behind the shed. Deleted them from the Internet. No browser can find them. Yes. I have them nuked out of the Internet archives so that they're never seen again. So, you know, I'm wrong plenty of times also. But yeah, I mean, look, I think,
Starting point is 00:28:49 Yeah, some of those I got right. By the way, I will say on the Peter one, I have come much more around to Peter's point of view. I would probably argue that one quite a bit differently today than I did, and I would give his view, I think, a lot more credit. And it actually goes to the conversation we just had, which is that the real core of what Peter was arguing was we have lots of
Starting point is 00:29:11 progress in bits, right? But we have very little progress in atoms. And that's the real core of what he was arguing. And I think I was a little bit, I don't know, missing that or kind of glossing over that a little bit because I was so focused on making sure people understood, no, there actually is still progress happening in bits. But I think, you know, a lot of his critiques around the lack of progress in atoms are real. And again, this goes back to this thing of, like, in the last, and he's been talking about this for a long time.
Starting point is 00:29:37 In the last 50 years, there has just been very little technological innovation in most of the economy. There's been very little technological innovation in particular in anything involving atoms. You know, there's been very little real-world technological change. There just hasn't been. Like, the built world is just not that different today than it was 50 years ago. And again, if you compare and contrast 1870 and 1930, it was a dramatically different world. If you contrast 1930 and 1970, it was a dramatically different world. If you contrast 1970 to today, it's not that different.
Starting point is 00:30:06 Right. And look, you just see that you could just like walk around and it's just like, oh, yeah, there's a bunch of buildings that were built in like 1960. Right. And there's a bridge that was built in like 1930. And there's a dam that was built in like 1910. And there's a city that was founded in, you know, 1880. And like, what have we done? Like, where are new cities?
Starting point is 00:30:26 Where are new dams? You know, where's the California high-speed rail? Like, you know, like, what's going on here? And so, like, I think he is right about a lot of that. Again, this is also why I think that AI is not going to have as rapid an impact. It's not going to be, again, this kind of utopian or dystopian view of like everything changes overnight. I think it just kind of can't happen because of the reasons that Peter articulates, which is there's just so much about how the world works that's basically just like wrapped up in red tape.
Starting point is 00:30:54 Like bureaucratic process, rules, restrictions, you know, the politics, by the way, unions, cartels, oligopolies. There's all these structures in the world that are kind of economic or political or regulatory structures that basically prevent things from changing. And so, I mean, let's take a great example. Like, AI's impact on the health care system. Like, by rights, AI is going to have a dramatic impact on the health care system, and in very positive ways. But, you know, large parts of the medical system today are cartels, right?
Starting point is 00:31:29 And so the doctors are a cartel, and nurses are a cartel, and hospitals are a cartel. And then there's this push to nationalize all the health care systems. And then you've got a government monopoly, right? And guess what cartels and monopolies don't like: they don't like rapid change, right? And so, you know, you show up as a kid and you're like, wow, I've got this new technology to do AI medicine. And they're like, oh, well, does it threaten doctors?
Starting point is 00:31:52 Well, in that case, we're going to block it. So, and I think a lot of consumers, by the way, you know, I see this in my life and you'll probably see this in your life also, which is, you know, ChatGPT is almost certainly a better doctor than your doctor today. But ChatGPT can't get a license to practice medicine, right? So it can't substitute for a doctor. It can't prescribe medications, right? It can't, you know, perform procedures, right? And so, anyway, Peter, I think, was very articulate, and has been for a long time, on, like, no, there are actually real structural impediments in the economy and in the political system that we have that actually prevent any rates of change that are anywhere near the rates of change that people had in the past. And you can maybe say optimistically, you know, maybe the presence of the new magic technology of AI causes us to revisit a lot of these assumptions for the first time in decades to really say, okay, is this really the world we want to live in? Don't we actually want to
Starting point is 00:32:46 get to the future faster? So maybe that would be the optimistic view. It's time to build, somebody famously said. In my calendar, I actually have that as my block in the morning when I start to work: it's time to build. Thank you for that. Okay. I love the way you go from macro to n of one, and I want to go to n of one. A lot of the listeners of this podcast are product managers, they're engineers, they're designers. There's a lot of founders, but there's also a lot of non-founders. There's a lot of people building product that aren't founders. And obviously, a lot of people are worried about where their career is going. Is one of these roles going to disappear?
Starting point is 00:33:16 Is one of these roles going to do really well? How do I stay up to date? You're close with a lot of teams, a lot of product teams. What's your sense of just the future of these three very specific roles: product manager, engineer, designer? This, I think, is a really funny question. So these three roles in particular, obviously, are kind of the central roles for building, you know, for tech companies. So the way I've been describing it is, you know, the concept of the Mexican standoff, right? Which is the movie scene where, you know, the two guys have guns pointed at each other's heads.
Starting point is 00:33:41 And then if you watch, like, John Woo movies, he loves to do the three-way Mexican standoff, where you've got, like, a triangle of people, and of course, in John Woo movies they've got guns in both hands. So each is aiming at the other two. And you've got this kind of standoff situation. And so the way I've been describing this is there's like a Mexican standoff happening between those three roles: product manager, designer, and coder. Specifically the following, which is every coder now believes they can also be a product manager and a designer, right? Because they have AI. Every product manager thinks they can be a coder and a designer, and then every designer knows they can be a product manager, right, and a coder. Right? And so people in each of those roles now, you know, know or believe that with AI,
Starting point is 00:34:24 they don't need the other two roles anymore, right? They can do that because they can have AI do that. And then, of course, there's the real irony, which is, you know, all three of them are going to realize that AI can also be a better manager, right? So they're going to aim the guns up the org chart, but that's probably the next
Starting point is 00:34:37 phase. And what I think is so fascinating about this Mexican standoff is they're actually all kind of correct, I think, right? Which is, AI is actually now a really good coder. It's actually now a really good designer, and it's also a really good product manager, right? It's actually good at doing all three of those things, or at least doing a lot of the tasks involved in those three jobs. And so again, this goes back to this kind of idea of the super-empowered individual, where if I'm a coder, like, you know, I mean, step one is
Starting point is 00:35:08 like, I need to make sure that I really understand AI coding and what that means, and how coding is going to change in the future. You know, I need to understand, you know, specifically how to go from being a coder who writes code entirely by hand to being a coder who, you know, orchestrates a dozen instances of coding bots.
Starting point is 00:35:24 You know, there's a change in the actual job of coding itself, which is happening right now. But the other part of it is, okay, how do I become that super-empowered individual? How do I become a coder that also then harnesses AI so that I can also be a great product manager and I can also be a great designer, right? And then the same thing for the product manager,
Starting point is 00:35:40 which is how do I make sure that I can now use coding tools. How do I make sure I can also, you know, do AI-based design. And the same thing for the designer, which is how do I use AI to also become a coder and also become a product manager. And then maybe those individual roles change. Like, maybe those are not anymore sort of stovepipe roles the way that, you know, they have been for the last 30 years or whatever. But what happens is that the talented people in any of those roles become super-empowered and they become good at doing all three of those things. And then those people become incredibly valuable, because those are people who can actually
Starting point is 00:36:10 build and design, right, new products, right, from scratch, which is, you know, the most valuable thing. And so I think that's the opportunity. So I love this answer. So what I'm hearing is essentially, if you're amazing at any of these three roles, you will do well. Number one, if you're amazing at these roles, that's great.
Starting point is 00:36:28 But also, part of being amazing at these roles is also being able to fully harness the new technology, right? So if you're a master coder today and you don't ever get to the point where you figure out how to use AI to leverage your coding and do more, right? Like, at some point, you are going to hit an issue, right? Here's another way economists talk about this, which is there's the concept of the job, but the job is not actually the atomic unit of what happens in the workplace.
Starting point is 00:36:54 The atomic unit of what happens in the workplace is the task. And the way the economists think about it is, a job is a bundle of tasks. And everybody wants to talk about job loss, but really what you want to look at is task loss, right? Tasks changing. The classic example of task changing was, once upon a time, executives never used typewriters or personal computers themselves, right?
Starting point is 00:37:20 You know, if you were a vice president at a company in 1970 or whatever, you did not have, like, a typewriter or a computer on your desk for typing things. You had a secretary who you dictated memos to, right? And then there was this change where, like, email started to show up. And what would happen was the job of the secretary changed from sending out letters with stamps on them to sending and receiving emails with the other admins. And then the secretary would print out the email and bring it into the executive's office.
Starting point is 00:37:45 And the executive would read the email on paper, scrawl the reply, and give that message back to the secretary, who would go back and type it into the computer on his or her desk and send it as an email. Fast forward to today, none of that happens. Now executives just do all their own email. They still have secretaries or admins, but they're now doing different tasks. You know, they're travel planning and orchestrating events and, like, doing all these other things, you know,
Starting point is 00:38:10 that the great admins do. And then the task set, ironically, of the executive has expanded to actually do more of the clerical work themselves, actually, like, sit there and type their memos, which, again, 50 years ago, they never would have done. And so the executive job still exists, the secretary job still exists, but the tasks have changed.
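The economists' framing Marc just used, a job as a bundle of tasks that technology swaps in and out, can be sketched in a few lines. This is an illustrative toy of mine, not anything from the episode, and the task names are just paraphrased from the secretary example:

```python
# Toy sketch of "a job is a bundle of tasks": the job title persists
# while technology swaps the underlying tasks in and out.
secretary_1970 = {"take dictation", "type memos", "mail letters"}
secretary_today = {"plan travel", "orchestrate events", "manage calendar"}

# Same job title, almost completely different task bundle.
turnover = 1 - len(secretary_1970 & secretary_today) / len(secretary_1970)
assert turnover == 1.0  # no overlap between the two bundles
```

The point of the framing is that "job loss" statistics hide this churn: here the job survives even though every individual task turned over.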
Starting point is 00:38:29 And I think that's, like, a great example of what's going to happen in coding: the tasks are going to change. Same with product management: the tasks are going to change. Designer: the tasks are going to change. And so the job persists longer than the individual tasks. And then as the tasks change enough, that's when the jobs change. And so at the level of an individual, you kind of want to think, okay, I have this job. The job is a bundle of tasks. I need to be really good at making sure that I can swap the tasks out, right? I can really adapt, use the new technology,
Starting point is 00:38:59 you know, get really good at AI coding, for example. And then you want to kind of add skills. I can also get really good at design. I can also get really good at product management because I've got this new tool. So you want to kind of pick up more and more scope as you do that. And then, you know, 10 years from now, is your job title coder, or coder-designer-product manager, or is it just "I build products," or is it just "I tell the AI how to build products"? Whatever that job is called, who even knows what it's going to be, it's going to be incredibly important, because the people doing that job are going to be orchestrating the AI. And so that's the track that the best people are going to be on. And I think that's the thing to lean hard into.
Starting point is 00:39:35 I think people aren't fully grasping, just specifically with software engineering, how much that is changing. Like, it's pretty clear we're going to be in a world soon where engineers are not actually writing code, which I think a year ago we would not have thought. And now it's just clearly where this is heading. It's like there's going to be this artisanal experience of sitting there writing code, which is so crazy, how much that job is going to change. Yeah. So again, here I'd go back, and again, pardon maybe the history lesson, but, like, go back to coding. So first,
Starting point is 00:40:04 do you know the original definition of the term calculator? Do you know what that referred to? No. It referred to people.
Starting point is 00:40:16 or any of these things, the way that you would actually do computing, the way that you would do calculating, like the way that an insurance company would calculate actuarial tables or the military would like calculate, you know, I don't know, whatever troop logistics, you know, formulas
Starting point is 00:40:28 or whatever it was, the way that you would do it, is you would actually have a room full of people. And by the way, these are big rooms. You could have hundreds or thousands or tens of thousands of people doing this. And you would actually figure out, you know, somebody at the head of the room who was responsible for, like, whatever the mathematical equation was.
Starting point is 00:40:43 And then they would parcel out the individual mathematical calculations to people sitting at desks who were doing them all by hand. Right. And that was the job title: those people were calculators, right? And so we've gone from a world in which you literally had people doing mathematical equations by hand. Then we got the first computers. The first computers, of course, didn't have programming languages, right?
Starting point is 00:41:04 They only had machine code, right? So the first computers were programmed with ones and zeros. And so the task of the programmer became doing the ones and zeros. And then that became punch cards. And there are still people, you know, kicking around today whose job as a programmer was to, like, build the punch cards. And then you got this big breakthrough, which was called assembly language, which was basically a way to do machine code, but, like, with some level of
Starting point is 00:41:27 English kind of added to it. And then the best programmers did assembly language. And then, you know, when I was coming up, it was higher-level languages like C that compiled into machine code, and that's what programmers did. And then I still remember when scripting languages came along,
Starting point is 00:41:40 you know, we developed JavaScript at Netscape, and then, you know, Python took off, and Perl, and these other scripting languages. But scripting languages really took off in the 2000s. There was this big fight, you know, in the technical community, which is: is scripting real programming or not, right? Because it's kind of cheating, right? Because real programmers write code that compiles to machine code,
Starting point is 00:41:58 and, like, real programmers do memory management themselves, and they do this whole craft of writing, you know, writing C code. And, you know, these JavaScript or Python programmers are just doing this kind of lightweight thing. It doesn't even really count as coding.
Starting point is 00:42:10 And of course, the answer is yes, it very much counted. And now most coding is done with scripting languages, right? Which have, you see my point, the scripting languages have abstracted away like five layers of detail underneath that people used to do by hand and don't anymore. And then, to your point, AI coding is the next layer on that.
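The layering Marc describes is easy to see in a toy example (mine, not from the episode): the same computation written the way a lower-level programmer would, with an explicit loop and accumulator, and the way a scripting-language programmer would, with those details abstracted away by the runtime.

```python
def sum_low_level(values):
    # Roughly the shape of hand-written lower-level code: explicit
    # index, explicit accumulator, explicit loop bound, all managed
    # by the programmer.
    total = 0
    i = 0
    while i < len(values):
        total += values[i]
        i += 1
    return total

def sum_high_level(values):
    # The scripting-language version: the loop, the bounds, and the
    # accumulation are abstracted away.
    return sum(values)

assert sum_low_level([1, 2, 3, 4]) == sum_high_level([1, 2, 3, 4]) == 10
```

In this framing, AI coding is simply one more layer on the stack: the prompt sits above the scripting code the way the scripting code sits above the loop.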
Starting point is 00:42:27 AI coding actually abstracts away the process of actually writing the scripting code. Right. And so in one sense, this is a really big deal for all the obvious reasons. But on the other hand, it's like, okay, this is the next layer of the task redefinition under the job of programmer. Right. Now, what's the job of the programmer? To your point, it's not necessarily to write the code by hand.
Starting point is 00:42:48 But what it is now is, all right, if you talk to the world's best programmers today, what they'll tell you is, oh, my job is I'm sitting there and I'm orchestrating 10 coding bots, right, coding bots that are running in parallel. And literally, they sit there and they shift from browser to browser, or terminal to terminal. And their day job now is kind of arguing with the AI bots to try to get them to write the right code. And then debug it and fix the problems and change the spec and do all these things.
Starting point is 00:43:13 So now the job of the programmer is to argue with the coding bots. But if you don't know how to write the code yourself, you don't know how to evaluate what the coding bots are giving you. Right? And so you asked about the 10-year-old. Our 10-year-old is, you know, super into computers and super into programming. And, you know, he's using Claude and ChatGPT and Copilot and all these things. And what I'm telling him is, like, look, by the way, he loves vibe coding.
Starting point is 00:43:33 He's on Replit all the time, doing vibe coding, you know, doing games. You know, he's sitting there, it's hysterical, right? It's a 10-year-old, basically, who spends two hours at dinner arguing with an AI for fun, right? Right. But what I'm telling him is, no, look, you need to still fully understand and learn how to write and understand code, because the coding bots are giving you code. If it doesn't work, or if it's not doing what you expect, or it's not fast enough or whatever, like, you need to be able to understand the results of what the AI is giving you, right?
Starting point is 00:43:59 In the same way that somebody who's writing scripting-language code does need to understand, ultimately, how the microprocessor works. And so again, it's kind of this up-leveling of capability, where you actually want the depth to be able to go down and understand what the thing is actually doing, even if you're not spending your day actually doing that by hand. And again, I look at that and I'm like, okay, now programmers are going to be 10 times or 100 times or a thousand times more productive than they used to be, right? And that is overwhelmingly a good thing. The tasks are definitely changing. The nature of the job is changing. But is a human being going to be involved in the coding process and overseeing the AI
Starting point is 00:44:35 coding and all that? And the answer is, of course, absolutely 100%. No question. So you're in the camp of learning to code still being a valuable skill. Oh, yeah, totally. Well, again, if you want to be one of these super-empowered people, look, if you just want to put yourself on autopilot and, like, I can't be bothered, and I'm just going to have the AI write the code, and it's going to generate whatever it does, and that's fine, and the goal is to be a mediocre coder, then just let the AI do it. It's fine. The AI is going to be perfectly good at generating infinite amounts of mediocre code.
Starting point is 00:45:03 No problem. It's all good. If the goal is, I want to be one of the best software people in the world and I want to build new software products and technologies that really matter, then yeah, you 100% want to go all the way down. You want your skill set to go all the way down to the assembly and machine code. You want to understand every layer of the stack. You want to deeply understand what's happening at the level of the chip, right, and the network
Starting point is 00:45:23 and so forth. By the way, you also really deeply want to understand how the AI itself works, right? Because people who understand how the AI works are clearly able to get more value out of it than somebody who doesn't understand how it works. I mean, you're always more productive if you know how the machine works, right, when you use the machine. And so, yeah, the super-empowered individual on the other end of this that wants to do great things with the new technology, yes, you 100% want to understand this thing all the way down the stack, because you want to be able to understand what it's giving you, right? And when something doesn't work or when something isn't right, you want to be able to really quickly understand why that is.
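Marc's point, that you have to be able to evaluate what the bots hand you, can be made concrete with a hypothetical example (mine, not from the episode): a plausible-looking generated function with a subtle off-by-one bug that only a reviewer who can actually read the code will catch.

```python
# Hypothetical sketch: the kind of plausible-looking code an AI
# assistant might produce, next to the corrected version.

def moving_average_buggy(values, window):
    # Subtle bug: range() stops one window short, silently dropping
    # the final average.
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window)]

def moving_average_fixed(values, window):
    # Correct bound: a list of n values contains n - window + 1
    # full windows.
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

data = [1, 2, 3, 4, 5]
assert moving_average_buggy(data, 2) == [1.5, 2.5, 3.5]         # 4.5 missing
assert moving_average_fixed(data, 2) == [1.5, 2.5, 3.5, 4.5]
```

Both versions run without errors and look reasonable at a glance, which is exactly why the reviewer's ability to reason about the loop bound, not just accept the output, is the skill that survives.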
Starting point is 00:45:56 By the way, again, this goes back to education. AI is your best friend at helping you learn all that, right? Because it's like, oh, I need to understand, I don't know, this isn't fast enough. I need to figure out as a coder how to do a different approach to memory management or something. And you can be like, well, you know, shoot, I don't quite know how to do that. Okay, AI, let's spend 10 minutes.
Starting point is 00:46:15 Teach me how to do this, right? Teach me what this all means, right? So all of a sudden you have this, like, incredibly synergistic relationship with the AI, where it's also helping you get better at the same time that it's doing a lot of work for you. By the way, I was going to say, I was a big Perl programmer.
Starting point is 00:46:29 I was an engineer for 10 years, and that was my language of choice. Do you remember, I don't know when you were doing it, but do you remember that, at least early on, did you ever hit this where like, C coders were like looking down their nose at you, being like,
Starting point is 00:46:40 for sure. For sure. It's like, this is so slow. It's not going to scale. Why do you spend your time on this thing? Yeah, exactly. And, of course, you know, and again, it was sort of this thing
Starting point is 00:46:48 where, you know, they were sort of correct, which is, at the beginning, it wasn't fast enough or whatever. By the end, they were definitely wrong, right, which is it got much better, much faster, and it swept the world. You know, most coding today happens in scripting languages. And then, by the way, the people along the way who really understood the scripting languages and the people who understood all the lower-level systems, they were the ones who were able to make the scripting languages actually work really well. Right. And so that was a great example of this kind of adaptation. And again,
Starting point is 00:47:14 the result of that was, you know, a far higher number of people writing code with scripting languages than were ever writing code with lower-level languages. And I think this will just kind of be a more dramatic version of that. I love that Perl was designed by a linguist. I don't know if you remember that, and that's what made it so nice to code with. Well, that's funny because, of course, it was so notorious for being impossible to understand.
Starting point is 00:47:32 So how ironic. Coming back to this kind of triad, the other element that I hear more and more of is just the skill of taste and design and user experience. It feels like that's a very hard skill to learn. And to me, that says design is going to be much more valuable in the future. Yeah, that's right. And again, here, this is a great example. So again, the task level, the task level of, like, design the perfect icon, right? AI is going to do that all day long. It's going to give you a thousand icon designs. It's going to be great. Like, it's going to be fantastic, like, whatever.
Starting point is 00:48:07 And there will still, by the way, there will still be some level of human icon design or whatever, but, like, AI is going to get really good at that. But, like, what are we trying to do? Like, you know, kind of capital-D design of, like, all right, what is this thing for? And how is this going to function in a world of human beings? And, like, is this going to make people happy when they use it? Is it going to make people feel good about themselves? Is it going to fit into the rest of their life? Is it going to, you know, I don't know, challenge them in the right way? You know, all these kinds of higher-level questions that the great designers have always thought about,
Starting point is 00:48:36 like the job of designer, right, will involve much more of those higher-level, more important components. And then again, with AI doing a lot more of the underlying tasks. And so, you know, one way to think about it is, you think about one of the world's best designers, you know, Jony Ive or whatever. You could be like, wow, if I'm a 25-year-old designer today and I aspire to be, you know, Jony Ive in a decade, all of a sudden I have a new path that I can use to kind of get there, because, you know, Jony Ive did everything he did without AI.
Starting point is 00:49:07 Now, you know, a young designer can be like, wow, if I really harness AI, in a decade I'm going to be, like, the best designer the world's ever seen, because it's not just going to be me. It's going to be me plus being so super-empowered by this technology to be able to do so much more. And then so much more of my time and attention is going to be able to be focused on these higher-level things that most designers never get to. And I think that's going to be another great example of that. So maybe what I'm hearing here is kind of this T-shaped strategy of, if you want to be successful in any of these three roles, be very, very, very good at that specific role, product management, engineering, design, and then get good enough at these
Starting point is 00:49:39 other two roles. Well, so I think that's great. I think that's really relevant. And then, you know, Scott Adams actually just passed away, you know, which is a real tragedy. But I've referred for years to Scott Adams's famous career advice he would give people, which I think makes a lot of sense. He used to say, look, I could have been a pretty good cartoonist, or I could have been, like, pretty good at business. But the fact that I was a cartoonist who understood business made me, like, spectacularly great
Starting point is 00:50:14 at making Dilbert, right? Because even the world's best cartoonist who didn't understand business could have never written Dilbert. And the world's best business people who didn't know how to do cartoons couldn't have done Dilbert. It took somebody who actually had both of those skills to be able to make Dilbert, right, which was one of the most successful cartoons in history. Right. And so the way Scott always described it was that from a career development standpoint,
Starting point is 00:50:34 the additive effect of being good at two things is, like, more than double, right? The additive effect of being good at three things is more than triple, right? Because you become a super-relevant specialist in the combination of the domains. And you see this all over the economy, but I'll give you an example, Hollywood.
Starting point is 00:50:57 You know, there are a lot of writers who can't direct a movie, and they can be very successful writers. There are a lot of directors who can't write a movie, and they can be very successful directors. But the superstars in the entertainment industry are the people who can write and direct. Right. And, you know, they do have a term for those.
Starting point is 00:51:12 They call those auteurs, right? And those are, you know, the people who are like the real creative forces that move the field. And so, again, and by the way, Hollywood, it's just really funny. I've been spending a lot of time talking to Hollywood people about AI. Hollywood has the same Mexican standoff going right now that we described in tech, except in Hollywood, for filmmaking, it's the director, the writer, and the actor. Right? Because the director is now thinking, wow, I don't need the writer anymore because the AI can write the script,
Starting point is 00:51:35 and I don't need the actor anymore because I can have AI actors. The writer is saying, well, I don't need the director because I can direct the movie and the AI can do the actors. And the actor is saying, I don't need either one of these guys. I can have the AI direct the thing. I can have the AI write the thing, and I'm just going to show up to do my performance. Right. And so it's the same kind of triangular configuration.
Starting point is 00:51:54 And again, what's great about it is they're all correct, right? Each of those three fields is going to be able to expand laterally and pick up those additional skills. And then as a consequence, you're going to have more people who can write and direct or write and act or direct and act or do all three. And I think, you know, to your point, like your T-shaped thing, I think that's going to be true basically across the entire economy. And if you think about the T configuration,
Starting point is 00:52:19 it's like, yeah, the breadth, the top of the T, is, how many individual domains are you familiar enough with to be able to use the AI tools to do really good work? And then the vertical part of the T is, how deep can you go in at least one of those domains so that you really, really deeply know what you're doing? But, like, if you're super deep on coding and you can use AI to do design and you can use AI to do product management, right? That's your T right there.
Starting point is 00:52:43 And you're a triple threat at the top of the T, but with this level of technical grounding underneath that. And I mean, at that point, again, you're the super-empowered individual. You're going to be able to just perform, like, feats of magic in terms of designing and building new products that, you know, people in my generation couldn't have even dreamed of. And so I think this is a universal kind of theory that I think can apply across the entire economy.
Starting point is 00:53:03 I'm going to invent a new framework right now. Okay, forget the T framework. I'm picturing an F sideways, or an E, where there's two, three, I don't know, downward parts. And so what I'm hearing is get good at at least two. That's right. I think that's right.
Starting point is 00:53:18 Yeah, the combination, yeah. My friend Larry Summers had a different version of the Scott Adams thing, which is he used to tell people, he said, the key for career planning is: don't be fungible, right? And, you know, he's an economist, and so that was economics-speak. And what that means, essentially, is don't be replaceable. And so don't be a cog, right?
Starting point is 00:53:38 And what that meant was, don't just be one thing. Right. So if you're, quote unquote, you know, just a designer, just a product manager, just a coder, then in theory you can be swapped in or out. But if you have this E or F, you know, lying-on-its-side kind of thing, if you have this combination of things, then all of a sudden you're not fungible. And not only are you not fungible, you're actually massively important, because you're one of the only people in the world who can actually do that combination of things. And your ability to become one of those people is, like, titanically enhanced with AI as compared to anything we've ever seen before. This is so interesting, because I've worked with people that are good at two of these skills, and they were always called unicorns at the company. She can code and design. Oh, my God. And what I'm hearing here is this is what you need to become. You need to become really good at at least two things.
Starting point is 00:54:24 I think you used the term smokestack or something, where it's like PM over here, engineer over here, designer over here. And what I'm hearing is you need to get good at at least two of these skills. The silos of these roles are disappearing. That's right. That's right. And again, I can't overstress the following. For anybody listening to this, the thing about AI that I think people are just, like, not
Starting point is 00:54:41 getting enough benefit out of yet is just: it will teach you. Like, this is amazing. There's never been a technology before where you can ask it, like, teach me how to do this thing. So much of the focus on figuring out how to use a large language model is, okay, what am I going to try to get it to do for me, right? Which is, of course, very important. But the other side of it is, what can I get it to teach me how to do? Right. And it's just as good at that.
Starting point is 00:55:11 Right. And so, again, this is this, like, latent superpower. People who really want to improve themselves and develop their career should be spending every spare hour, in my view at this point, talking to an AI, being like, all right, train me up. Super-empower me. Tell me how to, you know, I'm a coder, train me how to be a product manager. It will happily do that. It knows exactly how to do that. You know, make me problems, make me assignments, and then evaluate my results, right? And it will do that just as happily as it will do work, quote unquote, for you. Two tricks I've heard along those lines. One is to watch the output, what the agent is doing and thinking, as it's doing the work. So if you're not an engineer, just sit there and watch it think and make decisions. And it's almost become this layer on top of learning to code: learning to see what the agent is doing and thinking,
Starting point is 00:55:57 because that teaches you about architecture. And the other is, a couple of podcast guests have mentioned this: when you get stuck and then you figure out how to unstick yourself, you ask it, what could I have done differently? What could I have said that would have avoided this error in the first place? Yeah, that's right. That's right. Yeah, look, on that first one, and again, that's what I'm doing with my 10-year-old. This is a really good point. So if you ask an AI, hey, write me this code, and then it comes back and it doesn't
Starting point is 00:56:22 work right, like, if all you know is, I asked it and it gave me back something that's not good, well, what do you even do with that, right? You don't understand why it gave you that result. Do you even understand what to tell it to try to get it to do something different? But to your point, if you actually watch what it's doing, and you have the grounding, you know, kind of that leg of your E or your F, if you have that grounding, then you can be like, oh, I see what it's doing. I see where it made the mistake.
Starting point is 00:56:51 I see where it went sideways. And then you're all of a sudden able to intervene and say, no, no, that's not what I meant, do this other thing. Right. And so, again, a big part of having that actual, you know, synergistic relationship is that you understand. And by the way, look, I mean, this is, like, everything I'm saying is, you know,
Starting point is 00:57:10 Like, you know, if I ask you to do something and you come back with something completely different, I do need to understand what was happening in your head, right, in order to be able to give you feedback. If I just tell you, oh, that's wrong, like, nothing happens. I need to have theory of mind, right? I need to understand what you were thinking in order to really give you the right feedback. And, you know, the great thing with AI is AI will happily sit there and explain all day long why it's doing what it's doing.
Starting point is 00:57:36 it'll happily critique itself. You know, you can do this. By the way, it's a very fun thing where you can have one AI critique the other AI, right? Which is another thing, which is like you have one AI, write the code. You have another AI, debunk the code. And so you can actually use, you can play the a ayes off against each other
Starting point is 00:57:52 and get some to argue with each other. And yeah, these are all the kinds of skills that are to become, I think, incredibly valuable. I think people call those LLM councils. Yes. They're talking to each other. Yeah, that's right. That's right. I do feel like if I were, like,
Starting point is 00:58:04 I have no design background. I've always wanted to design. I've always wanted to be a great designer. It feels like that's the hardest one to learn of all these three by just watching and talking, right? Because there's a lot of exposure. Hours, as folks have used this term, just like, how do you learn to be a great designer?
Starting point is 00:58:19 That feels like that's going to be really hard and valuable. So my true confession is I've always kind of wanted to be a cartoonist. But I have no art skills. But as we're talking, I'm like, hmm, it might be time. Their time has come, Mark. Yes. I want to pivot to founders, maybe your bread and butter.
Starting point is 00:58:37 You spend a lot of time with the most cutting edge AI forward founders. I'm curious what you see them do, how you see them, some way they operate that's maybe blowing your mind about how the future of starting company looks, how the future of AI forward companies look. Yeah, so this is a great, very topical topic. It's all playing out in real time right now on the leading edge.
Starting point is 00:58:59 So I think there's like three layers of it, see if this makes sense. I think there's like three layers of it. I think layer one is they're thinking, all right, how does AI redefine the products themselves? Right? And this is kind of the time-honored, you know, kind of thing that happens with technology transitions,
Starting point is 00:59:15 and this is kind of what, you know, a lot of entry capital is based on, which is, you know, okay, there's a new technology that comes out. And, you know, maybe it's the personal computer or the iPhone or the internet or now it's AI. And it's like, all right, is this a new capability that gets added to existing products, right? So all of a sudden you've got, I don't know,
Starting point is 00:59:33 an existing, you know, software business, and now you've got your PC version of it and now you've got your iPhone version of it and you just kind of keep on going and you know you kind of add the new technology kind of gets kind of added into the mix. You know, it's kind of another ingredient
Starting point is 00:59:44 to an existing formula. And of course, you know, a lot of new technologies are like that, right? You know, I don't know when, I don't know, when flash storage came out or something, you know, it didn't really, you didn't really redefine the software industry because people just went from using,
Starting point is 00:59:58 you know, hard disk using flash storage or something. But when the internet came out, like basically, old school on-prem software, for the most part, not entirely, but like a lot of it died and just got replaced by like web software. Right. And so sometimes you get the kind of, it's additive to an existing thing.
Starting point is 01:00:15 Sometimes you get the actually it redefines, an entire product category, redefines an industry, the actual company, you know, in many cases of the companies themselves turn over. And so, so, so, you know, so there's sort of this question. And like, you know, an example, you just mentioned nanobanana. So like a great example is there, you know,
Starting point is 01:00:29 there are these businesses like, you know, like Photoshop is built a whatever 40-year franchise in image editing. Okay, is AI a sort of a feature now that gets added to Photoshop to be able to do AI-based image editing? Or, you know, do you just like stop editing images entirely because you're using nanobanana and all images are just being generated? And it's just easier to just have AI generated a new image than it is to try to edit them in the mold one. So I think, you know, there's many areas of tech in which that question is being asked. And, you know, the answers I think will vary by domain.
Starting point is 01:01:00 But, you know, obviously as a venture firm, we're betting hard on many of these categories being totally. reinvented and a lot of the best founders are trying to figure out how to do that. So that's kind of AI, you know, changing the definition of the product. I think the next layer is actually a lot of what we've already talked about, which is AI changing the jobs. And so it's, you know, a lot of what we already talked about, but like, okay, if I'm a founder of a company and I've got, you know, if I have, you know, room in my budget for 100 coders, you know, how do I get those coders to be super empowered AI coders, not, you know,
Starting point is 01:01:30 not the kind of coders I used to have. And if they're super empowered AI coders, then does that mean, you know, do I still need 100, maybe now I only 10, or does that mean I still want 100, but now they're doing 10 times more, right? And so, you know, as a lot of the best founders are working on that right now. And then I think the third shoot of drop hasn't quite dropped yet, but it's, you know, it's kind of the big one, which is like, all right, like the basic idea of having a company, right? You know, does that change? And again, here, you've got this concept of the superpowered individual, which is like, okay, you know, can you have entire companies where you have basically the first, you know,
Starting point is 01:02:04 founder does everything, right? Because what the founder is doing is like overseeing an army of AI bots. And there's sort of this, you know, there's kind of this holy grail in our industry that's been running for a long time, which is like, can you have the, can you have like the one person billion dollar outcome? And, you know, we've had a few of those over the years. Bitcoin is probably the most spectacular example, you know, was Assyrium right behind it, you know, which wasn't quite one person, but, you know, a very small team. You know, you had, you know, kind of Instagram and WhatsApp that had very big outcomes with very small teams. You know, every once in a while, you get one of these things where you just, you know, something hits and you just have a, you know, very small
Starting point is 01:02:36 number of people associated with, you know, but that said, you know, most software companies obviously end up with, you know, huge numbers of employees. And so I think, you know, the most leading as founders are thinking of like, okay, how do I reconstitute the actual very definition or idea of a, of having a company? And, you know, can you have a company that's literally basically just all AI? And so, and if you're doing something, you know, if you're doing anything in the real world, that's hard. But if you're doing software like that, that seems like it might be usable in some cases. And then, you know, there's like the ultimate example of that, which is like, you know, can you have like, AI, can you have like autonomous like
Starting point is 01:03:09 AI economy stuff happening where you have like AI bots in the blockchain or something, you know, that are out basically out there like functioning as a, as a business. or like making money and just, you know, literally where the AI does all the work itself and just, you know, issues me as. And so maybe, you know, maybe that, you know, maybe that's the final outlier result. We have, we have a few founders who are chasing that kind of thing. So I would describe that as, I would describe that as kind of the latter that the best founders around. Super interesting. This whole idea of a one-person billion-dollar company, I think it depends on your definition of what this is, like an outcome I could see.
Starting point is 01:03:40 Having run, running my newsletter as one person with some contractors, there's so many little annoying things that I have to deal with, with just support tickets and issues and bugs. And like, it's hard for me to imagine, actually, a one-person billion-dollar company, even if AI is handling so much of your support because there's just so many random edge cases that I'm just like filling out forms. And so I guess, This depends on do you have contractors? Does that count? You know, like, what does it mean to be a one person? But I'm just like, I can't see that happening. Yeah, I mean, look, Bitcoin, Satoshi pulled it off. But like, you know, the open source community, you know, like, does that count? I don't know. I guess it counts. Okay. Yeah, exactly, right? So, yeah. Yeah. And I would say I don't propose to have answers here, but more just like, the smartest people I know are, many of the smartest people I know are thinking hard about this.
Starting point is 01:04:29 Yeah. What do you think about moats? A big question constantly in AI, you know, given the fact that everything's changing. Just, what's your guys' thesis on moats in AI? Is that even a thing? Do you care? My experience with really big technological transformations, and of course I kind of lived this directly with the internet and I saw this happen, is that the really big technological transformations take a long time to play out, and there are all of these
Starting point is 01:04:53 structural implications that just kind of cascade out over time. And then there's kind of this, there's this like rush to judgment up front where people kind of say, oh, it's therefore obvious that, you know, XYZ, it's therefore obvious that this kind of company is going to be the company of the future and not that kind. It's obvious that this incumbent's going to be able to adapt and this other one isn't. It's obvious that there's economic opportunity in this kind of startup and not in these others. It's obvious that the moats are going to be in this area of the technology, but not in this other area. And, you know, what everybody does is they kind of state those things with like just an
Starting point is 01:05:25 enormous amount of self-assurance where they, you know, where they really sound like they have all the answers. And then, you know, what happens is this, these ideas kind of saturate the media right? Because the media naturally prizes like definitive answers over open questions. Because, you know, you want, you know, like when CNBC is like booking guests, they want a guest who's going to come on and say, yes, this is the way it's going to be X. Not like, you know, I think that's a really good question. And let's like debate it from like eight different angles. And what I found is if you look back on those predictions a few years later, and you can do this, by the way, if you pull up like coverage of the internet from like 1993 through like 1997 or like, for that matter, even through like 2005 or 2010 and you look at like the kinds of confidence. statements people were making in the first 10 or 15 years. Like, I would say, like, almost all of them are wrong. Again, generally, like, quite badly wrong. And so I just, I think the process, I think with massive, with,
Starting point is 01:06:15 if there's going to be a massive amount of technological change, it's going to be like, I don't know, five or six layers of, like, stressful change that will play out over time. And again, we've talked about a lot of this, but like, implications on, like, what are the definition of products? What are the definitions of companies? What are the definitions of jobs? So one of the definitions of industries, how does this play out of the national level?
Starting point is 01:06:34 How does this play out at the global level? You know, how does this intersect with politics? How does this intersect with, you know, unions? How does this intersect with, you know, war? You know, what's China going to do? You know, and so it's just like there's just, there's, there's just a tremendous number of unknowns. Like a very, very large number of unknowns. And I think it's just like really, really dangerous to prejudge these things.
Starting point is 01:06:58 And so I'll just give it. And it's just, I'll just run this as a thought experiment. You can see what you think on this, but it's like, you know, like, do AI models, are AI models themselves, like, defensible? Like, is there a moat on AI models? And on the one hand, you'd be like, wow, it certainly seems like there is or should be because, like, if something takes, you know, billions of dollars to build and you know, you need this, like incredible critical mass of, like compute and data and there's only a certain number of engineers in the world that know how to do this. And, you know, they are getting paid like MBA stars. and then these companies have to deal with all these crazy, you know, political issues and press issues and reputational stuff and regulatory and legal.
Starting point is 01:07:36 Like all of that translates to like, you know, okay, probably at the end of this, there's going to be two or three companies that are going to end up with like, you know, 100 percent, you know, I don't know, whatever, 50, 50 or 30, 30, 30, 30, or 90, 10, 1 or whatever it is market share and then they're going to have whatever probability they have and it's going to be a kind of a classical oligopoly and or maybe, you know, or maybe one company's definitively, it will be a monopoly. And by the way, those outcomes have happened in software many times before. And so maybe that will be the outcome.
Starting point is 01:08:01 You know, the other side of it is, you know, if you had told me three years ago, you know, that in the, you know, kind of Christmas of chat GPT, that like within basically a year to year and a half, there would be, you know, five other American companies that would have basically, you know, exactly capable products. And then there would be another five companies out of China that would have exactly capable products. And then there would additionally be open source that was basically the same. I would have been like, wow, like, you know, the thing that seemed like it was black magic all of a sudden,
Starting point is 01:08:29 you know, has become like commoditized really fast, you know, which, by the way, is exactly what happened, right? Like, you know, within a year of GP3 coming out, where there were open source GP3 is running on a fraction of the hardware, right, they were available for free. And then there were, and then, you know, there were five, you know, now you've got, you know, fully in the game, you've got Anthropic and you've got XAI and you've got meta
Starting point is 01:08:49 and you've got, you know, all these other companies that are, and then Deepseek and, you know, Kimi and all these other companies. And so, like, even at the level of, like, LLMs or, you know, AI models, like, you can squint and make that argument either way. By the way, same thing at the level of apps, right? It's like, you know, one school of thought is, you know, apps are not a thing because, like, the model's just going to do everything. But another way of looking at it is, no, actually, like, actually adapting the model is kind of the engine into a domain involving human beings, where you need to, like, actually have it fit for purpose to be able to function in the medical industry or the legal industry or whatever.
Starting point is 01:09:24 or coding, you know, no, you actually need, like, the application level is actually going to matter enormously. And maybe the LLMs commoditize and maybe the value goes to the apps. And again, you can kind of squint either way on that one. And I know very smart people who are on both sides of that argument. And so my honest answer on this is I think we're in a process of discovery over time, which is, you know, the way I think about this kind of structurally is it's a complex adaptive system. The technology itself, you know, provides one of the inputs.
Starting point is 01:09:49 The legal and regulatory process is another input. But actual individual choices made by entrepreneurs matter a lot. The economics matter a lot. Availability of investor capital varies over time, and that matters a lot. This is a complex system, and so we actually don't know the outcomes on this yet. We need to be open to surprises at the structural level of what happens.
Starting point is 01:10:15 And of course, as a VC, this is very exciting, because it means we should kind of make bets along every one of these strategies and see how this plays out. And I'd just say, there may be one particularly brilliant, I don't know, hedge fund manager or something who has it all figured out, but I guess I would say if they exist,
Starting point is 01:10:33 I haven't met them yet. So what I'm hearing here is, don't over-obsess about moats at this point, because we have no idea how it'll end up, as much as it may feel like, okay, there's no way OpenAI will lose this lead. Clearly we're seeing a lot of competition. The GPT wrapper point is really great. It was such a derogatory term.
Starting point is 01:10:51 I don't know, a year ago it was just like, you're just a GPT wrapper. Now it's like those companies are the biggest companies
Starting point is 01:10:56 and fastest-growing companies in the world. Yeah, well, it's a little bit like, I don't know, I mean, even just with,
Starting point is 01:11:00 you know, if three years ago was the holiday of ChatGPT,
Starting point is 01:11:05 this last month or whatever has been the holiday of Claude, particularly for coding. But it's like,
Starting point is 01:11:11 you know, it's pretty amazing, because it's like, okay, there was Claude, which was obviously a great accomplishment. But then there's Claude Code, which is an app, right?
Starting point is 01:11:19 It's a Claude wrapper, right? It's, you know, an agent harness. And then they did this amazing thing where they came out with Cowork. Cowork. Cowork. And remember what they said about Cowork, which is that Claude Code wrote Cowork in a week? Yeah, a week and a half, yeah.
Starting point is 01:11:34 100%. Well, and there are two ways of looking at that. One is like, wow, that's really impressive. I mean, obviously it's really impressive that Claude Code was able to build Cowork in a week and a half. That's great. That's amazing. The other way to look at it is, Cowork was developed in a week and a half.
Starting point is 01:11:49 Like, how much complexity could there be? How much of a barrier to entry can there be in something that was developed in a week and a half? And then, again, it's this push and pull thing, where it's like, wow, it's incredibly functional, incredibly valuable. And people all over the world every day now are like, wow, I can't believe what I can do with this. It's like the most magical product ever.
Starting point is 01:12:09 But at the same time, it took a week and a half. Right. And so every other model company, you have to expect, is sitting there being like, okay, obviously we need to build an agent harness, and then obviously we need to build a Cowork-type thing for regular people. And obviously, I'm not even saying I know anything, but obviously they're all going to do that. Right. And so how defensible is that?
Starting point is 01:12:30 And in six months, you know, and we've seen this happen before, is Claude Code going to get lapped the same way that GitHub Copilot got lapped? The history of the last three years has been that everything that looks like the fundamental breakthrough gets basically replicated by the other labs very quickly. Many of the smartest people I know in the field, when I really talk to them, when you get a couple of drinks in them, they're like, yeah, basically one theory is there really aren't any secrets among the big labs. The big labs kind of all have the same information and they kind of have all the same
Starting point is 01:12:58 knowledge, and they lap each other on a regular basis, but there's not a lot of proprietary anything at this point. And then, again, the evidence for that is DeepSeek came out of left field and basically was a re-implementation of a lot of the ideas from the American big labs, and had some original ideas of its own. But like, wow, it wasn't that hard for basically a hedge fund in China to do it. So how much defensibility is there? But on the other side of it, you've got, wow, these big labs are now paying individual engineers like they're rock stars. And they're incredibly
Starting point is 01:13:29 bright and creative people. And maybe there are a dozen nascent ideas in any one of these labs, one of which is actually going to be a huge breakthrough that's going to be hard to replicate. And so, again, my view is I need to put a big discount on my forecasting ability on this one. For me, it's much less interesting to try to say, okay, as a consequence, industry structure in five years is going to be X, the big winner in the category is going to be company Y, the big killer app is going to be Z.
Starting point is 01:13:54 I don't think I can predict that. I think a much better use of my time is being very flexible and adaptable at a time like this. So with all this in mind, do you feel like there's something you're paying attention to more to help you decide, okay, this is where we want to place our bet? Or is the answer essentially the strategy you guys have, which is place a lot of bets? You guys raised the largest fund in history. Is that the way you win in this world?
Starting point is 01:14:19 Yeah, so for us, we obviously have a very deliberate strategy. One way to think about this: you remember the Peter Thiel formulation, where he said there's a two-by-two. There's optimism and pessimism, and then there's determinate and indeterminate, right?
Starting point is 01:14:37 And he always argued that Silicon Valley is characterized by too much of what he calls indeterminate optimism, right? And what he meant by that, I think the way he would describe it, is an indeterminate optimist is somebody who thinks the world is going to be better but can't explain why. Right. Like, some combination of things is going to happen to make the world better, even if we don't know what those things are. And I think he, at least historically, would say that risks at least being just wishful thinking
Starting point is 01:15:06 or delusional thinking, and what the world needs more of is determinate optimists, which are people who are like, no, the world is going to be better because I'm going to do this specific thing, right? And he would classify, for example, Elon. He would sort of maybe say,
Starting point is 01:15:19 you know, VCs are indeterminate optimists, and then he would say Elon is the determinate optimist, where it's like, no, I'm going to build the electric car, I'm going to do solar, and then I'm going to do Mars, right? I mean, these very concrete things.
Starting point is 01:15:33 I think there's a lot to Peter's framework. But where I would maybe disagree with it is, I think indeterminate optimism is a stronger phenomenon than he has historically represented it as. I would put myself firmly in the indeterminate optimist category, and that's the strategy we have at a16z. And the reason for that is, hopefully, it's not so much wishful thinking,
Starting point is 01:15:54 it's more, no, the indeterminate optimism of venture capital, or of a16z, or of Silicon Valley, is actually very specific, which is: there are these extremely bright and capable people like Elon, and many others, who are founders and kind of product creators, right? And each of those individual people is a determinate optimist. Each of them individually has a very strong view
Starting point is 01:16:18 of what they're going to do. But the great virtue of the capitalist system, the great virtue of the American economy, the great virtue of Silicon Valley, is we don't just have one of those, and we don't just have 10 of those. We have 100, and 1,000, and then 10,000 of those. And the way to optimize the outcome is to have as many of those as possible, be as good as possible, run as hard as possible.
Starting point is 01:16:35 And then just the nature of the future is we just don't know all the answers. And that's okay. The right way to deal with that is to run as many experiments as possible and have as many smart people trying to do as many interesting things as possible. And so, yeah, I would put myself firmly on the side of the indeterminate optimists. I mean, I'm wondering if the answer to the question of what you look for now, more and more, is this determinate-optimist founder who has this massive ambition and is actually working on achieving it. Yeah. No, that's right. That's right. I mean, look, founders need to be determinate optimists. Like, they need to have a very specific plan. Now, and look, the critique from the founders is, oh, you VCs have it easy, because you don't actually have to commit, right?
Starting point is 01:17:17 You don't actually have to make the bed you lie in. You can place multiple bets, you can operate as a portfolio. You should have a lot more sympathy for us as founders, because we only get to make the one bet. And there's truth to that. The counterargument is that the founders get to run their companies. We don't. They do. We don't get to put our hands on the steering wheel. And so the great virtue of being a determinate optimist is you actually get to single-mindedly execute against that goal.
Starting point is 01:17:44 And look, in the long run, who does history remember? History remembers Henry Ford, right? Not, you know, whoever the seed investor was who seeded Ford Motor Company and 10 other car companies that failed, right? And so the determinate optimist, the founder, the company builder, the engineer: these are the people who actually do the thing, and they deserve 99.99% of the credit. But having said that, I do think there is a role for having some indeterminate optimists in the background, helping along the way and helping keep the whole cycle going.
Starting point is 01:18:14 Do you think about AGI in shifting your investment thesis? Like, as we approach AGI and hit AGI, as an investor, how do you think about your investment thesis changing? Yeah, so I've always kind of struggled with the concept of AGI. Well, let me put it this way: let's define terms, which is where I kind of struggle with it. There's the prosaic definition of AGI, and then there's the, I don't know, cosmic definition. And the way I would describe it is,
Starting point is 01:18:46 well, let's start with the cosmic one. The cosmic one is basically the singularity, right? And so AGI is the moment where you enter the singularity, which is to say where the world fundamentally changes. The rules of the old world are gone, and we're now operating in a new domain.
Starting point is 01:19:01 And then the full definition of the singularity is it's a world in which human judgment is no longer really relevant, because you get this self-improvement loop. The AI is improving itself, and it sort of races: these so-called takeoff scenarios, where the AI is improving itself and the machines are making decisions so much faster than people that people are just sitting there watching the machine do its thing. And I would say I don't really think we live in that world. Whether you could call that utopian or dystopian, I don't think we're lucky or
Starting point is 01:19:30 unlucky enough to live in that world. We could debate that; we can talk about that more. But the prosaic definition of AGI, which at least I think the industry participants have kind of converged on, and tell me if you agree with this, is it's when the AI can do every economically relevant task as well as a person. The way the co-founder of Anthropic put it is, it's like a basket of the most valuable economic tasks. So it's like the 10 or 15 most valuable, not every single economically valuable task. Okay, got it.
Starting point is 01:19:53 Yeah, so that's maybe even a slightly reduced definition. And by the way, we're clearly getting close to that, if we're not already there. And so on that one, I kind of feel like the cosmic definition overstates what's going to happen, and the kind of AGI definition you just gave understates what's going to happen. It's almost too reductionist. And the reason for that is, I don't think there's any reason to assume that human skill level is the cap on anything. Right. And the AGI definition, the one you gave and the one I give,
Starting point is 01:20:26 it's always kind of relative in comparison to a human worker. Right. And it's like, I don't know, human skill level caps out at a certain point, but that's because of the inherent biological limitations of the human organism, right? I'll give you an example: human IQ, what they call fluid intelligence, or the sort of g-factor kind of fluid intelligence.
Starting point is 01:20:48 IQ, I think, tops out in humans as a species around 160, right? Where at 160, it's like Einstein level. Einstein, Feynman. In terms of IQ. In terms of IQ. It just tops out at 160. The 160-
Starting point is 01:21:01 IQ people are the ones who come up with new physics, and there's only a small handful of those. Generally speaking, when we run into somebody in the world who's incredibly smart, who's a best-selling author or one of the world's best research scientists or one of the world's best doctors or whatever, it's probably 140; that's kind of the IQ you're looking at there. If you're looking at a really good lawyer, it's probably 130. If you're looking at a really good line manager in a business, it's probably 110. If you're looking at an accountant, a small business accountant who's good at doing the books for small businesses, it's probably 105. Right. And so the scope of impressive human intellect, the ability of the human organism to do intellectually impressive things, is sort of that 110-to-160 spectrum.
Starting point is 01:21:49 And the good news is there are a lot of those people running around, but there are not that many at 140, 150, 160. But that's just the limitation of what can fit in here, right? And there's no theoretical limit on where this goes if you release the limitations of human biology, right? And you already have people running these experiments to kind of measure human-equivalent IQ for existing AI models. And by the way, existing AI models right now are testing around the 130, 140 level, which means they're going to get to the 160 level.
Starting point is 01:22:19 And they're arguably, on the math side, starting to get to the 160 level now. But I think we're going to have AI models relatively quickly that are going to be like 160, 180, 200, 250, 300. And by the way, I think that's great, right? I feel as great about that as I do about the fact that we occasionally get an Einstein, right? Would the world be better off or worse off with more or fewer Einsteins? And the answer is, of course, the world would be better off with more Einsteins. And of course, the world would be better off with machines that have IQ like Einstein's, or greater than Einstein's.
Starting point is 01:22:47 But I think the IQs of the machines are going to exceed those of the humans, and I think that's really good. And then the performance, again, this goes back to the AI coding thing that's happening: the performance against tasks is going to get better also. This is where Linus Torvalds in particular is like, yeah, okay, this thing is starting to generate better code than I can. Okay, so now we're going to have AI coders that are actually better coders than the best human coders. I think that's great.
Starting point is 01:23:09 I think we're going to have AI doctors that are better than the best human doctors. I think we're going to have AI lawyers that are better than the best human lawyers, which is going to be very interesting to see, and which I think is also great. And so I think we're used to living in a world where we just don't understand how good good can get, because we've been capped by our own biology, and we're going to get to experience what it's like to have capability at your fingertips that's actually better than human in these domains.
Starting point is 01:23:35 And so you see what I'm saying, which is, I think this idea of human-equivalent is just going to be a footnote. It's like, oh yeah, that was just on a Tuesday, you know, in 2026, when they hit that. And it kind of didn't matter, because the next question is,
Starting point is 01:23:49 okay, what do we get to do in a world in which we actually have machines that are better than that, right? And so I think this is going to be much more of an exploratory process of actually exceeding human capability than it's going to be any sort of particular singularity moment that just happens to coincide with the human threshold. 200 IQ. Just that frame of reference is such a
Starting point is 01:24:10 mind-expanding way to think about just how fast and how smart these things are going to get, and how quickly. Well, I don't know if you have this experience; I have this experience all the time. Well, two experiences I have all the time. One is just, I know I ought to be able to do this, but I just can't; it's going to take too long. I want to write this thing, or I want to have this theory on this thing, or I have a plan or whatever, and it's just like, fuck, I don't have the eight hours, or, by the way, the eight weeks or the eight years, right? And I just don't know enough yet. And I can't do the math in my head, and my memory isn't perfect, and I can't remember. And, you know,
Starting point is 01:24:51 if you have this: you get interested in something, you read 10 books, and then you're like, shit, I forgot almost everything that I just read. I wish I could retain it all, but I can't. So I sort of live in this kind of state of almost frustration. If I could just be smarter than I am, I'd be so much better at what I do, but I'm not. So there's that. And I don't know how often you have this, but I have this on a regular basis: because of what we do, I know a bunch of people who I know for fucking sure are smarter than I am. And I know it because when I talk to them, I just find myself at a certain point, you know,
Starting point is 01:25:24 it's like for the first half of the conversation, I'm just taking notes the entire time. And for the second half of the conversation, I'm just like, fuck, fuck me. This person is just smarter than I am, and they're just outthinking me, and they're going to keep outthinking me. And I'm just like, all right, God damn it, I've got to go home and have a drink, because I'm just not, whatever that is, I'm not that. And so we're just so used to having those limitations. The idea of having machines that work for us that don't have those limitations, I just think that's much more exciting than people are giving it credit for.
Starting point is 01:25:56 Oh, man. I could talk to you for hours, Marc. To close out the conversation, I want to ask about your media diet and your product diet. You just talked about books, reading 10 books. I think you famously read constantly. I saw an interview with you where you were just like, AirPods changed my life, I'm just listening to audiobooks now all the time.
Starting point is 01:26:14 So in terms of media diet, what are you reading? What are you paying attention to these days in terms of, I don't know, podcasts, newsletters, blogs, things like that, and then any books in particular? Yeah, yeah. So I'd say I read basically three categories of things. In terms of general media, I'd describe it as I have an almost perfect barbell strategy, which is I read X and I read old books. Right? So it's basically either up-to-the-minute what's happening right now, or it's a book that was written 50 years ago that has stood the test of time, where presumably there's something timeless in it. And everything in the middle I'm always much more skeptical about. And in particular, it's kind of what I already said, which is, I think if you go back and you read old... nobody ever does this. It's actually really funny.
Starting point is 01:26:59 There's no market for it. But if you go back and you read old newspapers (and by the way, you can do this: just read last week's newspaper, right? We're taping on a Friday, so read last Friday's newspaper) and just go back and read it, you'll be like, oh, my God. None of this happened.
Starting point is 01:27:16 None of what they predicted played out the way they said it would. None of it turned out to actually be that relevant or correct. They didn't understand; by the way, they had no view of what was going to happen this week, because they couldn't know. And so they were making predictions and forecasts and so forth based on not having any information. And it's like, wow,
Starting point is 01:27:35 none of this happened. I wish I had never read this. Oh, my God. And then it's kind of the same thing with magazines. Go back and read old magazines, and just look at the endless numbers of predictions they make. And, you know, newspapers at least are going day to day.
Starting point is 01:27:50 The thing with magazines is it's a week- or month-long cycle. And so by the time an article even hits publication, it's often out of date. So I just have a big problem with kind of everything in the middle. And so it's either of-the-moment or timeless. But then, yeah, you mentioned newsletters. So the other thing, and this is maybe obvious, but I think it's probably
Starting point is 01:28:10 still underrated, is actual practitioners in the field who are actually creating content. I think that's probably still dramatically underrated. And I think this is a huge part of the Substack phenomenon and the newsletter phenomenon and the podcast phenomenon: direct exposure to the people who are actually principals in the field, who actually know what they're talking about, is probably still dramatically underrated. And I think, again, the reason for that is we're used to being
Starting point is 01:28:32 in this mass media kind of culture in which basically everything is mediated, right? Everything got filtered through TV interviews or newspaper interviews or magazine interviews. And obviously now, more and more, it's no, you actually want smart people who are actually working on something explaining themselves. And then you have new kinds of intermediation, like podcasts, that open that up for people, that make that possible. And so, yeah, domain practitioners are really great.
Starting point is 01:28:55 I mean, just to state the obvious in AI, it's obviously your stuff, but also, you know, Lex, the fact that Lex Fridman, or any of you guys, there's a small handful of you who have access to these people, can have the world's kind of leading experts in the domain actually show up. And by the way, look, the critique always is, people talk their book. Like, if I'm running a startup or whatever, I'm just selling. And there's always a little bit of that.
Starting point is 01:29:21 But my experience is also that people love to talk about what they do. They fundamentally want to express what they do, and they want to explain it, and they want people to understand it. Everybody kind of enjoys that, and they get to contribute to human knowledge by doing it, and they get ego gratification by doing it. And so I think there's actually just tremendous amounts of alpha in listening to the world's leading experts in a space who actually just show up and talk about what they're doing. And of course, the world is awash in that today in a way that it wasn't as recently as 10 years ago. So I do as much of that as I can. And there's also just this culture in tech, in Silicon Valley in particular, of sharing,
Starting point is 01:29:54 of not trying to keep these secrets. Everyone on LinkedIn is always like, how is this free? Like, it's just the way it works. Yeah. Somebody said, Silicon Valley is a company town, but the company is the industry. And again, this is one of these great N-equals-one things. At the level of N equals one (and I've run startups before, I've run companies before), at the level of N equals one of running a company, that's just a giant pain in the fucking butt, because your secrets are walking out the door, and your employees are walking out the door, and the whole thing sucks.
Starting point is 01:30:23 But the other side of it is you also benefit from that, right? Because you get to hire people with all these skills and experiences, right? And you're in this ecosystem that adapts, right? And channels talent and skill and knowledge and people into the new fields. And so there's kind of a push and pull to that at the level of just being an individual CEO. At the level of being in the ecosystem, to your point, yeah, it's an absolutely magical phenomenon.
Starting point is 01:30:45 Like, for all the issues in Silicon Valley, I made the comment once that I think AI is the ninth major technology platform in the history of Silicon Valley, right? You know, Silicon Valley is still called Silicon Valley, and we haven't made silicon here in decades, right? It's called Silicon Valley because they used to make chips, right? The actual fabs were in Silicon Valley, and they designed the chips and they made the chips. And that was, you know, wave one starting in the... actually, no, more like wave three or whatever. But that was when the area was named, in the 1950s. But now we're on like wave nine, right?
Starting point is 01:31:24 And the company town phenomenon, where the company is the industry: nobody had to sit and plan and say, okay, in the 1990s Silicon Valley is going to do the internet, in the 2000s they're going to do the smartphone, in the 2010s they're going to do the cloud, in the 2020s they're going to do AI. It's just, right, the indeterminate optimism of the ecosystem, the flexibility of the ecosystem, meant that Silicon Valley could morph into all these categories. And again, maybe it's a testimony to indeterminate optimism. This reminds me of the meme of how we're all just wrappers over sand. Everything we're building is just wrapper, wrapper, wrapper. The wrapper thing is hysterical. Yeah, I'm a software company, I'm a chip wrapper, right? Yeah, I'm a business application, I'm a database wrapper.
Starting point is 01:32:08 Yeah, exactly. I'm a sand wrapper. Yeah, you and I are all now sand wrappers. Perfect. Okay. One more question along the media diet. I asked your partner, Ben Horowitz, what to talk to you about.
Starting point is 01:32:19 The Z in a16z, if people don't know him. And he said that you're really into movies these days. And so, I don't know, any movies you're really into these days? Any movies you've absolutely loved recently? Yeah, so the movie that blew my socks off last year, which I think is the best movie of the decade for sure, maybe of the last 15 years,
Starting point is 01:32:37 is this movie, unfortunately, it's one of these things where not a lot of people have seen it, but I would highly encourage it. It's called Eddington. Not heard of it. You've not heard of it? Okay, so you're going to really enjoy it. So I won't spoil too much of it.
Starting point is 01:32:49 So at the surface level, the following spoils nothing. It's set in a small town in New Mexico called Eddington, which is a small town of about 600 people. And there's a sheriff, played by Joaquin Phoenix, who's like an old, crusty, basically right-winger. And then there's a mayor, played by Pedro Pascal, who's basically a young, hip progressive. And then the movie starts, I think, in March of 2020, so it starts when COVID first hits. And then, as it plays out over the next couple of months, it sort of extends into the summer of 2020.
Starting point is 01:33:27 So, you know, kind of the George Floyd moment and then the, you know, the protests and riots and kind of everything. So sort of the convergence of COVID and then the and then the all the BLM stuff. And then it and then and then there's a third kind of element to it, which is there's a company which is basically a loosely disguised version of meta, if you read the backstory of it, which is building an AI data center on the outskirts of town. So they kind of pull that in as sort of a thing that looms larger and large. over time. And then the thing
Starting point is 01:33:56 it really is great at is it really shows this is a small town in New Mexico. And so everybody in the town gets kind of fully wrapped up in all the COVID stuff and they get fully wrapped up in all the BLM stuff and they get fully wrapped up in all the like, you know, tech anxiety stuff. But they're all experiencing it basically through the internet,
Starting point is 01:34:12 right? Which is what actually happened, right? And so the reason I love the movie so much is, one, it's the first movie that directly grapples with what happened in 2020 and just fully engages with all the dynamics that were playing out in the country. But the other reason is it's the first movie that does a really good job of showing
Starting point is 01:34:30 what it was like, especially in that era, to live in a world in which things happened in the real world and people were kind of experiencing events online, you know, in a way that was very central in their lives. Right. And so it does a really good job of pulling in smartphones and social media in a way that movies really struggle with. And then the whole thing comes together in an incredibly entertaining way. And I won't even say I completely agree with the movie or whatever, and I think the director of the movie and I would
Starting point is 01:34:57 probably disagree about a lot, but he really tries hard to, like, really grapple with what it's actually like to live as a human being in the 2020s in America, in a way that I think many other filmmakers who are very talented have just been very scared of touching. And this guy, for some reason, he's just like, yeah, I'm just going to find all the third rails and I'm just going to, like, fucking grab them. I can see why it's your favorite movie. It's great. It's great.
Starting point is 01:35:19 It's great. Everybody should see it. Oh, man. Okay. Final question. I want to ask you at your product diet. Are there any products you use that maybe are less known that you love that you want to recommend? You can mention products to your investors and if you use them constantly. I mean, we have so many that it's really hard to, you know, I always feel it's like, you know,
Starting point is 01:35:37 who's your favorite child? And so it's really hard, you know, to pull out specific ones. But I'll, you know, I'll talk about a few. Or I'll just make an observation. So one is my 10-year-old right now is 100% obsessed with Replit. And by the way, it was not from me. Do you have kids? I do.
Starting point is 01:35:57 I have one, a two-and-a-half-year-old. Two and a half. Okay, so you haven't run into what I'm running into now, which is whatever it is you do is not cool. Right? Like, at two and a half, whatever daddy does is like the coolest thing in the fucking world. I can tell you, by the time he's 10, whatever you do is like deeply uncool. Right?
Starting point is 01:36:11 And I'm highly aware of that. And so, like, if I mention, oh, yeah, we work on XYZ, you know, he's like, okay. But when he discovers something, then it's cool, or when his friends tell him about it, it's cool. And so, through no interference on my part, he discovered Replit about three months ago and discovered vibe coding. And he's, like, completely obsessed with vibe coding games
Starting point is 01:36:31 and all kinds of things. And, like, literally will sit and do it for hours. And so I'm seeing that phenomenon play out, which is super fun. That's one. Two is I am just completely in love with all the AI voice stuff. I think it's just absolutely amazing, hysterical. My favorite party trick at dinner parties now
Starting point is 01:36:49 is to pull out Grok with Bad Rudy, which, if you've seen it, is a foul-mouthed raccoon avatar in Elon's Grok app. So I think that's super fun. We have this company, Sesame, that, you know,
Starting point is 01:37:04 they went viral last year for this, you know, these, just incredibly, like, you know, intimate, emotional, you know, kind of voice experiences. So I think the voice stuff is fantastic. I'm also super fascinated by all the voice input stuff. And so, you know, You know, one of us recently,
Starting point is 01:37:22 company recently sold. But, you know, that all the, I think like dependence, the wearables, like all that stuff is going to be big. The meta glasses. I think there's going to be a whole wearables revolution here. I love the voice input stuff. I have this app in my,
Starting point is 01:37:36 there's this app on my phone now called Whisper Flow, which is voice transcription, which works like staggeringly well. It's like incredibly, it's like a voice transcription function, but you can actually talk to the air model while you're doing voice transcription. description. So you can kind of, it kind of understands when you're telling it, no, no, you know,
Starting point is 01:37:53 I want bullet points over there and I want this and that. And it understands that you're not telling it to type in the words, I want bullet points. It just actually understands that you want bullet points. And so, like, that's a great example of a super useful thing. And so I think the voice mode stuff is going to be really great. Subscribers to my newsletter get a year free of Replit and Wispr Flow. So there we go. What's the most memorable thing your son built with Replit? Oh, so he's gotten super into Star Trek. And so, so far, he's been writing, like, Star Trek simulators. So, like, all the, you know, all the,
Starting point is 01:38:23 by Next Generation, they actually had a... Next Generation, okay, I was going to ask which... Well, actually, we like them all. We watched the new Starfleet Academy last night, which is actually quite good, but we watched the original, you know, we watched them all, but it was The Next Generation where they actually developed an actual design language for the computers.
Starting point is 01:38:39 Because if you watch the original series, they just had, like, basically, you know, knobs with lights, and they didn't really, you know, they were just, like, you know, fucking around on set and trying to pretend they were doing it. But by Next Generation, they actually had a UI design language. And so one of the fun things you can do vibe coding is you can say, give me a Star Trek Next Generation, you know, user interface per, you know,
Starting point is 01:38:57 whatever, this, that, or whatever. And it actually uses the, they called it the seven a nerd out. They call it L-Cars design language. And it'll, you know, it'll actually build you like Star Trek next generation, bridge cosholes using that design language. But, you know, was your choice of like a Star Trek game, for example. And so he's going crazy for that kind of thing. And that sounds extremely delightful.
Starting point is 01:39:17 You guys should open source or release that. Mark, like I said, I could talk to you for hours, but you've got things to do. Anything you want to leave listeners with before we wrap up? Anything you want to double down on or just leave listeners with? Yeah, so a couple things. So one is we got super lucky last week. Packy McCormick wrote the best piece ever written about us, actually, which he released.
Starting point is 01:39:37 And so it's the best explanation of what we do and how we think. And so I would definitely recommend that. And then, you know, we have a great team of folks now, and we're putting a lot of effort ourselves into video and, you know, into content. And so I definitely recommend our YouTube channel, which I think has a lot of great stuff. And it's going to be very exciting in the next year. Awesome.
Starting point is 01:39:55 We'll link to that. I think it's just YouTube.com slash A16Z, something like that. And you guys have great stuff. Mark, thank you so much for being here. Awesome. Thank you for having me. I really appreciate it. Bye, everyone.
Starting point is 01:40:08 Thanks for listening to this episode of the A16Z podcast. If you like this episode, be sure to like, comment, subscribe, leave us a rating or review, and share it with your friends and family. For more episodes, go to YouTube, Apple Podcasts, and Spotify. Follow us on X at A16Z and subscribe to our Substack at A16Z.com. Thanks again for listening, and I'll see you in the next episode. This information is for educational purposes only and is not a recommendation to buy, hold, or sell any investment or financial product. This podcast has been produced by a third party
Starting point is 01:40:42 and may include pay promotional advertisements, other company references, and individuals unaffiliated with A16Z. Such advertisements, companies, and individuals are not indoors by AH Capital Management LLC, A16Z, or any of its affiliates. Information is from sources deemed reliable on the date of publication, but A16Z does not guarantee its accuracy.
