a16z Podcast - The 2045 Superintelligence Timeline: Epoch AI’s Data-Driven Forecast

Episode Date: November 24, 2025

Epoch AI researchers reveal why Anthropic might beat everyone to the first gigawatt datacenter, why AI could solve the Riemann hypothesis in 5 years, and what 30% GDP growth actually looks like. They explain why "energy bottlenecks" are just companies complaining about paying 2x for power instead of getting it cheap, why 10% of current jobs will vanish this decade, and the most data-driven take on whether we're racing toward superintelligence or headed for history's biggest bubble.

Resources:
Follow Yafah Edelman on X: https://x.com/YafahEdelman
Follow David Owen on X: https://x.com/everysum
Follow Marco Mascorro on X: https://x.com/Mascobot
Follow Erik Torenberg on X: https://x.com/eriktorenberg

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.

Transcript
Starting point is 00:00:00 People are spending a lot on these models. They're presumably doing this because they're getting value from them. You can maybe argue like, oh, well, I don't think that value's real. I think people are just playing around, whatever. But like, whatever, they're paying for it. That's a pretty solid sign. We're almost giving you here the useful answer of like, I don't think it's a bubble because it's not burst yet.
Starting point is 00:00:20 Once it's burst, then you'll know it's a bubble. People often make the case, oh, AI hasn't been profitable yet, and they're spending more to make it profitable. In reality, they'll have paid off the cost of all of the development they've done in the past very soon. It's just that they're doing more development for the future. Will they regret that spending? How much are they spending?
Starting point is 00:00:39 You can look at Nvidia and how much they're selling each year, and you can see whether it keeps on growing, and you can see whether stuff is kind of looking good to continue. Math seems unusually easy for AI, I'm going to be honest. People often make claims about it being, like, this, you know, intuitive deep thing, that it would mean that AI has achieved something, some huge level of intelligence, for it to solve.
Starting point is 00:01:01 I think in practice this is just like, you know, making a piece of art. It turns out to be farther down the capability tree than people might have guessed. We sort of had this with chess decades ago, right? Everyone was thinking of chess as the pinnacle of reasoning, and then computers solved it very well. And everyone as a result kind of concluded, oh, well, of course computers can do chess. The, like, interesting scenario to think about, you know, 20% chance, 30% chance, something like this will happen
Starting point is 00:01:26 in the next decade, is like, you know, a 5% increase in unemployment over a very short period of time, like six months, due to AI. The public's reaction to this will determine a lot. There will be very, very strong feelings about AI once this happens. I think there will be a bunch of very strong consensus on what to do, on things that we don't normally think of as things that people are considering. I know when this happened with COVID, there was a several-trillion-dollar stimulus package in a matter of weeks to days. It was breakneck speed. I don't know what that will look like for AI. But I think it's like everything else in AI, it's exponential,
Starting point is 00:02:02 which means it will pass the point of people sort of caring about it to people really caring about it quite fast. I just expect wherever we end up, there will be this certain thing which we would have considered unimaginable a year ago. Are we building towards the biggest economic boom in human history or the fastest collapse? Right now, AI labs are burning billions on compute. Anthropic just built a data center
Starting point is 00:02:25 that uses as much power as Indiana's state capital, and Microsoft's planning one that rivals New York City. The bet? That AI will eliminate entire categories of work before the money runs out. David Owen and Yafah Edelman from Epoch AI have done something unusual. They've actually measured what's happening. They tracked down permits, analyzed satellite imagery, and calculated exactly how fast these data centers are scaling. Their conclusion challenges both the skeptics and the true believers. They don't see a bubble. They see revenue doubling every year, with inference already profitable.
Starting point is 00:02:58 But they also don't see the software-only singularity that some predict, where AI recursively improves itself overnight. Instead, they forecast something stranger: a world where AI solves the Riemann hypothesis before it can reliably fold your laundry, where 10% of current jobs vanish but unemployment might barely budge, where we hit artificial general intelligence not with a bang, but through a series of increasingly surreal milestones that keep moving the goalposts. Along with a16z partner Marco Mascorro, we cover their timeline predictions, what stops or doesn't stop the scaling, and why the political response might happen faster than anyone expects.
Starting point is 00:03:38 Guys, there's a lot of conversation about the macro. Are we in a bubble? How should we even think about this question? We're going to get into forecasting later on. But why don't you just take your first stab at how you approach such a big general question? Yeah, I mean, for me at least, the way that I thought about this a little bit is I look at kind of the big indicator being how much people are spending on stuff like compute, and I guess maybe some sense of, will they regret that spending? That's relevant. But the "how much are they spending" thing, like, you can see: you can look at Nvidia and how much they're selling each year, and you can see whether it keeps on growing, and you can see whether stuff is kind of looking good to continue. The "will they regret it" side? I mean, that's just wait and see, right? Like, we'll actually have to wait and see. It does seem as if most compute gets spent on inference that companies don't so far regret, like, using to offer their products. So, I mean, on that side, I'm, like, thinking not too bubbly yet. But, yeah, low confidence, and there's other stuff to think about. Right now, the amount of money companies are actually earning in profit, not including the cost to develop the models initially, seems to be, like, very positive, such that if they stopped developing bigger and bigger models
Starting point is 00:04:50 and just stick with the ones they've had, they'd have earned a profit pretty quickly at the current margins. And in this sense, it doesn't seem bubbly. On the other hand, at any given time, they're investing in building even larger and larger models, and if that goes well, then they'll learn more money, and if that doesn't go well, then no matter how profitable they are right now,
Starting point is 00:05:08 it'll be a small amount of money compared to how much they would have spent. So I think right now there are not financial signs that there's a bubble. A lot of people worrying about bubbles just aren't necessarily used to the level of spending and just, like, the level of success that has sort of happened with, like, scaling.
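A minimal back-of-envelope sketch of that argument, with purely illustrative numbers rather than anything reported by Epoch AI or the labs:

```python
# Sketch of the "inference is already profitable" argument above.
# Every number is an illustrative assumption, not a reported figure.

past_dev_cost = 10e9       # cumulative cost of the models shipped so far ($)
annual_revenue = 8e9       # current annual revenue from those models ($)
inference_margin = 0.5     # fraction of revenue left after serving costs

annual_gross_profit = annual_revenue * inference_margin
print(f"Years to pay off past development: {past_dev_cost / annual_gross_profit:.1f}")

# The bubble risk lives in the *next* bet, which is much larger than past spend:
next_gen_cost = 50e9       # investment in bigger future models ($, assumed)
print(f"Next-gen spend vs. annual profit: {next_gen_cost / annual_gross_profit:.1f}x")
```

On these assumed inputs, past development pays back in about two and a half years, while the forward bet is more than a decade of current profits, which is exactly the asymmetry described here.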
Starting point is 00:05:25 But if there is a bubble, it could happen very suddenly and be pretty bad. Yeah, I think we're almost giving you here the useful answer. Like, I don't think it's a bubble because it's not burst yet. Once it's burst, then you'll know it's a problem. Yeah, yeah. I do think, like, you could imagine a world in which all the spending and the current level of success
Starting point is 00:05:43 does not... Like, people often make the case, oh, AI hasn't been profitable yet and they're spending more to make it profitable, but right now it's not making anything. In reality, they'll have paid off the cost of all of the development they've done in the past very soon. It's just that they're doing more development
Starting point is 00:06:00 for the future. So I think there's this underlying financial success so far that I wouldn't expect to see if there were, at the very least, an obvious bubble. Yeah, that does seem very relevant. People are spending a lot on these models, presumably
Starting point is 00:06:16 to use them. They're presumably doing this because they're getting value from them. You can maybe argue like, oh, well, I don't think that value is real. I think people are just playing around, whatever. But like, whatever, they're paying for it. That's a pretty solid sign. I guess one quick question related to this is, like, you were talking about the report on AI in 2030. Basically, you haven't seen signs of these models kind of plateauing; like, the capabilities keep increasing, and you have the benchmarks, you have the amount of data that's going in, the amount of compute. Do you think phases or parts of the training are plateauing, though?
Starting point is 00:06:49 Like, for instance, pre-training, are we seeing some sort of plateauing in that? Or do you think people are still exploring some innovations in that stage? Curious what you think about that. Yeah, I think this gets a bit harder to look at. Like, we get to an area where there isn't as much public data to say a lot, right? It seems as if pre-training is comparatively less of a focus than it was before, partly because, like, you have this exciting new direction of, well, new-ish direction of post-training, where they've done so much about reasoning, whatever.
Starting point is 00:07:23 But then I don't necessarily take that as evidence of like, oh, no, and that means pre-training, you couldn't scale further, whatever. Like, it seems as if there is meaningfully more data out there, it seems as if plausibly, like, even a lot of this stuff is quite synergistic. You develop a better model. You, like, use post-training stuff to make it better. you get a load of data of the model actually being used successfully or not, a lot of that can probably go into free training next time.
Starting point is 00:07:50 You aren't projecting a software-only singularity where AI is able to automate AI research via an automated feedback loop. Why not? Yeah, I mean, I guess, like, I'm answering this and Yafah will have more to say. And it's like, that report, it's no one person's kind of, oh, this is the forecast, this is the prediction, right? This report very specifically looks at: what are the current trends? Are there reasons that they clearly couldn't continue, or might not? And if they do continue, where do they lead?
Starting point is 00:08:21 I think whether you see this self-improvement thing, that's very hard to do from a sort of trend-extrapolation basis, right? Like, currently AI stuff does help AI R&D at least a little, in terms of stuff like coding or selecting your data sets and creating those, whatever. But it's quite hard to actually measure. It's not really helping in some big way, like this kind of self-improving thing would suggest. There are reasons that you might think it could be very hard. People have discussed before how possibly, you know,
Starting point is 00:08:53 if stuff just depend a lot on scaling up compute, then maybe automating a load of the R&D isn't that helpful. I find that somewhat compelling, but I think it's also just, it's pretty uncertain. It's hard to speculate about something that's quite out of regime like that. One thing that needs to happen in order for a software-only singularity to occur is you need to be in this world where scaling up the amount of researcher R&D time, basically, allows you to, like, improve AI enough that it makes up for the lack of being able to scale experimental compute or pre-training. I think that something you would expect to see if this were the case
Starting point is 00:09:33 is maybe not that much experimental compute being used in practice, and instead all of the money is going towards researchers. Now, there's a very good case that there's a very large amount of money going towards researchers. But as far as we can tell, experimental compute, which you seem to need to do research, is receiving a similar amount of money, and, in fact, it's receiving many times more money than the final training runs of the models that are actually being released. This is, in my mind, a strong update towards, oh, you need to do very large-scale experiments to do research, and that we don't really have good evidence that
Starting point is 00:10:06 researchers and just researchers would be able to speed things up without doing more experiments. However, there are, like, pretty good arguments on either side of this. I tend to lean towards, no, you actually need to do more experiments, and that means you can't get this software-only singularity. But I don't think the people who claim otherwise are, like, crazy. I think they have, like, very reasonable differences, and we're both speculating on something where the data is currently pretty sparse. Actually, related to that, like, what do you think on...
Starting point is 00:10:37 So if you have, like, some of the exploration that researchers are trying... I mean, obviously, like, people are exploring a lot with RL, trying to go beyond verifiable domains. And what do you think about the argument, for instance, that gradient descent is really good at learning the current data set that you're giving it, right? And if you keep training this over and over, it's going to start forgetting things that it was trained on before, right? Like, catastrophic forgetting. And there's this argument, right, like, well, kids don't learn that way. Or, like, maybe there's some imitation learning that kids do; maybe there is some sort of exploration that they do.
Starting point is 00:11:10 And I wonder what you think about it. I mean, it sounds right. Like, if kids really did just learn by imitation learning, I think parents would have a great time just raising kids. But it seems like the reason why they have such a hard time raising kids is because they explore all these different things. What do you think about it? In terms of the algorithms and the things we need
Starting point is 00:11:28 to keep improving these models over and over, beyond the data and the compute? I am cautious about comparing how AIs learn to how humans learn, not because I don't think they are comparable, but because I think we know a lot more about how AIs learn right now than we know about how humans learn, and people like making sort of assumptions about how human learning works and saying, oh, AI doesn't do it that way. And I don't know, maybe that's true.
Starting point is 00:11:51 Maybe human kids learn via RL. I'm not very... I think that, yeah, I don't have strong opinions on whether or not, like, you know, you need to change to a method that's more like what we think kids do right now. I suspect people will find some method that works to use the computer available because they've been able to do this in the past. Yeah, I'm also sort of reluctant. I guess as well, it's one of those things where when we point to particular issues, like the example of catastrophic forgetting, it's sort of, well, okay, but as we've scaled up, we have managed to do quite well at having models that remember more and more things.
Starting point is 00:12:35 This isn't to say that hence the problem is solved, hence we're done, hence no more other mitigation is necessary or anything like that, but I'm not exactly going to write it off. Yeah, I definitely don't think we've seen any slowdown yet in capabilities from any of these concerns people have. I think that people always have these sorts of concerns. I'm reluctant to believe any given one of them until it actually shows up in numbers I can see on a graph, which I just don't think has happened yet. Dario Amodei of Anthropic said in March 2025 that within six months, AI would write 90% of code.
Starting point is 00:13:16 And of course, that hasn't happened yet. He also said we could have AI systems equivalent to a country of geniuses in a data center as soon as 2026 or '27. How do you evaluate why Anthropic is so bullish, or what is the crux of the difference between what they believe and perhaps what you believe? My model at least, which I don't know if it's right, is that they think a bit more like the people who believe you automate R&D and that gives you very quick takeoff. So they see it as, like, yep, we're working on these AIs that are great for kind of research-engineering-type coding, and at some point they're going to be useful, and that's going to rapidly accelerate us to develop the next ones, and then it's going to be
Starting point is 00:14:02 quick progress. Yeah, I think it's hard to tell. I don't think we've gotten a lot of evidence that these sort of software-only takeoff views are wrong, insofar as, like, it's taking a little bit longer
Starting point is 00:14:20 to get to, like, the minimum level of competence for AI to get you there. That definitely seems to be the case. But, I don't know, it's hard to tell the extent to which we've actually had significant updates on this. I know Dario often qualifies what he says by, like, saying "as soon as" or something like this. So these are maybe more the faster timelines he gives, although I'm not sure.
Starting point is 00:14:45 Yeah, there has also been, I think, sort of, you know, Talmud-style commentary where people are carefully looking at his exact wording, and then at the wording of other people's discussion of how many lines of code generated by some teams at Anthropic are generated by Claude Code, and whether this does or doesn't satisfy what he said. So it gets a bit tricky. I remember there was the uplift paper that was claiming that actually models would slow you down. But I think it matters a lot what models they were using at the time
Starting point is 00:15:17 because I think they were pretty outdated by the time the report came out. And I mean, in my personal experience, you definitely become way faster. And it just does so much more for you. Like, it's just able to hold context on your code base. That's such a huge advantage that, I think, for humans would just be really hard to do. I mean, far more than 90% of the code I write is written by AI these days.
Starting point is 00:15:40 But I know I'm not like the average coder at all. But it's definitely... I don't think it's like a wild prediction at this point that 90% of code is going to be written by AI. I mean, for all I know, somewhere at OpenAI, there's someone just, you know, like with AlphaCode, doing evolutionary algorithms with tons and tons of trials, trying to, you know, million-shot some hard problem. But it's just like, it's really unclear how many lines of code are actually being written by AI right now.
Starting point is 00:16:12 I don't think it's such a wild claim. By a lot of, like, people's intuitive sense, in terms of, like, oh, is 90% of the job of a programmer being done by AIs? Definitely not. But there's this more complicated sense of, like, how much is being written by AI. Probably not 90%, but it's hard to tell. Yeah, and I think that is a very meaningful distinction. Yeah, like if you were to measure how many lines of code are being written, quote, unquote, by, like, tab completion, then it's probably quite high. But you don't necessarily expect that that's taking on that much of the programmer's really hard work.
Starting point is 00:16:50 That uplift paper that you mentioned, like, I find it really interesting and really good. And it's also surprisingly recent in a way, like, you know, you mentioned, ah, the models are outdated. I mean, this was early 2025. So these were models that people actually did think were helping them. And in the paper, they even got them to say ahead of time, like, how much do you think this will speed you up? And they said, yeah, I think, however much. They then asked them afterwards, how much do you think this sped you up?
Starting point is 00:17:15 And they're like, yeah, yeah, it sped me up. And I feel it does reveal, actually, like, it might be hard for us to judge whether we were sped up or not. Yeah, one thing that might be happening here is that a lot of the code that's getting written by AI is code that wouldn't have been written otherwise. So it's not really speeding up things that would normally happen. But, you know, there's a lot of simple graphs or simulations I run that might have not gotten written otherwise. And so it's hard to tell exactly what's going on here in terms of the impacts.
Starting point is 00:17:47 I think at the end of the day, the most reliable indicator here is going to be how much money these people are making from programmers and from, you know, subscriptions in general. And it's a lot of money. I think there's definitely indications that people are finding a use for them, and probably a decent amount of that use is for coding, but not exactly for the metric of doing 90% of an existing coder's job. Yeah. There's this phrase that's been being used a lot, which is AI isn't end-to-end, it's middle-to-middle,
Starting point is 00:18:16 and which is meant to imply that, you know, we're going to need a lot more human involvement than some people, you know, typically think. What is your mental model of what AI is going to do for labor markets, either on the sort of lower end or on the higher end, in the next, you know, decade, let's say? Oh, in the next decade, like, on the higher end, I'm definitely like, you know, probably I expect new jobs to be created.
Starting point is 00:18:45 Everyone could still be influencers. But on the higher end, it's like, there are not very good individual things that you can point to where it's very obvious that AI can't automate that job at this point. Now, you could argue, okay, but there are some unknowns. And I think that's, like, pretty reasonable. But those unknowns... sometimes, you know, AI gets up against its limits, and we figure out what they are, and then it later surpasses them. And, I don't know, at the higher end, it definitely seems plausible
Starting point is 00:19:14 that it could just automate basically all of existing jobs, with the exceptions of ones that require manual labor, or that people actually care about being done by a human. It just, like, does not seem at all implausible to me that that could happen, or that it could happen very fast, with the caveat there being, like, there's probably some regulatory pushback if that happens. On the lower end, I don't know, it could just, you know, be a bubble and not have any impact.
Starting point is 00:19:47 The thing I talk about when I'm talking about, like, the interesting scenario to think about, which, I don't know, you know, 20% chance, 30% chance something like this will happen in the next decade, is, like, you know, a 5% increase in unemployment over a very short period of time, like six months, due to AI being released. That's something that I think would have a very substantial impact on the world, both in terms of how people think about AI and sort of how much attention it gets
Starting point is 00:20:13 and seems plausible to me, but, you know, far from guaranteed. Yeah, I think I strongly agree with being just highly uncertain. It seems very plausible to me that you end up with, more or less, you know, this generation actually being exactly where we run out of progress. It would be kind of crazy, but it could happen. And then it's like, oh, okay, everything is very much just generating more jobs for technical people to try to integrate it into doing kind of useful but janky things for all the existing work people do.
Starting point is 00:20:51 The stuff where it kind of becomes a crazy runaway thing, where you can, yeah, really automate large swaths of remote work. I mean, my timelines are, I guess, a bit longer than Yafah's. But yeah, I mean, it seems hard to rule out that something really big happens in a decade; a decade's quite a long time. I think I would be surprised if there were not 5% of jobs that exist now which AI has automated away over the course of the next decade. Honestly, I'd be surprised if it's not 10% of the jobs that exist now, I think.
Starting point is 00:21:24 How fast that happens, and the extent to which those people find other jobs, is something which I don't think I have seen compelling evidence for either way, and probably depends on how fast various things go and exactly what jobs are automated.
Starting point is 00:21:41 I think that 10% over the next 10% of current jobs seems like a pretty reasonable lower, it's not quite my lower bound but you know a pretty reasonable number over the next decade but this might not show up an overall employment number. Yeah. This is interesting. I mean, definitely, like, the kind of, to the extent there is a mainstream economics view of this stuff. It would probably be that automation happens at the level of
Starting point is 00:22:08 tasks rather than occupations. And occupations can, as a result, you know, go down quite a bit. But a lot of the time you're automating these, like, similar tasks across lots of jobs. I think this is compatible with what you're saying. It's just that some jobs get really hit by it. I don't know. I find it, yeah, quite hard to think about. I'm not sure what even the historic base rate for kind of jobs ceasing to exist is. I know there are problems with this, like the historic employment data series. There is actually quite a high, I believe, base rate of just the tasks in a job changing, jobs themselves changing, jobs kind of going away and coming in. So, yeah, even this 5% thing, I don't know what to think: yeah, that would be like
Starting point is 00:22:52 a big effect, or kind of, yeah, that's actually roughly the size of the effect you've already seen from something like software. I don't know. Yeah, probably 5% of jobs that existed before software no longer exist. That seems pretty reasonable. But I'm not confident in this. It's definitely something which, like, I don't know, especially if revenue trends continue, I expect to know a lot more about in a year or two. Probably within the next year, because it will just be the case that, okay, we will have AIs earning enough to be, like, a substantial part of the economy. If it's not showing up in unemployment, then we've learned something about what it's doing.
Starting point is 00:23:33 We've learned that it's able to do this without showing up in unemployment numbers. Or maybe it will show up in unemployment numbers, and we'll see exactly what. There's been, like, some early work looking at, like, indicators of this. There are a lot of things that complicate looking into this, because interest rates also have effects on, like, the sort of things you might care about, or there's just, like, normal churn. Also, it's possible that tech companies, you know, maybe they'll lay off a bunch of programmers so that they have the capital to build data centers, and are those programmers being laid off because of AI? I don't know. Maybe. If you had a kid that was a freshman in college
Starting point is 00:24:12 and they were asking, hey, you know, what should I major in if I want to have a great career, you know, what might you tell them? And if they asked you about, you know, computer science or math or, you know, prompt engineer. Yeah, exactly. Yeah, what would you say? Uh, I mean, I'd probably say not prompt engineer, I think, in general. People get better at using AI; AI is very easy to use. Uh, yeah, I think it's a good question. I think they should probably major in something where... if they're majoring in programming, or computer science, the thing that they should be looking for is not being a person who's going to, like... the skills that are going to be useful are not going to be knowing a programming language. It's going to be more
Starting point is 00:24:54 general purpose skills, ability to, like, work with other people, communication skills, this sort of thing. I don't really know entirely if this points to a particular major. Most majors are probably not majors that are, like, actually relevant for your job. Yeah, I guess I'd sort of be like, well, there's not too much that you can do to plan around the super crazy futures. So I guess go for something that you're passionate about that's useful in the worlds that don't go crazy in that way. I actually think that, yeah, computer science, maths, if you're passionate about them, they're very good because you'll learn interesting things that are valuable in many worlds. But I don't know.
Starting point is 00:25:37 I gave advice to a younger relative recently and they chose to study drama instead. I do think that, you know, one of the things is that if you have a better time in college, that's, like, four years of your life you had a better time during. And at the end of the day, like, you know, if it's a crapshoot which of those things is actually going to give you a better time in the future, planning for the present is a lot easier. I mean, it's definitely becoming really hard to know, right? I remember, like, the prompt engineer thing was obviously a joke
Starting point is 00:26:08 because everyone believed two years ago that that was sort of some sort of viable thing. And obviously, models are phenomenally better at, like, just being great prompters. So obviously, that's kind of, like, one thing that has been happening. It's just really hard to predict what's happening as these models keep getting better. One question that I have related to this is, obviously, code is such a big market and it has had such a big impact. One that I'm very excited about, but still much earlier, I think, is computer use, right? It's basically automating all the digital tasks that you're doing on your computer, and there are very few benchmarks around this,
Starting point is 00:26:45 like whether it's Webberina or the Always World, and you talk a little bit on your report about benchmarks. Curious and, like, what do you think is missing in that space? Like, why we haven't seen yet that moment where, the moment, for example, when Sonnet 3.5 came out or CloudCode or Codex, where we saw significant improvement on coding in general, we haven't had that moment for computer years. What do you think is missing there?
Starting point is 00:27:10 Interesting. I mean, there have been improvements on computer use for sure. I do have, I mean, this, maybe I'm going out on a limb here slightly, but also I do think that there is a sense in which models are a little bit artificially hobbled by their vision capabilities. Like it does seem as if a common pattern you see when you try to get models to do stuff with a GUI is they kind of get a bit confused about manipulating it. And, you know, in a way where it's like, okay,
Starting point is 00:27:39 This is interacting with your general propensity to get confused in long tasks, as you would in, like, difficult long coding problems, but it's kind of exacerbated because, like, you're not able to just easily look back on the thing and see, kind of, oh, I was wrong. You instead go down, like, some awful dead end of just, I'm just going to click this again and again and again. So I think that's part of it. I think there is something here also probably about kind of long-context coherence stuff. Like, those tokens to represent the GUI are pretty big, and then you're filling up your context window as you go with, like, oh yeah, well, I had all of this stuff that's happened before, and you seem to just run into a kind of spiral
Starting point is 00:28:25 of increasingly less sensible outputs. So I feel like these are two of the big things, but I don't know if that answers your question. I found computer use... I don't know, this was the first year I found computer use actually useful. We used ChatGPT agent in our data center research
Starting point is 00:28:43 because a lot of what we have to do is find permits, which are all going to be on janky, county-by-county databases of air permits for, you know, the county that Abilene, Texas is in. And I don't know what databases exist for every county in the U.S. ChatGPT does.
Starting point is 00:29:05 Normal ChatGPT can't search them, because these are, you know, actual user interfaces; you can't just search them with, you know, URLs, because those definitely don't work that well. And it's able to navigate this, such that I can just ask it to find me permits on a data center in a particular city,
Starting point is 00:29:21 and it will come back with air pollution permits and, like, tax abatement documents and all of this stuff that let me learn a huge amount. And this is just, like, because of the improvements we've seen in computer use over the past year or so. I'm excited; yeah, I think it's just going to get better from there, but I've definitely found it starting to get to the point where it's actually useful.
Starting point is 00:29:39 What's your mental model more broadly for what is going to happen to productivity, or just sort of the economy statistically in general? Some people say GDP growth would be, you know, 5%. I think that's the Tyler Cowen view. I think some people would say, no, no, that should get up to 10% growth, or maybe even higher if we truly have AGI in terms of how we understand it.
Starting point is 00:30:07 What's your model of what happens to productivity? I think my kind of baseline guess would be, you know, I forecast out, kind of, if revenue keeps going the way it has. In theory, for it to be worth spending that much on, you know, those chips to do that inference, you should be getting something kind of similar to that value out of those chips by then. So then you could just draw from that, kind of like, oh, okay, extrapolating to 2030. And I think, for the report, I don't know, I calculated it; I think it was on the order of, like, a percent kind of GDP increase. That's in a few years, right? That's not presuming AGI. That's presuming, like,
Starting point is 00:30:49 if Nvidia's revenues keep, like, growing as they sort of previously have, and you assume that they make roughly as much compute from it as before, and so on. If you actually get something... I mean, AGI is, like, yeah, people use it to mean many different things. I think if you actually get something that can do any task that humans can do remotely, then presumably you see a lot of growth. It feels sort of difficult to guess exactly what kind of a lag you're going to see. I think there's reasons to think, oh, well, maybe people will be slow to adopt stuff. How do they learn to trust it? Whatever. There's other reasons to think, well, they're already using these technologies; a lot of it might actually be quicker than for most technologies. And indeed,
Starting point is 00:31:37 adoption's been quicker for LLMs than for many previous technologies. So, yeah, I think it sort of gets hard at that point to model. At some point on our site, we had some rough numbers where it was stuff like, what if you, you know, doubled the virtual labor force? What if you 10x'd it? Whatever. Then you see these, like, crazy GDP boosts. I don't know whether that's the most reasonable way to think about it.
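As a rough illustration of the extrapolation being described, not the report's actual model, the arithmetic looks something like this; every input below is an assumption for illustration:

```python
# Rough sketch of the revenue-extrapolation logic discussed above.
# All inputs are illustrative assumptions, not numbers from the Epoch report.

chip_spend_2025 = 150e9   # annual AI accelerator spend ($, assumed)
growth = 1.5              # ~50%/yr growth, roughly on recent trend (assumed)
years = 5                 # extrapolate 2025 -> 2030
payback = 2.0             # value generated per $1 of chips over their life (assumed)
world_gdp = 110e12        # ~$110T world GDP

chip_spend_2030 = chip_spend_2025 * growth**years
implied_value = chip_spend_2030 * payback

print(f"2030 chip spend:  ${chip_spend_2030 / 1e9:,.0f}B/yr")
print(f"Implied AI value: ${implied_value / 1e9:,.0f}B/yr, "
      f"~{100 * implied_value / world_gdp:.1f}% of world GDP")
```

On these made-up inputs you land at roughly 2% of world GDP by 2030, the same ballpark as the percent-level figure David describes.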
Starting point is 00:32:05 I sort of... I think a lot of it comes down to whether you imagine that, like, yeah, you really get something that can do everything, versus you get something first that can do a meaningful fraction of remote tasks, but maybe can't do, like, an entire bucket of them, and then the bottlenecks bind more. So I guess it's again this thing of, like, my best guess on current trends is this fairly well defined, you know, few percent of GDP in 2030 thing, which is already pretty crazy by economic standards. But then once you go much further, it's like, God, my predictions are just going to be even crazier. I'm reluctant to make them. I am going to be slightly less reluctant. Assuming in the next 10 years we get AI
Starting point is 00:32:53 that is capable of doing any remote job as well as any human, I think, you know, 30% GDP growth seems like a lower bound on something that's reasonable. Assuming you get that, and this is a big assumption, you know, there's a lot going on in that assumption. But assuming that happens, I think you either are going to get, like, 30% GDP growth
Starting point is 00:33:14 or, you know, negative 100% GDP growth because everyone's dead. It's just, like, you know, at the end of the day, it seems like if you have AI that can scale there, you can probably have AI that scales even farther. And right now, I think the, like, economic models
Starting point is 00:33:35 I have seen of what happens if you get this sort of full replacement, where you can automate any job, either show this sort of extremely fast, wild takeoff, or, you know, you have some people
Starting point is 00:33:51 who attempted to do this, and then you look down through the paragraphs and it's like, assuming current levels of capability, assuming AI is as capable as GPT-3. You know, I think the smaller numbers are just, like, you know, either near-term predictions or predictions that aren't looking at, like, the upper end of what sort of capabilities you might see in the next 10 years. Yeah, I mean, it does seem hard to imagine a world where you have this supply of virtual labor that literally can do any stuff that humans can do, and then it doesn't lead to crazy things.
Starting point is 00:34:28 I definitely agree with that. I guess perhaps maybe some sort of, I don't know, heavy regulation situation. There do, yeah, I think there exist worlds in which things don't go crazy after that. It does seem like those worlds are not in an indefinitely stable state, but, you know, it's not impossible. But it does seem like the default there is you either go crazy up or you go crazy down, and it's probably going to be one of those two. If you get to a world where, like, genuinely AI can do any job as well as any human, I think, I don't know, it seems wild to me
Starting point is 00:35:12 to claim that, you know, given that, your default case should be, you know, not super ridiculous changes. It's just like, that's a lot of things that your AI can do right there. And that's like, yeah, it just, like, seems like it should have fundamentally changed the economy in one direction or another. My intuition is a lot of the disagreement... I mean, probably some of it does come down to sort of cached beliefs people already have. But I do also think some of it is that when people talk about, like, oh yeah, AGI, AI that can do any remote job, whatever, even though we feel like we're talking about the same thing, maybe sometimes we're not. I don't know. I've certainly had examples of conversations where it's like, yeah, it can do any remote job. And then they discuss stuff that it can't do, and the stuff that it
Starting point is 00:35:49 can't do. It's like, well, no, like, that's also a remote job. Like, that's the kind of thing people currently do. So I think there is some of this. What do you think? Like, I mean, you talk about benchmarks in your report, but I wonder, like, in 2027, 2028, what are going to be the right benchmarks measuring the progress? More than the economic growth, more the capabilities of the model, like intelligence of the model. Like, we had AlexNet in 2012; obviously that got solved long ago. But that was probably not a measure of AGI by any means. Do you think the same would happen with the current benchmarks we have? So SWE-bench, MMLU, let's say we maxed out on those benchmarks.
Starting point is 00:36:30 What comes after that? How do we measure that is sort of like GDP growth? with these models, is it sort of breakthroughs and science, how do you think is a right measure going forward? Yeah, I mean, I think most of what we have is likely to be solved. And indeed, the examples you gave are pretty close already. Like, I don't know, it is basically solved sweepbenches like possibly close
Starting point is 00:36:55 depends a bit on how ambiguous some of the questions are. There's some details, but it's really getting there. I mean, I think some directions are obvious. You kind of do similar things but harder and a bit better and trying to make them a bit more realistic and people are doing this. There are harder software benchmarks that people have made more of an effort to try to curate and that cover larger tasks, for example. I think there's also some question of kind of budgets involved. I do think there's this kind of thing where like obviously if you just burn money, it doesn't intrinsically make the benchmark better. But probably you are going to see
Starting point is 00:37:33 something where you're just going to have to devote more resources on average to them. Like, if you're trying to prove a sort of higher level of capabilities to a higher standard of proof, probably it's going to involve kind of more effort in developing them. I do also think, though, you're going to see examples of, you know, relatively small, kind of small numbers of things that are just very impressive. And these are also a valuable signal. Like, when you see LLMs being able to do things like, oh, yeah, I just refactored this entire code base
Starting point is 00:38:06 and it was really useful, then this is going to be useful. And even if it's not yet formalized into a benchmark, if you've seen it for yourself, it's going to be kind of useful for you as evidence. And then people are probably going to make benchmarks that cover things like this to try to systematize them. I want to go back to our question on timelines.
Starting point is 00:38:25 And I want to ask you about a few different sort of milestones and get your perspective on timelines there. So first is what is a rough timeline for a major unsolved math problem being solved by AI? I actually wondered, yeah, because you had a few of these that you said, trust the look at. When you say that it solves this, I mean, is this unassisted entirely? It's, or is it kind of a news report or someone tweets that, hey, like, I dump this at GPT and it solved it. And what counts is major? Something that we would all agree.
Starting point is 00:39:00 like a substantive version of it not a you know just an anecdotal you know person describing it hmm
Starting point is 00:39:09 that does it have to solve it on its own yeah let's go with that sure yes honestly oh yeah because I mean there's already cases it seems
Starting point is 00:39:18 of LMS be yeah like people are debating a little bit but mathematicians who seem just where we are saying like wow I used this and it was really helpful
Starting point is 00:39:26 during my proof I would not be surprised to say I solves like a major unself math problem, like the Raymond hypothesis, are similar in the next five years. I'm not going to say that, like, that's my, you know, median case necessarily, but I definitely wouldn't be that surprised. It's like, right now, it doesn't look like math is that hard for AI. It's just like some things turn out to be hard and some things don't, and math is just like one of the domains where it's all seems to work pretty well and where it's most other domains, it's not
Starting point is 00:40:00 at the point where it's, like, useful to a full professor. To the same extent, I think it is for math, or getting very close to for math. Yeah, and also it's, like, very unclear to what extent certain capabilities that it has unusually well might actually turn out to be very, very useful. Like, maybe it'll turn out that there's, like, four papers out there that it knows about, that have obscure results in them,
Starting point is 00:40:24 that have obscure results in them, that when combined, solve some big conjecture, which is the sort of thing that it, like, might be much more feasible to figure out with AI than for a human to figure out, or something similar. There's a lot of uncertainty here, but it just, like, does not currently seem like something that AI is actually going to struggle with. People often make claims about it being, like, this, you know, intuitive deep thing, that it would mean that AI has achieved something, some huge level of intelligence, for it to
Starting point is 00:40:51 solve. I think in practice this is just like, you know, making a piece of art. It turns out AI could just do that before it could do a lot of other things, before it can, you know, remember things for more than a couple of days or whatever. Yeah, it turns out to be farther down the capabilities tree than people might have guessed. Yeah, I think I'm also bullish, though I do think that, yeah, it's one of those things where it's tricky and you really probably do need to define it quite well to hope to get a good forecast on it. Like, I don't know, we've had this experience with benchmarking mathematics:
Starting point is 00:41:29 You know, we got mathematicians to come up with problems that I think aren't as difficult as the kind of problems you're talking about, but nevertheless, they're like, yeah, if AI could solve this, it'd be, like, a big deal for AI progress. It would mean something to me. And then AI has solved them. And usually, their response has been kind of like,
Starting point is 00:41:46 oh, yeah, that updates me a bit. Although, man, when I look at it, I just realize, like, yeah, you can kind of brute force this, you can kind of chew through this, you can get through. And it's a bit like, oh, okay. I mean, what if there's a problem that for humans we consider sort of, oh, this would be quite big?
Starting point is 00:42:02 And then, yeah, AI solved it, whatever. We sort of had this with chess decades ago, right? Like, everyone was thinking of chess as the pinnacle of reasoning, and then computers solved it very well. And everyone, as a result, kind of concluded, oh, well, of course computers can do chess. So, yeah, I don't know.
Starting point is 00:42:23 I suspect that math is quite nice for AI to do. I'm reluctant to go out and assert, like, oh yeah, definitely, AI is going to, like, solve some of the Millennium Prize problems in the next few years. But it would not at all surprise me
Starting point is 00:42:43 if it solves quite impressive-seeming things in the next few years. What about a breakthrough in biology or medicine? And we've already seen some of that with the, what's it called? AlphaFold. Math seems unusually easy for AI,
Starting point is 00:43:02 I'm going to be honest. So to the extent where I'm like, is it going to do the same exact level of like, oh, it on its own did this huge thing? That seems to be a much bigger stretch to me. It definitely seems plausible. But there's a lot of other concerns there where it needs to be able to like actually do experiments
Starting point is 00:43:23 and get data and interact with the real world for a lot of these, in a way that does not need to happen at all for math. In particular... yeah, it's just, they in fact seem farther off. What seems more plausible to me is that we see, like, you know, it become ubiquitous that some tools use AI in some sort of aspect of, like, biology or chemistry
Starting point is 00:43:49 or something useful like that, that, like, certain aspects of it are enhanced. It also is possible that AI will, you know, make incredible strides without humans, but it's harder. Yeah, I think again, it's a bit tricky for where you draw the line. I mean, I think you're not counting tools like AlphaFold, because if you were, then probably you'd argue for that, right?
Starting point is 00:44:12 The inventors shared the Nobel Prize. But, yeah, I mean, I guess there's kind of different directions. In biology, you could have AI being able to predict quite specific things like that, or you could have something that's more general purpose, this so-called, like, co-scientist or whatever they want to call it,
Starting point is 00:44:32 approach, or it's more about, like, oh, it was able to look through the literature and have good ideas, and there's different extents of human involvement. There already seem to be some results where impressive stuff is happening. I've not vetted them enough to really have a sense of, like, would this already count as having satisfied, yeah, the sort of level of impressiveness you're looking for. I sort of assume that finding things that end up being meaningful will happen pretty soon if it hasn't already happened. But then maybe there's a question of kind of, okay, but is it doing as well as human researchers are actually like prioritizing the best few ones to work on?
Starting point is 00:45:17 I think most of these co-scientist results have probably had pretty involved humans prioritizing. So, again, I've not looked enough to say. Lastly, how about for real superintelligence, your definition of super intelligence? I have, I have, I think I am on the record as saying that the median timeline I discussed, or the modal timeline, sorry, I think it's modal, yeah, which might be on the early side compared to where my median is,
Starting point is 00:45:50 is, you know, 2045 was where when I did the podcast with Haimei, we discussed like our forecasting, breaking down, and everything going bananas is the terminology I have used. And that, like, looks like super intelligence. I, you know, I think that it's like the case that if we get AI that can do every single job that a human can do as well as any human could do that job in the near future.
Starting point is 00:46:23 And this means that scaling just works to get things much, much better and probably means that you are not that many steps that you are just a bit more scaling away from getting AI that could do anything that humans, sorry, two things vastly better than humans. Yeah, it gets hard to predict. And I think as well it gets to be one of these things
Starting point is 00:46:48 where the predictions get a bit unmoored from the stuff that you can, like, properly model. Like, my sort of, you know, guesses, my, like, judgmental forecasts to use the fancy term for just kind of can do any remote work tasks, probably have a median of about, like, 20, 25 years. I kind of struggle to imagine a world where that happens, and people are, like, deploying it and doing research. they're not making further progress to being able to do stuff much better. So I guess they have to be like not too much longer after that for some definition of super intelligence. But yeah, all very uncertain and yeah, it seems to break down a bit.
Starting point is 00:47:36 You talk a lot about the progress in data centers, benchmarks, biology. And there was one interesting part that I noticed just in the field, that is robotics is making a lot of progress with, let's say, world models and like the physical space a little But curious on, like, what is your take here? Like, what do you think it's, it seems like a lot of the problems in robotics can be solved purely with imitation learning. You might not need a lot of sort of like breakthroughs in math or whatever. Like, you can just basically learn it from a lot of data.
Starting point is 00:48:03 And I think the last couple of years have been remarkable in robotics and world models overall. Curious on your take a little bit on this, and if you did some kind of research in the space. So we've looked into what sort of amount of compute is actually being used to, like, do these training runs. And what we found is that, like, compute... The training runs that are being used for robotics
Starting point is 00:48:26 are, like, 100 times smaller than the training runs that are being used for, like, frontier models. And so there's a lot of scaling you can do there. I don't think that until, plausibly, until very, very recently, there have been serious attempts to gather data for robotics at a massive scale. It's just the case that you can hire a bunch of people
Starting point is 00:48:47 to move around in motion capture suits if you need to. And there have been a lot of attempts to do that, although I think this might be changing. I think of robotics as mostly a hardware problem. A hardware and, like, economics problem of, if it costs $100,000 to build a robot, then, you know, it's not necessarily better
Starting point is 00:49:05 than a human who could work for $20,000 a year, or a very cheap human in certain countries, or something, like, sort of minimum wage in some countries that you might be able to afford labor for. It's just not obvious to me that there is a software problem here. The hardware, it does seem like unclear, it's very unclear to me how much of a hardware problem is left. In particular, there's certain tasks with robots might be able to do,
Starting point is 00:49:39 but are they actually the tasks that you care about a robot being able to do? If you want your robot to be able to, like, nimbly walk around while lifting up heavy things, and moving fast and react, then that's, that's hard. That's a hardware problem that I don't think they've seen solutions for yet. Yeah, I think my impression roughly matches this. It's sort of, I don't know, people fairly often talk about this distinction between remote work and physical work. I think because there's this perception of robotics progress lagging behind a bit,
Starting point is 00:50:12 and there even is some intuition that maybe, maybe this physical manipulation stuff is actually just harder. But I wouldn't conclude that with much certainty. Like Yafah said, it feels like you'd kind of also want to see, well, okay, what happens if it gets scaled up in a similar way, to even get a sense of, like, oh, okay,
Starting point is 00:50:33 was it actually harder, versus was it just deprioritized? Is there anything we didn't get to that you feel is important that we leave our audience with? We did discuss the data centers, at least. We just did. I'm not sure if there's a good way to leave the audience with that. Yeah, let's get into it. Okay, so you guys just, you know, released a project. Why don't you talk a little bit about what you were trying to achieve there and what you hope people take from it? Yeah, so we took 13 of the largest data centers we can find.
Starting point is 00:51:02 These include a few from each of the major labs in the U.S. And we found permits. We took satellite images, including new satellite images, of all these data centers. We figured out how to determine how much compute is in them based off the cooling infrastructure that they're building, as well as when they're coming online and their future timelines. So we have this, like, real-world data, and it's all available online on our website for free, to give insight into this giant infrastructure buildup that's happening and the pace of it. There are some things about it that surprised me a lot.
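As a rough sketch of how a cooling-based estimate like this can work (the conversion factors below are illustrative assumptions, not Epoch's actual model):

```python
# Toy version of estimating a data center's compute from its cooling permits.
# All conversion factors here are illustrative assumptions.

cooling_capacity_mw = 300   # total chiller capacity from permits (assumed)
overhead = 1.3              # PUE-style ratio of total power to IT power (assumed)
watts_per_gpu = 1500        # per-accelerator draw incl. server overhead (assumed)
flops_per_gpu = 1e15        # ~1e15 FLOP/s for an H100-class chip (assumed)

it_power_w = cooling_capacity_mw * 1e6 / overhead
gpus = it_power_w / watts_per_gpu
print(f"Estimated accelerators: {gpus:,.0f}")
print(f"Estimated compute:      {gpus * flops_per_gpu:.2e} FLOP/s")
```

As described above, the real analysis also layers in satellite imagery and construction timelines, but the core idea is the same: cooling capacity bounds power, and power bounds compute.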
Starting point is 00:51:33 There are some things about it that surprised me a lot. For instance, we learned that the most likely candidate to have the first gigawatt-scale data center is Anthropic, which would not have been my pick. But the Anthropic-Amazon Project Rainier development in New Carlisle seems on track to come online in January, followed shortly thereafter by Colossus 2.
Starting point is 00:51:55 we also learned a lot about what the largest concrete plans are rather than just like marketing plans some people will throw around numbers but the one we found that's actually seriously underway and has permits and is setting up the electrical infrastructure for is one by Microsoft which is going to be used by Open AI, at least in part, in Mount Pleasant.
Starting point is 00:52:18 They're calling it Microsoft Fairwater. And that one's going to be use a size, use not quite as much power as New York City, but I think more than half. What's stopping us from significantly increasing the cluster size? Is it the cost? Is it supply lead times? Are there any other engineering breakthroughs required power? I think that people are approximately,
Starting point is 00:52:44 wrong that there's something stopping us. We are scaling up approximately as fast as there is money to scale up. I suppose they could want all of the clusters to exist literally today, but they're scaling up really quite fast. You're seeing these data centers which are using enormous amounts of power; I think the one I mentioned for Anthropic and Amazon is using nearly as much power as the state capital of Indiana, which is where it's located.
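For a sense of scale, here is a minimal sketch converting a gigawatt-scale campus into household equivalents; the per-household draw is an assumed round number, not a figure from the episode:

```python
# How a gigawatt-scale campus compares to residential electricity use.
# Assumption: an average US household draws ~1.2 kW on average
# (roughly 10,500 kWh per year).

datacenter_gw = 1.0
avg_household_kw = 1.2  # assumed

households = datacenter_gw * 1_000_000 / avg_household_kw
print(f"~{households:,.0f} households")  # ~833,333 households
```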
Starting point is 00:53:20 to build this thing that's using as much power as a city. I think that plausibly, you know, you don't want to buy chips now, you want to wait for there to be better chips. I think that people think of, there's a lot of noise
Starting point is 00:53:36 about things being difficult and scaling up. And I think this is because people are having to spend a little bit more than they would ordinarily have to spend. You can't use the ordinary sort of power pipeline, which is designed to deliver this affordable infrastructure at a slow pace. You have to, you know, buy things that you wouldn't ordinarily have to buy and spend more than you would ordinarily have to spend, but not buy enough to slow it down. All of these things pale in comparison to the cost of your GPUs.
Starting point is 00:54:05 So my actual takeaway from a lot of this has been, oh, we're not having too much trouble scaling up but just like these plans are going really quite fast and it's not obvious that people would actually have the finances and desire to do them faster when people are talking about energy as a as a as a as a as a major potential bottleneck or i was having to you know increase our capabilities significantly you're you're not worried that that's going to be a sort of durable a sustainable bottleneck that that's not i think people like complaining because they can't just use the traditional tug into the grid for cheap, affordable power four years down the line pipeline.
Starting point is 00:54:44 At the end of the day, there are expensive technologies that exist right now. You could pay for solar power plus batteries. This is fairly small lead times. It might cost twice as much as normal power, but that's still way less than your GPUs, so you're going to do it if you have to.
Starting point is 00:55:03 And you see people doing these sort of emergency things that cost them a bit more, you know, starting up their data center, A common thing we see is people starting their data centers before their data centers are connected to the grid. I think Abilene was an example. X-A-I Colossus 1 is a prominent example of just finding ways around this that are expensive.
Starting point is 00:55:22 And you complain about it because, you know, it'd be nice if you could do the cheaper way. And no one's used to having to do it this expensive way. At the end of the day, though, it's just like does not... There seem to be enough solutions, especially if you are as willing to pay as people are in AI that I don't really expect it to be a significant bottleneck. Maybe let's close with this.
Starting point is 00:55:44 If these systems get as powerful as we're discussing, as we're discussing, I'm curious to how the sort of political system is going to respond. I'm curious if you're sympathetic to the Ashton Brenner view that there's some potential nationalization that occurs. But how do you expect governments to respond? It's kind of remarkable of how not in the political discourse, course, it is given how powerful it is already. I'm curious how you think about that. I expect, so the thing I, calling back to what I mentioned earlier, this concept of, you know, the potential
Starting point is 00:56:17 for a 5% unemployment increase in, like, six months: I think that the public's reaction to this will determine a lot. There will be very, very strong feelings about AI once this happens. I think there will be, you know, very strong consensus on what to do, on things that we don't normally think of as things people are considering. You know, when this happened with COVID, there was a several-trillion-dollar stimulus package passed in a matter of weeks to days. It was breakneck speed. I don't know what that will look like for AI, but I think it's like everything else in AI: exponential, which means it will pass from the point where people sort of care about it to where people really care about it quite fast, if things keep going. I just don't know where we're going to end up.
Starting point is 00:57:06 I just expect wherever we end up there will be it will look like oh everyone suddenly agrees that why that's to do this certain thing which we would have considered
Starting point is 00:57:16 unimaginable a year ago and I don't know what that will look like it might look like nationalization it might look like pausing it might look like I don't know going faster guaranteeing better unemployment benefits
Starting point is 00:57:29 who knows I just think there's going to be some sort of like strong response of some sort, and it's going to happen very fast. Yeah, I mean, you know, you make the point that governments are maybe less interested than you'd expect now, but I mean, the current impacts, I think, aren't really that large. I feel like the attention is getting larger, but it's not the day I as of right now is that powerful. And yet, governments are already talking about it a lot, right? And you have people
Starting point is 00:57:59 meeting with heads of state from various hardware manufacturers and AI companies and countries talking about their AI strategy, stuff like this. So I feel clearly country, national governments are going to be quite involved. It's just a question of how. And yeah, I also am a bit unclear on that. I think that right now we've seen this thing in revenue and finances where it's been doubling or tripling every year. And my default assumption is that attention that AI gets from policymakers and governments is going to follow a similar trend where it will double and triple every year.
Starting point is 00:58:36 This means that in the future, if trends continue, there will be a huge amount of attention and it means that right now there's a lot more attention than last year. But you don't suddenly skip from very little attention to all of the attention, although you do move quite,
Starting point is 00:58:50 we are moving, I think, quite fast. I think we made enough predictions that we'll have to have you back next year and at the end of the end of the year and check in and see where we're at and then make it for next year. David, thank you so much for coming to the podcast.
Starting point is 00:59:04 Thank you. Thank you. Thanks so much for having us. Thanks for listening to this episode of the A16Z podcast. If you like this episode, be sure to like, comment, subscribe, leave us a rating or review
Starting point is 00:59:17 and share it with your friends and family. For more episodes, go to YouTube, Apple Podcast, and Spotify. Follow us on X at A16Z and subscribe to our Substack at A16Z.substack.com. Thanks again for listening, and I'll see you in the next episode. As a reminder, the content here is for informational purposes only. Should not be taken as legal business, tax, or investment advice,
Starting point is 00:59:40 or be used to evaluate any investment or security and is not directed at any investors or potential investors in any A16Z fund. Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see A16Z.com forward slash disclosures.
