a16z Podcast - Network Effects, AI Costs, and the Future of Consumer Investing with Anish Acharya on The Kevin Rose Show

Episode Date: April 19, 2026

This episode originally aired on The Kevin Rose Show. Kevin Rose speaks with Anish Acharya, general partner at a16z, about how AI is rewriting the rules of consumer software, the defensibility of network effects in a world where anyone can spin up an app in 48 hours, and why the real threat to consumer founders may be the cost of inference, not competition. They also discuss model pricing, the future of the four-day work week, and peptides. Resources: Follow Anish on X: https://x.com/illscience | Follow Kevin on X: https://x.com/kevinrose | Stay updated: find a16z on YouTube, X, and LinkedIn; listen to the a16z Show on Spotify and Apple Podcasts; follow our host: https://twitter.com/eriktorenberg Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Transcript
Starting point is 00:00:00 The idea guys are sort of having a moment. In fact, it's funny, like, I'm like looking for new ideas to work on. You know, in the old days, somebody would give you their app idea and you'd be like, oh, here we go again. And now I'm like, cool, how about I build it for you? What has changed for me is, you know, someone that dropped out of computer science because I just couldn't keep up with everyone else. I always had the creative ideas, but my ADHD was too bad that I just couldn't remember all the syntax.
Starting point is 00:00:23 And I was like thumbing through manuals back in the day and, like, you know, trying to, like, I had the C++ Bible or whatever they called it, you know? When you're a child, nobody tells you, Kevin, you're bad at drawing or you're good at painting. The thing that I think we need more than UBI, if we ever get to that place, is universal basic purpose. And the way you actually get the French Revolution is less that people don't have enough money. That's part of it. And more that people don't have something important to work on.
Starting point is 00:00:46 Like, everybody's got to feel like they're on a hero's journey. I was talking to my wife and I was like, we're getting in a lot of arguments about X, Y, and Z. What if we just had a conversation with a model that built out these frameworks for us, that understands what we like, how much we care about certain things? And then we just have our models go and, like, duke it out. And she looks at me like this is one of the worst ideas she's ever heard. I'm just like, shit.
Starting point is 00:01:08 This episode originally aired on The Kevin Rose Show. When anyone can build a Slack competitor in a weekend, what actually makes a consumer startup worth backing? For decades, software moats meant engineering effort. When Kevin Systrom and Mike Krieger were hand-coding Instagram's filters, copying them cost you months. That window is now 48 hours. But Anish argues the moat was never really the code.
Starting point is 00:01:35 When Hipstamatic and a dozen others launched alongside Instagram, it still wasn't obvious which would run away until it was too late. That pattern may hold, but the cost structure has shifted. One founder told Anish he'd need $25 million just to reach 100,000 monthly actives, because AI inference isn't free. So the real tension isn't whether great consumer products get built. It's whether venture economics can survive a world where the best companies skip early rounds altogether. Kevin Rose speaks with Anish Acharya, general partner at a16z, focused on
Starting point is 00:02:09 consumer investing. Anish, we're back. Nothing has changed at all, so I don't know what we're going to talk about. Exactly. How long ago did we do that episode together? Four or five months? Everything has changed. That's so amazing. Dude, great to have you. Real quick, a primer for everyone. You're a partner, general partner, at Andreessen Horowitz. That's right. You're focused primarily on consumer investing. That's right. Anything else to mention on that front? I mean, I focus on consumer, but I'm a programmer.
Starting point is 00:02:36 I mean, I grew up the same way that you grew up. We worked at Google together. We worked at Google together. Yeah, yeah. So I've got consumer as my area of investment focus, but I've got a ton of personal interest outside of that. Love it. All right. We got lots to talk about.
Starting point is 00:02:47 We figured this would be a fun little variety show, cover all the things, AI. Yeah, yeah, yeah. Let's get to it. Get this out fast. Get people, you know, thinking about what's coming. You've got a list. I've got a list. You want to start first?
Starting point is 00:02:57 Well, I want to hear more about what you've been working on because I experienced one of your products this morning. You've been obsessed with the models. You're just like, you can't stop programming. Yeah. I mean, tell me what you're working on and then tell me how much you think it's real productivity versus productivity porn. Yeah.
Starting point is 00:03:13 Well, I'd be curious to see how you define productivity porn. But in terms of what has changed for me is, you know, someone that dropped out of computer science because I just couldn't keep up with everyone else. I always had the creative ideas, but my ADHD was too bad that I just couldn't remember all the syntax, and I was like thumbing through manuals back in the day and shit, like, you know, trying to like, I had like the C++ Bible or whatever they called it, you know. And so now there's none of that. And I finally realized that six months ago I was going and manually looking at the code and thinking to myself,
Starting point is 00:03:47 like, oh, I should just at least look at it to make sure I kind of see what it's doing so I understand best practices. And now I just realize I don't ever have to look at the code again. Yes. Because it doesn't matter what bugs I'm creating right now. If I discover them, I can squash them. And the next model is going to be better, and it'll rewrite any bad functions or anything that I have going on. Yes. There is a world, what's so weird right now is you're taking a look at all these companies that are SaaS businesses. They're realizing that the moat is no longer there. Like I can spin up anything, my own personal CRM app, a Slack competitor. Anything that has definable outcomes
Starting point is 00:04:25 for software. You know, I'm not making gene editing software, but anything like a SaaS-like business, very easy to replicate. It's easy to say, I want to go build my own workout app, right? And little things like that, are those businesses? No. But like the stuff I'm working on
Starting point is 00:04:43 is personal, passionate areas where I believe I want to put my own time and attention. And for me, I've always cared about social news, you know, starting Digg back in the day. And I believe that we're entering into a really interesting time where information is going to be platform and app agnostic, so that it can traverse and meet you where you're at. So if you care about, you know, the latest AI coding tools, I'm just going to pick that up.
Starting point is 00:05:15 If you want it in a podcast form when you're coming into work, custom tailored to you. If you want it in a video form that you tune into on an Apple TV app, and it says, Good morning, AI Nation, and it welcomes you with two hosts that you want. If you want it in a daily newsletter. If you want it inside of your Claude Code experience, or your Claude Co-work experience. If you want it in your Obsidian, so there's like a little markdown file that you read every day. Yes.
Starting point is 00:05:36 It doesn't matter. Because all the connective tissue is finally happening, information can freely flow amongst these different diverse platforms, which I think is beautiful, and I'm super excited about it. You know, it's so interesting. So the Obsidian founder is one of the most interesting people to follow on X. He's a nice guy too. Oh, really?
Starting point is 00:05:54 I've met him. Yeah, Kepano, right? Yeah. So he's got this whole theory of how apps are ephemeral. Sort of, apps and intelligence are these ephemeral things, but files are permanent. And there's something very elegant and beautiful, and it sort of speaks to what you're describing, which is a file is just the basic unit of information. And then it can get expressed as a video, as a pod, as all of these different things.
Starting point is 00:06:14 Right. And the app with which you consume this information may actually change from time to time. So I don't know if that's right, but something about that direction feels spiritually interesting. Yeah. Well, for me, what I'm doing is, when I work with Co-work or work with any of these apps, I'm always saying, write this in markdown. Right.
Starting point is 00:06:29 And I want it in markdown because markdown for me is kind of like the basic, lowest atomic unit of what is possible and portable, you know? And so I know in the future, anything will be able to ingest markdown with ease. And so for me, that's kind of how I'm thinking about my file structure of all things. You know, it's interesting.
Starting point is 00:06:45 If you hear Mark talk about a bunch of the design choices they made when they were designing Netscape, one of the most controversial choices they made with HTTP was to actually have it be plain text on the wire, because at the time it was like, well, that's not secure, that's not safe, that's dangerous. How can you just have plain text on the wire? So all the wire protocols were encrypted, and that was considered the best practice.
Starting point is 00:07:05 And I think Mark and Ben and the team said, let's actually just make it plain text because it'll be easier to work with, easier to debug. And that turned out to be a brilliant design decision because it drove a ton of HTTP adoption. So there's something that sort of mirrors that in what we're seeing right now. All these file formats, all the cloud storage, it's just too much heaviness. And we're in this moment of beautiful interoperability, you know, mashups, all the things that you guys aspired to in the late 2000s. It never really came to fruition, and now we're getting it.
Starting point is 00:07:34 Yeah, absolutely. And I think that there's something about having this, like, if I put something inside a database right now, I'm kind of locked in there, even though I could tell any agent to go and read the tables and understand the schema and have at it. But for me, I just want ultimate flexibility. And the context windows are big enough and search is decent enough. And we see the compound engineering and some of these other protocols where
Starting point is 00:07:57 markdown works just fine for now. It depends. Obviously, if you need to do large data sets where you need to do vector embeddings and you need to actually find similar documents, there's other bigger tools and bigger hammers to use in those cases. Potentially, but I'd say the raw information should be stored in the most interoperable fashion possible. I mean, one of the great things about OpenClaw is that all the memories are just flat files, right? They're markdown files, I believe. So now you've got a memory file per day, plus you've got persistent memory, which is like preferences and things that don't change about you. Right.
Starting point is 00:08:27 And it just means you can use it in a thousand different ways. You can try different memory architectures. Right. Your information is yours in a way that it hasn't been for 15 or 20 years. Yeah. Are you an OpenClaw user? So I love OpenClaw. It's very, very interesting.
Starting point is 00:08:41 I guess I haven't been as at the edge of OpenClaw as others. I'm just spending so much more time in Claude Code and Codex. Yeah. For some reason, that form of creative direction is so much more satisfying to me than the kind of productivity 10x you get from OpenClaw. It's incredibly important and really cool architecturally.
Starting point is 00:09:00 I'm just not as obsessed with it as Chris and some of our other friends. Yeah, yeah. How about you? I'm in the same boat where I obviously installed it. I got a Mac Mini like everybody else did, and I wanted to start playing with it. And a little hesitant in the sense
Starting point is 00:09:12 that I have sensitive data that I don't want out there, so I create all new accounts and all that. Yeah, same. But I have yet to see the one use case where someone has come to me and be like, ah, this is the game changer for me. You know, like, it does this. And I'm like, wow, I want to do X.
Starting point is 00:09:26 Like, that sounds awesome. Yeah. And oftentimes, it's just little tiny things, little tiny productivity boosts here or there. And I've got one friend that it's calling with little updates and stuff, and he can talk to it on speakerphone. Yeah. And those are fantastic demos. They're fun. Yeah.
Starting point is 00:09:42 But I just don't see it yet. And granted, it's early days. But directionally, I think it's right. I think we're going to have a personal assistant likely just built into the larger models. Yeah. That just has its own storage. I mean, we're seeing this now with these agents that Anthropic rolled out. They now have their own storage in their own containers, where they can keep these things over
Starting point is 00:10:02 and make them durable over long periods of time. Yeah, Claude agents. Yeah, Claude agents, yeah. I mean, I think a lot of the magic of OpenClaw is just the ergonomics of it. And I'll give you a specific example. So when you use a website or a mobile app, you obviously expect it to be perfectly synchronous. Like you press a button, there's an immediate response. That's your user expectation from 20 years of using apps. When you actually text someone, you don't expect me to reply
Starting point is 00:10:24 right away. You're like, oh, Anish might reply in five minutes, or it might take him two hours or whatever. The magic of OpenClaw, I think, is because it's in a mobile messaging channel, your sort of subtle expectation is it won't reply right away. So it can go off and do something more ambitious. It can do it without feeling like this delayed experience. The other way that shows up,
Starting point is 00:10:42 if you look at the model switching settings in ChatGPT, for example, it's trying to find this balance between instant and deep thinking, which is a little bit of a user expectation contour that they're trying to meet, versus just being like, hey, I'm going to go do some stuff, and I'll make the tradeoffs, and I'll let you know when I'm ready. Right, right. And OpenClaw does that really well.
Starting point is 00:11:02 So I don't know that there's any one thing. It's just the whole way it's put together, I think, is very elegant. Yeah. Back to your original question, though, around, you know, this coding, what we're up to, what we're building. Yeah. You're seeing a lot of entrepreneurs that, and you called it, what did you call it, something porn? Productivity porn. Productivity porn.
Starting point is 00:11:19 And it's because everyone, you, me, everyone in this room, actually, has produced some type of application now. Yeah. Some of it is trash. Some of it is going to be awesome. I don't know how much of it is defensible. So as someone who is doing consumer investing, what scares you?
Starting point is 00:11:39 Well, first of all, I think we need to get past this whole mindset about defensibility and moats. It all feels so heavy. Like, there's this beautiful creative direction in what you described, in how you don't have to thumb through manuals. And now the cost of doing something that fails is zero, right? So you get to try every idea.
Starting point is 00:11:57 And there's so much information in exploration. So I say productivity porn to be provocative, but I don't actually think it's that. I think that we have this new way of learning, which is just trying things. And, you know, the idea guys are sort of having a moment. In fact, it's funny, like, I'm like looking for new ideas to work on. You know, in the old days, somebody would give you their app idea and you'd be like, oh, here we go again. And now I'm like, cool, how about I build it for you?
Starting point is 00:12:19 Right. So I think there's this beautiful exploration that we're seeing that's never been possible before, and there's a lot of value in that. And then when it comes to consumer products, I still think that consumer moats are as good as gold, right? Things like network effects, they're not trivially reproducible by models.
Starting point is 00:12:34 Even if you can reproduce the software. It's not like Instagram has got a software moat. Yeah, exactly. Yes, but okay, so the incumbents are locked down. That's fine. Yes, I'm not moving from Instagram anytime soon. But when you're sitting there and you see a new consumer product,
Starting point is 00:12:50 you know, five years ago. And you're like, damn, that's a good idea. You write a check for $10 million, you bet on the entrepreneur, you wait for the next five or seven years, and you see what happens, right? Now knowing that, damn, that's a good idea, could be replicated by anyone else if it is a good idea. Yeah.
Starting point is 00:13:06 Does that freak you out at all? I don't think so, Kevin, because even 10 years ago, the good ideas weren't obviously good at the time. Like what Systrom was up to at Instagram wasn't an obviously good idea until the network really started to take. And then it was too late to reproduce the software. So I don't think that the moat has ever been,
Starting point is 00:13:24 it's like really hard to reproduce the software. I think the moat is in part, every consumer idea is embarrassing to work on until it's obvious. You know, it's like it seems trivial, it doesn't sound important, VCs don't know what you're talking about, your family doesn't know why you're working on it. You just feel this pull in a direction. And ideally, when it works, the network runs away
Starting point is 00:13:42 before people can replicate it. But we live in a time where information is propagated, like, instantaneously, right? And so an OpenClaw comes out, or whatever they called it when they first launched. I don't remember now. And you see 30 different clones of it, right? And then all of a sudden, everyone's moving in different directions.
Starting point is 00:14:01 And I just feel like that wasn't the case. It wasn't as easy to reproduce something like that when you see a hit as it is today. The time has shrunk dramatically. Like, for example, if I launched Instagram today. I remember, he goes by Mike now, but back in the day, he went by Mikey Krieger. He and Kevin were working on this,
Starting point is 00:14:21 and, you know, it took them, I remember talking to them about developing the filters, the basic filters for Instagram. That was real work, real engineering effort, right? Yeah. And so even if you saw Instagram
Starting point is 00:14:33 sort of hit escape velocity, if you were a startup, you had to go try and copy them and catch up to them, it was a few months of work. Yeah. Now it might be 48 hours of work, you know? Yes.
Starting point is 00:14:44 Even at the time, there was so much noise in the ecosystem, right? There was Burbn, later Instagram. There was Hipstamatic.
Starting point is 00:15:00 I think that if you were to build Insta today, it would still be non-obvious until it was obvious in a way that it was too late. Right. That's fair. Because there's lots of things, lots of examples today of like Wordle, for example. Yeah. You and I can clone that within 20 minutes. Yeah. Yeah.
Starting point is 00:15:13 It's like, but we're not going to take over Wordle, because they have that built-in base. Well, even OpenClaw, right, which was originally called Clawdbot, I think. Yeah, Clawdbot. So even though there's been a thousand forks of it, it's literally open source, OpenClaw itself still dominates conversation. Yeah. Right? We're not talking about the 10 OpenClaws that have equal market share. We're talking about OpenClaw. Yeah, yeah. So there are these compounding effects that are sometimes, you know, less intellectually satisfying than network effects, software moats, whatever. Yeah. The thing I worry the most about with consumer software is that the costs are too high to have a free model, really, for very long. Right. I was talking to Signal, who's launching
Starting point is 00:15:50 a product in the next couple of weeks, a consumer product, and he was saying, hey, dude, I need to raise like $25 million if I want to have 100,000 MAUs. And by the way, the money will go quickly. So the fact that you don't have this sort of zero marginal cost of distribution benefit, which has really
Starting point is 00:16:06 advantaged consumer founders in the past, I think, is a major drag on the ability of consumer founders to scale things to a lot of people. So let's talk about some of the actual models on how people are building these things. And you guys are investors in pretty much all the bigs, right? Like
Starting point is 00:16:25 Anthropic and OpenAI. Are you in all the major model providers? We're not in all of them, but we're in Mistral, we're in OpenAI. Anthropic or no? We're not in Anthropic. Okay. So when you think about the moment that Anthropic is having right now. Yeah. And you have OpenAI, which has been largely, I mean, not totally quiet, but they're cooking. Yeah, definitely cooking. Obviously, this isn't winner take all. Or is it, if someone hits
Starting point is 00:17:00 escape velocity. Yeah. Well, I think it's, you know, it's easy to overrotate on some of the conversations on X. If you sort of zoom out and you look at OpenAI, first of all, they've got, I don't know, 950 million weekly actives. Like, it's the biggest AI consumer product by many orders of magnitude. It really is. And it's the first product to hit that kind of scale in that short a time period in the history of technology. So they've got a sort of unassailable advantage in consumer. I think that's one. Yes. And then if you just look at the model quality, I think we both experience this: the Codex models are excellent. I think the Codex harness is a little behind Claude Code, and I hope they'll fix that. But that's a hill climb that they can do. On model quality, they're right there. And we're hearing that the new OpenAI model, which is going to hopefully come in
Starting point is 00:17:36 the next few weeks, will be just as good as the sort of super secret, you know, mysterious model that's too dangerous to release. Do you buy that? Too dangerous to release? I don't buy it. I mean, maybe. But look, I think that if you actually have a model like that, there's like four independent things that I think about. So first of all, let's even assume that it's six months ahead of our geopolitical rivals. The first thing you probably do is use it
Starting point is 00:17:58 for offensive capabilities, right? To hack others. The second thing you do is lock down all your defenses. Exactly. I think the third question, which Mark pointed out, and I think Stratechery wrote about this first, is do they even have the compute for it?
Starting point is 00:18:12 Like it might just be that they have a GPU shortage, which is also why they sort of nerfed Opus 4-6. I don't know if you followed this. They took down the default thinking level from high to medium. People are complaining that it's nerfed. It's nerfed because they may be
Starting point is 00:18:24 trying to free up GPU capacity. So it may be an infra problem. And the last bit is just like the aura farming of them having a model that's too dangerous for the world to handle. Like, how can you have
Starting point is 00:18:35 better marketing than that? There's no better marketing than that. So it's unclear to me that it's actually too dangerous. There are a lot of other reasons why they might actually be holding it back. There are rumors that they haven't coded anything internally
Starting point is 00:18:44 since January, I believe. Yeah. Do you buy that? I mean, Boris has said it a few times. It's like, we don't write any code on Claude Code. Claude Code writes Claude Code. So I do buy it. And like, how can that not be true?
Starting point is 00:18:55 It's your and my lived experience as well, right? Yeah, I just wonder, if you have a model of those capabilities, like, one of the things I've been really impressed about with Anthropic is just the rate of shipping. It's like every freaking week, something new drops. Every day, dude, multiple things. Yeah. And so you have to imagine, if you're the first one to have the model that is, let's just
Starting point is 00:19:14 call it 30, 40% better than what's out there today. Yeah. You have to be like, okay, let's make our tools like dope as hell as fast as possible. So you use it internally and get all that tooling and just ship. It feels like that's what's happening, because of the rate of shipping.
Starting point is 00:19:29 You're an OpenAI investor. I think you're an Anthropic investor as well. Okay, so what's your kind of view? I guess you're the most even-handed, having been in both. Well, I mean, I think that with OpenAI, listen, it's hard to count anyone out at this point. Even Google. They've got their
Starting point is 00:19:46 tensor processing units, they're training their own models. They've been eerily quiet, and those folks are just sharp as hell. Like, don't fuck with Google. They're going to come back and get you. So I just have a hard time believing there's going to be, at least from someone that's doing software engineering with the models, any allegiance to anyone. Yeah. Like, I will go to ChatGPT tomorrow, or Gemini, if it's a better model. I don't care. Yeah. Like, my credit card just transfers to whatever model, and then it starts working again, right? Yeah. So on the consumer side, you know, when I talk to family members
Starting point is 00:20:22 and people that aren't deep in tech like we are, it's all ChatGPT, because that is the household name. That's right. And it does everything that they want it to do. And it's not like they're seeing anything where we're like, oh, that Co-work feature
Starting point is 00:20:36 from Claude is so good. They don't realize that yet. Yeah, exactly. Yeah, you mean they're not using remote control on my code? Exactly. They're not connecting their phone and leaving their laptop open.
Starting point is 00:20:47 Karpathy talked about this this week, and it's so funny, because he's like, look, the people who have used ChatGPT in a somewhat cursory way, they're like, I don't know about this AI thing. Maybe it's overhyped. Like, you know, maybe the Instagram audience, they sort of don't quite get it. And for those of us who are really deep, we're using it to make software and we're seeing all the model progress.
Starting point is 00:21:06 Right. And we're not talking to each other. Right. You know? Exactly. Like every time I talk to someone that's not in this space, I try to, like, red pill them, and I sit them down. And I say, let me show you what this can do for you. Right.
Starting point is 00:21:19 And then within 20 minutes, they're like, oh, my God, I need to be spending more time here. Yes. But it is a lift to get people up to that. I think it's a multi-year lift. We live in a world where we're trying everything, but the average consumer is going to try a couple new things per year, you know? Yeah, yeah, yeah. So it's interesting. What about model pricing? Like these Max plans we're seeing for $200, where we eat all the credits, and all of a sudden we're out of credits, and people are complaining about that.
Starting point is 00:21:45 It's like, do we see a $10,000 plan? Do we see a $5,000 plan? Yeah. I think it barbells. So, I'd love your view on this, but it feels like the most powerful models are going to be more expensive than ever, and yet the price of a token on GPT-4o is down 100x since the model was released. And what was that, 18 months ago? And when GPT-4o was released, it was this incredibly cutting edge.
Starting point is 00:22:08 It had this beautiful personality. Like, it seemed inconceivable that it would be cheap, and now it's very inexpensive on a per token basis. plus you have open source, right? So it feels like there'll be inflationary effects on the edge models and maybe they'll even be unavailable as APIs and there'll be huge deflationary effects on everything that's not cutting edge.
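The deflationary side of that barbell is easy to make concrete with back-of-the-envelope arithmetic. The numbers below are hypothetical, chosen purely to illustrate what a 100x per-token price drop does to a workload's monthly bill:

```python
def monthly_token_cost(tokens_per_day: float, price_per_million: float) -> float:
    """Rough monthly spend (30 days) for a given daily token volume."""
    return tokens_per_day * 30 * price_per_million / 1_000_000

# Hypothetical background job: 5M tokens/day of non-reasoning work.
at_launch = monthly_token_cost(5_000_000, 10.00)  # $10 per 1M tokens -> $1,500/mo
after_100x = monthly_token_cost(5_000_000, 0.10)  # 100x cheaper      -> ~$15/mo
```

The same arithmetic run the other way shows why frontier pricing can still barbell upward: hold the per-token price at the premium tier and scale the token volume instead.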
Starting point is 00:22:26 Yeah. What do you think? Yeah, I mean, listen, even when Google announced these models that now run on phones, on device, there are so many instances where a lighter weight model will do most of what you need done. Yeah.
Starting point is 00:22:40 And so I love that that exists, because there is a ton of stuff that I'm building right now which is kind of like backfill and data fetching, just very non-reasoning tasks. Yeah. That six months ago, a year ago, would have cost me thousands of dollars to do what I'm doing.
Starting point is 00:22:58 And now I'm like, oh, that was $12. You know? So I do love that that is happening. I think that the API pricing makes sense. The weird thing for me is, I believe there will be a higher tier. We're already starting to see this, where you can pay per pull request and pay Anthropic to review it with a better, more finely tuned model for those
Starting point is 00:23:21 bug squashing. It is a little bit weird. It is a little bit of that Spider-Man pointing at Spider-Man situation, where it's like, I coded it for you, but I'm also going to fix the bugs for you. For like $12 a bug. Exactly. But the OpenAI security stuff is actually quite good as well. It's found a bunch of vulnerabilities that we would have never found. Amazing. So, yeah, I don't know. But what's up on your... Well, so actually, let me ask you your question, though. When you think about your budget, I'm assuming that you don't have like a ceiling on how much you spend on the models. Right. But how much, if you had to make a trade off, would you trade off against, say, your entertainment budget? Like, would you skip a movie or a bottle of wine because you wanted to spend the money on tokens? Yes, 100%. Well, but I'm building towards something. Are you, though? How many of your things are important and have to be durable versus are satisfying to work on?
Starting point is 00:24:07 80% are important and durable. Okay. And the other 20% are just for fun, just because, like, I just want to see it exist. Do you know which is which at the outset? Yes. Okay. Okay. 100%. Well, largely because, like, you know, I stepped back from my venture role, from my investing role there, because it's too fun to build right now. Like, why not build?
Starting point is 00:24:22 And so when I'm tackling a bigger idea, the hope is that hundreds of thousands of people will use said product at the end of the day. Yeah. And if it is not that, then it's just going to be a little tiny thing that I think should exist. And I'll put a little bit of time and effort into it.
Starting point is 00:24:38 And if there's a thousand people that love it, like, God bless. Like, at the end of the day, I don't want to optimize for just financial outcome or usage. I want to optimize for: do I enjoy doing this? Do I enjoy building this product? You know, I've seen some of your vibe-coded stuff. It is largely because you enjoy doing that. It's just fun to work on. And that's why I think your language is interesting. You didn't say it's too important of a time. You said it's too fun of a time. Yes. Right. So there's so much beauty in the process right now. And that's why I think
Starting point is 00:25:08 that it will eat into what would otherwise be called entertainment budgets. Because there's just, we're discovering this part of ourselves that has been asleep. Right. Both as technologists, but even as humans, right? Like when you're a child, nobody tells you, Kevin, you're bad at drawing or you're good at painting. Right. Whatever.
Starting point is 00:25:22 You're actually just good at anything you're interested in. And we're able to get back to that place again for the first time in a long time. I know. And it meets you where you're at, too. It's so awesome. Like, the number of things that I now know versus a year ago in terms of, like, how to fix, like, faucets and shit. And, like, the stuff where I'm pointing at: I open ChatGPT,
Starting point is 00:25:40 I turn on video mode. I'm like, hey, how do I do this? What does this mean? Yes. And I'm up-leveling my knowledge across an entire spectrum of different things. I've got to tell you this crazy story. So there's a guy we met who came in and presented to us yesterday named Mike. So Mike is a personal injury attorney who won the global Anthropic Claude Code hackathon.
Starting point is 00:25:57 Okay. So he started using Claude Code to solve a bunch of his problems, because he's, like, a startup guy, sort of running a startup law firm, and he couldn't afford a paralegal. So he used Claude Code to basically start coding a bunch of things that he'd otherwise have to hire someone for. And then he went so deep into the tooling. He came in, he showed us the software he's built. It's extraordinary. It's like what a whole venture-backed SaaS company would otherwise do.
Starting point is 00:26:21 He's so deep in it. He's using Wispr Flow. He's talking about slash commands. He's like, okay, I've got a slash loop that's always looking for security fixes. And this is not the guy you'd expect. He's like a jujitsu, square-jawed sort of personal injury attorney. Right. So I think there's just, this is my white pill.
Starting point is 00:26:38 There's going to be so many beautiful stories of, like, digital homesteading or something where these unexpected people build these really incredible things based on personal need or interest. Yeah, and I think this is where I had a company in a similar vein where they were spending
Starting point is 00:26:54 something like 80K a month on SaaS subscriptions. And the founder was like, well, I'm good enough to where I can build this in-house, kind of what I want, a little bit more custom-tailored to me. It took him about a month and a half, and then he just cut that entire SaaS budget out. So here's
Starting point is 00:27:10 my only counterpoint on the kind of SaaS-is-over thing, which is, yes, conceptually, maybe you could build a CRM. But have you built a CRM? I have, actually. Really? Jesus. All right, fine. Well, I guess SaaS is over then.
Starting point is 00:27:23 I was going to say it's so fucking boring to work on all that software. Like, does anyone really want to build? I wanted a really simple personal CRM, so I just built it. It was a bad example. Okay, fine, fine, fine. Okay, how about payroll? No. No.
Starting point is 00:27:36 No. No, but Richard over there, he just built some tax accounting software the other day. Richard, come on, man. All right. So what else is on your list? I know you had some really spicy, fun topics to get into. Yeah, dude. Well, I wanted to actually get your take on, you know, obviously there's the stuff that happened with Sam's house. There's a bunch of attacks on his house, which is, you know, super tragic, because whatever you think of AI politically, like, you know, he's a human running a company who has the best of intentions. What's your kind of view on the conversation we're having as a society about this new technology? I like how OpenAI came out and said, we're going to have to heavily tax us
Starting point is 00:28:11 and provide some type of safety net fund, and we have to work that into the legislation at some point so that when we do displace all these jobs, which is coming, in my belief, you know, it may be five, seven, ten years from now, but I believe that we're going to have to have some of that flow back to the public in some way.
Starting point is 00:28:35 And course-correct me if I'm not answering your question correctly, but traditionally, when we talked about universal basic income, you know, 10 years ago, I was like, well, how are they going to fund that? Like, how are we just going to, like, print money to give people money? And now it's clear to me that the bigs continue to dominate and disrupt and just crush jobs across every single vertical. Yeah.
Starting point is 00:28:58 We're going to need some way for those profits to flow back to everyday consumers, just everyday humans, so that they can live and thrive in a world where they might have been displaced by job loss. You know, it's interesting. So one of my good friends is an executive at Google,
Starting point is 00:29:14 and I talked to him about this and said, hey, how much are you guys actually laying off people? Because, obviously, you know, you're at the edge of the new models, plus you have all the TPUs, so presumably you have all the tokens that you want and need internally. And he said, look, we actually haven't laid off anyone.
Starting point is 00:29:29 We're just ripping through our backlog. So we're doing, like, 100X more than we ever did before. So it's interesting. A lot of people have made this point of, like, hey, where are all the jobs going to be? But I think there's no ceiling on human desire. There's no ceiling on company ambition. And so far we're seeing it as, like, companies doing more, versus, hey, let's do the same amount with less people. Right. So we'll have to see. I also think, on the UBI point, like, the thing that I think we need more than UBI, if we ever get to that place, is universal basic purpose. And the way you actually get the French Revolution is less that people don't have enough
Starting point is 00:30:01 money, though that's a part of it, and more that people don't have something important to work on. Like, everybody's got to feel like they're on a hero's journey. Right. And even if it's a little, I mean, look, arguably we both have fake jobs right now. So even if it's a little bit contrived, I think that we have to make sure people have a sense of purpose in addition to the means to kind of go achieve it. Where do you think that's going to come from? Okay. Again, I think there's no ceiling on human desire.
Starting point is 00:30:23 I think people are going to want vacation homes on Mars. I think, you know, companies are going to be doing more ambitious things. We're going to have 100x more skyscrapers. Like, the history of all technology is that it increases productivity and it also increases human desire. You've heard this thing of, like, luxuries become commodities, right? Therapy was considered a weird luxury 50 years ago. Yeah.
Starting point is 00:30:43 Now therapy is an expectation. Right. Right. With Instagram culture... bottle service used to be something that was unheard of 20 years ago. Right. And now it's only bottle service. Nobody wants to go to the club unless they're getting bottle service.
Starting point is 00:30:54 So I just think we're underestimating how much human desire will continue to grow with productivity. There's the example of, like, people used to get dressed up and put ties on and suits on just for their flight, because it was such a luxurious thing. I mean, I wouldn't mind if we went back to that. Yeah. I'm big on manners. But yes, exactly. So I don't know that it's, like, the end of jobs. And, you know, the other thing I'd love your take on, since I know you're big on meditation and sort of self-awareness, is it feels like this technology is a tool to explore our emotional, spiritual self in a way that no technology ever has been. Like, the fact that you can talk to it like a human and you can explore things.
Starting point is 00:31:29 and you can get into flow state when you're programming. It has all of these attributes that are just so different from, you know, spreadsheets and word processors and even the creative tools of the past, which are very point and click. Do you think that's overstating the case? I mean, what's your take as a practitioner? Listen, I think that the one thing that's exciting to me,
Starting point is 00:31:49 well, two things. Certainly human connection, real friendships, spending time together. If we can have more time to do those types of activities. Yeah. And we do lean into that versus just being hooked on our phones. Yes. I think it will lead to better and stronger emotional states, better family units at home, like a whole slew of benefits from that side. There are certain pieces that, you know... I'm not going to want a digital meditation instructor, in terms of, like, you know, I want a real human teaching me. It might come in
Starting point is 00:32:21 digital form, but I want a real human that's behind that, not just some AI. Therapy, maybe not so much. Maybe I do like the idea of having a digital therapist. We'll see. So it's not all doom and gloom. I hope that we start to shift to celebrating craft and celebrating that if you find a niche that you are into, then you're the best at that particular thing, that we can hopefully, you know, put some value behind that. And Japan is quite good at this. I mean, they've done it for 700 years, right? Yeah. You'll find these small little artisans. Yes. They're the best.
Starting point is 00:33:02 And there are people, like I've gone to coffee shops in Tokyo where it's like one guy doing aged coffee beans and there's like a line down the street to get in. Yeah. And it's just like, he doesn't want to become a billionaire. Isn't that beautiful?
Starting point is 00:33:12 He's just happy to do this thing that he's found this very small little niche that people love. Yeah. You know, I hope there's more of that. What's your version of that? Oh, probably Japanese woodworking, a little bit more meditation.
Starting point is 00:33:25 Okay. More time with friends and family. Getting off of technology. I found that I need breaks. We mentioned how it is so fun to do coding right now. And because it is so fun, I can literally wake up and kind of come out of my trance of coding and be like, wow, I've been on the computer for 11 hours today.
Starting point is 00:33:46 It doesn't feel like 11 hours. But that can't be healthy, right? So I need to figure out what is that balance. Am I really, truly finding the time to hit the gym? Am I finding the time to do the things that are going to benefit my body over the long term? Am I staying connected with my kids, my family? Yeah.
Starting point is 00:34:01 You know, I think those are very important questions to ask. And I worry a little bit that AI makes it more of a trap to kind of get sucked in and avoid some of it. I don't know. Like, maybe it's a barbell, you know? So maybe you spend this really productive, satisfying, fulfilling time on your computer. Yeah. And then you don't have any of the overhead that you typically had, right?
Starting point is 00:34:21 All the meetings. You don't go to meetings. Your agent goes to meetings for you, right? And makes decisions and handles conflict and sells customers and all this other stuff. And the rest of the time you're, you know, practicing meditation, hanging with your kids, touching grass. Right. So I think there's the potential to have this, like, really productive, satisfying technology time. And then really satisfying, sort of fulfilling human connection time. Why can't that be the
Starting point is 00:34:46 shape of our life going forward? No, I hear you. And also, I've danced around the edges of this in a dangerous way. So I was talking to my wife and I was like, okay, listen, we're getting in a lot of arguments about X, Y, and Z. What if we just had a conversation with a model that built out these frameworks for us, that understands what we like, how much we care about certain things? And then we just have our models go and, like, duke it out and be like, actually, tonight we go out to dinner, because historically Daria has had the right to go out to dinner, or whatever. I'm just making this up. And she's like... and I explain this whole thing. I'm super proud of the idea. And I think it's
Starting point is 00:35:25 going to, like, save us a ton of time and effort. And she looks at me, and she's like, this is one of the worst ideas. I'm just like, shit, you know? But so I worry about outsourcing too much of that negotiation and conversation and just, you know, saying, like, oh, the agents are just going to handle it. That said, around certain things like policy or conflict resolution between nations and things like that, there might be a little bit more of a steady hand in having two agents discussing things and getting to a resolution faster. I don't know. It's interesting. So, well, I'm not sure about the family setting, but if you think of the corporate setting, you know, there's this old saying, like, the best way to compete with an incumbent is to pick a product area that lives between two VPs. Because the idea is the VPs hate each other so much, they won't possibly be able to collaborate to actually, like, handle the competition in this area of crossover. And that's an area in which, I think, look, if each VP had an agent that went in, I mean, we'll have to see, and I guess there'll be agent politics, corporate politics, but you could just imagine conflict resolution feeling so much less personal.
Starting point is 00:36:24 I think so many issues at work come from this, like, misattribution of professional decisions to personal feelings, where it's like, hey, the boss went against me because they don't like me. When really, it's because they just had a professional judgment that was in a different direction. Right, right. And I hope that this technology helps to lessen that. Yeah, absolutely. What is the coolest thing you've seen in the last few weeks? Because you must see so much stuff in terms of pitches. And what has you excited to be an investor still? Because for me, early stage seems like,
Starting point is 00:36:55 I'm not saying that you're squarely in early stage. Yeah, yeah. Because obviously your fund does a whole gamut. Early stage seems pretty dead to me. Yeah. Just because it's like people, entrepreneurs are just going to skip that round altogether. Well, I think that there's venture exciting
Starting point is 00:37:09 and then there's sort of like human exciting. Yeah. And when I tell you a story like the one I told you about this guy Mike, who built this incredible software and he's, like, you know, Claude-pilled and all this other stuff, that's one of the most hopeful things I've seen in a long time. I mean, we should put Mike on billboards. I actually pair that with a lot of the stuff that Elon's doing.
Starting point is 00:37:27 Like when you see the effort to get to the moon, when you see SpaceX, when you see Waymo, like, oh my God, that's what we meant when we said technology. Right, right, right. How about you? I mean, what do you think? No, I'm in the same boat. I worry that the kind of early-stage, like, SaaS-y world, like little bite-sized pieces of consumer stuff, is going to be largely dead. Hardware is still very capital intensive to get off the ground. I'm very bullish on actually devices that
Starting point is 00:37:56 encourage us to kind of disconnect and have more shared reality together. Have you seen Tin Can at all? Yeah, my son has one. Yeah, it's amazing. Yeah, so my... You should describe a Tin Can. So, for people that don't know,
Starting point is 00:38:12 Tin Can is this little device that is a phone, like an old-school phone, that you plug in via USB-C, and it connects to the Wi-Fi. And then, as the parent, you can tell the app which kids they can call. Yeah. And so it was so funny, because my daughter picks up, she listens to it, she goes, someone's calling us. And she hands me the phone, and it's a dial tone.
Starting point is 00:38:34 And she'd never heard a dial tone before in her life. Yeah. Why would you? And she was like, someone's calling. It's ringing. It's ringing. And I'm just like, no, that's what you had to listen for before you could dial the numbers, you know? Incredible.
Starting point is 00:38:45 But now, when that rings across the room, they get so excited. They don't know who's calling. And they go running to pick it up as fast as they possibly can. But I want Tin Can for grownups. It would be so cool to have, like, a little phone that is only for your friends. And you don't know who's going to call, but you're going to pick it up because you know it's a trusted friend. Yeah. So it's little bits of technology like that that I think are just going to provide these little moments of delight and real connection that I'm excited for. It's very funny. So my son has one as well. And when he gets a call, he geeks out, right? Yeah.
Starting point is 00:39:17 He races, darts over to the phone, he picks it up, and it's his bro calling. And they'll be like, oh, what's up, what's up? And they'll talk for maybe 20 seconds, and they won't know what to talk about. Right. And I don't know if that's, like, the kids don't know how to use this technology, or that's just how men talk to each other. Yeah. So they'll just be on the phone for, like, 10 minutes, just sort of, like,
Starting point is 00:39:33 just grunting a little bit. Yeah, yeah. And then they'll hang up. And how was your call? Oh, it was great. It was really good. It's amazing. Yeah, but he's still young, though, too. Like, it's not like they have, like,
Starting point is 00:39:40 a ton to talk about. Yeah. It's such a funny little microcosm of, like, male relationships, you know. Are you looking at more hardware these days? Yeah, we're open to everything, actually, right now. One of the most interesting, coolest pieces of hardware I saw recently (we're not an investor) is this company Pocket.
Starting point is 00:39:56 You know, a bunch of companies are doing things like this, these sort of, like, passive recorder devices. I know you, Sandbar, I think you guys have invested in one that you're excited about. Yeah. It's just really elegant hardware design. And I also just love that founders are being ambitious enough to try hardware again. Hardware has all of these, like, you know,
Starting point is 00:40:14 It's just a challenging place to operate, but founder ambition is higher than it's ever been. So, yeah, we're definitely open to hardware. Yeah, that's awesome. Yeah, I've been looking at the always-on recording stuff, and for me it's been a little bit like, eh, I don't know if I want that in my life. I know, I know, I know. Yeah. Still, it's just cool that people are building things in those directions, you know.
Starting point is 00:40:30 Yeah. I also think that you're too skeptical on consumer. Like, consumer feels like dead, dead, dead, dead. Oh, my God, it's on fire. Right. Yeah, I guess the question is, I think what the reason I am a little bit skeptical on it is, I think it'll be dead, dead, oh, my God, it's on fire. And then when it's on fire, it's on fire with three engineers.
Starting point is 00:40:48 And they've, you know, hit something. And all of a sudden, the first round is done at $500 million pre. And I worry that what we're going to see is just a decimation of all the early-stage funds, because they'll skip all those rounds altogether and go straight to B or C. Yeah. In which case, Andreessen will do fine. But, like, all these other funds are just going to be completely wiped out. So I have a history question for you. When you started Digg, what did the environment feel like in terms of consumer?
Starting point is 00:41:16 Like, did it feel like a good or a bad time? It was dead. Okay. Yeah, it was largely dead because we were coming out of the dot-com bust. Yeah. And then there was a technology enabler, which was Ajax, that allowed for dynamic content to be refreshed, and that enabled a new wave of kind of consumer apps to be built on top of it. But it wasn't obvious at the time when you guys started tinkering. No.
Starting point is 00:41:35 But that it was going to be, like, global social media, you know, I mean... that it would be regulated, that you would potentially influence elections... did you have any sense of how important it would be? No, there was none of that.
Starting point is 00:41:48 It was just a time where people said, what if? And then they tried it. Like, Flickr came out, and it was the first time people actually shared their photos publicly. Right.
Starting point is 00:41:58 Because that was a weird thing. You're like, oh, there's a picture with my family. Why would I ever want to put that online? Yes. Someone might see that. Those are my private photos. Photos were always kept in albums in your house.
Starting point is 00:42:08 Yes. You know, and, like, some family would come over and look at them, you know? Yes, yes, yes. And so there was a lot of that going on. Like, you know, Foursquare was like, oh, I'm checking in somewhere. Is it weird to tell people where I am right now?
Starting point is 00:42:20 Yes. You know, like, Twitter was like, what am I up to? You know, like there was a lot of strange things. Yeah. And we're seeing kind of very similar experiments happening right now. And I do believe we'll see, you know, multi-billion-dollar companies on the consumer side come out of this wave. Yeah.
Starting point is 00:42:37 I have a feeling it's going to be structured very differently than it was in the past. Maybe. I mean, I don't know about venture economics, but I feel like if there's a big new set of consumer categories, venture will be fine and consumers will benefit. And I love that you brought up the photo-sharing thing, because I feel like anytime you have new culture and new technology, and they both happen at the same time, it's amazing. Like, think of the photos thing. Photos went from hyper-private, to being public, to being public and seeming so important that you would never want to delete them, to Snap coming around and being like, no, now they're
Starting point is 00:43:07 disappearing. Right. Which felt like, whoa, why would I want it to disappear? Right. Ten years earlier you're like, whoa, why would I want to share it? Right. But then it went back, with Instagram saying, I actually want my photos to look cooler and stick around. Yes, yes. And so it's kind of all... And now, if I shared an AI-generated photo of myself with you, you'd be like, this is slop. Yeah. So there are these cycles. Location is another fascinating one. There was this whole moral panic around privacy and security and will people share location, and location-based ads, like this whole topic. Now, if you look at it, Gen Z, they share their location with friends, family, exes, like everyone, right?
Starting point is 00:43:41 They have zero expectation of location privacy. There's no bigger sort of distorting effect on culture than AI right now, right? Think of even AI companions and friendships and psychosis and productivity porn; there are so many topics here that are going to drive culture, not just technology. How can that not result in interesting new consumer products? Yeah, I agree. So let's wrap things up. I have one final question, but if you have anything you want to touch on before you go,
Starting point is 00:44:06 let me know. I know you got a hard out soon. I've got a little bit of time. I mean, I, uh, you got a good one. Let's hear it. Okay, so let's like explore this and we can cut it if it ends up not being interesting. But we were sitting here getting ready for the pod and we were sort of trying to get our mics on the right way. And you were saying, oh, you've got to wrap the cord around the mic in a certain way. Yeah. So there's this sort of knack for getting things done, small things and big things in this world.
Starting point is 00:44:34 And it's just in people's heads, right? Nobody has sort of written that kind of thing down. You've called that dark data, right? So what's your kind of view on that type of information? Does that have value? Does that have more or less value than traditional information that's written down? And how does that show up in the future? Yeah, I mean, I think that is a huge opportunity.
Starting point is 00:44:54 I don't know how you monetize that, but it's an opportunity to unearth it, save it, and then work it into the models that we're using. You had a great example off-mic where we were talking about, you know, the one person that knows how to adjust a carburetor in a certain way for a certain model of car. That's right. And if that person dies, that knowledge is then lost, right? That's right. And so I believe that this exists kind of all around us, in terms of little tiny micro pieces of dark data that we just have yet to work in and record: little tiny preferences and nuances that just never make their way into these models. Yeah. So, you know, for example, this table right here. One of the things I wanted to do is I wanted to get a very Charlie Rose-esque table. He basically was very famous for doing his interviews at a table where he could, like, lean in and talk to someone.
Starting point is 00:45:48 Yeah, yeah. And there were no stats or dimensions for his table anywhere in the models. Right. And so I provided the model with, like, five different screenshots. Right. And then set it on the heaviest working mode. And then it calculated the size of the coffee cup. Okay.
Starting point is 00:46:07 And then estimated the dimensions of the actual table. Wow. And so I could actually buy a similar table in terms of size so I could fit up to four guests sitting around this table, as Charlie did. Yes. And that was an example of something that the data was not in the model, but I had to back my way into it, right? Yeah.
Starting point is 00:46:24 And so I just think there's a lot of that. Proprietary data sets, I think, are really interesting. Yeah. In some sense, there's probably a market to go raise a fund just to acquire proprietary data to resell it. This is what Mercor and others are doing, you know? They're getting human data generation, some of which is like coding and math and all that. But I imagine in the future, some of which will be this type of tacit knowledge you're describing, you know? Yeah.
Starting point is 00:46:48 So that to me is very interesting. It's not a business I would want to go build, but certainly I think we're going to get a lot more of that unearthed as robotics and other sensors come online. Yeah. I mean, I think Google Maps was probably the earliest example of going and mapping this kind of dark data. Yeah. But this is where, I think, when you talk about the layoffs, you're like, oh, none of my buddies are getting laid off at Google; if anything, they're just accelerating. But that doesn't account for the models getting better.
Starting point is 00:47:15 Because when the models are, you know, three times as performant as they are today, or five times, or ten times, then you really don't need as many humans. Your backlog quickly dries up. Yeah. And, you know, one person can orchestrate, you know, 20 projects or 100. Yeah, but dude, then Sundar says, like, I want to be the first $100 trillion company. You know, like, what is the... Sundar's not going to say, you know what, we should, like, keep our market cap, but just, like, be
Starting point is 00:47:40 more profitable. No CEO thinks that way, right? They want to shoot for the moon. So I think Sundar's ambition will grow at least as fast as, or faster than, model progress. Do you think government regulation steps in there and says you can't run everything? Because there is a world where it's just OpenAI, Anthropic, and Google, and they're kind of just everything to everyone.
Starting point is 00:48:01 But it's just not the world we're living in, right? Like, meanwhile, here on Earth, there are hundreds of open-source models, there are all kinds of distilled models. Token prices for what were cutting-edge models three months ago are going down. Yeah. There are edge devices actually having models embedded in them.
Starting point is 00:48:17 So that could happen, but I think the case for sort of outlier, you know, N-of-1 outcomes was much stronger in social networks than it is in foundation models. We just don't see it in foundation models. You said it yourself, right? I've got my credit card. I'll use Opus one day and Codex the next day.
Starting point is 00:48:33 Right, right. Yeah, that's fair. All right. Any predictions for the next six months? Let's hear the craziest, wildest prediction that you have. I mean, weirder, better, faster, man. Weirder, better, faster. So, okay. Give me a hard prediction.
Starting point is 00:48:49 OpenAI IPO. I mean, I'm not supposed to... Look, I think all these... fuck, look, I think all the big model companies are going to go public. Yeah. You think that's pretty soon? I mean, I don't know exactly, but I think that it's going to happen in the next 12 months. Yeah, with no inside information. That would be my prediction.
Starting point is 00:49:06 I agree. I think that, look, okay, maybe, like, my strangest prediction is that I think we're going to get to the four-day work week in a uniquely American way. And let me tell you what I mean by that. So I'll tell you a little bit of a story. If you think back to the conversation we were having about diet and nutrition 30 or 40 years ago, there were two schools of thought, right? There was the European school of thought, which is we should eat less sugar. There was the American school of thought, which is we should eat less fat. And, you know, maybe they were sort of influenced by industrial lobbies and
Starting point is 00:49:35 whatever else. So we ran the experiment for 40 years, and Europeans stayed pretty slim, and Americans got really fat, right? Now, probably what Americans should have done, and a lot of people tried to make this better by saying, hey, we need to just change our habits as a society and, you know, moderate our sugar intake a little bit and all that. Of course, that didn't happen. Right. Instead, Americans did this uniquely American thing, which is we invented Ozempic, GLP-1s. We came up with this breakthrough new technology to solve this problem that we really should have just solved with, like, perhaps different choices as a society and as individuals.
Starting point is 00:50:05 I think the same way the Europeans have aspired to the four-day workweek, and they're like, this is better for human flourishing and families, and I think that's right. But Americans have to do it the American way, which is we create this incredible new technology that makes us so much more productive that we can actually reduce our workload from five days to four days and maybe even three and a half days. So there's some sort of a parallel here, and I think we're heading to a four-day work week. That is how the 20% productivity increase
Starting point is 00:50:31 is going to show up, in my view. And that's my strangest prediction. Okay. My strongest prediction is that in the next three years, we will probably discover and produce and create and get into humans another 50 to 100 peptides that alter our longevity in a meaningful way.
Starting point is 00:50:53 Can you tell me about the peptides thing? Give me the quick take. So peptides are just short chains of amino acids. They're very easy to produce in the lab. And they kind of, like, think of them as upstream regulators. They can then trickle down and cause other things to happen in the body. So they're not as powerful as, like, a straight-up hormone. They're not as heavy as a hammer.
Starting point is 00:51:13 They just kind of, like, cause things to happen. Yeah. And so the most famous peptide is insulin. The GLP-1s are peptides, but there are 20 other candidates that are out there that I've seen people experimenting with that are producing wild results.
Starting point is 00:51:31 there's one called the Wolverine stack where literally I have had friends that have had back injuries and knee pain and elbow pain I had tennis elbow really bad in my which I didn't play tennis which fucking kills me but I had tennis elbow with my right arm really bad and I took the Wolverine
Starting point is 00:51:48 protocol for like two weeks and I don't I feel amaze. Really? Yeah. Literally. Yes. And so I believe and there's a lot of people working in the space that they're going to identify these peptides will be able to easily compound them
Starting point is 00:52:02 and create them. And the FDA is backed off a little bit. They were banning them for a while and now they actually have approved quite a few. Okay. One for muscle mass in AIDS patients. and so they can do that, give them this peptide, and they'll gain more muscle mass.
Starting point is 00:52:23 There's another one that actually, I believe it's Eli Lilly that is this three agonist, GLP1, that you don't lose muscle with now, which is really interesting. That's in the pipeline. It's phase three trials right now. Okay. So as AI allows us to do a lot of this modeling
Starting point is 00:52:39 in more real time, and hopefully candidate discovery, I think peptides is just going to be a fascinating space to watch. So, okay. So I have two questions. One is, what's the basic stack for somebody playing with peptides? Like, what would you recommend? What's the right amount of risk to take? I would say go listen to Huberman's two podcasts.
Starting point is 00:52:57 He has two podcasts on peptides. Yeah. And they go in depth about each one. Some of them are for sleep. Some of them, you know, they really increase your deep sleep and you sleep better. I have one buddy that says he sleeps like he's 18 again. He just, like, wakes up fully rested due to these peptides. I have one buddy that's removed all of his visceral fat, and it's confirmed by a DEXA scan,
Starting point is 00:53:17 which we know visceral fat is like really the bad type of fat that you don't want to have around your organs. And yeah, there's almost a peptide for everything now, which is just really insane. And these are largely, you got to be very careful, though, because as you will talk about, there's grain markets, there's contaminants that can get in there. You got to work with like a high-quality company when you're thinking about the stuff. So my second follow-up question is, you know, if we radically extend human life, do you think we're sort of emotionally, spiritually, societally prepared for people to live to 100? 120, 150, 250, 200.
Starting point is 00:53:48 I don't think so. No. Sadly, not in America. We just don't have the support infrastructure for the elderly nor the care, which is a huge bummer. But what if we controlled for that and said people live, you know,
Starting point is 00:53:58 their full most flourishing 25-year-old lives? I agree. The thing you don't want to do is have, extend dementia for an extra 20 years because of taking peptides. You know, it's like, so that's the tricky part. But, you know, there are candidates like clotho
Starting point is 00:54:13 and some of these other. That's a protein that looks like it's in the pipeline. for a potential treatment for a lot of dementia. So it's a very exciting time on the science front. Yeah. So that to me is, and then I would say my other prediction by the end of the year is that I just don't think we'll ever look at code again.
Starting point is 00:54:30 Yeah. I think that Elon talked about how, why are we actually writing languages? Yeah. Like writing coding languages. It should just, if the AI is smart enough to understand the inner workings of every single CPU,
Starting point is 00:54:42 GPU, and every piece of silicon that it's going to touch, it can just compile directly to binary and just work, which is wild because then that rewrites all of how we think about infrastructure in general. You don't have to pick databases anymore. You don't have to think about any of these things. That, I believe, is going to be in the next five years, there won't be weird questions like,
Starting point is 00:55:03 should I be using a graph database for this? Or, you know, it'll just kind of be solved, which is strange. It'll be interesting to see what happens when agents make these choices on our behalf. Right. Because there'll be technology with tradeoffs. Presumably, they'll be marketing, but not to humans, to agents. Because ultimately, it's going to be codex that chooses the database. So how does that marketing show up?
Starting point is 00:55:24 What are the... It's very wild. I don't think it'll have to be marketing. I think it'll just go spin up in instance, run it, benchmark it for its use case, come back and be like, yeah, I just test that out. This is the best one. And then actually, Chimot had a really interesting point about it. A lot of people were calling bullshit on, but I kind of believe,
Starting point is 00:55:40 which is he said that these agents will just shop around. they'll go and they'll be like, okay, that graph database that was serving us well six months ago is now more performant on this type of database on this infrastructure for this cost, I'm just going to do the migration. And so you have to imagine
Starting point is 00:55:55 that just happens like seamlessly at some point. Dude, it's so cool. So when I worked at Amazon when I was a kid, which was 2003, delivering packages? Yeah, exactly. Were you? Going door to door.
Starting point is 00:56:04 No, hell no. I was a programmer. I was a programmer. So Jeff used to say this thing. He said a lot of things which were, you know, forward-looking. But one was we should assume that we're going to live in a world where our buyers have perfect information, right?
Starting point is 00:56:19 And if you think of, like, how much of industry and commerce is based on either buyer laziness, apathy, or this asymmetric sort of information between sellers and buyers? What if that goes away? Yes. What if, like, you have to win by product quality always 100%. There's zero value in brand. There's zero value in marketing. Like, isn't that just a better world to be in? 100%.
Starting point is 00:56:39 And so many of these areas, I think financial services is another big one. And we were talking about Hero before the show, right, which is a company that was acquired by OpenAI yesterday, really talented founder Ethan, who started digits prior with this sort of consumer fintech mission that many of us had, which is like, hey, why can't we just make this system efficient and work for the everyday consumer, right? And it's like, why should they have to know how the credit system works and how to play the game and how to do all of these things? And I think with agents, a lot of that just goes away. Your agent goes and makes great financial choices on your behalf, given your constraints. Yes. And I think that's going to happen in a lot of product categories and it'll be a much better sort of world in society as a result.
Starting point is 00:57:15 Yeah, absolutely. Well, always a pleasure to have you on the show. Thank you so much, Kevin. Glad we did this. Come back. Let's just keep doing these, man. I love these little roundtables. It's so fun.
Starting point is 00:57:25 Especially because every four to five weeks, everything changes all over again. I mean, we could do this once a week. I know. We run out of time. A brand new topics talking about. Awesome. Thanks for having me, man. Good seeing you, brother.
Starting point is 00:57:35 Cool. Thanks for listening to this episode of the A16Z podcast. If you like this episode, be sure to like, comment, subscribe, leave us a rating or review, and share it with your friends and family. For more episodes, go to YouTube, Apple Podcasts, and Spotify. Follow us on X, A16Z, and subscribe to our substack at A16Z.com. Thanks again for listening, and I'll see you in the next episode. This information is for educational purposes only and is not a recommendation to buy, hold,
Starting point is 00:58:08 or sell any investment or financial product. Podcast has been produced by a third party and may include pay promotional advertisements, other company references, and individuals unaffiliated with A16Z. Such advertisements, companies, and individuals are not endorsed by AH Capital Management LLC, A16Z, or any of its affiliates. Information is from sources deemed reliable on the date of publication, but A16Z does not guarantee its accuracy.