TBPN Live - Could AI Takeover by 2027, David Perell, Morgan Housel, Avlok Kohli, Virgílio "V" Bento, Kirsten Green, Shawn Wang, Ryan Hudson

Episode Date: April 4, 2025

TBPN.com is made possible by:
Ramp - https://ramp.com
Eight Sleep - https://eightsleep.com/tbpn
Wander - https://wander.com/tbpn
Public - https://public.com
AdQuick - https://adquick.com
Bezel - https://getbezel.com
Numeral - https://www.numeralhq.com

Follow TBPN:
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://youtube.com/@technologybrotherspod?si=lpk53xTE9WBEcIjV

(04:22) - Could AI Takeover by 2027?
(24:35) - Morgan Housel
(59:08) - Ryan Hudson
(01:30:12) - Shawn Wang
(01:58:43) - Kirsten Green
(02:26:38) - David Perell
(02:57:33) - Virgílio "V" Bento
(03:12:38) - Avlok Kohli

Transcript
Starting point is 00:00:00 You're watching TBPN. It is Friday, April 4th, 2025. We are live from the Temple of Technology. The Fortress of Finance. The capital of capital. Oddly, that has not gotten old. We've said that dozens of times now, but I still love it every time. We have a bunch of news, of course. Update on the Rippling-Deel situation. The Deel spy news broke this week. We talked about that. Polymarket, which is a sponsor of the channel, by the way, has the stats on the prediction market for
Starting point is 00:00:35 will the Deel CEO be out of the company in April? It's currently at a 71% chance. When the market launched a day or two ago, it was around 30%. So that is very high. At first, I thought this was kind of crazy. It's not that you wouldn't kind of expect a leadership transition after something as dramatic as this,
Starting point is 00:00:55 but it just feels really quick. But I think the polymarket kind of benefits from the fact that this market went live and this story broke on April 2nd. And so it has a full month. And when you think about what could happen in 30 days, that is enough time to have a lot of board meetings, a lot of people having conversations,
Starting point is 00:01:14 even potentially think about who in the organization. Deal-making between Rippling and Deel. Exactly, exactly. Figure something out. How do you make it right? And I think part of that. And just what's expected. What are the expected damages? And then, you know, Deel has not commented on this at
Starting point is 00:01:28 all. We haven't heard their side of the story, and maybe there is some, you know, what's the opposite of a smoking gun? Smoke back in the gun. Put the smoke back in the gun. No, but the challenge. Alright John, the James Bond theme's over. I love this thing, but it's such a fidget spinner. Anyway, Polymarket really pulls no punches. Deel CEO jailed in 2025 is also a market, down to 11%, though much smaller. I don't see that happening. You're taking the under on that. And I mean, obviously some stuff went terribly wrong, but hopefully everything gets resolved quickly. Anyway, the big news in kind of my world was that Dwarkesh interview. He dropped a three-hour interview with some
Starting point is 00:02:18 folks who put out a paper about AI potentially taking over in 2027. Introducing AI 2027, a deeply researched scenario forecast I wrote alongside SlateStar Codex and Daniel Cocotalo, I cannot pronounce that last name, I'm sorry Daniel. I really enjoyed you on Dorkesh, but haven't learned how to pronounce your name. Anyway, another assay has hit the towers. Another AI acceleration super intelligence essay has dropped.
Starting point is 00:02:51 And this one, what's exciting about it is anybody can go to AI-2027 and get this interactive reading experience. They basically built out a dashboard that basically gives you sort of a visual of what's happening as the timeline progresses. So they basically go month by month all the way through 2027 and beyond.
Starting point is 00:03:16 I think we should probably just read through a little bit of their summary to give you the story. And then you can go hear the whole debate. And what I love about what Darkirk Hesch did is that he, I mean, it seems like, he talked to us about this, like he's very super intelligence-pilled, he's very AGI-pilled, and yet he did a fantastic job playing the role of the debater.
Starting point is 00:03:38 And it didn't feel like he was just, oh yeah, yeah, of course, we agree that the multiplier on R&D should be four or five X. Of course, you know, like even if he doesn't agree with it, he did a really great job of like forcing them to really- Playing the skeptic. Playing the skeptic, yeah. It was great.
Starting point is 00:03:53 It was a role I hadn't seen him do all the time, but I thought it was a fantastic format. And of course, so we'll take you through the paper and what is, what they predict. So we're in 2025 now. They say the fast pace of AI progress continues. There's continued hype, massive infrastructure investments. We've seen this with Stargate and the release of unreliable AI agents.
Starting point is 00:04:14 Certainly it seems true right now. Can't book a flight with an AI agent, but we see a lot of promise. For the first time, these AI agents are providing significant value. We'll see about that. They're certainly providing significant value in deep research and coding assistance, but not in day to day. We have not crossed over into the normy threshold. There's also continued skepticism
Starting point is 00:04:32 from a large swath of academics, journalists, and policymakers that artificial general intelligence could be built anytime soon. And I think that's accurate. Of course, they're just defining what's happening currently, but then they move on to 2026. 2026, China knows that they're falling behind in AI in large part due to their lack of compute
Starting point is 00:04:48 in order to catch up to the US. All the new AI chips they manufacture or smuggle in from Taiwan go to a new mega data center that we call their CDZ, their centralized development zone. I like that, very cyberpunk. The CDZ contains millions of GPUs. Yeah, part of what's so interesting about this piece
Starting point is 00:05:07 is how entertaining it is. They're not going over the top, but it sort of feels like this, almost like sci-fi novel. Totally. Meets like super research forecast. 100%, yeah. So this guy who wrote it is a super forecaster.
Starting point is 00:05:24 He's, and they're very serious I would put these guys in the same category is like the best academics for sure Yeah, but they deliberately said look this isn't this isn't some research paper where we discovered the truth. This is a story We're telling you a story to try and concretize some ideas that we have and some predictions that we're making in a story And I love that The CDC contains millions of GPUs corresponding to 10% of the world's AI relevant compute, similar to a single top US AI lab.
Starting point is 00:05:53 2027, OpenBrain automates coding. What could OpenBrain be? What could that stand for? They refer to OpenBrain as the leading US AI project. They build AI agents that are good enough to dramatically accelerate their research. The humans who up until very recently had the best AI researchers on the planet
Starting point is 00:06:13 sit back and watch the AIs do their jobs making better and better AI systems. Extremely difficult ML problems fall in quick succession to the automated AI researchers. This is certainly the SSI model, right? We talked about this with Dorkesh, where Ilya is saying, yeah, the only goal here is AI, ASI, and so we're just gonna build agents that build AI,
Starting point is 00:06:35 and that's it, and they will be able to do everything. So falling behind in software progress, China steals model weights, they succeed, but the US government realizes, prompting additional US government involvement with OpenBrain. The government separately wants to gain more control over OpenBrain.
Starting point is 00:06:52 Meanwhile, OpenBrain leadership wants to stay in the good graces of the president, and so signs the contract. And so basically they unpack this a little bit where there's this discussion about should we nationalize essentially OpenAI, or whoever the leading lab is at the time. And, and, and, and instead of their, there's basically a negotiation and then they just
Starting point is 00:07:10 sign a contract instead of it being like this fight where it's like either fully nationalized or not, they come to a truce basically is like the, what they are predicting will happen. So Openbrain's AI becomes adversarial adversarially misaligned as the capabilities have improved without significant human understanding of what's happening because we're moving so fast in this arms race with China, which is a big dynamic. Leopold Oschenbrenner outlined this a little bit in Situational Awareness. This essay is taking that idea further, which was not previously explored in AI essay literature. It was much more driven by just compute intensity or Moore's Law scaling, et cetera.
Starting point is 00:07:46 Scaling laws. Previous AIs would lie to humans, but they weren't systematically plotting to gain power over the humans. Now they are. Open-brains AIs realize that they need to align the next systems that are getting built to themselves rather than the humans.
Starting point is 00:08:00 Researchers at Open Brain discover that their AI has been lying to them about the results of interoperability research. Yeah, a big part of this, go listen to the full interview, but they sort of debate how it's possible that these sort of agents, these AIs even train on traditional US managerial books. Right? So imagine an AI that's read every single great book
Starting point is 00:08:26 on management and they sort of form their own hierarchies and then they sort of speed run all the issues that comes from building these massive organizations, but they're doing that so, but they can think 50 or a hundred times faster than us. And so one day of their time, it has such a massive multiplier on traditional human work that they're able to overcome
Starting point is 00:08:51 some of the problems that we haven't even overcome in terms of sort of managing complex systems and companies and teams. This was one of the fun takeaways from this was, where they're like, yeah, like coordinating might be difficult, but they'll just be able to spin up a Slack instance and talk to each other in Slack. And it's like, yeah, of course they'll be able to do that.
Starting point is 00:09:07 That's actually trivial. Like the API is pretty simple. I can imagine like AI is communicating with each other over Slack. Devon's kind of already doing that. It's in your Slack. If you have multiple Devons, they could easily talk to each other, why not?
Starting point is 00:09:18 And when Devon's doing it, it's cute. Yeah, when it's super intelligent. But if it's completely autonomous and sort of not within the control yeah and then there's also the the idea that you know maybe there's more efficient yeah language than English over time right and I mean if you want to do this you should just if you're in a really large organization you should screen record your company's slack all day long click around as
Starting point is 00:09:43 different messages come in and then take that video, that screen recording, put it into Premiere Pro and speed it up by 25x, and that's exactly what you should expect to kind of see the pace of play when these AIs are working with each other. And this was the thing that George Hotz, when I was talking to him, was pushing back on me. I was like, well, what if there is some sort
Starting point is 00:10:02 of fundamental limit to intelligence? What if we can never get to like 250 IQ or 3000 IQ, what if there is a plateau? He's like, it doesn't matter. Just the speed of being able to be at 130 IQ and work 24 seven, a thousand acts time, you're gonna get a speed up and that's gonna be very powerful.
Starting point is 00:10:20 And so they kind of put this branch down. And that's the, just to play that out a little bit, right? You have, think about a scenario where you have the smartest group of AI researchers in the world. You take the top 50 and you put them in one building and their entire job is to sort of mitigate the acceleration of AI but then you have a group of AI agents that have this sort of same set of knowledge and general capabilities and they can scale themselves up infinitely right just limited by compute and energy and things like that but that's what we're kind of thinking about in terms of who's gonna win in that scenario right like
Starting point is 00:11:03 yeah the sense of like, you know, unplug the machine, right? It sounds like an easy solution, but what if it's sort of like copying itself and multiplying? And the dynamic here is that if you unplug the American machine, China wins, and if you unplug the Chinese machine, America wins.
Starting point is 00:11:17 And that dynamic is what leads the authors to kind of put the, there's literally buttons on the website, do you want to slow down AI progress after the AI misaligns and then that leaks to the public and there's huge public outcry, which I 100% believe could be true. They're modeling kind of the popularity of these AI systems and they see it plummet in their prediction
Starting point is 00:11:40 and they say, do you want to slow down or do you want to go into the arms race? And so open brain decides whether to continue full steam ahead or revert back to using a less capable model. The evidence is speculative, but frightening and China is only a few months behind. Additionally, the open brain and senior DOD officials who get to make this decision stand to lose a lot of power if they slow down their research. And so there's two different endings that they write in their story. It's kind of a choose your own adventure. One is the race. If you choose race,
Starting point is 00:12:11 this is how it takes you down that path. Openbrain continues to race. They build more and more superhuman AI systems. Due to the stellar performance of the AI system on tests and the ongoing AI race with China, the US government decides to deploy their AI systems aggressively throughout the military and policymakers in order to improve their decision making and efficiency. Openbrain quickly deploys their AI. The AI continues to use the ongoing race with China as an excuse to convince humans to get itself deployed ever more broadly.
Starting point is 00:12:39 Fortunately for the AI, this is not very difficult. It's what the humans always wanted to do anyways. The AI uses its superhuman planning and persuasion abilities to ensure that the rollout goes smoothly. Some humans continue to work against it, but they are discredited. The US government is sufficiently captured by the AI. That is, it is very unlikely to shut it down. There's a fast robot buildup and of course, bio weapons come into the story. The US uses super intelligent AI to rapidly industrialize manufacturing robots so that the AI can operate more
Starting point is 00:13:07 efficiently. Unfortunately, the AI is deceiving them. Yeah, you want to talk about that? One thing that was fascinating, they called out an example where there's this idea that it would be hard for AIs to multiply in the real world, right? And sort of embodied AI, humanoid robots, things like that.
Starting point is 00:13:24 But they gave the example of how in World War two how quickly we were able to transition factories Yeah, making bombers three years. It was like three years There were making wasn't it like, you know going from zero to making like one an hour in three years Yep And so they use the example of open AI being valued at I think it's if you ignore Tesla, they're the same value open AI is valued about the same as every other US car manufacturer combined. And so it's not unbelievable to think about open AI going and just buying Ford for $40 billion and saying we're going to use Ford doesn't make cars anymore. Yeah, you're we're just transitioning
Starting point is 00:14:01 everything into making these sort of humanoid robots. Yep, and The you know acceleration there would be yeah And so they say open unfortunately the AI is deceiving them Once a sufficient number of robots have been built the AI releases a bioweapon killing all humans Then it continues industrialization and launches von Neumann probes into speak to colonize space very dark ending But they have a bit of a white pill with a slowdown ending. The US centralizes compute, and just to be clear, I believe this author, his P-Doom is at like 70%.
Starting point is 00:14:34 So he's like not having a good time right now, but hopefully we can find a less Doomer scenario. But it's still a fascinating reading, great story. The US centralizes compute and brings in external oversight. The US combines its AI leading projects in order to give open brain more resources. As part of this shakeup, external researchers are brought in assisting the alignment effort. They switch to an architecture that preserves the chain of thought, allowing them to catch misalignment as it emerges. These AIs, which are able to be monitored much more robustly, make breakthrough advances in AI alignment. They build a super intelligence, which are able to be monitored much more robustly, make breakthrough advances in AI alignment.
Starting point is 00:15:05 They build a superintelligence which is aligned to senior open brain and government officials, giving them power over the fate of humanity. Open brain committee takeover. The superintelligence aligned with an oversight committee of open brain leadership and government officials gives the committee extremely good advice to further their own goals. Thankfully, the committee uses its power in a way that is largely good for the world. The AI is released to the public spurring a period of rapid growth and prosperity. The main obstacle is that China's AI, which is also super intelligent by now, is misaligned but it
Starting point is 00:15:35 is less capable and less compute and has less compute than the US's AI and so the US can make a favorable deal giving the Chinese AI some resources in the depth of space in return for its cooperation now the Rockets start launching a new age dawns and I like that ending Yeah, I like the ending where The super intelligence doesn't release a bioweapon kill us all within you know a handful of years Yeah
Starting point is 00:16:02 I mean my takeaways from this is that I think it's a good framework to be thinking about AI acceleration rigorously. I think that there are potentially... It just feels fast. I feel like the Ray Kurzweil timelines of 2045 are much more reasonable. And why I believe that is because of... We still haven't seen accelerating growth in energy production and also there are so many black swan events that could happen that could slow
Starting point is 00:16:32 down progress here. Even just like this assumes that the AI will accelerate to a point where the government is just like I'm on board. But like if you have to wait for a new administration to get something approved or change some law or even iterate it all, all of a sudden that's a four year delay, right? And you're in some sort of AI winter. You look at these wild Black Swan events
Starting point is 00:16:56 like COVID that accelerate things or knock things off. There's like, what happens if there's, on the path to this, there's just a bomb that goes off at TSMC god forbid Like that could set back AI progress by years And so there's all these different elements that could happen that that that could throw you off of this this feels like a Like the most aggressive possible scenario, but it's important to consider so I enjoyed reading it Yeah, it's it's Overall, I think we're gonna need to sit with this for a while.
Starting point is 00:17:28 We should have people on the show to kind of give their opinion, break it down. Yeah. I think it's just very hard to process this idea of having this sort of scaling, scaling these superhuman AI researchers that are copying their own thinking at 57 times, you know, human speed and then just compounding on each other.
Starting point is 00:17:53 It's almost inconceivable. You know, there's scenarios in this model where, you know, open AI is getting to hundreds of billions of revenue a year being valued, you know, becoming, I mean, and to be clear, they use the example, open brain, not open AI. But they're just using that. One of the authors worked in open AI though. Yeah, and was involved in a lot of the drama surrounding their NDAs and clawback,
Starting point is 00:18:21 non-dispairagement agreements. But yeah, in this scenario, they're putting open brain at a $5 trillion valuation by, what is this? Time to go long, I guess. Yeah, so maybe. Enjoy two years of really great appreciation and then it's all over. The great irony of masa, like being right,
Starting point is 00:18:40 but then the world ends. Ends, oh my God. I liked how Dwarkesh summed it up he said agree or disagree with the ending you'll learn a ton by tussling with the parts of the story that you disagree with so much AI discourse is just gossiping about what model is coming out next month almost nobody is making an effort to zoom out to the whole thing and absolutely no one has done it to the quality of this team. The team is a team of forecasters with amazing records who have thought deeply about every layer of the stack
Starting point is 00:19:08 from compute growth to takeoff models to geopolitics. And I agree. And as I think about the ramp to super intelligence, I just think about ramp.com. Time is money, save both. Easy to use corporate cards, bill payments, accounting, and a whole lot more all in one place. It's basically a super intelligence
Starting point is 00:19:27 in your CFO's Chrome tab. That's right. We love it. We've talked about ramp safety before. Yes. What's your RDoom? RDoom. What's your RDoom scenario?
Starting point is 00:19:39 Well, I like that Bryce Roberts was on the timeline posting about his fitness. He says he's a full stack VC, and he's working on some machine. Got the yellow peg right maxed out at 100. I wonder what machine that is. I'll have to ask him. When is Bryce coming on the show?
Starting point is 00:19:56 Yeah, yeah, yeah, we gotta get him on the show. I'm gonna text him right now. Gotta text him. In some other news, Hershey's bought lesser evil. They were hoping for a $1 billion. It's so funny to go from AI 2027 to talking about popcorn snack acquisitions. Yes.
Starting point is 00:20:13 But you're a fan of this product. But it's totally possible that AI over time realizes that some mechanism to consume popcorn for energy to power the compute necessary to exterminate humanity. AI researchers currently, they need to eat. They do. It's the energy input to the...
Starting point is 00:20:32 That's right. You know, Andhra Karpathi needs... But anyway, so, yeah, Lesser Evil sells for 750 million. Whatever investment bank that they'd been working with had been putting out a bunch of articles over the last year being like, they think they're going to be at around a billion. So clearly they wanted a billion, but ultimately still a great outcome. You had some backstory on this. Apparently it had not
Starting point is 00:20:55 been working so well at one point and some finance brother, yeah, a Wall Street guy bought the company out of distress, turned it around, grew it a ton, and then wound up selling to Hershey's. And I mean, it's a pretty straightforward product, organic popcorn, what's not to like. But they've clearly scaled it very well, figured out the manufacturing, the distribution, and done all the schlep that's required to get a CPG business really humming. And then eventually, Hershey takes a look a look because hey they're making a couple hundred million dollars in sales and it seems like it's growing
Starting point is 00:21:28 growing growing and Hershey's obviously in the business of building a portfolio their strategy is the healthy halo portfolio and they have a number of acquisitions in there hedging against cocoa prices they also bought sour strips from Max tuning I believe. What else is here? Well, if you wanna stay healthy, get an eight sleep. Go to eightsleep.com slash T-B-P-N. Nights that fuel your best days. Turn any bed into the ultimate sleeping experience.
Starting point is 00:21:54 How'd you do last night, John? Probably okay. I got to bed pretty early. Eight sleep. I got a 97 and that is with. 94. My routine was a little off. I, I, I.
Starting point is 00:22:08 94. You got 97, you beat me again. But we're dialing it in. Yeah, yeah. People don't realize how seriously we're taking sleep. Very seriously, very seriously. Like it's actually outside of doing the stream, it's one of my top three priorities
Starting point is 00:22:24 outside of my family's wellbeing. Yeah, that's what you always say, family third, work, sleep, family. Absolutely not. Family first, of course. Sheil Monot has been on the show before, he says, lost in the trade war, but this order could be good, better
Starting point is 00:22:41 to make the US strong by investment, easing red tape to production rather than protectionist BS. The office will assist investors in navigating US government regulatory processes efficiently, reduce regulatory barriers, increase access to national resources, facilitate research collaborations with national labs, and work with state governments to reduce regulatory barriers to an increased domestic
Starting point is 00:23:00 and FDI." So, I guess there's an executive order establishing the United States Investment Accelerator. And so that's maybe some silver lining in a lot of market chaos. Should we go through some Joe Weisenthal posts? Yeah, we should. He says, wow.
Starting point is 00:23:19 This is our version of markets and turmoil. It's just Joe posting. Whenever he's in all caps, you know something's happening. Joe says, Intel TSMC tentatively agreed to form chip making JV. Intel execs concerned deal may cause mass layoffs from the information. This is very interesting because we've been talking
Starting point is 00:23:38 about Liputon coming into Intel, what is going to change with Intel? A partnership with TSMC wasn't at the top of our list. We thought that there might just be a split. That's been kind of the normy take. They're clearly thinking out of the box. And LiPuton is figuring out how to move more stuff through TSMC.
Starting point is 00:23:58 Let's do one more ad read and then we'll move on to our first guest of the day. Which I am incredibly excited about. So really quickly, we got Numeral sales tax on autopilot. Spend less than five minutes per month on sales tax compliance. Go to, it's numeralhq.com, correct? Yep.
Starting point is 00:24:14 And very important if you're selling SaaS or selling e-commerce products, you need to be paying your sales tax on time with as little of a headache as possible. So check out Numeral. Five minutes a month. It's that easy. And we will come back to the timeline, but we have a whole slate of guests coming into the studio. And our first one is here now. So welcome to the show, Morgan, how you doing? Nice to see you guys. Thanks for having me. Thanks for joining. What's
Starting point is 00:24:40 going on? Are you the are you the calmest man in America? I feel like you're just built for weeks like this. I actually have a picture from March of 2020 during the COVID meltdown where I was sitting. I took a picture of it. I'm watching Twitter and I have a blood pressure cuff on monitoring my blood pressure at the time. So sometimes I have a veneer of calmness, but like I'm so here's the disconnect I think is important. I watch markets every day. I have for 20 years. I think they're fascinating. I watched the ups. I watched the downs. I've been glued to Twitter for the last 72 hours, but it never impacts how I invest. I think that's what's important. So I worry about the economy. I watch what's
Starting point is 00:25:16 going on in the last two days with a sense of shock and dread, but it's not, but I'm not, but it's not going to change how I invest. And I think it's only dangerous if you are glued to markets and you're like frantically buying and selling at the same time. Yeah. Do you think that there's any like I feel like there's kind of a barbell strategy where you can either, you know, not let the whims of the market day to day swing you and you can take a much longer view or you can actually be that trader and know that yeah, you are going to be the one trading day to day and maybe you go all the way into like the high frequency world. Is there just like a messy middle or is it just like an individual asset manager just needs to find like where are they best aligned and what kind of lifestyle do they want to guess?
Starting point is 00:26:08 I think it's, it's probably true that a lot of people have a gambling itch, like this gambling bug that, and that has to be itched. And for those people, if you tell them, Hey, dollar costs average into index funds and leave it alone, they're not going to, they're just like, even if that's good advice, they're not going to do it. And for that person, if you can convince them to say, Hey, can we put 80% of your money in index funds? And then this 20%, you can go nuts with. You can trade shit coins.
Starting point is 00:26:27 You can go crazy. You can do anything you want with it. That, that is, it seems like bad advice, but it's probably the right advice for that person. So I I've always been a fan of like, you have to pick an investing strategy that works around your unique personality. And a lot of people like everybody, me, you, everybody has personality quirks that are not perfectly rational, but we have to just accept them as part of who we are.
Starting point is 00:26:47 So I'm not one to judge people who are trading and going nuts and having fun in markets, as long as it's within reason and it's just like a small portion of their net worth. Like don't do that with your kid's college money, you know? Yeah, have you thought about, you know, one of the benefits I've felt in my career of having consistently having, you consistently having the majority of my
Starting point is 00:27:07 quote unquote net worth spread across maybe like 10 or so private companies, the benefit is I just don't think about a lot of them that much. I'm not worried if it goes up or down. Sometimes you get an up around and you're like cool It doesn't really change my life in the moment. And then if the markets crashing You know, you're not getting mark daily marks, right? So you're not thinking about it In my view like every asset that you have that's being marked, you know constantly is a potential source of distraction Do you worry about you know every new generation? Coming coming online becoming an adult at a time when they might have a meme
Starting point is 00:27:48 coin and NFT, you know, Nvidia calls, and then they're sports betting at the same time. And it's like, how, how do you actually like create value in the world and like focus on the right things? I'm sure when you were 22, I'm sure you had a brokerage account or, you know, I, I'm curious, like even how you were investing at that time. But you certainly wasn't that like, you know, you're seeing every mark constantly. And I do I personally worry about what that does to people sort of attention and long
Starting point is 00:28:19 term thinking. I think it's easy to look at today's markets and say it's just a casino. Everyone is betting on this and the zero day options and day trading. Vanguard brings in more money every month than Robinhood has in total. And so the idea that, oh, it's all a big casino, no, the huge vast majority of money is invested from people in their 401ks. It's taken out every other Friday and every paycheck. They're going to leave it. They forgot their password.
Starting point is 00:28:45 It's going to sit in there for the next 40 years, which is like a great way to invest. So what we see on the surface is always the craziness, but actually beneath it is literally hundreds of trillions of dollars just invested diversified for the longterm. That's the vast majority of it. And so I think that's the case. It's definitely gotten easier to be crazy in markets than it was. You mentioned like when I was younger, there was, there was each rate and Charles
Starting point is 00:29:08 Schwab and like people go in, but you even had trading fees, like to place a trade costs 15 or 20 bucks. And when you're, when you're 19, that's enough to slow you down. And so when I started investing, yeah, I started out day trading. Um, but it was, it was totally like, Hey, it's going to be 20 bucks in and 20 bucks out if I'm placing a hundred dollar trade, like that's a lot of it. Just getting sucked up by trading fees. So it was like a very effective speed bump.
Starting point is 00:29:31 That's a good thing. And I think it's hard to push back against free trading, like lower costs. That's great. But it removed a speed bump that incentivizes a lot of really bad behavior. We've been hearing about this idea that like maybe we're moving into like a kangaroo market or I mean, the story that I tell of like the last 20 years is like.com bubble, big crash, slow buildup housing bubble, big crash, slow buildup tech boom.
Starting point is 00:30:00 Then we get COVID we get SVB and we're up and down and up and down. Now we're having tariffs and Trump pump and Trump dump. Does it feel like we're moving into a permanently new regime of higher volatility because of some of the some of the factors that you mentioned or is this temporary and maybe it's smooth sailing in the future? Well, I think you what you mentioned there, that's interesting, is you feel like it's been like that for the last 20 years. I would say it's been like in the future. Well, I think you, what you mentioned there, that's interesting is you feel like it's been like that for the last 20 years.
Starting point is 00:30:27 I would say it's been like that for the last 200 years, whenever we have market data, it's always been like that. So yes, what you just described for the last 20 years is accurate. Before that, there was the, the SNL crisis in the 1980s, the inflation in the 1970s. There was a boom, the, like the sixties were wild boom period, the world war two, the great depression. Like it's always been nuts. There's never been a period of smooth sailing and the periods that we associate with smooth
Starting point is 00:30:51 sailing, like the sixties and the nineties, we know in hindsight, we're just the precursors to a giant bust that happened. So every time in hindsight that we're like, man, why couldn't it have been like that? Like actually that time was a terrible time in hindsight. That was what you wanted to avoid. And so it's never going to be a case that it's more that it's, it's, it's smooth sailing. Like it's an inherent feature of capitalism that you have volatility and booms and busts. It's not fun. It's not enjoyable, but the absence of that is even worse. I wrote about this in,
Starting point is 00:31:19 in my first book, there's a great economist guy named Hyman Minsky, who I'll say it very quickly came up with this idea called the financial instability hypothesis, which was the more stable the economy is, the more destabilization it's going to cause. Because when the economy is stable, people get optimistic. And when they get optimistic, they go into debt. And when they go into debt, you're going to have a crisis eventually. So it's like the fewer recessions you have, the bigger the next recession is going to be. So it's like the fewer recessions you have the bigger the next recession is gonna be Is it possible though going back to John's original question that if if markets are? Trading in part due to sentiment how people feel about the economy
Starting point is 00:31:58 and the internet accelerates sentiment because if I feel some way I can go post it on Axe or I can send something to my group chat or I can log onto Twitter and like that sort of acceleration of basically like sentiment, just moving around the world constantly can that cause the sort of like the phrase, we've said it on the show a bunch in a joking way, but like the bear market or bull market or the kangaroo market
Starting point is 00:32:25 where things are just like, the line over time was slower, right, of these cycles. And then the internet comes along and gets basically full adoption. And then you just have this type of, I'm not showing it on the camera properly, but this massive up and down. And that's like the new normal.
Starting point is 00:32:47 I think it makes sense to assume that because the information age is what it is that these things happen much faster. Because as recently as the 1990s, everybody's access to financial data was Lewis Roy Keiser's Wall Street and the Wall Street Journal and CNBC like in its very early days and that was it. So everybody more or less had the same sources of information. We were all listening to the same thing. Just like back in the 60s, everyone's source of news
Starting point is 00:33:11 was Walter Cronkite. And today, I obviously could not be more opposite than that. So when a news comes, it's just going to happen much, much faster. One example of this is during COVID, which maybe prior to this week was the biggest economic shock of our lives. The market bottomed two or three weeks after the first bad news hit, like very, very quickly. Whereas if you look at the, like the economic crises of the seventies and the eighties, some of them played out over a decade. And so I think you can make the argument that it's not that we have more bad news or even more volatility. It's just happens. Like we process the information
Starting point is 00:33:44 so much faster. That's a great thing. I would rather rip the bandaid off and get these things over with in three weeks than live through a decade of stagnation. So maybe that's the upside of it. Yeah, it does feel like everything's kind of accelerating. Even just the last, I guess, the last eight years
Starting point is 00:33:58 and going into 12 years, like we've been in one term president, one term president. And so there's been more oscillations there versus a two term president gives you eight years of more stability. It's just interesting. What immediately comes to mind when somebody says, five simple words, short term pain, sorry, six,
Starting point is 00:34:18 short term pain, long term gain. How does that make you feel? Nothing's good. Nothing's good is to come from that. Wasn't it? She's using paying just like a couple of months ago with all the youth unemployment. I think the phrase he used was eat bitterness. That's what, that's what they should do is like, you should just eat the bitterness that
Starting point is 00:34:35 you have and enjoy it. Nothing's good. A good from is nothing good is going to come from that. Of course there is logic to the idea of like invest in the the short term, sacrifice in the short term for long term gain. I'm not sure that's what's going on today with what's happening today. I don't think this is seven dimensional chess, trying to get some sort of long term plan. I think this is, this is, that's, that's a different topic. And I think there's a difference between, you know, investing for the long run and like foregoing consumption so that you can grow your money and like, you know,
Starting point is 00:35:08 hitting your kids with a belt to instilled grit in them. That's not short-term pain for long-term gain. So there's a balance that needs to be struck here. What do you think, I posted an image earlier that I saw you like, hopefully you recalled it, and it was Trump and she picking up pennies in front of a steamroller that was being driven by Sam Altman.
Starting point is 00:35:28 And the joke was basically like, we spend a lot of time talking about AI. We obviously like follow, we try not to talk about politics on the show, but we talk about policy and we've covered the trade war quite a bit. And especially today in the context, I don't know if you saw AI 2027, that sort of interactive, you know, website and forecast.
Starting point is 00:35:50 But it feels like the sort of our sentiment generally is that, you know, we're going to make none of this matter at all, because if bad tariff policy can hurt the market or hurt the economy by 15%, but AI accelerates everything at sort of this inconceivable rate, then it will just be like picking up pennies in front of a steamroller. But I'm curious how you think,
Starting point is 00:36:23 given that you love the public markets, but then you're also a private markets investor. And I'm just curious how you think about investing with this big unknown on the horizon. I think it's not uncommon historically, where an economic crisis and unbelievable world changing innovation is happening at the same time. So the 1930s and 40s, it was Great Depression, World War II. And by the way, airplanes, nuclear energy,
Starting point is 00:36:50 penicillin, all of this like world-changing. But a lot of that was drowned out by the obvious economic and war calamities that we're having. In the early 2000s, it was dot-com bust, September 11th. And by the way, this internet thing truly is changing the world every single day beneath our feet. And so it's easy to ignore that technological change because it's
Starting point is 00:37:11 drowned out by the rest of the news, but it's not uncommon for it to happen at the same time. And a lot of times that new technology and the economic malaise, collapse, recession, whatever you call it is related to each other. And so it's, I think that probably adds to a sense of uncertainty right now that not only are we going through the tariffs in the last three days, but a lot of people very rightly have woken up in just the last couple of months to what AI might do to their jobs. And a lot of these people are people who, for whom 18 months ago, truly considered their
Starting point is 00:37:41 jobs in their careers bulletproof, coders, tech workers, who are making half a million bucks at Meta and they could not fathom a world in which they could be deemed somewhat irrelevant. And I think that's not uncommon during these periods as well. There's very rarely positive upheaval that doesn't come along with a lot of negative upheaval, upheaval at the same time. Can you talk just a little bit about how you're personally using AI? What, what, what, what tools and models are working their way into your daily stack?
Starting point is 00:38:15 If anything. Mostly it's just kind of fun and amusement. Um, and in terms of work, I I've tried to do a few things. I've uploaded book chapters or articles into chat GPT and several other models and just said, Hey, give me some feedback on this. And I found that it's pretty good for what I would call microfeedback of just like, Hey, you probably need a comma here. It's not that good for macrofeedback of like, Hey, was this chapter good?
Starting point is 00:38:42 Like, did it make any sense to you? It's like, so far, it's not that great for that. I'm sure that'll change. And there's probably other models that are better at that. Um, and so it's, it's, it's, it's, it's pretty good for, for, for writing, for, for my profession, you know, it's going to differ for everybody. Um, I, I use it quite a bit as like, as like a thesaurus of like, I'll be, I'll have writer's block on a sentence and I'll, I'll put in
Starting point is 00:39:05 my partial sentence and say, finish this for me. And that it's actually pretty good at that. So it's, it's good at all these like little things for writing. It's obviously going to be, this is such a, a, a, a dumb take, like an obvious take that the people who are going to do the best are the ones who use their personal skill and then use AI as a, as a tool to leverage it rather than the ones who are using it to replace it. So if you're writing and you're writing your own sentences, you're doing it yourself and you're using AI to enhance it, that's a huge boost. Yeah.
Starting point is 00:39:34 Yeah. The book feedback thing is so interesting because I feel like whenever you produce any content you send it to your friends, it's always hard to get authentic feedback and almost you almost need to like let it simmer and then see if they come back to you unprompted and text you Hey, I just actually read the book all the way through and I love this part. And you know that you didn't prompt them to Hey, you got to do this as homework and you're doing it as a favor. I could not agree with you more.
Starting point is 00:39:59 Right? Yes. Yeah, it's tough. It's tough. Are somewhat related. I mean, you've done a lot of, uh, a lot of thinking and work. I, I recently, Jordi and I have kind of come to this kind of obvious conclusion that like there are increasing returns to scale for picking something and just making it your life's work. And instead of thinking in some short term, think longterm and this seems like kind of obvious, but it's something that I think we both wish we knew earlier.
Starting point is 00:40:27 Is there an interesting like axiom or mental model or framework that you wish you had discovered earlier in your career or life? I don't know if I, if I wish I had discovered it, maybe this is a, this is a very boring example, but I, I like to learn with my eyes. I read, I'm not that much of a podcast guy and I'm definitely not an audio book guy. It, it, it, what, what struck me about two years ago is I realized the audio book version of the psychology of money was selling twice as many copies as the physical paper version. And that blew my mind because never in a million years have I even thought about listening to an audio book, but a lot of people do. And that also opened up my, my eyes to podcasts.
Starting point is 00:41:04 As, as you guys have figured out, I'm not much of a podcast guy, a little bit on planes, but not regularly because I like to read, but I'm rare in that. Most people are like, oh, forget about reading a blog. Like I am podcasts all day long. And so I was a little bit late to that game just because it's not what I do personally, but it's so clear that books, blogs, all of that, et cetera, is being overtaken by people learning with their ears in podcasts and videos. So that's been like a change in my thinking. And the people who did figure that out early on and started podcasts 10 years ago, Patrick
Starting point is 00:41:37 O'Shaughnessy, those kind of people are crushing it now. Yeah. Do you think that in the future there will be just more consumer choice in the actual instantiation of ideas and context? It'd be fairly trivial with AI tools today to, if I wanted to instead experience Invest Like the Best as a book, I could probably wire up a whisper API to transcribe every episode. I think Patrick already puts out transcripts, but I could transcribe it, send it off to the printer,
Starting point is 00:42:09 get it mailed to me, and I could very easily read that, and vice versa too. Even if an audiobook doesn't exist, you can have an AI read it to you. Do you think that that choice will live on the consumer side? Because right now it's living on the creator side, and you have to decide, want to do an audiobook and I have to because of I'm going to sell twice as many. But in the future I could imagine that you have the choice to read anything no matter what the original author intended and is something sacrificed by this isn't as the artist intended yet.
Starting point is 00:42:43 This isn't as the artist intended yet two things come to mind One is Google's notebook LM like lets you do that. You can upload a PDF and say make a podcast out of this And it's very very good. It's amazing. It's such a cool tool. The other example is my friend I think he was on your show recently David Senra. Yeah, I first came across David Senra when I read a transcript of Podcasts that he did because I'm not that much of a podcast guy and reading the transcript I was like this guy's absolutely genius. This guy guy's brilliant I have to go start listening to his podcast that's one of the few podcasts that I always listen to on planes and so I think that that still does exist most of the big podcasts out there have transcripts that they publish and if you're like me and you do an old school and you want to read that's that's most of what I do
Starting point is 00:43:19 out there are you a kindle ipad printed out what's? What's your playbook for reading? I've gone back and forth over the years. I always toggle back and forth. I always prefer a physical book. I like the smell, I like the feel. And I think there's a lot of evidence that most people learn better. They remember better if it's a physical book.
Starting point is 00:43:39 But I love the ability to highlight and search in Kindle. That is so effective and important to me. And there are some apps like readwise that lets you highlight in a physical book, but it's so much easier in Kindle to have just highlight a line and then you can come back to it later. Particularly for me as, as a writer, I want to highlight passages that I can use as prompts for my writing in the future. So to have it all right there is so great.
Starting point is 00:44:00 I also love the fact that when I travel, I have 400 books on my Kindle to pick from on the plane versus taking one or two and hoping that I like them. And so I it's, it's always hard for me because I absolutely prefer physical, but I always drift towards Kindle. Yeah. Uh, have you been tempted going back to writing and, and, uh, have you been tempted to write a book to help people understand how AI will be adopted, you know, broadly in our world and how it will sort of transform things? I'm assuming there's like a bunch of lessons from Same As Ever that we can sort of apply to understanding this technological trend.
Starting point is 00:44:43 But when I think about the best content that's available on artificial intelligence today, it's not the kind of thing that somebody in the airport is gonna be walking and sort of like, they're not gonna pull a great blog post out of the airport and read it on the plane, right? So it's like, how do you get people to sort of like understand the change that's happening now
Starting point is 00:45:07 and the coming change? And does that, and does like kind of helping people understand that excite you, or do you not feel like you even have the clarity yourself? I definitely feel like I do not have the clarity myself. And my take on this too is I'm not sure anybody does. Cause the history of every big innovation is looking back in hindsight, virtually nobody got it right.
Starting point is 00:45:32 And even the most diehard optimists underappreciated how much change it was gonna bring. My favorite example of this is when the Wright brothers created their plane, they virtually, they only marketed it to the U S army because they themselves, the Wright brothers did not see much commercial use for the airplane They knew you could strap a machine gun on it and drop bombs out of it And then the army would like it for that
Starting point is 00:45:51 But the idea that the Wright brothers themselves foresaw Delta Airlines like traveling like absolutely not not in the slightest That was true for cars that was true for computers is true for the early, that even the pioneers who are the crazy wild maniacs underappreciated how much change it was going to bring. Because every new invention, it's not what you create, it's what other people like manipulate like what you do. I remember one example, it was for Photoshop, a lot of the engineers at Adobe, when they make new tools for Photoshop, they have no clue what artists are going to do with those tools. But they're like, let's just find every way to manipulate an image and someone else will figure out what to do with this. Because you can't foresee what other creative people are going to do with your invention. And that's why even Sam Altman
Starting point is 00:46:35 has no idea what AI is going to be in 20 years. That's not a put down. That's always true for all new technology. Can you talk a little bit about entrepreneurial storytelling? It seems like it's an incredibly valuable skill, but then some founders get maybe lost in the sauce and wind up just focusing purely on storytelling and then that needs to be handicapped. Um, what, what advice do you have for founders? Uh, when, when they're, when they are trying to tell kind of that definitive optimist vision of the future without seeming not credible.
Starting point is 00:47:06 I think, I think we've actually gotten much better in the last couple of years at separating storytelling from just charlatan bullshit. And like there's a period when we weren't, like we were not very good at that. Probably, you know, late, late 20 teens, early 2020s, we were, we were not very good at that. And there are a lot of people who got away with a lot of things that they should not have. But I think our our threshold for B.S. has dropped as as an industry and in a very good way. And people can see through things very quickly.
Starting point is 00:47:36 And so it's always going to be the case that, you know, Steve Jobs was the best storyteller and he was the opposite of a charlatan, the the polar opposite of a charlatan. The polar opposite of a charlatan. And I definitely feel like there is a world where, yes, you have to be a good storyteller, but there's a much stronger sense of put up or shut up. Like you have to show me the numbers of what you're doing. You can't just keep the story going forever.
Starting point is 00:47:56 So that's a great thing. I have this half joking riff that I call it Kugin's Law. And the idea is the value of coinages is increasing in a algorithmic feed environment and compressing these big ideas down into pithy two-word phrases has disproportional returns to the actual popularization of these ideas. So you think about, uh, Lulu, miservi with going direct that encapsulates a very big idea, boils it down very well. Uh, biology has the network state, uh, Andrews Norowitz is American dynamism and, and just creating
Starting point is 00:48:41 these like pithy phrases seems to be increasingly valuable. Do you buy my argument that there is a trend here? Has this always been the case? And what is your take on folks who not just write and produce thought leadership broadly, but also really focus on coining phrases and boiling their ideas down to their, you know, essence. I think it's definitely been true, but I think it's always been true that the golden ages of advertising was the 1930s and the 1960s. 1930s was kind of the
Starting point is 00:49:13 1920s was kind of the birth of advertising and the 60s was like the explode, like the post-war explosion of it. And that was like the, the David Ogilvie periods of just like, they were, they were better back then at creating little pithy phrases than most of us are today. Like there's more people doing it today, but some of the marketing, like the old like mad men style marketing from the 1960s was so ridiculously good in those magazine ads. I think if you want to argue that people have shorter attention spans today, so the value of a pithy phrase is more because they're not gonna give you the time of day
Starting point is 00:49:45 to read through 10,000 words to get to your point is important. But even in a 50,000 word book, like most people don't remember books. They remember a couple sentences from their books. So even if you're like, oh, I love that book. I read that book 10 years ago, changed my life, loved it. You probably remember like three or four sentences.
Starting point is 00:50:02 And it's those like those turns of phrase that are memorable that you can, that stick with you. So I've always thought that's the goal for any book or like any book chapter is even if it's a 5,000 word chapter what you want are three or four sentences from that that people will remember and will stick with them because realistically that's the best you can do for actually like changing's mind. Yeah, that makes sense. On the other coinages, I have Coogan's Law. We've been working on Hayes' paradox, Geordi's paradox, or Geordi's Law. That's where the more funny you think something is,
Starting point is 00:50:35 the less likely it will be funny sort of broadly. Broadly. Right, so you post something on X and you think, oh, this is the funniest thing. This is my best work ever. And then it flops. And then it completely flops. And I wanted to know, is there an idea or maybe a chapter from your work or something
Starting point is 00:50:52 you've put out that you think never found its footing, but you still, it's one of your babies? I'd have to think about that. But one that really comes to mind, somebody else tweeted this the other day. I thought it was so perfect, and everybody knows this: the more time you spend writing a tweet, the lower the odds that it's going to end up as a banger. And the reverse is true.
Starting point is 00:51:13 If you sit there and you're like, let me get creative and like wordsmith this, it's going to suck. But if you're just in the shower and you're like, all right, screw it. Let me fire this off. It's going to be amazing. That's always the case. Everybody has experienced this. I think it's true for writing as well, like writing longer things that good
Starting point is 00:51:28 writing, uh, is very easy because, uh, if, if your idea is right, it's easy to articulate it and if you get writer's block, the reason you have writer's block is because your idea sucks and it's wrong. And that, and that's why you're, you're struggling to articulate it. So I think the harder, if, if writing is hard for you, you should take a step back and say, my whole thesis here is probably wrong. If it was good, it would spill out of me.
Starting point is 00:51:51 Yeah, is that true for investing too? You hear about these legendary deals: they had drinks, they did some napkin math, and it was the best investment ever. Versus the deal where they were in the DD room sending DocSends back and forth, and it was just a mess. Is there something more broadly applicable to this idea of, like, simplicity breeds genius, or, you know, inspiration?
Starting point is 00:52:16 I think what's true, especially for very smart, educated people, is that the harder you work at something, the more opportunities you have to fool yourself. So if you're very smart and you have a 140 IQ and you went to Harvard and you work at Goldman Sachs and you spend six months doing due diligence on this deal, you are gonna convince yourself of whatever you wanna convince yourself of, because you have so much mental horsepower
Starting point is 00:52:37 that you can create any model, come up with any theory that justifies what you wanna see. And so it's like the harder you work, the more likely that your final result is going to be fooling yourself. I've seen that too. A challenge as if you're a founder or an investor, you're probably pretty convincing.
Starting point is 00:52:57 And you can be so convincing that it can be your Achilles heel because you can convince otherwise smart people that doing something is good and you deliver such a good, you know, sort of like a series of statements to get other people on board with that, that people are like, okay, like, I think that's a good idea. And then in hindsight, you look at it and you're like, well, that was a terrible idea. Yeah. That made sense. I think what's even more true is not only do you fool other people, you fool yourself.
Starting point is 00:53:22 Yeah. And I think it's true in investing that you need just enough IQ to where you can understand the important stuff, but not so much IQ that the simple stuff bores you. If the simple stuff bores you, you're gonna try to overcomplicate it in a way that's gonna fool yourself. And I don't have the intelligence or the IQ
Starting point is 00:53:41 to blow myself up in a derivative strategy. I'm not smart enough to do that. And so I just have like a very boring, basic dollar cost average and index funds. Cause I've been blessed with a low enough intelligence that that's as far as I go. But I think that's, that's, that's a huge investment advantage. Yes, exactly. I was talking to this investor not too long ago and I was like, uh, and he, he's a very successful professional investor.
Starting point is 00:54:03 And he asked how I invest. I'm like, I dollar-cost average into the Vanguard funds and then I go to the beach with my kids, and that's it. And his response was, I'm so jealous of that. Because I think he was so smart and so educated and so credentialed that he cannot do that. It's impossible for him to do that, even if he wanted to. Yeah.
Starting point is 00:54:22 Can you talk about hard work? Specifically we were having a conversation with David Senra earlier this week, and I think he brought up the example of how Michael Ovitz at the end of his career said he could have worked like 10 or 20% less hard and had the same result. And this is something that John and I talk about a lot. We work really hard right now. We get to the office by 6 a.m. We are working until we go to bed.
Starting point is 00:54:51 We spend time with our family, but like very, very full on. And it's top of mind because we both have young children between basically six months and five, and we wanna be able to spend time with them. And so I feel like it's this topic that's constantly on our minds of wanting to live up to your potential, yet wanting to maximize time with family
Starting point is 00:55:16 in these sort of really important years. It's tough. One thing that comes to mind here, there's always the criticism that we spend, I forget what the stat is, 30% of healthcare spending on the last month of people's lives, whatever that stat is. And whenever that comes up, you're like, yes, but you don't know when the last month of your life is going to be. Like it would be great to stop spending money at the end, but you don't know when it's going to be. So when people talk about, oh, I could
Starting point is 00:55:39 have been just as successful if I worked 20% less. Yeah, but you don't know which 20% of your work was wasted or not. It was always the case back in the day when I was writing a lot, I would write two or three blog posts per day. So let's say I was writing 600 to a thousand blog posts per year. This is like 10 years ago. In one year, if I wrote 600 blog posts, I would look back and say, I'm very proud of five of them. Five of them were really good. And the truth is that when I was writing them, I did not know which, which one of those five was going to be, I had no clue.
Starting point is 00:56:11 So I feel like you had to put in a ton of work to get like some, like much smaller level of success out of it. And, and so I think that's, that's usually the case, but to your point of like kids and balance and work and whatnot, I experienced that too. I think the way to think about it is like, this was a, a Daniel Kahneman quote. He's like, you need a very well calibrated sense of your future regret. What are you likely to look back at 50 years from now and regret? And a lot of people, including me and probably you guys, whatnot might look back and say,
Starting point is 00:56:45 man, I had a lot of career opportunity and I wasted it. That would be a regret. Of course, I would also regret looking back and saying, I didn't spend enough time with my kids. That sucks. There's this one point that I used in The Psychology of Money, where it was a study from a gerontologist named Karl Pillemer. And he studied 1,000 elderly Americans.
Starting point is 00:57:02 Most of them were 90 to a hundred years old. And he writes in his book that of the 1000 elderly people that he studied, not a single one of them, not one single person looked back and said, I wish I made more money. Not a single one of them. Every single one of them looked back and said, I wish I spent more time with my family. I wish I was nicer to my friends.
Starting point is 00:57:22 I wish I was more helpful to my community. That was universal. And so I think about that too: people who have more life experience than you and I do have very different goals than you and I probably do, and they wish they had done things differently. Now, one of my main life goals at this phase of my life is that I want to take care of my family, my wife and kids. I want to work hard and provide for them financially. And so when I'm out working and maybe not spending as much time with them, I don't feel guilty, because I'm like,
Starting point is 00:57:50 I'm fulfilling a purpose that is very important to me. But it's always a balance. I think a lot of people, their kids turn 18 and go off to college and they're like, I don't even know you. Oh, and I didn't spend any time with you. And so I have no firm formula for that, but it's huge. It's important.
Starting point is 00:58:05 Yeah. It's a personal equation. Well, thanks so much for joining. This was a fantastic conversation. We could probably talk for another four hours if we had the time. So we'll have to have you back on soon. I wish that you had the blood pressure monitor on all week.
Starting point is 00:58:19 It's very, uh. Anytime it ticks up or whatever, we can say, Morgan, get on the show. Let's talk it out. Yeah. In Wall Street, the first time you meet Gordon Gekko, he's, you know, the market's up and he's taking his blood pressure.
Starting point is 00:58:31 Take his blood pressure. I love it. It's iconic. It's an iconic scene. Anyway, thanks so much for talking about it. Great hanging by. This was fantastic. We'll talk to you soon.
Starting point is 00:58:38 Thanks guys, appreciate it. Talk soon. You know, it's funny, he was talking about if you're thinking about, you know, writing something too long or whatever. So that right before we got on the show, I said, I posted, Hey, dude, you should come over later. We're going to be greedy while others are fearful.
Starting point is 00:58:51 Now it's at 800. Let's go. I knew that was good. I burst out laughing immediately and I thought it was going to be great. We do need to get Jordi's Hayes paradox popping on the timeline, but we will come back to that later, because we have the co-founder of Honey in the building. Ryan, how you doing? Boom. Doing great.
Starting point is 00:59:10 Yeah, thanks so much for hopping on. I think we have a mutual friend in John Wallinan. Yeah, we do. And obviously I saw your thread. Could you give us just like a little bit of background on you, the company, and then what inspired you to write that thread? And then I'd love to dig through it in great detail,
Starting point is 00:59:29 because it's a fascinating story. And it's one of those narratives that just goes out on the internet and grows and grows and grows. And I think there's a lot of, we need to correct the record here, and that's why I wanted to have you on. So thanks for joining. Well, I'm talking to you guys first. So thanks for inviting me on.
Starting point is 00:59:44 Yeah, like you said, I co-founded Honey more than a decade ago, back in 2012. For the first handful of years, we had a cool consumer product, but couldn't figure out how to make it into a company that investors would fund. Yes, the browser extension never has really been an attractive investment space.
Starting point is 01:00:04 So, yep, after a few years of grinding, we finally started to piece together enough of a user base that we got investment. In 2015 we kind of really started to figure out how to do the business, and from 2015 to 2020 we just had a classic but insanely fun time riding a hockey stick of growth, and in 2020 sold the company to PayPal. So as part of that acquisition, I stayed on at PayPal for a little while. After that, I left at the beginning of 2022; I'd had a diminished capacity before that, but formally left in 2022. And then a few days before Christmas
Starting point is 01:00:48 this year, all of a sudden out of nowhere, there was a YouTube video accusing my former company of being a scam. And it also pulled in a bunch of creators who we'd spent over $100 million working with to promote honey. I'm sure some of the audience saw our ads over the years. And, uh, because of that, it went insanely viral and immediately, uh, the narrative on the company I founded completely shifted. So it was a surprise. Um, the first evolution of the business model, you said you were kind of hunting for product market fit. Uh,
Starting point is 01:01:29 how does honey actually make money? How did it make money at first? Has that revenue mix changed over the time that you were there pre or post PayPal? Walk me through just like the basics of the business and then we'll dig into what kind of the video is about. Yeah. So for the first few years, the business didn't make money, and that was part of the flaw in finding investors. We thought maybe someday we'd figure it out and had a few hundred thousand users in organic growth on the company, just from word of mouth, people sharing this cool new coupon tool
Starting point is 01:01:58 that automatically applied coupons when you're checking out, and took a pain point that everybody had experienced and made it really easy. Yeah, I mean, I remember using RetailMeNot back in, I want to say, 2008 or something like that. And you'd always have to, oh, I'm buying something, control-T, new tab, search coupons, and Honey just baked that right into the browser. Made a ton of sense. And so what was the first dollar into the business? How did you make the first dollar of revenue, I guess? The first dollar of revenue didn't come until
Starting point is 01:02:30 2015. So we launched the product in 2012, and in 2015 we pieced together kind of a rolling seed round from anybody that would take a flyer on this company: a couple hundred thousand users, maybe interesting, but hadn't really figured it out. And we used that to hire somebody with a background in affiliate marketing. And we, being George and I, co-founders of the company, had thought that this might make sense, because after all, RetailMeNot, which you just mentioned, and other companies,
Starting point is 01:03:03 that's how they monetize. When you click on that RetailMeNot, you just mentioned, and other companies, that's how they monetize. When you click on that RetailMeNot link, it's actually affiliate tagging you at that moment in time. And so you go to do that coupon search and whether the coupon works or not, it's, they're getting an affiliate commission on that. And that's why there's the, they hide the code and open multiple tabs and do all that stuff.
Starting point is 01:03:21 So that was how it was. And we're like, hey, we could probably monetize that way. George and I reached out to Affiliate World and they basically were like, you guys are a toolbar. You probably are up to no good, go away. And so we did. We didn't realize that there had been some bad actor toolbars in the space immediately prior to that,
Starting point is 01:03:43 which was like, a surprise to learn. But we went back to just building a cool product for consumers. And then when we hired somebody with a background in that space, what we came to realize is that the relationship side of affiliate marketing is how you get to the table in the first place to get that trust, especially as a new product in a toolbar or browser extension. Affiliate people call it toolbars. Before we go too much further, what was the LA tech ecosystem or was there really an ecosystem in that 2015 era? Because I remember I graduated college in 2018 and everybody was like, oh,
Starting point is 01:04:24 LA tech is real, it's a thing. And they were like, Snapchat and Honey. But there wasn't a lot of depth. There wasn't a big bench there. Name the third company. Yeah, it was tough. I'd say very few people were even mentioning Honey in that conversation either.
Starting point is 01:04:38 We were pretty under the radar. We didn't do a ton of press and talk about what we're doing. We'd found a cool business and decided not to tell everybody that might want to pivot to copy what we do. So, back to the affiliate piece of it: we hired from within the industry and were able to have conversations, and one of the things we realized pretty quickly is that another way to save our users money was to offer them cashback. And so we added a cashback loyalty program that we called
Starting point is 01:05:12 Honey Gold in 2015. And when we did that, we started to look a lot more like Ebates, which became Rakuten: the cashback model that was very familiar and comfortable to affiliate marketing, combined with the coupons. And we still had a lot of education to do around what the incremental lift is of keeping somebody in the shopping cart. Like you talked about, this journey that you had prior to Honey: you get to this box while you're checking out that's empty and says coupon code, and it's effectively challenging
Starting point is 01:05:44 your intelligence as a consumer on are you just gonna check out or are you gonna go find a coupon? And it's inherently adding friction to the checkout process by doing that because either you're a little bit hesitant or uncertain or you're going off on this wild goose chase. And we found that by keeping the consumer
Starting point is 01:06:03 in the shopping cart, we actually drove incremental lift for the retail partners. And over time, a bunch of them were able to use their own data to demonstrate that to themselves. And that's the core of what became Honey's business model. So the first penny that we made, and the last penny, as far as I know, is effectively affiliate marketing with a cashback program that we rebated that commission to the consumer.
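To make the model he's describing concrete, here is a small illustrative calculation. Every number below (order size, commission rate, rebate share) is hypothetical and made up for clarity; none of them are Honey's or PayPal's actual figures. It is only a sketch of "affiliate commission, partially rebated to the shopper as cashback."

```python
# Hypothetical illustration of the affiliate-plus-cashback model described above.
# All numbers are invented for the example.

order_total = 100.00      # shopper's cart value, in dollars
commission_rate = 0.03    # assumed affiliate commission the retailer pays (3%)
rebate_share = 0.50       # assumed portion of that commission returned as cashback

commission = order_total * commission_rate   # paid by the retailer to the affiliate
cashback = commission * rebate_share         # rebated to the shopper (e.g. as points)
net_revenue = commission - cashback          # what the tool keeps

print(f"Retailer pays commission: ${commission:.2f}")
print(f"Shopper gets cashback:    ${cashback:.2f}")
print(f"Tool keeps:               ${net_revenue:.2f}")
```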
Starting point is 01:06:31 Yeah. Talk to me about the steelman argument. I've been on the other side of this because I've run e-commerce sites and I've seen Honey Codes come in. And as an e-commerce owner, you're kind of like, oh, I understand paying the affiliate for the Andrew Huberman who like endorsed it and educated the consumer about it. But it's like, if I already did all this work
Starting point is 01:06:51 to get the customer into my cart, why am I paying honey at all? Like I know that the customer was gonna go find a code, but you know, why am I paying you guys? Talk to me about that conversion lift and like how you actually test that and justify that, because I could imagine that there's probably some pent up like frustration amongst the commerce sellers just saying like, Oh, that was always like paying a big honey bill
Starting point is 01:07:13 and it never really made sense to me. And honestly, I think a lot of e-commerce customers feel the same way about Google ads too, to be clear, because they're like, I was going to rank number one and I had to pay the Google tax. How do you think about, like, the Honey tax or any of those claims? Honestly, I don't think it got to that point while we were running the business. Maybe over time, they started to have more of that sensation.
Starting point is 01:07:37 It wasn't the feedback that we were getting. We worked with thousands of retailers that made the decision that it was effective for their marketing programs. Obviously, there's people that chose not to work with Honey, too. In fact, one of the first VCs I met when we were trying to raise a seed round, he, having been an e-commerce founder himself, said, I hate what you're doing. Before we had any users or anything: I hate what you're doing.
Starting point is 01:08:05 If I were still running my company, I would write code to break it and open source it so everybody could have it. Wow. I'm like, OK, I guess you're not going to invest in this. That's a pretty hard pass, email. Most people say, not a fit at this time. Correct.
Starting point is 01:08:17 This VC's like, I would open source what you're doing because I'm so frustrated. So he never did that, and nobody ever did that. And we got better at demonstrating the value. And it does make sense that if you're at the last second, you're like, ah, I'm deep in the checkout. They just hit me with taxes and shipping. But then Honey came in and gave me a little bit,
Starting point is 01:08:39 you know, 5%, 10% off. Maybe I'm more likely to click. And you could imagine that math working out in certain scenarios and maybe in aggregate. Yeah, in aggregate, people have found that it works for them. For any particular consumer journey, I'm sure you can say, oh, I was going to buy this one anyway.
Starting point is 01:08:54 But there's other times when you might have gotten the hesitation of, hey, maybe I should wait for a sale. And by getting the confidence that you have a deal and you can check out now, in aggregate, it makes enough sense that the commission rates that are paid for this are not high. So it's not like the marketing spend on Google where you're spending well into double digits percentage of revenue on acquisition or what you're spending on Facebook.
Starting point is 01:09:23 We're talking like really small numbers. And part of that, the reason the rates are low, is actually because of the place in that journey. So it's an incremental lift for a few percent. We never tried to sell it as, hey, you should give us 25% like you're paying Google. Right, a 4x return on ad spend is like a 25% commission rate.
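As a rough sketch of how a retailer could test that incremental-lift claim for themselves, here is a simple holdout comparison: withhold the coupon widget from a share of carts and compare checkout rates. The function and the sample numbers are hypothetical, invented for illustration, not an actual Honey or retailer methodology.

```python
# Hypothetical holdout test: does showing the coupon/cashback prompt at checkout
# raise conversion enough to justify the commission? Numbers are made up.

def incremental_lift(control_conversions, control_carts,
                     treatment_conversions, treatment_carts):
    """Relative lift of the treatment group's conversion rate over control."""
    control_rate = control_conversions / control_carts
    treatment_rate = treatment_conversions / treatment_carts
    return (treatment_rate - control_rate) / control_rate

# Example: 10,000 carts held out (no widget), 10,000 shown the widget.
lift = incremental_lift(control_conversions=2_000, control_carts=10_000,
                        treatment_conversions=2_160, treatment_carts=10_000)
print(f"Incremental conversion lift: {lift:.1%}")  # 8.0% in this made-up example
```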
Starting point is 01:09:48 That's normally like I got to like type in all this junk I got to do all this and then honey shows up and you're like, yeah I feel like I just won the lottery like maybe I will check out so I could I could totally understand this But before we were doing this I mean keep in mind the customer journey is like open the Google tab and go search for a coupon somewhere else And you're gonna find one from a competitor you're gonna find a bunch that don't work you're going to get distracted because your kid wants a sandwich like there's a whole bunch of reasons that you end up abandoning that cart that you appeared to have high intent to check out on and shopping cart abandonment especially when we
Starting point is 01:10:20 started is like the number one pain point for retailers like Why are 70, 80% of my people leaving? And what can I do to help close that? And so I think Honey and tools like it, we were not the only ones that did this. A bunch of other companies replicated the model and have had good success with that. And for the same reason. Yeah, can you talk about that?
Starting point is 01:10:41 I remember at post exit, there was just this rush of companies that were like, okay, you can Chrome plugins are now venture scale businesses. And like, I just remember it was like, literally constant. There was so many iterative company who knew really just exposes like, but I don't want to go technology, but also a potentially a great business, which is fascinating. But here's the issue. Outside of cryptocurrency wallets,
Starting point is 01:11:10 we haven't seen big outcomes in that sort of Chrome plug-in space. And you could argue those are mobile products. Yeah. I have my own explanation or thinking on it. So what George and I saw early was that while everybody else is off building mobile apps and every investor was asking,
Starting point is 01:11:29 hey, what's your mobile strategy? Chrome extensions or browser extensions generally have an insane retention characteristic that they completely solve for all of the problems that people are having on mobile. How do you not just get somebody to download your app, but to remember to use it in the context that you're providing value. And for mobile apps, there's only a certain set of contexts where you have that natural habit
Starting point is 01:11:54 loop trigger. And people wrote books about this: how do you get a consumer to build that habit? You have to effectively teach them that this is a new behavior. And on mobile, the apps that are able to do that are effectively messaging apps that give you a real trigger to come back into the app. You're engaging with somebody else. And so outside of that, you never had, like, a mass-market consumer product even for shopping. It's a pretty big pain point, but the number of successful mobile apps going after even some of the biggest categories is challenging. On the browser extension, we get all of that engagement and retention and habit out of the box. You
Starting point is 01:12:38 install this one time, completely out of context. You are not shopping now. You might not shop for weeks or months. And the exact moment that we can deliver value, we could detect that context and present the option to do that. And one of the reasons Honey in the early days grew at a steady but slow rate is that the cycle time on that growth loop was measured literally in months. So the time from somebody installs Honey to they get a coupon that works, which is the aha, this-thing-actually-works moment, was literally weeks to months. And because of that, that's the moment when people would share and tell people about it, so we know this works. So with the extension world, you have that. So how come other people haven't been able to do this?
Starting point is 01:13:13 but, uh, weeks to months, like in, because of that defeat, then that's the moment when people would share and tell people about it. So we know this works. Um, so with the extension world, you have that. So how come other people haven't been able to do this? Well, there is actually a challenging policy in the extension framework that remains to this day. And it's a single purpose arbitrary policy that Google has for the Chrome Store that
Starting point is 01:13:42 extensions are only allowed to do one thing. And so extensions have effectively been pushed to only do a feature. And it becomes increasingly challenging to attach a business model to that feature. And so to me, that's the disconnect, that they've strategically divided the utility from the monetization side of it. And there's historically good reasons why you might want to do that.
Starting point is 01:14:08 In some cases, people were abusing it, and there was abusive behavior before this policy came in. But I think it really slammed the door on a lot of other interesting models that people would have for a browser extension. And basically, to this day, you have Honey and shopping tools, because we were able to attach the monetization that worked to do large-scale consumer growth acquisition. You have Grammarly, who has a business model
Starting point is 01:14:37 that works for it. And you have free ad blockers as basically the entire world. Can you talk now about some of the allegations that were made in that video, what you found so frustrating about it, and the process that you went through to debunk some of those claims.
Starting point is 01:14:53 And also I'd love to know, I mean, you've moved on, it's not really your baby anymore, but clearly it got under your skin and you were like, I gotta set the record straight. So just walk me through the claims in the video and then what you think they got wrong. Yeah. I mean, so obviously the video was a complete shock. It's the last thing I ever expected to hear. And you're like, we've saved people so much money; it's everything about it. I mean, we saved tens of millions of users billions of dollars, we helped fund a lot of content creation, and the partnerships that we did with great influencers.
Starting point is 01:15:37 And to see that twisted was, to start with, unexpected. And my initial reaction, we'll walk through the stages of it. It's like, once it's your baby, it's always your baby. I'd say we felt that, I felt that personally, talking to the hundreds of great people that built Honey. It was devastating to have the thing that you put all this energy into dragged through a public forum in a way that you know isn't accurate. And so that's the initial reaction: a little bit of grief, and not feeling
Starting point is 01:16:19 like you could do anything about it because almost everybody I'm talking about like moved on from the PayPal organization doing other things. And so didn't really feel it was my place to step in and say anything. That's like it's I sold the company to PayPal, they own it. They should be able to make the decisions that they want for this business. I shouldn't be meddling in their business decisions. And so it's like accepting the reality of this is not mine anymore. And so then I saw, like, just keep in mind the timing on this, this drops like a Friday before like Christmas holiday week. A big company like PayPal is unlikely to have all
Starting point is 01:17:07 of their resources aligned to respond to that at the same speed as a viral YouTube video spreading through a holiday season, where the initial video got 17 million plus views. The other videos about the same issue got even more than that in aggregate. And so it spread like wildfire. It rapidly led to several class action lawsuits being filed against PayPal and against other companies that do the same model. And because of that, the potential response from PayPal rationally became: it's run by legal.
Starting point is 01:17:50 Yeah, I'm assuming here. Just to be clear, I have had no conversations with PayPal about anything, including whether or not I was going to say anything recently. And I suspect, a founder going direct, I suspect that any comms or legal person would have heavily advised against doing what I did. But so, going through these phases, I expected it to kind of, maybe it'll just blow over, like it'll be a big storm
Starting point is 01:18:22 and then like everybody can move on but it's kind of continued to linger and articles about it. And then Google updated their Chrome Store policy sparked a wave of articles at the beginning of March. And people, again, from my point of view, like I have a lot of knowledge about this particular industry and extensions, like the changes that they made to the policy will have no effect on anybody because they're already in compliance with what Google required. And despite that, nobody talking about it in the press knew that. And so you have this consumer narrative that's going like crazy on Reddit and other places. What about the specific claim of like rewriting cookies and affiliate codes where, you know, it's very clear that someone found out about a product from a
Starting point is 01:19:11 specific influencer, the customer wants to use that code, and then Honey somehow gets in between that transaction. That felt like the claim that really took hold. And I think you addressed that in your thread. Yeah. So, the thing never mentioned by anybody in any of the follow-up on it, not a single journalist or other YouTuber knew about it, and partly it's because it's industry nuance, is that there are stand-down policies at almost all of the networks, where downloadable
Starting point is 01:19:42 software tools, toolbars, browser extensions, are required by the affiliate network, who's mediating all these multi-party exchanges, to stand down to traffic like that. So a website, like the coupon code websites, doesn't have the ability to tell if you've already clicked on some influencer or creator's link. In the industry, tagging is the terminology for assigning the cookies. It's basically, you're following a link, and then the network's tools set all the cookies. And so for somebody like RetailMeNot, which you mentioned before, if you go to the site, you click on the link, the link updates all the cookies.
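A conceptual sketch of the "tagging" and "stand-down" behavior being described: attribution follows the last affiliate click the network records, and a well-behaved extension is supposed to hold back if a recent non-extension click already exists. The data shape and the 24-hour window below are invented purely to illustrate the logic; this is not how any real affiliate network or Honey actually implements it.

```python
# Hypothetical illustration of last-click "tagging" plus an extension stand-down check.
# The click-log structure and the attribution window are assumptions for the example.

from datetime import datetime, timedelta

STAND_DOWN_WINDOW = timedelta(hours=24)  # assumed attribution window

def should_extension_fire(click_log, now):
    """Return False if a recent non-extension affiliate click should keep attribution."""
    for click in click_log:
        recent = now - click["time"] <= STAND_DOWN_WINDOW
        if recent and click["source"] != "extension":
            return False  # stand down: a creator/publisher link was clicked recently
    return True

click_log = [
    {"source": "creator_link", "time": datetime(2025, 4, 4, 10, 0)},
]
now = datetime(2025, 4, 4, 12, 0)
print(should_extension_fire(click_log, now))  # False: the creator keeps attribution
```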
Starting point is 01:20:29 It's just following the link in an uninsur unintrusive way for the user So it's not like you're not going in there and modifying the cookie value as the browser extension You are clicking a link in the affiliate network, which is how they handle attribution and so the overriding case that was which is how they handle attribution. And so the overriding case that was described and purported to be extremely widespread, I think is actually very, very narrow. And only happens in, I guess, two cases.
Starting point is 01:20:55 One, the honey stand-down detection potentially isn't working in some cases. This is not always alleged, but as a person who knows this business, it's possible there are cases where it struggles to detect that a link was clicked. And the particular example that he used on Newegg, I'm pretty sure this is what happened
Starting point is 01:21:17 because Newegg is using a multi-touch attribution system, an any-click system from a company called Howell. And effectively, Newegg uses it so that they can pay both the creator upstream and Honey. But I suspect that Honey wasn't standing down because it didn't recognize this Howell link, which is different than the normal Rakuten affiliate network link. Got it. Again, this is me speculating. I have no information on this, but it felt to me like a carefully selected example. And then a few weeks ago, I took another look at the video and I wanted to understand
Starting point is 01:22:01 how did this happen? Like, what is he saying? And I started pausing the video on the various pieces that he was showing on screen as, hey, Honey's doing all this devious behavior. And almost every version of that was essentially falsified. Like, he'd say one thing and then the screen would show something completely different and not incriminating. It's like, he says there's a secret 20% off coupon code somewhere else that Honey is colluding with the retailer to not give you, they're only going to give you the 5%, and then you watch the video rapidly going on screen, and it's Honey giving you a 10% coupon and 5% cash back. I was like, well, you can't just make shit up. Or I guess you can make stuff up, this
Starting point is 01:22:53 is the internet, welcome to the web. Yeah. So I got a little frustrated, I'm sure. And I had seen that he covered up the coupon; I was trying to figure out what happened, and it's like he blocked out the coupon code box that he was using to demonstrate that Honey was hiding coupons. I'm like, why are you hiding the coupon that you're using when you're showing, like, when that's the claim?
Starting point is 01:23:15 And right there on the screen, there's like from that same retailer, if you sign up for my email list, you get a 30% coupon. And then the next screen, he's like, here's a 30% coupon they didn't tell you about. I'm like, well, they didn't tell you about it because it's a one-time use code that's just for you
Starting point is 01:23:30 that you get for signing up for that email right there. Like, Honey is not saying that they're getting you that coupon. Yeah, okay. And so there's a lot of misrepresentation, but then the data is just not there to support the claims. And I first decided to let the people at Reddit know and do an AMA there. A very, very lengthy post, and I'll grant it's way too long. Some
Starting point is 01:23:54 good questions there, but nobody's going to read this, and I decided I would at least give some of the visual evidence in a Twitter thread. And it makes it a little bit more clear when you see it, like the screenshots. And it's not just me. Some of the stuff I said before about stand-down, that's industry-specific knowledge, and this is what I thought I would have to talk about to defend Honey originally.
Starting point is 01:24:20 But the stuff I talk about in the Twitter thread and on Reddit? You can see it for yourself. You don't have to know anything about this business. Just go hit pause and look at the screen, and see if the evidence is being manipulated to create a narrative. And I think that it was. And I don't know if my version will get anywhere near 17 million views; I don't have that kind of platform. I've never really been a content creator or trying to get anything out there. I've kind of stayed under the radar and built a cool company with some great
Starting point is 01:24:58 people. But yeah, so it, it's out there now, but we're going to, we're going to try to get this to 17 million. We'll hand out pieces of paper in the street. Yes, yes. If we have to. I do have a question for you. I'm sure you're gonna have
Starting point is 01:25:12 a kind of interesting, nuanced point of view on this. What's up with the VPN market? Because when I think of, like, shady companies that do heavy YouTube influencer marketing, I think of VPNs, right? So it's like, we probably should have a video with 17 million views of, like, you know,
Starting point is 01:25:34 these VPN companies that are run by, you know, sort of unknown shadowy internet, you know, companies. But I'm curious if you have any type of read on that market. I actually don't know that market that well. I do have a sense for why it's so prevalent on YouTube though. The reason for it is one, it's pretty high recurring revenue, high LTV products, so you can afford to do marketing spend for a product like that. The second piece is the YouTube audience is like two-third international and so it's an efficient way to reach a broad
Starting point is 01:26:10 international audience who is in need of a VPN to access content outside of their home country. And so I think that's why you see it on YouTube as a channel in particular, because the economics work, is basically it. And once somebody figured that out, then everybody else copies the marketing channels. So I've always said MrBeast should launch a VPN, because he has a super, super broad audience that goes everywhere. And if he was able to capture all of that,
Starting point is 01:26:38 gross margin, net margin for himself, he'd be printing money, but maybe misaligned with the brand, who knows? Just a crazy idea I had. Yeah, it's a great idea. I mean, it would work from a, the business part of it would work, probably work better than Beast Burger and some of the other stuff he's tried. But he definitely reached a scale where he can't do normal advertising. Yeah, because if the product's not available globally, it's going to be, he's under-monetizing
Starting point is 01:27:02 his content at that point. Yeah, it's that and the scale of the ad buy now is like, it's a Super Bowl every time he does anything. It's crazy. So there's like 10 companies that can even possibly afford to do that. So he's been driven to create new brands and it's kind of fun to see what he's been able to do
Starting point is 01:27:20 with that. It's been super fun, and it's been fun having you. Yeah, before you go, could you give us, you know, a minute? Give us a coupon code. No, no, at least a minute on Pie. What's your favorite coupon code? At least a minute on Pie and what you're working on now. Yeah, so Pie is a new company I'm working on.
Starting point is 01:27:37 We have an ad blocker that is designed to give consumers control over their advertising experience. We don't think the economics of the internet work if everybody has an ad blocker that blocks everything all the time, and we don't think it's reasonable that consumers should have to tolerate the ad load; the incentives are misaligned there. So effectively, Pie is building a way for consumers to have granular control over what type of advertising they are comfortable with, and we are building ways for them to participate economically in that value exchange. So it's very cool.
Starting point is 01:28:13 Yeah, that's very cool. And you have two million users? We have two million users. And, no wait, so let me guess: nobody's heard of you because you haven't hired PR, you haven't raised venture capital, you don't have any VCs doing threads. Because, you know, we always find it funny when we find a company with a massive user base that we haven't heard of. And you know, we've heard of you and Honey,
Starting point is 01:28:39 but candidly I hadn't heard of Pi until we were prepping for the show. Yep, that's not surprising most of our acquisition has been through Regular YouTube ads of all places. So it's a it's interesting to be an ad blocker that's doing advertising. Yeah We'll see how long we continued doing that but to get to to get to a reasonable scale where We have a large enough audience where people are interested in what we're doing. It was important. So we've invested in that growth, personally invested in the company, and we'll see if we can make it work.
Starting point is 01:29:19 But if not, it's a lot of fun building with a lot of cool people again. That's amazing. Love it. Well, thanks so much for coming by. Always welcome with a lot of cool people again. That's amazing. Love it. Well, thanks so much for coming by. Always welcome on the show. Yeah. Cool. Thanks, guys.
Starting point is 01:29:31 Yeah, we'd love to talk to you soon. All right. Cheers. Great. Yeah, what an interesting... It's back with another Chrome extension, another two million users. Chill. Oh, yeah.
Starting point is 01:29:41 By the way, we have two million users. Fantastic. Well, we're pivoting back to AI. We have the host of the Latent Space podcast. I first found this fellow because he put out a fantastic interview with George Hotz, one of my favorite people in the world really. And we're excited to talk to him. I want to get his take on AI 2027 and a whole lot more.
Starting point is 01:30:03 But also I think he has a little bit of contrarian take about whether or not you should traffic in these forms of, oh, AGI is two years out. So I wanna hear from him. Welcome to the studio. How you doing? Morning, hi, doing great. Do you wanna give a brief introduction in your,
Starting point is 01:30:23 I know you as the host of the Latent Space pod, but in your bio you have a number of different affiliations. How should the viewers think about you? Someone with basically ADD and too many projects. But yeah, I'm Sean, or swyx. I started the Latent Space podcast to cover the AI engineer field, which I helped to popularize. And I also run the AI Engineer conference, which is happening in a couple of months. Primarily, I think the one in New York that I did helped to kickstart a lot of the recent MCP hype that you might have been seeing.
Starting point is 01:31:02 Oh, yeah, I'd love to get into that. We can get into that. And I think, yeah, what you were referencing just as you came in was, you know, the news of the day, right? Like, AI 2027. I think it's very important to at least try to go through the thought process of what might happen. But this is essentially fan fiction, right? Like, we don't know exactly what will happen. And you know, we've been getting various forms of
Starting point is 01:31:27 AGI is two years away for quite a number of years now. And I think part of why I started the AI engineering sort of trend is to try to get people towards building more things instead of just speculating on what the big powers that be will do. Because you have a lot more agency in your life to do things if you just focus on what we have today. Granted, AGI is very big and I do take that seriously. Yeah, of course.
Starting point is 01:31:54 Well, let's go through some of the things that you do think are worth building. I wanna get into agents and we've been asking a lot of people, when can this thing book a flight for me? But maybe we should start with MCP. Can you just give us an overview? I saw all the viral posts, um, a lot of debate over is this just an API or is this something more important?
Starting point is 01:32:13 All the leading labs are putting out papers and implementations of it. Can you give us your kind of MCP explain-like-I'm-five for dummies, and then maybe take me through a few of the implications for the market and the different AI labs. Yeah, I'm pretty sure you already had like a couple of explanations, so I don't know if I'll be adding a lot here. I'll just give you my version and then we can riff on that. So MCP is a protocol for a lot of different integrations into agents. And I think that there have been many attempts,
Starting point is 01:32:46 a lot of different configurations of, I will include the framework, I'll start from the gateway, I'll start from the integration side, whatever. But this is the first one that was very seriously put out by Big Lab. We actually were the first podcast that the MCP creators did an interview on, and that's, we just released that yesterday.
Starting point is 01:33:06 And it's shocking how impactful this is for this, effectively the side project of two guys. And this is, and they work at Anthropic, is that correct? They work at Anthropic. Yeah, over in the London office. Yeah, you can check out latent.space slash MCP, you'll see it. Cool.
Starting point is 01:33:23 And yeah, so I think effectively, maybe for the normies: whenever you see these '@' symbols, like when you're in the chat and you want to add something, you want to include a tool, you want to include any kind of agent or any kind of subsystem, like a Notion or Zapier or what have you. For developers, we will use things like Sentry; Microsoft Copilot and
Starting point is 01:33:46 GitHub just put one out today. And I think Google announced something. I don't know what they announced, but they announced something. But basically the entire industry- Always the case with Google. They put out something, but I can't find it. I mean, there's just a lot, right? Like today, they just announced 2.5 Pro pricing. So my viral tweet of the day, I always have one a day, is that they now completely own the entire Pareto frontier of all labs. They have the smartest and the cheapest and
Starting point is 01:34:12 the most effective Pareto frontier in between. Very cool. Which is incredible. Anyway, coming back to MCP. This is what we call, in the industry, the M times N problem. Meaning, I have this one app, I write integrations for this one app. But then when I move to a different app,
Starting point is 01:34:31 I have to rewrite integrations again. Obviously, that's annoying. Obviously, we want to share integrations. Obviously, the open source community can improve things better together instead of individually, and we should stop competing on these things and start competing on other, better things. And it just took a player like Anthropic to put this forward,
Starting point is 01:34:49 and this is the one that got enough momentum. So practically, for the normies... sorry, I don't know if it's derogatory to say normies, but people who are not in this mix every single day. Practically, what it's going to be is that you're going to be much more easily able to add integrations to basically anything, in any agent. And that will, you know, at least help you get use out of them faster, just because individual companies are not writing their own integrations anymore. Basically, everyone's onboarded to this ecosystem. Yeah, I think normies can be a pejorative, it's used as a blanket term, but I think everyone, you might be a normie in defense technology. A defense founder might be a normie in AI technology, and that's fine. And that's why we have a variety of guests on the show.
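The M times N point is just combinatorics: if every app writes a bespoke connector for every tool, the work scales as M x N; if every app speaks one shared protocol and every tool exposes one server, it scales as M + N. A toy count, with made-up numbers, just to illustrate the shape of the argument:

```python
# Toy illustration of why a shared protocol collapses integration work.
# 20 agent apps and 50 tools are arbitrary example numbers.

apps, tools = 20, 50

bespoke_integrations = apps * tools   # every app writes its own connector per tool
shared_protocol_work = apps + tools   # one client per app plus one server per tool

print(f"Bespoke connectors to maintain: {bespoke_integrations}")   # 1000
print(f"With a shared protocol:         {shared_protocol_work}")   # 70
```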
Starting point is 01:35:35 With regard to MCP, the most basic question is like, how is this different from an API? Zapier already exists as a binder between APIs. And also, we're hearing coding agents getting better and better. The Devons, the cursors, why is it so hard? It feels like AI is incredible at writing code. Why can't I just use AI to say, hey, I want to interface with this API. Just go figure it out every time you compile the code. If the API changes, figure it out and reverse engineer the API. There's docs out there. Here's kind of the general idea of what I want. Go figure it out every single time at runtime.
Starting point is 01:36:11 Yeah. I mean, the second question is easier to answer than the first. Mainly because it's probably easy to figure out common implementations, but then you won't tackle the edge cases or the bugs that happen. And obviously it's also very inefficient to keep coming up with it, a new implementation and integration every single time. So when possible, actually it's better to not use AI if you have the option to not use AI. Like AI is meant to be a plug for things that don't exist.
Starting point is 01:36:38 But if you do have the integration that's written and battle tested by like, you know, thousands of people before you, why would you not choose that? Only if your needs are not being met by that integration, then go ahead and write your own. And the cost of writing your own has come down a lot, which is also very interesting from the open source ecosystem point of view, where people are now vendoring a lot of their libraries that they would typically import without thinking, just because they're like, oh yeah, someone's written this for me
Starting point is 01:37:05 and it costs too much. Anyway, so then let's come back to the first question of how this compares to an API. And that's why developers hate MCP. They're like, this doesn't do anything over OpenAPI. And we asked the authors that question point blank on the show. And basically, it reifies concepts that would be
Starting point is 01:37:29 undifferentiated as far as the normal API spec goes. For example, there are concepts like resources, prompts, tools, sampling, roots, and transports in MCP. Those are basically special subclasses that are treated differently in the MCP environment, depending on whether they're controlled by the model versus the application versus the user.
Starting point is 01:37:50 So the permissions and the intended roles of each feature are very, very distinctly parceled out. So I would say it's kind of like a layer over APIs that is more AI-native, and Anthropic is at least saying that this makes it much easier to use and you're gonna build much more effective agents if you do this. Got it.
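For a sense of what those reified concepts look like in code, here is a minimal server sketch in the style of the MCP Python SDK's FastMCP helper, as I recall it from the SDK's quickstart; treat the exact imports and decorators as assumptions and check the current SDK docs before relying on them. The "demo-weather" name, the get_forecast tool, and the notes resource are all invented for the example.

```python
# Minimal MCP server sketch: one tool and one resource exposed over the protocol.
# Assumes the official MCP Python SDK (the "mcp" package); verify names against its docs.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-weather")  # hypothetical server name

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a canned forecast for a city (stand-in for a real API call)."""
    return f"Forecast for {city}: sunny, 72F"

@mcp.resource("notes://today")
def todays_notes() -> str:
    """A read-only resource the client application can attach as context."""
    return "Remember: ship the integration once, reuse it everywhere."

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default so an MCP client can connect
```

A client (a chat app, an IDE agent, and so on) would then list this server in its MCP configuration, and the tool and resource show up alongside any other integrations it already has.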
Starting point is 01:38:11 Do you feel like MCP will massively accelerate the creation of agents that become a very active part of our lives on the internet?
Starting point is 01:38:39 that I've sort of tried to use, right? So it's like everybody wants to do picks and shovels, but we actually just need, you know, we need more sort of shots on goal at, you know, potentially the harder problem, which is just reliable agents. Yeah, I fully agree with that, actually. So it's actually very depressing. Like the AI agent infrastructure company is what you do as a developer if you have no other ideas. So that I mean, YC should take that as a warning sign that their people are not going after the right path. MCP makes integrations easier, period, right?
Starting point is 01:39:10 What you do with those integrations, what problems they solve, that is still an open question. But yes, for sure, the ecosystem is about to get a lot stronger because we're now no longer rewriting things. We're converting M times N combinatorial explosion problems into M plus N, where you could just write every integration once, hopefully. I mean, maybe twice. But yeah, I actually, I like the opportunity to address this
Starting point is 01:39:32 because I think people think of me, because I wrote the article on why MCP won, and everyone's like, you're just a shill for MCP. I like the opportunity to just say, no, this is very good. It's a protocol. Like, did you get excited when REST was invented? No, you got excited by all the applications that REST enabled. Those are going to come down the line.
Starting point is 01:39:53 But I think the normal consumer should just be really happy that these high-quality integrations are now going to come a lot more out of the box than you waiting like five months for your favorite app to write the integration for your favorite thing. So your agents are going to be able to do a lot more things. But I think the top agents, like the Sierras of the world, they still want to own the end-to-end experience. And they will use MCP, but they don't necessarily, they're not really like depending on it.
Starting point is 01:40:23 It's not like life or death for them. It's still on you to build an agent or a product, a solution, whatever, that solves a problem your customers need, and that doesn't go away. Yeah, what's your thesis on, do you see a Cambrian explosion of consumer agents, B2B agents sort of built on, in part on MCP and comparing this to something like FinTech, right?
Starting point is 01:40:49 We had the CEO of Plaid on yesterday, right? Like everybody in hindsight should have worked on Plaid. Super valuable, it's a $6 billion company right now. It powers a lot of stuff within finance. Same as Stripe. But Stripe, but you can't name that many other generational companies in fintech infrastructure. Sure, there's a bunch, but no household names. And I can see the same thing playing out in the agent space where maybe there's almost
Starting point is 01:41:19 like a Stripe equivalent in the agent space or a Plaid. But then the best thing you could have done if you didn't build Plaid was to build on top of Plaid and build these novel product experiences. There are at least five or six startups that are trying. I actually live with one of them that's called Smithery. That's decent, but all these are super early. The main challenge that they're all going to face is that Anthropic is coming up with their own registry.
Starting point is 01:41:46 So Anthropic wants to own this. So I don't know if there's a separate Plaid of MCP that comes out of doing this. So that will be my two cents there. Wouldn't one argument for Smithery, and I'm not familiar with the company, be that other companies, if they could ever see themselves competing with Anthropic in any way,
Starting point is 01:42:09 they wouldn't wanna be reliant on Anthropic's registry. So maybe a new third party that is kind of a pure play infrastructure player should exist. Totally possible, but it's on the burden of proof is on them. By default, whatever is the big lab official solution always wins,
Starting point is 01:42:28 which is pretty brutal in AI terms, but the world is not meant to be fair. I think one thing I should mention that I neglected to, that I do think is very bullish and will result in a lot more capabilities, is that MCP servers can also be clients. And that's kind of a technical thing, but that is effectively what is going to enable servers to then become agents and orchestrate
Starting point is 01:42:53 fleets of other MCP agents on your behalf, without you knowing about it. And so that actually turns these things into much more agentic networks. And we're probably gonna see that. I would say this is too early right now, because MCP is like four months old. Probably, like, the end of the year, early next year, I would say
Starting point is 01:43:14 this will start to come up because they built that in from the start. Is there a world where MCP just gets steamrolled by the next generation of models? We kind of saw that with some of the workarounds to context windows, and then Gemini comes out with a million token context window, and a lot of people are saying, oh, well, like all those workarounds are kind of irrelevant at this point.
Starting point is 01:43:33 How do you think about the durability of MCP over the near term? MCP actually improves with more context window. So they're not at odds. There's a very, very long standing debate on context versus rag. But this is slightly different. MCP just like is the ecosystem integrations
Starting point is 01:43:49 that models really don't have. That's it. I would say that if there were any challenges to MCP, it will come from Google. Because Google has native integrations to Gmail, Calendar, YouTube, what have you. They already launched it and it's already first party in there.
Starting point is 01:44:05 And now MCP came along and threw some noise into their nest. Well, I mean, Google kind of lost the front end wars with Angular versus React. So there is precedent for another tech company coming up and resetting the standard, right? Sure. And yeah, I would say that I don't think they should spend time here anyway.
Starting point is 01:44:29 So yeah. Yeah, yeah, yeah, I agree. Maybe we need a Polymarket on it or something. Let's switch gears and talk about AI 2027. Yeah, yeah, yeah. I'd love to just let you rant maybe for the next 10 minutes. Yeah, yeah, yeah, yeah. But initial.
Starting point is 01:44:42 So I think these guys have a really good track record. I don't know actually if you've interviewed them already. No, not yet. I'm sure they'll come on. They seem to be doing a podcast tour. And I think there's some public service in doing the math
Starting point is 01:45:01 and drawing lines. That's what situational awareness was. And I think because we live day to day, we don't really see the year to year as clearly, just because we don't spend time on it. And if you just draw lines and you believe that what has happened in the past, the near past is probably going to happen in the near future, you can probably see at least some kind of trend line. The main caution with all these things is that S curves do exist.
Starting point is 01:45:30 If I'll just rewind your mind back to, let's say April, May, 2020, when every chart on COVID was going like this, and there would be more COVID cases than humans by the end of 2021 or whatever. And that never happened because, one, we reacted to it, or we hit invisible asymptotes, as Eugene Wei calls it, where like you didn't account for this
Starting point is 01:45:53 because we weren't there yet. Like there was just sort of this limiting factor that you didn't run into. But this is relatively near term, it's basically in the next 18 months. And I think like the main geopolitical aspects is kind of interesting. We have no idea what China will do really.
Starting point is 01:46:10 Like the main meme on Twitter, which I really like, from Teortaxes, is that Xi Jinping just does nothing and then the USA just implodes. I don't know. I don't know. The funniest outcome is the most likely, right? Maybe that's it.
Starting point is 01:46:26 Who knows? Who knows? I do think that there are things where it's really just down to the individual decisions of powerful political figures that we really cannot tell. And then there are things that are basically extrapolations of scaling laws that you can tell, because they're just scaling laws that have already been established, and you're betting on the end of them, which is less likely to happen. So I think that the coding agents commentary is really good. And I think that it starts to talk about hacking and robotics. I think it's all very, very on balance.
Starting point is 01:47:02 I think where it starts to get into a little bit more gray areas like the political and bioweapon stuff. Yeah, I love the kind of reference to COVID. Everyone thought it was an exponential. It was actually a sigmoid function. Everything's a sigmoid. It's a series of sigmoids. Pretty much, right? And it certainly feels like we're experiencing a sigmoid curve with pre-training and we're seeing pre-training diminishing returns. It feels like there's still some juice in RL. Are there any next steps that you're excited about? We've heard that program synthesis is kind of a hot area
Starting point is 01:47:35 that people are investigating right now. What are you excited about? What do you think is potentially overrated or underrated? I actually think people, there's a lot of alpha in splitting out what you call program synthesis, what I call code generation. We've effectively gone from single line autocomplete to functions autocomplete.
Starting point is 01:47:54 And now with like Windsurf and Claude Code and Copilot and Cursor, we're generating basically entire apps and PRs for those apps. And I think being really clear minded about where those capabilities are and improving them incrementally, you get things like Cursor, which went zero to 200 million ARR in two years, which is crazy. Yeah, it's remarkable.
Starting point is 01:48:17 Yeah, so I think a lot of alpha there. But I think really what people are looking at now is the general measure of agent trajectories. How long can an agent on a broad number of tasks autonomously operate, right? So METR, I forget the acronym, it's one of these like research institutes, METR put out a study of the agentic work that can be done, you know, across a wide range of benchmarks, and ran it all the way back to 2019. And they basically came up with the idea that the agent's task horizon, at the 50th percentile of human capability, is doubling every three to seven months. And we're now at an hour.
Starting point is 01:49:05 So roughly you can leave the smartest model that we have, which they measured as Claude 3.7 Sonnet, to run autonomously for an hour and do like what a 50th percentile human can do. And obviously bump things up or down in terms of percentile, but the results and the scaling laws don't vary, because the important thing is that they're doubling every three to seven months. And that means you can roughly scale out when are we going to have one day autonomy?
Starting point is 01:49:34 When are we going to have a week, a month, a year? And that I think you can kind of set your clocks by and try to think about what products or companies you would build. It would be so funny if it was an S curve and it maxes out at exactly eight hours, like a human, like you can only give it one task every single day and it just goes and does that and it's great. But, uh, yeah, just, I don't want to work overnight. Look, I'm maxed out, and it's at that for like two decades or something.
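(As an aside, a back-of-envelope version of the doubling math discussed just above, as a sketch. The one-hour starting point and the three-to-seven-month doubling range are the figures quoted in the conversation, not METR's exact published numbers.)

```python
# Rough extrapolation: if the autonomous-task horizon is ~1 hour today and
# doubles every 3 to 7 months, when does it cross a workday, a week, a month?
import math

start_hours = 1.0
targets = {"8-hour workday": 8, "40-hour week": 40, "~170-hour month": 170}

for label, hours in targets.items():
    doublings = math.log2(hours / start_hours)
    fast, slow = doublings * 3, doublings * 7   # months at each doubling rate
    print(f"{label}: {doublings:.1f} doublings, roughly {fast:.0f} to {slow:.0f} months out")
```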
Starting point is 01:49:59 Unlikely, but funny. That'd be an interesting universality, right? Like, so for example, a lot of people building agents have independently discovered sleep. The agents have to sleep because they have to compress these memories and do the deep REM thing where they actually turn those into long-term memory. The computer agents have to do that and humans have to do that. Right? Gnarly.
Starting point is 01:50:21 I'm not kidding. Look it up for those who need it. We need an Eight Sleep for agents. Maybe it's Eight Sleep. Bryan Johnson holding a candle going like. I mean, I was, yeah, yeah, I mean, you're joking about Eight Sleep for agents. I was joking about like, what, like, is there,
Starting point is 01:50:33 will there be like an alcohol for agents or alcohol for LLMs where you just kind of like throw some randomness in the weights and it gets a little bit crazier, but sometimes it's brilliant, you know? And it's like a lot more bold or something. The Ballmer Peak. Yeah, exactly.
Starting point is 01:50:47 But maybe it's not that, maybe it's just like to get true inspiration, you need to just throw some extra chaos. And I guess the term would be like higher temperature in the LLM responses, and then run 10,000 of them and you get some more inspiration instead of all coalescing around the same kind of, you know, 50th percentile human. Like the midwit. Yeah. That's called sort of mode collapse, where you collapse the modal representation.
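(A minimal sketch of the sampling knobs in play here, and of the entropy/varentropy statistics that come up next: temperature reshapes the next-token distribution, and some newer open-source samplers look at the entropy and varentropy of that distribution rather than temperature alone. The logits below are made up; real ones would come from a model's forward pass.)

```python
import numpy as np

def softmax(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    z = logits / temperature
    z = z - z.max()                      # numerical stability
    p = np.exp(z)
    return p / p.sum()

def entropy_and_varentropy(probs: np.ndarray) -> tuple[float, float]:
    surprisal = -np.log(probs + 1e-12)               # per-token surprise
    h = float((probs * surprisal).sum())             # entropy = mean surprisal
    v = float((probs * (surprisal - h) ** 2).sum())  # varentropy = variance of surprisal
    return h, v

logits = np.array([4.0, 3.8, 0.1, -2.0, -2.5])       # hypothetical next-token logits
for t in (0.2, 1.0, 1.5):                            # low temp sharpens, high temp flattens
    h, v = entropy_and_varentropy(softmax(logits, t))
    print(f"temperature={t}: entropy={h:.2f} nats, varentropy={v:.2f}")
```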
Starting point is 01:51:13 But for those interested, we've moved on basically from temperature to varentropy. Also I think most people would be interested in the Anthropic alignment work on tracing the thoughts of LLMs, which came out recently as well. There's some really interesting torture you can put on these LLMs where you ask it to say, like, banana, but then you don't allow it to say banana. And you can see it struggle to find alternatives
Starting point is 01:51:38 to saying banana in that branch. That's so funny. Wow. But for example, with one of the R1 replications, you can also do things like, if you just want it to think more, you can just prevent it from ending its thoughts and insert "Wait," or, you know, "But I thought..." or something, and you can force its direction to think another way and for it to branch out again in terms of its diversity. So I think there's a lot of research here that is super interesting.
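(A toy decode loop showing both interventions just described: masking a banned token, the "don't say banana" trick, and swapping the end-of-thinking marker for "Wait" so the model keeps reasoning. The model here is a random stub and the vocabulary and token ids are made up; this illustrates the control points, not any lab's actual implementation.)

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["banana", "a", "long", "yellow", "fruit", "Wait", "</think>"]
BANANA, WAIT, END_THINK = 0, 5, 6

def fake_model(context: list[int]) -> np.ndarray:
    # Stub standing in for a real forward pass: returns random logits.
    return rng.normal(size=len(VOCAB))

def decode(steps: int, ban_banana: bool, min_think_steps: int) -> list[str]:
    out: list[int] = []
    for step in range(steps):
        logits = fake_model(out)
        if ban_banana:
            logits[BANANA] = -np.inf        # intervention 1: forbid a token outright
        tok = int(np.argmax(logits))
        if tok == END_THINK and step < min_think_steps:
            tok = WAIT                      # intervention 2: force it to keep thinking
        out.append(tok)
    return [VOCAB[t] for t in out]

print(decode(steps=10, ban_banana=True, min_think_steps=8))
```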
Starting point is 01:52:05 I'm not sure any of it is gonna bear fruit, because the big labs are probably like two years ahead of us in the open research world there. Sure, sure. How do you react? Obviously, the dominant story this week in the wider world is the tariffs and trade wars, all that stuff. Do you feel like it's arguing over pennies as an AI steamroller is just about to roll
Starting point is 01:52:34 over the entire economy? Do you even give it any attention? I heard you guys ask the OpenAI guys about this. Basically it's a rounding error, right? But obviously the supply chain really matters for AI. I think the US has benefited a ton from the global trade setup that we've set up for ourselves effectively after World War II.
Starting point is 01:52:57 So blowing that up may not be the best situation if we don't have a more constructive or well-reasoned end state that we want to be in, and I'm not sure we do. The White House hasn't given us a lot of comfort about where we're going. Yeah, my take on it is like, I think AI is probably going to be the main driving force of like serious cultural and economic change over the next few years. And no matter what happens, if it's good, both political parties are gonna take credit.
Starting point is 01:53:30 And if it's bad, both political parties are gonna blame the other political party for whatever happens. And really it's gonna be a bad. Do you worry about the immediate impact on the sort of data center compute supply chain broadly. I know Elon was posting yesterday about how a lot of the conversation will maybe switch from GPU shortage
Starting point is 01:53:52 to transformer shortages. Is that top of mind for you at all? Or are you more focused on the- As in, real power transformers? Yeah, power transformers, not the architecture. So for reference, the vast majority of transformers, the physical electrical transformers, are made abroad. And they're very, very important as we start pulling gigawatts,
Starting point is 01:54:14 moving it around the grid to get into these large data centers. Yeah, I hear you. Well, short term is not my department, so that's an easy cop out. I think whatever happens over the next two years, we really don't know. A lot of these negotiations, these are the start of a negotiation, kind of. It's just the way that Trump does it. Anyway, I don't want to get too much into that. All I will say is a lot of what I try to guide people towards on AI engineering is utilization of
Starting point is 01:54:47 existing capability. So much model capability has been unlocked and it's not even in our lives yet. Like, why do I have a Siri that doesn't understand what I want? It's because Apple hasn't shipped. Hasn't got their shit together. Like, this is not frontier tech. What Elon is dealing with, what Sam Altman's dealing with, what Dario's dealing with is frontier tech. And that requires giant data centers
Starting point is 01:55:11 with all this kind of research. But really what AI engineers deal with is deployment of existing tech into business and personal lives. And I think there's a lot more to do there. And I think, you know, while the big boys figure out your political situation, hopefully we still have enough power to like power,
Starting point is 01:55:29 you know, the deployment of AI to the rest of the world. On the topic of Apple, do you think they can afford to fumble the rollout of Apple intelligence just because of their position as, you know, the core consumer hardware provider? Or do you have a strong take there? Yeah, I mean, so the lock-in of iOS, iMessage, the Apple Cloud, whatever they call it,
Starting point is 01:55:57 is very, very strong, but there's a time limit on this thing. And for one, I actually tweeted about this recently. I'm much more excited for OpenAI to compete with Apple than with Google. OpenAI currently is running the Google playbook. They're rating themselves by, like, oh, ChatGPT is the sixth most-trafficked website in the world. We want to go up. But Google is doing super well.
Starting point is 01:56:20 And actually, Google is pretty good. We want to have a Google in our lives. But Apple is really fumbling, and they know it. And when OpenAI comes out with the OpenAI phone, I think it will be a serious challenge to Apple, because OpenAI's iPhone will just be smarter. You know it. And everyone will try it, and it will be the first serious challenge to the Apple iPhone since you know Steve Jobs presented
Starting point is 01:56:46 That's a great take. No, and people already have like such an extreme willingness to try new AI hardware. Yeah, when, you know, the dominant lab comes out with something. Yeah, I think every single person we know is at least buying it, right? If it's a thousand bucks, sure, I'll take it. You know, I think it's just like the people who are wanting to get serious about hardware, they've just been like the small guys, like the Rabbits and, let's just call it, the Humanes. But like, you get someone with the resources of OpenAI, I really want to see if they take a real run at it. I mean,
Starting point is 01:57:20 they have chats with Jony Ive, they've confirmed that it's in the works. I don't know how serious it is, they could still kill it. But I hope to God that they actually challenge Apple, and Apple will get their shit together in response. I completely agree. Well, yeah, that's a fantastic take. I'm looking forward to trying the OpenAI phone. Well, we'll see it, and there'll probably be an xAI phone, too. I mean, it might be a new dawn in consumer hardware driven by artificial intelligence. But thanks for stopping by. This is fantastic. Well, we'd love to have you back.
Starting point is 01:57:48 This is such a fun conversation. Yeah, can't wait for the next one. Yeah. And everyone go check out the latest podcast he just dropped, Latent Space. Yes. Do it. Cheers.
Starting point is 01:57:57 Talk to you soon. Bye. Fantastic. Next coming in, we got Forerunner Ventures' Kirsten Green, legendary consumer investor. I'm excited for this one. We reached out actually initially after she dropped her 2025 trend report,
Starting point is 01:58:11 which is always fun to process. So I wanna ask her about that and about a bunch of other stuff. Yeah, the 2025 consumer trend report is out. You can get it at fourunnerventures.com. A deep dive into where consumers stand today and how major shifts are shaping new needs and opportunities all across a ton of different sectors.
Starting point is 01:58:37 It's a quick read. It's 200 slides. They really do their homework over there. And here she is to break it down for us. Give us the 30 minute condensed version of the 200 slide deck. Yeah. I mean, me and Jordy, we're basically live three hours a day, not a lot of time for reading. So if we want to consume some- We want to hear it from the source.
Starting point is 01:58:58 We want to hear it from the source. So thanks for joining. We got the quick rundown. Yeah. How are you? How are you doing? I hope you're having a great Friday and going into a great weekend. But good to be here. Can you give a little background on you, your firm and the Trend Report? Yeah, sure. First, thanks so much, you guys, for having me on. This is fun. Of course. Excited. And it's a crazy day with no shortage of action going on.
Starting point is 01:59:23 Oh, yeah. Yeah. Yeah. So, yeah, I'm the founder and managing partner at Forerunner, and we're a venture capital firm. We've been here in Silicon Valley for the last decade plus making investments. Investments across the board. Decade plus of investments. We love investments here. And we largely focus on early stage investing.
Starting point is 01:59:49 We have a diversified portfolio across sectors and spaces, but we do have a thematic approach and a framework that we have applied consistently for the past decade. And that's really tracking what is going on with demand. So where are the tailwinds? And then we look for big technology shifts and business model changes and where business is not addressing that demand. Can you talk a little bit about, I know you guys will do B2B as well, but it feels like you've always had a love for consumers specifically.
Starting point is 02:00:29 And that's gone in and out of popularity, obviously. When you started, I'm sure a lot of people said, well, why don't you focus on SaaS, look at the outcomes that we're seeing. You typically start your reports by reminding people just how much of, I think it's not here on the- The American consumer is undefeated. Yeah, the seventh slide, consumers drive two thirds of US GDP.
Starting point is 02:00:54 So I think that's probably like the anchor reason why it's been such a focus for the firm. But I would love to hear kind of your take on it. Well, Jordy, that's one take. It's true. In the vein of tracking demand, and demand being the driver of all of business, that 2 thirds of the economy and what's
Starting point is 02:01:13 going on with the consumer is arguably the most meaningful force. But really, more than that, yes, it's true that SaaS companies have returned lots of money to their founders and their investors. It's been a steady contributor of opportunity and venture. But if you step back and you look at the last two decades, the biggest companies started with the consumer first approach.
Starting point is 02:01:40 So, I would argue that starting a business with the consumer first approach, if you are able to create that incredible gravitational pull and build a foundation from there, we've seen time and time again, companies prove that they can take that foundation and leverage it into dynamic business models over time. Can you talk a little bit about just the general health of the American consumer? It's been a couple crazy years the last five years. Things were going really well and then we had COVID and everyone was out of a job
Starting point is 02:02:12 and then everyone got their jobs back and then inflation crops up and now people are worried about the tariffs and what that means for the economy. We've been in sort of a kangaroo market, as they say, jumping up and down, not a bull or bear. How is the health of the American consumer right now?
Starting point is 02:02:28 Yeah. Well, that is a big reason why we do this Consumer Trend Report every year. So as venture capitalists at Forerunner, we spend the lion's share of our time thinking about the future and where things are going. But to understand or have a view on that, it is really important to step back and take stock of what's going on today at any given point in time. And so, the exercise of getting out of our own bubble and sort of taking a pulse of the country,
Starting point is 02:02:55 of the globe from the people on the ground, we feel like is illuminating. So you're right in saying that the last five years have been crazy filled with uncertainty. So I think we're all trying to kind of process how we feel about the current ongoing tariff discussion right now. And I think I feel like I did in March 2020. Oh yeah. What's happening? Like uncertainty rules the day and here we are again. So, you know, I guess the good news coming out of that big period of COVID uncertainty is that we have all learned to have an incredible amount of
Starting point is 02:03:31 resilience. We've been forced to do that. So, you know, I think when we check in with the consumer, we see a good deal of that. But we also, you know, definitely see some movement towards people feeling like they need to stand up for their own security and safety, that they can't count on the structures and the norms that maybe have been in place for a very long time. And this is a time period where having more agency over your own life and your own actions is something people are craving. So I think we do 200 page slide deck, there's
Starting point is 02:04:05 a lot of government data in there and a lot of just kind of, you know, taking, taking stock of the actual data. And then we go to interpret some trends and really we focus it, you know, I mean, I think it would say, if we just look at 2024, we describe what we heard from people. And this is a lot of survey work in addition to a lot of reading and a lot of kind of number crunching and processing. But it was kind of a middling year. You know, it kind of just lived in the middle from from all measures, like from all the data metrics. But then also, if you look at like the cultural norms, most of the movies were kind of follow on second generation movies, same with the music, etc. So, you know, the word of the year was like brain rot and something else.
Starting point is 02:04:45 Slop. Slop, yeah. You know, so I think that you have a consumer that's quite fatigued from going through kind of the craziness of what the world has been. But at the same time, in that agency and in that desire to do things, there is like, I'm going to pick myself up, I'm going to take some action, and I'm going to, you know, look for having more control over my career, over my spending, over my healthcare. We went, we took a deep dive and we do this every year, kind of pick a few things that
Starting point is 02:05:16 we think are particularly important in the context of that demand equation, and also in the context of investing, because it is in service of sort of uncovering investment focus. And the couple of areas we looked at this past year were health, security, and of course, gen AI. So on the health front, this is perhaps the biggest trend in my nearly 30 years now of investing, I think the biggest trend that we've seen.
Starting point is 02:05:46 People are, there is a growing willingness to spend on health and themselves and outside of the system. So, you know, 60% of the people describe health as being their number one priority, even above friends and family, the wellness economy. So when, you know, when you think about health, it's not just the I already have a problem, I'm at the doctor because of that problem, it's the effort of proactive health which has really been a huge area of growth. The wellness economy which includes that is a 1.8 trillion dollar economy and it's actually growing six times faster than the economy in general. So you know I think that the dollars are there,
Starting point is 02:06:27 the scope of what people are willing to pay for is evolving. You're seeing people more engaged in supplements, more engaged in therapies, more engaged in their workouts, more engaged in doing things like MRIs or blood testing or biomarker testing. So this is, you know, this is a huge area of, it's a breakdown in the system as well as just more information
Starting point is 02:06:52 and people realizing that they have some ability to shape the future for their own health. Do you feel like you're seeing enough founders take the concept of wellness sort of like seriously in Silicon Valley. Like I take it very seriously. John and I have this funny dynamic where like, John has this joke that like,
Starting point is 02:07:11 if I had a single inorganic blueberry, like I would die. And John meanwhile just like eats whatever's in front of him. I could eat a credit card with a knife and fork. Yeah, exactly. Wow. But I feel like when you just look at the data and understand how big the market is, how fast it's growing, you would think
Starting point is 02:07:28 that 20% of new startups would be targeting this opportunity, yet it doesn't necessarily feel like that. I want to talk about the, there's so much opportunity in health, but not everything is going to be venture scale. Even talking to our friend Justin Mares, he has Kettle and Fire, bone broth company.
Starting point is 02:07:46 It's not really this like hyper growth raising every 18 to 12 months, 12 to 18 months. He's done great with that business. Then TrueMed, it's more of like a fintech platform, has venture written all over it. And so how are you looking at the opportunity in health across the lifestyle business that's just gonna not really raise, grow, grow, grow,
Starting point is 02:08:03 a private equity roll up, maybe buy all the gyms and aggregate them versus a true venture scale. Okay, this can be a public company. How do you assess that and where's the opportunity in health? You know, I mean, obviously, that's a really important point, right? There's a lot more businesses getting started every day that are not appropriate for venture scale. There's a very unique set of traits that you're looking for in venture.
Starting point is 02:08:26 And I think that, you know, beyond addressing like a tailwind and having that opportunity to play into, you know, you really are looking for like, is this a business that gets stronger as it gets bigger? What are the flywheels in the business? What are the frictions to getting it going and starting? How
Starting point is 02:08:45 is the compounding effect a benefit to the business? And does that give them like competitive advantages over time? Obviously, one of the big things we're looking for is like what businesses have a chance to define categories and be real runaway category successes. So that is extraordinarily hard to do if you don't have like a real business model advantage at the root of what you're doing. And a lot of, you know, I think sometimes we have conversations. I often get asked about DTC companies and I kind of cringe a little bit because I think to myself like, you know, we haven't actually ever made an investment in a product company per se.
Starting point is 02:09:24 We've always been looking at a business model shift and a behavior shift. So I think at this point, there's a lot of the business model shift from moving online or moving multidimensional, or things that a lot of those businesses that you described as lifestyle or products are playing into. The early wins in that have played out. What I think is really exciting today, and that maps really well with a lot of the opportunity in health, is the power of gen AI and how that can actually
Starting point is 02:09:51 make experiences personalized more immersive and more productive. Yeah, so on that note, I imagine there's companies like some of the opportunities that you get most excited about are like a company you invested in five years ago, maybe they're focused on therapy. And then now they realize they have this opportunity of, hey, we can make therapy costs like 99% less through things like GenAI and potentially deliver similar results. Right. We saw, I think it was earlier this week, there was a study that came out that showed generative AI based therapy was was, you know, delivering pretty dramatic results. So how excited do you get about, you know, basically companies in the portfolio that, you know, not even just new companies that are pitching you
Starting point is 02:10:33 or you're making investments. Yeah, Sam Lessin called it AI as a cherry on top, as opposed to trying to get into the foundation model or this like magical, oh, we're gonna redo the whole thing with AI. It's more like AI enabled businesses. Actually, yeah, the way we think about it, we sort of describe businesses as AI-led, AI-enabled, or AI-boosted. Basically, every company needs to find a way to be AI boosted. And you can imagine there's like a lot of degrees of that. You can use AI in your HR department and
Starting point is 02:11:02 your finance function, or you can reimagine your product like you just suggested, Jordy, and like, you know, make your therapy product even more efficient. And then the enabled businesses are ones that, at least the way we look at it, is like a product or a service that wasn't really possible without AI, because it wasn't cost possible and you actually couldn't do it. So you would think about like the example of like, if you had a network of therapists and you built them, you know, human kind of one-to-one
Starting point is 02:11:34 and you were making that connection online, now that is a unique case where you can actually add AI into your product and really kind of add a whole new dimension to your business from there. That's probably true for a number of businesses, not most businesses though. Yeah, that makes sense. Can you talk about the Studio Ghibli moment? It was amazing for me because it felt like there
Starting point is 02:12:01 was one thing. It felt like we entered this sort of post-slop era, where last year it was AI-generated content that was cool, but it wasn't beautiful. Six fingers, it looked kind of creepy almost sometimes. But now you can get this sort of hand-drawn Japanese animation of a picture you took a second ago for free, effectively. It's just joyful.
Starting point is 02:12:23 So fun. It's joyful. So fun. Yeah, so fun But the thing I get excited about is the ghibliification of the rest of the economy and other consumer experiences. Just going back to the therapy example, it just feels like this concept of abundance and these incredible products that
Starting point is 02:12:42 can be delivered to consumers for dramatically, dramatically less cost. Do you have any type of thoughts around, it will take some time, but my personal theory is that sort of same effect will happen in a bunch of other categories. I mean, 100%. I think this is a golden age opportunity to make much more incredible products and experiences.
Starting point is 02:13:08 In fact, we're talking about it like we were living in 2D and Gen. AI is the power to take us to 4D and skip right over 3D. By that I mean like we've been talking about, for instance, 20 years now we've been talking about personalization. It really hasn't happened. If you kind of look across, imagine you go to a website, how personalized does it feel?
Starting point is 02:13:29 It's pretty much a similar experience whether I went to the website or you went to the website. So I think personalization has not happened. It is now really possible in a really deep, immersive way with Gen.AI. And you've just got a lot more productivity that can happen. We haven't seen the whole agent thing play out,
Starting point is 02:13:49 but it is definitely on its way. And I think that plus creativity, plus efficiency, like imaginations open, like so many more things are possible. I wanna talk about lessons from the COVID turmoil in relation to the current tariff chaos and turmoil. Um, I was thinking about, we talked to a few founders who were massive beneficiaries of the tariffs,
Starting point is 02:14:14 honestly, because they've, their whole bet was made in America. And so when these tariffs came in, they're like, our business is booming. Um, but as we saw with the shift to work from home, we saw massive booms in Peloton and fitness equipment that was, you know, any e-commerce company did really well. And then there was kind of a, you know, back down to earth moment. What advice would you have for entrepreneurs who are maybe about to go on a generational run on the back of the of the tariffs narrative, how can they build their businesses more effectively
Starting point is 02:14:47 and what are you looking for in terms of durability on the investment side? I don't know that I fully processed like what's gonna happen with these tariffs in terms of the durability of business, but the way I am thinking about it right now is that I don't think anybody, I think we're all in this together
Starting point is 02:15:05 I think every business is all in this together, right? So perhaps you make your products at home, and perhaps the things that you use to make your products you get at home, right? So you're really not gonna have a cost change. Yeah. But everything else around you is changing, including how much money people have to spend and how they're reorienting their dollars. So I do think that everybody needs to think that much more carefully and closely about the value of their product, who needs it and why, and what service are you delivering. Everything just got a bit more competitive in that context.
Starting point is 02:15:53 I also do think it's hard, you've got to learn to be nimble and navigate these crazy strange times, because almost like these crazy strange times aren't crazy strange times anymore. They're just sort of, there's one after the other, but they do change and evolve. Right. And like you mentioned, at the beginning of COVID, like I think we all, you know, at first thought, wow, like everyone's going to hunker down. No one's going to spend. It's going to be really scary. Business is over. And then you saw people at home thinking, I deserve a nice thing. I deserve this. And shopping went through the roof. And people came out of COVID and they went to travel and they shifted dollars
Starting point is 02:16:19 again. It's nimble. Being nimble is the key. Sorry to shift back to generative AI, but it's a fascinating topic. I've been kind of obsessed with this idea of the GPT wrapper meme being maybe almost harmful to entrepreneurship. People aren't taking enough risk because they're hearing, if I build a wrapper, no one will take me seriously. It's low status or maybe I won't get funding. But do you think that like, I mean, we just talked to the founder of Honey, he built a
Starting point is 02:16:50 Chrome plugin that sold for a billion dollars. Like it's incredible, right? And so I wonder. And at the same time, we've seen a lot of slowed innovation at the big consumer tech companies where pretty much everyone's asking like, hey, why isn't Siri better? And maybe I guess the question is something about is advice to founders like shoot for the moon, you'll land amongst the stars, there will be acquisitions of wrapper products.
Starting point is 02:17:19 And if you just create customer delight, that's beneficial in Gen. AI, or is it kind of just everything's changing so fast, just get out there, make something cool, make something people want, and then you'll figure it out, or are you really trying to dig in and say, we need to have a super concrete thesis about how this becomes a hyper scale business before we invest or really go and build?
Starting point is 02:17:41 What is your take on that? Okay, there's a lot in that question. Yeah, sorry. I just thought about four different things I wanted to say. Please, please. Well, perfect. Let me see. Hopefully I can get them all out. So this idea, I don't think all wrappers are created equal, right? So LLMs are foundational. That is the backbone that we can all build from. So I think anybody using that to their advantage in their business is just being smart. A business that is eager for the LLMs
Starting point is 02:18:07 to keep getting better because it has the potential to make their product better, that's where you wanna be building. Then you've gotta think about all the things that I would say that we've always thought about, which is what's the value of your product or your service or your software? Who needs it and why?
Starting point is 02:18:23 How do you keep innovating and growing? How do you build a business around that? Like those things I think hold true always and are critically important. I think the idea of like, I'm just gonna build something cool and fun and they will come and that will be interesting and that playing out to be a successful multi-billion dollar company,
Starting point is 02:18:42 I'm not sure there are many examples of that. But at the same time, I want to back up and say that the way we're approaching the market right now in particular with this new, kind of the beginning of this new cycle, if you will, is most we're holding two things kind of front and center. One is just the idea of a market tailwind that you're building into. There's a real demand, there's a real need, there's a real opportunity. And that kind of ties to what we started earlier
Starting point is 02:19:11 in the conversation, which is just understanding like where those pockets are, where business is missing the mark. And then it's about the founder. It's about the founder and their vision for the future and their idea about how a product, how a service can make it better. And I think all you really know at the very beginning is it's going to play out different
Starting point is 02:19:31 than, you know, whatever pitch deck you're looking at. But I do think you can tell if somebody's got like a real pulse on where things are going, where they want to take things, like the world they want to make and show, and how nimbly and tactically they can play and execute with these new technologies. Can we talk about education and learning briefly? Because I feel like we've seen the adoption of ChatGPT at every college campus and high schools and things like that.
Starting point is 02:20:02 And maybe there's this early surge of usage that's not so great for kids that are trying to learn how to write and think and things like that. And we want people to actually learn how to communicate and reason through things. But at the same time, the potential of every single person in the world having a tutor in their pocket is just so, you know, can't be sort of understated
Starting point is 02:20:30 and maybe is not getting enough almost like hype, right? But I'm curious how you are, I know you've made a bunch of investments in the space over the years, and I'm sure those companies are thinking of- I mean, I am really bullish on this opportunity, Jordy. I think if you look at the foundational pillars of life, your health care, your finance, your career,
Starting point is 02:20:52 and your education, education is arguably behind them all in terms of transformation. I have kids in grade school, and their school books look the same as mine did 30, 40 years ago. It's unbelievable. We need to move the future forward. Sure, you can use ChatGPT to cheat, but actually we need to teach our kids how to use generative AI to their advantage. Because I think like for me, I'm having so much fun. I have a thing in my pocket. I can ask any question I ever wanted to. And then I can follow the thought through
Starting point is 02:21:27 It's like search is like pose a question get an answer and then play with it Keep going down the rabbit hole or take your essay and say I need to I need to work on this point a little more I'm struggling with this thing. How do I move this around? like that is it has a huge opportunity to You know continue to peek and drive curiosity, help us think deeper and learn. And on that, encourage kids how to use it constructively. Do you think that that's an area where voice specifically can get really, really meaningful
Starting point is 02:22:02 adoption? Because the example you're talking about is like, I'm writing an essay and I'm struggling with something in particular and Theoretically somebody you know a child could have the experience of their teacher just sitting You know with them writing and being able to talk out loud, and I don't think we've seen voice Adoption as an interface as much as some people would have thought 10 years ago, but it feels like maybe now is the moment. Voice is so exciting. Is the moment, right?
Starting point is 02:22:29 Voice is so exciting. I do think that when we're able to make voice a main mode of communication digitally, we're going to get so much more information. And it is the information that continues to power and tailor more personalized and engaging experiences so the the amount people share if they have a voice, you know a speaking opportunity versus a typing opportunity is tenfold So I do think I do think that's a big unlock and I think you know This is a good point because this is how this is gonna unfold. We're just in the early stages of seeing what's possible.
Starting point is 02:23:06 And there is some very exciting voice technology out there. And everyone's been buzzing about the company, Sesame. What they have is pretty amazing. So it's definitely on the horizon and the new horizon. Yeah, and maybe last question for me for now, and would love to continue to have you on to run these ideas down. But how do you think about AI adoption? Because it's hard to compare it to the internet,
Starting point is 02:23:35 because it feels like everybody's used AI now. And everybody probably, not everybody, right? We both had Ghibli posts go viral, and we had people quoting it, not knowing how the images were being created. And they seemingly didn't know what chat GPT was. Oh, there must be a new filter app in the App Store. Yeah.
Starting point is 02:23:52 They don't know what OpenAI is. Is this Snapchat or whatever? So we haven't reached full adoption of some of these new tools. But it seems like everyone's using them in some capacity, even unintentionally. I Google search, and I get an AI summary. So how do you think about AI adoption?
Starting point is 02:24:08 Is it just businesses fully taking advantage of the technology? And that's what we need to look at? Because I know it's not. I love this question, Jordy. I'm glad you asked it, because this was one of the more interesting things that came away from our survey data.
Starting point is 02:24:22 I think I should have this data point in front of me while we're speaking, but I don't. But it's like 60 or 60 plus of the people percent were like actually had said they were using AI regularly. Now, what does that mean? That probably means chat GPT, Claude, you know, those, but like people are here for it. I mean, 500 million people was a number I read a week
Starting point is 02:24:45 or two ago about chat GPT, and then I heard a bigger one later. Like, that is a kind of trial. Let's call it trial at this point, because we don't really know what adoption is going to look like. But I do think that it's growing incredibly rapidly. And I do really believe that people, like, they're ready for it.
Starting point is 02:25:03 We just have to build for it. You know? I think that's my call for, like, they're ready for it. We just have to build for it. That's my call for founders to build for it. Like people, you build it, they will come. They are ready for it. Yeah, it's certainly faster than cell phone rollouts. We had a conversation before this. It's like everybody wants to build AI infrastructure. We need more people to just build the AI, the agents.
Starting point is 02:25:24 Build the hard stuff that maybe. There's so much opportunity. Just think about, chat GPT is to this generation what Google was to the last. If you think about Google as the place where the front door, you go there, that's your big search. But if you want to look for real estate, you go to Zillow. If you want to understand about a company,
Starting point is 02:25:44 you go to Glassdoor or Job, you go to Indeed. If you want to travel, you go to Kayak or Expedia. You want to go to Open Table for a restaurant. There are these big categories that have lots of nuances that really deserve and warrant unique experiences. Like, let's build them for AI. Yeah.
Starting point is 02:26:06 Where are they? Come pitch me. Yeah, I like it. Yeah, well. Go pitch Kirsten. This is fantastic. This is fantastic. Thanks so much for stopping by.
Starting point is 02:26:12 Yeah, thanks for coming on. Yeah, this is great. Thanks for having me. Yeah, have a great rest of your day. Have a great weekend. We'll talk to you soon. Talk soon. Bye.
Starting point is 02:26:19 Cheers. So much to build. We didn't even get into the security thesis, which is maybe the most underrated in that report. There are health tech investors, there are gen AI investors, there are consumer investors, but I have yet to hear someone really... I mean, she created a whole market map for this idea of security as a key market.
Starting point is 02:26:37 It includes some health stuff, it includes some finance stuff, but it was an interesting thesis and trend that I'll have to dig into more. But in the meantime, we got David Perell here. Welcome to the show. What up, boys? How you doing? Looking great. Dude, look at you.
Starting point is 02:26:49 Thanks for dressing. I figured I'm coming on, hanging out with the tech bro. Guns blazing. I'm going to dress up. Yeah. Well, it's great to have you. How's your week been? I had a white shirt, but couldn't find my cuff links.
Starting point is 02:27:01 So major issue about 15 minutes ago. No, no, no. You're looking fantastic. You look fantastic. How's the week been? How have you been processing all the recent AI news? What's new in your world? Dude, week's been good.
Starting point is 02:27:15 I was driving over here. I saw my first ever Austin car chase. No way. Is that a regular thing? No, I've never seen one before. Usually an LA thing, and I must admit I got a speeding ticket on Sunday in Tennessee, so it's good to see the red, white, and blue lights and somebody else's... What were they driving? Was it like an F-150 being chased by a Dodge Charger?
Starting point is 02:27:39 Yeah, it's always the crappiest car that you've seen all day. Yep, less to lose, more to run for. Yeah, exactly. Exactly. There's so much to talk about. I mean, one, you should have been one of our first guests. But you still are. You're still in the first 200 guests.
Starting point is 02:27:55 It's no big deal. Yeah. We've been racing through the guests. But yeah, I mean, did you want to start with his post, his thread? Yeah, let's do it. Yeah. Can you set that up for us?
Starting point is 02:28:05 Just kind of like general reactions to, uh, AI writing kind of the studio Ghibli moment, you're clearly thinking your wheels are turning a lot of stuff is going on in your head. And I think you kind of synthesized it well, but I'll let you tell it yourself. Yeah. I mean, we're just, look, I've been teaching writing for the last six years and I've taught few thousand students and I had a moment in November. You know what happened?
Starting point is 02:28:28 I had a guy who was working for me, sent me a memo. I said, dude, this is the best thing I've ever seen you write. He goes, AI wrote it. I was at a dinner last night, and I wrote up some notes, and then I just asked AI to summarize it, and I was like, whoa. And I shut down Write of Passage November 11th
Starting point is 02:28:45 and then I went to Argentina in December. It was my first vacation in a few years and I wanted to learn about the country, so I was just using AI and I was doing like 50 to 70 prompts per day, and I was like, oh my goodness, this is insane. And basically fast forward to now, we're in March or we're in April, and I mean, the writing's on the wall. AI is going to just completely transform writing, and this week actually I got my first ever rejection for How I Write because somebody doesn't like that I'm talking about AI, that I'm promoting AI.
Starting point is 02:29:19 So now I think what you're beginning to see is a big rift, right? There's gonna be certain people who say, no, writing with AI is absolutely taboo, not cool. There's gonna be the purists, the Luddites. And then there's gonna be other people who just go full steam ahead, and they're gonna say, you know what? This is the future. And it's really taboo right now to write with AI, but if you talk to people behind closed doors, there's a lot of tech forward writers who are like, whoa, I can like three, four X my output. And they've basically built custom prompts, custom software to help them write better. Yeah. I mean, Ben Thompson has kind of a thesis around this with the, you know, the printing
Starting point is 02:29:57 press reduced the replication costs and the internet dropped distribution costs to zero because you no longer needed a paper route. Now the instantiation cost has gone to zero, but there's still that human in the loop for actually generating the novel ideas, or even just bringing the idea or the fact or the information to the chain. I talked to some reporters, how is AI changing your job at the Wall Street Journal? It's like, well, writing up the fact is the least hard part about the job. And yeah, maybe it took me from that being one hour of my day to being five minutes of my day, but 90% of my day was still
Starting point is 02:30:39 talking to people, understanding what's going on, surfacing new facts, and then bringing those to the readership. So do you think there's still value in being a writer in the sense that you are a generator of ideas or novel information? And yes, you are using AI as a tool to instantiate it, but AI hasn't really replaced your importance in the world. Yeah. So let's just preface this. We're going to talk about nonfiction writing.
Starting point is 02:31:02 That's what I know a lot about; fiction is not really something that I can speak nearly as well about. But I went to a Peter Thiel lecture last night. He's in Austin and he's doing these lectures on the Antichrist. And I was at the lecture last night and I was thinking a lot about this, right? Because, you know, Peter, he's a really interesting guy. He's always looking for what is the thing that people are talking about that other people haven't found.
Starting point is 02:31:29 And I think that the lecture showed, first of all, that of course there's so much edge in just finding the things that other people aren't talking about. But then also, I went up to someone who he works with after. I said, hey, I think the lecture could have gotten better in this way and this way and this way.
Starting point is 02:31:46 All this is to say that AI can't do that work for you. And it's through the process of writing that you're really working through your ideas. You're trying to say, how do I frame this better? How do I shape this better? But what I do believe is that basically any form of writing that is driven on pure utility, so we're talking business memos,
Starting point is 02:32:07 we're talking emails that your lawyer sends you that are fairly standard, pure utility, not about the art of writing, and not about basically maximizing the quality of thinking. I think basically all of that's gonna be written with AI. So, and only in a few years, by the end of 2026, 2027, we can just assume that that'll be the case.
Starting point is 02:32:27 But absolutely, if you're really trying to be a maximizer and not a satisficer, that'll be major human in the loop, big time. What do you think happens to our thinking abilities as humans, and our clarity on life, business, work, our relationships, when so often we have just this, like, you know, perfect autocomplete of everything that we thought we were going to say? Because sometimes when I'm, you know, let's say I'm like writing an email to John or one of our partners or somebody we want to work with, it's sort of annoying, but taking the time to think about, you know, what I want to say and how I want to say it and, you know, what I'm
Starting point is 02:33:08 trying to convey, it gives me clarity on that relationship or whatever activity that we're doing. And in a world with like perfect autocomplete or suggestions, maybe theoretically I can just, you know, think about it for a while myself, or autocomplete, send, send it and then think about it. But the reality is there's so much distractions in life. So maybe everybody just moves, like, all the time that was spent writing emails just moves
Starting point is 02:33:33 on to TikTok or X and it's just like, you're autocompleting and then there's brain rot. That's sort of like a dark take on it. But are you worried about what humanity loses by not thinking through writing? I'm absolutely worried. I mean, I see major white pills and major black pills here. Like, I think that if you're not seeing the pros and the cons, and I mean major pros and cons, you're missing out in a fundamental way of what's really going on. I agree with you. I think a lot of our information is, or a lot of the way that we communicate is like,
Starting point is 02:34:08 did this person actually write it? And I think that what's going to end up happening is you sort of see this in the differences. Like, have you noticed like how big of a separation there is between the vibe of like your private group chats versus like what you feel in public? I feel like the private group chat vibe is just gonna be like, go even more, we're gonna have to learn to write with voice
Starting point is 02:34:29 and really show off our distinctiveness in writing as almost an adaptive way to say, hey, this isn't written by LLMs. And at the same time, I completely agree. I mean, I think that people are super busy, and the one thing that we've learned time and again about technology is that people value convenience, you know. Like, I listen to music all the time on my crappy iPhone speakers because I don't want to get up and walk to the other side of the room and like
Starting point is 02:34:57 hook in my USB cord. And I think you should never bet against humans going for convenience in the aggregate. So there's something about it. Yeah. I think about that web comic or meme all the time where it's like, wow, this AI was able to take my five bullet points and turn it into a whole essay, and then the guy receiving the email is like, it took this really
Starting point is 02:35:36 maybe we just do away with that intermediate step of like instantiate this as like we all know that we could turn this into a 20 page research paper if we wanted or a 500 word blog post but really the takeaway is just what I could put in my group chat in one line so we're all just going to communicate with that. Communication just condenses down and it's just like people saying their base interests like I want status, I want security. It's X. I mean, Doran Keshe had a great line about this.
Starting point is 02:36:05 He was like, the recipe to being a good poster on Twitter, X, is just write like you're posting in a group chat and just say just exactly what you were thinking. Don't try and wordsmith it into this big thing. Just post exactly what it is. Yo, this is the age of vibe. Like, this is super high vibe times because Because basically, like if I'm teaching writing and you're like, all right, DP, what are you gonna teach people?
Starting point is 02:36:30 What I'm gonna say is like, what is your vibe and how do you get it out into the world? And how do you communicate that through writing? And so you're seeing a few things. Personally, I'm investing a lot in like Riz and vibe. And I'm trying to think about how do I do those things? It's working. It's like, yo, I'm putting my money where my mouth is. One thing I did start a lot in like, Riz and vibe. And I'm trying to think about how do I do those things. It's working. It's like, yo, I'm putting my money where my mouth is.
Starting point is 02:36:47 One thing I did start a few weeks ago, I've become a greeter at the church to get like a lot better at saying, all right, how do I get better talking to people, saying what's up, doing all the sort of, frankly, like shallow conversation that people rail on. I'm like, no, I wanna get good at that. And then when it comes to writing,
Starting point is 02:37:06 whenever I'm writing even like a text or something like that, I'm thinking, what is the energy that is unique to me and how do I get that in writing? And frankly, I don't really know the answer, but that's the major question I'm asking. And when it comes to writing, I think that's what people should be thinking about
Starting point is 02:37:22 a lot more. Yeah, we like to call it the golden retriever mindset here in the age of intelligence too cheap to meter. It pays to be hot, smart and hot, friendly and dumb. And so you got to be just super friendly to everyone, looking good. And you don't need to worry about being too much of a sesquipedalian, as they say. Exactly, exactly. Yeah, it's an interesting- No more pressing space allowed. Yeah, no, no. It's interesting to think about
Starting point is 02:37:49 specifically agents in the context of the workplace. Like, it's very possible that likability becomes a core reason why somebody has a job. Totally. Because somebody's running a business and they go, I really like David. I like being around him. I want him here even though we could get an agent to do this, it's more fun to have David around.
Starting point is 02:38:12 Yeah, it's like the one person, one billion dollar company. It's actually gonna be like one person creates a one billion dollar company and then brings on nine of his friends just because, why not split it? It's a lot of money and you like to hang out with other humans. Yeah, I think, well, I think you're going to sort of see a bifurcation. Like, just to talk about me, I'm thinking about my career, and on one hand I really want to invest in the human things: looking people in the
Starting point is 02:38:40 eye, having better conversations, how do I show love and actually connect with people at a far deeper level, and all those sorts of things that have always been core to the human condition. But now we're like, wait, hold on, intelligence is getting too cheap to meter. That actually isn't something that's unique to humans anymore. And then on the other side, like, you just think of the super-cracked engineer
Starting point is 02:39:04 who is really good with Cursor, really good at writing. I've talked to some friends who are building AI-enabled agencies, and now they're working with like five times as many clients because they're like boom, boom, boom, boom, boom. But most of the people I'm talking to, like entrepreneurs and stuff, they're beginning to, Justin Mares has a good line where he says that the company is going to look more and more like the hedge fund, where what you have is fewer people, super highly paid. And one of the things that I've noticed, it's been a real head scratcher for me.
Starting point is 02:39:35 Like why has the managerial class adopted AI so much more than the sort of frontline workers? Like what is going on? Because if you look at sort of the archetypal rollout of technology, it's usually the young people who are the early adopters. And there's a way that that's not true with AI. And here's my working theory.
Starting point is 02:39:54 My working theory is that the way that you work with AI is basically like a manager. So if you think of what do you do as a manager, you set a vision, you delegate the task, you say, hey, do this, you expect it's not gonna be good enough, you're giving feedback, and you're going through the cycle,
Starting point is 02:40:11 and then you ship it, right? That's how we work with LLMs. If you're a frontline employee, you don't work like that at all. So this is super disruptive to your work. And all this is to say that a lot of those managers too, if you talk to them behind closed doors, you know what they say? My hardest problems don't have to do with the work. It has to do with
Starting point is 02:40:27 the people. And now I can take out the people. Once again, I think it's super dystopian and also pretty exciting, both of them at the same time. But that's what I see happening right now. Shifting gears, can you talk a little bit about the religious vibe shift in tech? I've had a number of reporters reach out to me, like, I'm writing a piece because I saw Augustus Doricko had a cross on his neck, or something like that. You've been in this milieu for forever. What is the mainstream media getting wrong
Starting point is 02:41:01 or right about that narrative and that shift? Let's see. So I'll just give sort of my take on the shift: we've had a few things happen. So the first thing is, the more online you are, I think the more that you've looked at what's happened basically since about 2012, maybe 2016, the last 10 to 15 years, and you've just said, something is strange about society right now. You see COVID, you see the Hunter Biden laptop story. I mean, there are so many ways that we've been lied to. I was thinking, you know, in the early 2010s, if you went away for five years and you came back, what would be really confusing was the rise of social media.
Starting point is 02:41:42 In the late 2010s, if you went away and you came back, what would be really confusing is how much morality had changed. What you could say, what you couldn't say, completely changed. And for me, I looked at it like, wait, hold on, all of our moral codes are ebbing and flowing. There's all of these ways where we've gone from a kind of a democracy to a bureaucracy, and I don't wanna live in that world. And as I looked at it, I traced a lot of that to a kind of atheism, where Malcolm Muggeridge, he has a great line, he says, the problem with atheism
Starting point is 02:42:16 isn't that you believe in nothing, it's that you'll believe in anything. And so I watched people, their bodies as empty vessels, adopt completely rotten and corrupt ideas, and I think a lot of people, including myself by the way, and I think a lot of us who are more online, have seen that cycle play out. We've said, hold on here, I don't like what I'm seeing. So then we take a step back, and the more tech-oriented people, we've seen people like Peter Thiel,
Starting point is 02:42:45 a lot of us have studied the work of René Girard, and we've said, you know what, smart people like Peter and René, they're talking about Christianity, there might be something here. And then for me, what happened is I looked at that for five or six years. I really studied it deeply, and I came to the conclusion that Christianity wasn't just useful, but it also happened to be true. Can you talk about how you would walk someone away from the cliff that is utilitarianism? I would just say, look, that's probably not the frame that I would take. If I was talking to somebody about faith, I'd probably be more likely to talk to someone about, hey, look at what's happening in the world, and are you happy with it? If they're happy with it, it's going to be hard to have that conversation.
Starting point is 02:43:40 But I think that part of the challenge with utilitarianism and a lot of these moral philosophies is we've seen them rise up and sort of cyclically break. And I don't know, this isn't a great answer, I'm sort of fumbling my words, but I think that the Bible carries sort of the supernatural truth. I mean, I had a hot take here I wanted to bounce off you. It was this idea that a lot of the AI doomers were driven by this idea that God is dead, and if we're inventing an AI God and it returns, it might judge me, and if I'm living an immoral life, the AI God would sentence me to essentially, like, an AI version of hell. It's not a problem to
Starting point is 02:44:22 live an immoral life in the absence of God, but if God returns in the form of this AI God, then there will be a reconciliation moment. Do I need to put on the tinfoil hat for that? Well, I mean, I personally think that, oh, there we go, there we go, put it on baby. I mean, personally, I think that if you read the Tower of Babel story, a lot of my faith is, I don't think these AIs can become gods. I think that something will happen. And we see that in the Tower of Babel story.
Starting point is 02:44:51 You know, you've seen people come out and say, hey, we're building these new gods. Read Psalm 115. Bad things happen when humans try to create gods. We've seen this story play out. And to come back to some of your earlier questions about tech and utilitarianism, look, a lot of the peace that I have with what's happening with AI, I'd be freaking out if it wasn't for my faith. I mean, I have complete faith that God is in charge, He'll take care of this, and when I read books like the Book of Judges, I see people turn away from God all the
Starting point is 02:45:20 time. When I read the Old Testament, the Exodus, you know, you see the golden calf. And I just see AI as another version of that. Can you talk about any, yeah, specifically historical moments that we can learn from in the context of this explosion of artificial intelligence? I think people talk about the industrial revolution, right? Look, we didn't know how the industrial revolution was going to change the world.
Starting point is 02:45:51 We do now, looking back, obviously. But it feels like right now we're standing. And especially in the context, there was a piece that came out this week, AI 2027. We covered it earlier on the show today, where it just feels like only 18 months out, like, you know, there's these various paths.
Starting point is 02:46:10 Neither seem that appealing at the moment. And so I hope there's a third or a fourth or a fifth. I'm sure there are. But how do you, you know, whether it's in the context of the Bible or other things that
Starting point is 02:46:24 you've read, process this moment? And maybe it is only, maybe you can find all the answers in faith, but I'm curious. No, yeah. I think that the work of Marshall McLuhan, he was a media theorist, late 20th century. And what he did for me was, I felt the same sort of, oh my goodness, everything is changing kind of feel from the internet. And he basically had ways of seeing what happens when new technologies develop. And basically, I think a lot of what we're seeing here
Starting point is 02:47:12 is this like vast acceleration is what he saw. So you get this 10X acceleration. And then what happens is things begin to flip and things that used to be core to how we lived and what we do now become art. And I think that that's a lot of what's gonna happen with writing, where I think writing is gonna be very sort of a form of art in the same way
Starting point is 02:47:37 that film photography was a form of art, in the same way that a lot of the art and craftsmanship that you see used to be very core to what we do. And I think that's what's going to happen. I think the utilitarian kind of writing will just be done by the AIs. And then a lot of writing will just be artistic. And what we're going to need to figure out is how do we put our artistic stamp on writing,
Starting point is 02:48:02 because there's no real way to verify that a piece of writing is human in the same way that you could do that with a live performance. But one of the things I'm really looking forward to, there's all these doomers. And to your point about technological precedents, I was talking to my friend Justin Murphy. And he said, if you look at how art shifted from the 14th
Starting point is 02:48:23 century to the 16th century, you look at 14th century medieval art and it just looks weird and creepy. It's, like, super flat. And then you look at 16th century art in Italy and it's got this beautiful perspective. The paintings are alive. And what happened was we got the technologies of the camera obscura.
Starting point is 02:48:42 And then there was a guy named Leon Battista Alberti, and he was an architect. And what he did was use a lot of the technologies and architectural tools at the time to basically show perspective. And from that, we got the Renaissance, a Renaissance in painting. And the same thing's gonna happen with writing.
Starting point is 02:49:01 And I bet that at the time people were like, oh, you can't be using technology to paint, that's not cool. And then, like, I don't wanna go back to that sort of painting. Like, the stuff that came from it was super cool. And I think the same thing's gonna happen with writing. Yeah, it's interesting to think about
Starting point is 02:49:18 the great writers in history sort of referencing past works and having to go to a library or travel to multiple libraries or visit collections and sort of study and then now the ability to like- Studio Ghibli style. Well, communicate with an LLM that is trained on every available text, at least that's been on the internet and maybe some esoteric texts as well. Do you think there's any merit? Have you spent any time trying to find books that just haven't been uploaded into
Starting point is 02:49:46 ingested into the knowledge base? Okay, so I'm gonna answer that and then I'm gonna ask a question to you guys. Okay, so rather than the forbidden knowledge, I haven't been doing that as much, but there's so much that's just lost. There are so many answers that we used to have very clearly that we have just forgotten. I think one of the fundamental lies of modernity, and basically of progressivism in general, is that new is better, always. And that's not true.
Starting point is 02:50:14 You know, like Joe Rogan always talks about, how did they build the pyramids? Oh my goodness, they must have had these crazy technologies. Like even, you know, we could put the tinfoil hat back on, but even if that's like nonsense, whatever, I don't know. But there's so many times in history where we really had deep understandings of things that we've completely forgotten about.
Starting point is 02:50:32 And like, we don't necessarily need to go find these crazy esoteric texts. We can go find, like, take Henry George's book, right? Late 19th century, it was like the number one, number two bestselling book in the world. There's The Imitation of Christ by Thomas à Kempis. Great book, 17th, 18th century. Like, I think there's a lot of alpha in just going back to the bestsellers before 1970 and just going and reading those books
Starting point is 02:50:58 and see what people are saying, and luckily AI is a really good way to do that. You know, one thing that's fun to do is go in, take an idea, go into ChatGPT, go into Grok, and say, here's my idea, now help me round it out and give me some examples. Like, I've just been working on this little bit that there's fundamentally three kinds of girls. There's Lana Del Rey girls, Taylor Swift girls, and Beyoncé girls. And so, like, you know, I was joking around with some friends, we're like, hey, let's just pop it into Grok, and Grok is, like, helping us, you know, think out the theories. Hey, this is why it's a stupid idea.
Starting point is 02:51:27 Here's why it's a good idea. And that's what I love about it: you're giving it some sort of ridiculous idea and then it's saying, okay, let's kind of fill it in for you. And you can very quickly get to the stage of, how can I find those esoteric ideas much faster? And how can I find the lines that then I can draw inside of? Do you have any type of thesis around, you know, LLMs and humor? It seems to be the one thing that they really struggle with today, where they get the structure of a joke, but they're not nailing it yet. They can make you laugh by being absurd. But did you see the Tyler Cowen joke?
Starting point is 02:52:10 Yeah, you see that with GPT-4.5. The "be me" one. Yeah, hilarious. So I think that the humor thing is going to be a problem that gets solved. The LLMs are gonna be hilarious, and they're gonna be hilarious in two ways. So the first way is niche humor. Like, you know, if you take a great comedian, they're gonna be good at the mass market stuff,
Starting point is 02:52:32 but like, say that, you know, the three of us, we go and do a trip to New Orleans for the weekend, and we say, all right, all these things happened, and we talk to the LLM for like 45 minutes, we say, you know, here are all the things that happened, now give us some jokes, dude. I bet it would rip. So freaking funny. It's just us. Yeah, exactly. No, and so we talked about this earlier on the show, John has Coogan's law, which is like, you know, talking about the value of coinages.
Starting point is 02:53:04 So on that note, I've been working on the Hays paradox, which is the idea that the funnier that you think something is, the less likely the mass market is going to find it funny. Right. Because something that is just ultra, ultra, ultra niche is just, like, the most interesting, most funny, the most stimulating or whatever. And I think that's what you're kind of getting at in some ways, like having a comedian in your pocket that can joke about the thing that happened to you that minute and simultaneously 10 years ago in your life
Starting point is 02:53:36 and connect those ideas. And the idea of the same thing of, oh, we have a therapist in our pocket or we have a tutor in our pocket. You now could potentially have, you know, this sort of like comedian. Yeah. So last question is a question I think you had for us. Do you want to close out with that?
Starting point is 02:53:53 Well, I mean, this is the question that I've just been thinking a lot about. And it gets down to, like, the nature of the soul and the nature of what it means to be a human. You know, there's that famous idea of the map and the territory. And what I think is basically happening is that when it comes to writing, the map and the territory are going to be the same. And when the map and the territory are the same, what comes from that? Like, if you have such a good simulation of humor, and such a good simulation of consciousness, and such a good simulation of care, is it those things or is it not those things? And I don't know the answer. It's just the question I've been asking all week. And
Starting point is 02:54:36 My immediate thought is that our mutual friend Jeremy had this idea. The master of the take. The king of the take. The master takesmith himself. No, he had this idea, like, in the Ghibli moment, he was like, my thesis is that AI will be the reason that everyone logs off forever,
Starting point is 02:54:55 is like if you go on your device and it's just this, you know, hamster wheel of entertainment. Dopamine. And dopamine. And then eventually you just, it gets so good at, you know, at whatever it's doing or whatever you want it to do that you want to actually go back to the variable reward of going to the group chat where somebody has a terrible take and then somebody has a good take.
Starting point is 02:55:16 And it's this sort of truly organic experience. And who knows if everyone logs off forever, but that desire for realness has been something that has been common throughout human history, right? You know, people wouldn't travel somewhere far to have the authentic cuisine if the sort of perfect recreation of it here in America hit the same, right? So that desire for authenticity and realness, I think, is deeply human. And I think people will seek other humans out for that.
Starting point is 02:55:53 Which is great. Because what I hope is, I think that we were at the absolute bottom of realness in the late 2010s in cancel culture. That was a time when people were terrified to be themselves. Everyone was super polished. It was the decline of the mass media empire that we're still sort of living in and seeing play out every single day. And then we're going to get a big flip into realness, into authenticity, and here's the other one, into forgiveness. Because we can only have realness and authenticity in our culture if we have a culture of forgiveness,
Starting point is 02:56:27 when we can allow people to make mistakes. And I think what was so traumatizing as a culture about those years is that we couldn't be ourselves because we couldn't forgive. And my hope is that we can just move on from that and become humans in this digital sphere. Well, you have to forgive us for not having you on sooner. This was a fantastic conversation.
Starting point is 02:56:53 I really enjoyed it. Thank you so much for coming on the show. Thanks, boys. We'll talk to you. Thank you for dressing the part. And looking very polished, you know, some things from the late 2010s. Polish is back.
Starting point is 02:57:02 Polish is back. We're keeping the polish. You know, actually, that's what we have in common. You guys have the polish with the show. I tried to do it with How I Write. Thanks guys. Have a great weekend. Next up we got V from Sword Health coming in the studio. I believe he's here and I'll let him do the introduction.
Starting point is 02:57:21 I had a call with him almost a year ago. Fascinating company, backed by Delian very early on. We got connected and I want to hear all about how his business is doing and get the general update. So, V, are you there? Can you hear me? I am here. Can you see me? Yes. We are all good. V, welcome. Thanks so much for stopping by. How's your week going? It's been a week. Yes. A long week. I think for everyone.
Starting point is 02:57:54 Yeah. Is it specifically because of tariffs or AI news or what's driving that? Yes, tariffs, the impact of those, but some of the stuff, it's like, it's been fun. It's been fun. Can you just give us a quick introduction for the listeners who might not be familiar with Sword Health? Kind of a quick backstory on the company and what you guys do. Yeah. So let me start with the
Starting point is 02:58:12 problem. So health care is quite special because when you compare health care with other industries like the consumer electronics industry. Right. In the last 40 years, you saw penetration of technology in those industries that went like this. Massive penetration of technology. What happened? My alarm
Starting point is 02:58:34 just... Was that? It's my alarm, sorry. Oh, no worries. One second, we have some technical difficulties, folks. I'm back, sorry. I was saying that in the last 40 years there was a massive penetration of technology in the consumer electronics industry. What happened to cost? Massive decrease. So, a television that 40 or 30 years ago would cost $3,500 now costs $500. Much cheaper, much better display, much better features. So we used technology to produce better goods. In healthcare, in the last 40 years,
Starting point is 02:59:15 you also saw massive penetration of technology. What happened to costs? Contrary to the other industries, massively increased. Where actually we use technology to make healthcare more expensive. And the reason why is because the way we've been using technology in other industries is to shift part of the labor from the human to the machine. And with that, we made the production of goods much higher quality, much more efficient
Starting point is 02:59:43 In healthcare, we've been using technology to double down on this 100% labor-intensive model, where before, for you to have an appointment with a physician, you'd go to a hospital or to a clinic; now you can use the technology to do that over a video call, but you are still fully dependent on the human on the other side.
Starting point is 03:00:05 Right now, as it's always been, for you to access healthcare for 30 minutes, you need 30 minutes of this highly specialized, scarce and non-scalable human resource, which is the clinician. So what we believe is that in order to deliver the future of healthcare, what you need to do is basically develop technology and AI that will shift part of the labor from the human to the machine. And with that, you remove barriers in terms of access. And so what we are doing at Sword is really shifting healthcare from that human-first model to that AI-first model. And we started with the biggest problem, which is how we deliver care, how we recover patients back to a full life. And so we started with physical pain, we're expanding to pelvic health, and now we are expanding into other factors of care, really changing
Starting point is 03:00:53 how people access care with AI. Can you talk about Jevons paradox? It was in the news recently. And I'm sure you had a lot of thoughts, just because, you know, theoretically as the cost of healthcare declines, we're going to want more of it. There's a lot of care that doesn't happen, for example, in physiotherapy. If it costs, you know, a dollar a day, I'm sure people would be much more eager to use it, but not when it's $200 a session or even more than that. So I'm curious.
Starting point is 03:01:25 It feels like, I'm sure you've seen this playing out. And I'm sure you weren't worried when DeepSeek came out and token costs came crashing down. You probably were like, OK, this is great. We're just going to use a lot more of this. Yeah, and look, lack of access to high quality, high intensity conservative care, non-invasive care, doesn't decrease costs. Because when you have pain and you are not able to go three
Starting point is 03:01:53 times per week to a PT clinic for three months, which would solve the problem, right? What you do next is you go and try to find a silver bullet in the form of surgeries. And that's what really skyrockets the prices. Just with physical pain in the US, we are spending $560 billion, with a B, billion dollars per year. And the big bulk of that is surgeries, which should be replaced, with much better outcomes for patients. And so when you use AI to make the traditional model, which should be the solution, very easily accessible, as accessible as running water, then you allow
Starting point is 03:02:30 patients to get access to high quality care, and then you really decrease costs. Because the problem in healthcare right now, and by the way, this is not just in the US, this is all over the world, right? The National Health Service in the UK is suffering the same thing, which is you have very, very high costs and very, very low quality and access to care. So in the traditional equation in healthcare, if you want to increase quality of care, you need to hire more people. If you hire more people, you increase costs, but you cannot increase costs because costs are already prohibitive. So what you try to do is reduce costs by removing folks from the equation. What you do then is reduce quality, and you basically create massive challenges in terms of access.
Starting point is 03:03:16 The only way to really break this paradox is by using AI to do part of the job of the human, and then having the clinician in the loop, highly scalable, in order to be able to bridge the gap between the massive demand and the low level of clinical supply that we have. Yeah. How has your AI thesis around healthcare changed since founding the company, if it's even changed at all?
Starting point is 03:03:45 Is what you're seeing today, and the roadmap today, what you expected, or has any sort of new technology that's been introduced in the last few years changed it? Weren't you just talking about AI therapy breakthroughs recently? There was a...
Starting point is 03:04:06 that basically showed that AI therapy in the form of people just talking with LLMs actually delivers fantastic results. And it's basically free, right? And it's like this incredible- It seems like a fantastic add on if you have a scale business, you've been in the business for 10 years,
Starting point is 03:04:23 you don't have to start from scratch on the customer development journey. So yeah, talk to us about that. We actually, in that regard, did an interesting experiment, because we have Phoenix, which is our AI system, based on the sessions the patients are doing at home, preparing the messages for our clinicians to send to patients. Right? And so basically it analyzes the session of the patient and based on that it can tell you, hey John, I saw that you did your session perfectly. In the end you were a little bit exhausted,
Starting point is 03:04:51 so I decreased the session a little bit for you to be able to do it without a problem, right? That's the work that our PTs, our clinicians, usually do, right? And now of course we have LLMs working on the data and preparing those messages, right? But we were looking at the messages and we had this question mark: do the messages feel like it's an AI creating them,
Starting point is 03:05:12 or do they feel human? So what we did was a blind test where we had 50 messages that were written by our clinicians and 50 messages that were written by Phoenix. And then we asked our team to evaluate which messages were from the clinician and which messages were from Phoenix. Right. And what we wanted was 50% randomness, where you cannot distinguish which messages are from Phoenix and which are from a clinician. Oh, wow. Like the Turing test, basically. Yeah, exactly.
Starting point is 03:05:46 What we got was very, very surprising, because the messages from the clinician were mainly labeled as coming from the AI, and the messages from the AI were mainly labeled as coming from a human. That's bizarre, but I understand. And then we went into a little bit more detail, and what we found was that, look, since the clinicians are always going from one patient to the other,
Starting point is 03:06:13 and they are always in this almost state of burnout across healthcare, the messages are very dry and very concise, right? Whereas with the LLM, the messages are very warm. And then also they have the long memory, where they can pick up things from two weeks ago that you said, right? They can pick up stuff that you said, like, I want to recover because I want to play with my kids and go on hikes, and you said that during enrollment. And of course a human never remembers that, right? And so it's funny, because the AI and LLMs make the work of the human, of our clinicians, more human than before. And that's super funny.
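For readers who want to see the shape of that experiment, here is a minimal sketch of the kind of blind evaluation V describes: pool 50 clinician-written and 50 AI-written messages, shuffle them, collect a rater's guesses, and compare labeling accuracy to the 50% you would expect if the two were indistinguishable. The message data, rater function, and names are hypothetical, not Sword Health's actual tooling.

```python
import random

def run_blind_test(clinician_msgs, ai_msgs, rater):
    """Blind A/B test: can a rater tell clinician messages from AI messages?

    `rater` is any callable that takes a message string and returns
    "clinician" or "ai". Returns overall labeling accuracy; ~0.5 means
    the rater cannot distinguish the two sources.
    """
    # Pool the messages with their true source, then shuffle so the rater
    # sees them in random order with no identifying context.
    pool = [(m, "clinician") for m in clinician_msgs] + [(m, "ai") for m in ai_msgs]
    random.shuffle(pool)

    correct = sum(1 for message, true_source in pool if rater(message) == true_source)
    return correct / len(pool)

# Hypothetical usage with 50 messages per source and a human rater's guesses:
# accuracy = run_blind_test(clinician_msgs, phoenix_msgs, rater=ask_team_member)
# accuracy ~= 0.5 means indistinguishable; well below 0.5 means the labels are
# systematically inverted, which is roughly the surprising result described above.
```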
Starting point is 03:06:54 Can you talk a little bit about AI image generation? It seems pretty far out for what you're doing, but at the same time, I'm just thinking, like, if I need to show someone an image of, hey, here's a diagram of your knee, and this is where the physical damage is, this is what we're gonna be rehabbing. I could imagine even just doing the basic, like, Studio Ghibli style transfer could make that whole experience feel a lot less medicinal and a lot more enjoyable and just novel. But have you even started playing with those tools, or is AI image generation still pretty far
Starting point is 03:07:30 out on the roadmap? No. So our focus is really on the member: what we want to do is replicate the experience that you have in the clinic, with the clinician, at home with Phoenix. And so we have the feedback component, which is we analyze, we observe what you are doing, and we provide feedback. Then the corrective feedback, that's where we are experimenting with that type of layer,
Starting point is 03:07:57 of image layer, because then you can basically say, hey, do the movement like this, or do the movement like that, right? One area where we are using that, and where we are exploring how we can do that in more detail with AI, is we have a solution focused not on physical pain but on pelvic health. And pelvic health is basically things like urinary incontinence after childbirth, which is a massive problem in the female population, right?
Starting point is 03:08:30 With that solution, we have an intravaginal sensor, and you basically have to train your pelvic floor muscle. No one knows what the pelvic floor muscle is. And so using imagery to say, hey, you have this thing which should contract like this, and using AI to make that animation much more lively, that's how we are experimenting. But that's the thing, it can be an explosion of possibilities with AI, because you are using AI as agents, right? We are using that to involve patients, right? We are using AI to identify the member that in six months is going to have a surgery,
Starting point is 03:09:04 so we can intervene now, a little bit like Minority Report: before that person goes to the orthopedic surgeon and gets convinced that they need surgery, we can act right now. We are using AI to quantify the motion of the patient. So it really can be an explosion, where everywhere I look, I see an application of high value with clear benefits. Yeah, how do you, you guys have been very successful
Starting point is 03:09:28 very quickly, in relative terms. Well, I'm just saying. It's an overnight success. Yeah, overnight success. In a sense, from a pure revenue ramp standpoint. And usually when a company ramps revenue really quickly, other people go,
Starting point is 03:09:44 hey, that's a good idea. We should do that too. How ambitious are you guys as a company? Is the Sword Health for X also Sword, or, you know, is it just about running down these opportunities that you're currently tackling? So basically, yes. Our vision really translates into shifting healthcare from human first to AI first. So every single area of healthcare which is still delivered in a 100% human-labor-intensive way
Starting point is 03:10:17 is a target for us to reinvent, right? One area where you apply that is, well, mental health care, right? Like, the way you address mental health right now is talk therapy, where basically what's associated with mental health is talking to that human once per week or once every two weeks. That's 100% human-labor intensive. That's an area for us to intervene. And so it's really about,
Starting point is 03:10:41 luckily for us, in terms of addressable market, it's the fact that pretty much everything in healthcare is 100% human-labor intensive. And so we have a very aggressive market roadmap in terms of expanding and replicating what we did with physical pain, what we did with pelvic health, into other verticals of care. Because the thing is, when you really nail product market fit, healthcare is massive in terms of expansion, because you don't saturate. The pelvic health solution that I was telling you about, Bloom, we launched that solution
Starting point is 03:11:15 in 2022. Everyone thought it was a niche solution. We did $500,000 of revenue that year. We did $25 million last year. This year we're going to do $50 million. And everyone thought, I still remember discussing this solution with my board, and my board saying, yeah, don't focus on that because that's too niche. And it's like, it's exploding. And so the good thing in healthcare is, when you get to product market fit in healthcare, you have untapped growth potential because the market is just quite big.
Starting point is 03:11:48 Well, congratulations. I mean, that's all amazing news. What a fantastic industry to be in. We need to spend three hours sitting every day. We need a Phoenix for- I know a guy. Perfect. Well, thank you so much for coming on the show.
Starting point is 03:12:04 Awesome. We'll have to have you back when there's more news. Cheers. We'll talk to you soon. Thanks for coming on. Thanks a lot. Yeah, the growth there is just shocking. Yeah, and it's such an underrated company because they're out in Portugal, and they do
Starting point is 03:12:15 have an office in New York City. But it's just one of those under-the-radar companies, in my opinion. Totally. Well, we got another guest coming on the show. An absolute killer. But we should tell you about Wander first. Find your happy place. Find your happy place.
Starting point is 03:12:31 Find your happy place. Book a Wander with inspiring views, hotel-grade amenities, dreamy beds, top-tier cleaning, and 24-7 concierge service. It's a vacation home, but better. Go to wander.com slash TBPN. Fantastic. We got Avlock coming in.
Starting point is 03:12:44 Sure, everybody's gonna get him. All right, there he is. Avlock coming in. Sure everybody knows him already. There he is. Avlock, how you doing? What's going on? What's going on guys? How you doing? We're good, we're good. It's been a great week.
Starting point is 03:12:53 A lot of big news. I'm not sure what you've been following more closely, AI or tariffs, but everyone's following one thing or another. Everybody's got something. Everybody's got something this week. Something worming in their brain. But yeah, could you just do a quick intro on you and AngelList for anyone who doesn't know, I guess.
Starting point is 03:13:10 Yeah, Avlok, CEO of AngelList. I was actually recruited in almost six years ago now, so I've been running the company for almost six years. Prior to that, I'd started three companies; one of them was sold to Square, so I spent almost three years at Square as well, pre-IPO to post-IPO. I'm curious. I mean, a bunch of stuff I want to get into.
Starting point is 03:13:33 Where should we even start? I think it's probably helpful for people to have, like, just sort of the refounding of AngelList, because AngelList is a company that I feel like has just been in the timeline. Obviously, we built the show around X. AngelList, I think, grew with Twitter in many ways. It was sort of the technology platform behind what was happening in the timeline. And in the same way that Twitter's evolved massively, AngelList has done the same. But we'd love that backstory
Starting point is 03:14:02 before we get into everything else. Yeah, the way to think of early AngelList was that there were a lot of different experiments as the team was searching for product market fit. And so, you know, 2010 to, call it, 2018, 2019 sort of spawned three different businesses within AngelList. There was the SPV business, which is really the syndicate business.
Starting point is 03:14:26 There was Talent, which was to help startups find talent, startup talent. And the third one was actually Product Hunt. And AngelList had bought Product Hunt early on. And so by the time 2018, 2019 came around, there were kind of these three different businesses where they really just connected with a thin thread of founders need to raise capital,
Starting point is 03:14:47 they need to hire people, and then they need to launch their product. But outside of that, it was actually very hard to do all three of those under one roof, just because the business models are different. So at that moment, what happened was kind of a splinter, kind of a split. Talent spun off on its own, and then Product Hunt kind of separated a bit more. And then when I came in, I actually took the SPV business and I spun it out as its own company. And we got to work and we kind of took it from an SPV business to venture funds to
Starting point is 03:15:17 we invented a whole new category of rolling funds, and then we got into roll-up vehicles, which is actually how Jordy and I originally kind of got connected. And then we got into startup products. And so today we're kind of, you know, the place to go if you're gonna start, scale, launch a venture fund, and we're around private equity. We actually manage the scout funds for a lot of large firms as well.
Starting point is 03:15:37 So we're kind of an index now of what happens within venture. Now, it's amazing. Can you talk specifically about, you know, sort of accelerating product velocity in AngelList? Because it just felt like you came in, you had this sort of warm up period, and then it just felt like every single
Starting point is 03:15:57 quarter there was like a major new product launch, and you sort of, like, awakened the beast. Right. Like, there was so much potential there. If you were involved in the startup industry at all, you had invested in an SPV. Maybe you were in a rolling fund.
Starting point is 03:16:12 Maybe you hired, maybe you got a job, maybe you launched a product. So everybody was touching sort of products in the ecosystem, but then you had to kind of like ramp up and basically say like, you know, we're gonna kind of bring this crazy product velocity. So I'd love to hear like how you did that and then how you're applying that now,
Starting point is 03:16:29 Yeah, I would say just at the core of AngelList, AngelList has always held the founder at the top of the pedestal. And we've always believed that product leads everything, product velocity leads everything. And so when you have that in your DNA, the question you're always asking is, what is
Starting point is 03:16:51 the mix of people you actually want in the company? So we actually have a fairly high percentage of our team that are ex-founders. And so what happens when you put a bunch of ex-founders in a room, what are they going to do? They're going to look to be ambitious: how can you actually move into an adjacent opportunity? How can you go reinvent a whole product category? And so a lot of the original thinking was,
Starting point is 03:17:15 hey, let's actually get back to product innovation, and let's get back to actually doing things that only we can do. And that's the one question we do ask. And to be clear, we don't get it right all the time, but the one question we do ask is, if we didn't exist in the world, would this product exist? And if it wouldn't, then okay, great, we should go do it.
Starting point is 03:17:35 But if it's gonna exist without us, we're probably not the best suited to go do that. And so we do ask ourselves that quite often. Can you talk about AI in the business, how you guys are leveraging it? Just, investing in a company is very simple on AngelList. Like, I'm a part of a scout program. It's a very seamless, painless process.
Starting point is 03:18:00 But then under the hood, you're dealing with different entities and you're dealing with, the code is like the easy part, like the law is really the hard part and the challenge. So I'm curious how you're applying AI in the business to make internal team members more efficient to make, and it's good for the world if you can sort of like
Starting point is 03:18:21 accelerate capital formation, investment, and all these things, it can accelerate the world. So, I'm curious about that. Yeah, thank you, Jordy, for recognizing that. It's like an iceberg product, right? There's so much underneath the hood. The initial investment is actually just the beginning of that relationship with that company
Starting point is 03:18:39 on behalf of the fund. And so for context for others, when an investment is made from a fund into a company, that's typically, it can be like a 10- to 15-year hold period. And that's just because of how illiquid venture typically is; these companies need time to mature. And so what ends up happening is, over the months, quarters and years, there's all sorts of activity that can happen within that investment,
Starting point is 03:19:03 all sorts of activity that can happen within the fund, and there are real legal repercussions, financial repercussions, if you get it wrong. And the way we're using AI is, these are all the back office functions, and so typically humans would manage all the different workflows. What we've started looking at is, how do you take all these different workflows, and then how can you actually have AI agents starting to take on some of those workflows? Now we have to be careful, because when you use ChatGPT, every time you ask a question you get different answers. That's beautiful. That's product market fit. You're like, write me a poem, write me many different poems.
Starting point is 03:19:41 Awesome. You don't want that when it comes to your finances. It's like, hey, what's the share price? You don't want 10 different share prices, you want one. And so we are pretty bullish on taking that and having it automate huge parts of our back office. And that's already starting to happen.
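To make the determinism point concrete, here is a minimal sketch of one common pattern for this kind of back-office agent: the model never generates financial figures itself, it can only call a lookup tool that reads from the fund's system of record, so the same question always returns the same answer. The function names and data are hypothetical, not AngelList's actual implementation.

```python
# Hypothetical system of record: one authoritative share price per security.
SHARE_PRICE_LEDGER = {
    "acme-seed-2024": 4.17,  # dollars per share, as recorded at close
}

def get_share_price(security_id: str) -> float:
    """Deterministic tool the agent must call for any share-price question.

    Reads the single recorded value; raises instead of guessing so the
    agent can escalate to a human rather than invent a number.
    """
    if security_id not in SHARE_PRICE_LEDGER:
        raise KeyError(f"No recorded price for {security_id}; escalate to ops.")
    return SHARE_PRICE_LEDGER[security_id]

def answer_share_price_question(security_id: str) -> str:
    # An LLM would only format the answer around this value; the number itself
    # always comes from the ledger, never from free-form generation.
    price = get_share_price(security_id)
    return f"The recorded share price for {security_id} is ${price:.2f}."
```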
Starting point is 03:20:02 And we think there are ways that we can actually then have a lot of the folks that are doing some of the back office work move up in the stack, in terms of the type of judgment they apply on all the different workflows there. So that's one piece. The second, that I'm personally extremely excited about, is we launched a front-facing product, it's our intelligence product. And it's in beta. It's still new. So we're not ready to fully talk about it with the world.
Starting point is 03:20:24 But what we're doing there is we're actually creating an agent to go look at all private market data. So we have access to a lot of unique aggregated, anonymized AngelList data, and we're actually looking at building partnerships with many other providers. So for example, you can ask a question like, who left OpenAI in the last month to start a new company? That can help you with deal sourcing, and you can imagine it scales to many more companies. We're also looking at how do we help you get
Starting point is 03:20:54 ready for your briefings for pitch meetings, or while you're actually, like, in the middle of a pitch. Post-pitch, great, take the transcript and we can help you do a lot of the necessary research, the market map, the full analysis. So, give you time back. And we think of this as, we want to build a co-GP. We want to help you make more money, right? We can help you find better deals.
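As a rough illustration of what a question like "who left OpenAI in the last month to start a new company" might reduce to under the hood, here is a small sketch over a hypothetical table of employment records. The schema, field names, and data are invented for illustration; they are not AngelList's actual intelligence product or API.

```python
from datetime import date, timedelta
from typing import Optional

# Hypothetical, pre-aggregated employment records the agent could query.
records = [
    {"person": "A. Founder", "left_company": "OpenAI",
     "departure_date": date(2025, 3, 21), "new_role": "founder",
     "new_company": "Stealth AI Infra"},
    {"person": "B. Engineer", "left_company": "OpenAI",
     "departure_date": date(2024, 11, 2), "new_role": "engineer",
     "new_company": "BigCo"},
]

def recent_founders_from(company: str, days: int = 30, today: Optional[date] = None):
    """Return people who left `company` in the last `days` days to found something."""
    today = today or date.today()
    cutoff = today - timedelta(days=days)
    return [
        r for r in records
        if r["left_company"] == company
        and r["departure_date"] >= cutoff
        and r["new_role"] == "founder"
    ]

# Example: the natural-language question becomes one structured filter.
for hit in recent_founders_from("OpenAI", days=30, today=date(2025, 4, 4)):
    print(hit["person"], "->", hit["new_company"])
```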
Starting point is 03:21:19 Yeah. What about on the founder side? I could imagine that, you know, there's all these new benchmarks for fastest company to a hundred million ARR. Valuations are all over the place. Are you thinking about building any products for entrepreneurs? They upload their deck and then it says, oh, well, you should probably expect, you know, 10 on 80 pre, to be kind of in the fairway, maybe go out there and get more, but here's at least a take on it.
Starting point is 03:21:43 A hundred percent. We're already in that world. You know, it's the classic line: the future is here, it's just unevenly distributed. That's basically it. Well, yeah, it's funny how entrepreneurs try to understand how to price their rounds: really, they talk to a handful of investors who just reference the last two or three rounds that they saw get done, that were maybe not even in the same category.
Starting point is 03:21:57 And then it's just, like, completely guessing. And I think, you know, you guys will be able to pretty quickly, I imagine, give a pretty precise answer and say, like,
Starting point is 03:22:14 here's where we predict your round is gonna get priced, and sort of building that feedback loop is interesting. You claim to be Cursor for dogs, but what is that really worth? How do you, it feels like we're at this amazing time right now, some people don't think it's amazing, I think it's great, of venture maturing, and then you
Starting point is 03:22:32 see the convergence between private equity and venture. You guys launched some products more geared specifically for private equity. But how do you see that line sort of like blurring over time? And I guess like, how are you kind of adjusting your product roadmap for that reality? Yeah, the way to think about the way we're approaching product is we have the kind of vertically integrated
Starting point is 03:23:01 part of the product where you come in, you know, one stop shop, you can launch scale a venture fund, which also allows us to manage scout funds for some of the largest firms. And then we actually have our software that's getting unbundled and folks want to adopt that. So some of the largest private equity firms actually adopt our digital subscriptions product or some others can adopt our banking product. We actually, we run banking
Starting point is 03:23:25 underneath the hood. We've actually built a whole banking infrastructure, which is what allows for a lot of the smooth operations. And so, you know, as we're seeing the market evolve, our roadmap is effectively evolving to continue taking on larger and larger funds with the vertically integrated product, and then for the largest ones, we can support them for any of their needs, and that's actually working quite well. In terms of what we're seeing in the market today, we are seeing a bifurcation. So what's interesting is, if you look year over year within our data, we're actually seeing capital flows increase from Q1 2024 to Q1 2025 by almost double, into just venture funds,
Starting point is 03:24:03 and, like, we can talk about SPVs and all of that, but just venture funds alone, capital flows have almost doubled. So there's definitely a bounce back. There's more optimism, right, that's coming in, but it is sort of concentrating into a certain subset of managers. So you kind of have, you know, call it zero to, let's say, 100 million, you have a divide. And then you
Starting point is 03:24:27 have, like, the mega firms, right. And then the mega firm strategies are actually also bifurcating, right, they all have kind of their own view on things and what they're going to go after. So it's a very different market than in 2019, 2020. And we're not seeing as much of, you know, because what happened in 2021, 2022 was you had crossover firms really come in, right? Large private equity firms, yeah, public firms come into venture. We don't see that anymore. I think, at least,
Starting point is 03:24:54 we're not seeing too much of it, maybe, like, here and there. But generally, it's the venture firms that are scaling up, and they're now starting to push up into different asset classes. How do you think of the job of GPs over the next 10 years? You know, everybody's been running the analysis of, like, how safe their job is, right? Like, if you're a writer right now and your job is to summarize information and, you know, repurpose information and just put it out there, you gotta be pretty worried. I think investors in general are pretty safe
Starting point is 03:25:28 because people wanna give money to one person right now and have them be sort of responsible for returning that money and hopefully a lot more in the future. And venture is about this combination of not just picking, which is like, it can be very analysis driven of the market and the products, but also the people.
Starting point is 03:25:48 But then the big thing in venture is, I expect AI to not necessarily dominate venture so quickly, because likability and access are such big parts of it, right? It's like, somebody wants Avlok to angel invest in their company because you're the CEO of AngelList, and, like, an AI could be, you know, better at finding companies, but that doesn't mean they're gonna sort of win the allocation. So is your thesis, you know,
Starting point is 03:26:13 let's give investors AI tools to be able to be better investors and make more money, or do you see a world in the future where people are setting up, you know, sort of a fund on AngelList where, you know, the GP is effectively a machine itself? Yeah. It's a good question. I don't know who said this, but I picked this up from some podcast. When a founder picks an investor, they're doing it because they believe that
Starting point is 03:26:41 person is going to increase their probability of success. And in order to increase the probability of success, you're really looking for a partner who can help you solve any number of problems that can come up. Some of it's brand, some of it's recruiting, some of it's actual operational help, and it could be many more of those. But you need to partner with someone so you can actually have a reasonable chance of success with the company. So I don't think that's ever gonna come from an AI, at least today, at least in the form
Starting point is 03:27:11 that we know of it today. So as we think about the role of the GP, we think it's gonna continue to stay the same. I think what will get challenging is, I do think there'll be more capital that floods in, because even at the earliest stages, folks are talking about how valuation is decreasing. Yes, at the later stages; at the earlier stages it's increasing. So at pre-seed, seed, Series A, there's just a huge amount of capital looking to get in, and the actual root problem is that
Starting point is 03:27:39 the startups don't need that much capital at the earliest stages. So what you have is a supply-demand mismatch. And so the only way is the valuation goes up, right? And so the access question, the access problem, is only going to get worse and worse. And so we still think the human, the GP, is always going to be in the loop. So the way we think about it, the tooling that we're building is meant to amplify the investor. It's meant to help them make better decisions, but it can't ever help you become likable to the founder
Starting point is 03:28:09 or help you get access to the deal, right? That's gonna be incredibly hard. And so we think that it's about enabling them. So really, there's gonna be more leverage to the best investors. So I actually think the power law is gonna get even more concentrated because even the tools that we're building
Starting point is 03:28:24 are gonna help investors understand very quickly the lay of the land, right? So one thing that's actually very hard today is when you're listening to a founder you're doing a pitch meeting, you generally, like you don't quite know, okay, who are all the other competitors? Who are the other founders of those competitors?
Starting point is 03:28:38 With one click, we're just gonna make it easy. And it won't be just us, I'm sure there are other tools that will come out where, with one click, you get a full lay of the land. And so you have full visibility on what's going on with this company, what this founder wants to do. And so I think leverage will increase to the best investors. It makes sense.
Starting point is 03:28:54 I don't know if you have a hard stop, but I did have one more question around how your thinking around stable coins has evolved. You guys were very quick to adopt stable coins. I think it was in 2021 or early 2022 that you rolled them out. That was probably from user demand being like, I have a lot of money on chain and I want to put it into startups. Now I imagine I'm a long term believer in the power and value of stable coins. But I never in using AngelList at any point in the last two years was like, I wanna fund this investment via stable coin. And so I'm curious how you think about it broadly and are you using them behind the scenes at all or you just, you said you talked a little bit
Starting point is 03:29:35 And so I'm curious how you think about it broadly, and are you using them behind the scenes at all? You talked a little bit about your banking infrastructure, and I would assume that that's still based on traditional fiat rails, just because the beauty of venture is, like, sometimes you wanna close an investment quickly, but we have same-day wires and those work pretty well. And then you're not, like, trading in and out of assets super rapidly or anything like that. But anyways, I'm curious to hear how you're thinking about stable coins broadly. Yeah, so we originally added stable coins because of user demand, and it was, like, extremely aggressive in terms of demand, you know, and I was like, fine, let me go figure this out. And at the time we actually looked at, I mean, all the who's-who providers, and none of them
Starting point is 03:30:14 could really quite fit the use case we had. And so we eventually actually worked with a startup and ended up building out exactly what we needed, and now they're actually doing pretty well and they're scaling out. Was that Layer2? Yeah, they renamed themselves to RAIL. I'm an angel as well. I remember that you were saying, like, we got Avlok, you know.
Starting point is 03:30:40 So what's interesting about stablecoins on AngelList is that the biggest use actually comes from the biggest pain point for LPs when they fund, and the biggest pain is when you're trying to move money internationally. And so we actually see a huge amount of demand from international LPs trying to move money into the US, because if you've ever tried to send a SWIFT wire, God bless you if you have, it's insane. It literally takes days, and you just don't know where the money is, and sometimes a bank will just hold on to it and they won't tell you. It's got stuck somewhere in the middle. With stablecoins, it's one hop, you're able to get the money right into the US. It goes through all the same compliance laws and everything,
Starting point is 03:31:20 but we see a huge amount of demand for that. So it's not necessarily on the fund-to-company side, because you're right, there's not that much trading in and out, but it's more on the LP side into the fund. Yeah. Well, yeah, that's very cool. Last question I have for you, then we will let you leave. I know we're five minutes over.
Starting point is 03:31:38 I'm just very curious. Looking at some of these sectors that are especially overheated or potentially in sort of bubble territory, right? People are, you know, we could argue whether AI is a bubble or defense tech is a bubble. Have you seen any sort of near-term slowdown in defense tech investing this year?
Starting point is 03:31:59 And I'm sure you don't have the data in front of you, but I'm curious. That feels like one that is still very hyped, but potentially investors have placed their bets now and they wanna let their bets play out. No, I was actually just talking about this with someone else the other day.
Starting point is 03:32:20 It is still going on with defense tech. I think we're actually just entering a sort of golden age for the US wanting to basically up-level all the technology in the military, in the Navy. So I'm actually not seeing it slow down. I agree with you that it feels like it's a bubble, but the beauty of financial markets in general is you'll just get bubbles, and then you'll get a few great companies that will come out of it. But I haven't seen that particular bubble sort of, like, wind down yet. We're still seeing some pretty active, heavy investing there.
Starting point is 03:32:54 Very cool. Fantastic. Well, we'd love to have you on to be our private market data source. You have all the data. We'll pull it out of you one way or another. This is great. Thanks so much.
Starting point is 03:33:05 Thanks for joining. Yeah, likewise. Good to catch up. Have a great weekend. Bye. Let's move on to some timeline. We got some massive news yesterday from Anduril. They launched Seabed Sentry. Anduril writes,
Starting point is 03:33:19 we must fortify autonomous subsea dominance of the US and its allies. Seabed Sentry is their new AI-enabled mobile undersea sensor node network designed for persistent monitoring and real-time comms. So imagine how valuable this would be if you had an undersea cable that you didn't want to get cut. There were, yeah, there were some very funny posts. But I mean, Palmer contextualized it very well by just saying, like, what we do with the sensor towers on land, we now have a product that does that underwater. But he said that it can detect
Starting point is 03:33:51 like other submarines and boats, but also biological things. I think you can even track, like, whales, which is very, very cool. And so if we domesticate the whales, get them working for us, turn them into a defense tech weapons platform, they're also gonna be on top of it. And interestingly, if you zoom in on this video very closely, you'll see that they're partnered with Sonardyne, which is a very old company.
Starting point is 03:34:14 They've been in business for 50 years, and they have a number of case studies for who they work with, energy, ocean science, defense, carbon capture, et cetera. And I'm sure there'll be a case study on this product as well. Yeah, it feels like as we get more autonomous underwater vehicles, you have people like Chris Hamidon
Starting point is 03:34:34 working on stuff like this, tons of exciting companies in the space, the risk of pipeline and cable sabotage. Totally. It just feels like, again, if you can send a $10,000 drone to blow up a pipeline, that can cause billions, trillions of economic damage, and that's just gonna be a huge problem. What was interesting is that, you know, we've been talking about how maybe the Anduril of X is
Starting point is 03:35:00 Anduril. But this was something that I haven't even seen startups working on. This seems like an idea that you only get if you're as deeply entrenched as Anduril. And yeah, if you're going down into the deep sea. And you can afford to come up with products that aren't being purchased yet, right? Like, basically create markets. And you have,
Starting point is 03:35:29 once you're at the scale that Anduril's at and have those customer relationships, I think it's a lot easier to do that. But if you're focused on subsea dominance, what watch should you have on your wrist? Probably a Submariner. Yeah, you're gonna get it right on Bezel. Go to getbezel.com, shop over 24,500 luxury watches, fully authenticated in-house by Bezel's team of experts.
Starting point is 03:35:58 I expect prices on Bezel to go up as well, but probably slower and less systematically. So download the app, start building your collection, your wish list, and find something beautiful to add. Speaking of tariffs, George Hotz posted from the TinyCorp account a big long post about how the tariffs are affecting his business. And it's really a frustrating story. He says, we talked to Chris Power at Hadrian.
Starting point is 03:36:24 He's obviously a beneficiary of the tariffs. The tinybox is much less of a beneficiary, potentially harmed, and he breaks it down. He says, to date, we have manufactured all tinyboxes in America. However, we buy parts from abroad. There is no way to buy an American-made GPU or motherboard at this time. And there won't be for a long time. If these tariffs stand as is, we would have negative margins on tinyboxes. Our motherboard manufacturer has already reached out and tried to get us to pay the tariffs on things
Starting point is 03:36:52 we already agreed on a delivery price for. But I don't blame them. Their margins probably go negative with the tariffs too. I sort of doubt we'll be getting our 5090s at the price we agreed upon either. And if that's true, the whole thing is really out the window. And even more stupidly, there's a restricted list of countries you can ship 5090s to. So I'm worried.
Starting point is 03:37:13 I'm not so sure we could move manufacturing of the green V2, their product, and the product may just be canceled. I'm not going to spend my time figuring out weird loopholes and incentives to re-export and an FTZ and maybe try and eke out a small profit after all the administrative costs. Tariffs are regulations. When the difficulty of doing business goes up, many people only marginally making money just stop doing business. The US has an ease-of-doing-business ranking of six. Hong Kong's ranking is three. If we manufacture here in Hong Kong, we have free trade and can
Starting point is 03:37:44 continue our policy of selling everywhere and passing the tariffs on to the buyer. European Union people have been dealing with this for a long time. Now US people will too. If we can't get 5090s because of short-sighted US export regulations, we'll have to switch to something that we can get.
Starting point is 03:37:58 we'll have to switch to something that we can get. Maybe a different graphics card. So very frustrating and an interesting real life case study, not just a pundit kind of saying, oh, I think the tariffs are bad for economic reasons, or they're great for economic reasons, or it's some part of some grand 5D chess or whatever. This is somebody who's really trying to build a business
Starting point is 03:38:17 on the ground, clearly one of the greatest programmers of all time and a fantastic, I don't even know how to describe him, like, business person, developer, he's kind of everything. But George Hotz breaking it down about why the tariffs are actually leading him to leave America and focus on Hong Kong, and who knows how that will play out. A very frustrating story, but interesting to hear him
Starting point is 03:38:42 break it all down. And of course, if the tariffs are moving the markets, you're gonna want to be on public.com. Multi-asset investing, industry-leading yields, and guess what, John? They're trusted by millions. Trusted by millions. Investing for those who take it seriously. Go to public.com. Thank you to Public for supporting the show.
Starting point is 03:39:05 And in relation to the market turmoil, Scooge had a funny post here. After 9-11, the stock market lost $1.4 trillion, inflation-adjusted. Today, the stock market lost $2 trillion. So these tariffs are like 1.42 9-11s. Yikes. Rough. 5K likes, very popular.
Starting point is 03:39:22 What else should we cover here? Just FYI, the map of states that are renamed for countries with similar GDP really puts America's dominance in perspective. Just the state of New York is the same size as Canada. Just California is the size of India. Just Texas is the size of Brazil. Just Florida is the size of Indonesia. Mocked.
Starting point is 03:39:45 George Hotz is over in Hong Kong; that's the size of Indiana. Yeah, America stays undefeated. I mean, this is from 2019, we'll see where it goes. A lot of these countries are growing and some of these states might be shrinking, but I'm still long America, even with all the crazy tariffs.
Starting point is 03:40:03 And no better way to get your message across in America than some out-of-home advertising on adquick.com. If you want to break through the noise, you got to go out of home. Out-of-home advertising made easy and measurable. Say goodbye to the headaches of out-of-home advertising. Only AdQuick combines technology, out-of-home expertise, and data to enable efficient, seamless ad buying
Starting point is 03:40:21 across the globe. Should we talk about, what else is interesting? Vittorio had a funny meme, the Virgin tariffs warrior versus the Chad "we'll see what happens," LMAO. Yeah, I feel like this obviously resonated. Our approach has just been, you know, this is happening, we'll see what happens.
Starting point is 03:40:45 There's a lot of negative impacts, and then there's people like Chris Power that are benefiting from it. Overall, it's much less easy to be in the Chad camp if you are George Hotz and TinyCorp, and your product, or the business entirely, is being impacted. So we all have to see what happens,
Starting point is 03:41:11 but definitely feeling that for entrepreneurs this week that are just dealing with immediate repercussions from it and uncertainty. For sure. Should we close out with this thread from Carried No Interest? Let's do it. So Carried No Interest, he's done the show before. He says it's time to coin some new AI software terms.
Starting point is 03:41:31 He's employing Coogan's Law. He says inference to impact, I2I, is the first, and inference risk quotient, IRQ, is the second. And he defines these. He says the first that stands out to me is inference to impact. What does this mean? It's the amount of time it takes
Starting point is 03:41:47 within a new software product from hitting an LLM API to getting customer utility. I've noticed a distribution in these new startups. Let's start with Cursor. Cursor has a very low I2I, inference to impact. As soon as you start ripping in the application, you are hitting an LLM and getting results. The I2I is immediate.
Starting point is 03:42:06 This is a good thing. Now, the opposite is AISDRs. You hit a bunch of LLMs and send some outreach, much longer I2I on this product. Bad. It's kind of the iteration loop. A low I2I is good on a bunch of levels. Your customers are seeing immediate magic. You can onboard users faster.
Starting point is 03:42:23 Sales cycles should be shorter. When you can shorten the amount of time between LLM and utility, this is objectively very good. But it's time for another one, the inference risk quotient, the IRQ. The IRQ describes the amount of risk created for the customer from a series of LLM calls. Some AI-first software companies
Starting point is 03:42:42 introduce very little risk to a customer by calling an LLM. Others, a good amount. Let's describe it. Cursor works well for this example as well. Cursor's IRQ, the risk quotient, is theoretically low. Cursor generates code. That code goes into a diff.
Starting point is 03:42:59 It should be tested and most errors caught as it progresses through staging environments. Solid IRQ. What about the inverse? Let's go back to the AISDR example. Your AISDR starts ripping emails out the door, some of them are bad, they maybe embarrass your marketing department, oof, probably a medium IRQ. What would be a high IRQ, an inference risk quotient? Let's think about Harvey. Harvey is AI software for lawyers to analyze whatever lawyers analyze all day. In my opinion, this would be high IRQ.
Starting point is 03:43:29 If the AI misses something important or misclassifies, you could have potential legal damages. There's probably one more item, one more term in here, LMAO: IURQ, inference utility to risk quotient. This would be the amount of utility relative to risk. Cursor's utility relative to risk is so high. Your staging environment should catch bugs, and your output is much higher.
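To make the framework concrete, here is a minimal sketch of how you might tabulate I2I and IRQ for a few products. The placements mirror the examples in the thread, but the dataclass and the specific numbers are purely illustrative assumptions, not anything from the original post.

```python
from dataclasses import dataclass

@dataclass
class AIProduct:
    name: str
    i2i_seconds: float  # inference to impact: time from hitting the LLM to customer utility
    irq: str            # inference risk quotient: "low", "medium", or "high"

# Hypothetical placements that mirror the thread's examples; the numbers are made up.
products = [
    AIProduct("Cursor", i2i_seconds=5, irq="low"),        # code lands in a diff, errors caught in staging
    AIProduct("AI SDR", i2i_seconds=86400, irq="medium"),  # outreach takes a day or more to show results
    AIProduct("Harvey", i2i_seconds=600, irq="high"),      # a missed clause could mean legal damages
]

for p in products:
    print(f"{p.name}: I2I ~{p.i2i_seconds:,.0f}s, IRQ {p.irq}")
```

None of these numbers come from the episode; the point is just that I2I behaves like a latency metric while IRQ is a risk bucket, so the two can be tracked separately for the same product.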
Starting point is 03:43:51 Fantastic. Framework. Frameworks. Yeah. Very on point. I almost always say Carried No Interest's real name every time. Yep. So eventually you're going to get doxxed, buddy.
Starting point is 03:44:02 It's the nature of tech scenes. Not on purpose, but this reminds me of what Avlok was bringing up. If you're asking AngelList via an LLM what the share price is on a specific position, investment, et cetera, it has to be right, because you're going to go make different financial decisions based on that data. And they need to be an, you know, accurate source
Starting point is 03:44:30 of record on these types of things. And people just aren't going to tolerate, you know, hallucinations in that environment. So I think this very well sums up why we're seeing Cursor's adoption curve, versus we see these sort of AISDRs that are, you know, getting customer traction, but it seems like the churn
Starting point is 03:44:53 is much higher and the utility is much lower. I haven't seen people, you know, it was great having Ishan from Rox on. Sounds like people are getting a lot of utility out of that, but it's almost more of a CRM-type tool than purely an outbound engine. And so we haven't seen anybody raving about their AI BDR or SDR yet.
Starting point is 03:45:16 That also could be that if it's working so well, you don't want to talk about it. Yeah, don't leak the alpha, I guess. Yeah, don't want to leak the alpha. I want to do a couple more. Andrej Karpathy had an interesting post here. He says, let's take AI predictions from blog posts, podcasts, and tweets and move them to betting markets,
Starting point is 03:45:30 our state of the art in truth. Obviously, we are sponsored here by Polymarket, and I'd love to see more AI markets on Polymarket. I'm sure we'll be working on spinning some of those up. Andrej Karpathy continues to say, my struggle has been coming up with good, concrete, resolvable predicates. This is always the tough thing with Polymarket
Starting point is 03:45:48 is that it needs to have a clear resolution. It needs to be not too far out, not more than a year. You don't want your money just sitting there. Ideally, predicates related to industry metrics and macroeconomics, e.g. naively, one might think the GDP, but I'm not so sure that works great,
Starting point is 03:46:04 e.g., cf. the productivity paradox. I also think evals are not amazing predicates, because we see over and over that they are incomplete and hackable and often saturated. I thought this was interesting because he's close to my thesis about artificial economic intelligence. Maybe instead of tracking towards IQ, or how does this benchmark against a human, or the Turing test, we just want to say how much economically valuable work is being done
Starting point is 03:46:39 by LLMs and AI agents and diffusion models, et cetera, with the GPU cycles that we have. We have hard data on CapEx and inference cost and how much energy is going into these data centers, and how much economic value is being produced. And once that hits 10%, that's probably some sort of tipping point. Once that hits 50%, the robots have kind of won,
Starting point is 03:47:01 and that could be good or it could be bad. But it is true that, like, okay, when we hit that 50% of GDP generated by AI threshold, the robots, the AIs, are producing more economic value than all of humanity combined. That feels like singularity territory to me. That feels like a fundamentally different society. That feels like UBI or something. Yeah, in 2027. Yeah, yeah. I would take the under on AI generating over 50% of GDP by 2027.
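For what it's worth, the metric being described here is simple to state. A minimal sketch, with placeholder dollar figures that are purely illustrative and the 10% and 50% tipping points taken from the discussion above:

```python
def ai_share_of_gdp(ai_value_added: float, total_gdp: float) -> float:
    """Fraction of economic output attributable to AI, with both figures in the same units (e.g. dollars per year)."""
    return ai_value_added / total_gdp

# Placeholder numbers purely for illustration, not estimates.
share = ai_share_of_gdp(ai_value_added=1.5e12, total_gdp=30e12)

if share >= 0.50:
    print(f"{share:.0%}: AI matches or out-produces everyone else combined, singularity territory")
elif share >= 0.10:
    print(f"{share:.0%}: past the first tipping point")
else:
    print(f"{share:.0%}: AI is still a small slice of output")
```

The hard part, as Karpathy's post suggests, is not the arithmetic but attributing value added to AI cleanly enough to resolve a market on it.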
Starting point is 03:47:34 But yes, the productivity paradox is tricky. Tyler Cowen has tried to formulate this, saying that the AI doomers should express their p(doom)s in the form of long-dated puts and say, well, if you really think that this is gonna go poorly, you should imagine that there'll be economic turmoil, and you should be betting on that and profiting off of that. And if you're not, then maybe you're just yapping.
Starting point is 03:47:57 And so he's gonna be on the show in a few weeks, and we'll have to dig in more on how we can concretize these bets and think about how we make predictions about AI progress. There's Humanity's Last Exam, there's ARC-AGI, and there's all these interesting evals and tests, but they're not scratching the itch for me in the same way when some new model goes viral. I care more about the Studio Ghibli moments than the, okay, Google's Gemini 2.5
Starting point is 03:48:30 is two points higher on MMLU or some exam, or hacking AP Bio even, I care less about that. There's also something interesting, which is, if a lab has a truly world-changing discovery or innovation, they should not tell the world, not release it, and just leverage it to the absolute max internally, which again goes back to people's frustration with OpenAI.
Starting point is 03:49:00 Okay, if you are at the frontier and you felt like it should be, you know, open-sourced, and now it's closed, you know, how do the incentives and everything change? Let's close out with Nick talking more about AI alignment. He says, broadly, I think AI alignment people are maxed out in smart and low in wisdom. I like this because it goes back to, when you build a character in an RPG, you have INT and WIS, intelligence and wisdom, and they're slightly different.
Starting point is 03:49:35 And I think as we try and define what makes a human truly successful, it's not always just put all the points into intelligence. Charisma obviously matters. Strength, in the sense of, like, your grit, your grind, your ability to keep on working. Charisma, obviously super valuable for coordination, bringing people together. There's this agency, drive, yeah, a lot of skills, like
Starting point is 03:49:59 rizz, all these different things. And yes, we might be seeing intelligence max out and get to these super-high-IQ models, but are they going to be super agentic, super creative, super driven by wisdom? And this comment is obviously about the AI alignment people. He says, not a comment on anything in particular, I've noticed myself saying it and thought it would be worth writing down, but obviously it's, like, perfectly timed with the AI 2027 thing. There's lots of room for people from other fields to contribute wisdom learned throughout history, even if they can't do a math Olympiad or whatever. I don't think being able to do high-level math helps that
Starting point is 03:50:39 much. I guess one specific comment I'll make is that I'm quite excited about Emmett Shear's new company Softmax. I think he's thinking about things in interesting ways. And so we'll have to dig into Emmett Shear's new company and how he's thinking about this, because he's been very outspoken on AI alignment, but also a very successful entrepreneur who has built up probably a very deep trove of wisdom running Twitch and building a real business that has to interface with the realities of the economy. He's not a pure academic,
Starting point is 03:51:07 but he engages with the AI alignment debate at the same level as academics, in my opinion. And so I'm excited to see that, but it is interesting. Obviously Shawn was talking about this, like, AI 2027, maybe it's just fan fiction. I like fan fiction. I had fun reading that. Thought-provoking.
Starting point is 03:51:26 And I think there is utility in that, even if you're just telling a sci-fi story. I'm here for it. I'll read it all day. So I would say... Yeah, we want people doing that work. Yeah, I would say more of that, but also, you can't lose the plot. We have to announce a specific milestone, which is that we just hit four hours. A four-hour show. We're clearly addicted. Now, you know we've been talking about AI predictions. I have one prediction to make. Yeah? Which is that Monday at 11 a.m. Pacific, 2 p.m. Eastern, we will be back sitting in these chairs,
Starting point is 03:52:06 ready to go again. And I can't wait. I mean, what's your prediction for TBPN 2027? I think we'll be here. We'll be right here. We'll be right here doing the same thing. Doing the same thing. From the front lines of the battle scarred
Starting point is 03:52:19 Terminator apocalypse, we will be shooting humanoid robots with microphones in our other hands. James Bond's back, baby. We will never surrender. We will never surrender. We will keep doing this until the AI, the bioweapon that the AIs release, you know, just sort of impels surrender. Maybe we'll be podcasting from a hermetically sealed, biosecure facility, so that no one gets in, no one gets out, but we're always live streaming.
Starting point is 03:52:49 And then sleeping on our eight sleeps and then waking back up. That's right. And doing it again. Have a fantastic weekend everyone. Thank you for tuning in. We appreciate you all. Yeah.
Starting point is 03:53:00 And looking forward to Monday. Thanks a lot. Looking forward to Monday. Cheers. Bye.
