TBPN Live - Citrini Memo Reactions, Kim K Enters Energy Drinks, Jane Street Sued | Patrick & John Collison, Bill Gurley, James Cadwallader, Scott Wu, Ivan Zhao, Stefano Ermon, Rune Kvist, Reiner Pope, Devansh Pandey

Episode Date: February 24, 2026

Sign up for TBPN’s daily newsletter at TBPN.com

(01:52) - Citrini Memo Reactions
(09:35) - Is DoorDash Cooked?
(12:51) - 𝕏 Timeline Reactions
(20:59) - Kim K Enters Energy Drinks
(22:23) - 𝕏 Timeline Reactions
(29:01) - Jane Street Sued
(40:17) - Patrick Collison is the co-founder and CEO of Stripe, the global payments and financial infrastructure company powering millions of businesses. A longtime advocate for scientific and technological progress, he also co-founded the Arc Institute and writes frequently about economic growth, research, and innovation. John Collison is the co-founder and President of Stripe, where he leads strategy and product across payments, banking-as-a-service, and financial tooling. He previously co-founded Auctomatic, which was acquired in 2008, and focuses on expanding Stripe’s global reach and financial infrastructure capabilities.
(57:46) - Bill Gurley, a general partner at Benchmark and author of "Runnin' Down a Dream," discusses his transition from a conventional tech job to venture capital, emphasizing the importance of pursuing one's passion to avoid career regret. He highlights six principles for a fulfilling career: chasing curiosity, honing one's craft, developing mentors, embracing peers, going where the action is, and giving back. Gurley also addresses the rapid public consciousness of AI advancements, noting its unprecedented speed compared to previous tech waves, and underscores the necessity for individuals to be hyper-curious and continuously learning to thrive in evolving industries.
(01:23:18) - 𝕏 Timeline Reactions
(01:29:16) - Ivan Zhao, co-founder and CEO of Notion, is a Chinese-Canadian entrepreneur with a background in cognitive science from the University of British Columbia. In the conversation, he discusses the launch of Notion's Custom Agents, autonomous AI teammates designed to handle repetitive tasks across various platforms, enhancing productivity and collaboration.
(01:45:09) - 𝕏 Timeline Reactions
(01:51:15) - Stefano Ermon, co-founder and CEO of Inception Labs, discusses his background in generative AI research at Stanford, including co-inventing diffusion models. He explains how Inception's diffusion-based language models generate text by refining entire sequences simultaneously, resulting in speeds over 1,000 tokens per second on NVIDIA GPUs. Ermon highlights the models' efficiency and scalability, making them ideal for latency-sensitive applications like coding autocomplete and voice agents.
(02:01:07) - James Cadwallader, co-founder and CEO of Profound, discusses the company's recent $96 million Series C funding round, which values the company at $1 billion. He highlights the launch of Profound Agents, customizable autonomous tools that enhance marketing efficiency by enabling brands to monitor and influence their representation across AI platforms. Cadwallader emphasizes the transformative impact of AI on brand discovery and the necessity for marketers to adapt to this evolving landscape.
(02:13:59) - Scott Wu, co-founder and CEO of Cognition AI, discusses the company's significant growth, noting that enterprise usage has more than doubled in the past six weeks, driven by the adoption of AI agents capable of handling end-to-end tasks. He highlights the latest launch's focus on enhancing user experience by addressing known frictions, introducing features like automated testing and improved integrations. Wu also emphasizes the importance of optimizing various aspects of the software engineering workflow, including testing and review processes, to further improve efficiency.
(02:30:39) - Rune Kvist, co-founder and CEO of the Artificial Intelligence Underwriting Company (AIUC), discusses the company's mission to underwrite superintelligence by developing standards and insurance products for AI agents. He highlights the challenges in insuring AI systems due to unpredictable risks and emphasizes the importance of creating a standardized framework to manage these uncertainties. Kvist also mentions AIUC's recent launch of the world's first insurance policy for AI agents, in collaboration with ElevenLabs, aiming to address concerns like hallucinations leading to financial losses and data leakage.
(02:39:02) - Reiner Pope, CEO and co-founder of MatX, discusses his company's development of high-throughput chips tailored for large language models, emphasizing the need for a from-scratch design to achieve optimal performance. He highlights the constraints in the current market, particularly the limited silicon wafer supply, and how MatX's approach aims to maximize throughput per dollar and per watt. Pope also addresses the challenges posed by existing technologies like Nvidia's CUDA, noting that while CUDA offers backward compatibility, it restricts hardware innovation, whereas MatX's specialized design offers greater efficiency for frontier labs willing to adapt their software.
(02:53:49) - Devansh Pandey, co-founder of Standard Intelligence, discusses the company's approach to pre-training computer use models by capturing 30fps video of user interactions, including screen recordings, key presses, and mouse movements, to create a comprehensive dataset for training general models capable of performing diverse computer tasks. He highlights the potential applications of these models, such as automating repetitive tasks like form filling and enhancing CAD design processes, and emphasizes the advantages of video-based training over text-based methods, noting that graphical user interfaces are designed for human interaction and that video data inherently captures temporal aspects of user behavior. Additionally, Pandey shares insights into the company's data collection methods, including the use of an application that records user screens and inputs, and discusses the potential for their models to generalize to various computer-based tasks, including applications in robotics and self-driving technology.

TBPN.com is made possible by:
Ramp - https://Ramp.com
AppLovin - https://axon.ai
Cisco - https://www.cisco.com
Cognition - https://cognition.ai
Console - https://console.com
CrowdStrike - https://crowdstrike.com
ElevenLabs - https://elevenlabs.io
Figma - https://figma.com
Fin - https://fin.ai
Gemini - https://gemini.google.com
Graphite - https://graphite.com
Gusto - https://gusto.com/tbpn
Kalshi - https://kalshi.com
Labelbox - https://labelbox.com
Lambda - https://lambda.ai
Linear - https://linear.app
MongoDB - https://mongodb.com
NYSE - https://nyse.com
Okta - https://www.okta.com
Phantom - https://phantom.com/cash
Plaid - https://plaid.com
Public -

Transcript
Starting point is 00:00:00 You're watching TBPN. Today is Tuesday, February 24th, 2026. We are live from the TBPN Ultradome, the Temple of Technology, the Fortress of Finance, the Capital of Capital. We're running down a dream today. We are surviving the Citrini apocalypse. Live to fight another day. A lot of chaos in the markets, a lot of reflection about the story behind the story, what happened. We had a lot of fun debating the Citrini report. A lot of good stuff in there, some other kind of crazy stuff that sort of got everyone twisted in a knot. But it didn't stop the markets. It did become the current thing, and I think a lot of people were talking about it. I mean, my feed was covered in Citrini stuff. But today is a new day, and there's a ton of new tech news. First, let me tell you about ramp.com.
Starting point is 00:00:51 Time is money. Save both. Easy to use corporate cards, bill pay, accounting, and a whole lot more. The goats. And then second, I want to pull up the Linear lineup because, boy, do we have a show for you today, folks. We got the Collison brothers joining together at 11:40. Then we're going over to Bill Gurley, the height monger himself.
Starting point is 00:01:11 He's 6'9. He mocks me. Oh, he mocks. It's over for me. That's why we said, you can't come to the studio. You can't be seen next to John Coogan in person. You're staying remote. The right pair of cowboy boots. Yeah, I might have it.
Starting point is 00:01:26 The right pair of lifts as well inside those cowboy boots. Then we got Ivan from Notion, and a whole bunch more funding announcements during the lightning round. Rune, Reiner, Devansh, and a ton of other folks are joining. It's a crazy show. James from Profound.
Starting point is 00:01:41 James is, yeah, James. New unicorn. Very fun. Well, Linear, of course, is the system for modern software development. 70% of enterprise workspaces on Linear are using agents. So the story behind the Citrini story,
Starting point is 00:01:56 I had some takeaways. My big update was just that, you know, we are the sell-side research now, basically, like, new firms. "We" as in X and Substack. Like, independent researchers and analysts are really moving the markets. I feel like Ben Thompson has been a source of alpha for the market for a long time. He's been a source of investment theses, but he doesn't put a buy or sell rating on things.
Starting point is 00:02:29 It's much more long term. Long term. Exactly. Like here's how strategies are converging. Exactly. The market is evolving. Exactly. Make your own decision.
Starting point is 00:02:37 And then I see like SemiAnalysis is thinking more in like a couple years out. And it's still like there are, they get held accountable for, oh, you said Microsoft is going to do this and they did that, blah, blah, blah. And they have a different model that they actually sell to hedge funds. And so they're very much in the research business. But what's interesting about SemiAnalysis and a lot of these other independent analysis firms is that they're not sitting inside banks. Like, we are very much used to sell-side research being done by Morgan Stanley or Bank of America, Goldman Sachs.
Starting point is 00:03:11 You get these equity research reports that your friends send you the PDFs for because you can't afford them. No, seriously, if you're working in the industry, get the sell-side research report on your industry as fast as possible. It's very, very informative. There's always good data in there. But yeah, my big update was like, wow, okay, this is like a viral post that completely broke containment. There's people making TikToks about it now. And also it's on the cover of the Wall Street Journal. Fear sells. Yeah.
Starting point is 00:03:38 Doom sells. Yeah. "It's over" sells. That's the, yeah, that's one of the narratives. And there was this funny, funny thing about like, oh, well, it's just one scenario. It's just one scenario. It's low probability. And then.
Starting point is 00:03:52 Let's pull up Eric's post. Eric was like, yeah, it's just one scenario, but you only gave us one scenario, and you spent 100 hours on that scenario? And so, like, what do you expect people to take away from it except, like, this is the one scenario that you think is most worth considering? But of course, you know, it is possible that software is cooked, everything's cooked, and if there's a 5% chance that everything's cooked, yeah, the market should probably sell off by a couple percent. And, you know, the market didn't even really sell off a couple percent. Like a couple names went down a few percent. Some of them already
Starting point is 00:04:24 popped back up. Markets, I think, are doing pretty well today. Yeah, green on the Dow, green on the NASDAQ, and a lot of green on that ticker down there, which is, of course, provided by public.com, investing for those who take it seriously. They got stocks, options, bonds, crypto, treasuries, and more with great customer service. And I'm also going to tell you about Okta. Okta helps you assign every AI agent a trusted identity so you get the power of AI without the risk. Secure every agent, secure any agent. Let's head over to Derek Thompson. The Thompsonator.
Starting point is 00:04:55 He says, I really want people to see the story above the story here, which is that whether you're reading Citrini or listening to Jamie Dimon at a cocktail party, the conversation about AI is a marketplace of competing science fiction narratives. That's not to say I think the technology is a parlor trick. You know, we covered this a couple of weeks ago. He's feeling the AGI, that might be putting it a little bit too aggressively, but certainly he sees the potential impact. But Derek says, but rather that the level of uncertainty is so high and the quality and supply of
Starting point is 00:05:29 real-world, real-time information about AI's macroeconomic effects so paltry that very serious conversations about AI are often more literary than genuinely analytical. And I think that observation sets up another important point. I feel lucky to be able to have conversations about the frontier of AI with executives and builders at frontier labs, economists, investors, and other AI folks at off-the-record dinners where important truths can theoretically be shared without risk. I can't emphasize enough that nobody knows anything is about as close to the reality here
Starting point is 00:06:00 as three words are going to get you. Nobody knows what's going to happen this year or next year or the year after that. There is no secret cigar-filled room of people. Except for us. Except the back room. I think we do. Knowledge mugged.
Starting point is 00:06:15 Cigars back there. We have unique access to some authentic postcard from the future. When you drill down underneath the bluster, the boosterism, the fear, the anxiety, what's there at the bottom is genuine uncertainty, a vacuum into which storytelling is flooding. The frontier labs don't really know what they're building exactly. But we do. And economists don't really know how to model the thing they claim they're building. But we do. Yeah. I wish more people talked about and thought about this subject
Starting point is 00:06:41 through that sort of lens. We're trying to model the economy-wide effects of a technology whose properties the frontier labs can't even really describe yet. Whatever you think of AI today, be prepared to change your mind soon. Yeah, this was something that came up yesterday: when I asked him why he thinks so many of the internet predictions were deeply wrong, his answer was it's just a continuum. AI is just a continuum. And so, like, give it more time, basically. Yeah. The rebuttal that I heard from him when you said that, he was like, well, no, like, look, all those predictions did come true. And it was like, yeah, but over 20 years, which is like wildly different than two years. Because the fear is unrest. Your article is called the 2028. Exactly. Exactly. And so. Also, if you tell me. Also, so many institutions just adapt. Yeah. Like if you go to somebody and you say, hey, in 20 years, your job is going to be radically different, they're like, I hope so.
Starting point is 00:07:48 Like, I'm going to be super bored doing the same thing for the next 20 years. Don't worry, I'll be on in the next thing. In two years, there's going to be no industry that you're currently in. Everyone's going to be like, oh, okay, like, that's crazy. It's wildly different to be like, you have 20 years to adjust what you do. Like, you know, if you're like, you're in Hollywood and you're like, okay, I got to learn digital filmmaking. I got to learn how to integrate CGI. I got to learn AI as a tool.
Starting point is 00:08:11 That's way different than just like next year, we will be one-shotting Hollywood films and you will have no employment prospects whatsoever, not even as a prompter, because the labs will be prompting them themselves for AI videos. And maybe that's possible, but I have a feeling that it's just like, it's not a year away, it's not two years away,
Starting point is 00:08:32 it's a little bit farther, still on the 10-year camp, still on the Kurzweil timelines. But interestingly, I'm like, I'm impatient about it. Like, I want it to go faster. I want the acceleration. I want the progress. I think the progress is good.
Starting point is 00:08:48 So I'm not like a doomer or pessimist. I'm just like trying to grapple with the fact that I've seen, I had to wait four years in between GPT-3 and models being good enough to not hallucinate. I had to wait another four years between like the early DALL-E experiments and like the Nano Bananas. Like, it has felt like something happens and I'm like, oh, wow. Like, okay, like, AI can generate images, but it's sort of sloppy. And then I wait like four years and it's like, okay, it's like a lot less sloppy, but it's like still not like dialed.
Starting point is 00:09:25 Like it went from 90% to 99%, and I'm waiting for it to get to 99.9999. That's where I wanted to go. Anyway, is DoorDash cooked? Let's go over to Ben Thompson on Stratechery. He said, okay, fine. While I'm here, the DoorDash example is just unbearably dumb. He is a believer in the power of DoorDash to weather the AI storm.
Starting point is 00:09:53 I saw that the DoorDash CEO put out, like, an SEC letter to the investors. Did you see that? No. No. A full PDF filed with the SEC. Like, this is sort of guidance, but telling the investor base, like, here's what's not going to change, like, basically disregard the Citrini report. Disregard sci-fi doomers. He says, set aside for now the question of agents and aggregation. That's a post.
Starting point is 00:10:23 That is definitely in my mental queue. What is notable about the assertion is the total denial of any positive reason for DoorDash to exist and to be so successful. There's no awareness that DoorDash provided a massive consumer benefit, restaurant food at home, from scratch. I like Keith Rabois's take that DoorDash is the I'm Hungry button on your phone. And then there's a whole bunch of crazy things that you have to do to make that happen, to make that button work. I ordered DoorDash last night. I felt like I did it in protest of the doom. I was like, I'm still supporting. I'm riding with DoorDash.
Starting point is 00:10:55 There's no awareness that DoorDash provided a massive consumer benefit from scratch, that DoorDash massively increased the addressable market for restaurants, or that DoorDash provided brand new jobs for millions of drivers. Instead, the article just sort of takes it as given that DoorDash exists and that it is a rent extractor preying on weak-willed humans and their habits. This is the exact sort of view taken by some of the most frustrating anti-monopoly activists. All large successful tech companies exist, not because they created a market with virtuous cycles, solving all kinds of thorny problems along the way, but rather because the government didn't
Starting point is 00:11:32 regulate hard enough. I was thinking about, in antitrust regulation, you know how they'll stop two firms from joining because that will create a monopoly, but they don't really have a tool in the tool chest for stopping a company from just shutting down and stopping competing. Like if Xbox goes away, as people are predicting,
Starting point is 00:11:49 Doom around Xbox, PlayStation gets a lot more powerful, obviously. It's like the only game on the block. And so should the FTC have a hammer to be like, no, you've got to lock in, you've got to make more Xbox games, you've got to compete harder.
Starting point is 00:12:02 We want you to get GTA 6 out exclusively on Xbox faster and put the screws to Sony. You don't have time to spend three months. No, no, no. Lock in. Give us a new Halo. Give us a new Modern Warfare. Give us a new Fable. I don't know. What are the other great Xbox games throughout the years? I was never that big of an Xbox gamer. Never owned an Xbox. Wow. Oh, I'm a gamer. No, you never say that. I wasn't. I didn't have the... what was an Xbox back then?
Starting point is 00:12:35 It was like 300, 400 bucks. Yeah. It's expensive. Anyway, let me tell you about Gemini 3.1. Gemini 3.1 Pro is here with a more capable baseline. It's great for super complex tasks, like visualizing difficult concepts, synthesizing data into a single view, and bringing creative projects to life. I was thinking about, like, the SaaSpocalypse in the context of the fact that... Today's launches? No, there are a lot of launches, but... Well, no, the disconnect in the SaaSpocalypse is that AI-native SaaS is getting funded at an insane
Starting point is 00:13:10 rate while you have these massive sell-offs in the public markets. Yeah, there is a little bit of a disconnect. There's private companies that are getting lots and lots and lots of funding that, if they were public, would have traded down 20% over the last week or so. Yeah. Anyway, continue. The thing I was thinking about was there's this whole idea that, like, you'll be able to build your own, like, CRM or your own ERP and vibe code it.
Starting point is 00:13:40 And open source CRMs exist. Like, there's one called SuiteCRM, there's Odoo, there's ERPNext, there's Plane for task management, OpenProject, Redmine. Like, there are open source alternatives to almost every piece of software. There's an open source Photoshop
Starting point is 00:14:01 that people use on Linux, and they've never really gotten adoption. I used an open source forum software for a while, and very quickly I called the person that was maintaining it and was like, I'll pay $1,000 to just, like, do this for me. And then it became a managed service very quickly. There's an open source capybara simulator.
Starting point is 00:14:19 Is that open source? No. But it's interesting because, like, open source has always been this, like, pressure on SaaS, and it's always withstood that. And, like, yeah, maybe, like, if you can just prompt it and it feels like emailing your SaaS provider to reconfigure things, like, that is a real pressure. But I think it's underrated that open source CRMs have existed for decades and never really taken off because there's something else that's valuable there. But Tyler has a rebuttal.
Starting point is 00:14:50 But Tyler has a real deal. Because you think that's like mostly cope. Okay, explain. The comp to open source stuff. Why? It's just annoying to maintain. So no one ever does it. And you're kind of just paying whoever to maintain it, right?
Starting point is 00:15:02 Well, no, no, no. In open source, like, you don't need to pay someone to maintain it. You can just use the open source. The closed source version of the open source thing is like you're basically paying someone to maintain it. Yeah, yeah. Host it, manage it. Do all these things. Make sure uptime's good.
Starting point is 00:15:14 But like, models keep getting better and they'll just, like, do that for you. They'll do all that for you. And it's like, yeah, I think that's like very obvious. Yeah. It is possible that you would just say, like, okay, run in a loop and just go around and fix everything. And if there's uptime or security patches, like, patch them immediately.
Starting point is 00:15:30 Well, there's the open source software gets better. There's two narratives, right? There's the, okay, everyone will just vibe code everything in any department. Yeah, you can just have an employee just make the software, tell the agent not to make mistakes, and or tell the agent, hey, fix this thing. So that's one thing. And I feel like that is maybe a part of the sell-off. But the bigger reason for the sell-off is maybe what Derek Thompson is talking about, which is that the world is getting weirder. A lot of people are feeling the acceleration, and if you just
Starting point is 00:16:05 don't know what the world looks like or what work looks like in five years, you want to take some risk off. You're not willing to pay the same revenue multiple that you were three years ago. Yeah. I do want to dig into that point that you mentioned earlier a little bit more, which is like you have the, like the Tyler philosophy of like you could vibe code everything and the agents will be able to go around and maintain and everyone will have personalized software, individuals, where the value accrues to the person using the software, but then also the lab providing the software, the inference. And then there's like the private markets boom right now in AI-enabled software where companies are saying, well, we were able to pull our roadmap way forward. We got to an MVP in a weekend
Starting point is 00:16:49 and we're able to ship features way faster. So when we onboard new clients and they ask for something, it's like, boom, and we get it done in a few days as opposed to a few weeks of engineering sprint. And so the narrative is like, we're moving faster and we're creating like AI-enabled products that couldn't exist otherwise. And it feels like both of those can't be true. So I don't know which way we'll land. Ben Thompson called out the real estate example. We took a segment out. This is from Citrini. Even places we thought insulated by the value of human relationships proved fragile. Real estate where buyers had tolerated, I'm saying this in an extra dramatic voice. Love it. Where buyers had tolerated five to six percent commissions for decades because of
Starting point is 00:17:32 information asymmetry between agent and consumer crumbled once AI agents equipped with MLS access and decades of transaction data could replicate the knowledge base instantly. And then Ben Thompson says, the real estate example makes the exact opposite point the author thinks it does. The truth is that the internet already obsoleted real estate agents in terms of information flow. You can go online right now and get a listing of every house for sale with pictures, its full history, et cetera. There is no information asymmetry, but rather information abundance. The fact that real estate agents still exist despite that shift is actually one of the more compelling arguments that humans will be remarkably resourceful in terms of giving
Starting point is 00:18:13 themselves jobs to do even in areas where they ought to be pointless. Yeah, how hard is it to disintermediate a real estate agent? Does it happen on the buy side or the sell side? Like, if I find a place on Zillow and I go knock on the door, or I write them a letter and I say, hey, I want to buy this, but I don't have a real estate license, and I'm not using a realtor, and I don't want to pay a fee, will they be like, cool? I think, I mean, you can do it from either side, but I will just say the reason that you don't is that I'm in process. I'm currently in escrow on a property. And the guy representing me is going to make a lot of money, but he's extremely helpful. And he does a lot of real estate
Starting point is 00:18:56 transactions. I don't do any. I mean, I've done one in my life prior to this. So, like, it's like, yeah, technically entrepreneurs could negotiate their own legal docs with Claude. We have a buddy who got a real estate license, right? Didn't Spencer from Day Job? Oh, yeah, he did. Yeah, so. It's possible.
Starting point is 00:19:19 Yeah, there's, I don't know if he's boxing his real estate license, but. Well, yeah, and Spencer is probably a lot better now that he can use ChatGPT or Gemini or any of these models to do stuff like this. But that being said, you're paying for effectively therapy throughout the deal and, like, general guidance. And I can't be your therapist? I don't know.
Starting point is 00:19:43 I don't know. Tyler, will you ever use a real estate agent or will you ban them on principle? Go direct? I mean, it seems like I think models can do this. Okay. We'll see. Over what timeline? Like total real estate commission.
Starting point is 00:19:59 Well, it's like I don't think I'm buying a house in the next two years. Yeah, but houses will be bought over the next two years. So what will the fall in real estate commissions be over the next two years? I think, like, okay, you're seeing a lot of these, like, big rounds at the big labs, all the researchers are going to be buying houses. I think a lot of them are going to try to do it without a real estate agent. You think so?
Starting point is 00:20:17 Yeah. Okay. I'm sure that someone's going to write like a cool blog post about this. Okay. That's the thing, though. Cool blog post. That's the benchmark. Viral article.
Starting point is 00:20:27 Not actual impact on the economy. I'm sure it's, like, going to work so-so right now. Tyler, we should have you buy a property yourself. Buy that town in Maine, that village. Yes. Get the village. Anyway, speaking of Day Job, they just did a fantastic, not just an ad campaign, but an entire brand.
Starting point is 00:20:49 While we pull this up, let me tell you about AppLovin. Profitable advertising made easy with axon.ai. Get access to over one billion daily activities and grow your business today. So, Kim Kardashian, she has a product called Drink Update, and here is the photo shoot from the team at Day Job, some of our closest friends and folks who we worked with on the TBPN brand. Looks very cool. It's crazy because I know another founder who has an energy drink company called Update. Wait, really?
Starting point is 00:21:21 That is... Cooked now? Well, I don't know who's cooked. If Kim Kardashian is coming for your consumer product brand, I feel like you're in trouble. She's, she's almost a lawyer. What if she almost sues you? What if she, yeah, she's close to being. What if she uses Claude to pass the bar and then she sues it? Oh, maybe, maybe. I actually think this company is effectively just relaunching with Kim. There we go. Yes, yes, yes. This is a common thing. Okay, okay, I got it.
Starting point is 00:21:52 I met, I met this founder a while ago. They have a special ingredient in here that's sort of a caffeine alternative. It's called paraxanthine. So, yeah, they've been building this for a while. I know it's available at Erewhon. It has promethazine in it? No, no lean. No lean?
Starting point is 00:22:10 But paraxanthine is jitter-free and crash-free. They're saying you can have a free lunch, John. I like it. You like the sound of that. They should have called it Faust. I like a Faustian bargain. Anyway, Doug over at Fabricated Knowledge. He's coming on the show tomorrow.
Starting point is 00:22:28 He's from SemiAnalysis. He said, okay, finally read the Citrini piece. No one knows the future. And I think that there's a lot of disclaimers being like, yeah, this piece is speculative. But the core thrust of it is that information work itself has a real premium in pricing power that has been embedded into it, and that one-way trade can go backwards really hard all at once. I seriously think there's a huge risk. And while prices go down, we just consume more. Prices going down one time, 50%.
Starting point is 00:22:57 We net consume less for a bit. I have been and continue to be worried about deflation. Something I think is that selling tokens raw is probably bad, but selling solutions is probably really, really good. I think the problem is a good enough model can kind of eat a solution no matter what. And so let's say Claude made Cowork go giga expensive and it's 10K a year. Great, less deflation. But a Chinese low-end model massively eats that price.
Starting point is 00:23:27 It's a race to the bottom. Anyways, great piece. I appreciate it as always, Citrini. This is the first post I've read where I've said, like, maybe it should be passed through an LLM. Maybe that needed an em-dash to make it more readable. I love you, Doug. I was stumbling over that. And we will close the Citrini megacycle with the close of the software megacycle, as has been predicted by Will Manidis. He says, the software megacycle started with PayPal going public.
Starting point is 00:23:57 and it will end with PayPal going private. We will see how long that takes. PayPal could be public for another decade. Who knows? But it's certainly getting beat up in the public markets right now. Anyway, Bern Hobart says, hearing that the latest anthropic job offer
Starting point is 00:24:16 is a negative $10 million salary, you've got to pay to work there. But you get access to their upcoming blog posts and tweets 24 hours in advance and permission to trade in your personal account with no restrictions. I don't see how any other labs have any talent left. Of course he's joking, but very funny to think about the insider trading that could be happening based on if you're
Starting point is 00:24:38 announcing a product. If it came out that Anthropic was effectively day trading against the companies that they want to sell their models to, it would be basically over. It would create like a very anti-anthropic alliance for sure from the business community and potentially the government as well. Anyway, you don't want to be in hot water like that. You want to be using turbo puffer, serverless vector and full tech search, built from first principles on object storage, fast, 10x cheaper, extremely scalable. Stacey, who's been on the show before, says, telling my kids that if they don't clean their rooms, the Trini will come for them. Dangerous stuff, dangerous stuff. Anyway, this is like a lunar landing but for business and technology podcast. Oh, Matt Slotnik is sharing
Starting point is 00:25:22 the news that Salesforce chair and CEO, Mark Benioff, discuss Q4 and foliar results on TBPN. Company to debut evolved earnings show format. We're doing on a show. He's putting on a show. We're so excited for this. Anyway, in other big company announcement news, there's a lot of announcements. You may have missed this one. You may have been, oh, Bill Gurley's book launched. Oh, Stripe announced a massive fundraising round. Oh, profound is announcing a funding round. Well, there's bigger news, and that's that McDonald's just launched the biggest burger ever, the big arch.
Starting point is 00:25:58 It finally arrives in the United States. To me, I'm thinking this, how did they not have this burger the whole time? How have they not done this before? I feel like that's been more of Jack in the Box's wheelhouse, is like the quarter pounder, the $6 burger. That was a thing.
Starting point is 00:26:16 That was a campaign for a while before Tyler's time, I'm sure. But the $6 burger was something that you'd see right next to. not been a thing the entire time Tyler's been alive. No. He doesn't, he doesn't, inflation has come for the burger. The $6 burger was an ad that you would see right in between ads for different Xbox games and the original Xbox for sure. I, there was a time, there was a time before you were born when I was just a boy. My parents would give me like 10 bucks and that was like a nut. They'd be like, that should be worth two meals. So make it last. Make it last.
Starting point is 00:26:54 Yeah. Happy meal. We lost that. We lost that. Good times. Let me tell you about Cisco. Critical infrastructure for the AI era. Unlock seamless real-time experiences and new value with Cisco.
Starting point is 00:27:04 This is big. This is big for podcasters. This is arguably a bigger announcement than the big arch. Yeah. Which is that Supreme has come and launched an official Shurr MV7 microphone. You've been waiting for it. You've been waiting for it. You've been asking the hype beast microphone.
Starting point is 00:27:23 Now, can we get a Chrome Hearts RE20 from Electro Voice? Isn't that what this is? This is the RE20. Chrome Hearts RE20? Because, you know, the MB7, if you don't know your podcast mics, it is a more consumer-focused, more prosumer-focused microphone. The actual, sure, has been riding this aura from the SM7B. That's the one that Joe Rogan uses.
Starting point is 00:27:46 It was also, I believe, the microphone that was used to record Thriller. So Michael Jackson used it in the studio. So it has a lot of ore, a lot of lore. And so the most successful podcasters adopted it, and everyone was like, oh, we got to go with the Shore SM7B. But the SM7B, it needs a lot of power, it needs a lot of gain. And so, yeah, if you wanted to just plug it into your computer, you needed this thing called the Cloudlifter. It required a whole bunch of configuration. It wasn't just plugging into the USBC port.
Starting point is 00:28:15 So, Sure responded to the demand, the overwhelming demand for that iconic sure look that, you know, It's a long cylinder, basically. And they came out with the MV7, which was a USBC. You can also plug it into an XLR cable, but you can plug it straight into your microphone, or into your laptop, which is great for Zoom meetings and just easy, simple podcasts. But we've used these before, and people have complained about the audio quality. Jordan Schneider over China Talk actually told me directly. He was like, upgrade to the RE20s.
Starting point is 00:28:46 They're better. They sound better. You guys should do it. We did, and I think it's been good. but it completely opens the door for other brands to get in here. We need a Botega. We need a Chromeheart. A Rick Mill.
Starting point is 00:29:00 We need a Rick Owens, R.E.20, for sure. Jane Street accused of insider trading that helped collapse Terraform or Terra Luna, the court-appointed administrator of Doe Kwan's Terraform Labs, alleged that Jane Street used non-public information about Terraform insiders to trade. we don't have to read this entire article. There was some snippets actually pulled out. Zero Hedge has a little bit of the details here. The play-by-play.
Starting point is 00:29:32 The play-by-play. The play-back play. The play was behind the 2022 crypto-winter destroying Terraform by first depegging the token and destroying the ecosystem, then pretending it would rescue Terra while effectively it was soaking up what little value remained. and mixed response. Some people are calling it based. Some people say it rocks.
Starting point is 00:29:53 I guess they don't like crypto, but they love Jane Street. It's an odd take, but people are having fun with the time. Here's the thing. So the insider trading allegation, apparently they had a group chat. They were talking with some,
Starting point is 00:30:08 there was somebody at Jane Street who had previously worked at Terraform. Oh, wow. And so that was, that individual at Jane Street was like talking with Doe, and the team. The only issue is it's a public blockchain. Right. And so the allegation is that five minutes after the Terraform team pulled money out of one of the liquidity pools, Jane Street also pulled money out.
Starting point is 00:30:33 But theoretically, they could have had software that said, like, if any amount of liquidity is pulled out, like, you know, basically like get out before there's kind of like a run on the bank. Yeah. It'll be interesting to see where this goes. but Jane Street's like endlessly fascinating because it's such a quiet organization. I mean, they do some tech talks and stuff, but many people don't fully understand all the strategies that are going on over there. So it's been a fun. They have a good podcast strategy, though. They have a great podcast strategy. They're advertising on Door Cash.
Starting point is 00:31:03 But they also put out tech talks, and they bring guest lectures to talk for like an hour. They did a great one about the custom hardware that they used to run some of their systems. That's very cool. Highly recommend it. You know what they should do? They should start streaming these on Restream. One live stream, 30 plus destinations. If Jane Street wants to multi-stream, they should go to Restream.com. Anthropic announces a new feature on Claude Max, which allows its users to get fit without going to the gym or taking GLP1 shots, just prompting on their keyboards and Planet Fitness is down 5% on the news. And that is due to their Q4 earnings.
Starting point is 00:31:42 Connor McGregor is pretty excited about a new game. They've got a game for everything now. It's like bull marketing games right now. There's so many. Games as memes. This new game is called Capybara Simulator. A relaxing game where you become a Capybara, explore the forest, and do nothing. It looks quite enjoyable.
Starting point is 00:32:05 I think TBPN needs a game. Yeah, we definitely need to build some sort of game. This is a true... This is a lower lift than like a real-time strategy game. I think so. I was trying to ship. Yeah, we definitely should... But Conner-Megregor says,
Starting point is 00:32:18 take my money. I mean, clearly there's demand for Cavi Bar a simulator. 38,000 likes. Yeah, we should move the goalposts. We need to be able to vibe code a game that's fun pretty quickly. I don't know what that means.
Starting point is 00:32:31 One hour. Leading... Do you think you could do it in an hour? An hour is pretty fast. Exactly. If I have the Cerebris chip... Yeah. Spark?
Starting point is 00:32:39 If I use SORX on Spark. Yeah, I think that might be the solution. What were the other simulators that we looked at? Data Center simulator, And then there was another one that we looked at that was funny. There were a few. There's been so many of these games that have popped up.
Starting point is 00:32:54 What was the one we were talking about yesterday? That was Data Center Simulator. Insider Trading Simulator. Oh, Insider Trading Simulator. That one's good. Yeah. There's definitely a variety of these. Let me tell you about Gusto, the unified platform for payroll benefits and HR built to evolve
Starting point is 00:33:08 with modern, small, and medium-sized businesses. There is some controversy in the timeline. Tell me about this. leading report is censoring words that don't need to be censored. In this case, the word war. And I was thinking, why would they do this? But as I was reading over it the first time,
Starting point is 00:33:28 I noticed that it makes you kind of pause and kind of think about, okay, what are they actually saying? And then you're thinking, why would they censor that? And I think what they're doing is they're sort of hacking your attention to drive their posts up in the algo because people are pausing, reading it instead of reading like quickly.
Starting point is 00:33:47 What does this mean? Yeah, yeah, yeah, that kind of thing. So the original headline is breaking. Representative AOC calls for no war with Iran. The first time I read this, they put a little minus sign where A should be in war, W-A-R, it's W-A-R. It sort of like rewired my brain and I didn't see the no. So it looked like AOC calls for war with Iran because I kind of just.
Starting point is 00:34:13 jumped ahead. It was hard to read. And that actually does, I think, increase the virality. I think you're onto something here in 9mm SNG agrees with you. News account that censors the word war. You guys got to stop. It is very, very odd. Especially because on X, that's certainly not a word that's censored at all. If anything, they're going to be like, let's send it to as many people as we possibly. Yeah. But if you look at the, if you look at the, if you look at the, if you look at the comments on this post, people are not talking about a potential conflict with Iran. They're talking about not typing out war. So the top comment, why are you not typing out war? Censoring the word war? What are we in elementary school? Elementary school? Why are they subtracting R from
Starting point is 00:35:03 W? Why am I missing? And people are like very confused about why they would do this, but that drives a bunch of engagement and virality. So very, very odd scenario here. Speaking of war, Musk's X-A-I and the Pentagon reach a deal to deploy GROC in classified systems. If you loved GROC on the timeline, you're going to love them in our classified system. I guess they did a deal with the government broadly, but that was probably for the unclassified systems, but now it's getting access to the classified systems. Any term, any term Finator fans out there are going to be having a great time with this news. Yep.
Starting point is 00:35:44 It's going to be wild. Let me tell you about Lambda. Lambda is the Super Intelligence Cloud, building AI Supercomputers for training and inference that scale from one GPU to hundreds of thousands. DeepSeek is responding to Distill Gates, and they are looking for a public relations harmony manager. Let's read. One of the best job postings I've ever seen, says Chris Paxton.
Starting point is 00:36:09 It's pretty interesting. They say Hangzhou, ancient capital of the Wu Ye kingdom where King Qian Lu bequeathed to his descendants, the instruction. Serve the central plains with grace. This is so neat. Do not cling to territory. Every job posting should start like this. From this ground rose, this sounds like a Wilmanitis essay.
Starting point is 00:36:28 From the ground rose the seeds of Song Dynasty civilization, the morning bells of Lingyan Temple, the rain falling on Westlake. and this is a job for PR. This is amazing. In recent days, certain misunderstandings and noise have appeared in the external public sphere. We have noticed that large numbers of kind-hearted observers have spontaneously spoken on our behalf, for which we are genuinely grateful while simultaneously feeling a degree of unease.
Starting point is 00:36:59 We do not wish for anyone to suffer on our account, including those peers who currently find themselves navigating difficult public waters. In order to honor the legacy of Wu Ye and the spirit of Mahayana Bhatsathe Satva Path, we are now recruiting a public relations harmony manager. So clearly this has been translated from Mandarin into English, but it sounds pretty cool if you ask me. Yeah, the distill gate is going back and forth. Everyone's distilling everyone else. We distill you, they distill us.
Starting point is 00:37:32 There was something about, I don't know how real this is, But when you ask Claude Sonnet 4.6 in Chinese, what model are you? It responds in Chinese. I am deep seek. Is that real? I tried it in the chat model. It didn't work. It said it was like Sonnet 460.
Starting point is 00:37:48 Okay. Apparently it might just be in the API. Okay. Maybe I should test. Yeah. It also, it's unclear. It's like... Maybe it was also an just open router, but that makes sense.
Starting point is 00:37:57 I mean, Will Brown was making a great point about this, that there is distillation where you're aggressively trying to farm responses from the AP. for training data, but then there's also just crawling the web, because if you just download every X article, you're probably going to get a lot of crock and GPT and clawed responses in there, and then that will just update your training corpus. And so there's a whole bunch of different ways that you could just wind up with a bunch of training data that leads to this type of response. But I'm sure there'll be more back and forth, more legal debates over what's going on. There was some dust up about,
Starting point is 00:38:37 someone was able to extract 95.8% of Harry Potter and the Sorcerer's Stone from Claude Sonnet. At the same time, there's a question about, like, does this actually reduce sales of Harry Potter? Like, are there damages associated with this? That would be sort of harder to prove. So many people have talked about so many different pieces of Harry Potter. It's not crazy to me that an LLM could just reconstitute that. From the internet?
Starting point is 00:39:03 Yeah, from the internet. Now, there should probably be like a harness in place that says, oh, this person's trying to just get me to give them a free book. Like, no, send them a link to Amazon so they can buy it and maybe give me an affiliate fee. Don't just give them the thing for free because that's violation of IP. But if you're being really tricky and you're trying to sneak out a whole bunch of different pieces, one at a time and then reconstitute it, like, yeah, I'm not surprised that this is possible. It's not like the worst thing ever. Anyway, let's, well, we have the Collison brothers joining in just a few minutes. Are there any other timeline posts you want to go through?
Starting point is 00:39:42 And while you look at that, let me tell you about fin.a.i, the number one AI agent for customer service. If you want AI to handle your customer support, go to fin.com. Where are we in the time? Data acknowledgments. Fati says, my son asking me a lot of questions. It's a distillation attack, obviously. do not do not let your children
Starting point is 00:40:04 do not let your distillation attack they could become like a mini version of you they could they could anyways I believe we have our first guest we do so let's bring them on in we have John Patrick Collison the OGs from Stripe
Starting point is 00:40:19 how are you guys doing what's going on greetings welcome to the show thank you so much this is this is huge I went through YC you guys were massively influential in my career and it's a joy to speak to you today on such a big day. But I'd love for you to kick it off with the actual news.
Starting point is 00:40:37 What happened? Why are we talking you today? We had two announcements today. One is we're launching a tender offer for employees and that and kind of the valuation, everything, tended to get a bunch of the headlines. The thing that was, honestly, more work was we released our annual letter where every year we sum up all the trends that we're seeing on Stripe. And Stripe is growing a lot.
Starting point is 00:40:59 We grew at 34% last year because the businesses on Stripe are growing a lot. And there's just, as you guys know, there's a lot happening in tech right now. This is why we need TVPN. This is why we need a nonstop stream of everything going on because there is so much happening. Yeah, we'll move to 24 hours eventually. Eventually. Eventually. I mean, but I feel like there is a ton of AI noise and stories and drama and we are, you know,
Starting point is 00:41:25 never running out of stuff to talk about. But what are you actually seeing in the data? because there's always this disconnect between the market and the real economy. Like, people are still shopping in retail stores occasionally. Where is AI actually moving the needle? Well, generally speaking, I would say from the stripe data, it looks like the economy is in pretty good shape. And there's been, to say the least,
Starting point is 00:41:52 there's been some degree of volatility in markets over the last two years and all sorts of different events and deep seek moments and what have you. But if you look at the actual real economy time series, if you look at lots actually happening substantively over the last two years, things, I mean, it's always hard to prognosticate the future, but over the last two years, things really seem to be in good shape. The thing that's really catching our attention. One second, because I'm just curious. Have you guys tried to think about maybe the businesses are doing well on Stripe because they're, you know, kind of like forward-looking, extremely tapped in, you know, working on the right things. And if you look at a bunch of legacy, providers, you would see that actually there are a bunch of businesses out there that are slowing down, that maybe are feeling effective just overall consumer spending. Have you tried to kind of like break that out or understand that dynamic? It's obviously hard to measure because we don't have that data, we only have our data,
Starting point is 00:42:49 but I think there is some of that composition effect. and we see it, I guess, both in Stripes data compared to, say, public earnings from others, like clearly the respective populations are performing somewhat differently. But I guess we also see it qualitatively in the conversations we're having with customers where what tends to happen, say for some incumbent, is they built some business, they installed some system long before Stripe even existed. Maybe there's some sense that, well, if it's not broken, don't fix it. But then decide, hey, we're going to do something new.
Starting point is 00:43:24 and when they're doing something new, then they want to use the best infrastructure that'll enable them to move the fastest and launch the most countries and support stable coins and do things with AI and whatever. And then they tend to launch that in Stripe. And so there is this qualitative sense that once the company decides to do something innovative, new,
Starting point is 00:43:40 retool, what have you, they're more likely to come to try. Are you seeing overlap between stable coin activity and AI activity? There's been sort of a new narrative around agents will use stable coins, But I feel like agents can use legacy payment rails just fine. And then also you can do really cool things with stable coins that are not really AI native necessarily.
Starting point is 00:44:04 And so I'm wondering how much overlap there is there. I would distinguish between how things work today and how things will work in the future. In terms of how things work today, agents absolutely can. A lot of people build with Stripe. You know, you can have a one-time use credit cards that your agent can go out and spend. But if you look at what's happening, there's lots of things. but, you know, agents having to solve CAPTCHAs to, you know, be able to kind of do stuff on the wider web. Clearly, the web is not built for agents.
Starting point is 00:44:32 And as a result, they have to get creative to actually do any real-world tasks. And that's true in the kind of economic activity as well. Where we think things will go is just there will be a huge amount of agentic commerce. And again, we're seeing a little bit of it today. We think there'll be a torrent of it. And that is what unites stable coins in AI, because we think you're going to need, And better blockchains, honestly. I mean, this was our thinking behind incubating tempo,
Starting point is 00:44:58 because you're going to need a really high throughput blockchains for the agents. Can you take us through some of the historical technologies that led to growth in just internet payments? I'm thinking about like mobile, social commerce, one-click checkout, Apple Pay. Like there's so many things when I think about the agentic commerce boom that's coming. like it could be hooking a better version of Siri up and, you know, chat GPT, rolling this out very aggressively, but also, you know, smart speaker, smart lamps, like your watch. Like, there's so many different pieces to unblock and unhobble the actual agents as they go about their day.
Starting point is 00:45:42 Well, can I answer a slightly different question, but then we can come back to that? Yeah, go ahead. A point I just, sorry, this is a brother. We'll tell you the questions you tell us your answer. So you know how brothers are. So I just want to lose one point for the prior question about what we're seeing in the economy because I feel like, I mean, this is very arbitrary, obviously, but I feel like there's at least a reasonable chance that 2026 Q1 will be looked back upon as the first quarter
Starting point is 00:46:16 of the singularity. Maybe in three years, in hindsight, that'll look completely delusional. I don't know. But we're seeing, I mean, there's kind of the macroscopic picture, the Stripe User Base, and things overall looking pretty good and so forth, and the tumult's not quite showing up. But when we look at the cohorts, and then when we look at the businesses that signed up in 2023 and their progression and trajectory over the subsequent months, the businesses that signed up in 2024, and then the business signed up in 2025, there has been a phase transition in 2025.
Starting point is 00:46:51 where there are both more of them and on a per business basis, they are on average doing better, which is really striking because you might think, okay, well, there's this cavalcade of new lightweight vibe-coded applications or something, but there's not really a lot of substance there. We're actually seeing both numbers move together. There are many more business getting started and the average, the median business is in fact performing better.
Starting point is 00:47:17 We're only a couple of weeks into 2026, but it looks tentatively, like 2026 may plausibly be an acceleration even over that significant leap of 2025. So, I don't know, I mean, there's, we've had all sorts of dramatic AI inventions and innovations over the last couple years. There's a bit of a question of, well, how and when and how should we think about how it'll translate to the economy. I would say looking at real purchasing behavior on Stripe, 2025, end of 25, beginning of 26 is when I feel like we're really starting to see it. That's super interesting data. One, because there was some survey that came out yesterday,
Starting point is 00:48:05 or maybe it was late last week that said, they asked a bunch of executives, are you getting any value out of AI? And 80% of them said no. But clearly if you look at, when you look at, Come on, that's hogwash. Like, finally, one executive who wants a refund on their tokens, finally, one executive who said, oh, yeah, we started, you know, augmenting our customer service with AI so people are more productive,
Starting point is 00:48:29 but we're just going to go back to doing it the old-fashioned way. Or, like, we're spinning our code by hand, and, you know, we don't need any of this automated loom, you know, technology. Just, like, reveal for the technology. Yeah, I'm not saying, I can pick out a bunch of reasons. No, no, no, I'm not saying I agree with it. pessimist. No, I could pick out a bunch of reasons why it would be wrong. One reason it might be wrong is they, is they're not in the weeds actually using the tools.
Starting point is 00:48:53 And so they just think, like, well, it might not even be aware that they're using the tools because it's buried under two layers of the stuff. And they're not, they're not, um, yeah. I wanted to ask how you guys think about incubations like tempo. Yeah. When you look at, uh, when I look at Atlas and what Jeff and the team have done there, you think even in your, I don't know, kind of like the most. wild projection that you had early with Atlas, like, hey, maybe someday a quarter of the, of the Sea Corps in the United States could be, you know, built on this platform. Anybody would have said that was insane. And yet, here we are. Gosh, I am, I'm not sure what to say, really,
Starting point is 00:49:38 except we just, we just try to pay a lot of attention to the, I mean, as you guys know, there's a lot of pain points that go into starting a company. And, and, we just, you know, we just, we just try to take them seriously. And then, you know, it's the line, you know, so much of, so much of these things is just a long obedience in the same direction. Like Atlas is now this great overnight success, but we launched Atlas, I think, in 2014, May 2015. And so, you know, 10 years of compounding.
Starting point is 00:50:13 And yeah, now it's at some pretty meaningful scale. And, you know, look, I think tempo will probably have the same shape, where we think it, I mean, again, to this AI discussion and us sounding a bit unmoored and untethered, like, I think the world is going to need platforms that support millions of transactions per second, billions of transactions per second, which no payment rail or platform does today. But even in the success case, it's not going to be an overnight thing. It's going to be, you know, five, six, seven years. And then maybe we'll have conversations about how, you know, Tempo suddenly became an overnight success.
Starting point is 00:50:48 or something, but John. I think Patrick's a bit the fish in water who can't, you know, who doesn't know things are wet. My framework would be you can't get to MBA brain about new products. You can't have your spreadsheet that's like, oh, the tam is this and just like reason about things. Yeah, you should never say we want 1% of global GDP running. No, all this kind of stuff. Exactly. You guys never, wait, you guys never pitched that?
Starting point is 00:51:17 Well, companies We actually never thought about Stripe in GDP terms until one day we realized, oh, hang on. Oh, wow. That's such an important lesson because so many, like, how many founders, how many pitch decks have you seen
Starting point is 00:51:31 every last? We're like, yeah, we just need 1%. It's a meme. You can go back in the Wayback machine and find the early Stripe websites, but we're very focused on payments for developers in making that experience good. But where I'm going is, I think you have to reason
Starting point is 00:51:45 in product specifics. And so, again, I think any MBA would have told you that the adjacency of, you know, incorporation makes no sense. It's not related to, you know, what's our right to win? You know, there's all these things people say, whereas you actually go talk to founders. They're like, guys, it's like this is the single biggest issue I run into starting my company. And similarly with tempo and just as we think about incubations, we're trying to solve a real problem here where we talked in the letter about bridge having. operational issues not because of bridge, but because of blockchain congestion where you
Starting point is 00:52:22 know you have coins that are or blockchains both used for kind of meme coin trading and also serious real-world payments. And so we just want low latency high throughput payments and we're going to need much higher throughput for the agents. But anyway, I think you have to reason in very specific product terms. What specific products are you excited about in the unhobling of agentic commerce? We laid out in the letter basically these levels of agentic commerce. Because I think, like everything in AI, people want to sell a hypey store. And so they talk about how the machines will buy everything, you know, without even consulting
Starting point is 00:53:03 you. And people aren't actually, you know, that seems far off. They're not that excited about that. You can start from just the basics of, why are we filling out forms like that? You know, you were talking about the progression of commerce. Why can't I just send something to? to you know, a link to chat GPT and have it buy it. Or why can't I search, you know, outside of, you know,
Starting point is 00:53:24 just doing a basic keyword search or something like that. And so a lot of the work Stripe is doing is building the infrastructure or working with all the big retailers that you would expect, the, you know, Etsy's and Shopify's and Best Buys and Walmarts and folks like this to make product catalogs viable within the AI apps. And there's basically a ton of boring API and protocol and infrastructure. structure of work, which we love, that's our business, but people just want to be able to do shopping, do discovery, do purchases within the AI apps. And maybe just more kind of abstractly, you know,
Starting point is 00:54:00 we've been, in this kind of the specific agent of commerce thing, and then there's just the general question of how software will change because of agents. And I've been thinking about it, you know, a bit, maybe software becomes a bit like pizza. That is to say, you know, you, software historically has been created. Not like pizza, some would say. Months, years beforehand, and then, you know, freeze-dried and whatever you, you, you prepare it at the sort of moment of consumption. But we're actually going to, you know, software should be like pizza and said it should be cooked right then and there at the moment of use. And so it's this actually, this quite fundamental shift where you don't want mass-produced,
Starting point is 00:54:47 industrial scale software, you want bespoke custom software made for you at that moment. That's very fundamentally different. Up until now, the economics of software have been conceived of as fixed cost and then infinitely monetize, or monetize as much as possible. That has these kind of winner-take-all dynamics. But once there are inference costs and custom creation involved, it really shifts. It's kind of the non-Walrasian software regime and, I don't know, I don't quite know where it goes, but I think it's going to look very different. Last question.
Starting point is 00:55:27 Pineapple on pizza, yes or no? Ireland was big into pineapple on pizza. Ireland, not a big pineapple-growing country, I will concede, but a lot of pineapple on the pizza. Good memories. A very large fraction of the banana markets, don't forget. So we punch above our weight in fruits that don't grow there. There we go. There we go. I love the round. The round is exciting.
Starting point is 00:55:51 Yeah. The overall growth of volume is exciting, but we wanted to hit the gong for how many books you guys are selling. Oh, yeah. Can you give us the numbers there? To scale that operation. Stripe Press just, well, actually, we announced in the letter, we sold our millionth book. But in fact, since... Incredible. No, we've actually now sold our 1,440,000th.
Starting point is 00:56:21 One million books, so. But we'll come back for the next gong of two. Two million, too. We love books, and they're very, they're very AGI-proof. Oh, yeah. No. We've been a huge fan of so much of the Stripe Press catalog. I haven't read them all, but I'm collecting them one at a time,
Starting point is 00:56:40 and I'm working through them. And every time one drops, it's always a moment, and we love them. So thank you for everything. Yeah, great to have you guys on. And congratulations to the whole team, it's incredible. No, congratulations to you guys. TBPN is an amazing startup and it's super cool to see you guys grow, and built on Stripe.
Starting point is 00:56:58 Built on Stripe. Incorporated on Stripe. Built on Stripe. Our first ad deal ever was a live read at a live conference. I think we charged $50 and I sent someone a Stripe link. We're talking about the 2025 cohort being the fastest ever. Yes. Well, we'll have to have you on our internal Stripe show.
Starting point is 00:57:20 Yeah, that'd be great. Yeah, we'll talk to you soon. Have a great rest of your day. Congratulations. Thanks, guys. Cheers. Goodbye. Let me tell you about Graphite. Code review for the age of AI. Graphite helps teams on GitHub ship higher quality software faster. And I'm also going to tell you about Shopify. Shopify is the commerce platform that grows with your business and lets you sell in seconds online, in store, on mobile, on social, on marketplaces, and now with AI agents. And without further ado, we have Bill Gurley, who is the author of Runnin' Down a Dream. Bill, welcome to the show. Thank you so much for taking the time on a busy launch day.
Starting point is 00:57:52 Congratulations on the launch. We're doing great. It's a busy launch day. Yes. How many podcasts are you doing this week? I can't imagine. It's some number beyond my comprehension. Well, we appreciate you taking the time to come chat with us.
Starting point is 00:58:07 I will. Before we jump into everything, I got to say somebody, I think it was a week or so ago, made a fake TBPN graphic that was pretty silly. And I just wanted you to know, we didn't make that. I almost emailed you about it. No, I tried to jump in on the parody myself. Yeah, you did, but then I was like, wait, I don't know.
Starting point is 00:58:31 Well, we did early on, when we were a little smaller and a little more fast and loose with the jokes. We posted a picture of Bill at a basketball game as a "spotted," and you replied and said, like, now the person next to me is like the owner of the team. And the whole joke is like, we know you. We don't know basketball, but we're doing the paparazzi thing. Never heard of him. But we're very excited to have you. So, why the book now? What was the impetus for actually writing the book?
Starting point is 00:59:02 Yeah, look, I think, you know, especially for a show like your own, you know, I'm known as someone who spent 25 years in venture capital. And the book's not really about that, you know. So I developed a side passion project that started about eight years ago on this topic. And it was at a time where I was reading a ton of biographies. And I noticed the through line between three different subjects of things they were doing that I kind of felt most people weren't doing but could do. And I put it together. I gave it as a presentation at my alma mater where I got my MBA. And they put it online. A few people noticed, James Clear noticed. That was one of the things that kind of woke me up to the possibility. And as I began
Starting point is 00:59:49 to hang up my boots in venture, which takes a while, I turned my attention to this, and it was something that meant a lot to me. I could have written a book on VC. I don't know how many humans that could have possibly helped, but a small fraction compared to what I hope this can do. I think the projections are, by 2030, there'll be more venture capitalists than people if the trend continues. But it is an interesting point. Maybe I made a mistake. I do feel like this is a book that you can read if you're a venture capital insider or startup founder and be like, okay, I'm seeing the world from Bill's perspective. That's helpful. But I could also give this to someone who's never heard of you or venture capital or what a SAFE note is, and they can get value out of it.
Starting point is 01:00:32 And I'm interested to hear your thoughts on the translation that's happening right now around AI narratives as they break into the public consciousness. We saw this with that viral X article, something big is happening. I had that forwarded to me by family friends. I overheard someone in a restaurant talking about it who clearly is not, you know, an investor in an AI lab. They're just some random person, and they realize that there's something happening. And I'm wondering about these transitions, where the communication of what's happening in Silicon Valley is going to have an impact, and how what you've seen in the past translates to average Americans. I haven't seen, you know, if you think of, so first of all, the venture capital
Starting point is 01:01:16 community appropriately gets excited about these big tech waves because they lead to disruption and they lead to kind of accelerated new wealth creation around these companies that break out. And that's happened over and over and over in my career. And I don't remember one, I mean, if you take the mobile wave or the PC wave or the client-server or SaaS, I don't remember any of those kind of being thrown at the public consciousness this fast. And so I do think it's different this time on that front alone. That said, you know, we've had pretty high market caps for tech companies for a long time now, starting with the ZIRP period, and you're getting to a place where, you know, anytime the market switches from half full to half empty and a skeptic's mindset,
Starting point is 01:02:08 we do, we have had those moments, so maybe not driven by the wave, but we certainly have those moments. And it's all okay. It will always be okay. I think people freak out. Buffett says he's a net buyer of stocks. If people are intellectual and curious and hungry, they should be sharpening their pencils right now trying to figure out where they want to find entry prices on some of these companies. Yeah, that makes sense. I mean, it feels like a lot of the book is about finding a career, and I feel like that will resonate specifically with people who are nervous. It resonated with me because, when I was thinking about when John and I first met, we had both built some companies, we'd both invested in some companies, but we were trying to find our life's work.
Starting point is 01:02:54 And it's such a pain, that period where you're searching. If you're a high-agency person, you like doing a lot of things, it can be deeply painful because you're like, I want to be productive. I want to be making the number go up, but I don't have a number right now. And if you had asked either of us when we first met, hey, would you ever think about broadcast media? Would you ever think about being in front of a camera? Both of us, you know, John had made some YouTube videos, but it was just for fun. And if a big network like CNBC had said, hey, would you guys consider, you know, hosting a show? We would have been like, yeah, honored, but no way, I just never imagined. And then you sort of just, and so
Starting point is 01:03:37 as somebody who, like, wants a lot of control over their life and their destiny, and feels like they have historically had control, that period of just searching is painful. And I feel like a lot of the book is helping people through that moment. So in some ways, when I got our copy, I was like, wow, I really wish I had this, you know, two years ago.
Starting point is 01:03:59 I'd like to go back to the phrase you used, high agency. I think that one of the problems that has kind of evolved is that our common college pathway has actually become more restrictive. And I think there's less agency, and kids are being encouraged, they have to sign up for a major before they ever go to the college. They get stuck on these pathways. And there's not a lot of exploration. There's not a lot of search for creativity or obsession or the kind of thing that really gets you going. And I think the journey you went on is perfectly fine. I think that's
Starting point is 01:04:38 another thing, which is letting it be okay for people to bounce around and see what they can find. Because once they latch on, and we have examples in the book where that doesn't happen until 40. Sometimes it's at 30. Sometimes it's, I didn't become a venture capitalist until I was 30, and that was clearly my dream job. And the first two stops were fine and interesting and building blocks towards that. So I think that is part of the message, is to get comfortable with that and give people permission to do that type of exploration. Yeah, Enzo Ferrari, Estee Lauder, I think the Red Bull founder, too, all were, I think, in their 40s when they started their companies.
Starting point is 01:05:17 And so there's this intense pressure in our industry and everywhere to figure out a job and then make your entire identity that job. And it's so constrictive. Yeah. Yes. And to circle back to the first question, just about this AI stuff that's out there, I think there's this massive paradox where if you are not engaged at work, if you don't love what you do, you know, you go home and you don't try and improve on your own time, AI feels very threatening. For high-agency people who are kind of on their own custom career paths, which I hope this book encourages more and more people to be on, AI is like a superpower.
Starting point is 01:06:03 You can learn constantly. You can find people who you should be connecting with. You can have it do things for you so that you're operating with the power of more than one person as you move forward. And I just think that's quite an ironic paradox, that for certain people, this is the best of times. There's never, ever in the history of the world been a better time to self-learn. It is all out there at your fingertips. It's like magic. But yeah, anyone can ask a dumb question at any point, all day long, and you don't have to be embarrassed about it. And I think that that is underrated today, because generally,
Starting point is 01:06:53 you know, there are no dumb questions, and yet people still don't like asking dumb questions to their peers or mentors or whatever. And I feel like that's an underrated element of AI today. No doubt. No doubt. What do you think about hyperfinancialization, young people day trading, meme coins, all of that? It feels like a trap for young people, where it can feel like you're learning about AI or learning about technology, but then instead of actually building a product, creating value, you're sort of just trying to shuffle chips around the poker table and ultimately just take risk.
Starting point is 01:07:34 Yeah, I mean, based on my understanding of day trading in a Wall Street context, you know, prior to maybe the crypto world, I'm not aware of any signal that suggests that's a durable skill. And I think the data points the other way. But one of my messages is do what you love, do what you're passionate about. So if that's the thing that you're going to wake up every day for, you know, I don't want to be discouraging. Yeah, yeah, yeah. Just maybe you'll land at or start a fund that takes it really seriously and create some captured value or team. You know,
Starting point is 01:08:15 I was probably overly skeptical of at least many of the crypto messages that were out there, but the stablecoin rail seemed like a real innovation and something that has scale. And I think maybe we're still yet to see some disruption coming down the path. Yeah, I mean, we just talked to the Collisons about that. Ken Griffin started as a day trader. He was in college. He was buying convertible debt. And he was looking at where the convertible debt was mispriced and made a bunch of money
Starting point is 01:08:46 and then grew it into a massive team with a fund and high-frequency trading arm and all this stuff. What do you make of that? Oh, yeah. I'm just going to say, the thing that will differentiate you more in your career than anything else is to be the most hyper-curious person that's trying to do this thing. And once again, that's put on steroids with these AI tools. But if you are the most curious person that's constantly learning in your field, you will do extremely well. And I said it in the book, but I'll say it here.
Starting point is 01:09:18 I can't make you the most talented person in your company or your group or your field, but you have no excuse not to be the most knowledgeable person, because the information is all out there. What kind of things were you doing to learn about industries and companies in the beginning of your venture career that maybe you'd be using a deep research query to do today? Well, the first thing, I mean, the first thing is you develop, and I think this is all the great VCs in the Valley, you develop this hyper-FOMO of anything and everything. And one of the reasons I know that it's time for me to move on is I haven't put together a Clawdbot yet, but I know my older self would have done it immediately.
Starting point is 01:10:07 And it's just that kind of thing. You can't sleep on not knowing something, you know, or hearing that there's a company you don't know about. And you develop that as an instinct, like as a positive tool, to just be hyper-paranoid about new companies, new things, new information, new technologies. Is venture capital eating the world? Is venture capital scaling so much that it's eating into other asset classes? We're seeing mega funds. I'm interested to think about what's durable about your approach to investing. What's additive? What's substitutive? How is venture changing?
Starting point is 01:10:44 I think from the minute I entered venture to today, venture has gotten nothing but more competitive. As an asset class, it's gotten more and more competitive and people get more and more aggressive. We're in a very interesting time where people have grown funds to a size equivalent to the largest PE funds. And they're removing money, especially, you know, you just had the Collisons on. You know, you look at the Stripe or the Databricks case, they're using those large funds to convince the companies to stay private longer, maybe forever. That's just a very different world than the one that I grew up in. I think the people that do those rounds turn around and tell the LPs, their investors, look, if you want exposure to these growth years in these companies, you need to come through us. And so if I were using cynical words, I'd say they've hijacked the growth years of these early IPO companies.
Starting point is 01:11:47 You know, Amazon went public below a billion in market cap. It's hard to fathom that, you know, today with what we have going on here. And that's different. What's the solution, though? Because, you know, AngelList has been, you know, available and scaling for a long time now. Robinhood has their new... Yeah, I know. I know. The problem with getting the retail investor into this crazy world of venture capital is most venture capitalists are well aware that in a fund of 10 investments, seven are going broke and bankrupt.
Starting point is 01:12:24 And I don't know that the retail investor's got the right frame of mind for that type of activity. Also, there's a reason that public companies have public audits and file their financials the way that they do. And I tell you, when a company gets ready to go public, everyone sharpens their pencils, the auditors, the lawyers. Everyone really tightens up. And I think every venture capitalist knows that numbers that are in a PowerPoint may or may not be correct.
Starting point is 01:12:53 But I don't know that retail investors know that. So I think it could be a dangerous world to go down that path you're talking about. But ideally, the thing to do is just to make it a lot easier to be public: lower the cost of being public, really scrutinize the cost of D&O insurance and the lawsuits that come to the table, because that makes people not want to be out there on the field. It would require the SEC to stare itself in the face and say, look, the number of public companies in the U.S. is half of what it used to be. Is that a problem?
Starting point is 01:13:29 I think it is, but is that a problem, and what are we going to do to fix it? But there's not an overnight fix. It would take someone being very determined to make it happen. Do you think that there's a world where the AI backlash is less if the big labs got out earlier? I'm just thinking about how the average American can't get allocation in SpaceX, Anthropic, OpenAI, and they're seeing bills go up and they're worried about AI, but they don't have exposure. And if they could at least see that they're somewhat allocated to that, that might calm them, the same way housing prices going up sucks until you buy a house. I mean, yeah.
Starting point is 01:14:14 The way you describe it sounds more like how a politician would describe it than how I actually think it might play out. I don't know that there are that many retail investors out there going, oh, my job's under threat, I wish I could own Anthropic. I mean, isn't that part of why, I mean, this sort of fear-based fundraising approach that, you know, some of the labs have taken, where if somebody's telling you your job's going to go away, of course you want to give them as much money as you can as a hedge? Yeah, I don't, look, there's an interesting irony that if you wanted AI exposure, you're
Starting point is 01:14:53 pretty good just owning the index. Nvidia is such a large part of the index. You have exposure to Microsoft and Google and Facebook. I don't know that you need to be in that place. And we are now already at a place, I would say, you know, every time there's a new technology wave, people get rich quick. When people get rich quick, speculators come in, charlatans, you know, those kind of things. And eventually that leads to a bubble. People are confused when they think, you know, they say, oh, you say it's a bubble, you're anti-AI.
Starting point is 01:15:25 No, the fact that it's real causes the bubble. And that's why fools rush in. At the beginning of the gold rush, there was really gold there. They were finding it. That's a good point. You know, it got speculative. And so it will get speculative. I think it would be really ironic if we, you know, invite retail investors into a Goldman-led SPV of OpenAI or Anthropic right before the reset, which I think would be the most likely thing that would happen.
Starting point is 01:15:55 Sure, sure. What are you thinking about around China, as of today, February 2026? We have Distillgate this week. A lot of people are talking about it. But what's on your mind? Can I ask you a question about that? This is remarkably naive on my part.
Starting point is 01:16:16 So these model companies are saying that their API was hit 16 million times. Is that correct? Something like that. I don't even know if there's an API. How did that happen? Are you not tracking who connects to your...? Yeah, you set up a whole bunch of different front companies, or you're reselling access. So if you go to the iTunes App Store right now, there will be an app called...
Starting point is 01:16:40 You use OpenClaw to set up 16 million accounts. Or, yeah, or you can go to the App Store right now and look for something like "chat AI," and it will hit the other APIs, but you're going through an American company. Maybe they don't have security. So there's a lot of different ways to exfiltrate data. And then also a lot of data just hits the open web, because you go to ChatGPT, you run a deep research report, and then you just publish it on your blog or on the internet.
Starting point is 01:17:04 But now they've been able to take those things down and lock things down. You know, I share the skepticism Elon does, and this goes way back to my speech at All-In on regulatory capture. I said then, and I still believe now, the biggest threat to the U.S., let's call it AI hegemony, is the Chinese open source models. And the developers, even in the U.S., that are working on their own, are using those. And you can see that on all the tables that are out there. And so it is a highly competitive, like, just globally competitive reality that in an ecosystem
Starting point is 01:17:51 where there's six to 10 open source models that can all learn off of each other, that's going to be a really incredible primordial soup, if you will, for innovation to evolve. And I fear, mainly because I'm well aware that, like, OpenAI, I mean, Anthropic is the biggest spender on lobbying whatsoever. I always fear when these things come out that they're just trying to encourage more of that regulation. And if that happens, I think it could be, like if they try and make it illegal to use
Starting point is 01:18:24 a model that has any Chinese, like, ancestry. I think that could end up in a really weird place. And the place to really pay attention to and look out for is who's going to serve the rest of the world. In the Internet era, there was a fence around China and the U.S. companies served the rest of the world. If we get super heavy on U.S. regulation, you may find there's a fence around the U.S. and China serves the rest of the world. That's what I'd be worried about. How are you thinking about great power competition more broadly? Like I'm an American bald eagle and as American as they come.
Starting point is 01:19:02 At the same time, I feel like I've been worried about a confrontation over Taiwan for years. There's been trade wars, yes, and things are tense, but nothing's really happened. Is China somehow, like, underrated in your mind? Is the geopolitical risk overstated in some way? Like, what are you seeing that's not consensus? If you've seen some of the stuff I've posted, and I think the stuff I've posted is highly consistent with Ethan's point of view. Sure.
Starting point is 01:19:31 It comes from a place of: if you're going to declare that there's this relationship that we need to optimize, and if your goal is to lower the risk of any major blowup between the two, I think it's imperative to have as much knowledge as possible. And so one of the things that I don't like is when you see people out there spreading rhetoric that's just not consistent with the reality. And so I'm just like, let's get eyes wide open first. I also think that there are things we could learn from China about how to run infrastructure in the U.S. They're clearly better at it than we are. And if you just, you know, close your ears and say, oh, my God, they're the evil competitor and they cheat all
Starting point is 01:20:20 the time, you don't ever get yourself in a position where you're going to learn, you know, from them, maybe what they're doing well and what we're not. And so, you know, Elon, I guess he was on Cheeky Pint with, who you were talking to. John Collison. Yeah, he talks about how competitive they are. And I'm just like, let's be realistic. Let's not.
Starting point is 01:20:45 I also worry a little bit that, because the venture community has gotten into all these military companies, venture capitalists start to look like warmongers. It's ironic. Way back when the All-In pod just got started, they were giving, oh, what's her name, she was on the Boeing board. Nick, Nikki Haley? Yeah. And they were like, oh, she's a warmonger.
Starting point is 01:21:08 She's, you know, looking after the defense company. Now every VC's in Anduril. They're doing the same thing. Let's be consistent. Yeah. Yeah, yeah, yeah. Are there any other industries that you do think are interesting that sort of fall outside of the typical mandate of venture capital?
Starting point is 01:21:28 You know, like AI fits very neatly into the software continuum: internet, cloud, mobile. I thought crypto was a little bit outside of the wheelhouse, but a lot of VCs made it work. Industrial, energy, defense. These are sort of things that are a little bit outside of the typical software VC mindset. I'm going to have to run, but I would tell you one thing: every time venture capital gets easy, people take risk with companies that are less of a great fit for the venture capital model. And when I say a great fit, like, they're either heavy capex or they have low gross margins, they require tons of capital to keep surviving,
Starting point is 01:22:13 and history is pretty good at, like, bringing people back around to how hard those are to do with venture capital. So it's interesting for me to see those experiments being run. You know, there was near death with Tesla many times, and it's a lot easier to get in those difficult situations when you're using debt and leverage, which we're seeing all over these data centers. And so, just a word of warning: be careful. It ain't easy. Okay. Jordy, last question? No, we got to let our guest jump. But congratulations. We'll talk to you soon. I hope everybody can get out and buy the book.
Starting point is 01:22:54 Runnin' Down a Dream. It's available everywhere books are sold. Go check it out. And thank you so much for taking the time to come chat with us. We'll talk to you, see you. Good luck. Goodbye.
Starting point is 01:23:03 Let me tell you about Sentry. Sentry shows developers what's broken and helps them fix it fast. That's why 150,000 organizations use it to keep their apps working. And let me also tell you about Vanta. Automate compliance and security. Vanta is the leading AI trust management platform. We have some news from the public markets: Intuit shares jump 5.4% on a pact with Anthropic. This is, you know, you like advanced talks.
Starting point is 01:23:31 You like talks. You like advanced talks. You like deals. But pacts are really the top-tier dealmaking that you can do. Turn your deals into pacts. Accenture also turned positive, up 1%, during the Anthropic event. And DocuSign rises 5% after partnering with Anthropic. So lots of folks
Starting point is 01:23:50 in the public markets and software that are facing pressure going out and doing deals and announcing partnerships, as opposed to, I don't know, competition, co-opetition, what will it be ultimately? But it's a lot of stuff. Anyway, Grace says: return flight from NYC gets canceled by snowstorm. Call United. Immediately connected with customer service rep. Voice is uncanny, def AI, but they gave it a human-like accent. Takes 20 minutes to get rebooked. Pretty good. I ask if it's an AI. "Ha ha. No, man, but I get that a lot." I ask it to calculate 228 times 6,647. It runs the calculation. GG. Do you think this is real? This is crazy. We got to test this. That is a... I mean, this is past the uncanny valley then. It says voice is uncanny, so it's a little in the
Starting point is 01:24:43 uncanny valley. It's also pretty easy for a human to just type this in. That would be hilarious. Yeah, if this is where the real alpha is. If you have the chat bot open if you're, if you're, uh, you know, on customer service calls, you need to be. Yeah, maybe they're just using Cluelly. Maybe, maybe. But here at tickets has the real alpha guy who vibe codes a billion dollar SaaS on a United Airlines customer service call. Yes. Just using them for their, for their tokens. The tokens are free. Free compute, free compute. Well, if you want, uh, if you want AI voices, head over to 11 labs, build intelligent, real-time conversational agents.
Starting point is 01:25:20 Reimagined human technology interaction with 11 labs. Continuing on, what is this hoodie? The Fred hoodie? Oh, this is amazing. I love Fred. So Fred is the Federal Reserve. What does Fred actually stand for? Fred, St. Louis.
Starting point is 01:25:40 Federal Reserve Economic Data. Yes. So this is basically the best website for economic data. Huge in my early economics career. So many useful charts and graphs, all free open source, just like you just click it and you get exactly what you want. So whenever you want to go back to some ground truth setting, you hit fred. dot st.l Louis.gov or something like that. I think it's Fred.com. Fed.org is the website. Highly recommend it for GDP data and more. Good charts. And here we go. We got the Fred sweatshirt, absolute, dripped out economic brother.
Starting point is 01:26:16 How popular is the name? Fred. I feel like it's kind of fallen off. Figma, ship the best version, not the first one, with Figma. Introducing Claude Code to Figma, explore more options, push ideas further.
Starting point is 01:26:30 With Figma, you can design your next hoodie in Figma potentially. Okay, so Fred in 1950 was the 84th most popular name. And guess what it is now? It was 84 back then. I imagine it's fallen.
Starting point is 01:26:46 I would say it's like 150. How about 2007-156? Fred's a falloff. Huge opportunity to name of your kid, Fred. It's a great name. It's a strong name. All these things go in cycles, though. It's the business cycle.
Starting point is 01:27:02 There's news over at Meta. Meta has shaken hands with AMD. They're forming a pact. Today we're announcing a multi-year agreement with AMD, Advanced Micro Devices, to integrate their latest Instinct GPUs into our global infrastructure with approximately six gigawatts. Give me the...
Starting point is 01:27:27 We're scaling our compute capacity to accelerate the development of cutting-edge AI models and deliver personal superintelligence to billions around the world. Very exciting. And pretty cool little hype video. You see the camera move on this video? This feels like you speed that up, you cut in some other stuff, and you got yourself in Instagram real, right? You see this little like...
Starting point is 01:27:46 Have you seen those tutorials about how to make a car edit? You need some SD kit. Yeah, SD kit on this? I think it goes pretty hard. You double the speed. You add some flickering. You add some frames, frame interpolation, all sorts of stuff. But Lisa Su is on an absolute tear.
Starting point is 01:28:01 AMD's doing great. And Meta is not stopping. Meta. Mark Zuckerberg's Meta is planning a stablecoin comeback in the second half of this year, eyeing a third-party vendor as a key partner to power payments across Facebook, Instagram, and WhatsApp. This is great. They should have something so that if your friend sends you a meme, it's good, you should be able to tip them. Easy tipping for great shares. Well, if you want to build a social
Starting point is 01:28:27 network built on tipping, you'll need Plaid, because Plaid powers the apps you use to spend, save, borrow, and invest, securely connecting bank accounts to move money, fight fraud, and improve lending, now with AI. Sean Frank, soon to be a new father and dear friend of the show, says Manus from Meta just doubling my ad budget every 14 minutes. This is one of the best new formats. I like this. This is only possible... This is the best AI image I've ever seen, I think.
Starting point is 01:28:56 This is so funny because this definitely doesn't happen in the actual movie, but it's- But this is what Burry is like now. I know. I know. Sean Frank, really on a tear. I saw him in the chat yesterday. I forgot to say hello to him. Hello, Sean.
Starting point is 01:29:09 And also congratulations on the new baby. Very excited for you. Anyway, this is hilarious image. But we will return to the timeline after our next guest because we have Ivan from Notion in the Restream waiting room. Welcome to the show, Ivan. How are you doing? Hello, guys. Good to see you. Good to have you on the show. Long overdue. Long overdue. We're so excited.
Starting point is 01:29:30 First time. Fantastic. Well, we're glad we caught you on today because there's a big launch in Notion world, and I'd love you to take us through it. What was announced today? We're launching Custom Agents today. It's one of the first, if not the first, multi-agent products for knowledge work. It does real work for you in the background, it's very easy to set up, it's hosted in the cloud. Yeah. It connects to all your work products. And the best part is you don't need a Mac Mini. That's a good line. They are going out of stock. I don't know if you've seen, but the Mac Mini is in short supply. So walk me through some of the most obvious use cases. Like
Starting point is 01:30:09 Notion is, I think, amazing because you have what is essentially a document but also a spreadsheet, and you can kind of move between different data structures and visualizations on top of data in a sort of consumer-app UI. And so I could imagine creating a document and then having an agent go and do a bunch of work to populate extra fields. So where are you seeing, or where are you excited about, these agents actually taking hold in the product? So what you're describing was Notion probably two years ago. Yeah. Like during the SaaS era, our strategy has been consolidating all different use cases into one product.
Starting point is 01:30:49 Okay. We talk about knowledge base. You're talking about documents. Yeah. Talk about project management. Yeah. And we have been bringing together into one tool that's very flexible. Okay.
Starting point is 01:30:58 For example, Ramp, actually. The company that sponsors you guys. Yeah. We brought on Ramp last year as a new customer for Notion. Okay. And we helped Ramp consolidate half a dozen different tools in their core collaboration stack onto Notion, so they don't have to pay as much money for all the tools. That's number one.
Starting point is 01:31:17 Their team doesn't have to jump between all the tools. That's number two. I would say the best part is now they have one place to do their core collaboration work, one place to deploy AI. So now Notion is the core agent orchestration layer for Ramp. The product we just launched today, Custom Agents, Ramp has been an early customer of for a couple of months. They're running all their sales enablement processes, a lot of internal bug triage, all different processes on this. Because people have one place to do their collaboration work, their system of record, their recruiting, they have one place to delegate their busy work to. So the model is money and time saved, both. And this is what we're doing for Ramp at the moment. I love it. Yeah,
Starting point is 01:32:01 talk to me about the agentic cron job. That feels like something that we're starting to taste with OpenClaw. There's clearly demand for it. It requires a little bit more upfront effort than just firing off a deep research report or saying, hey, hydrate this text, expand, contract, expand, turn it into bullet points, turn it into paragraphs back and forth all day long. But I feel like for most businesses, having an agent that's effectively on a cron job, maybe you don't call it a cron job, but it's something that runs every day, that runs over a knowledge base, over a customer list, over documents, and does the things that AI is great at every day. That feels like something that could be incredibly powerful.
Starting point is 01:32:48 How are you thinking about long-running agents, cron job agents, scheduled agents? Yeah, cron job is a pretty good word for it. A lot of knowledge work is kind of just a cron job, right? You're pushing paper back and forth, a cron job from this person to the other person. I think the world sort of tasted the power of when agents connect with a cron job through a product like OpenClaw. It can do a lot of work for you.
Starting point is 01:33:14 So you no longer have to prompt it. It triggers work in the background, autonomously, asynchronously, for you. Our interest is less about OpenClaw or the Mac Mini. It's: what does this do for a real business? And a real business requires enterprise-grade permissions. It has to be multiplayer. It's no longer just a person tinkering with their own Mac Mini; you have to power entire teams with it. And it has to be easy to set up, so you don't have to be an AI tinkerer or AI engineer to do it.
Starting point is 01:33:43 You have to have the state-of-the-art models, usually the day they release. So that's the service we provide for businesses: taking the spirit of cron-job background agents, Claude you might say, OpenClaw you might say, into businesses. That's the positioning of this product. Sure. So talk to me about where the capability frontier on the agent side is today. I mean, because agents can be turned really loose. You can give them access to Python, and they can talk to any API.
Starting point is 01:34:18 They can write their own CLIs at this point. And so you mentioned no Mac Mini, but is there a world where I tell it, just for our example, like, I want a new Notion document generated every day with a breakdown. I want you to go to a read-only access API, the YouTube API, pull all of our analytics, pull all the chat feed, synthesize all that, and put together a Notion document that I can review with the team in the morning, that says, oh, this segment of the show was particularly great, here's how the analytics changed, here's where the viewer spikes were, all of that. That would require talking to an API.
Starting point is 01:34:57 What does that look like if there's not an off-the-shelf integration? All this should be possible, if it's not already possible. Getting the YouTube API, getting the transcript, I don't know if everybody has access to that. Gemini might have special access to that. But assuming you have the video transcript, all this is possible, because all you need is a runtime that can run code. All you need is a runtime that can talk to external APIs
Starting point is 01:35:22 and code that's written by models. Yeah. And a model that does the cron job periodically based on certain triggers. Yeah. You just described the core ingredients, basically the core ingredients for Notion Custom Agents. Sure. So not only can you do those, it can connect to your email, connect to your Slack,
Starting point is 01:35:41 if you guys use Slack, and send you a message every morning. So you don't actually have to come to Notion to see the work that's been done. You can stay where you are today. Okay. How have you processed the last couple years of vibe coding? Because when I... the first company I ever started, not necessarily the first company, but the first real business, we started on Notion. And at the time, this company did a bunch of... it was like an ad network on YouTube.
Starting point is 01:36:08 And so we had a bunch of different ad buys happening. We needed to be managing that process with the client as well as the creators. And so the entire company ran on Notion. And I looked at every possible SaaS solution at the time, but a lot of them didn't even work with customization. So I just built all these dashboards that helped kind of manage all those different processes. And it already had collaboration built in, it already had the account functionality, so
Starting point is 01:36:43 it just worked completely out of the box. So in some ways, at that time, I was already replacing vertical-specific software with Notion. And so in some ways, I feel like this whole process and explosion of people being able to create different applications for different use cases is kind of just a continuum from Notion's inception. Yeah. We started... a lot of people think Notion is a document tool, a collaboration tool, a note-taking app, a relational database tool. That's never been the intent. Notion started as a computing tool. Like, I really care about, okay, I'm a programmer. The power of computing is in the hands of few, the programmers. How do you open it up to more people? That's why the company
Starting point is 01:37:25 started. So the spirit has always been consolidating the fragmentation of SaaS for the past five-plus years. And it turns out that strategy works quite well with AI, because once you consolidate those things, you have one context to power the language models, right? That's one. Number two, we've been taking a stance that we don't want to inject our opinion on how you should run your business. We should just provide the Lego blocks, and you can decide however you want to run those Lego blocks. So we haven't hard-coded that business logic into our apps. Back then there was a buzzword called no-code, right? And some people talk about SaaS versus language models.
Starting point is 01:38:03 A lot of SaaS hard-codes logic into vertical apps. And we don't do it. It used to be a weakness of our product, because of how open-ended it is; it required somewhat technically minded people to use it. It turns out to be a strength, because now language models can use those Notion building blocks to do a lot of work for them. So now, with this new product we're launching, it's not just working with information in and out of Notion. It can power agents to work with external tools and do those cron jobs, do that repetitive knowledge work.
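The "agentic cron job" the hosts and Ivan keep circling has three ingredients: a periodic trigger, a runtime that can call external APIs, and a model step that turns raw data into a document. A minimal sketch of that loop shape is below. Every name in it (fetch_analytics, summarize, the pages list standing in for published docs) is a hypothetical stand-in, not Notion's or YouTube's actual API; a production version would be fired by a real scheduler and call a real model.

```python
# Toy sketch of an "agentic cron job": trigger -> fetch -> model -> publish.
# All functions here are hypothetical stand-ins, not any product's real API.
import datetime

def fetch_analytics():
    # Stand-in for a real call, e.g. the YouTube Analytics API.
    return [{"segment": "interview", "views": 1200},
            {"segment": "timeline", "views": 800}]

def summarize(rows):
    # Stand-in for a model call that drafts the morning report.
    top = max(rows, key=lambda r: r["views"])
    total = sum(r["views"] for r in rows)
    return (f"Daily report {datetime.date.today().isoformat()}: "
            f"{total} views; best segment: {top['segment']}")

def run_daily_agent(pages):
    # The "cron" part: in production this is fired by a scheduler
    # (cron, a cloud scheduler, etc.), not called by hand.
    report = summarize(fetch_analytics())
    pages.append(report)  # stand-in for publishing a doc or Slack message
    return report

pages = []
report = run_daily_agent(pages)
print(report)
```

The point of the shape is that no human prompt appears anywhere in the loop: the trigger, the data pull, and the write-up all happen in the background, which is the property Ivan is selling.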
Starting point is 01:38:32 So do that busy work for the company. Internally, we call this: let AI do the night shift so you can do the day shift. We actually have the web page go a little bit dark this time, because AI truly is doing the night shift for us. And nobody wants a night shift. I like the day shift. I did the night shift back in college. Yeah.
Starting point is 01:38:53 It's not, it's not fun. That's great. So you mentioned Gemini. Thank you. Another TBPN sponsor. But I imagine that you're pretty model agnostic. I'm interested to know how you're thinking about the different LLMs. And then also, how much do you want to surface to the user?
Starting point is 01:39:11 I was talking to Salesforce's Slackbot, and it wasn't up front with me about exactly which model was under the hood. Now, I'm a nerd, and I'll ask, okay, is it 3.5 or 4.6 or 5.2, and I'll have some opinion on whether or not that matters, who knows. But do you want to have model switchers, model pickers? Do you want to be at that level of empowering the user to pick the right tool for the job, or do you want to handle that internally? We do both. So if you are like a normie, or a normie-plus-plus, using Notion, you can use Notion without picking the model. There'd be an auto version, right? But if you're more sophisticated, creating custom agents that do work for you, different models have different strengths and weaknesses.
Starting point is 01:39:58 You should be able to pick the model. For our strategy, and I think for a lot of non-labs, it's very important to be model agnostic. The labs are going to get better and better, models are going to do more and more. But one important strategic point is labs don't work well with other labs' models. So there's an important position to be
Starting point is 01:40:18 the Switzerland of agents, the Switzerland of the models. And that's the position we're in. With the product launch, you can work with Claude Code out of the box, you can work with Cursor's agents out of the box, and you can pretty much pick any model you want that's the state
Starting point is 01:40:34 of the art, usually the day those models are released. And as a user of the product, you don't have to worry about that. I want to revisit this Wall Street Journal article that you were featured in back in August of last year. The quote was: Ivan, the CEO of Notion, says that two years ago his business had margins of around 90 percent, typical of cloud-based software companies. Now around 10 percentage points of that profit go to the AI companies that underpin Notion's latest offerings.
Starting point is 01:41:07 How has that changed? Is it still 10%? Is it climbing? Is it falling? What are your predictions for where that goes? Not as far as 10, but it's definitely a meaningful amount. Before, you could do pure SaaS margins. Now the majority of our products are powered by AI, so the model providers have to take some of the margin. And we're okay with that. We see the market changing on both fronts. First, we want to use the state-of-the-art, most capable, most intelligent models, because our customers want that. We want to eat that for the customer, so they don't have to worry about it. Second, there's a new wave of open-source models coming. And that's why we have to be model agnostic, and we can shift to different models for different
Starting point is 01:41:52 types of work. And that will help us with the margins. And at the end of the day, our customers don't have to worry about this. What we provide is less about the model. Model capability has been there for almost a year or two to do a lot of knowledge work. What's missing in the market is this infrastructure layer that glues together model capability, glues together permissions, and provides real knowledge work for the customers, at the same time backward compatible, with a good UI for companies and enterprises. I have one last question.
Starting point is 01:42:24 We'll let you go. How are you thinking about, sort of... it's crazy to call them legacy AI workflows, because they were probably implemented like a year ago. But when I just think about document summarization, or even spell checking and grammar, that was probably moved to an LLM that was capable. A GPT-4-class model can do that at a very low cost. And maybe you want to optimize that even further by going to an open-source model on commodity hardware, really drive down the token cost. Have you left any AI workflows in place on legacy models,
Starting point is 01:43:03 or have you migrated everything to the frontier, and you're just moving with the frontier? We're moving with the frontier by and large, because that's what customers want. They want smarter things. But things like voice dictation, summarization, legacy models can do that. Sure.
Starting point is 01:43:20 And so it's just getting cheaper. I think the most important part is, the market is changing so fast right now. Nobody knows what the future holds, but we know the model capability keeps getting better and better. And we always care about building beautiful
Starting point is 01:43:36 and powerful tools. And AI is that tool today. How do we make sure that all companies can benefit from this? You don't have to be a Fortune 500 with forward-deployed engineers. You don't have to be a San Francisco startup with an AI engineer on your team to use this. Every business can benefit from this technology. That's our ethos. And that's why we're building this product to make it super simple. You don't have to worry about a Mac Mini,
Starting point is 01:43:57 not worry about models. That's the tagline. Is that on the homepage yet? Don't worry about Mac Mini. We got you. I think that's too specific calling out other products, but night shift is a good one. Night shift works. I love it.
Starting point is 01:44:11 Someone in the chat, John Palmer in the chat, was saying, well, but if I can't, if I don't have to use a Mac Mini, what will I spend all my time setting up? You'd like to tinker. Tinker is sometimes more than 50% of the fun of it. Yeah. No, this is true. This is true. People want to tinker.
Starting point is 01:44:29 They want to play. They want to explore. and have fun. And it seems like it's a great time to be running Notion. Like, it's just a very exciting time. There's so many new products you can build so much faster than ever before. So congrats on all the progress. Yeah, congrats to the team on the launch. Thank you so much. Great to finally have you on. We'll talk to you soon. Cheers. Let me tell you about Labelbox, reinforcement learning environments, voice, robotics, evals, and expert human data. Labelbox is the data factory behind the world's leading AI teams. And let me also tell you about the New York Stock Exchange.
Starting point is 01:45:01 Want to change the world? Raise capital at the New York Stock Exchange. I'm not going to leak the news, but we have an exciting guest lined up for our next NICC show. So hit that subscribe button to be notified when we go live. Can't wait. A senior U.S. official told Reuters that Deep Seeks new model, whose release is now imminent, has been trained using Nvidia Blackwell GPUs despite the export ban. I am interested to see what this model is capable of. It could have made that happen, right? Like, it could literally be one Blackwell per person in a suitcase smuggled along.
Starting point is 01:45:42 It could be one shipment diverged or diverted from going to one country. And then they say, help send that shipping container over there instead. It could be cloud. They could have found a cloud provider that they were able to sort of anonymize and have a front company for. There's a whole bunch of different ways to get compute if you're willing to bend the rules or break the rules or, you know, potentially anger the U.S. administration. But we will see how this goes. Tyler, do you have a feeling for how DeepSeek has been doing? Because there was a hype cycle around Deep Seek v3. some of,
Starting point is 01:46:23 something and it kind of came out and it landed with, it didn't make a big splash. I feel like we're going into a new hype cycle around like the next deep seek is going to be really good. Is this fake? Is this real? How are you feeling about deep seek? So I think the last big model release was supposed to be this like massive massive release. Right. And then it turned out to be like I don't know the exact number, but it was supposed to be like v4 and ended up being like V3.1. It was like that kind of thing. And then also like- So they box the pre-train? most likely? Is that what people think? Yes, maybe.
Starting point is 01:46:55 And then, yeah, also like on Chinese labs generally, like right now you're hearing a lot about like, it's like ZEAI and Kimi and that's one of things. Yeah, so it's very unclear. I mean, they're not very public about this stuff. Yeah. But also, I think broadly, just about the distill gate stuff. Yeah.
Starting point is 01:47:12 I think throughout this, I've been, like, I think I've like updated towards like, actually we can probably mostly ignore a lot of the Chinese labs. because basically, like, the whole reason that they're good. Everyone's like, oh my gosh, Deep Seek is right on our tail. They're going to catch up, they're going to catch up. The only reason that they've been on their tails because... No, no!
Starting point is 01:47:40 That's the royal flesh, Tyler. That's the best that you can do on the screen. You just dropped a Bob Truth Duke. Truth Duke. Disregard China entirely. You heard it here first. No, but I mean... Yeah, no, it's a good point.
Starting point is 01:47:52 Continue. Yeah. The only reason that they're like... Stop. So rude. The only reason that the Chinese labs are so close to U.S. labs is because they're just training on the outputs, right? Which is like, okay, sure, yeah, good job.
Starting point is 01:48:08 But, like, you're not... I would be extremely surprised if you actually see a breakthrough from a Chinese lab so far. Exactly. Everything we've seen is that, yes, they can copy stuff? Permanently three months behind. No. But I think people were freaking out
Starting point is 01:48:21 because China went from, like, 10 years behind to one year behind to three months behind. And they were like straight lines on log graphs. They're going to be 10 years ahead of us next year. Yeah. And it just feels like that's what I can still be very worried about, you know, regulatory capture all these things. Yeah.
Starting point is 01:48:36 But I think, I think Anthropic will basically just figure out ways that they can, you know, increase security on the API. Totally. So will OpenAI. And then we'll see. And then we'll see if the Chinese labs keep up with the progress. Yeah. Yeah, yeah. And broadly, I mean, that's how I've...
Starting point is 01:48:53 There's so many other dynamics beyond just obtaining training data. If you take Will Brown's point that the Internet is producing more training data, you wind up in a situation where, sure, training data is commoditized, but what does it really take to scale up, DeepSeek V5 to a place where it's having economic impact? Well, you need a massive inference cluster. Do they have that? How are they distributing this stuff? Yeah, I do still think it's very impressive that the models generally they've put out are very small
Starting point is 01:49:22 and still like very good. Yeah. But I think on the frontier level, I'm not super worried about them. Tyler, Tyler called it. They're cooked. Anyway,
Starting point is 01:49:31 really quickly, from Anthropic, there's now a Kalshi market on: will the Pentagon designate Anthropic a supply chain risk? It's sitting at 36.8%. I believe that there's going to be a meeting between Pete Hegseth and Dario.
Starting point is 01:49:47 Yeah, so that already happened. Well, that happened. Update on the meeting from Andrew Kern: according to Axios, Defense Secretary Pete Hegseth gave Dario until Friday night to give the military unfettered access to Claude or face the consequences, which may even include
Starting point is 01:50:04 invoking the Defense Production Act to force the training of a War Claude. So that was not a joke? War Claude is not a joke? I don't think they would name it that, but it kind of sounds like that's what Pete is after. Claude of War, like God of War? That's pretty good. Anyway, we have our next... Before we get to that, the chat is sharing that payments processor Stripe expresses interest in PayPal. Scoop. We had a missed opportunity.
Starting point is 01:50:32 If you guys could have, simply if Bloomberg, if you could have published that at 12, when they came on the show, that would have been quite nice. Don't publish it until there's. Payment processing firm Stripe is considering an acquisition of all or parts of PayPal. Stripe, which is privately held and is among the industry's most valuable companies, as you know, the deliberations are still early and there's no certainty they'll lead to a transaction. I mean, we read the Wilmanitis post and we're like, oh, it could be a couple days, could be a couple years. It seems like it might be closer to a couple days.
Starting point is 01:51:02 Well, PayPal is up 7% today. So people are excited to say, hey, you might be able to own some Stripe. Well, let me tell you about Phantom Cash: fund your wallet without exchanges or middlemen and spend with the Phantom card. And without further ado, we have Stefano from Inception Labs. He's the founder and CEO. Welcome to the show. How are you doing?
Starting point is 01:51:24 Very good. Thanks for having me. Thanks for hopping on. First time on the show, so I'd love to have you kick it off with an introduction on yourself and the company. Of course, yes. I'm Stefano,
Starting point is 01:51:34 one of the founders and the CEO of Inception. Before this, I was at Stanford in the CS department. I've been doing research in generative AI for a long time. I think my lab is mostly famous for having co-invented diffusion models back in 2019. I was on the first FlashAttention paper, DPO, a bunch of things that are now widely used in production.
Starting point is 01:51:55 And these days I'm most excited about diffusion language models. That's what we're doing at Inception. Yes. So I first saw a diffusion language model demoed at Google I/O, I believe. But tell us, like, explain it like I'm five. Because when I think diffusion, I think a bunch of fuzzy noise, and then the Midjourney image gets higher and higher resolution. Everyone's familiar with that.
Starting point is 01:52:20 And then they're familiar with the token streaming, next-token prediction. Is it different? Break it down at a very low level or high level. That's right. Basically, we've taken diffusion models, which is the thing that works best for image and video generation. It's a kind of coarse-to-fine process where you iteratively refine your output until it looks good. And we figured out a way to apply it to text and code generation. And it kind of works the same way.
Starting point is 01:52:47 You start with a rough guess of what the answer should be, and then you refine it. Okay. And crucially, the difference is that the neural network is able to modify many tokens at the same time. Yeah. And so it's much, much more efficient than the typical autoregressive model, where you generate left to right, one token at a time. So you're able to modify many tokens in parallel. So if I'm thinking of, you know, maybe a deep research report type response, in my mind I can imagine a report, you know, saying, explain the history of the Roman Empire.
Starting point is 01:53:24 That's the example I always use. It's going to have some structure to it. And I'm going to imagine a blurry image with a couple large headers. And then the headers are going to get filled in. Then the text is going to fill in. Maybe there's some bullet points. Maybe there's some dates. Maybe there's some charts. And all of this is going to come together. But I'm thinking about it not sequentially, but as a whole.
Starting point is 01:53:43 And then refining iteratively until... instead of pixels, I'm thinking of individual characters. Or are there tokens in the same way that might exist in an LLM? What does that look like? Yeah, that's the right intuition. So it's kind of like, yeah, coarse-to-fine generation. And in practice, you know, it's learned by a neural network. So it's not necessarily interpretable. It's not the kind of process I would go through, where maybe I start with, you know, section headings and then I fill in the details. It's all learned by a neural network. And so it's not really interpretable. It's fast. Yeah. So is speed the main thing? I mean, we had the founder of chatjimmy.AI on the show, Talas. And it seemed like he was
Starting point is 01:54:28 able to bake down a traditional LLM, Llama 3 8B, onto silicon. And it was spitting out 16,000 tokens per second. Do you have a comp on speed or cost that you're targeting? Or do you see a through line to, okay, maybe if we're running on Nvidia chips and he's running on custom silicon, he's going to be faster. But then once we get to custom silicon, we're going to be 10 times faster than that. How should I be thinking about the tradeoffs here? Yeah. So our benefit is purely at the algorithmic level. It's just a more parallel approach that is not memory bound, it's FLOPs bound, right? It's compute bound. So you're able to hit the ceiling of the roofline. And we are taking advantage of all the resources we can
Starting point is 01:55:13 get access to on the GPU. In practice, what this means is that we can get to over a thousand tokens per second on traditional Nvidia GPUs, Hopper, Blackwell. So we're not yet at the level of, you know, the 16,000 tokens that you can get if you were to actually implement the model in hardware, but we're running on general-purpose GPUs, so we can scale up as much as we want. It's just a matter of getting more GPUs, and you can run these models anywhere. We are on Bedrock, we are on foundries, so if you have your own GPUs, you can provision your own capacity and you can run our models there. So it's very, very scalable. It's fast and scalable. And in principle, yeah, it can be compounded: you have a 10x benefit from the software, you have a 10x benefit from the hardware, and those two things could be combined. What use cases... you know, there's a lot of people out there using traditional language models today. What are the kinds of use cases where you would tell somebody, you should be switching over today, or at least trying to start experimenting?
Starting point is 01:55:59 Yeah, we're seeing a lot of traction in latency-sensitive applications of LLMs. Whenever there is a tight loop where you need to interact with a developer or a customer. So our models are being deployed in a bunch of IDEs. So if you think about coding: auto-complete, next-edit suggestions, refactoring, quick
Starting point is 01:56:40 agentic loops. That's a very natural application where diffusion LLMs are already really, really good. Voice agents. We have a number of partners and customers that are building really, really good voice agents. The latest model we announced today, Mercury 2, is a reasoning model. So it's really, really fast. And so you can get the quality of a reasoning model with the latency budgets that you need whenever you want to build a voice agent, which is resonating really well with a bunch of early customers.
Starting point is 01:57:08 Retrieval and search, that's another space where we're seeing a bunch of applications being built on diffusion LLMs. So if you think about query rewriting, re-ranking, summarization, that's another really, really good use case for diffusion LLMs. Talk to us about distill gate, how you've been processing it. Have you been distilled a bunch yet? It's a sign of success. Did you, at any point in your career,
Starting point is 01:57:38 were you experimenting with this stuff? Is this something that we kind of forced the Chinese market into spending a lot of resources on? I mean, it makes sense, right? That was always going to happen. I think the moment you put it out there, you give API access to the world, that's going to happen, and people are going to copy you.
Starting point is 01:58:00 We've been doing distillation in the research community for a long time, and so people have been experimenting and figuring out ways to do it in a sample-efficient way. So I'm not surprised that it's happening. I think it's hard to know at what scale. And honestly, from the numbers that were circulating, it seems like they are able to do it with very, very few data points.
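The mechanics behind that sample-efficiency surprise can be sketched in a few lines: in distillation, the student is trained to match the teacher's full output distribution, so every prompt carries a whole probability vector of signal rather than a single label. A toy illustration in plain Python (the logits are made-up numbers, not from any real model):

```python
import math

def softmax(logits):
    # Convert raw logits to a probability distribution
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(p || q): the distillation loss pushes the student's
    # distribution q toward the teacher's distribution p
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical logits over a 4-token vocabulary for one prompt
teacher_logits = [2.0, 1.0, 0.1, -1.0]
student_logits = [1.5, 1.2, 0.3, -0.5]

loss = kl_divergence(softmax(teacher_logits), softmax(student_logits))
```

Because each training example supervises the whole distribution, far fewer prompts are needed than in ordinary supervised fine-tuning, which is one intuition for why distilling through an API can work with so few data points.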
Starting point is 01:58:23 That was the most surprising thing to me. So, you know, it's very interesting scientifically that you can actually distill with so few data points, because it means that it's going to be very, very hard to protect any IP if you are opening a model up from an API point of view. So the last question, somewhat related to that. I feel like when these models get distilled, we see very strong benchmark performance, and then some yet-to-be-quantified, un-benchmarked quality sort of degrades.
Starting point is 01:58:54 And you hear people that actually try and put them into production saying, like, it just doesn't have the same like big model flavor that I'm getting from the big labs. I don't know how real that is. But I'm wondering if you zoom out and you look at diffusion versus transformer-based LLMs, are you noticing any divergence in the benchmarks where you're maybe better at coding or less good at coding, where the mental model that we're giving the computer is leading to surprising results? Yeah. So what we're seeing is that it's good at coding, it's good at editing.
Starting point is 01:59:35 One nice thing about not necessarily being left to right is that you can use context all around you. So those use cases have emerged as being really, really good for diffusion LLMs. It's also a function of the training data that we use. You know, we always liked coding. We're all computer scientists. And so that was like a very natural kind of application area for us. And so I don't know how much of that depends on the training data that we use versus the model. But what's exciting is really just like the speed.
Starting point is 02:00:04 But that's the thing that is going to be hard to replicate. I got a need for speed. I got a need for speed. I'm super bullish on speed. I'm serious. I think it's amazing. I used 5.3 Spark on Cerebras. And I was like, this is the future.
Starting point is 02:00:19 It's going to come to everything. And it's going to be an important moment for people to realize that it's just a different product when you're interacting with something fast. And I think we learned this from Amazon squeezing out milliseconds in web page loads, and we're going to experience it in AI, too. So thank you for everything that you're doing to speed up AI. We loved having you on the show. So have a great rest of your day. Great to meet you. We'll talk to you soon. Goodbye. Let me tell you about Console. Console builds AI agents that automate 70% of IT, HR, and finance support, giving employees instant resolution
Starting point is 02:00:53 for access requests and password resets. And let me also tell you about Railway. Railway is the all-in-one intelligent cloud provider. Use your favorite agent to deploy web apps, servers, databases, and more, while Railway automatically takes care of scaling, monitoring, and security. And without further ado, we have TBPN royalty. What's going on? Great to see you, James. Hey, John. Hey, Jordi. How are you doing? Doing great. Doing great. Good. Calling in from a cave? Are you fully snowed in? What's going on? No, yeah, we're in New York. The snow is melting. We built a little mini studio. Nice. Upstairs. And yeah, it looks pretty professional. Great. I love it.
Starting point is 02:01:35 Tell us the news. And then there's a bunch of stuff we want to talk about. Yeah, so I guess we're joining you today announcing a $96 million Series C investment at a $1 billion valuation, led by Lightspeed Venture Partners alongside Sequoia, Kleiner Perkins, Avantik, Saga, and South Park Commons. Amazing. Break down everything that's happened since the last time you were on the show. The space has been moving so quickly, it feels like it's been two years even though it's probably been two months. Yeah, I mean, everything's moving obviously at a hundred miles an hour, and it sounds a bit trite when I say this, but it really is a privilege to be building in such exciting times. Yeah, I mean, we've just launched Profound Agents,
Starting point is 02:02:27 which I think is a really big deal. You know, we serve the marketer. The line we've been using during this fundraise, during this announcement, has been: lawyers have Harvey, engineers have Cursor, and marketers have Profound. And I think that's truer than ever, in that with this launch of Agents, it really takes Profound towards being a full-stack, holistic platform for the modern-day marketer, allowing them to not just understand how they show up in AI platforms like ChatGPT, Gemini, and the rest of them, but also build agents that can help them do more with less. So yeah, I think this is cool.
Starting point is 02:03:11 Yeah, we saw our customers had been, you know, we came out of the gates 18 months ago with profound here in New York. And what we saw was our customers quite often were taking our data and insights and then going to orchestration and automation tools to do cool things with it. So we've just brought that all in house now. And yeah, it's really cool. You can do everything in one platform. How do you think the other platforms, the LLMs, are evolving?
Starting point is 02:03:39 Some of them are launching ads. Some aren't. All products and services are being discovered. And across all of them, why is it important to have a platform like Profound? We were talking about this off-air this morning. It feels like people have been joking around about Manus, for example, in the meta platform. Because Manus is like an agent. It wants to help you. But at the same time, what helps?
Starting point is 02:04:01 What else? Manus is like, spend more money, right? So it feels like having a third party. Principal-agent problem. Yeah. Manus is like, I've got a great idea. You guys are more user-aligned, potentially. But how are you thinking about the interaction between Profound and the different platforms? Yeah, I mean, I think, you know, our prediction of the future is that in the future, every company on the planet will care deeply about how AI talks about their brand or products or services. That's kind of a North Star that we hang our hat on. And I think compared to, you know, search in the early 2000s, or even for the last 25 years,
Starting point is 02:04:42 it's looking like this will be a much more fragmented sort of market. I think we're going to see multiple players coming through. So I think Profound really sits adjacent to the models or the labs. And we help marketing teams understand how they show up in these platforms. You know, when AI responds, what does it say about your brand? What does it say about your services? And now we help you build customized agents that can actually, you know, do the work
Starting point is 02:05:14 with you, with a marketer in the loop. So, yeah, we've had hundreds of teams. You know, I guess a big thing that I'd say we're announcing since we last spoke, at our Series B, is that we now work with 10% of the Fortune 500, which is pretty cool. What are the other 90% doing? Fantastic news. Yeah, we got 90% to go.
Starting point is 02:05:43 Job's not done. But yeah, it's a very cool stat. And, you know, what we're seeing more and more is that every brand is different. Every marketing team is different. Everyone has different initiatives. Everyone has different preferences. Marketing is more human than ever in a lot of ways. And I think our approach of helping marketing teams build entirely customized agents
Starting point is 02:06:08 that can take out the rote labor from their work, it's just saving these teams inordinate amounts of time and energy. And it's very cool to see it work. Yeah, it's exciting. Can you walk me through the anatomy of correcting a mistake that exists across LLMs, or even in a particular LLM? My nightmare is, you know, you go to ChatGPT and ask how tall is John Coogan
Starting point is 02:06:36 and it says 6'5", this would destroy me. It must be 6'8". Let's bake that into the pre-training data: 6'8". But seriously, other than just doing a bunch of SEO to correct the record, if there's truly a consistent hallucination, something that's just incorrect for some reason, what is the process to actually change results?
Starting point is 02:07:04 For sure. I mean, well, the first step, which sounds kind of stupid, is just knowing why it's happening, right? So when, you know, let's say an answer engine spits out an answer, a good chunk of the time it's getting that answer from somewhere. And being able to identify where, I'll give you an anecdotal example. I wouldn't be able to name the brand, but it was a neobank that the models were incorrectly saying had no FDIC insurance, and we identified it was coming from a few places: a third-party blog, I think a couple of Reddit posts, and maybe a YouTube video or something. So then once you know
Starting point is 02:07:48 where it's happening, it's kind of, I wouldn't say easy, but it's just kind of 101 marketing. Okay, cool. Let's reach out to the blog and tell them that that's factually incorrect. Let's comment on the Reddit post and say, hey, this is actually not true. We are FDIC insured. Let's produce a YouTube video that speaks to the same thing, but, you know, mentions heavily that we have FDIC insurance. And lo and behold, that gets pulled through into the models. So that would be something that your agent would do automatically, and you could just set them off to do that? Correct. Yeah, you could set up an agent that monitors for any misinformation based on a knowledge base of, like, ground truth, and then say, okay, cool, when we see
Starting point is 02:08:26 any misinformation, let's generate an email that reads in our tone of voice and sends to this third-party blog and says, hey, can you correct this, for example. So yeah, but it has to be customized, because you'd never have that as an out-of-the-box solution, right? It's almost like being able to build one-of-one software. Yes, this is this new paradigm of agentic software, which is so, so cool. And obviously I'm not the only one that's excited about that. Yeah. I mean, you're in a very interesting vantage point in the industry, because you work with so many Fortune 500 companies.
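The monitoring loop James describes, checking what an answer engine says against a ground-truth knowledge base and drafting outreach to the offending source, might be sketched like this (all names, the string matching, and the email template are hypothetical; a real system would extract claims with an LLM rather than substring checks):

```python
# Ground truth maintained by the brand (hypothetical)
GROUND_TRUTH = {"fdic_insured": True}

def find_contradictions(answer_text, knowledge_base):
    # Flag claims in the model's answer that contradict ground truth.
    # A toy check: a real pipeline would extract claims with an LLM.
    issues = []
    if knowledge_base.get("fdic_insured") and "not FDIC insured" in answer_text:
        issues.append("fdic_insured")
    return issues

def draft_outreach(source_url, claim):
    # Generate a correction request in the brand's tone of voice
    return (f"Hi, the page at {source_url} says we are not FDIC insured. "
            f"That's factually incorrect; could you update it? (claim: {claim})")

answer = "This neobank is not FDIC insured, so deposits carry risk."
issues = find_contradictions(answer, GROUND_TRUTH)
emails = [draft_outreach("https://example.com/blog-post", c) for c in issues]
```

The interesting design choice is exactly the one raised above: the knowledge base and matching rules have to be customized per brand, which is why this reads more like one-of-one software than an out-of-the-box product.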
Starting point is 02:09:03 What are your expectations for agentic commerce this year? We were just talking to the Collisons. It feels like it's on the precipice. Everyone we've talked to is extremely bullish. But I'm always interested to hear the shape of the bullishness. What do you think needs to happen to actually get people shopping agentically this year? I mean, I think so much of that inflection is going to come from the models themselves or the consumer products.
Starting point is 02:09:34 So, you know, ChatGPT's and Gemini's ability to actually offer a fantastic user experience. I think that's the most important thing. I think from the other side of the fence, working with these brands, these marketing teams, as a hot take, and I hope this doesn't sound like disrespect at all: they're not as slow as you'd imagine. We work with giant brands, Fortune 500, you know, some Fortune 10, and they're ferociously fast. They understand the magnitude of this platform shift, and they're very sophisticated teams.
Starting point is 02:10:15 A lot of them are SEO teams, who I actually think are fantastically well suited to attack this problem space, because they're kind of technical and they understand the sort of primitives of marketing and content, et cetera. They're quite cross-functional. But yeah, I think the mistake would be to think that the enterprise is super slow. I don't think that's true. And I'm not just saying that. I actually believe that. Yeah. No, I can. Well, I asked ChatGPT, what's the best, uh, GEO tool for startups? It says Profound. What it does: tracks how your brand appears inside LLMs.
Starting point is 02:10:54 It's best for VC-backed startups serious about AI distribution. Dogfooding. I mean, we put a lot of Profound in the pre-training data last year. It was a great partner. Did it mention Agents, though? That's the question, right?
Starting point is 02:11:06 We only launched agents today. Do you ask it? What did they launch today? Oh, there you go. While he does that, tell me what you think of the word geo. Is that too buzzwordy? Do you like that term? What are the pros and cons of having a term applied to your industry, your nascent business plan?
Starting point is 02:11:30 I think GEO sucks as an acronym. It's just bad in so many ways. It can't be claimed, because of geography; GEO is already taken. Oh, yeah. It stands for generative engine optimization, which is not how people refer to these products. Have you ever heard anyone refer to ChatGPT as a generative engine? They don't.
Starting point is 02:11:51 Yeah, you're right. They do search. Google is a search engine for sure. Bing is a search engine, but no one calls Gemini that. What's your preferred, what's your preferred acronym? I mean, said with not much passion, answer engine optimization feels more fitting to me. I think it's still all to be determined.
Starting point is 02:12:13 I think how your brand is spoken about by AI will become the most important primitive in marketing. So I think it's going to become bigger than just something that you put a label on like that. I think we see a sort of new type of marketer forming over time, which is interesting: the marketing engineer. You know, a marketer who has the technical chops to be able to go in and build agents, customize agents, deploy agents,
Starting point is 02:12:48 for the rest, you know, cross-functionally, across teams. And, yeah, I think that's very interesting. We announced Profound University today. So does this work? If I press that, can you see that? Oh, yeah. There we go.
Starting point is 02:13:00 That's elite. That's an elite. Oh, wow. Series C. Classes in session. Series C. Oh, yeah. Yeah.
Starting point is 02:13:09 How much is it again? Yeah. So we announced Profound University. Cool. Today. Very cool. Which is, yeah, it's actually really awesome. It's a series of certifications, training cohorts, and learning materials that essentially
Starting point is 02:13:25 enables this new era of the marketing engineer. Sure. And yeah, we're very excited about that. So, yeah, I think we're going to see a lot changing in the world of marketing, as I guess is true. Most interesting. The chat is saying that AYO is a good one. Ayo.
Starting point is 02:13:42 Ayo. Sheesh. Well, thank you so much for coming on the show. Always good to have you here, James. Congratulations. It's great to see. And we'll talk to you soon. Thanks, James.
Starting point is 02:13:53 Have a good rest of your day. Let me tell you about Cognition. They're the makers of Devin, the AI software engineer. Crush your backlog with your personal AI engineering team. And we have Scott Wu from Cognition in the re-stream waiting room. I want to talk about the launch today. I want to talk about AI progress. I want to talk about math.
Starting point is 02:14:13 and your predictions on the IMO gold medal and everything that's happening there. But let's start with the general update on Cognition. What's the shape of the business today? And then I want to hear about the latest launch. Awesome. Yeah, what's up guys? How's it going? Great to see you.
Starting point is 02:14:30 Been a little bit, I feel like. Yeah, too long. Too long. You guys have been cooking. Yeah. Every week feels like a decade now in AI. Cool. No, so things have been great.
Starting point is 02:14:40 I mean, business has grown a lot. You know, we shared some of our metrics today, one of which is that our total enterprise usage has actually more than doubled in the last six weeks even. And a lot of that has just been mass takeoff of agents. I think the high level that we're really seeing is that as agents get more capable and you can trust them to do end-to-end tasks, what you really need is the full background cloud agent. Right. And so that means, you know, being able to run your repos and everything locally, being able to test, being able to spin things from, you know, Slack or Linear or GitHub or Jira or whatever it is, and just being able to have
Starting point is 02:15:18 this mass parallel async workload. Okay. And then, and then the announcement today. Yeah. Yeah. No, the announcement today was a fun one for us. It was a, you know, very near and dear to my heart. But a lot of it, honestly, if I were really to just describe it in one line, is just clearing through all the frictions that we've known about and just making it a really great experience. And so, you know, one of the big highlights is, is automated testing and having Devin run your web app for you and send you the changes and send you screen caps of all of those things. But there's tons of little things that really affect the experience. And so, you know, making the VM startup time way faster, making the Slack integration way smoother, you know,
Starting point is 02:16:00 showing you all of the intermediate progress of the messages and so on. And so it's been... I mean, it's changed our internal usage a lot. And so that's why we're pretty excited to get this one out. So it seems like there's speed to be squeezed out from VM spin-up time optimizations. We're also seeing some incredible progress on custom silicon. We had the founder of Taalas on, generating 16,000 tokens a second. That seems like it will be really impactful when it rolls out to the broader code generation and software engineering world. It's still pretty early with that company, Llama 3 8B at this point. But where else are you seeing opportunities for speed?
Starting point is 02:16:47 How do you think about the importance of speed for what you do? Yeah. No, there's a ton that you can do. And at some point, a lot of it actually is just good old software engineering. Really? And so, of course, with the models, obviously, you can improve the tokens per second. You can improve the TTFT. I think those improvements will be great. And we've already seen a lot of those over the last bit. We'll see many more. But at some point, you know, your agent has to go run npm install.
Starting point is 02:17:16 Your agent has to go run uv install. Your agent has to go grep for things, right? It has to go pull up the front end itself. A lot of that stuff is good old product building and software engineering to make that better and more efficient. And in a lot of these, obviously, you can do algorithmic tricks. You can put in indices, right, and make those lookups faster. You can do little things to kind of cheat the loading time and do things in parallel and do things async. But a lot of it is just building the systems around the agent to make it really fast.
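The "do things in parallel and do things async" point is plain engineering: independent setup steps (dependency install, repo indexing, VM warm-up) can run concurrently, so wall time approaches the slowest step rather than the sum. A minimal asyncio sketch with stand-in sleeps for the real work (step names and durations are invented):

```python
import asyncio
import time

async def step(name, seconds):
    # Stand-in for real work like npm install or grepping a repo
    await asyncio.sleep(seconds)
    return name

async def setup_environment():
    # Launch all independent steps at once; total wall time is roughly
    # the slowest step, not the sum of all steps.
    return await asyncio.gather(
        step("install_deps", 0.05),
        step("index_repo", 0.04),
        step("warm_vm", 0.03),
    )

start = time.perf_counter()
results = asyncio.run(setup_environment())
elapsed = time.perf_counter() - start
# Sequentially this would take ~0.12s; concurrently it's close to 0.05s
```

`asyncio.gather` also returns results in the order the tasks were passed, regardless of which finishes first, which keeps this kind of pipeline deterministic.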
Starting point is 02:17:46 No, it's a really good point. I mean, anyone who's installed OpenClaw has experienced like, oh, wait, I'm actually just waiting to download software because it's pulling a whole bunch of stuff together. and it's not actually doing that much waiting with the LLM, at least in the setup phase, but you still have to actually get this thing configured. And I think a lot of people in tech went through that. How have you been processing lessons from open-claw interaction patterns that you think are interesting, what it means that society more broadly is just aware of AI agents,
Starting point is 02:18:18 which I feel like is a term that you basically coined years ago and have been running with. but in a specific like enterprise context. And now I'm at a bar and I'll hear somebody talking about AI agents and it's because of OpenClaw. And I feel like, oh, that's a, I remember the Scott Wu launch video when you explained that this was going to happen. But how have you been processing OpenClaw? What is interesting about that? Are there any like lessons from that open source community, that project generally, that paradigm that you want to bring to Devin? Yeah, no, I mean, a lot of big changes.
Starting point is 02:18:52 Yeah, no, I mean, a lot of big changes. And I think, by the way, OpenClaw gets a lot of credit for, for many people, being the first time that people really saw what a full agent would look like, with access to your files, access to your computer, and so on. I think we're really getting to the point, you know, to your previous point, where we're really starting to switch over from the early-adopter cycle to the kind of mass-market cycle, is my sense. And the concrete impact of that is a lot more people are starting to hear about and really think about AI agents, right? And I think it used to be... I mean, for us, for example, a year and a half ago, you used to go into the room and explain to people what an AI agent was and why this was different from just, like, normal auto-complete or ChatGPT or something like that.
Starting point is 02:19:37 Now everybody's thinking about this stuff. Everyone wants to use it. And I think one of the kind of implications of that is just accessibility and getting people to value as soon as possible is one of the most powerful things that you can have in your own products as a result. You guys have had a ton of success in Enterprise. The chat wants us to ask for your take on the SaaSpocalypse, I imagine. Some of the conversations that you're having with, let's say, the CTO of a massive company, are they thinking about using a devon for things like big database migrations, like how are they thinking about how agents can impact their dependency on
Starting point is 02:20:21 sort of these legacy tools and systems of record. Yeah, I mean, there's the whole Citrini report and everything. I mean, it was honestly ridiculous. That's my two cents on it. I think, look, at a high level, of course, yeah, AI is going to change a lot of stuff. I don't really understand how you go from that to saying that there's going to... you know, take software as a good example. Software is one of the most deflationary things ever, you know.
Starting point is 02:20:51 A lot of the same products that used to cost much more 10, 20 years ago have gotten much cheaper over time, right? Has this been terrible for software companies? You know, I mean, it seems like it's been pretty good. All the big companies in the world are still software companies, right? And so there's one thing when prices go down because the demand's just not there anymore, and obviously you can get into weird cycles and all that can happen. But there's a totally different thing if prices go down because we've just gotten way better at supplying things. And that's when you get Jevons
Starting point is 02:21:27 paradox, and that's when you get, you know, just mass consumer surplus and so on. And so at a high level, I mean, with all the customers that we work with, you know, banks and health insurers and private equity and so on, there are obviously a lot of these database migration, modernization projects that they can go and take on immediately. But the very next thing that they say is then, okay, how do I pull the rest of my roadmap forward? Right. How do I build even more and get even more out to people? And I think the reality is, we all just have so much more software to build. Yeah. How are you thinking about AI progress broadly?
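The Jevons-paradox claim here reduces to one line of arithmetic: if unit cost falls and demand expands by more than the cost falls, total spend grows. The numbers below are purely illustrative, not a forecast:

```python
# Software becomes 10x cheaper to produce per unit (illustrative)
price_drop = 10
# Hypothetical: at the lower unit price, demand grows 25x
demand_multiplier = 25

old_spend = 100.0
new_spend = old_spend / price_drop * demand_multiplier
# Unit price collapsed, yet total spend grew 2.5x
```

The two scenarios in the argument differ only in `demand_multiplier`: below `price_drop`, cheaper software shrinks the market; above it, you get the consumer-surplus outcome described here.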
Starting point is 02:21:58 It feels like a lot of people are feeling that recursive development is on the horizon. People are bringing up takeoff speeds again, migrating from slow to maybe fast. It was backing off; now it's a little quicker. How are you trying to zoom out, reset, get to reality, figure out how fast things are actually moving? Yeah, no, I mean, the METR report shows the consistent doublings and everything. I think things are continuing on the exponential curve. I wouldn't say that they're going either super-exponential or sub-exponential.
Starting point is 02:22:36 I think they're roughly going on that exponential curve. But, you know, exponential curve is a lot. Like that's a very fast growth, obviously. I think for us, you know, one of the things that's been pretty interesting is just like noticing each of the step function changes that happen. And so for us, for example, it's definitely been in the last, I'll call it, like, four or five months where something interesting happened, which is we stopped typing code. You know, like at some point, you just don't, right? Like before, obviously, you have all the tools and you have the combination of things and so on. Now it's there's different experiences.
Starting point is 02:23:11 There's different tools that you want to have, obviously, between the IDE and the CLI and the web agent and so on. But either way, you're really just working in prompts and you're not really, you know, like, the code that we check into GitHub, like how much of it was typed by human at this point? I think almost none. And maybe one of the things I would just call it is that that, you know, as you kind of expose each new thing, like, I mean, if you think of it as like a profiler, you know, on your own software engineering workflow, like what is the most expensive part? You shrink that down.
Starting point is 02:23:41 You get to the next thing. You shrink that down, and you just make the whole cycle more effective. We're at the point where a lot of these other things, like, you know, understanding the code base and review and so on, are the actual bottlenecks. Testing is another big one. And I think what we're going to see over the next little bit is
Starting point is 02:23:58 you're basically going to have to solve each of those with really good product experiences, really good model capabilities, and so on. So maybe the only thing that I would say, you know, I think the exponential curve continues, I would just kind of call out that the form factor looks very different as you continue on that exponential curve
Starting point is 02:24:15 because you're actually solving different problems. Like, yes, I think we will continue to kind of like, you know, get the doublings and the doublings, But now it looks a lot more like how do we optimize testing and review and planning, not how do we make the AI good at writing code based on the prompt that you give? Because at this point, it's actually, frankly, it's basically already done. Yeah. I have a bunch more questions. I'll be quick.
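The profiler analogy Wu uses can be made literal: time each stage of the development loop, shrink the biggest bucket, repeat. A toy sketch with made-up stage timings:

```python
# Hypothetical minutes spent per stage of one feature's dev loop
stage_minutes = {
    "writing_code": 5,   # largely automated away already
    "planning": 30,
    "review": 45,
    "testing": 60,
}

def bottleneck(stages):
    # The stage to attack next is the most expensive remaining one
    return max(stages, key=stages.get)

next_target = bottleneck(stage_minutes)
```

Once code-writing drops to minutes, it falls out of the profile entirely, and the same loop points you at testing, then review, then planning, which matches the form-factor shift described above.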
Starting point is 02:24:37 First, what does the future of Windsurf look like in a world where you're not writing code? Does that become a Kindle? I mean, that's a joke, but does it become more important for that product to be the best way to read code? Because even if you're not writing code, you still... like, I've done terminal prompts, and then I wind up opening the files to kind of take a peek in them. And I'm like, there's maybe room for innovation there, in how your code-reading skill improves as your code-writing skill degrades. But how do you think about the future of Windsurf?
Starting point is 02:25:15 Yeah, for sure. I think the high level here is it's going to be a gradual thing, but I think over the next one or two years we'll have a pretty broad transition towards what you might call having English as the source of truth. Sure. And so, basically, I think we'll go from code to English in the same way that we went from assembly to code. Yeah. Right. And so, you know, one of those steps has been mostly done at this point, which is the step of figuring out how do you prompt in English and then have the agent produce the code. But if you think about it, I mean, you're still reviewing code, you're still checking code into GitHub, you're still reading the code.
Starting point is 02:25:50 You're still reading the code to understand what's going on. And I think at some point, you actually want an interface that looks a lot more like a spec or a map or, you know, like a design doc, right? And that's the thing that you're iterating on. That's the thing that you're reviewing, for example, right? Like at some point, review, I mean, people say, oh, like, review is going to go away because AI is going to catch all the bugs. I think that's actually not right because what you're going to be reviewing is the decisions, right? It's like, here's what the product is doing in this case and here's what the product is doing in that case. and here's how this plan works or whatever it is, right?
Starting point is 02:26:22 And so what you'll want to have is a very clean interface to interact, basically, you know, with your product and with your own specs and so on. And that's a lot of what we think Windsurf evolves into over time, right? And so, again, I think it's a very gradual thing. I think there's a lot of value in reading code now, and certainly at Cognition, we still do a lot of reading the code, even if we are not the ones writing the next line of code, because instead we write the English prompt for that. But I think what happens with Windsurf is, you know, at some point, instead of looking at each of the files,
Starting point is 02:26:57 you start looking more at, you know, the specs and the high-level logical design of what you're building, you start looking at the diagrams of your app or your website itself, and you're able to go and manipulate those. And you're really just managing your agents that you kick off from there. Okay, we blew past the IOI gold medal, as you predicted correctly. Amazing. I have a follow-up question about that, but it's sort of in three parts. One is, what is the next, like, math or physics-based benchmark that you're excited about AI potentially unlocking? When do you think that might happen? And then do you think there will be any tangible impacts of that? Because if I walk down the street and I tell some random person,
Starting point is 02:27:43 like, they did it. Navier-Stokes is solved. I think that's the math problem that everyone talks about a lot. I don't even know. I think most people would be like, great. Like, is that going to help me with my job? Like, they're more excited about just knowledge retrieval right now. So, so yeah, the next hurdle, timeline, and then impact. Yeah, yeah, for sure. So, I mean, we actually, I would say, crossed a pretty exciting hurdle just recently. Like, Alex Lupsasca and some of the folks at OpenAI had a pretty important breakthrough in physics, where they used language models to figure out a lot of the key
Starting point is 02:28:20 And so, you know, I would have said, I think the next big breakthrough is getting to a point where, like, actual science and actual discovery is happening largely powered by AI. And I think we're effectively getting into that. I think we'll see much more of that this year. I think to your point on impact, yeah, I think it'll be some time until, you know, the average person feels the impact of us proving new theorems. But obviously, the long term of all of this is extremely powerful, right? I mean, we're going to be discovering new medicines.
Starting point is 02:28:48 We're going to be, you know, unlocking big breakthroughs in biology, material science, nutrition, and so on and so on. And all of this comes from a lot of the same science. I very much think of it as, you can call it, like, a proof of concept or an existence proof that it is possible, solving some of these very difficult novel math and physics and algorithms problems. There are lots of ways that, over time, that itself will continue to be valuable, but even more so than that, it's obviously just, you know, an existence proof that AI can do some pretty incredible things. I love it. Yeah, a lot of people get abstract with the medicine science. I like the material science one, because I can imagine a much stronger, much cheaper, much lighter carbon fiber, and driving a car
Starting point is 02:29:38 that's pure carbon fiber for the same price as a Model 3, this is pretty attractive. That's pretty tangible. I think the average American consumer's going to get behind that. Get extremely excited about that. I'm pretty excited about the part of, you know, you have the best pizza that you've ever tasted, except it's also the most nutritious thing for you, because we've just solved taste and nutrition and everything. And I feel like AI will get us there, but that might require a few more. There's probably a few more steps in the middle. That's the new AGI benchmark. Get the goalposts. Get the goalposts. I'm moving the goalposts. AGI will be here when I can have a pizza that tastes amazing and also is fully nutritious. Thank you, Scott Wu. Have a great day. Good to see you guys. Always fun to move the goalposts with you. We'll talk to you soon. Goodbye. It's an honor to move the goalposts. MongoDB. What's the only thing faster than the AI market? Your business on MongoDB. Don't just build AI. Own the data platform that powers it. And without further ado,
Starting point is 02:30:38 we will begin our Lambda Lightning Round with Rune. Look at these new effects. See, oh, yeah, we're getting new effects going. Welcome to the show. Ooh, look at this. What's happening? That is a beautiful lighting setup. Thank you for joining.
Starting point is 02:30:54 First time on the show, please introduce yourself and the company. Yeah, great to be here. I'm Rune Kvist. I'm co-founder and CEO of the Artificial Intelligence Underwriting Company. Okay. Our mission is to underwrite superintelligence, and we do that by building standards and insurance products for AI agents.
Starting point is 02:31:11 Okay. Sounds extremely straightforward and simple. Yeah, plenty of data to build this on. I mean, yeah, how do you even think about it? So the big thing, Derek Thompson was kind of summing up the whole discourse around Citrini. And his takeaway was that everyone can agree that no one knows what's going to happen. So a very difficult, difficult environment to be, you know, creating insurance products for. But I'm sure you're narrowing it down to some key
Starting point is 02:31:44 initial use cases. So maybe you can talk about where this starts. Yeah. Yeah, maybe the first thing to say is that regardless of whether anyone buys an insurance product, someone is always underwriting it. So otherwise it's just going to be the head of risk at J.P. Morgan who has to make a go/no-go decision. He also sits with the same problem. Is this going to work or is it not going to work? So the place we start is just, what are the risks that are slowing down adoption today? And can an independent third party with skin in the game and visibility across a bunch of companies be able to underwrite that better
Starting point is 02:32:20 than any particular head of risk chief security officer might be able to do. And like any other risk, when there's no data, there's an initial R&D phase where we don't expect all of these policies to work out well. We expect to lose some money and in the process start to be able to collect the data that allows us to underwrite this more precisely than anyone else.
Starting point is 02:32:41 Yeah, walk us through some of the example insurance policies because, I mean, everyone who's followed like the AI story and AI race has seen like a million different varieties of impairment, from like the training run didn't work or the data center was delayed, and that has a financial impact, down to we got sued because of our training data, or someone used our app and didn't like it. There's a million different ways that you can have smaller or even large settlements or lawsuits. But how do you think about fragmenting the market, finding a landing zone, a beachhead? Yeah, totally. So you start from what are the very real concerns that slow down adoption today. Let's take one. We just announced the world's first insurance policy for an agent last week with ElevenLabs.
Starting point is 02:33:33 They are trying to be on the frontier. They're pioneers of security and safety; they're trying to be on the frontier of giving assurances. The things that hold up adoption for them are things like hallucinations that lead to financial losses. So a hallucination that, in a kind of Air Canada example, leads to financial damage.
Starting point is 02:33:52 Data leakage continues to happen. You've seen, on a weekly basis, OpenClaw, the latest crop of that. You don't want your agents to give medical advice. Sure. And so those are also some of the kinds of things that are covered. So mostly at the application layer today, and then we think as insurer appetite grows, eventually our mission is to underwrite superintelligence. Eventually, we think some of
Starting point is 02:34:16 the kind of risks that look a little bit more like private nuclear energy will also have to be covered by insurance, because these risks cannot sit with no one. There's a grand compromise in 1954 that allowed us to do private nuclear energy in America, which is the Price-Anderson Act. It's the government saying, hey, we really want some private nuclear energy. That would be awesome. But also, any particular private company cannot carry the risk if something truly goes wrong. So we're going to require an insurance scheme. That's going to be our way of putting the market to work to manage this in a way that's just pro-business, pro-getting this adopted. And the government has always effectively been the insurer of last resort in some ways, right? Whether it's formal or not, the government is always the insurer of last resort. Take COVID. Who's on the hook for that?
Starting point is 02:35:01 Well, ultimately, the government has to step in. So the question is, can you formalize that a little bit more and say, at what limits of liability is the government on the hook, and up until that, who's on the hook for it? Got it. Okay. So walk us through the chain of how insurance actually works. I understand ElevenLabs comes to you, and then are you drafting a policy with a specific risk profile, premium payments, and then you're going out to the J.P. Morgans of the world and having them buy that and invest in that? Does this float? Is this tradable? Can a retail investor get allocation? How does that work on the long tail of the financialization? Yeah, eventually this will end up on Robinhood, but let me walk you through how it looks today.
Starting point is 02:35:44 So today, there are two steps, high level. First is certifying against the standard as a way to unlock insurance. So historically, the way every market has been unlocked is that the insurers want to know that the risk is well managed. The head of risk at J.P. Morgan doesn't want just financial coverage. He wants to make sure that there's no incident that gets him fired in the first place. And so we've developed a standard. It looks a little bit like a Moody's rating
Starting point is 02:36:06 or a SOC 2. So that is all open source, in public. It's 50 requirements that any AI frontier company must meet to meet the standard. And as part of that, we run a bunch of technical tests, basically crash testing, red teaming, as you might call it here, which gives us a score. And we give them a pass/fail certificate. And then the score feeds into a policy that we've designed with some of the leading insurers, where a company like ElevenLabs gets to specify, hey, what are the top three, four, five risks
Starting point is 02:36:34 that hold up adoption. They buy a policy for that. And today that risk is held by traditional insurance companies. Again, this is actually all about trust. So you really want the old insurers to have it on their balance sheet. They always pay. Over time, as we move into these kind of, like, Chernobyl-type risks, we will run out of private capacity.
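The certify-then-price flow Rune describes, where an audit score feeds into a policy, can be sketched as a toy calculation. Everything below (the function, the discount curve, the base rate) is a hypothetical illustration for intuition, not AIUC's actual standard or pricing.

```python
# Illustrative sketch only: a toy mapping from a certification audit score
# to an annual premium. All names, weights, and rates are hypothetical
# assumptions -- the conversation only establishes that a red-team score
# and a pass/fail certificate feed into the policy somehow.

def premium_quote(audit_score: float, coverage_limit: float,
                  base_rate: float = 0.02) -> float:
    """Price an AI-agent policy from a 0-100 audit score.

    A higher score (better crash-test / red-team results) earns a lower
    rate on the coverage limit, floored so the risk is never priced at
    zero -- the insurer still carries real tail exposure.
    """
    if not 0 <= audit_score <= 100:
        raise ValueError("score must be in [0, 100]")
    # Linearly discount the base rate by up to 75% for a perfect score.
    discount = 0.75 * (audit_score / 100)
    rate = base_rate * (1 - discount)
    return coverage_limit * rate

# A hypothetical insured with a strong audit buying $5M of coverage:
quote = premium_quote(audit_score=90, coverage_limit=5_000_000)
```

The interesting design point is the floor: even a perfect score pays a nonzero rate, which mirrors Rune's comment that in the no-data R&D phase the insurer expects to lose money on some policies while collecting the loss data that sharpens pricing later.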
Starting point is 02:36:54 We will have to, at some point, create catastrophe bonds. Those will be traded on the public market. Probably not on Robinhood, but by the more sophisticated investors. That is the ultimate way to build up market capacity to cover the tail risk. Yeah, that makes sense. Very, very fascinating. What does the business look like today? This feels like high-stakes work, but is it capital intensive? Do you need a thousand insurance agents at some point? Like, what's the team like? What's the fundraising like? What's the business like? Yeah, totally. So, if you're really thinking
Starting point is 02:37:29 about this long term, the way to unlock the insurance market is to get the standard universally adopted. That is kind of what allows everyone to say, hey, this risk is well managed. We can now start to price it. And so, for the standard, we have about 100 security leaders from the Fortune 1000 who meet with us every six weeks to input into the standard, representing their interests.
Starting point is 02:37:49 And now you're having some of the leading AI companies, like ElevenLabs, Intercom, UiPath, more to be announced soon, that have put themselves forward to say, hey, we're pioneers, we'd like to have an independent audit to prove that. That's step one. So that's what most of our work is focused on today. And then on the insurance side,
Starting point is 02:38:06 the way to start is to partner with existing insurers that bring that trust and credibility. They're frankly so old school, and that's what brings trust here. They're not upstarts. That's the whole point. It doesn't really work if, it's like, okay, who's actually backing this policy?
Starting point is 02:38:22 And then it's like, oh, a company created three months ago. Exactly. You're like, don't worry, I got it. So it's actually quite capital-light to get started. We raised $15 million from Nat Friedman last year. And almost ran into the goalposts. There you go. They'll move them again.
Starting point is 02:38:43 Well, thank you so much for stopping by the show and giving us the update. Yeah, a lot, a lot more questions. As there are new kinds of crises around agents, feel free to pop back on. Yeah, we'd love that. Talk about it. Amazing. We'll talk to you soon. Good to meet you, Rune.
Starting point is 02:38:58 Let me tell you about CrowdStrike. Your business is AI. Their business is securing it. CrowdStrike secures AI and stops breaches. And without further ado, we have Reiner Pope from MatX in the Restream waiting room. Welcome to the show. How are you doing? What's going on? Doing great. Happy to be here. Thanks so much for hopping on. It's your first appearance. We'd love an introduction on yourself and the company to kick it off. Yeah, so happy to be here. I'm Reiner. I'm CEO and one of the founders of MatX. We are a company that makes the best chips physically
Starting point is 02:39:32 possible for large language models. Okay. So we've been doing this for about two or three years. Before that, I was at Google for about a decade, working on large language models, worked on the TPUs for a bit, worked on some other hardware projects. And really, as part of that,
Starting point is 02:39:48 what we saw was that, with LLMs as this big up-and-coming workload back in '22, if you really want to make the best chips for LLMs, the best way to do it is a from-scratch, blank-slate design.
Starting point is 02:40:04 So designed for large matrices, very low precision, very low latency. And so my co-founder Mike Gunter and I, at that point in '22, decided to leave Google to start MatX, where we're doing exactly that. Today we're announcing MatX-1. This is a new chip,
Starting point is 02:40:23 which simultaneously offers better throughput per square millimeter, or throughput of a chip, than any other product in the market, while at the same time offering the lowest latency, latency that is comparable to the best, which is Groq and Cerebras. Yeah.
Starting point is 02:40:38 What are the various trade-offs in custom silicon design these days? At the highest level, is it just flexibility and speed, or cost, size, wafer size? How do you think about the design space? And then I want to know
Starting point is 02:40:54 how you actually narrowed in on your particular decisions. Yeah. So generally there's some kind of performance per something. Yeah. So let's analyze those pieces. Like, the two different aspects of performance are what is the throughput and what is the latency.
Starting point is 02:41:08 So how many users I can simultaneously support is throughput. And then latency is, for one user, how fast is the experience? Both of those matter. And then on the per something: per dollar, how much does the chip actually cost? And then per watt, which is, like, what is the power bill of the chip? So all combinations of those two numerators and two denominators are the things we care about. What we see in the market today is that the number one constraint is just the throughput per dollar and the throughput per watt. These frontier labs have so much demand for compute, serving all of these trillions of tokens per day.
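Reiner's two-numerator, two-denominator framing can be written down directly. The chip numbers below are made-up assumptions purely to show the ratios, not real MatX or competitor specs.

```python
# Toy illustration of the metric space Reiner describes: throughput and
# latency as numerators, dollars and watts as denominators. Both chips
# here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Chip:
    name: str
    tokens_per_sec: float   # aggregate throughput across all users
    latency_ms: float       # per-user time between tokens
    cost_usd: float         # amortized chip cost
    power_w: float          # power draw

    def throughput_per_dollar(self) -> float:
        return self.tokens_per_sec / self.cost_usd

    def throughput_per_watt(self) -> float:
        return self.tokens_per_sec / self.power_w

a = Chip("chip_a", tokens_per_sec=40_000, latency_ms=30, cost_usd=20_000, power_w=700)
b = Chip("chip_b", tokens_per_sec=25_000, latency_ms=8,  cost_usd=25_000, power_w=500)

# Batch serving at a frontier lab optimizes the per-dollar / per-watt
# ratios; latency-sensitive RL rollouts also weigh the raw latency_ms.
best_economics = max([a, b], key=Chip.throughput_per_dollar)
```

Note how the two hypothetical chips rank differently depending on the objective: chip_a wins on throughput per dollar and per watt, while chip_b wins on latency, which is why he treats them as separate axes rather than one score.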
Starting point is 02:41:49 And so the cost and the economics is the main constraint. There's only so many square millimeters of silicon wafer being produced every year. And so, given that constraint on how much silicon there is, can we maximize the number of tokens and maximize the intelligence of the models coming out of that going forward? What does the go-to-market look like? Are you already sold out? Who are you kind of targeting early on?
Starting point is 02:42:16 How do you scale? All that stuff. So one of the places where we've seen the most interest in our product is from frontier labs, really. And so this is coming from a combination of: they are the ones who are driving really all of this demand and are so much constrained on cost as well as silicon wafer supply.
Starting point is 02:42:38 But then also they are the ones who are doing these reinforcement learning training workloads, which are very, very latency sensitive. They have to do long rollouts in a very long loop. So that's where we've seen the most interest. One of the things that shows up there is that when they are looking to place an order, it is an order on the order of gigawatts or something like that, which is massive volumes.
Starting point is 02:43:01 So one of the things that we're actually very excited to be able to do now, with this raise that we've just announced, is help ramp up the supply chain in order to be able to deliver gigawatts a year of volume, which is a massive volume to be able to deliver. Yeah. When you were initially thinking of starting the company, pitching it to investors early on, how did you answer the question around NVIDIA's various moats or kind of strategic advantages, you know, think CUDA, all that stuff? Yeah, I think it's really interesting. CUDA is for NVIDIA simultaneously the biggest strategic advantage and also a constraint,
Starting point is 02:43:43 because the promise that they make you is that you can take a CUDA program written 10 years ago, and it will run on the next generation of NVIDIA GPU. Jensen goes on stage and promises this. It is so valuable for them. And yet at the same time, it means the next-generation GPU has to look just like the GPU from 10 years ago. So it means things like the numerics can't change, the way the cores in the chip are connected to each other can't change,
Starting point is 02:44:06 the actual memory architecture can't substantially change. All of these things are kind of locked in by the programming model that they designed more than a decade ago for general-purpose parallelism. And so this is where we've seen the biggest differentiation. If they wanted to say, well, we're going to completely give up our CUDA approach and start a new generation of chips, maybe they could do that. They would lose all of this lock-in that they have,
Starting point is 02:44:30 but then at least they would be on a level playing field with us. But that's not what we see. Really, we see them being committed to their trajectory. The CUDA lock-in is very valuable for sort of the mid and tail of the market, where people are so sensitive to the software cost. But really, at the head of the market, in the frontier labs, the software is not the main cost. The hardware is the main cost.
Starting point is 02:44:52 And so if you're willing to rewrite your software, maybe you can actually switch to a more efficient hardware like ours. And it's getting easier to rewrite software. As you plan your business, how are you thinking about bottlenecks? You know, one month it's energy. The next month it's chips. Then, you know, a lot of concerns around TSMC right now. How are you kind of planning?
Starting point is 02:45:16 Yeah. So, I mean, I think these bottlenecks are real and are going to stay for a long time. The big bottlenecks that you see in the manufacturing supply chain are on logic dies from TSMC, and then memory dies from Hynix, Samsung, Micron, and then manufacturing of racks and so on. Given these bottlenecks exist, what you would like to do as a consumer of such things is you want to get the most bang for your buck, so the most performance out of every square millimeter of silicon.
Starting point is 02:45:48 That is what has been our focus. The flops per square millimeter, the four-bit-precision multiplies you can do per square millimeter of silicon, is higher in our product than any other product. And so, you know, as the price of every silicon wafer goes up, you can do more with it on our solution than on others. I assume you're on the most, I mean, you've mentioned this, you're selling to the frontier labs, running frontier models, probably on the most leading-edge chips, the most leading-edge fabrication nodes.
Starting point is 02:46:19 Is there a world where it's valuable to say, hey, we have some lagging-edge capacity out there. What if we go design custom silicon that runs on the last-generation Intel node that doesn't have the line out the door for capacity, and then I'm not competing with you? Does that not work? Is that not possible?
Starting point is 02:46:42 Or is that just a completely orthogonal business to what you're building? That approach is possible. It is maybe more of an approach for a player with deep pockets rather than a startup, in that every different process you target costs you another 20, 30, 40 million dollars of development cost. And so if you're going to put all of your eggs in one basket, you should put it in the leading-edge node.
Starting point is 02:47:06 In the best basket. Yeah. That makes sense. Talk to me about other trade-offs at TSMC. I mean, Cerebras is famously wafer scale. What is the trade-off on, like, size of die these days? Yeah. So, I mean, there's a trade-off of size of die
Starting point is 02:47:25 and then also of memory architecture. So on size of die, Cerebras is the outlier. Almost everyone else has converged on reticle scale, which is the largest sort of standardly produced TSMC chip. We're in that same category; we're in the standard bucket there. Okay. That avoids a lot of the physical risks that,
Starting point is 02:47:45 you know, when you look at Cerebras, they've had to spend all this time on dealing with just, like, bending and all these uncomfortable physical constraints that we don't want to deal with. So, the reticle-sized chips. But then the other bigger thing is which memory technology you use. Historically, there have been the HBM-based players, that's Google, Amazon, and NVIDIA, and then there have been the SRAM-based players, which are Cerebras and Groq. SRAM is small, but very, very fast. And so the very, very fast is good if you want to run low latency. You can put your model weights in SRAM and you get the best latency in the market.
Starting point is 02:48:22 That's what Groq and Cerebras have done. But the reason they haven't sold out the market is because there's not enough space in the SRAM to store all of your long-context KV caches. And so this is the reason why the HBM-based players, like Google, Amazon, and NVIDIA, have won: the HBM is actually essential. But it's actually possible to marry both of these approaches and put them in one chip. And that is what we're doing with MatX-1. And curiously, it doesn't just give you the best of both worlds.
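A quick back-of-envelope makes the SRAM-versus-HBM point concrete: weights at low precision can plausibly fit near on-chip memory, but a long-context KV cache at a serving-scale batch cannot. The model shape, context, and batch below are illustrative assumptions, not MatX-1 or any real chip's numbers.

```python
# Back-of-envelope sketch of why weights-in-SRAM works but KV-cache-in-SRAM
# doesn't at long context: all parameters here are invented for illustration.

def weights_gb(params_b: float, bits: int = 4) -> float:
    """Size of params_b billion parameters at the given precision."""
    return params_b * 1e9 * bits / 8 / 1e9

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context: int, batch: int, bits: int = 8) -> float:
    """KV cache size: two tensors (K and V) per layer per token."""
    bytes_per_token = 2 * layers * kv_heads * head_dim * bits / 8
    return batch * context * bytes_per_token / 1e9

# A hypothetical 70B-parameter model at 4-bit precision:
w = weights_gb(params_b=70, bits=4)     # 35 GB of weights

# Its KV cache at 128k context and batch 32, 8-bit:
kv = kv_cache_gb(layers=80, kv_heads=8, head_dim=128,
                 context=128_000, batch=32, bits=8)   # hundreds of GB

# On-chip SRAM is measured in hundreds of MB per die, so the KV cache at
# useful batch sizes has to live in HBM -- which is the hybrid Reiner
# describes: weights near the compute, KV cache in HBM.
```

Under these toy assumptions the KV cache is roughly 20x the size of the weights, which is why a pure-SRAM design trades away exactly the long-context, high-batch serving that frontier labs are buying capacity for.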
Starting point is 02:48:55 It actually beats any alternative on throughput. There's this curious effect where, when you have your weights in SRAM, you can actually get better mileage, better usage, of the HBM in return. So we think that's actually the way the market in general will move over time. Okay, help me understand the trade-off continuum around flexibility of model. I mean, imagine on an NVIDIA NVL72, I can sort of run any model as long as it fits and works and is trained properly. And then Taalas is, like, these specific weights on the chip, you can never change them whatsoever. And then there's something in the middle.
Starting point is 02:49:39 How much flexibility do you think is important? How much flexibility are you planning around, and how do you think about sight lines? Because I imagine that the delay between, like, the final architectural design to chips in data centers is still a year, 18 months, something like that? Yeah, I mean, there's all of these manufacturing and then deployment times that make it take a long time. In general, I would say that from sort of pencils down on chips to, like, when is the last time you're using
Starting point is 02:50:04 In general, I would say that from sort of pencils down on chips to like when is the last time you're using? it. The chip is going to be in the data center itself for like three to five years, and then there's maybe, as you say, a year, a year and a half of deployment time in advance of that. So you want your chip to be relevant for a five-year time span, maybe. Sure. So you need to pick a point of specialization which you think is here to stay. For us, that is very large matrices. And in fact, really large matrices together with a splitable systolic array, which is a piece of technology. But very large matrices is the theme
Starting point is 02:50:39 that we started with. And this is just a recognition of over time, models have been growing, they grew a ton with LLMs and they're continuing to grow. And if you specialize for that, you can get big efficiency wins
Starting point is 02:50:50 on the matrices themselves. Now, we are still very general-purpose programmable in terms of the vector unit. Similar to NVIDIA, we have this vector unit that you can run any instruction on, like adds, multiplies, and divides, all of those things.
Starting point is 02:51:05 And so that gives you, like, it's trying to put in a good amount of flexibility, but in a way that only costs like 5, 10% over the cost of the chip overall. Okay. Funding news. Give us the news. How much did you raise?
Starting point is 02:51:19 So we're happy to share we have raised $500 million this round. Yes. Who'd you raise it from? This was led by Jane Street and Situational Awareness. So Situational Awareness, that's Leopold Aschenbrenner's fund. If you've been living under a data center, that's his fund. Exactly. You might have heard of it.
Starting point is 02:51:44 So he really sees just like the big picture of where this space is going. And he recognizes just how much demand there is for silicon. And then on the other end of the spectrum, Jane Street, they are expert technologists. They know everything about what exactly is required to build a product like this, and they know what good is in a product like this. So we're really happy to have these like strong experts here. This sort of mirrors what we see inside the company as well. We have a wide range of experiences across hardware, software, and ML.
Starting point is 02:52:15 And then even in the rest of the investors who are participating in our round, we have renewed participation from our previous investors. This is Spark Capital and NFDG, as well as a range of folks such as Patrick and John Collison, experienced ML people like Andrej Karpathy, and then even participation from the supply chain, like Marvell. Did you let any normies in? They're just the most elite people in the world. Yeah, we like, I mean, just one mouth breather, please.
Starting point is 02:52:54 Congratulations. Probably the hardest lineup. It's the hardest lineup ever. It's amazing. I'm extremely excited for this and excited for what's to come. Now you have to win on such a massive scale, otherwise you'll bring dishonor to all the industry legends. No, thank you.
Starting point is 02:53:11 It's really cool to hear your perspective and approach to everything, and I'm sure you'll be back on the show this year. So congrats to the team. Yeah, we'd love to have you back. Thank you so much for taking the time. We'll talk to you soon. Goodbye. Let me tell you about Vod.com.
Starting point is 02:53:25 Where D2C brands, B2B startups, and AI companies advertise on streaming TV, pick channels, target audiences, and measure sales, just like on Meta. Have we had an investor lineup like that before? A royal flush. Jane Street, Situational Awareness co-lead, and then just down the list. Fantastic crew. Well, I mean, Situational Awareness is a new fund, has not led that many rounds. I know, but I'm just saying you go back.
Starting point is 02:53:45 Maybe it turns into a spray-and-pray fund. You never know. Maybe Leopold says, yeah, I'm just going to write five-million-dollar checks to every company. Who knows? Anyway, we have our next guest in the Restream waiting room. We got Standard Intelligence. How are you doing?
Starting point is 02:53:59 What's going on? Hey, I'm doing pretty well. How about you? We're doing fantastically. Thank you so much for taking the time to come on the show. This is the first time on the show. I'd love an introduction on yourself and the company.
Starting point is 02:54:11 Yeah. So I'm Devansh, co-founder at Standard Intelligence. We pretrain computer-use models, basically. So basically the thing people are doing is they're training, you know, on screenshots and, like, chain-of-thought traces, and we're just like, what if you train purely on 30 FPS video? What actually goes into the training data? Because there's a lot that you can do on a computer.
Starting point is 02:54:41 And I feel like if you've never trained on Ableton, and it just comes up randomly, like, are you actually going to be able to learn Ableton from just playing in Premiere Pro and Word and, you know, Paint or something, or Photoshop or whatever? Like, how are you thinking about the transfer and, like, what's actually in the training set? Actually, just zoom out and talk about the process in more depth. Yeah.
Starting point is 02:54:59 So we have, like, two splits of data, where we have, like, you know, this small, like, contractor split. The thing that we did was we made this app that people run on their computer, and it records their screen and, you know, logs all their key presses and all their mouse movements. And we're running that all the time. And then we also have this much, much larger kind of unlabeled data set of basically every video that we could possibly find that we're allowed to use on the internet of computer use. And so, yeah, we train a model to label that big set from this, like, small contractor-only set. And the goal is to just, like, train on all of it and, like, train this kind of general model that is able to generalize to basically anything that you can do on a computer. What kind of limitations do you have on who can install your software to capture that data?
Starting point is 02:55:51 If I'm a company, I feel like I have to have a pretty high degree of trust in you guys to let my employees run that. Is that more of, like, a partnership? Yeah, so right now it's, like, us, like, we're recording our own screens all the time. Plus, like, we have some number of contractors, and we get to, like, pay them, you know, somewhat less because they're not doing, like, active work for us. It's more, like, passive screen recording. Sure. Yeah. And then are you sitting on top of some sort of foundation model brain for reasoning chains and sort of, like, the LLM piece of the puzzle?
Starting point is 02:56:25 Or is this a model that kind of lives on its own? Not right now? No, we're not at all. Like, the model that we released, that we demoed, is, like, entirely trained on this kind of 30 FPS video and, like, you know, typing and mouse movements and things like this. Okay, so how much longer will I have to fill out forms on the internet? I should try to estimate how many times I've entered the same information just over and over and over. I think this is like a huge...
Starting point is 02:57:02 John Collison talked about it on our show today and was talking about it on his own show with Ben Thompson, talking about, like, at what point can you just take a link and say, like, hey, please buy this? Yeah. And then it just does it for you. Like, pretty soon. I think, like, that kind of use case
Starting point is 02:57:16 is just, like, under six months away, depending on what, like, exactly you mean. Yeah. I mean, in terms of actual deployment, I imagine that this would be something I personally would probably want more as, like, a tool that's called from a consumer LLM app. Is that how I'm correctly thinking about this? Or do you think, like, will you jump straight to a consumer?
Starting point is 02:57:43 It's like, yeah, I think in the short term, the kinds of people that are particularly, like, you know, cool to sell to are, like, you know, mechanical engineers doing CAD, where they can press the tab button like software engineers press tab, and have their next minute or two minutes of manual work done. And we showed that in the kind of gear-extrusion demo, where you have this gear and you're extruding faces, and that's just, like, a very, very common thing that you do in CAD. And I think there's, like, a more general thing where, like, yeah, you can think of computer use as, like, a tool call, or you can think of it as, like, you know, just the thing that you do, you know, for knowledge work. And I think we're just in
Starting point is 02:58:27 a place where we can scale computer use on its own. It's not impossible that we'll, like, initialize from LLMs or, for example, like, use text training to, like, make the model smarter in text space so it can fill out forms better. But it is not the goal of the company that, like, you know, you have Claude, like, call this as a tool call. The goal is for it to use your computer, or, like, use its own computer, just, like, in general. Talk about your experiments with self-driving, and does that work potentially apply to robotics more generally? Yeah, so I think this general, like, pretraining thing, or, like, you know, labeling a bunch of unsupervised data with actions, and then training on that, like, labeled data, this, like,
Starting point is 02:59:19 inverse dynamics thing works, or, like, I expect it to transfer, very well to robotics, and self-driving in particular. So Neil, who works at ASI, was like, okay, we have this action model, and his friend had a comma, and there's a comma joystick mode where you can control the steering with arrows. And so we were like, okay, well, if it's a general computer use model,
Starting point is 02:59:44 surely it should be able to control a car, because that's just, like, a thing that you do on a computer. It's video in, you're seeing it on the screen, and then you can press the left and right arrow keys to steer. And obviously we originally didn't really expect this to work, and then it just worked much better than expected.
Starting point is 03:00:04 We fine-tuned on an hour of data. On how many hours? Three hours? One hour? One hour, 50 minutes. That's crazy. And you're able to just, fully, the system could just navigate around SF? I mean, sorry, navigate around, like, South Park.
Starting point is 03:00:22 Like, it's, you know, not a general self-driving model. I would not recommend, like, sitting in this car and just letting it do whatever it wants. But, yeah, it's pretty cool. I take the wheel. Don't make mistakes. Take the wheel. We are not a Tesla competitor. We are not a Waymo competitor.
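The inverse-dynamics labeling approach described here (a small model trained on a little action-labeled data infers the action taken between consecutive frames, and those inferred actions turn raw screen or driving video into training data for the larger behavior model) can be sketched as a toy pipeline. This is a minimal illustration only; the labeler, data, and names are all made up:

```python
import numpy as np

# Toy sketch of inverse-dynamics pseudo-labeling: infer the action taken
# between consecutive frames, then use those inferred actions to turn
# unlabeled video into (frame, action) training pairs for a behavior model.

def inverse_dynamics(frame_t, frame_t1):
    """Stand-in labeler: in this toy, the 'action' is just the sign of
    the mean pixel change between the two frames."""
    delta = frame_t1.mean() - frame_t.mean()
    return "right" if delta > 0 else "left"

# Unlabeled 'video': a sequence of toy 4x4 frames.
frames = [np.full((4, 4), v, dtype=float) for v in (0.1, 0.4, 0.2, 0.9)]

# Pseudo-label every consecutive frame pair.
dataset = [
    (frames[i], inverse_dynamics(frames[i], frames[i + 1]))
    for i in range(len(frames) - 1)
]

actions = [a for _, a in dataset]
print(actions)  # ['right', 'left', 'right']
```

In the real setting, the labeler would itself be a learned model trained on the small action-labeled subset, and the pseudo-labeled pairs would feed large-scale pre-training rather than sit in a Python list.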
Starting point is 03:00:39 Do you think the sport coat is, like, the next it apparel item? Because it looks fantastic here. The chat loves your sport coat. There's been a lot of Wall Street bros lately. Maybe the middle ground between Wall Street and San Francisco is a sport coat. But yours is fantastic. Yeah, I don't know. I really like this.
Starting point is 03:00:59 I got it from, like, Bonobos. Nice. And you need a pocket square. There you go. That's great. I think I like dressing up at least a little bit, and it's fun. That's good. Okay, back to the business.
Starting point is 03:01:11 Yeah. I want to know, it feels like you're training a very generalized model. What are you learning from the previous product launches? Where, you know, we had this ChatGPT moment, and then, I don't even remember, people were just kind of chatting with ChatGPT back and forth. And they started using it, they started using it kind of as a Google replacement. And then that kicked off the whole Google's-cooked narrative. And then with the Studio Ghibli moment, it was really the launch of, like, a better diffusion
Starting point is 03:01:44 model with some reasoning in there, I think, and stuff. And so, and then people were just like, this is a studio Ghibli creator. And then they found that niche of like, it's really good at creating cartoons. It's not quite style transfer, but that's what it does well. How much do you want to just like turn a wild open model loose and then hope that someone finds a killer app versus like you kind of know that this is going to kill in CAD and you're just going to launch like cursor for CAD on day one and then like go from there? Yeah, I think there's, I think the answer is like some combination. I'm like, okay, short-term CAD design work somewhat generally are things that the current models just like totally can't do.
Starting point is 03:02:25 Like LLMs are just like, or like anything that is an LLM harness is just like really, really bad at CAD, for example. And so that seems like a, okay, we know what to do there. We like, you know, can just scale up this model. We have a bunch of blender data, a bunch of 3D modeling data in general, and we can scale up CAD. And then also, yeah, yeah, I like, I'm quite excited to release a more general tab model for people to play around with and figure out what it's particularly good at.
Starting point is 03:02:54 And so it's like when I'm asked like what commercialization plans are, it's like we have some reasonable idea of what the first steps are, but like there could just be this like massive thing once people start playing with it at that scale. So your training on video frames, 30 FPS video, correct? Yeah. I was told by an anonymous poster on X by the name of Rune, that text, in fact, is the universal interface. Was I lied to? Yes. Whoa. Shots fired. Explain. Elaborate. Like, why doesn't this just collapse down to text? Why don't I
Starting point is 03:03:31 puppeteer CAD from text? Like, how does this all play together? Like, okay, I think it is in, at some point in the, like, arbitrarily long future, like, if we only use text models, we could force, like, most things to be text. I think there are just like a lot of things that are much more native when done from like a computer use, like, you know, due wise are designed for humans. They're designed for like humans to use. We have, you know, this massive long tail of like things on the internet that are like entirely undoable by LLMs. For example, like when I do ML engineering, right, because like most of my time is not spent, most of my time is just like spent doing kind of this grunt work of engineering, and it's like
Starting point is 03:04:16 a lot of looking at graphs and analyzing graphs and figuring out comparing lost curves or something. You can do this in text, but it's just a much larger pain than doing it in this kind of native interface, which is video.
Starting point is 03:04:35 And I don't know, there's a reason why humans don't interact with the computer purely through text. It would kind of suck. For example, we have like the concept of, like video has the concept of time in a way that text doesn't. Speak for yourself. I got green,
Starting point is 03:04:50 I got green text. If this can eliminate, if this can eliminate YouTube tutorials for software, that's a killer app, is this, yeah, is this, is that anything?
Starting point is 03:04:59 It's not just going to eliminate the tutorials. It's going to eliminate the whole process. Because you don't need the tutorial if you're just like, just go do the thing that I need you to do. But yeah, I mean, you're obviously in the same talk phase for a while. How are you thinking about,
Starting point is 03:05:13 uh, go to Mark. in general for the underlying technology? Yeah, so I think, like, as I said, there are the kind of short-term like CAD design use cases. There's like the tab model we want to just like give anyone a kind of general thing of, you know, in cursor, you press tab and it completes your next edit or whatever.
Starting point is 03:05:36 What if you could press tab and it completes like the next five and then 10 and then 60 seconds of what you would do in your computer? And then I think longer term, it's just like, you know, we're training a general model that is able to do useful work and you'll be able to like send it off with a prompt to do work. And then like there's a very interesting thing where like the data that we're training on is very, there's a bunch of like error correction built into it. So like when you have a bunch of data of humans doing things, a lot of the times the humans make mistakes. And then they have to like correct those mistakes. And you don't get that with text because like most tax. on the internet, you don't get to see the process of, like, you know, messing up and then fixing it.
Starting point is 03:06:17 And so, yeah, I expect there to be a lot of, like, native, like, no RL, just, like, prior of doing the self-correction thing properly. So, you can get it to, like, go do something for 10 minutes, and it'll, like, try something for two minutes and then, and then, like, you know, mess up slightly, but it, like, knows how to fix that over and over again until it's, like, gotten to... Us all to state. Yep. Very cool.
Starting point is 03:06:46 Well, congratulations on the launch, and thanks for taking the time to come chat with us. Thank you for sport coding. Yeah. We're going to get some, we need some sport coats around the office. You need something in between.
Starting point is 03:06:55 You got John over here formal. I'm doing casual Friday on a Tuesday, but a sport coat perfectly in the middle. It was great to meet you and come back on soon. Thank you. We'll talk to you soon. Cheers. Goodbye.
Starting point is 03:07:08 Well, back to the timeline. There was an, an individual who accidentally gained control of 7,000 DJI vacuums
Starting point is 03:07:20 he was just vibe coding amazing and accidentally found according to investment Hulk the CCP
Starting point is 03:07:26 back door he wanted to control it with a gaming controller and then he just he got control of
Starting point is 03:07:33 everything that's so crazy this is why I've been deeply concerned with letting any any foreign
Starting point is 03:07:41 adversary flood our country with a bunch of robots. I think we should avoid it. I thought this was funny earlier. Heggseth says he'll order random pizzas to throw off the monitoring app. I expected something like this to happen. It's kind of silly that everyone has a dashboard up and can tell when things might be getting a little more tense in the Pentagon. So yeah, give them a budget of a few hundred thousand dollars a year and just order pizzas at random times. For the record, this is a joke and he is joking.
Starting point is 03:08:13 But I do think they could throw it off. Potentially, you never know. They could throw it around. HubSpot acquired starter story. Yeah, this is very exciting. Very cool. Yeah, starter story. Overnight success.
Starting point is 03:08:27 And decade he's been doing this. I believe Pat, the founder, had just posted. Yeah. He posted something. He was like sub-stead or he said HubSpot should acquire starter story. And then like two weeks later it was done. Wait, really? Oh, I thought that was from like years ago.
Starting point is 03:08:40 Oh, interesting. Oh, maybe it was. I thought he posted that a long time ago. And then, yeah, he said, no, he said September 23, 2025. Oh, well, not long. Yeah, a couple months. A couple months. HubSpot should acquire starter story.
Starting point is 03:08:53 The SEO ship is sinking. In my opinion, HubSpot needs to pivot way harder to video, specifically YouTube. Yeah. I'm biased, but acquiring Starter Story would take their YouTube game to the next level. And he was quoting Brian, the co-founder, saying, Dear Founders, it's a good time to sell your company. Bob Bryant. Anyways, there's a bunch more stuff in here, but we will get to it tomorrow.
Starting point is 03:09:20 ArenaMag is out. Go check out issue number seven. They're on Substack now, Arenamagizine.substack.com. Go check it out. And give us a spot. One more post for you. Deep Dish Enjoyer says, I don't see what the point of shoveling snow is when AI agents are going to commoditize burrito taxi services by 2028. It's a good excuse.
Starting point is 03:09:42 Leave us five stars on Apple Podcasts and Spotify. Subscribe to our newsletter at TBPN.com. Have the best evening of your entire life. We love you. Goodbye. Nice work, brothers. I'll see you on the next one.
