TBPN Live - David Sacked by NYT, Sir Dylan Patel Joins, Kushner & Sama are Thriving | Ro Khanna, Jonathan Swerdlin, Cristóbal Valenzuela, Vincent Weisser, Ben Hylak, Alby Churven

Episode Date: December 1, 2025

(00:16) - Alby Churven is a teenage entrepreneur from Sydney who, by age 14, has already founded Finkle, a gamified learning platform aimed at teaching teens coding, entrepreneurship, AI, and... real-world skills. He began coding when he was six years old, and previously built Roblox games and a youth-oriented soccer brand before pitching Finkle to Y Combinator (Winter 2026). Alby’s vision blends youthful creativity with a mission to rethink education — and his journey has drawn global attention for ambition and boldness.
(07:22) - Three Years Since the Launch of ChatGPT
(13:06) - Gemini Surges
(20:17) - David Sacked by NYT
(39:54) - 𝕏 Timeline Reactions
(01:01:19) - Dylan Patel, Founder and Chief Analyst at SemiAnalysis, discusses Google's strategy to sell Tensor Processing Units (TPUs) externally, highlighting the challenges posed by their non-standard design and the need for broader software support. He emphasizes the importance of open-source software in expanding TPU adoption and notes that while Google's internal software stack is robust, making it accessible to external customers is crucial. Patel also touches on the competitive dynamics between Google and Nvidia, particularly regarding hardware performance, software ecosystems, and market positioning.
(01:33:48) - Ro Khanna, a Democratic U.S. Representative from California's 17th congressional district, is known for his advocacy on technology, economic equity, and transparency. In the conversation, he discusses his legislative efforts, including the bipartisan Epstein Files Transparency Act, which mandates the release of all Justice Department files related to Jeffrey Epstein, aiming to hold powerful individuals accountable and restore public trust. Khanna also addresses the impact of artificial intelligence on employment, emphasizing the need for policies that enhance human capabilities rather than replace workers, and highlights the importance of balancing technological advancement with job preservation to maintain social cohesion.
(02:11:19) - Jonathan Swerdlin, co-founder and CEO of Function Health, is dedicated to empowering individuals to proactively manage their health through comprehensive lab testing and advanced imaging services. In the conversation, he discusses Function's mission to provide affordable access to over 160 lab tests and full-body MRI scans, enabling early detection of potential health issues. Swerdlin emphasizes the importance of utilizing technology to make personalized health data accessible, aiming to help people live longer, healthier lives.
(02:27:59) - Thrive Announces Partnership with OpenAI
(02:29:55) - Cristóbal Valenzuela, CEO and co-founder of Runway, discusses the release of Gen-4.5, the company's latest AI video generation model. Gen-4.5 achieves unprecedented visual fidelity and creative control, producing cinematic and highly realistic outputs while providing precise control over every aspect of generation. Valenzuela highlights that Gen-4.5 has surpassed competitors like Google's Veo 3 and OpenAI's Sora 2 Pro, securing the top position on the Artificial Analysis Text to Video benchmark.
(02:46:41) - Vincent Weisser, CEO of Prime Intellect, discusses the recent release of Intellect 3, a 100-billion parameter model developed through scaled reinforcement learning and post-training, achieving state-of-the-art performance at a smaller scale. He highlights the creation of an open environment where contributors worldwide can develop reinforcement learning environments, enhancing the model's capabilities across various tasks. Weisser emphasizes the trend of open-source models matching closed models' performance and the potential for businesses to fine-tune models for specific applications, leading to better performance and cost efficiency.
(03:01:01) - Ben Hylak, co-founder and CTO of Raindrop—a company providing monitoring solutions for AI agents—discusses the challenges of silent failures in AI systems and the importance of real-time monitoring to detect and address these issues. He highlights how Raindrop's platform processes millions of events daily, enabling engineering teams to identify complex problems like tool call failures and user frustration. Additionally, Hylak shares that Raindrop recently secured $15 million in seed funding led by Lightspeed Venture Partners to further develop their monitoring infrastructure.
(03:17:02) - 𝕏 Timeline Reactions

TBPN.com is made possible by:
Ramp - https://ramp.com
Figma - https://figma.com
Vanta - https://vanta.com
Linear - https://linear.app
Eight Sleep - https://eightsleep.com/tbpn
Wander - https://wander.com/tbpn
Public - https://public.com
AdQuick - https://adquick.com
Bezel - https://getbezel.com
Numeral - https://www.numeralhq.com
Attio - https://attio.com/tbpn
Fin - https://fin.ai/tbpn
Graphite - https://graphite.dev
Restream - https://restream.io
Profound - https://tryprofound.com
Julius AI - https://julius.ai
turbopuffer - https://turbopuffer.com
Polymarket - https://polymarket.com
fal - https://fal.ai
Privy - https://www.privy.io
Cognition - https://cognition.ai
Gemini - https://gemini.google.com

Follow TBPN:
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://www.youtube.com/@TBPNLive

Transcript
Starting point is 00:00:00 You're watching TBPN. Today's Monday, December 1st, 2025. We are live from the TBPN Ultradome, the Temple of Technology, the Fortress of Finance, the Capital of Capital, Ramp. Time is money, save both. Easy-to-use corporate cards, bill payments, accounting, and a whole lot more, all in one place. We have a special guest. Special guest today, opening the show with us.
Starting point is 00:00:20 Alby, from the Land Down Under. Probably saw him go viral recently. But why don't you introduce yourself? Yeah. So I just, I actually arrived in L.A. on Saturday. Welcome. Yeah, I'm from Sydney, or Wollongong, so about an hour and a half from Sydney. And yeah, I've been building something called Finkle, which is basically Duolingo for Life Skills.
Starting point is 00:00:44 And I just applied to YC as well with that post on X. How many views did you get on the application video? I think it got like 7.8 million, so yeah. Let's hit the gong. Pretty viral. Alby. Well done. So give me an example of a life skill that you can learn with your app.
Starting point is 00:01:07 Yeah, I guess like entrepreneurship especially, startups and stuff. Because like in Australia, I don't know about it in the U.S., but school is very entry level. It's not hands-on. I feel like it's very just not preparing us for life. Like you do need it if you want to be a doctor or a lawyer or something, but some kids don't want to do that. And, like, yes, you do have commerce and computer science and stuff,
Starting point is 00:01:32 which I am doing as electives. But they're not hands-on, and they're very, like, outdated and, like, textbook-heavy. So I feel like actually learning life skills that you can apply now, especially, like, with AI and everything. Like, if you don't know how to use AI now, you're sort of going to be left behind. Very exciting. What are you hoping to get out of your trip?
Starting point is 00:01:55 Are you on summer holiday right now? Well, like, my exams just finished before I came, so there's still like two weeks left of school. How do you think you did? I think I got like a B in science and like a B in math. Focus on the game. Focus on the game. Yeah, I really guess what I'm trying to get out of it
Starting point is 00:02:18 is just to like meet as many people as possible, make as many connections as possible, because a trip like this probably won't happen again for a while. So, yeah, that's sort of my goal. What's the status of the YC application? You've submitted it? Yeah.
Starting point is 00:02:33 Have you heard back yet? No, it hasn't been. It's still like... We've got recommendations. Yeah, we got a lot of them here. Any YC alum watching this, please go leave a recommendation. Yeah. Yeah.
Starting point is 00:02:44 But congratulations. Thanks so much for coming by. What is the stage of development of the actual application, the product itself? Are you live? Can people go download it? Demo? Right now we're getting like beta testers, but the beta should be launching soon, probably by like the end of this year. Do you have a wait list? Are you doing email capture yet? Yeah, like wait list, beta testers, we've got like a couple hundred, but yeah. Very cool. Incredible. Well, congratulations on
Starting point is 00:03:13 all the attention. I'm sure you'll convert it into a lot of opportunity. And have a great trip. Yes, and good luck with the YC application. Thanks so much. And looking sharp in the suit. Looking sharp. Amazing. Have a good rest of the time. Thank you. Thanks for stopping by. Before we move on to the rest of the show,
Starting point is 00:03:31 let me tell you about Restream. One live stream, 30 plus destinations. If you want to multistream, go to Restream.com. And it's been three years since ChatGPT launched. I wanted to reflect a little bit. Everything changed, or maybe nothing changed, or maybe some amount of change in between everything and nothing. You're more on the nothing changed camp.
Starting point is 00:03:51 I sort of agree. with you. I was sort of reflecting on like, okay, Thanksgiving's happened. It was Thanksgiving over the weekend. You know, how different is my world? Like, there's not a humanoid robot that's cooking for me. And also, even if we had a humanoid robot, I think that would, I think Thanksgiving would be the day we let the robot sit in the closet because we enjoy, no, no, let us cook. We enjoy cooking. Cooking is a fun family experience. And so of all the things. Let us. Thanksgiving is like the track day of cooking. Like, even if you have a robot, does it, you still want to do it on Thanksgiving. You don't want to cook on a random Tuesday
Starting point is 00:04:27 when you're busy, you got lunch, you know, all this other stuff. Thanksgiving is the, is the Nürburgring. And I was doing some dishes after Thanksgiving, and I felt like it was a good way to kind of like, it felt like walking off the pie in a little way. I wasn't walking, wasn't walking very far. Yeah, yeah, yeah, yeah. Yeah. And so, yeah, so that, that hasn't really changed that much for me. Um, I was thinking, I was reflecting more on the agentic commerce thing. It feels like ChatGPT and OpenAI, they really are pushing to make revenue from agentic commerce, like both in this holiday season. And incredible speed of execution. Like, clearly it's a big opportunity. If you can figure out how to, you know, run ads, commerce, convert, take a cut
Starting point is 00:05:06 of that. That's big. My experience actually demoing it, it was kind of interesting. Like, the actual product in ChatGPT is pretty good, but you can see that the walled gardens are already going up. So one place that I like to go to for reviews of products, specifically around the holidays, is the wire cutter. Now, the wire cutter, their whole twist was they wouldn't rate each product. What they would do is they would pick a category, and then they would just tell you what their best product was in that category. Sort of like a ClusterMAX of vacuums.
Starting point is 00:05:36 So they would give you the platinum tier vacuum and then a budget pick. And so I've always liked the wire cutter. I think they do a very rigorous job. They were acquired by the New York Times. The New York Times is currently in a lawsuit with Open AI. And so if you go to ChatGPT and say, hey... And I think they're about to be in a lawsuit with David Sacks. Maybe, maybe, which we will talk about on the show in a little bit.
Starting point is 00:05:58 But if you go, so I went to ChatGPT and I was like, hey, okay, pull a deep research report. Just pull everything from the wire cutter and tell me every category and every product that's top ranked. Because then I can just scan it really quickly and be like, oh, yeah, I didn't even remember that that category existed. That would be a great gift. I'll get it. And I'll go through the wire cutter link. I'm fine with that. I'm paying ChatGPT.
Starting point is 00:06:21 I'm happy to go and use their affiliate link on the wirecutter. That's how the wirecutter monetizes. But it couldn't do it. It said, hey, we don't, we can't touch the wirecutter. Like, it's off limits. You've got to head over there yourself. Pop open a Chrome tab, brother. If you want to get over there, like that's on you.
Starting point is 00:06:38 Or maybe an Atlas tab. I don't know. But so that had not really changed that much for me. But the one thing that did really change on Thanksgiving was the discourse. Like, the AI narrative has fully arrived to just family and friends. You mean family in the home? Yes, yes.
Starting point is 00:06:58 In people that don't work in technology, that don't, their job is not podcasting on it. Not that. More talking about is it a bubble. Where do you think all this stuff goes? The stuff that, you know, we've been talking about for months. You're not living in a bubble? You think the average family in America is staring at the AI bubble? I saw multiple newsletters where the whole conceit of the newsletters
Starting point is 00:07:19 going into the holidays was how to talk to your family about the AI bubble and how to talk to your family about AI generally. And I think it's real because if you've been watching your 401k over the last year, you've seen a massive spike and then a recent sell-off. And if you've turned on any news or opened up any newspaper, you've been hearing about one trillion dollars. And you're like, what, a trillion dollars? That chat GPT app, they need a trillion dollars to make that thing work? Chat. Right? Chat. And so it and so it is. It is a really big narrative. And so I wanted to reflect on like what has actually changed over the last three years.
Starting point is 00:07:55 And specifically in the Mag 7, the Mag 7 has been on an absolute tear. Just over the last three years, the value as a whole has basically tripled. It was a little under $8 trillion. Now it's over $21 trillion. It's a lot of value created in the last three years. NVIDIA was second to last in the Mag 7 when ChatGPT launched. It was worth just $420 billion, something around there. Today, the stock is up over 10x, basically. It's $4.36 trillion. And up today. Despite all the chaos. Dylan Patel was trying
Starting point is 00:08:33 so hard to bring that stock down, but he couldn't do it. He's coming on the show at noon. We're going to confront him about his bear posting and whether or not the market is overreacting. Broadcom is down today. Okay. Why? They're the maker of the TPU. Oh, yeah. I mean, a lot of these things, it's like it's already been priced in. I mean, even when you read that semi-analysis piece, you know, a lot of it's like, we've been writing about this for months, people have already put this trade on, et cetera, et cetera. But I do think that the Nvidia, the 10x that's happened, has really created some crazy zealots and just an entire industrial complex, because there are so many people who put, who heard AI, they tried the ChatGPT thing, and they were like,
Starting point is 00:09:18 this is big. How do I get in on this? I can't buy OpenAI. OpenAI is running away with it. Oh, they need Nvidia chips. That's the logical next step. They went in on Nvidia and they got a 10x. And they could have gotten a 10x on like a million dollars, $10 million. There's no amount of money, because it was already a $420 billion company. So you could be, you could put your entire retirement savings in it. No problem. Complete liquidity, right? It's not, oh, you've got to get some SPV. It was really easy. Siqi Chen from Runway was saying that back in, I think it was 2020, 2021. He said he put an uncomfortable amount of his net worth into Nvidia. Yep. And obviously. And near Cyan, same story, right? Still underappreciating. The NVIDIA 10-year fund, all it does is
Starting point is 00:10:03 buy NVIDIA. Just by investing in it, you can't possibly sell. God's chosen company. Yes. That's what the, I think, title of the fund was. Oh, really? Yeah, yeah. That's hilarious. And so, I mean, yeah, there's been a ton of zealots. We're going to talk to Dylan Patel at noon about some of the zealots that have been attacking him. Previously, the world's largest company in November of 2022 was Apple. And at the time, they had a sizable lead over Microsoft, Amazon, and Google. Now that gap has closed a bit as the hyperscalers have grown more over the last three years on the back of the AI boom.
Starting point is 00:10:37 And it's interesting. I mean, you can sort the Mag 7 by market cap. And today, you get the following ranking. Tesla, then Meta, then Amazon, Microsoft, Alphabet, Apple, and then Nvidia at the top. And the big question, I think, that's on everyone's mind and kind of underpins the horse race that we cover every day on the show is, what will that ranking look like in the next three years? Is Nvidia really a monopoly? Is it impervious to attacks from the, you know, different suppliers? What does Broadcom have to do to get into the
Starting point is 00:11:10 Mag 7? I don't know. Tesla's sitting at 10 on the market cap. Yeah. Companiesmarketcap.com, which we are not affiliated with, which is just a fantastic website. Broadcom is sitting number six, above Meta currently. I don't know. I mean, I think several years in the $1 trillion club, like just being undeniable at that scale. There's also just like a bit of branding. Some of the companies that made it into the Mag 7 were, I feel like the Mag 7 leaned
Starting point is 00:11:41 understandable, like not that deep in the supply chain. Even Nvidia was the deepest. Nvidia had the least of like a consumer brand, but still a lot of people use the gaming graphics cards. Broadcom is really tricky because there's no consumer angle whatsoever. Consumers can buy Tesla. They can use Meta products. They can buy on Amazon, have a Microsoft, you know, operating system. They can use Google. They can have an iPhone. And they can have an Nvidia gaming graphics card. The top 10 right now. Tesla is sitting at 10.
Starting point is 00:12:18 TSMC at nine, eight is Saudi Aramco, seven is Meta, and then six is Broadcom. I also think you have to be an American company to be in this like Mag 7 or whatever the hot ranking is, like FAANG. FAANG did not include, never included oil companies, never included international companies. Because if you go there, then you could be like, oh, well, let's include like the Chinese tobacco company that's worth a trillion dollars or something like that. Like there are some crazy, there's some crazy like foreign-owned companies that are, if they were independent, might be worth a trillion dollars because they just have so much of the country's assets. Yeah, exactly. But it doesn't really count because it's just sitting there out in the ether. Well, let me tell you about Gemini 3 Pro, Google's most intelligent model yet,
Starting point is 00:13:03 state-of-the-art reasoning, next-level vibe coding, and deep multimodal understanding. And speaking of that, BuccoCapital Bloke has a post here, Gemini app downloads are catching up to ChatGPT, and Gemini users now spend more time in the app than ChatGPT users. People are going back and forth on can Gemini catch up? You know, the model clearly very good. The big bombshell in the semi-analysis piece over the weekend was this idea, which I think has been bandied about before,
Starting point is 00:13:32 this idea that OpenAI has not done a proper pre-train since 4o, and the 4.5 pre-train kind of got mothballed. But there was this question about, is pre-training dead? Seems like the Google folks said, no, it's not. And then they went and did a pre-train and Gemini 3 outperformed. Anthropic also pre-trained. I mean, yeah. We asked Sholto about this and he said, oh, yeah, we're still bullish on scaling.
Starting point is 00:13:59 Yeah. I think actually, like Sholto kind of like in the subtext said like the reason Opus 4.5 was good is not because it was a new pre-train, it's because it was RL. That's what I read. That was your reading? Yeah. I feel like there's still, there's still juice left in pre-training, but it's not scale. Like, we only have one internet.
Starting point is 00:14:19 Ilya was correct about that. It's not scaling the size of the pre-train, which is what happened with 4.5, with GPT-4.5. That was just bigger, I guess. But it does seem like there's little optimizations that you can do on the pre-training side. But I don't know. We'll have to dig into it.
Starting point is 00:14:37 But I think the thing that no one is debating is the fact that the Gemini 3 as a model, with Nano Banana Pro, with Veo 3, is just like the actual foundational intelligence is plenty good to be dominant in the consumer AI category. The question is, can you actually get people to install the app, use it, can they enjoy it? Do they not churn and go back to ChatGPT?
Starting point is 00:15:02 I've been fighting back and forth, left and right, going into one app and the other. I was getting a ton of disconnect errors with the Gemini app, even though the model's great, and there's some really cool features. Yeah, they need to catch up on the product
Starting point is 00:15:17 side. Exactly, yeah, the product side. And so a lot of people are saying, like, oh, Gemini team should just, the app team should just go and, you know, copy ChatGPT's homework and, you know, copy all these little features. I've put out a post that the folks over at the Gemini team actually, you know, did turn into bug reports and I think are working on. But it really does seem like it's a really, it's a sprint to actually create an app that is as sticky as ChatGPT because ChatGPT, the app is fantastic and very, very well designed.
Starting point is 00:15:42 And so the... Yeah, and there's some reporting from similar web is what the FT is using to track average user minutes. I always find those hard to... I mean, it must be like Nielsen ratings where they're like polling people or something because... Yeah, and I don't know how this works. You can't get a pixel in OpenAI. Like, you can't get a pixel into the Gemini app. And are they counting user minutes if a tab is open, but I'm not actually in...
Starting point is 00:16:08 And is this just desktop? Because that's like completely separate from mobile use. desktop and mobile web, which, again, I don't know a lot of people that are using mobile web. I don't know. I wouldn't read too much into this data specifically. I would much more look at, like, what are the structural advantages that we know exist? And I mean, with Gemini, one of them is, to that point about the wire cutter. You know, you know where the wire cutter shows up? Google search results. You know what company has one bot for scraping everything? Google. So the Google bot identifies as one entity. So you can either say, I'm allowing Google or not. And it's a big, it's a tall order to be like, yeah, I don't want to be in Google results.
Starting point is 00:16:50 And so a lot of companies are saying, yeah, I'm good with Google showing up in Google results, but that also shows up in AI search results. And there are things that companies can do to say, hey, don't put me in the Gemini, you know, like training data set necessarily. But in terms of just actually showing up, you've seen it in the Google, in the Gemini app. It says using Google search. And so if I go to Gemini and I say, hey, head over to the wirecutter, find me the best vacuum cleaner. Google probably can do that.
Starting point is 00:17:18 I'm testing it right now. I'm testing it right now. Whereas Open AI is in a fight with the New York Times. Whereas Google and the New York Times, like they might not love each other, but they definitely have like an uncomfortable truce, right? A funny, a funny Gemini integration that I've used is that you land in a hangout and you just say, who is this person? Is this real? You can actually do that? It is. It pulls up a sidebar. You can just ask, like, who am I meeting with right now? And it'll give you like a...
Starting point is 00:17:48 It's clearly. Who am I meeting with? What should I say? What should I ask them? What is my name? What are... What do they want to know about me? What should I tell them about me? Okay, so Gemini was able to pull wire cutter recommendations. Yeah. I don't know. This is interesting. I feel like, I feel like, um... Yeah. I wonder, yeah, I wonder if, if wirecutter is actually benefiting from this in any way yet. I mean, for sure, because Google hasn't, Gemini hasn't rolled out the agentic commerce stuff that would actually like scrape out the referral token.
Starting point is 00:18:29 And so if I'm, if I'm in Gemini and I'm saying, I'm going to do some agentic shopping or whatever, and I say, pull me the best vacuum cleaner from the wire cutter, it goes over and does that. and then I land on the wire cutter, and then I click that link, that should give the wire cutter the credit. Now, if I, as a follow-up prompt, go in Gemini and say, okay, great, the wire cutter told me the best vacuum cleaner is from James Dyson, of course, is the Dyson. Find me the Amazon link. Well, Gemini's probably not given the wire cutter the attribution at that point.
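As a rough illustration of the attribution question here, the difference comes down to whether the final product link still carries the publisher's referral parameter. This is a sketch only: the "tag" parameter name and the example URLs are assumptions for the example, not how Gemini or any retailer actually handles attribution.

```python
# Minimal sketch: does a product URL still carry an affiliate/referral tag?
from urllib.parse import urlparse, parse_qs

def has_affiliate_tag(url: str, param: str = "tag") -> bool:
    """Return True if the URL's query string still carries the referral parameter."""
    return param in parse_qs(urlparse(url).query)

# Link a reader gets by clicking through the publisher's review page (hypothetical):
print(has_affiliate_tag("https://www.example-retailer.com/dp/B0EXAMPLE?tag=publisher-20"))  # True
# Link an assistant might return when asked directly for "the retailer link":
print(has_affiliate_tag("https://www.example-retailer.com/dp/B0EXAMPLE"))                   # False
```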
Starting point is 00:19:01 It might even be taking its own attribution. I don't know exactly how it's functioning right now, but I would imagine that that link does not get reinstantiated as the wirecutter affiliate link. And so we could say, I mean, these are all, like, going to be pretty existential questions for the SEO crowd, anyone who's monetizing off of SEO. We saw some screenshot that apparently site traffic to Vox properties is down 50%. And I don't know how much of that is just the shift to social media versus the shift to AI. Yeah, how much is it their business. strategy, just being like, hey, we want to do more video.
Starting point is 00:19:38 Yeah. And that'll be distributed off our site for the most part. I think a lot of people generally do not, they consume more and more content on social media platforms. They go from YouTube to their RSS player to audio books, to Twitter, to Instagram, and they kind of bounce around from one and the other. And then every once in a while, they will go in and actually land on a particular site. Like you can.
Starting point is 00:20:02 morning. And you can also sign up for Cognition. They're the makers of Devin, the AI software engineer, crush your backlog with your personal AI engineering team. Well, speaking of the New York Times, David Sacks is going to war with the New York Times. He says inside the NYT's hoax factory, calls it a hoax factory, because the New York Times posted a piece about David Sacks saying that the headline was Silicon Valley's man in the White House is benefiting himself and his friends, and Ryan Mac was going back and forth with Shaun Maguire, or, yeah, Shaun Maguire. Ryan Mac says, today has been a good example of what X has become, complaints from a subset of
Starting point is 00:20:49 wealthy tech folks about a story that circulates more widely than the actual story itself. Musk bought the platform to control the message, and he and his friends are getting just that, and Shaun Maguire says, you don't get to run this headline, then write an article that doesn't validate the claim and then get away with playing the victim. We see through the ruse. And so David Sacks has responded in full to the NYT's hoax factory. He says five months ago, five New York Times reporters were dispatched to create a story about my supposed conflicts of interest working as the White House AI and Crypto czar. Through a series of fact checks, they revealed their accusations, which we debunked in detail. Not surprisingly,
Starting point is 00:21:29 the published article included only bits and pieces of our responses. Their accusations ranged from a fabricated dinner with a leading tech CEO to non-existent promises of access to the president to baseless claims of influencing defense contracts. Every time we would prove an accusation false, NYT pivoted to the next allegation. This is why the story has dragged on for five months. Today they evidently just threw up their hands and published this nothing burger. Anyone who reads the story carefully can see that they strung together a bunch of anecdotes that don't support the headline. And of course, that was the whole point. At no point in their constant goalpost shifting was NYT willing to update the premise of their story to accept that I have no
Starting point is 00:22:11 conflicts of interest to uncover. No conflicts of interest. As it became clear that NYT wasn't interested in writing a fair story, I hired the law firm Clare Locke, which specializes in defamation law. I'm attaching Clare Locke's letter to the NYT so readers have full context on our interactions with NYT reporters over the past several months. Once you read the letter, it becomes very clear how NYT willfully mischaracterized or ignored the facts to support their bogus narrative.
Starting point is 00:22:41 So Will says hiring Clare Locke for this is sick. Cruise missile to blow up a straw hut. He's a big fan of litigation. He loves litigation. Well, people have been supportive of this broadly in tech. Let's go through some of the reaction. Sam Altman says David Sacks really understands
Starting point is 00:23:01 AI and cares about the U.S. leading in innovation. I'm grateful we have him. Brian Armstrong. Yeah, here's here is, here's my takeaway. Yeah. If you believe that AI and crypto are industries that we should support in the United States, then you want to have a czar focused on those things
Starting point is 00:23:26 that generally feels positively about those things. and wants to create the best possible environment for those industries to thrive in the U.S. I think that there's actually a debate on both fronts, right? Like there's people on the left that think AI and crypto are just default bad, and they want less of them. And there's people on the right that believe that too. But I think that ultimately there's arguments for why the U.S. should lead in stablecoins,
Starting point is 00:23:57 which, you know, is part of the... part of why the GENIUS Act is important, and a lot of, you know, the AI action plan, there's going to be debates on individual points in that, but in general, I think, you know, creating an environment in the U.S. where we can continue to lead in AI is important. So I think there wasn't, I didn't see any sort of like smoking, smoking gun in any of the stuff. There were some allegations around the All-In... I don't think they smoke very much at all. I think it's mostly tequila drinking.
Starting point is 00:24:31 That's true. All in tequila. Although J-Cal does tote a gun regularly. Oh, yeah. So maybe that's the smoking gun. He's a Texan. Yeah, no, I didn't see anything very specific. I mean, it's all in.
Starting point is 00:24:48 Like, they are super connected. If you partner with them in some ways, like you would expect to get more of a read on where they're spending time in D.C., what they're seeing. It seems like there are pretty clear lines on what you can share, like what turns you into a lobbying firm and what doesn't. I think they've stayed out of becoming a lobbying firm, and so they have clear rules on that. Yeah, I think Boz distilled it pretty well.
Starting point is 00:25:16 Before we read his post, let me tell you about Attio, the AI Native CRM. Attio builds, scales, and grows your company to the next level. Boz said, I don't know David Sacks, but I want more expertise in government. Experts tend to have made money in their area of expertise, have friends in their area of expertise. If people can't have history or friends in a field before leading it, then our leaders won't know anything. And I thought this was a good distillation of like the core debate about like, should you
Starting point is 00:25:42 have someone who has never participated in an industry overseeing it? Or should you, like someone who's purely academic, purely outside of it? And I believe there's some readers and probably people at the New York Times that would like somebody that hasn't participated in either industry to be running in a role like that and just blanket against both industries and sort of like hold them back. So the reaction is interesting in the comments. I mean, first, the top comment is somebody like beefing with Boz over how he ran the Quest store. It's like clearly a VR aficionado who like has an ax to grind over niche VR policies. But the second post is what I want to get to because it actually addresses the
Starting point is 00:26:25 core claim here. And Alex says, the construct you're thinking of is called a council. It's been used for a long time to allow the elected with limited knowledge on a domain to get a consensus of options from a range of experts. This minimizes conflicts and prevents kleptocracy. But like, isn't that what a czar is? I thought, I thought, I thought, I thought Sacks was a council. Like, he's not, he's not an elected official. Like, the elected official is Donald Trump, the president. And like, there's a variety of folks there. And then, and then, uh, Sacks is like appointed to this czar role that is just to give his, like, his, like, he, he doesn't have the right. He doesn't have the ability to just, like, create legislation out of thin air, right? He, he is, he is, he is very much account. I was trying to look up the history of czars, right? It is weird. Is it like, have we always had czars? I know there was a whole thing about the border czar.
Starting point is 00:27:16 The first major czar was Bernard Baruch, appointed by, uh, President Woodrow Wilson to head the War Industries Board in 1918. The press dubbed him the industry czar because he had sweeping powers to coordinate wartime production. During World War II, President Franklin D. Roosevelt appointed several czars to manage the massive wartime economy, including a shipping czar and a synthetic rubber czar. These roles were essential. Synthetic rubber czar? People are stoked for that. People don't talk about the need for our ongoing need for synthetic rubber czar. No.
Starting point is 00:27:50 These roles were essential because existing government bureaucracies were too slow to handle the urgent demands of total war. During the Nixon era, the modern concept of the czar, a policy specialist with a specific portfolio, solidified under Nixon. During the 1973 oil crisis, Nixon appointed William Simon as the energy czar to manage fuel shortages. He also had a drug czar during the sort of like beginnings of the war on drugs. So anyways, again, I think unless you're just blanket against these industries, it's hard to argue that you want somebody that doesn't have any expertise in said industries. Yeah, some of these claims, here's one, it's sort of hard to track.
Starting point is 00:28:47 Like, so he says free from those, this is from the New York Times, from the actual article screenshot. Free of those restrictions, Mr. Sacks flew to the Middle East in May and struck a deal to send 500,000 American AI chips, mostly from NVIDIA, to the UAE, the United Arab Emirates. The large number alarmed some White House officials who fear that China, an ally of the Emirates, would gain access to the technology, these people said. But the deal was a win for Nvidia. Analysts estimated that it could make as much as $200 billion from the chip sales. And so, like, I understand, like, we've covered the debate around export controls and should NVIDIA, where should NVIDIA be able to sell things.
Starting point is 00:29:30 But it's never been an open and shut case in my mind. It's never been like, oh, it's so obvious that the UAE is completely off the table. Yeah. I don't know. Yeah, I mean, it was also just like painting, painting the friendship between Sacks and Jensen as, like, something that felt wrong was a little bit, considering it's the most valuable company in the world. Yeah. One of the most important AI companies, potentially the most important AI company,
Starting point is 00:29:58 if you just go by weight in the, in various indexes. Yeah, I don't know. I mean, it's like, it's clear that he doesn't have Nvidia bags directly. Like, that's completely debunked. So you have to do these like 25 different steps to get to some sort of conflict. It's a lot of like, you know. I read this and I think like this is. if you're the average New York Times subscriber
Starting point is 00:30:23 this is probably, they were probably like very excited by this story, right? Yeah. I mean, a lot of, I think a lot of people are definitely like just riled up by the All-In podcast. Charlie in the chat says, All-In pod about to be an all-timer after this article. Do you think it's possible that David and Jason coordinated to get this hit piece done
Starting point is 00:30:47 to grow All-In even further? They said, we're at such an insane scale. Oh, yeah, that was a crazy thing. Yeah, Jason, Jason said a bunch of, I mean, Jason made a lot of good arguments about this, but one thing was he was like, we would be smaller if we, what was it? He was like, he was like, we would be bigger if we didn't talk about politics. And that seems crazy to me. I feel like politics is like the ultimate TAM expander in the history of podcasting and media broadly. Yeah, the audience for political content is like 10 times larger. I would, I would think so. I do believe, that Jason loves talking about tech, and like, I think he's, I think he's, he's, he's an OG. He's, he's said that multiple times. But, but I would be shocked if, if, if politics was not a, was not a TAM expander for, for podcasts broadly. And then the other thing is that he said that they lost money on the all-in events. I don't know how that's possible. Like, those events, obviously, they're, like, big budgets, but, you know, I would, I would imagine that, like, the sponsors can, and the ticket sales, they're not cheap tickets, right? I would, I would imagine that
Starting point is 00:31:49 they'd be making money off that. I certainly hope so. I mean, they've been running this thing for five years. It's incredibly valuable in the ecosystem. They should be able to capture some value there. Maybe they set up their own data center to sort of manage. They're just a new one. Yeah, we decided to bring podcast production on-prem, and we ordered a lot of CapEx. We ordered a hundred thousand Blackwells. A lot of CapEx. Blue Owl, Blue Owl really has us by the balls. It's rough. Uh, Martin Shkreli here says the Sacks piece illustrates the exact problem with the New York Times. Voters specifically want this type of person, not a bureaucrat who has never worked a real job, Lina Khan, K Street.
Starting point is 00:32:22 So the issue and the reason I think this article was written is that New York Times subscribers specifically want this type of article. Yeah. Yeah, Whiskey Titans going back and forth here. Did you miss the entire part of the article? This isn't a, quote, we can't have businessmen in government. This is a we can't have the government officials who host government summits and sell access to the president for $1 million via their podcast business.
Starting point is 00:32:52 And Martin Shkreli says, I doubt it was Sacks who wanted to sell $1 million passes. And Whiskey Titan says, I agree with you. I'm sure it wasn't, but letting Jason run rampant until Susie Wiles steps in isn't a great look. I happen to think Sacks is doing fine at this particular role, but I also understand the general public feelings. Like there's a lot of graft.
Starting point is 00:33:11 The New York Times isn't the right conduit for that argument, though. And they're going back and forth. The timeline truly is in turmoil over this. Dan Primack had a good take. He had a whole breakdown of this, which I think was interesting. He said, let's kick this off. But first, let me tell you about fal, build and deploy AI video and image models, trusted by millions to power generative media at scale. So Dan Primack said, lots of people are sending me the New York Times story on David Sacks, outside of the all-in sponsorship proposal, which feels oblivious at best, corrupt at worst,
Starting point is 00:33:51 I'm not seeing much in there that's new, at least to those who've been following. Dan Primack says, as an aside, it's true that Sacks slash Craft still have a ton of AI investments. Thing is, all tech investments at this point are AI investments. It's kind of like internet investments at this point. If you invest in tech startups, you de facto invest in AI startups. And Jason says, we lost money on the event. The NYT knew this and deliberately published false information. And Dan Primack says, they included the statement that you lost money on it. What did they print that was false? They somehow make it out that we are somehow making money in
Starting point is 00:34:30 this or some gain. And Dan Primack says, just reread, just re-read. Doesn't claim that All-In made money, said you tried to generate revenue via one million dollar sponsorships, including for a VIP reception that didn't end up happening, but adds that it doesn't know what sponsors ultimately paid, and included the statement that you lost money. Am I missing something? And Jason says, Mr. Sacks has raised the profile of his weekly podcast All-In through his government role and expanded its business. Confused. I thought you were talking specifically about the White House AI summit pieces, Dan Primack, talking in general, don't know how you would, would not quantify.
Starting point is 00:35:18 Sacks' role in the White House definitely raised All-In's profile, at least among normies. As for role in biz expansion, guess you could stake your claim there. I completely disagree with this. I feel like the All-In podcast put the White House on the map. I feel like a lot of people were like, they found out about the White House and about the U.S. government. Which house? Exactly.
Starting point is 00:35:39 Exactly. Because of the all-in podcast. They were listening to on podcasts, and they were like, wait, wait, wait, you're telling me. There's people in Washington, D.C. And they run this whole chronic country. They're in charge of the rules. Yeah, they create sort of laws and framework over how our country should operate, which industries we want to, you know, support and grow. You're telling me, you're telling me that there's a group of people.
Starting point is 00:36:02 And one of my besties is up there running in it. This is amazing. I got to learn more about this. I've got to figure out what a bill is. I've got to feel how a bill turns into a law. Chat, what is a bill? Jason says, if anything going deep into politics has been a net negative for all-in, at least in my opinion, we would be growing faster and wouldn't have lost some percentage
Starting point is 00:36:23 of our left-leaning audience if we'd stuck to tech, markets, science, VC, etc. That's an interesting take. I still think that politics made All-In so important. It made it so big. Well, yeah, and it made the content polarizing, which, but I think that polarization
Starting point is 00:36:46 but good from a pure, just like, reach. I mean, yeah, I was looking at the, I think the, I think the, the ratings are like the amount of viewers for, for, like, CNBC is bigger than Bloomberg by, like, a pretty significant margin. Because Bloomberg's, like, extra wonky.
Starting point is 00:37:06 And CNBC is a little, I mean, it's literally called consumer business news. Like that's what the C stands for, I believe. And then you have Fox, which is even more. Like Fox News is political and it's much bigger ratings than CNBC or Bloomberg. And then ESPN is like by far the biggest. Because it's like sports. Everyone loves sports. And so maybe that's the final, that's the final form. They should go full poker and then full sports. It just become sports center competitor. I could see it. It might be the way. A.B. says, I only learned about Trump because
Starting point is 00:37:38 Chamath endorsed him. Yes, exactly. I had never heard this guy. Who? The vodka and social media entrepreneur? He's running for president. Okay, so Dan Primack is weighing in again, concluding it. He says, the New York Times story was mostly a nothing burger,
Starting point is 00:37:59 at least for those familiar with the situation. As for hoax, the story itself, as published, isn't being disputed. Obviously, the New York Times info questions that Sacks' lawyers answered and disproved weren't included. That's how journalism works. The real complaint seems to be about the headline, quote, Silicon Valley's man in the White House is benefiting himself and his friends. I get the complaint, but that's not, but it's really a matter of interpretation, not true, false slash hoax. Imagine if you had a friend and they went to
Starting point is 00:38:30 the White House and they didn't try and benefit you. You'd feel, you'd feel like you'd turn. You might not be friends with anyone. Sacks and Trump and the Trump White House are pursuing "let them cook" AI policy. I like that, that they believe will help us win the AI race and that the rewards outweigh the risks, others disagree.
Starting point is 00:38:47 Yeah, this is so true. It's like, there is no like, oh, like we now know the correct way to win the AI war. Like, we know that there's a correct way. It's very obvious. It's like, no, everyone's debating this constantly, even inside of tech, and Sacks has one view that I think has actually played out,
Starting point is 00:39:04 pretty well, considering that he's been anti-doomer, anti-fast take-off, more industrial capacity, more opportunity to grow GDP. There are some elements of his takes that are a little bit more like TBDs, like what actually happens to jobs over the long term, how does it manifest in GDP growth over the long term. But so far, I think he's been correct. And I think that's what Dan Primack's saying here. He says, only time will tell if Sacks is correct. What we know for sure, though, is that his deregulatory policies should help VC funds, his, those run by his friends, those run by strangers, et cetera.
Starting point is 00:39:41 Thus, the headline is defensible, albeit pushing an agenda. And that's the timeline and turmoil, folks. Let me tell you about graphite.dev. Code review for the age of AI. Graphite helps teams on GitHub ship higher quality software faster. I can't read this. AI amblicus, this name. This Ilya interview will be compulsory viewing
Starting point is 00:40:02 for any future student trying to understand what misallocation of capital looks like in real life. See, I completely disagree with this take. People were going back and forth on this. We talked about this a little bit over the holidays, but fleeting in bits says, can you say more
Starting point is 00:40:18 just that he doesn't have any business direction or something else? And the original poster says, these are my intuitions, but for what it's worth on the micro level, he just seems adrift in a sea of possibility and not my kind of person. See, I originally read this as the misallocation of capital that I've seen is like the 10th, 11th, 12th foundation model lab that has like $100 million to a billion
Starting point is 00:40:44 that is just like kind of iterating in what Ilya already worked on and already developed, right? And doesn't necessarily, like if they just do, if they just create a model that's like not state of the art, like I don't know that there's going to be incredible value in that. Meanwhile, I'm like, okay, you take the guy that, that, whose work led to ChatGPT and you give him a few billion dollars and let him, you know, continue to iterate and, and, uh, he's not just like, you know, firing a single, you know, multi-billion dollar cannon and hoping he hits a target. It's like this incremental
Starting point is 00:41:20 research that, uh, I think is still one of the best shots at like developing the next paradigm, whatever comes after LLMs. So, um, I, I read this. I, I think that your reading of this was right, but I initially read it the other way. And I was like, yeah, I do think this is, you know, somewhat, somewhat bearish on the, on the incremental large language model lab. Yeah, I don't know. I mean, I can kind of steel man both. Like, we're going to have Julian on the show in, when is he coming on at, or we're having
Starting point is 00:41:53 Vincent from Prime Intellect come on the show. And I was talking to him, we'll get more information from him. He's going to be on at 140. Vincent was explaining that more and more, more and more companies and different business processes, they do need specific training runs. They do need the skill sets of a foundation model lab, but there's a lot of business to be done that's not purely AGI seeking, not purely paradigm shifting. So I do think that there's some value there, if the business can be run well, which is a big if,
Starting point is 00:42:27 But there is a path where a thinking machines or one of these companies is going to go and do specific reinforcement learning, specific model development for a specific company and task, that can work out. It's a very different business than searching for the next paradigm doing science. And maybe you shouldn't even call it a lab because you're not really even trying to do foundational science necessarily. You're more productizing. Company. Yeah, it's a business. Which is great. We love that.
Starting point is 00:42:53 But what's interesting about Ilya is that when we talked about this, like, it is a venture-style bet, like, let the scientist go experiment. Maybe it will work out. It's extremely high risk, probably a zero, but if it works, it's huge, right? So the expected value is still high. What's crazy is that we're doing a venture-style bet at growth scale, and it's just massive amount of capital for something that I think the consensus here is that it's either he solves it, and it's incredibly valuable and leapfrogs everything and it's just amazing, or it's just, you do get lost in the sea of research and ideas and you never really produce anything. So, uh, I love, I love the high-risk, uh, bets. I, I just understand why people are saying, like, well, that scale, that's a lot of money, that's a lot of money. Uh, but that, but that has been happening internally at Google for a long time. They probably burned a lot of money on research projects. Hasn't been that big of a deal because they had the engine for it. And if the
Starting point is 00:43:39 If the investors are significantly diversified, they should be fine. Yeah. Anyway, what else is in the timeline today? Fin.ai, the AI that handles your customer support, the number one AI agent for customer service. We did get a good meme.
Starting point is 00:44:09 We got a couple good memes. Cody says, when my wife asks what we should eat for dinner but says no to my first two suggestions. We are back to the age of research. I like it. And then when she asks what I want for dinner from Bayeslord, the answer to that question will reveal itself. I think there will be lots of possible answers. Very true.
Starting point is 00:44:31 It's a great new meme template. I like it. When my husband asks how many Amazon packages are still on the way, the answer to that question will reveal itself. I think there will be lots of possible answers. But I think that's actually true. Like if he creates some new AI, like there's a bunch of different ways to monetize it. We know this is a fact.
Starting point is 00:44:51 But of course, Ilya is now joining the ranks of Yann LeCun and Rich Sutton and Andrej Karpathy, of sort of industry legends that are more or less saying that scaling is over and LLMs are dead. You know, on the other side, Sholto is saying scaling maybe not over. So we'll see. This is, uh, this post is, yeah, this post is great. Scaling is over, and LLMs are a dead end. Aw, you're sweet.
Starting point is 00:45:22 Scaling is over, and LLMs are a dead end. Hello, human resources. I love his meme template, because it's like, yeah, Yann LeCun has been saying the same thing. He says, for the record, my current BMI is 24. This guy rocks. It's very funny.
Starting point is 00:45:39 I thought he would drop the Meta tag on X by now, but I guess he's still. Oh, he's still repping them? Didn't he leave? Reported to leave. He's like, on his way out, more or less. Another one billion to SSI. There's a bunch of this in the SSI bucket.
Starting point is 00:46:01 Let me tell you about Profound. Get your brand mentioned in ChatGPT, reach millions of consumers who use AI to discover new products and brands. Of course, we are having Dylan Patel on the show in 12 minutes, and we should do a little bit of a run-through of the drama
Starting point is 00:46:24 with semi-analysis latest post... How dare you... How dare you... They took a swing at the king, which was the name of their article. They said, TPUV-7, Google takes a swing at the king. The king is, of course, NVIDIA. And they
Starting point is 00:46:40 are asking, is this potentially the end of the CUDA moat? Anthropic's. They're talking about Anthropic's one-gigawatt TPU purchase. The more TPUs Meta, SSI, xAI, OpenAI, Anthropic buy, the more GPU capex you save, next-generation TPUv8. And they're going into what the battle between the TPU and the next-generation GPU out of NVIDIA will look like. And this upsets some people.
Starting point is 00:47:10 There's a lot of folks who are long NVIDIA. Either they have invested in NVIDIA, they made a lot of money in NVIDIA, or their whole business is tied to NVIDIA or AMD, even. Or they bought the local top a month ago. Potentially. There's a whole bunch of reasons. You could also just disagree with this, and you could just think that semi-analysis,
Starting point is 00:47:34 their takeaways are wrong. But I think it's a thought-provoking article. I think there's a lot of data in here. They're extremely thorough. And I think that they do leave you with a lot of new information that you can, you know, do with what you want. And I think in general, the response to this article was very positive, but there were some folks who were very upset by it and went all over the place.
Starting point is 00:47:56 And on accounts that put a noun and then Capital as their name. Yes. And suddenly they're an expert on everything. Yes, yes, yes. Yeah, it was a little odd seeing the credentialism come out from the anons. Because, like, I don't think we should, should get in the two can play that game camp. It's a little bit rough. But there's a little bit of interesting stuff in here. I want to read through some of this. Let's kick it off with the
Starting point is 00:48:26 opening of the semi-analysis article. The two best models in the world, Anthropic's Claude 4.5 Opus and Google's Gemini 3, have the majority of their training and inference infrastructure on Google TPUs and Amazon's Trainium. Now Google is selling TPUs physically to multiple firms. Is this the end of NVIDIA dominance? The dawn of the AI era is here, and it's crucial to understand that the cost structure of AI-driven software deviates considerably from traditional software.
Starting point is 00:49:02 The hardware infrastructure on which AI software runs has a notably larger impact on CAPEX and OPEX, and subsequently the gross margins, in contrast to earlier generations of software where developer costs were relatively larger. Consequently, it is even more crucial to devote considerable attention to optimizing your AI infrastructure to be able to deploy software. Firms that have an advantage in infrastructure will also have an advantage in the ability to deploy and scale applications with AI. And we've long believed that the TPU is among the world's best systems for AI training
Starting point is 00:49:38 and inference, neck and neck with King of the Jungle NVIDIA. 2.5 years ago, we wrote about TPU supremacy, and this thesis has proven to be very correct. The TPU's results speak for themselves. Gemini 3 is one of the best models in the world, and there's a very funny bit in here. I need to find it. Oh, yeah, here. So this is a very spicy line in here.
Starting point is 00:50:03 He says, OpenAI hasn't even deployed TPUs yet, and they've already saved 30% on their entire lab-wide NVIDIA fleet. This demonstrates how the perf-per-TCO advantage of TPUs is so strong that you already get the gains from adopting TPUs even before turning one on. And so basically what he's explaining is that because of the competitive dynamic between NVIDIA and Google with the TPU now, you can use the TPU as a stalking horse and say, hey, if you don't cut your prices, NVIDIA, we know that you have really high margins. Or not even cut prices, but encourage an investment. Exactly.
Starting point is 00:50:42 And so that's what they're explaining here. And NVIDIA would rather invest back into your business instead of cutting prices. Yes. And so it says, we think the more realistic explanation is that NVIDIA aims to protect its dominant position at the foundation labs by offering equity investment rather than cutting prices, which would lower gross margins and cause widespread investor panic. Below, we outline the OpenAI and Anthropic arrangements to show how frontier labs can lower GPU total cost of ownership by buying or threatening to buy TPUs.
Starting point is 00:51:16 And so for OpenAI and NVIDIA, you know, it was $22 billion per gigawatt for the NVIDIA piece, plus the rest of the system, so it's a $34 billion per gigawatt expense, but NVIDIA is doing effectively an equity rebate of $10 billion per gigawatt in investment. And so how that works out is a 29% partner discount. Anthropic has similar math, but a little bit higher at a 44% partner discount, because Microsoft is paying for a piece of it. And so it's an interesting thesis, and it's unclear exactly, like, well, if the claim is that investors will panic if NVIDIA actually just lowered gross margins, well, if you say the quiet part out loud like this and you do the math to show that there is basically a discount, that margins might be coming down because of competitive dynamics.
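To make the partner-discount arithmetic described above concrete, here is a minimal sketch in Python. The $34 billion and $10 billion per-gigawatt figures are the ones quoted in the conversation; the Anthropic-style split is a purely hypothetical illustration, not a reported number.

```python
# Rough sketch of the "partner discount" arithmetic discussed above.
# Figures are the per-gigawatt numbers quoted in the conversation; how the
# Anthropic-style discount breaks down is an assumption for illustration only.

def partner_discount(total_capex_per_gw: float, equity_rebate_per_gw: float) -> float:
    """Effective discount if an equity investment offsets part of the capex."""
    return equity_rebate_per_gw / total_capex_per_gw

# OpenAI example: ~$34B/GW all-in system cost, ~$10B/GW NVIDIA investment.
openai_discount = partner_discount(34e9, 10e9)
print(f"OpenAI effective discount: {openai_discount:.0%}")   # ~29%

# Anthropic is described as ~44% once Microsoft's contribution is counted;
# the rebate figure below is purely hypothetical to show how that could arise.
anthropic_discount = partner_discount(34e9, 15e9)
print(f"Hypothetical Anthropic-style discount: {anthropic_discount:.0%}")  # ~44%
```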
Starting point is 00:52:16 Does that wind up resulting in investor panic? I mean, certainly it didn't today. Isn't NVIDIA up today? Yeah. NVIDIA is up 1% adding a casual, you know, what, $10 trillion or $100 billion or something? $10 billion, quadrillion. Yeah, gigagillion dollars. Yeah, I just, I mean, I, and again, we said this earlier on the show,
Starting point is 00:52:40 but Broadcom is down almost 4% today, which I would have expected to move in the other direction, given that to actually buy TPUs physically, you need to go through Broadcom. Yeah. Yeah, so a lot of people are going back and forth on, you know, can SemiAnalysis be trusted because they're writing about,
Starting point is 00:53:05 you know, NVIDIA and Dylan. I think some people didn't understand that he was joking. Zephyr here has a post.
Starting point is 00:53:13 Dylan is being tongue-in-cheek, but he's not wrong. NVIDIA was extremely dominant for the last three years, as we saw in the stock. It's up 10x over the last three years.
Starting point is 00:53:21 New competitors will cause a reduction in market share and margin compression, but TAM is big, so revenue profits won't go down. 75% of GM is just unsustainable. Hyperscalers will also use
Starting point is 00:53:34 the cheap TPU threat to extract better deals from Jensen, priority access for Rubin and Feynman or discounts on GPUs. Jensen called Altman and initiated the $10 billion deal after he saw the Information article about OpenAI testing TPUs.
Starting point is 00:53:50 And so this is in reaction to that point about OpenAI hasn't even deployed TPUs yet and they've already saved 30%. There's a decent post here from just another pod guy. They say, Dylan's speed-running through all the learnings of sell-side research: industry capture, pissing off IR execs, gatekeeping info based on client tier,
Starting point is 00:54:10 difficulty scaling beyond single-star analyst, distorted MSM representation of your notes, eventually spending too much time marketing versus researching, amazing biz content though. Obviously Dylan would push back on a lot of this stuff. If you actually read through
Starting point is 00:54:27 the entire article, there's nothing in this article that should actually be that surprising, because so much of the article is just referencing old SemiAnalysis research, some of which they did, you know, before the paywall, some of which they did under the paywall. But it felt like kind of a culmination of everything that they've been saying for a really long time. And I think that
Starting point is 00:54:51 part of the surprise here is just how much faster this conversation has come to a head than people may have expected. At least, like, surface level on the timeline, I think people felt like the TPU threat was maybe a 2026, 2027 conversation, versus being a part of these buying discussions and negotiations right now. Yeah. Yeah. The other buried lede in the article was, of course, about pre-training. So there's a snippet in here. OpenAI's leading researchers have not completed a successful full-scale pre-training run that shipped broadly in a new
Starting point is 00:55:35 frontier model since GPT-4o in May of 2024. And, you know, it's so interesting that, like, if this was wrong, you would imagine that there would be a whole bunch of reaction from OpenAI people or, like, proxies or surrogates, right? People quote-tweeting it and being like, that's just not true. Wow, somebody else is cooked. But the fact that I haven't seen anyone respond to this and say, like, oh, this is wrong, like, we actually did.
Starting point is 00:56:02 Not that, like, that's the north star for what the business is. Like, the business's job is to create profits, right? It's not to, you know, complete successful, full-scale pre-training runs. That's not the goal. That's just something that they might do in service of making a better model, making a better product. But ultimately, it's whatever the customers want. And if the customers are happy with 4o-level base pre-training
Starting point is 00:56:27 and a bunch of reasoning on top, that's fine. So what else is in the back-and-forth? People are also, I mean, it does make me happy that we didn't go deeper into ranking people, because it does feel like when you create a list of tiers and rank a bunch of people, you're just creating a big bucket of enemies down at the bottom, of people who want you dead because you ranked them low. But I'm sure we'll get into the discussion of ClusterMax and how people are interpreting ClusterMax. Because there's a whole bunch of ways to read it.
Starting point is 00:57:04 Like one way to read it is like, which stock should you buy, right? But like that's not necessarily the read. The other read is like which product is the best to work with as a customer. But it's like, what customer are you? There are some that are in the lower tiers that are fantastic for very specific use cases.
Starting point is 00:57:25 Like, this is the nature of every business. Like, one of the neoclouds that was particularly upset with Dylan is in a very niche market. But if you're in that niche market, it's probably a great product. It's probably great for you if you satisfy this specific list of criteria
Starting point is 00:57:42 and you don't need these features. You're probably fine then. But it's a lot of fun. People are going back and forth. They're also debating whether or not Dylan is independent given that he lives with Sholto from Anthropic. We got to ask him why he has roommates. I'm not even concerned about a conflict.
Starting point is 00:58:03 It's roommate gate. Yeah, it's roommate gate. But what about this other one? This is a tinfoil hat post from JuCon. My theory is that Meta deliberately leaked the story to the Information about Google's TPUs. For Meta, it's a classic risk-free power play. The moment Jensen Huang catches wind of Meta using Google silicon, NVIDIA is likely to rush in with an investment. They might even be negotiating as
Starting point is 00:58:29 we speak. This allows Meta to secure capital and shift from burning their own cash to potentially getting discounts or effectively buying NVIDIA chips with NVIDIA's own money. Plus, if they actually do secure Google TPUs, they solve their compute shortage. It covers all bases. I wonder when other hyperscalers will catch on to this magic wand. All you have to do is hint at using TPUs. But the issue is how many red flags would be waving if Jensen was like, yeah, we're investing $20 billion in Meta. We're very excited about Meta and owning a piece of...
Starting point is 00:59:07 Yeah, it seems very, very odd. So he's in a position where I don't know what kind of leverage Jensen has in those conversations with Meta, because he doesn't want to discount. And it's not like an OpenAI, where he can just announce an investment, or an Anthropic, et cetera. So how any type of rebate actually happens is a question. Yeah. Well, before we bring in our next guest, let me tell you about turbopuffer: serverless vector and full-text search, built from first principles on object storage. Fast, 10x cheaper, and extremely scalable.
Starting point is 00:59:44 Let's read through some more TPU stuff to set the table. So, Clive Chan says, I keep seeing stuff about TPU. Has anything materially new happened? There's no evidence Google has ever trained Gemini on non-TPU hardware, going back to pre-GPT models like BERT. TPUs predate NVIDIA's own tensor cores. Anthropic and Character and SSI and Midjourney have long used TPUs. I'd be surprised if Meta weren't looking at them.
Starting point is 01:00:09 NVIDIA's moat has never been deep for the big labs. See OpenAI deciding it could do better than CUDA and investing in Triton instead, regularly edging out cuDNN on benchmarks. There's nothing magical or structural about any of this, just good engineers doing good work. TPUs are not that much more efficient than GPUs; small performance-per-watt differences are dwarfed by whether Meta has the right kernels and systems-engineering talent to pull it off. Both NVIDIA's and Google's moats are small, and we are still at the point where individual good engineers can flip the entire balance. Why was this not priced in?
Starting point is 01:00:44 This is all super old public info. I have a feeling that this Clive Chan, who I guess was at Tesla and then OpenAI, is a little bit of a first-timer in the public markets, first time realizing that the people who trade this stuff are not necessarily on the super inside of the labs, actually understanding the decisions that are being made inside the labs.
Starting point is 01:01:11 It's a completely separate ecosystem. And that's why organizations like SemiAnalysis exist. And I believe we have Dylan Patel from SemiAnalysis in the Restream waiting room. Let's bring him in. Dylan, how are you doing? I'm doing fantastic. How about yourself?
Starting point is 01:01:27 You know, I saw the meme image that you guys put out there for me, so I had to wear a tank top to show you. Let's go. Let's go. Dude, we need a bigger screen for that bicep. We'll work on it. Where in the world are you? I'm in Florida. I was spending Thanksgiving with my family here. I'm trying to chill out a little bit. It's nice to have the family pamper me a little bit because I broke my foot a couple weeks ago. I'm sorry to hear that. How'd you break your foot? Tripped over a TPU. Family reunion playing football in Texas. We're as American as we can get.
Starting point is 01:02:03 There you go. Well, we were just running through a little bit of the TPU article. Can you actually set the table for me on, like, what do you think is new about it versus what SemiAnalysis has already been saying? And is this more just, like, tying everything in a bow? Yeah, half of the article is just referencing research. We've been saying this for two years. We've been saying this for one year.
Starting point is 01:02:28 And even referencing Google's own content about the TPU, dating back even further. Yeah, I would say the majority of this piece, if you're a client, has already been pretty much all published, but it hasn't been tied together. It hasn't had a narrative around it, right? Because when we think about what we put out on the paid side versus what we put out on the newsletter, right, our clients sort of get, you know, what changed, what happened, here's the numbers, that's about it, right? We don't explain the technology that much because our clients are sophisticated, right?
Starting point is 01:03:02 They're either in the industry or they're finance pros who don't give a shit about the technical stuff. And so it's either of those two, right? And so we're just explaining, here's what's happening, here's the change, here's the numbers, right? So for months, we've been saying Google's selling TPUs. For months, we've been saying, hey, here's TPUv7 versus Blackwell. We've even put out updates on, here's what we think TPUv8 is versus what we think Rubin is. And so generally it was making it into a narrative and explaining the technology and the corporate, I would say, politics or dynamics around it, right? So, you know, I think there have been bits and pieces put out by other folks, right? I think the Information has done great reporting on some
Starting point is 01:03:42 of the stuff after we did, but in the public space, you know, for example, other people have put out bits and pieces surrounding this, but they haven't put out the full picture. So as far as what's new, it depends on where you sit in the stack, but, you know, Anthropic and Meta and folks like that have been talking to Google about buying TPUs for many months, right? Whereas people externally, you know, last week when Gemini 3 was launched, or two weeks ago, people were just learning that TPUs are training Google's models, right? So it's where are you in that information spectrum, right?
Starting point is 01:04:17 Yeah, totally. So on that information spectrum, the finance bros, they can probably just, if they read into this, oh, bullish Google or bearish NVIDIA or whatever, they can kind of trade in and out as they please. But on the more technical side, are people using SemiAnalysis research to understand, like, okay, I'm a neocloud, what do I want to rack for next year? Maybe I need to be putting in a TPU order. Is that how people interpret your research? Like, what happens on the technical side of the house? Yeah. So as far as some of the paid stuff we do, we have one model called the TCO model, right, which is calculating the TCO of all these
Starting point is 01:04:58 different hardwares' performance, building up the entire cluster cost, you know, breaking it out into a dozen-plus different things, whether it's storage or networking, and breaking down the cost of everything. So there we put out research on TPUs, because as soon as neoclouds started getting offered, hey, do you want to buy TPUs? Yeah. We're like, okay, we need our own ground-up model. So when you're negotiating a big contract, what you do is called a should-cost, right?
Starting point is 01:05:20 You go and calculate what it costs for the company versus what it costs for me to deploy. And then you think about, like, oh, what are the margins they have? What is ridiculous to offer them, what is not, right? Because everyone always wants to know, like, hey, what margin are they making off of me? Can I push that down a little bit? What is ridiculous to demand in a negotiation versus what's not? So we've already been working, you know,
Starting point is 01:05:41 through this TCO model. We've put out four different updates on the TCO of TPUs, v7 and v8, because there are neoclouds out there, as well as labs who are purchasing TPUs, that are using that to understand what's the cost. Now, you know, Anthropic, I will say, just already knew and figured it out because they've hired so many Google people,
Starting point is 01:05:59 but other labs are also looking at it, right? And so, you know, when you say, hey, on the cost side of things, on the technical side of things, right, there's a lot of network engineers out there now who have never deployed Google hardware that are now like, okay, I need to figure out how to do this tech, right?
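As a rough illustration of the should-cost exercise Dylan describes above, here is a minimal sketch; the component categories and every dollar figure are hypothetical placeholders, not SemiAnalysis's actual TCO model.

```python
# Hypothetical should-cost roll-up for an accelerator cluster.
# Categories mirror the kinds of line items mentioned (chips, racks, cooling,
# networking, storage); all figures are made-up placeholders for illustration.

component_costs = {
    "accelerators": 500_000_000,
    "racks_and_power": 60_000_000,
    "liquid_cooling": 40_000_000,
    "networking": 80_000_000,
    "storage": 30_000_000,
}

build_cost = sum(component_costs.values())   # what it "should" cost to build
quoted_price = 900_000_000                   # what the vendor is quoting

implied_margin = 1 - build_cost / quoted_price
print(f"Should-cost: ${build_cost:,}")
print(f"Implied vendor margin: {implied_margin:.0%}")
```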
Starting point is 01:06:13 Like, you know, so there's people who have DM'd me that are like, oh, you know, we've been thinking about deploying neoclouds, but your material on this is technically better and teaches me more than Google's own material, right? So it's like, this is helpful to people on multiple fronts. Yeah. What about the software side? Google's built their own internal stack to compete with CUDA. How much of that are they going to actually give to their customers who are buying TPUs? Because it feels like potentially you could overrotate on, oh, well, Gemini 3 is really good, but why is it good? Is it just because of the hardware? Or is it also Google's incredible prowess, multi-data-center training, all this fancy stuff that they have that they won't be giving you when they sell you the TPU?
Starting point is 01:06:58 Yeah, so that's the interesting thing: some of the software will remain closed source, but you can still use it. Okay. Right? And then some of the software, they are trying to open source aggressively. And then some of the software, they're just never going to get out there anywhere, right? So it sits in three kinds of buckets, right? The interesting, I guess, newer thing that we did in the piece was we looked across all these
Starting point is 01:07:18 different open-source AI software projects, right? Whether it's PyTorch, whether it's vLLM, whether, you know, all these different open-source libraries. And we calculated and counted up how many Google commits there were, right? And you can see there's a chart in the article where the number of commits that Google's doing on TPUs has exploded over the last handful of months, right? As they've decided to shift their strategy and sell TPUs externally, they also recognize the software has to be open for this, right? You know, only the gigabrains at, like, Anthropic can figure out how to do everything themselves, right? It's the people outside of the Anthropic types that need a bunch of open-source software built on top of it, right?
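For readers curious how the commit-counting chart Dylan mentions could be reproduced in spirit, here is a rough sketch; the repository paths, date cutoff, and the author-email heuristic are assumptions for illustration, not SemiAnalysis's actual methodology.

```python
# Rough sketch: tally commits whose author email is on a vendor domain across
# a set of locally cloned open-source repos. Paths and the domain heuristic
# are hypothetical; real analyses would need de-duplication and more care.
import subprocess
from collections import Counter

REPOS = ["./pytorch", "./vllm", "./jax"]   # hypothetical local clones
VENDOR_DOMAIN = "@google.com"

def commits_by_vendor(repo: str, since: str = "2024-01-01") -> int:
    """Count commits in `repo` since `since` authored from the vendor domain."""
    emails = subprocess.run(
        ["git", "-C", repo, "log", f"--since={since}", "--format=%ae"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return sum(1 for email in emails if email.endswith(VENDOR_DOMAIN))

counts = Counter({repo: commits_by_vendor(repo) for repo in REPOS})
for repo, n in counts.most_common():
    print(f"{repo}: {n} commits from {VENDOR_DOMAIN} authors")
```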
Starting point is 01:07:54 And what's interesting is when you look at, like, hey, NVIDIA, you know, the biggest argument that NVIDIA doesn't really make for GPUs, but they should, is that, you know, about 40% of the software that's open sourced is actually just from China, right? On CUDA, and that's the CUDA moat, right? It's like 40% of the software is just, like, open source stuff, whether it's people committing to vLLM or PyTorch or all these other libraries, right?
Starting point is 01:08:15 ByteDance open-sourcing stuff, DeepSeek open-sourcing stuff. And Google, you know, they don't have people open... you know, Anthropic's not going to open-source software. So Google needs to catch up not just by, hey, here's all the software we have internally, let's open source it. They also need the ecosystem to build a ton of software on top of TPUs. And so that's the real big challenge there. And there's an element of software there that NVIDIA is happy to open source, and customers of NVIDIA are happy to open source, that Google will never open source, because it's, you know, Google Cloud is selling the TPU.
Starting point is 01:08:46 Gemini is the one actually using it and developing a lot of the software. And these two groups are not always going to be aligned. Yeah. Isn't that, I mean, what are the other kind of just problems with Google becoming an actual, like, seller of TPUs? It feels like there's obviously an opportunity because NVIDIA has high margins. There's demand. It's a great chip. But culturally, like, Google tries a lot of different things. They have a lot of advantages. But occasionally, like, they fall flat on their face, with just, like, they can't even get an RSS reader out or something like that. So, like, are there other risks to the TPU not really finding its footing, for reasons that aren't just the laws of physics? Yeah, so the biggest challenge I see with them is everything is non-standard, right? Google for years, they developed liquid cooling first, right? Sure. For AI computing.
Starting point is 01:09:40 They deployed rack-scale architectures, right? Everyone's talking about GB200 rack-scale architecture. Google did it first with TPUs, right? But when they did all of this stuff, they didn't give a crap about, hey, you know, this has to go in 50, 100, a thousand different people's data centers, right? This has to go in my data centers that I design myself. So everything is super vertical. The entire liquid cooling supply chain is super vertical.
Starting point is 01:10:01 The racks aren't even the standard width, right? So when I look at, like, a data center, it's like the doors, the loading bays, because they're so much wider, the Google racks are like three times as wide, it might not even fit into the data center, like, physically, like, through the doors. So there's all sorts of random, like, I wouldn't say random, it's Google designing stuff from first principles. Totally. Yeah, yeah, yeah.
Starting point is 01:10:20 But if you're a neocloud and you're like, the hot thing's going to be TPUs next year or the year after, and I want to be able to sell into that market, it's not just flip a switch, drop in, replace with TPUs. You have to maybe build a whole new building. Like, it might be that significant. Right, or, like, knock down some walls. And then, like, you know, I need to go get liquid cooling, not from Dell and Supermicro and HPE who've serviced me already. I need to go get it from some random supplier who's only ever sold to Google. And usually they're sitting across the table from, like, some gigabrain engineer who has a team of 20 people working on liquid cooling, instead of, like, you know, my one guy who does liquid cooling procurement and negotiations and, like, also does procurement of, like, network stuff.
Starting point is 01:11:03 Yep, yeah. There was a tinfoil hat theory floating around that Meta leaked their TPU interest to try to gain some sort of leverage over maybe some negotiations with NVIDIA. I don't know if you see any possibility in that, but how do you think those conversations are going? Jensen doesn't want to discount and compress his margins, but at the same time, he can't do this kind of, like, equity rebate thing. If he took a big position in Meta, it'd be very suspicious. I totally get the OpenAI investment. That seems like it makes much, much more sense than saying, hey, we're going long Meta.
Starting point is 01:11:43 You know, he's a $4 trillion company. Yeah, at the end of the day, right, like, TPUs have, like, a set of maybe 10 customers, right? Because you have to be super sophisticated. And so what really is challenging here is, you know, Meta looks at the numbers, you know, it's like, okay, OpenAI is getting 30% off because NVIDIA is investing in them. Obviously, NVIDIA gets equity, but they're investing in them, and they get 30% off on these GPUs as a result, right? Meta, you can't do that. So Meta, I don't think that they're just negotiating, right? You know, are they just negotiating
Starting point is 01:12:16 with NVIDIA when they buy AMD? No, they have real engineers, right? They're developing all the software. They actually deployed Llama 405B exclusively on AMD for a number of months, right? For inference, right?
Starting point is 01:12:29 So when you look across, hey, is Meta just, like, playing around trying to negotiate? It's like, no, no, no, like, they're looking out for what is best, right? And Meta is power constrained, and TPUs are currently way more power efficient. Meta is compute constrained. And TPUs are potentially
Starting point is 01:12:43 higher performance per watt and higher performance per dollar, right? At least that's what we believe for TPUv7. So they'd be dumb not to look at it, right? And they have the time, they have the people, they have the team. Now, NVIDIA at the same time has to play the game of chicken, right? Yeah, sure, they could discount the pricing somewhat. And what's funny is NVIDIA is more vertically integrated than Google is when selling hardware, right? Google has to pay Broadcom, who pays TSMC, whereas NVIDIA gets to pay TSMC directly, right? There's this vertical integration challenge where NVIDIA could drop the price a little bit and they'll be fine, but they don't want to, right? You know, the whole point is you charge the highest price possible. And then the last thing is, they've got this view about antitrust, right? You don't want to cut deals for specific customers because that looks bad, right? Instead, you know, right now Dell pays the same price for a GPU as Gigabyte, as Meta. Now, on the networking hardware, there's different pricing, because there's a lot more competition and NVIDIA can cut a lot more there.
Starting point is 01:13:44 But on the GPUs themselves, NVIDIA's pricing is very fair, right? Fair in the sense that they're making a shitload of money off of everyone, you know. Yeah. Talk about kind of Jensen's leverage around Rubin allocations as some of these customers start to at least consider TPUs. Yeah, so as far as next year's TPU deployments, it's pretty set in stone for the vast majority of the volume, right? Anthropic's got a bunch, and then there's some sprinkled elsewhere. But as we go into 2028, where Google can actually ramp, you know, the flip side is Rubin is also ramping. And at least based on our research looking throughout the supply chain,
Starting point is 01:14:24 you know, over a year ago when OpenAI started their chip team, they poached like 15 Google people overnight, right? In one week, like, someone I knew was like, oh, yeah, I'm joining OpenAI. Then I text, like, another three people I know, and they're like, oh, yeah, I'm also joining OpenAI. I'm like, what the fuck? So a lot of Google's best TPU
Starting point is 01:14:40 engineers have left, right? They also have a ton left. And so what that's done is, you know, chip timelines are so long, that didn't affect TPUv7. That's affecting TPUv8. At the same time, Google's trying to diversify their supply chain, go from not just Broadcom, but also MediaTek. And so Google's got a real challenge on TPUv8, in that it's good, it's an improvement. But then when you go look at what NVIDIA is doing with Rubin, Rubin is so much better because NVIDIA is just pedal to the floor, paranoid as fuck. We have to be the best, and we have to be way, way, way better than everything, because how much better I am than everyone else is my margin, right? And so at least currently we think NVIDIA is going to be so much better
Starting point is 01:15:19 that they'll be fine and they'll be able to maintain margins, right? Now, things can happen. Rubin can be delayed, or TPUs can be delayed, and the position looks better or worse, right? There's a lot of unknowns to go through. But as far as what Jensen's leverage is: look, I'm going to make the best hardware, plus my software advantages, and I'll be able to continue to be dominant and dominate the market. There's curveballs that could come, which is like, oh, Google could open source enough software that actually their software ecosystem is not far behind NVIDIA's. Maybe they don't want to, right? Or, hey, they could execute on everything, NVIDIA has a three-to-six-month delay, now all of a sudden they're a lot more competitive, right? And so all these things are still open questions, but NVIDIA can play the allocation game as well, of course, right? Hey, I'm going to give all of the GPUs initially to companies that probably could buy TPUs, but that ends up being all the AI labs and hyperscalers, right? At least, you know, like Meta, right? And ByteDance, people that would actually be willing to buy TPUs.
Starting point is 01:16:16 And then you end up with this, like, weird situation where, okay, well, that's like 75% of the GPU market anyways, when I look at the AI labs through the neoclouds, right? When, you know, Nebius and Iris Energy and all these others, you know, CoreWeave and all these folks, are deploying for OpenAI anyways, right? You know, this sort of ends up being like, well, sure, I could stiff some people on the allocation, but at the end of the day, everyone who is a potential customer for TPUs is sophisticated enough to be where they were going to be at the beginning of the allocation
Starting point is 01:16:48 anyways, right? That makes sense. How are you framing ClusterMax these days? Is it for customers who want to buy services from neoclouds? Is that the primary goal of ClusterMax? Because I feel like some people look at it and they're like, this is a buy rating, this is a sell rating on the stock. So the funniest thing is, like, ClusterMax v1, the title of it was ClusterMax: How to Rent a GPU, right?
Starting point is 01:17:15 Because we discussed all of that. And then in ClusterMax v1, I believe we put Iris Energy at underperform, right? At the same time, the research side of the business, we explicitly were like, dude, they've got these data centers. It doesn't matter if they suck at running GPUs. They've got these data centers. They've got this power. If you just value them on a watts per, you know, how much money they could make, it's a long. Yeah. At the same time, Jordan, who's running ClusterMax, is like, Iris kind of sucks.
Starting point is 01:17:43 And it was other people in the technical team before him. You know, it's like Jeremy, who's running the data center side, and I think he's been on TBPN, is like, dude, Iris Energy is a long. Right. So it's kind of like, you know, what the technical side of the house does versus what the research side of the house does. Yes, they talk to each other. Jeremy did ask the team, like, hey, what do you think of Iris Energy? I think it's a long. And the team working on ClusterMax is like, I don't know, like, you know, it's a bad cloud. And it's like, that doesn't matter. So ClusterMax has nothing to do with the stock, right? Now, obviously, there's going to be some correlation with how good a stock is versus, you know, who's going to want to rent from them. Yeah. But at the end of the day, right, ClusterMax's sole purpose, and we explicitly say it in there, is it's for people renting anywhere from, like, hundreds of GPUs up to... you know, the AI lab scale, there's different considerations. But in that range, tens of thousands of GPUs all the way down to hundreds of GPUs, that's who we're targeting.
Starting point is 01:18:40 Plus, we're giving a bunch of feedback for people to make the cloud ecosystem better. The unsung hero between ClusterMax v1 and v2 is that we moved the bar up, right? You know, what it required to be in gold was much more. What it required to be in silver was much more, because everyone improved so much, right? And as we continue to, like, increase the requirements, make it harder and harder, keep moving the goalposts, people keep improving the ecosystem. And actually, you know, this is the funny thing. It's like, "ClusterMax is evil." But when we look at the quotes, and we've got hundreds of quotes on clustermax.ai, all these companies are like, dude, I love this. This one specific bug that this neocloud had, they fixed it as soon as you wrote about it. Right. Or, like, hey, help me understand the reliability. Help me understand this or that. People, like, love ClusterMax. And, you know, altruistically, like, I think we're generating billions of dollars in value just from, hey, all these clouds are more efficient and there's less failure, and it's easier to get your workload running on any random GPU cloud and the market is more efficient.
Starting point is 01:19:39 Now, I'm not making any money off of that. How am I making money off of ClusterMax? I'll be very clear: it's people who hire us to do diligence, right? So people who want to acquire a neocloud, people who want to sign a massive, massive deal that's not just, like, thousands of GPUs, but tens of thousands of GPUs. And then lastly, it's people who want to invest in a neocloud. Those are the three areas where we're making money off of, quote-unquote, ClusterMax, but not really. We're not selling ratings. Sure. You know, in fact, a customer will do a consulting project with us or want to buy some research from us.
Starting point is 01:20:12 And I'll explicitly put in our shared Slack, or I'll send an email to the CEO, like, dude, just so you know, the people working on this are not the people who are doing the ClusterMax rating, right? You know, the people who buy the research on, like, these data centers are there and this is the power ramp, or here's the accelerators, or here's the TCO, that's not the people doing ClusterMax, right? And I don't care about, you know, whether you buy it or not. You know, at the end of the day, Google and Amazon and Microsoft are way bigger customers than, you know, Fluidstack and, like, you know, those kinds of companies, right? And yet some of those are ranked in silver and some of those are ranked in platinum and gold. And that's because it's what matters technically, not, hey, you know... Obviously, when we talk about who buys our research, the biggest companies in the world are going to pay me more than the midsize companies in the world. Okay, question from the chat. And the price is discriminated based on that.
Starting point is 01:21:00 Would you change the rating of a neocloud if Sholto promised to do the dishes for two weeks straight? You know, there was an argument I saw, someone was like, who does the chores? And it's like, brother, we live together by choice. You know, we pay someone to come once a week. If you cook something, you do your own dishes. But, like, you know, frankly, we're working so much. And I think, like, you know, I think Dwarkesh has ordered pizza from the same spot three nights in a row before. Right?
Starting point is 01:21:28 Like, is being an adult man with roommates underrated? So I haven't lived with people in years. And then, this is crazy, when I moved to SF this year, you know, I'm like, oh, I should live with friends just so it's more fun. And the first house kind of fell apart. So I moved into this house with these guys, and we'd been talking about it for months. I love it, right? It's like, look, if you think about, oh, what if we all rented our own places that were good, and then we pooled that budget together?
Starting point is 01:22:01 We have a nice place. Yeah. Right? And then in that place, we have plenty of space for ourselves. Yeah. We pay for someone to come and clean once a week, right? So at the end of the day,
Starting point is 01:22:09 what is the negative here? It's like, well, we're living with our friends, but we have enough space. And the beauty is, if you do bunk beds, you have more room for activities.
Starting point is 01:22:21 Exactly. Anyway, no, no, sorry. Sorry, actual question from the chat. When is TPU going on InferenceMAX? We've got to know. So we're working on it, right? We're working with Google technical folks. You know, funnily enough, actually, we triggered a security warning for this Google engineer. Kimbo went to a JAX conference, right? JAX is, yeah, like PyTorch but for TPUs, the most simple way to put it; it's Google's own internal thing, right, that people do use externally.
Starting point is 01:22:50 He went to this JAX conference, a Google engineer presented something, and he's like, can I get the slides? They send it to him. And then Google security, like, locks him out of his computer because he sent us some technical information. And, like, for three days, the guy can't work, and he's freaking the fuck out. And I'm like, bro, do not fire this guy. He sent me stuff that you presented at a public conference.
Starting point is 01:23:11 He's like, oh, okay, yeah, yeah, I'll get that fixed. But anyways, like, we're working with them, we're trying to, you know, implement it. We have access to some TPUs. The software stack is different, right? Yeah. You know, it's close to a rewrite, re-implementing InferenceMAX, like, the code that actually runs. I won't say it's that much work,
Starting point is 01:23:28 like as much as completely redoing InferenceMAX, but there's a ton of work, right? So we're moving as fast as we can. Yeah, yeah. Internal target is this year. Okay. You know, don't hold us to it. Then the obvious question is, like,
Starting point is 01:23:40 I feel like InferenceMAX is my North Star for relative TCO in AMD-versus-NVIDIA land. There was a bar chart of TCO for TPU versus NVIDIA. It looked like TPU was doing really well on that chart. The bars were very low. Where did those numbers come from? Do you have confidence in those numbers?
Starting point is 01:24:00 Or do you think the numbers will change once you actually get TPU on InferenceMAX? Yeah. So InferenceMAX shows performance per TCO, right? You know, it's like, guess what? The TCO of, like, a Raspberry Pi is incredible. It's like five bucks, right? Sure.
Starting point is 01:24:17 You know, versus a GPU is $50,000. Performance divided by TCO is what matters. So that bar chart is saying, look, TPUs are cheaper, at least on quoted specs. Now let's make some assumptions around utilization. And in the article, we explicitly said, look, we don't know what the utilization is. It's going to change customer to customer.
Starting point is 01:24:34 Here's a range. Worst case, it's a little bit worse than GPUs. Best case, it's way better than GPUs, right? And so InferenceMAX will tell us what the actual performance is in inference, because we don't know yet, right? Currently, the open source software for TPUs is not good enough for us to just take the open source software and say that's the performance, right? Because that's obviously, like, not real, right?
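A minimal sketch of the performance-per-TCO framing with a utilization range, as Dylan describes it; every number here is a made-up placeholder rather than a real estimate.

```python
# Hypothetical performance-per-TCO comparison with a utilization range.
# All figures are placeholders to illustrate the metric, not real estimates.

def perf_per_tco(tokens_per_sec: float, utilization: float, tco_per_hour: float) -> float:
    """Useful throughput delivered per dollar of total cost of ownership."""
    return tokens_per_sec * utilization / tco_per_hour

gpu = perf_per_tco(tokens_per_sec=10_000, utilization=0.80, tco_per_hour=3.00)

# For the TPU, sweep a worst-case to best-case utilization range.
for util in (0.50, 0.70, 0.90):
    tpu = perf_per_tco(tokens_per_sec=9_000, utilization=util, tco_per_hour=2.00)
    print(f"utilization={util:.0%}: TPU/GPU perf-per-TCO ratio = {tpu / gpu:.2f}")
```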
Starting point is 01:24:56 Anyone who, like, is actually buying TPUs is going to spend engineering hours to work on it. And so we're trying to work with Google to get a real performance number that is achievable by people, you know, and will be upstreamed into the open source software, because this is an in-progress thing, right? No one cares what TPUv7 can do today. It's about what it does in six months. And so, you know, obviously, today, TPUs, if you're using vLLM, are worse on performance per TCO than GPUs, without a doubt,
Starting point is 01:25:32 but the target is moving very fast. And, you know, there's a ton of, like, low-hanging fruit for us to implement before we actually put a number out there, right? And so where does Google sit there? We'll see. I personally believe the TCO side of things, the total cost of ownership, is based on what we know of the supply chain, right? How much do the chips cost? How much do the racks cost? How much does the liquid cooling cost? How much does the memory cost? How much do the cables cost, et cetera, et cetera, right?
Starting point is 01:25:46 That's based on our estimates up and down. So I think the TCO side of things, we're pretty confident. It's the performance side of things where we don't know, right? There is a wide range, and that's what we sort of tried to state in the article, right? Performance is a wide range. Can you explain more about Google and Broadcom's relationship? Max Hodak from Neuralink and Science was asking on the timeline last week: why have Broadcom as a middleman? Couldn't Google do the design and place the orders with TSMC themselves? What's your read on that relationship and how
Starting point is 01:26:21 durable it is. Yeah, so when you think about chip design, there's a few different stages, right? There's defining the architecture, and then there's actually like implementing that architecture onto a process technology. There's laying out that architecture into gates on the chip.
Starting point is 01:26:37 And then there's like the whole supply chain side of things, negotiating contracts, getting allocations, et cetera. That takes like 18 months, right? Isn't that like an 18 month process, basically? Yeah, 18 months or more, right? Yeah. I would say, I would say actually, like,
Starting point is 01:26:52 NVIDIA is on the faster side and Google's on the slower side, just because, you know, NVIDIA's been doing it for longer. They have a bigger team, right? And, you know, but at the same time, Intel has the biggest chip design team,
Starting point is 01:27:02 and they move even slower than that, right? They take, like, four years. At least that's what they did a year or two ago. We'll see what the new CEO can get done with the, you know, reorg, right? But as far as, like, Google, you know, when they first started the TPU, it was very few people,
Starting point is 01:27:16 and they relied heavily, heavily, heavily on Broadcom to do everything, right? They just defined the top-level architecture, and Broadcom did everything below that, right? Negotiating with the supply chain, figuring out how to lay out gates, everything, right? As time has moved forward, Google has taken on more and more of this, right? Now, you know, they've talked a lot about AlphaChip, where they use AI to help floor-plan the chip, right? Once you have the architecture, how do I physically lay it out onto the chip, right? They've done more and more and more there.
Starting point is 01:27:43 They haven't taken over everything yet, but that's sort of the point. But Broadcom has this, like, super big advantage, right? NVIDIA, they acquired Mellanox, you know, call it five, six, seven years ago, a huge acquisition. Who's the biggest networking company in the world? Broadcom. Right? Broadcom is the biggest networking company in the world. And, you know, when you talk about AI, it's the architecture of the actual processing elements. It's memory, which you're buying from, you know, the memory companies, right, Hynix and Samsung and Micron. And then it's networking, right? When you try and boil it down to the most simple things, right? The networking side of things is so important. And the
Starting point is 01:28:19 let's say technical competence of everyone around the world besides Broadcom and NVIDIA in networking is so low, or rather it's just not as good as them. They're actually good, but it's like Broadcom and NVIDIA are just so good, and Broadcom is better than NVIDIA in many ways at networking, that
Starting point is 01:28:34 when you think about what Google is doing, yes, they're defining what the network topology is, but when you're talking about the physical network SerDes, you know, how packets get transferred, all these different things, Broadcom has heavy, heavy influence there. So to this day, right, Broadcom is still charging margins like they did three, four years ago, even though Google has taken up more and
Starting point is 01:28:53 more of the work. But at the same time, Google can't leave until they figure out how to do the networking and supply chain themselves or with a partner. And so what are they doing on TPUv8 that is potentially a distraction that's slowing down their execution? They're working with MediaTek, right? MediaTek at times has helped Cisco with their network chips. MediaTek has done a lot of work on some of this networking stuff. They're nowhere close to Broadcom, right? On revenue, right, that's one metric. On technical competence, you know, that's another metric. I think MediaTek is good, right?
Starting point is 01:29:23 But, like, they're just nowhere close to Broadcom. So now Google is having to work with, you know, I don't want to say subpar vendors, but vendors inferior to Broadcom. And that's just to increase their margin on TPUv8. I would even say their angle when they started this project was never, we're going to sell TPUs externally. It was, dude, we're paying, you know, a 3x markup to Broadcom, and half the cost of this chip is memory. Like, what the fuck are we doing?
Starting point is 01:29:47 You know, at the same time, it's like, well, sure, physically the cost for the networking is not that much, but what value does the networking bring? That's, you know, sort of Broadcom. And then Broadcom is also doing the, like, game theory math of, well, you can't really leave us, so we're going to charge you what we think is fair, or what we think we can charge. And Google's like, oh, no, we're stuck with you, right? So MediaTek is taking way, way, way less margin. They're not passing the memory through them, right? And so, you know, this ends up being like, hey, that's a huge advantage for them.
Starting point is 01:30:16 The flip side is, like, well, they've got to engineer all this work that Broadcom was doing instead of working on a way better architecture. They've got to work with a worse vendor, objectively worse, although MediaTek, like I said, is very good, to try and implement TPUs more directly with TSMC, with less Broadcom sort of in the middle. Very helpful. And Google, you know, because it's risky, is going down both paths, right? They're continuing to work with Broadcom on TPUv8, and then on a separate TPUv8 project,
Starting point is 01:30:46 they're working with MediaTek, right? Because they can't risk, you know, finding whatever 30 points of margin, 40 points of margin, 50 points of margin. I can't risk the TPU being late, because ads runs on that, Gemini runs on that. Yeah. Can you give any takes on NVIDIA's $2 billion investment in Synopsys that got announced this morning? I don't know if you saw it. I'm assuming you did. Yeah, so in a time where, you know, let's say the two biggest chipmakers, Broadcom and NVIDIA, are making more money than ever, and everyone else in the supply chain and all the hyperscalers are trying to design more and more chips, everyone's sort of working on that. You've got the EDA vendors at the lowest
Starting point is 01:31:23 valuations that they've had. They're still very expensive, but the lowest valuations they've had on an earnings-multiple basis for a long time. And this is on the eve of, hey, like, objectively, are there going to be more chip designs or fewer chip designs in five years? A lot, lot more, right? Now, the flip side is AI chip design is coming. There's 20-plus companies doing AI chip design.
Starting point is 01:31:44 We've got a really long article coming on that soon. That will sort of explain the landscape, but AI chip design is going to shake up everything. And to be clear, this is AI helping chip design, correct? AI helping chip design, whether it's for AI chips or for, like, power chips. Yeah. And so the question is, you know, NVIDIA has a lot of tools internally, right? The dirty secret about EDA is that there's three companies that own 95% of the revenue. But at the same time, Google and NVIDIA and Broadcom and all these guys also designed a lot of their tooling internally, although they are massive customers of all three vendors, right? So it's kind of like an oligopoly where the customers also contribute a lot. And so NVIDIA's whole goal here is, like, how do I get every EDA flow working on GPUs? Because today a lot of it is running on FPGAs,
Starting point is 01:32:32 a lot of it's running on CPUs. And chip design is going to get a lot more AI-influenced. How do I get everything working on GPUs in terms of the operation of it, even if it's helping people design things that aren't GPUs, right? Yeah. And, I don't have enough engineers to work on all the software. They've open-sourced a lot of software, right? Like cuLitho, the software for lithography, right? And they've got all this software up and down the chain, all the way from lithography to laying out chips and all these other things. They just want to make it all run on GPUs. And so that's what their goal here is, right? And now they've given Synopsys a huge, huge... they're buying Synopsys at the lowest valuations that Synopsys has ever had, with all this cash that they were going to give away
Starting point is 01:33:12 in dividends or buybacks anyways, and they're getting Synopsys to now make GPUs first-class, right? And so I think this is a win-win for Synopsys and NVIDIA. Well, we could go way longer, but I know what's on your calendar. You've got to hit the gym. Thank you so much for coming by and chatting with us. This is really helpful. Have a great rest of your day. Enjoy the holidays with your family. Great catching up. We'll talk to you soon. Cheers. Goodbye. See you guys. See you. Let me tell you about public.com, investing for those who take it seriously. They've got multi-asset investing and they're trusted by millions.
Starting point is 01:33:49 We have Ro Khanna in the Restream waiting room. Let's bring him into the TBPN Ultradome. Ro, good to meet you. Welcome to the show. How are you doing? I'm doing well. You guys have become quite the celebrities in my district. Everyone is tuning into your podcast.
Starting point is 01:34:06 I'm glad to hear it. I'm glad to hear it. And we're happy to have you on the show. Thank you so much for the time. What was that, are the All-In guys jealous, or do they respect you? I think that they're in the stratosphere. They have left, you know, the scraps. Yeah, we're picking up the scraps compared to them. I think, like any great Silicon Valley startup, you're at their heels. It's funny, we had a New York Times
Starting point is 01:34:32 piece about us. It was a very nice, you know, just like, here's what TBPN is doing, just kind of an explainer piece. David Sacks, of course, got a little bit more of the investigative journalism treatment, got five reporters. We only got one. And so I think that tells you about the relative importance of the shows. But anyway, I'm sure we'll get into that. I would love for you just to kind of set the... Normally when our guests join and they're wearing a suit, we say thank you for dressing up. But I think this is, I'm assuming, one of your daily drivers.
Starting point is 01:35:09 Yeah. Well, you know, I'm now back in my district. The only way I'd lose my seat is if I started to show up in a suit back there. Oh, he's a suit guy. Yeah. It's the uniform in D.C. But I was hoping you could sort of take us through a little bit of the prehistory, since it's your first time on the show, just explain how you wound up in this position, a little bit of your backstory.
Starting point is 01:35:26 And then obviously there are so many hot topics that I want to talk about in artificial intelligence and tech broadly. And I want your opinion on everything that's going on. But I'd love to kick it off with a little bit of how you wound up in Congress. Sure. Well, I am the son of immigrants. My parents came from India in the late 1960s. My grandfather spent four years in jail alongside Gandhi as part of the Indian independence movement. And that really inspired my love of public service. When I came out to Silicon Valley, I had a professor, Larry Lessig, who said, if you care about policy, go out to Silicon
Starting point is 01:36:06 Valley, that's where the interesting things are happening. That's where the big things are happening. So I went out. And I ran when I was 27 against the Iraq war, and I got killed. I got crushed, 71 to 19, but came to the attention of folks as someone willing to stand up against the war. And then I worked as a tech lawyer. I supported President Obama. I got to go work for President Obama. And then I wrote a book in 2012 about what we needed to do to build new manufacturing across this country, what we needed to do to really have the modern economy in different parts of the country. You were one of the first, the beginning of the American Dynamism movement, would you say? Yeah, I was going to say, President Trump, he stole all my ideas
Starting point is 01:36:54 in terms of manufacturing. But, you know, he... Well, that's good. That's good then, right? We just want to... Yeah, no, look, I support the American Dynamism movement. I'm a fan of sort of what Marc Andreessen wrote in a Wall Street Journal op-ed about, like, how do we not make masks in America? How do we not make basic things in America? When my parents came to this country in the 60s, we were the place to be. We were humming.
Starting point is 01:37:18 We were brimming with confidence. Kennedy said to go to the moon. And my first book was about why manufacturing still matters. I think it was a colossal mistake to let China eat our lunch on so many key industries, especially now with rare-earth metals and magnets. I mean, we should have a Manhattan Project to do that in the United States or New Zealand, Australia, Chile. But, you know, so after my time in the Obama administration,
Starting point is 01:37:42 after I wrote this book, I said technology is going to shape so much of the future of this country. I have a vision of how we can make sure that it helps everyone in my district and around the country. And maybe I have something to offer to Congress. So I ran against an incumbent again, lost again. California is a machine-dominated state. It's very hard to break in, but I persisted and won on my third try. There you go.
Starting point is 01:38:06 Third time's a charm. And so for this year, in 2025, how would you frame your top priorities? There's this weird disconnect that we've been tracking on, like, how relevant is AI? It's so dominant in tech. And yet, if you talk to somebody at Apple, they'll be like,
Starting point is 01:38:27 we didn't want to focus on AI this year at all. We wanted to focus on battery life because that's what helped us sell phones. AI was actually not a driver of iPhone sales, for example. It's a deeply pervasive discussion point, and yet it's widely used but also widely hated. It's such a unique technology.
Starting point is 01:38:48 But just in terms of political priorities, what's been on the top of the stack for you this year? I want to answer your question on AI. Obviously, in the last few months, what's been the highest priority is getting these Epstein files released. And when Thomas Massie and I passed the Epstein Files Transparency Act, it was my bill, passed 427 to 1, 100 to 0 in the Senate, and Donald Trump signed it. Most urgently, it's about justice for these underage girls, over a thousand victims who were raped at Epstein's island. But it's also about this kind of idea of elite impunity, that these rich and powerful people, I call them the Epstein class, don't play by the rules which you and I have to play
Starting point is 01:39:30 by, and people are tired of it. And it also is a story of how in the world you get some things done in Washington. How did a Bay Area progressive congressperson end up getting Donald Trump to sign his bill and getting 427 people in the House to vote for it, and 100 senators? So that has been the immediate priority. But what I say to folks...
Starting point is 01:39:48 So are you optimistic that the American people will ever get a, like a truly cohesive narrative on the Epstein... story, or will it be our generation's JFK assassination? I'm confident we're going to get far more than we've had so far. The release is now mandated by law December 19th or December 20th. And then more names are going to fall. You've already had some high profile names fall because of their affiliation with Epstein
Starting point is 01:40:19 covering up for him or being inappropriate. There are going to be other names that come out. Now, do I think that it's going to satisfy everyone? No, there's always going to be some sense that we didn't get full justice, but it's going to be much better for these women who were denied justice for decades, and that was not partisan. I mean, they were shafted by a justice system that didn't work. And there are a lot of rich and powerful people who got away with it. But look, what I tell people is that AI is going to matter even more than anything. And to your point about Apple, it's not just AI literally, as in Grok or ChatGPT or
Starting point is 01:40:57 a technology that detects patterns and can predict the future based on patterns. It's more that AI has become a symbol for a technology revolution that people know is changing everything about their way of life and the economy and where they feel like they don't have control, that they don't have a full say in what that's going to mean. They don't have a full say in what that's going to mean for their kids in terms of having good paying jobs and they're unsure if their kids are going to have as good a life as their parents had. They don't know what that's going to mean culturally for them as citizens. Are they going to have the same sense or are they just going to be manipulated by algorithms? And they don't
Starting point is 01:41:36 know what that means culturally as their kids are on phones in school and becoming sort of creatures with machines. And so this whole concept of how technology is going to be something that empowers people, and that people feel comfortable about as opposed to fearful of, is the challenge, in my view, of our time. And, you know, I've gotten attacked from some people in Silicon Valley saying, oh, he's kind of a Luddite. And I was like, no, I'm not a Luddite. Of course, I believe AI can do a lot of great things in medicine, in coming up with new treatments for disease and lowering costs. But I don't think we can be oblivious to people's concerns about keeping jobs and keeping social cohesion and making sure their kids are going to have a good economic future. And so I've tried to be thoughtful
Starting point is 01:42:32 about how we adopt AI, how we adopt technology in a way that keeps the American Dream alive and benefits folks. And I mean, that's such a wide remit, how we adopt AI technology, because you can see it implemented from a chatbot that, you know, some random person uses or, you know, kids are using, all the way down to, you know, deep in the bowels of some enterprise software product that, you know, no human was ever interacting with to begin with, and then it's just streamlined a little bit with some AI dropped in the middle of some big system. How are you thinking about creating some sort of taxonomy around AI? Do you see a divide between generative AI and more traditional machine learning workloads?
Starting point is 01:43:23 Do you see a divide between consumer and B2B applications, self-driving versus what happens in a chatbot? How are you thinking about actually breaking apart that problem? Because there's so much there when we say AI. I would say that the key distinction is, is AI going to enhance human capability or eliminate human beings? That is the distinction and that we need to figure out as a society how we get more AI that is enhancing human beings as opposed to just eliminating them. Let me share two thoughts on this, both of people who influence me. Steve Jobs described a computer as a bicycle for the mind. He didn't say computers would eliminate the mind.
Starting point is 01:44:14 He just said it would make the mind go really faster and better. And my view is, how does AI do that? And then Daron Acemoglu, who won the Nobel Prize, at MIT, has this idea of total factor productivity. Let me try to explain it simply. If you just had AI replacing human beings and those human beings then becoming not productive, not only would you have frustration in a society, right? I mean, who wants to just get a check without contributing? People take pride in contributing. But you also wouldn't actually maximize total production, because you have all these people who could be doing things who are not productive and are not able to earn a living and spend money. And so what he says is that there are some savings for consumers and for shareholders if a technology just eliminates labor.
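An aside for readers, not from the conversation: the textbook growth-accounting version of the Acemoglu idea Khanna is describing writes output as a function of capital, labor, and a productivity residual. The symbols below are the standard ones, assumed here for illustration rather than anything spelled out on air.

Y = A \, K^{\alpha} L^{1-\alpha}, \qquad \frac{\dot{A}}{A} = \frac{\dot{Y}}{Y} - \alpha \frac{\dot{K}}{K} - (1 - \alpha) \frac{\dot{L}}{L}

Here Y is output, K is capital, L is labor, and A is total factor productivity; the second expression is the Solow residual, the part of output growth not explained by simply adding machines or workers. On this reading, a technology that merely substitutes capital for labor shuffles K and L around, while one that complements workers shows up as growth in A, which is the effect Khanna is pointing at.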
Starting point is 01:45:05 But the best technologies, like electricity, like automobiles, don't just eliminate people. What they actually do is increase workers' ability to produce. There are technologies that increase human capability, and so you have the benefit of the synthesis of the technology and the worker. And that is actually what transforms lives, and he calls it total factor productivity. And so my ideas around this have been, how do we do that? How do we make sure we just don't eliminate four million commercial drivers? How do we make sure that the adoption of things is actually making us more productive, and that it's being done with respect to workers and capability? Okay, but so let's make it more specific, because I agree at a high
Starting point is 01:45:55 level with a lot of that. But let's talk about a specific role or job, like truck driving. You've generally come out against, or have concerns around, AI-based job displacement with long-haul trucking and truck drivers. On the other side of that, if I'm running a trucking company and I want to deliver the best possible service for my customers, it's possible that AI would be able to support that. What kind of policy do you think is right in order to create some guardrails around the industry and how AI should be used in trucking? I'd love to kind of understand more. Yeah, I would say have a human in the loop. And so what does that mean? When I'm on a
Starting point is 01:46:52 plane, a lot of it is automated, but we still have a pilot there. And I'm glad we have a pilot. I wouldn't want to just fly in an automated plane. And so does this mean that a truck driver's job may become more appealing? Because right now, as you know, we have a shortage, actually, of truck drivers, more demand than drivers. But if they have an assist from a technology that maybe allows them to rest more, that's less taxing, they're there for the edge cases. If something is possibly going wrong, they're there to deal with maintenance. They're there to make sure that you have loading and unloading happening.
Starting point is 01:47:30 We can reimagine what the role of a truck driver is going to be. And we can certainly have a temporary view for the next five years that you should have the driver there now. That doesn't mean that at some point there may not be jobs, or certain parts of things that don't require a driver. But it doesn't seem unreasonable for five years to say we want a driver in the loop, and let's rethink the types of jobs those will be. And if we need the government to be helping invest in the development of this technology, fine, but do it in a way that's going to be complementary with drivers. This is kind of happening already with Waymo, where there is a human in the loop, but the ratio of teleoperators to cars on the road is potentially higher than one-to-one right now,
Starting point is 01:48:23 you know, according to some reports. But over time, I think the Waymo team expects there to be fewer and fewer humans in the loop. The question is, how fast does that happen? And you're sort of proposing maybe trying to make that as gradual a process as possible. Because, I mean, you go back to, like, the elevator operator used to be a human. Now we use buttons. And no one's really missing those jobs. They phased out over time. I think the main thing is everyone is concerned about rapid job displacement, not necessarily... if I told you your grandson can't be a truck driver, you'd say, oh, you know, he'll find a
Starting point is 01:49:05 different job. But if it's, like, every truck driver out of a job next year, that's obviously much more damaging to the U.S. economy. Is that how you think about it, in terms of just timelines more than strict rules forever? I think that's thoughtful. There's a famous economist who once said, in an earlier era, jobs for the father, not for the son. And by that he meant, look, we've got to make sure that people in their 30s, 40s, 50s, 60s have jobs. That doesn't mean that that's exactly what their kids are going to do or their grandkids are going to do. For a lot of this human-in-the-loop legislation, we're talking about five years. We're not talking about 15 years. And we're talking about roles evolving, right?
Starting point is 01:49:50 I mean, it may be that these roles evolve, and then there's less of a need to hire folks down the line and you have a natural transition of folks, but you're taking people who are workers and making sure that they're productive and they have a good life. Let me explain why I think this matters. Phone operators, which people often give as an example, and Alexis, who's at Chicago, had a great point about this, that was 2% of the workforce. Commercial drivers are 10% of the workforce. You already have anger in the country from so many people who were displaced by globalization, displaced by the concentration of wealth in some areas. And you really want to throw into this mix rapid, mass job displacement, and then what?
Starting point is 01:50:38 Just compensate them and have people stay at home and just get a check? Like, is that the society that we think is going to be productive? Or would we rather figure out how they have some role and some say, and the transition be managed in a way that considers their interests as well? And I get that it's a good-natured debate, and people say, okay, you're kind of adding some cost to the issue. And if all you cared about was shareholder profits and minimizing consumer costs as your only holy grail,
Starting point is 01:51:16 and you didn't care about jobs and you didn't care about communities, then people have a legitimate critique. of me. But I would argue that that was the mentality during globalization, and it's what's led to so much of the polarization, not just for politics in the United States, but in the Western world, led to things like Brexit, led to anti-immigrant sentiment. And maybe we should consider jobs and communities, not as dispositive, but as a factor, just like we consider consumer costs and shareholder profits. Yeah. What's your look back on how the Uber story played out? because that was a weird moment where there was a big pushback from the taxi cab drivers.
Starting point is 01:51:59 Those jobs still exist, but they're just way less profitable because the medallion system has kind of been undone. But if you want to make money driving someone, you can, but you're making less money. Do you think that we should have handled that differently if we could run back the time? Or do you think it happened slowly enough that it was actually okay and delivered enough value to the consumer? Because Uber is one of those weird examples where the amount of, you know, taxi-cab-like activity, ride-sharing activity, 10x'd, and more people take these rides than ever before in the taxi era. And yet it did have remarkable impact on the market structure of that industry. Well, I'd be hypocritical if I said I'm against Uber.
Starting point is 01:52:51 I take Ubers all the time. Yeah, right? I'm the same way. Exposé, you know, next time I do an Uber. But I'll say this, we should have done more for the medallion owners, right? I tried actually in New York. This is before Zohran Mamdani became Zohran Mamdani. When he was an assembly member, he was really focused on a lot of these taxi drivers who had lost their medallion value
Starting point is 01:53:17 and were underwater, and what could we do to compensate them? And I'd actually reached out to Jamie Dimon, who tried to do something, to his credit, through J.P. Morgan, and it ended up not working out. But we should have, as a government, done more to help those folks who had medallions and lost all their value. And that's an example of something where we could have been more proactive. And then there's a huge debate about Uber drivers
Starting point is 01:53:46 and whether they're getting enough value and have enough say over their lives. I would argue that we need that. And I'd argue we need national health insurance. This is the biggest area where if you're not going to be employed as a traditional employee, it would really help if people didn't have to buy health care on an exchange that has soaring premiums. So there are better things we need to be doing to help that Uber driver. But am I glad that there is a technology like Uber? Yes, I am. I think it has created jobs and it has made life easier for many people.
Starting point is 01:54:23 You brought up Mamdani. He made a post, I think it was yesterday or the day before, that a bunch of Silicon Valley types were agreeing with, which is something you haven't seen very often. It was basically around SMB deregulation, making it easier to get a small business off the ground. Should that be a more important conversation in every state and region? I feel like growing up in California, I've seen so many businesses, like, try to get off the ground, and you end up seeing, like, a finished restaurant that just has its door closed because they're waiting on some permit or something like that. And it's obviously hard enough to start a restaurant. And it seems like oftentimes local governments
Starting point is 01:55:11 can get in the way. Do you think that needs to be just a bigger part of the conversation, given that starting a business is a great way to insulate yourself from at least some job displacement risk with AI? Yes, it does. And look, Zohran became famous in part from this halal video where he was basically saying it takes too much regulation to have a halal stall and we need to streamline that. And so I believe, yes, we need to make it easier for people
Starting point is 01:55:41 to start a small business, to be their own business owner. That's not just making the permitting easier. It's also making sure people have access to capital. A lot of times that's a barrier. But I'll tell you one thing that I think is often a blind spot for folks in my district. I love small businesses. I love entrepreneurs. I think that there's a lot of people who want to build. We all completely agree. We love small businesses here too. But there's the but. Most Americans, most Americans are not going to go just start a small business. Like, this idea that every person in Bucks County, Pennsylvania, where I grew up, or Western Pennsylvania, should start a startup or build a business.
Starting point is 01:56:26 Like, my dad never did that. He had a middle-class life. He worked for the same company for 30 years. And there are a lot of people who just want a decent job. And they just want a job that can support a family. And there's nothing wrong with that, if they want to be in manufacturing or they want to be a nurse or they want to be a child care provider. And so sometimes our rhetoric becomes, like, why can't everyone become an entrepreneur? It's like, why can't
Starting point is 01:56:49 everyone become a politician? Now, maybe an entrepreneur is a better life. But, like, a lot of people just don't want to do that. And they still want to have the American dream. And so all I'm saying is, let's think about how to help small business owners, but let's also think about the four million people who are commercial drivers, and what is their life going to look like. And it's important to have that balance. Give me some lessons from the recent trip to China. I'm fascinated by how they're dealing with AI. Are they doing anything right? Are they moving even faster? Do they have a solution to the job displacement problems? Is there anything good or maybe risky that you found going out there? What were your takeaways? Three takeaways. One, one third of the AI talent is in China.
Starting point is 01:57:37 What does that mean? That means it would be totally counterproductive to ban Chinese students from coming to the United States, or Chinese entrepreneurs from coming to the United States. We want to have that talent come to the United States, because we still have a better ecosystem for capital and for investment. Second, we need to make sure that we're developing the talent in AI here in the United States, and investing in STEM, and making sure that we're encouraging the local development of that. But third, and this is the most important, guess how much youth unemployment is in China?
Starting point is 01:58:17 It's nearly 20%. It's really high, right? You guys are too smart on this stuff. It's really high. 20%. But that's crazy. How is that possible? I feel like, can't they just go build more bridges
Starting point is 01:58:29 and create more jobs? I thought it was a command and control economy. I don't know. They have enough empty high-rises, I think. Maybe. I don't know. Yeah. What was your takeaway from it?
Starting point is 01:58:39 As I describe it to people, you can't build dating apps in China, right? Like, so, you know, the people who have these fancy degrees. Is it a ban? Yeah, I mean, they're in such a directed economy. They want everyone to, like, make stuff, manufacture, not do things that they would consider frivolous. Sure, sure. You know, a sports app, a music app.
Starting point is 01:58:59 Interesting. All the cultural stuff that we do that improves consumer life or things about consumer needs. And, you know, so you're someone who gets this fancy education in college, and then they're like, okay, go work at a factory. And just like we've undervalued people who want to work at factories in America, we should be having more trade schools and more respect for factory workers. They've undervalued people who don't want to work at a factory. And the reality is, like, you should have both choices. So these people, they're there, and they don't want to go necessarily to build a bridge or necessarily to build a next factory of robotics.
Starting point is 01:59:41 And it was hilarious, because I would talk to Premier Li Qiang or others, and they'd say, well, it's a voluntary unemployment problem. These are just folks who should be doing these jobs. But what if in America we said, okay, you know, as one of the newsrooms, when they were being laid off, said to someone: go become an electrician. Well, that's as offensive as telling a steel worker to become a coder. Like, you know, people do things and they want to do what they aspire to do. And China is a command-directed economy that has overvalued manufacturing, doesn't have that diversity. We do. Our problem
Starting point is 02:00:16 has been the opposite, that we undervalued making things. We undervalued the trades. And so what we need is sort of a balance for America, to have manufacturing, but also this incredible ecosystem of the service economy, which can employ people where China can't. And that's ultimately why I bet on America. And one more point: I'm sick of this argument that we should just go be like China, that they're going to eat our lunch. Really? You know, the Chinese model is crony communism. Like, okay, Xi Jinping gets rich and a bunch of people who are running these companies get rich. And the rest... and then you have 20 percent unemployment and you have consumer welfare declining, and look at how most people live. They don't live in nice houses with,
Starting point is 02:00:59 you know, two cars. So, like, I don't want China as a model. And I'm not going to compromise every American having economic security just because we're chasing China. China is not the model. America needs to be more like America, how we built America in the 1940s and 50s. I completely agree. Quick couple of things. Want your takes on a couple things. Housing affordability. I think a lot of people agree right now that housing affordability is sort of, like, upstream of a lot of the problems that we're facing as a country. What's your current stance on how we can improve affordability at kind of the local level and at the federal level? I'm a YIMBY. I'm an abundance guy on housing. We've got to build far more housing in California. I don't endorse people who are sort of
Starting point is 02:01:52 zero-housing people in my district. We've got to realize that aesthetics matter, but economic equality of opportunity matters more. And you can't have $5 trillion companies in my district and expect to live like when it was the Valley of Heart's Delight. Like, if you've got that many companies, you've got to have housing near transit and dense housing to make sure that people can live there, and that it's not just a place where wealthy people can live while the working and middle class are getting shafted. We also need to stop private equity from buying up single-family homes. People say, oh, this is a red herring. No, it's not a red herring. In some places, they have bought up too much single-family housing. So pro-building, pro-streamlining,
Starting point is 02:02:35 making it easier to build, and having zoning reform, and stopping private equity from buying up these single-family homes. What about in general? Will we make progress, at least in California, on those issues in the next 10 years? Yes, because I think people realize we didn't make enough progress over the last 10 years, that this is a failure of California policy. And whoever is elected the next governor, I can't imagine it won't be on an abundance agenda
Starting point is 02:03:06 when it comes to housing. And it's not going to be, okay, let me do it in the last year, or try to do something at the end of an eight-year term. It's going to be day one: how do we start to do things that are going to build more housing? So I think it's been a wake-up call for California. That makes sense.
Starting point is 02:03:23 Any quick comments on the current state-versus-federal AI regulation debate? We didn't get to touch on that earlier. And you had some comments recently on SB 1047, the bill in California. But what's your updated view on where regulation should be happening? Well, look, ultimately we need a federal regulatory framework, but the way you get good federal legislation is having legislation in the states. That's federalism. And I don't understand how you would have a moratorium on state legislation when federal legislation right now looks bleak, the prospects of it are bleak. It is such an unpopular position, even among Republicans.
Starting point is 02:04:11 So my view is, build a consensus that you can have thoughtful regulations at the federal level and work on that. Don't stop states from regulating. And this idea that, okay, you're going to stop all the growth... I mean, my district is $18 trillion of value. We've got five companies over a trillion dollars. East of the Mississippi, there's not a single trillion-dollar company. California is undefeated. It's so good.
Starting point is 02:04:43 You talk to folks in, like, Bucks County, Pennsylvania, where I grew up, and they're like, come on, come on. They're producing more wealth than ever before. Like, what we want to know is, how are our kids going to fit into this? And I just think that... I wish more tech leaders... you know, one who sometimes gets it is Jensen Huang, who is talking about this. Like, yeah, I mean, how do we create economic development opportunities in places that have been left behind? How do we make sure that everyone comes along on the AI revolution?
Starting point is 02:05:14 I just think it's in tech companies' interests to embrace this, in a similar way as the economic royalists embraced the New Deal eventually. I mean, you can't have just a capitalism that is only working for some, with large chunks of the country suspicious and left out. Yeah, I just worry that we don't know the shape of what we're regulating yet. Like, the unintended consequences of social media took five, ten years to develop. I mean, two years ago, we were reflecting on this. People were worried about AI killing everyone and creating the Terminator.
Starting point is 02:05:51 And then what wound up happening? Well, it wasn't really political misinformation. It was much more people chatting with it for a really long time and going crazy, you know, maybe overbuilding, maybe risk in the debt markets. Like, the risks were very hard to predict. There were risks, but it wasn't exactly what we thought. And so I'm always... I'm a little bit hesitant about, like, you know, maybe there should be regulations, but when will we be confident that we know how to regulate it? Is now the right time? Do we have clarity?
Starting point is 02:06:25 Because a lot of the stuff, it stands on, you know, we already have fair use. We already have copyright protections. And so a lot of it can be enforced through the courts, I would imagine. But of course, if new problems come up, they need to be resolved. And that's the way we resolve them in a democratic society. I think that's fair. The places I focus on are jobs and American citizenship. And I agree with you on the jobs part, but it just feels like the jobs, we haven't seen a collapse. And even people building the AI technology are like, this is going to put everyone out of jobs, and that's good.
Starting point is 02:06:59 And then the people that hate the technology are saying it's going to put everyone out of a job. And that's bad. And it's kind of crazy because they all agree that the jobs are going away. And yet what do you get when you actually look at the jobs figures? It seems like we still have jobs. Like it seems like we actually can't delegate to the AI. And I can't just say, hey, you know, trucker, like I want the AI to handle this one. It's just the technology is not there yet.
Starting point is 02:07:22 And will it be a year? Will it be five years, 10 years, 100 years? There's a whole bunch of incentives to say it's coming right now. And it's hard to get a read on, and predicting when things will happen is, you know... fortunes are won and lost on that alone. Totally agree with you. John Maynard Keynes said we would all be working 15-hour work weeks. And he was wrong, and he knew more about economics than any of us.
Starting point is 02:07:44 So, you know, it's hard to predict. But I think what we can do is, when you look at Daron Acemoglu, he says, well, why don't we have a neutral tax code, so we're not incentivizing, through depreciation, investment in technology and automation over hiring people. I mean, there are things we can do so that
Starting point is 02:08:11 we prioritize having people in the loop. And then there are things we can do in our social media environment to protect us as citizens and kids. Two things: like, let's eliminate bots, right? Elon Musk talked about doing this on X, and there's still a ton of bots. But a lot of the bots that use AI are, in my view, hurting our democracy. And then let's protect kids from some of the harms on social media. Yeah, yeah. You know, so I guess, you know, I love sparring with folks
Starting point is 02:08:31 And I appreciate sort of the criticism I've gotten from some of the tech folks for the tweets on AI and drivers. But I guess what I would hope, for tech people listening to this, is don't resist every form of regulation and sort of dismiss people's anxieties. Instead, be part of how we get smart regulations and how we answer people's concerns. Because if 70% of the American people believe the American dream is dead and have a concern about AI, the answer to that, for anyone who's ever been in a relationship, is not to dismiss it and say they're dumb. It's to say, okay, how do I address that anxiety so that we can move forward? And I guess my hope would be that there'll be more tech leaders like that. Victor Peng is one; he was formerly a leader at AMD.
Starting point is 02:09:20 I mean, there's some people who are thinking in that way. And I think it's in Silicon Valley's interest to have that kind of view. I think you really freaked people out with "there should be a tax on mass job displacement." Well, there is a tax on the profits, right? Like, we tax profits. So, I mean, there's a question of, like, maybe we adjust that, but these are all dials that already exist. We're just discussing how we turn them, I would imagine. Yeah. I don't know. Well, thank you. Thank you. This is one thing that's different for me than other politicians: I toss ideas out there. If I think there's good pushback, then I adjust my views. And I like a
Starting point is 02:09:58 politics like this. I just talk like I'd talk to someone over a drink at a bar. And everyone else is, like, so scripted. Oh, you can't put out an idea because, you know, maybe it'll come back two or ten years later on Face the Nation. I just don't think that's what our politics should be. It's like, put your ideas out there. Like, what human being doesn't have some ideas that are dumb? Like, maybe Einstein didn't or something. Most of us, yeah, we put out good ideas, we put out bad ideas. I love it. I love that. I love that approach. And it certainly sparks a conversation. And it certainly fits with what we've done here today.
Starting point is 02:10:30 This was really fun. We really appreciate you coming on the show and just, like, going all over the place and just talking through all this stuff. It's fascinating. I'm learning a ton, and we really appreciate you taking the time to come talk to us. Yes. Thank you so much for coming on. You guys are doing great. Seriously, you're elevating the conversation in Silicon Valley.
Starting point is 02:10:46 It's an honor to be on, and I look forward to more. Yeah, we'd love to have you back on the show and go way deeper on all of these. And I'm sure by the next time you're on, all of the data points will be different and we'll be staring at new problems, and they will require new solutions and new discussions. And so thank you so much for taking the time to come talk to us. I appreciate your approach.
Starting point is 02:11:06 Thank you. Have a great day. We'll talk to you soon. Bye. Before we bring in our next guest, let me tell you about numeral.com. Let Numeral worry about sales tax and VAT compliance. Compliance handled so you can focus on growth. Our next guest is
Starting point is 02:11:23 Jonathan Swerdlin of Function Health. Hey, sorry for keeping you waiting. We were in a political quagmire. We were in the swamp. We went to the swamp. We don't normally go to the swamp. Normally we talk about Series Bs.
Starting point is 02:11:38 We talk about large Series Bs. He did get us going, though. He was telling us how much value has been created in his district. It's in the trillions, it's in the tens of trillions. And we were just foaming at the mouth about the market caps. And then we said a bunch of other stuff.
Starting point is 02:11:51 But thank you so much for coming on the show. For those who aren't familiar, introduce yourself. Introduce the business. Tell us what's going on. Absolutely. Great to be here. Now you're climbing out of the swamp. We're going to talk about something a little less swampy.
Starting point is 02:12:04 Thank you. We talk about health. It's great to see you guys. Well, I mean, health is, like, honestly, more political than politics. I can see it can be. But this conversation won't be. It's funny. We actually say that biology is bipartisan, though.
Starting point is 02:12:20 And I, and I like this idea of everybody can agree that nobody likes to suffer. Yeah. You know, and everybody can agree that preventable death shouldn't happen. Yeah. So it comes that, but of course, the nuance of how you get there can become political because who's going to pay for it, right? Yeah, not bad, but also just. Well, that or the, well, this diet is better than that diet. This diet's right wing, that diet.
Starting point is 02:12:48 Oh, working out, that's a right wing thing. or like, oh, this is left wing. And, like, you know, different ingredients became politically charged over the last few years. My powder is better than your powder. For sure. And sometimes there's political influences on the right and the left
Starting point is 02:13:01 who actually have the same supplier. And then they put different branding on top of it and they sell that. That's a fascinating rabbit hole to go down. But anyway, we're not here to sell supplements. Let's talk about the business. You know, it's funny. It's like, I'm not left wing.
Starting point is 02:13:15 I'm not right wing. I'm the whole bird. Otherwise, you fly around in circles. is kind of the idea. I love it. I love it. I like it. The whole bird.
Starting point is 02:13:22 That's great. The whole bird. The whole turkey. So, yeah, take us through the shape of the business these days. What's the value prop to consumers? What's the progress been? How big is the company? Kind of set the table for us.
Starting point is 02:13:33 Okay. So simple value prop is get on top of your health. It's time you own your health. So what does that start with? It starts with a new platform. It's $1 per day to join. And the platform includes twice a year comprehensive lab testing at over 2,200 locations,
Starting point is 02:13:47 at any Quest Diagnostics around the country. You go and you test everything: heart, hormones, liver, kidney, thyroid, cancer signals, you name it. Up and down, all of that data goes into a platform, into an app that explains to you what's actually happening inside your body. And these are the things that you would not get in a physical. This is like a true, true deep look. What Function has created is this entirely new standard for your health, so that every year, for the rest of your life, you know that you're well on top of whatever's happening inside
Starting point is 02:14:15 your body. You're seeing how it's changing over time. You're making sure that you're getting well ahead of disease. And you're doing everything you can to feel your best. So that's the value proposition. And that's what Function delivers right now. We started with lab testing, because that's the most impactful data: 70% of medical decisions are based on lab testing. And recently we acquired a company. You might have heard about this. It's called Ezra. And Ezra is an imaging business. And so Ezra does imaging. And what has been amazing for us is we've gotten FDA-cleared AIs that have reduced the time that it takes for somebody to get an MRI.
Starting point is 02:14:59 Okay? So why does that matter? One, nobody wants to be sitting in an MRI machine, typically. Yeah. Two, it also massively reduces the cost. And it picks up the efficiency. And so what we've actually done is we've introduced lab testing, one of the largest, most powerful lab tests in the country.
Starting point is 02:15:15 And then we went to imaging on the imaging side and brought down the cost. And what you're seeing actually emerge is this new standard for health. We took the most impactful parts of the health system for capturing your data, and we packaged it up in something that's really simple, easy to understand, and really affordable for many, many people. Talk about how MRIs were used historically. Are these things that get done when, like, you have acute pain or you have an issue and then you're doing it? Like, you tear your ACL. Yeah, this feels like kind of flipping it and saying, use it as, like, preventative care. Is that the right
Starting point is 02:15:54 read? That's the right read. And not just preventive, I would just say responsible. Because this idea of preventive is great, but it's also about what might be happening right now that you don't even know about, right? And so the word preventive and the word early are a little tricky for me. Because the word early, it's like, why is it early detection? Why don't we just call it detection?
Starting point is 02:16:30 Can't we just get rid of the word early? What MRI does, traditionally, is it allows somebody to look inside the body. But to do that, it's been really, really expensive. We get an MRI, you know, tear your ACL, something like that. Like, you basically have to spend thousands of dollars to look inside your body. And that's the way insurance is set up, and that's the way MRIs are set up. But what MRI can do is it can look at every single organ and look for tumors that are 0.2 centimeters,
Starting point is 02:17:11 two millimeters. It can look for stroke risk, aneurysm risk, endometriosis, hernias, tears, everything. So if you actually want to understand what's happening inside the body, an MRI is an incredible way to do it. But it's been so arcane and so difficult. It's never actually been architected and set up to look at the body and get well ahead of things. And it's usually been, oh, you go to a hospital, you broke something, you have an issue, you go look at this
Starting point is 02:17:33 one particular area. In Function's case, you can actually look at most of the body through an MRI, and you can detect cancers early, you can detect aneurysm and stroke risk, et cetera. And you can do it for $499, and you can do it across almost 200 locations by the end of this year. There's never been anything like this. This is the first time in history this has been possible.
Starting point is 02:17:59 It is the first time in history it's been possible geographically, from a cost perspective, technologically. And culturally, it's changing. People are realizing this. What I was alluding to before, and it's a really important point, is that a new standard of health is emerging. And that standard includes twice-a-year comprehensive lab testing. It takes 10, 15 minutes each time. You go in, you get your whole body tested, you find out what's actually happening inside. And the second thing is now a quick MRI every
Starting point is 02:18:40 year. If you do it, what you're doing is you're actually creating a baseline for your whole health. And you're seeing how things are changing over time. You're catching velocity. You're seeing bad trend lines. And you're also just flagging critical issues, as well as finding out what you can optimize and what can be better in your life. And what's crazy to me is the current standard, like the status quo. We've all done this. We've all gone into the doctor's office. They test you for like 20 things. You get a phone call in three weeks. You're good to go, John, Jordi, see you in six months, a year, two years, whatever. And you move on with your day. And that's just this episodic, once-in-a-while, very narrow perspective on your health. That's gone. But they miss. They're not
Starting point is 02:19:20 looking at cancer. And they're really not even looking at heart disease, the two leading causes of death, let alone metabolic dysfunction, hormonal issues, thyroid issues, and Function looks at all that. I'll give you a crazy stat. A new study just came out. Forty-five percent of people that were hospitalized for their first heart attack did not have what is considered high-risk cholesterol. That should be terrifying. Why? Because if you go to a doctor's office today, a regular old physician's office for a checkup, you get your LDL checked, right? You guys have done this, yeah? Yeah. Okay. That marker was born in the 1950s. It's older than my father. Vintage. It's a vintage marker.
Starting point is 02:19:56 Some people would say Lindy. Some people would say that's Lindy. Okay, let's just steelman it for a minute. They might say it's Lindy. So look, there is no world where any top cardiologist says, I'm just going to rely on LDL cholesterol, when every top cardiologist will tell you, let's look at ApoB, let's look at Lp(a), let's look at lipid particle size. For most people, those words are foreign to them. But they should be. I mean, it's this avant-garde stuff, right? But so there are way better ways to look at the heart.
Starting point is 02:20:36 Yeah, it's crazy. That's no way that's okay. I don't want that for my family. And now there's technology where we can actually, with an MRI as well as with a grail test, that we test for many, many, many thousands of people. We can actually get way ahead of these things. So I can talk about it. Yeah, yeah.
Starting point is 02:20:52 So I'm sold on the product. I think it's hard. I think it's hard not to be. It's the best kind of like value offering, I think, in health, like period. And I was sold, obviously, when you were raising. pre-seed back in the day or have I been two two and a half years ago or something like that feels feels like forever ago can you give us like a I'd love to get your view on an update of like the the market structure a lot of companies have seen you guys weren't function wasn't
Starting point is 02:21:24 the first first lab you know testing and and health platform like this to exist but your guys is execution and the growth I think you're one of the at least growing faster than a lot of the fastest growing AI companies that we're seeing out of the last year. Give us an update on like the shape of the market, how you see the market evolving. Because like I was saying, a lot of people are trying to like ride your coat tails. But I'm curious for an update there. You know, the market is realizing that the word consumer health.
Starting point is 02:22:05 has been this, like, dirty word for 20 years or something, and it's not. What it is is premised on the most primary thing that we experience as human beings: our biology. It's our life experience. And what's the LTV of your health, right? You'd be willing to pay anything for health. It's the most valuable thing in the world for you. And so we're finally in a place where we can actually see technology and products broadly applied to health. And so you're looking at a TAM that conservatively is $7 trillion.
Starting point is 02:22:43 And some... Give it up for $7 trillion TAMs. John, hit the... Hit the size gong for a $7 trillion TAM. Had to. Had to. Anyways, continue.
Starting point is 02:22:56 I love it. I love it. No, so look, this is... People have been spending absurd amounts of money on their health through these massive service platforms like insurance companies, big health systems. And finally, people are saying, you know what, health happens outside of the doctor's office
Starting point is 02:23:13 and I'm taking it into my own hands. And what we're doing is we bring scientific and medical rigor directly into a platform that people themselves can sign up for, that they themselves can manage, and so they can make decisions for themselves. And that gets them way ahead of diseases. You know, as I've been saying before, this is not a trivial space. I think it is the most anticipated service for AI. The best application of AI in the world is to our health, because that is the major experience. And so, of course, we're surprised that the category and all the
Starting point is 02:23:52 competitors aren't bigger, that more people aren't jumping into this. Like, we know that people are going to try to ride these coattails, but we just... our head is down and our focus is on how we can deliver as much value per dollar for each one of our members. We have hundreds of thousands of members, soon millions of members. We have been growing really fast because we're actually delivering something to somebody that has real, real substantial value, at a time when a lot of technology can do a lot. It's like, what are we really paying for? And it's like, where can people get started? You mentioned it's a dollar a day.
Starting point is 02:24:30 Correct. Take me through, like, the customer flow. Is it just a website, and then I go to the lab? Explain how people can get going. Okay, so it used to be $999 when we started. It was manual. It was paid. Per day. $1,000 per year.
Starting point is 02:24:47 Sign me out. $1,000. He said per year. Don't worry. John's just messing with you. So it started at $1,000 per year. Then we worked really hard to bring up the efficiency in the tech, and got it down to $499.
Starting point is 02:25:00 And a couple weeks ago, we announced it's now $365 a year, which is actually a dollar per day. Because health is an everyday thing. And it's an understandable price. And when has health care actually been deflationary? Yeah. And then, so, go to Quest Labs probably twice a year? You go to Functionhealth.com.
Starting point is 02:25:18 Yep. Functionhealth.com. Functionhealth.com. You just sign right up. Yeah. Right there on the scheduler, you sign up for your lab appointment. Yep. You show up at the lab.
Starting point is 02:25:27 your blood drawn, urine collected, you walk out 10, 15 minutes later. In 24 hours, results start pouring in. Yep. Now your app is live. Yep. And now all the data is coming in. It's making sense of it. And you test every six months.
Starting point is 02:25:40 Every six months. Got it. Exactly. I think people, one of the reasons people underestimated this kind of category as it was emerging is so many people got burned on like DNA testing, but with DNA, DNA, like the 23 and me, you test it once. And then there's like zero incentive to retest, right? You did 23 and me, you have the data.
Starting point is 02:26:00 Well, it depends. Are you working on your DNA or not? Have you been modifying your DNA? If you modify your DNA regularly, you should probably be testing your DNA regularly. You never know. You never know. I might have rewritten my entire DNA, all of them from start to finish. Every base pair is different now.
Starting point is 02:26:16 Sign me up again. I'm ready to go. We have to study you if that's the case. We have to bring you in. John needs to be studied, honestly. Genuinely, it's ridiculous. What does 25,000 Diet Cokes do to the human body? We're going to find out. What does 500 Diet Cokes a year... a dollar a day on Function, and at least four Diet Cokes a day for John. We're actually running a split test. We have the exact same lifestyle. We show up at the gym every morning, we work out, we prep the show, we do the show, we hang out with our family. We're just going to do that forever. But John drinks Diet Coke, and I drink
Starting point is 02:26:57 yerba mate. Yerba mate, podcast in a can, from Andrew Huberman, of course. And we're going to find out, yeah, yeah, we're going to find out. Well, thank you so much for taking the time to come chat with us. We have a small bit of breaking news I want to get to before our next guest. So, we will be seeing you soon.
Starting point is 02:27:13 Oh, one last thing. Give us the numbers on the last fundraising round. I want to ring the gong for real. Yeah, let's do it. Series B: $298 million raised, $2.5 billion valuation. But look, the thing to think about here is that's basically a dollar for every American adult. There we go.
Starting point is 02:27:31 And so what that is is that's a vote on your health. That's not just the... I love it. I love it. Well, thank you so much for taking the time to stop by. We will talk to you soon. We'll talk to you soon. We'll talk to you soon.
Starting point is 02:27:43 For the next one, for the C coming in person, we've got a seat here for you. Be honored to have you in person. Be great. Let's go, brother. We'll talk to you soon. Great to see you. Goodbye. Let me tell you about Vanta.
Starting point is 02:27:55 Automate compliance and security. Vanta is the leading AI trust management platform. Also, if you're running a NeoCloud, you've got to get on Vanta, because that's one of the criteria for ClusterMax. I'm not kidding. Not making this up. SOC 2 compliance is a big factor in actually making it up the tier rankings for ClusterMax, because, of course, if you're training on customer data, you need SOC 2 compliance. You need the whole process. Anyway, the breaking news that I wanted to get to really quickly is Josh Kushner is partnering with OpenAI.
Starting point is 02:28:29 OpenAI, he says... we are excited to announce a strategic partnership between OpenAI and Thrive Holdings. Through our partnership, OpenAI will become an equity holder in Thrive Holdings, and collectively we will set out to deliver frontier technology to our customers. For decades, technology has transformed the world's largest industries from the outside in. We believe the AI paradigm will be different in that some of the most profound transformations will now occur from the inside out. We view the businesses that we own and operate as the right reward system to build, test, and improve industry-specific products and models. So the race is on.
Starting point is 02:29:08 Is it inside out or outside-in transformation? What's going to happen? These are the new fast takeoff short timeline, long timeline. Are you an inside-out guy or an outside-in guy? This is going to be the defining debate over the next couple days. So get ready to lock in. We'll be covering it here. We'll probably have some people on who are digging into this, investing in this,
Starting point is 02:29:29 have long takes, short takes, who knows. But I want to get to the bottom of what this outside in versus inside out transformation will look like. We've been digging in a little bit, talking to some folks who are building companies, buying companies. Taylor says, deal guy, you go. This is the deal guy, you go. It's happening. It's happening. Well, before we bring in our next guest, let me tell you about Figma, think bigger, build faster.
Starting point is 02:29:54 Figma helps design and development teams build great products together. We have Cristobal Valenzuela from Runway in the Restream waiting room. Let's bring him in. How are you doing? Good to see you again. Thank you so much for taking the time to come talk to us on such a big day. Kick us off a reintroduction on where the company is today. And then the news.
Starting point is 02:30:15 I'd love to know about the news. Yeah, thank you for having me again. It's been a while. Yeah. Yeah, so big, big news. We just released our latest frontier model, Runway Gen-4.5. It's a model we've been working on for quite some time. It's the best video model right now in the world, which is a pretty remarkable feat. Yes. So I think it's pretty good. It's pretty fun to play with. To be clear, that's my audio. Yeah. But perfect timing. But let's play some of the video.
Starting point is 02:30:51 I want to see the demo videos that you put out, the examples, and I want to ask you a bunch of questions about it because it's an extraordinary claim. Google is a serious company. They have a very serious asset in YouTube, and I'm fascinated by, so first, give me the news. Video Arena Leaderboard, that's the ranking that you're using. How is that scored?
Starting point is 02:31:16 How does that actually work? So it's kind of like a way of crowdsourcing performance. You basically ask people on the internet to vote between two videos, and it's anonymous. So you vote left or right. And then as you keep on voting, you accumulate more votes. Once you vote, you can see who you voted for, but beforehand you don't know. And so over the last couple of months, we've been working on, like, this entirely new way of, I would say, training both video models and image models, in such a way that, hopefully, we thought, it would outcompete others in the arena. And we got results a couple days ago. And yes, we managed to basically
Starting point is 02:31:52 outcompete all other video models, including both Google and OpenAI, which is a very remarkable feat. If you think about the scale of resources... like, I think Ilya was saying this is the era of research again. And I agree. But it's also the year of efficiency: really good, really focused teams with highly efficient, like, you know, mandates can get really far. And so, yeah. Yeah.
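An aside for readers, not from the conversation: Valenzuela only describes the leaderboard as anonymous head-to-head votes. Arena-style leaderboards are commonly scored with an Elo-style rating that is updated after each pairwise win, and the sketch below shows one minimal way that could work, in Python, with hypothetical model names. It is an illustration of the general technique, not the Video Arena's actual methodology.

# Minimal sketch of Elo-style scoring for a pairwise-vote arena leaderboard.
# Illustrative only: the model names and K-factor are made up, and the real
# leaderboard's methodology is not described in the conversation.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Win probability for A over B implied by the current Elo ratings."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def record_vote(ratings: dict, winner: str, loser: str, k: float = 16.0) -> None:
    """Update ratings in place after one anonymous head-to-head vote."""
    e_win = expected_score(ratings[winner], ratings[loser])
    delta = k * (1.0 - e_win)  # bigger reward for beating a higher-rated model
    ratings[winner] += delta
    ratings[loser] -= delta

# Every model starts at the same rating; votes stream in one at a time.
ratings = {"model_a": 1000.0, "model_b": 1000.0, "model_c": 1000.0}
votes = [("model_a", "model_b"), ("model_a", "model_c"), ("model_b", "model_c")]
for winner, loser in votes:
    record_vote(ratings, winner, loser)

# Leaderboard: highest rating first.
print(sorted(ratings.items(), key=lambda item: -item[1]))

With enough votes, rating gaps converge toward the observed win rates, which is why a crowd of anonymous voters can produce a stable ranking without anyone scoring individual videos on an absolute scale.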
Starting point is 02:32:43 Tell me about what you optimized for here, because Sora seems... it's an incredible model, and it was, for like a minute, like, whoa, really mind-blowing. Then I feel like I kind of developed an immune system for it, and I can clock a Sora video. And it feels like Sora was very much trained on TikTok, almost, or vertical social media video. And so what have been the breakout Sora videos? It's been a lot of dash cam footage and doorbell Nest camera footage. And they've also degraded the model dramatically; they have degraded the model a lot. Whereas Veo 3, it felt like it had a little bit of the Hollywood polish, but it was more like Michael Bay when I looked at it: it looked very saturated, it was cool, it looked good. But what you went for feels a little bit more, I want to say, cinematic, even though that's kind of an overused term. But talk to me about what your goal was, or even if you have a goal when you go into a training run like this. It does. So I think there's an explicit goal and an implicit goal. I think, in a way, all models, specifically video models that are more visually, like, clear or perceptible, have
Starting point is 02:33:32 some sort of personality behind it. And I think that personality reflects a little bit both the point of view of the company and, like, the way you want to train the models in the first place. To your point, like, if you want to make, like, consumer slop and, like, quick, shareable stuff, you're going to train the models, just from the ground up, very differently than for the stuff we're trying to do, which is much more professional, like, high-quality, very controllable sorts of tools. And so a lot of it is, you're basically outlining, I would say, the personality of the models, and it somehow also reflects the personality of the companies. Like, if you're trying to sell ads, you're going to do a very different model than if you're trying
Starting point is 02:34:10 to make creative tools. And so I don't think there's one single recipe or one single ingredient. It's more of just, like, taste. Like, I think that word gets thrown around a lot in research at this stage, just taste. And I think taste is both the research, like, what do you want to work on?
Starting point is 02:34:24 Like having vision, like having, okay, I want to pick this specific problems I want to work on and this is how we're going to solve them and this is what we've learned over time. That's one form of taste. And the other one more aesthetically is like what things look good. Like and that's like,
Starting point is 02:34:38 like beavers on a construction site. This is actually very clear taste. That's pure taste. Look at this donkey. Hilarious. Right. Look at the motion of the donkey moving, like the camera at the angles. Like the amount of data creation or team of artists and like filmmakers and like people have spent.
Starting point is 02:34:54 It's not, it's not like trivial to be honest. I think that's also the taste component like shots like this. It's like. Some of this is horrifying. I mean, I guess that's the point. It's really had to summon the demon on this one. You got to go to a dark place literally. Have you been inspired by Anthropic at all?
Starting point is 02:35:13 It feels like somebody could put you in the Anthropic-for-video bucket, in that they're just, like, extremely focused on code and ignoring everything else. And meanwhile, your competitors are putting a lot of resources towards this, but they're not betting their entire business on it in the way that you are. Yeah,
Starting point is 02:35:31 I think it's like a mercenaries-versus-visionaries type of, I would say, bet. You want to have people who feel very committed to the vision long-term, and the way you do that is you're very focused on the culture, and that culture eventually shines through in the product. I think Anthropic has that also. You can tell who works there and how they think. And it's all very cohesive in a way.
Starting point is 02:35:55 I think we spend somehow a similar amount of time doing that, in a way. And I hope you can tell via the models themselves that that personality comes across nicely as well. Yeah, and I agree. That, at the end, will perhaps be the defining part of the companies that stay in the long run. I think if you just throw money at the problem, you're not going to get too far, to be honest. Yeah. What went into the actual training run? Are you at a scale now where it's a meaningful capital investment to
Starting point is 02:36:29 build a model like this? We saw the scaling paradigm change from, you know, maybe it's a hundred million dollars to do a big frontier language model run, then we were talking about billion-dollar training runs, bigger and bigger training runs. The results are remarkable, but has it been a remarkable amount of investment to get here? Or are there more efficient ways to actually get to a frontier result without spending frontier money? Yeah. I mean, it's definitely not cheap. This is not like traditional SaaS. So you definitely have to spend more money, more resources. But I think we've proven that we're not spending tens of billions of dollars to get there and to overcome the challenges. And look, to be
Starting point is 02:37:14 honest, the model is not perfect. There's a lot of things we're going to improve and fix, and we're going to do larger training runs and do more over time. But, I would say, the most expensive thing is the natural intuition the team builds around what kind of works and what doesn't work. It kind of goes back to the idea of research taste. You can't throw money at it. You just have to spend enough time. We've been working on Runway for almost a decade. And so there's a lot you learn over that time about what works and what doesn't, and that informs a lot of the efficiencies in training. And yes, you'll need more money to train larger and bigger
Starting point is 02:37:48 models. Like, if this is the worst the models will ever be, imagine them in two years. You're going to get there by training larger models, for sure, but also by knowing how to train them in the first place. And that's the part that I think is hard to quantify per se. And what I'm really excited about is not only what the models can do, but also the efficiencies, not even on training, but on inference. This is a price point that's very comparable to our previous models, so it's actually very usable. And hopefully you'll be using it in real time very soon. And so that level of, I would say, efficiency at the inference level, we haven't yet seen it, and I think we're going to get there very soon. Yeah. Fascinating.
Starting point is 02:38:28 I mean, some of those videos are very destructive. They're pretty remarkable. Your unlimited plan includes 2,250 credits monthly. How much video can one actually generate with that? Well, technically unlimited. Okay, I was confused because it said there's still, like, a credit system. No, so we have a queue. We have compute and there's a queue, and you get into the queue and you generate as the queue becomes available. If you just want to generate fast, you pay for credits.
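To make the queue-and-credits mechanic being described concrete, here is a minimal sketch; the class, names, and values are illustrative assumptions, not Runway's actual implementation.

```python
from collections import deque

# Hypothetical sketch: an "unlimited" plan where generations wait in a shared
# queue, and spending credits moves a job into a priority lane.
class GenerationScheduler:
    def __init__(self):
        self.priority = deque()  # credit-paid jobs, served first
        self.standard = deque()  # unlimited-plan jobs, served as compute frees up

    def submit(self, job_id: str, credits_spent: int = 0) -> None:
        # Paying credits buys faster turnaround; otherwise the job still runs,
        # it just waits for a free slot.
        (self.priority if credits_spent > 0 else self.standard).append(job_id)

    def next_job(self) -> str | None:
        if self.priority:
            return self.priority.popleft()
        if self.standard:
            return self.standard.popleft()
        return None

scheduler = GenerationScheduler()
scheduler.submit("gen-001")                   # unlimited plan: queued
scheduler.submit("gen-002", credits_spent=5)  # credits: skips ahead
print(scheduler.next_job())  # -> "gen-002"
```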
Starting point is 02:39:00 So it depends on how anxious you are for new generations; it's a measurement of how fast you want it. But eventually you can just literally generate unlimited. By the way, I think no one else has a plan like that. It's a pretty good deal. What are the lengths of generations that are most commonly being done today? And is that a metric that you track? Like, are people consistently, is it like a 20-second scene that's the most common today? And are you trying to get to two minutes or two hours?
Starting point is 02:39:34 Like, how do you think about duration? So, well, technically you can do arbitrary durations if you want. But the average scene duration in a short film or a movie is actually two to three seconds long at the most, and that's actually been trending down. The scene itself is like two to three seconds long on average. And so when people say, I want to make a 45-minute-long thing,
Starting point is 02:39:58 you don't want 45 minutes of one fixed camera. You want cuts, and you want the character in, like, a close shot, a medium shot, a long shot. And, you know, that's a different problem from creating one continuous long sequence. So the one continuous long sequence, for me, is less interesting than the multi-shot approach where you can create much more compelling narrative work. And I think we're not that far away from that being our reality. You can generate consistent narrative work, really good visuals, really good stories, with the level of quality of the videos we're seeing right now here, but all tied together in a way that just makes it feel cohesive, you know? Yeah. And so that's a different problem, I would say, altogether.
Starting point is 02:40:44 Yeah. There was some debate on why the Cursor for video doesn't exist yet. Do you have any thoughts there? What's the Cursor for video? Basically, a nonlinear editor, like a Premiere Pro, a DaVinci Resolve, an Adobe After Effects for video, the Cursor for video, like replacing the actual bones of the software that the editor, the video creator, uses.
Starting point is 02:41:11 There's been a couple apps that have spun up. Runway, originally, the reason I was using it back in the day was for green screen, for Karmicade, basically. It was fantastic for that. And it feels like that, building a canvas, building an NLE, feels like one potential pathway to victory. It's also very difficult because you can't just fork VS Code.
Starting point is 02:41:34 On the flip side, there are no leading open-source NLEs. If you wanted to play nice with Adobe, you could be a vendor, a la the way Nano Banana is now vended into Photoshop. And that could be a solution. And, you know, there's a variety of ways to win. I'm interested in hearing your approach. Yeah, definitely an interesting question. By the way, shout out to you for being an OG on Runway since 2019.
Starting point is 02:42:02 Yeah, something like that, crazy. I love it. Yeah, so my two thoughts are: first, the art of NLE and editing and film, it's an art. And it's just a lot of pacing and details that are very nuanced and specific. It's about granular details. And it's hard for, I would say, a model or a system to automate that level of decisions. That's on the purely NLE side, right? But I would say, at least for us, more interesting is the question of, do we need an NLE in the first place, right?
Starting point is 02:42:36 Like, do we actually need these primitives? If you think about nonlinear editing, it's this idea that you're stacking frames of video against each other and cutting them. Before it was with physical razors, and now we have digital razors. You're cutting things together. My bet is that you probably won't need NLEs. That whole paradigm will feel like a fax machine in a few more years. And so I feel
Starting point is 02:42:59 That's somewhat what's happening with, like, the Devins and the Claude Codes and the Codexes of video. I just, I do wonder if there's going to be an intermediate step, or maybe it'll just be absorbed by the current
Starting point is 02:43:13 NLEs. I mean, I'm sure that's what your customers are using, right? Yeah, I don't know. We'll see it play out. But I'm not too fond of, you know, pushing better versions of NLEs out there.
Starting point is 02:43:25 I think there's something around how you make video and how you interact with this AI system that just naturally lends itself to different primitives. And if you think also about the fact that very soon you'll start to see this happening in real time,
Starting point is 02:43:37 like when you make real-time narrative work or videos or experiences, or whatever you're going to call them, you don't need to edit things async, because you're generating on the fly and you have people interacting with them. And so it changes, that's what I'm saying, it changes the nature of those things in the first place.
Starting point is 02:43:54 And there's a transitional period where we're seeing NLEs being augmented with AI, but I think that's transitory. I don't think it's going to play out in the long run. Yeah. Yeah. No, I think. Has Hollywood capitulated yet? What's going on there? It's funny, I've been hearing more and more about Suno from not just guests and friends of the show, but just random people out in the world. It sounds like every single musical artist now is using it to some degree, even if they're not willing to talk about it. What is the case in traditional Hollywood and entertainment? You can't exactly hide that
Starting point is 02:44:33 you're using AI video. It's basically out in the open immediately. And there's just so much negative energy that gets focused on it, specifically from people that are within the industry. You know, I think the negative energy is like the water problem with AI, you know? It's kind of this unrealistic and very noisy, non-representative sample of what's actually happening within the industry. If you go to LA, when you speak with the agencies, with the talent, with the filmmakers, with the studios, with the production teams, they got on board with AI, like, years ago, months ago. They're fans, they're using it, they understand it. Of course, there are pockets of people who are more advanced than others.
Starting point is 02:45:17 But I would say that the narrative publicly hasn't yet caught up with that, mostly because some people might not want to speak about it. It's much more interesting to say all the negative things than to say the positive things. I would say Hollywood has already overcome that, and they're pretty much on board. I would say gaming companies are now where Hollywood companies were a year and a half or two years ago.
Starting point is 02:45:42 So that's, I would say, an industry that's now catching up to what AI can help them with and how they can use it. So, yeah, I would say some of those narratives are a bit fake, to be honest. Yeah, well, thank you so much for taking the time to come on the show on a busy day. We appreciate it, and I can't wait to play around with the new model. We have a benchmark here, Bezel Bench, where we try and recreate a very complicated shot that we shot practically with a bunch of different watches, with our intern, or gap semester, Tyler Cosgrove. The shot's very long, it pulls out, it twists around, it's a pretty complex shot, and that's our current benchmark. We'll be testing, and we'll let everyone know how it goes. But thank you so much for taking the time to come chat with us. We'll talk to you soon. Thank you. Thank you. Bye. Let me tell you about Julius.ai, the AI data analyst that works for you. Join millions who use Julius to connect their data, ask questions, and get insights in
Starting point is 02:46:42 seconds. We have Vincent from Prime Intellect in the Restream waiting room. How are you doing? Great to see you. It's been too long. Thanks for having me. Congratulations. Master of finding the one day that we're not live to launch your news. Tell us what happened on Wednesday. We're grateful that you did. The one day that we were off of streaming. Yes. I'm so excited to give you a rundown. So basically, for the broader context, with Prime Intellect our broader goal is really creating open frontier models and the infrastructure for everyone to create them.
Starting point is 02:47:21 And last week, we released Intellect 3, which was basically a scale-up of RL and post-training to create a SOTA model, especially for more agentic tasks. So basically what we did is we took GLM and did the whole SFT stage and RL stage to create kind of like a state-of-the-art
Starting point is 02:47:44 100-billion-parameter MoE model. And really, that whole infrastructure is quite a challenge, from the RL environments to the broader code sandboxes and the whole stack to do post-training. That's basically what we built over the last half year. I think Will Brown came on the show
Starting point is 02:48:02 to unpack some of it on the verifiers and the environment side. So basically, that's what we released last week, and we really proved that we got performance at a hundred-billion scale that, thus far in open source, only 300-to-600-billion-parameter models like
Starting point is 02:48:18 DeepSeek R1, for example, achieved before. So basically getting to better performance, actually at a much smaller scale. And I think in general it showcases that open models are starting to catch up. It's quite interesting, in general, seeing the trend, not just with our model, but also more broadly with other releases like DeepSeek today and over the weekend, that they are also on par with the closed models now. And really, it's almost like a preview release: we basically released our early checkpoint and we're actually scaling it much further, also on more agentic capabilities, really making it strong across a range of tasks.
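As a rough outline of the two-stage recipe described here, a supervised fine-tuning pass followed by RL against verifiable environments, a heavily simplified sketch might look like the following. Every function, dataset, and checkpoint name is a placeholder, not the actual Intellect 3 pipeline.

```python
# Illustrative SFT-then-RL post-training outline. All names and numbers are
# hypothetical stand-ins so the sketch runs end to end.

def load_base_model(name: str):
    return {"name": name, "weights": 0.0}  # stand-in for a real checkpoint

def sft_step(model, example):
    # Supervised fine-tuning: nudge the model toward curated demonstrations.
    model["weights"] += 0.01
    return model

def rollout(model, environment):
    # The model acts in a verifiable environment (code sandbox, agentic task).
    return {"reward": environment["verify"](model)}

def rl_step(model, episode):
    # Reinforce behavior in proportion to the verified reward.
    model["weights"] += 0.1 * episode["reward"]
    return model

model = load_base_model("open-base-100b-moe")  # hypothetical base checkpoint

sft_data = [{"prompt": "...", "completion": "..."}]   # stage 1: SFT
for example in sft_data:
    model = sft_step(model, example)

environments = [{"verify": lambda m: 1.0}]            # stage 2: RL
for env in environments:
    model = rl_step(model, rollout(model, env))
```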
Starting point is 02:49:04 And really, I think the foundation of this, which is quite interesting, is that we created this environments hub where anyone in the world can create one of these environments, which we ultimately then included in a training run. So basically, different people in the open-source community actually contributed to the RL environments that we trained on for this model. So, yeah, give me a concrete example of this shift of businesses that need to, you know, buy a model that has been trained in a specific RL environment. You know, we've heard the example of someone creating a clone of DoorDash and figuring out how to do DoorDash orders agentically. But what else are you seeing? What are some other good examples of when a business would pull this off the shelf, from all the different opportunities, from all the different APIs that are out there, and create something, I guess, semi-custom for a specific business use case? Like, what are you seeing out there? Yeah.
Starting point is 02:50:02 So I think what's interesting is there are, I think, two buckets. Basically, there's a bunch of these people creating RL environments for the labs, like the DoorDash clones, et cetera, basically to push a capability. So I think we're in this paradigm right now, obviously, where ultimately scaling RL is the main way these models improve, right? We've seen it with Opus or with GPT-5 and Gemini; there was mainly, I think, a scale-up in RL. But basically, what we are seeing are two things.
Starting point is 02:50:30 On the one side, there's a lot of demand for these RL environments. But on the other side, RL is very sample efficient. So you can take an open model and really create an RL environment for the specific use case you care about and scale capabilities for that. So I think a good example of this was, for example, Cursor with Composer. That's widely believed, or known, to be a scale-up of an open-source model, and the RL environment was Cursor. They basically just gave it the tools and the things within the harness and application of Cursor itself.
Starting point is 02:51:03 But they basically trained that model on getting really good at using Cursor. And I think we'll see the same play out across all the applications, where basically the broader theory is that every application or every company will be an AI company, or AI native, and will have an opportunity to really post-train and use RL
Starting point is 02:51:25 to make the models work specifically on their application. So even if you take an example like, say, Figma, right? If they want to make their platform agentic, really they need to create an RL environment around Figma and post-train on that environment to be able to serve that within Figma. Kind of out of the box, the closed models won't be perfect at navigating and making those applications agentic.
Starting point is 02:51:51 So I think that's the broader theory. I think it's also that the capital requirements are much, much lower than I think the big labs want you to believe. In the sense that you can, for hundreds of thousands of dollars, post-train a model to be much better on your application, and then also you're able to serve the model cheaper. One weird trick: post-train a model for 100K and create a better one.
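A hedged sketch of what "creating an RL environment around your application" can mean in practice: a reset/step interface over the app's tool surface plus a programmatic, verifiable reward. The toy "design tool" task and the method names are illustrative only, not any vendor's actual SDK.

```python
# Generic sketch of wrapping an application as an RL environment: the agent
# takes tool actions, the app returns observations, and a programmatic check
# scores the episode. Everything here is an illustrative assumption.
import random

class AppEnvironment:
    """Toy 'design tool' environment: the task is to set a frame's width to 800."""

    def reset(self) -> dict:
        self.state = {"frame_width": random.choice([320, 640, 1024])}
        return {"instruction": "Resize the frame to 800px", "state": dict(self.state)}

    def step(self, action: dict) -> tuple[dict, float, bool]:
        # Actions mirror the app's real tool surface (a stand-in here).
        if action.get("tool") == "set_width":
            self.state["frame_width"] = action.get("value", self.state["frame_width"])
        reward = self.verify()
        done = reward == 1.0
        return dict(self.state), reward, done

    def verify(self) -> float:
        # Verifiable reward: did the agent actually accomplish the task?
        return 1.0 if self.state["frame_width"] == 800 else 0.0

env = AppEnvironment()
obs = env.reset()
# A policy (the model being post-trained) would pick this action; hard-coded here.
state, reward, done = env.step({"tool": "set_width", "value": 800})
print(reward, done)  # 1.0 True
```

The design point is simply that the reward comes from the application itself, so the model being post-trained gets graded on the tasks the business actually cares about.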
Starting point is 02:52:15 So, I mean, what you're basically saying is that if I'm Figma, as an example, and I could use a frontier model that's really expensive and beefy, it knows some stuff about Figma, but it also knows about the Roman Empire, I can do RL on just my particular application and have a smaller model that's fine-tuned from an open-source model and get better performance than with the big, beefy, you know, do-everything omni model. Is that right? Exactly.
Starting point is 02:52:46 And I think really you get better performance, but also at a lower price point potentially, because you can really specialize the model to be extremely good for your use case. So I think you could see this with, like, Cognition post-training their own model, with Cursor post-training their own model, Composer. And Composer also, it's much cheaper to serve, it's much faster. Same for the model Cognition was building. So I think what we're
Starting point is 02:53:08 seeing, and we've started to work with dozens of customers on helping them basically do post-training and RL, is that we're starting to see a huge pull in terms of enterprises realizing that if they want to get a specific capability, RL is a way to get it, and
Starting point is 02:53:25 it ultimately enables them, quite capital-efficiently, to train those models and serve those models. And then really get to a point where, even in deployment, all the interactions from the user help improve the model. So with the Cursor example, every Cursor Tab interaction, every yes and no that a user gives to the model, is updating the model every two hours.
Starting point is 02:53:46 It's what, like, Dwarkesh talks a lot about. Two hours. Like online RL. Yeah, they're basically continuously training the model in two-hour intervals and pushing updates every two hours to Cursor Tab. So basically every user using Cursor for the last two hours is being post-trained on, so to speak, with kind of an online RL loop. I think that's something we'll see more and more: basically applications
Starting point is 02:54:09 will do their own RL, their own post-training. And it seems like really how we unhobble, basically, towards AI, where the question is, why haven't we automated, say, specific and valuable knowledge work yet? And I think the answer, which Sholto was also speaking about, for example, with the example of automating taxes and accounting, is that no one has really created RL environments, post-trained on them, and then served the model in the application where the end user is. And then ultimately, the end user's interaction with the agent can improve the model further. Right. So I think that's really the paradigm that we see play out, which I think is
Starting point is 02:54:46 really a paradigm of thousands of models, or millions of models, that basically continuously improve. And where actually the applications win, to some extent, through distribution. Ultimately, they own the end-customer interaction, right? Even the Cursors and Cognitions have an advantage there over folks who are basically just model providers and who don't interact with millions of developers. And I think we'll see the same play out across all the different applications.
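The online RL loop being described, collect user accept/reject signals for a window, update the policy, push a new version on a fixed cadence, could be sketched like this. The two-hour interval comes from the conversation; the function names and event shapes are assumptions for illustration.

```python
# Rough sketch of an online RL deployment loop. The feedback source, update
# rule, and serving step are hypothetical stand-ins.
import time

UPDATE_INTERVAL_SECONDS = 2 * 60 * 60  # the two-hour cadence mentioned above

def collect_feedback(since: float) -> list[dict]:
    # Stand-in for reading accept/reject events (e.g. tab-completion yes/no)
    # logged by the application since the last update.
    return [{"completion_id": "c1", "accepted": True}]

def update_policy(model_version: str, feedback: list[dict]) -> str:
    # Stand-in for an RL update that reinforces accepted completions.
    accepted = sum(1 for f in feedback if f["accepted"])
    print(f"updating {model_version} on {len(feedback)} events ({accepted} accepted)")
    return model_version + "+1"

def serve(model_version: str) -> None:
    print(f"now serving {model_version}")

model_version = "policy-v0"
last_update = time.time()
for _ in range(2):  # in production this would run indefinitely
    feedback = collect_feedback(since=last_update)
    model_version = update_policy(model_version, feedback)
    serve(model_version)
    last_update = time.time()
    # time.sleep(UPDATE_INTERVAL_SECONDS)  # omitted so the sketch runs instantly
```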
Starting point is 02:55:13 And it's something, from our side, I've talked about also in the context of Copilot and Microsoft, right? They own distribution. They can create the Cursor for Excel or PowerPoint or other things, right? And then post-train on all those interactions. So I think we'll see this play out
Starting point is 02:55:28 I think across all the different verticals. And I think it's a broader trend of every company needing to become AI native, right? And also to keep owning the distribution; they don't want to give all of it up to the big AI labs. Yep. That makes sense. We got a question from our intern, Tyler, if we can shoot over there. Yeah, I guess I saw you guys talk about this a little bit online.
Starting point is 02:55:52 But is there any, like, point of you guys training your own base model? Yeah, so basically, I think one interesting release in this context: today we actually released, like, we supported Arcee in their base model release, which is kind of catching up to the Chinese base models. So basically we supported them in training a small MoE base model, which achieves pretty strong results. We released that, I think, like an hour ago with them. And we're actually now ramping up with them towards a much bigger base model, so fully pre-trained from scratch. We actually just had, like, 2,000 B300s go live, I think, yesterday, to ramp up
Starting point is 02:56:38 towards a much bigger pre-training. And I think the broader pattern is that since, kind of, Llama had some reorgs and changes, and Mistral became sort of a forward-deployed European enterprise play or something, there's really no one left outside of China right now going end to end in the model stack. I think others like Reflection are trying to also pick that up. But there are very few players, I think, outside of China. So I think that's our broader goal, really: serving the world globally, but also the West and the US,
Starting point is 02:57:12 with an end-to-end pipeline, right? From data to pre-training to mid-training to post-training, the full stack, and making that accessible to enterprises and people who want to train models. So I think there's a huge pull there, where a lot of enterprises, or even sovereign nation-states, et cetera, can't train on Chinese open models, but they also can't rely on closed models. So I think there's a huge gap in the market right now that we're trying to fill of really serving that whole segment. Do you have anything else, Jordi? No, this is great. I do want to ask one last question about, you know, what the market structure will look like in maybe a year or two around
Starting point is 02:57:55 like implementing these RL environments for companies because when I see, you know, you say every company is an AI company, I believe that's somewhat true and I believe every tech company, maybe every founder led tech company under 10 years
Starting point is 02:58:11 might be able to say, okay, yes, we're going to go and train, fine-tune a model and turn our application into an RL environment. But if I'm, you know, the Coca-Cola company, you know, I might not be at that level of like going and building RL environments for every business process.
Starting point is 02:58:29 I'm probably more of a buyer of this AI as SaaS, almost. So how do you see that kind of breaking out? How do you see a truly legacy, you know, non-tech company adopting a fine-tuned LLM or an RL'd model? Totally. No, I think there are early adopters and later adopters. I think Coca-Cola might be more of a later adopter. They might not need to adopt it early on, but I think they'll still be adopting it, just in less obvious places, right? Ultimately, I think they're initially just using
Starting point is 02:59:04 the AI tools that use us, for example. In the sense where, say, customer service, right? It's a perfect example of where you get a lot of gains out of post-training, and then basically the AI-native customer service platforms might use us to post-train using Coca-Cola data. Sure. To serve them a better model. So what we'll see play out, I think, is really just making a lot of that so accessible, to your point, that it feels more like using SaaS. One element of it is that we're also launching our whole RFT platform, basically, and offering to make it extremely easy and plug-and-play. But then there's also a forward-deployed element, right, where you can outsource a lot of that stuff to our team. And I think the other element is really that we're working on making our own thing kind of agentic,
Starting point is 02:59:58 autonomous, so that you could basically just use an autonomous AI researcher to do all that for you, right? That you basically just plug it into your system and the AI even creates AI for you. Yeah. I think that's the next paradigm: really making training models in general, fine-tuning models, post-training models, as accessible as vibe coding is today, right? In the sense that, with vibe coding, literally every human on Earth is now able to code some stuff up. And I think we'll see the same play out with AI in the next 12 months, and that's one of the big things that we're playing into. We're kind of pushing towards autonomous AI research,
Starting point is 03:00:30 where it can do most of it for you. Well, thank you so much for taking the time to come and talk to us on the show. Congratulations on all the progress. And we will talk to you soon. Great to see you, Vincent. Goodbye. Have a good one. Let me tell you about Privy.
Starting point is 03:00:44 Privy makes it easy to build on crypto rails: securely spin up white-label wallets, sign transactions, and integrate on-chain infrastructure, all through one simple API. And I'm also going to tell you about adquick.com. Out-of-home advertising made easy and measurable: plan, buy, and measure out-of-home with precision. Our last guest of the show is Ben Hylak. Did he do the Jaguar rebrand? That's him. Ben, welcome to the show.
Starting point is 03:01:06 And we'll follow him forever. Grab a seat. Hang out. Good to see you. Oh, you brought hats. Fantastic. Thank you. Please, grab a seat.
Starting point is 03:01:16 Introduce yourself. Introduce the company. What's new? Yes. So my name's Ben Hylak. Let's take a second for the flow. Fantastic. This is kind of like a vintage
Starting point is 03:01:27 Silicon Valley flow. Somewhat of a lost art. I appreciate it. You guys have great hair as well. I felt a lot of pressure. You'll notice I'm not wearing a hat today, and it's because, I did notice actually, I kind of discovered a blow dryer
Starting point is 03:01:40 I think around nine months ago, ten months ago. So that was a big day in your life. Never been the same since. But yeah, my name's Ben Hylak, as you guys know. I'm the CTO of a company called Raindrop. So, really simply put, we monitor agents in production. We were building a product ourselves, probably around two years ago now, which was like a coding agent. And we realized that there was just this huge gap of, like, if you're using Sentry, if you're using traditional
Starting point is 03:02:08 analytics, you know, they're covering the things the users are clicking, and almost everything that's happening in your product, if you're making an agent, is just not covered. So you just have no idea. These agents are going absolutely wild. They're going crazy. They're going haywire. You know, what's been insane, I think one of the things that's been really critical to our growth in the last couple months has been realizing that as agents get better, this problem gets worse. That was not necessarily intuitive to us in the beginning. You know, you think, oh, well, agents are going to get better, maybe this problem becomes less important. But actually, as they become more capable, they can use more tools. More valuable. Exactly. So for example,
Starting point is 03:02:46 if you take a company like Replit, it's like, you know, maybe a year ago or two years ago, or when they first launched, you know, you couldn't quite get as far, right? Maybe you could just get, like, a personal website or something. And so if it messes up at that point, it kind of gets stuck, it's like, okay, maybe it's not the end of the world. But now with Replit, you're able to build just, like, real applications. Like, people are building real production applications. So now if you get to a point where it gets stuck, something goes wrong, suddenly it's like, it's a real issue. So that was not intuitive before.
Starting point is 03:03:16 So agents are a pretty overloaded term. At this point, I think of everything from, you know, when I fire off a deep research report in ChatGPT, that's an agentic workflow, to some customer service agent that's happening completely behind the scenes, and the customer might not even know that they're dealing with an agent. And then there's coding agents. There's a few that you mentioned. Are you dividing the market and trying to focus on an early landing zone first? Or do you want to do all of those?
Starting point is 03:03:45 Yes. So we focus on, essentially, and I agree, the word agent is overloaded; we were very hesitant to use it for a really long time, and then we realized it actually matters. So we focus on products that have some sort of user input and some sort of assistant output eventually. That's sort of our focus. So what we're not focused on is, for example,
Starting point is 03:04:06 like we're not going to focus on like specific like ML pipelines or things like, you know, maybe like translating text or like summarizing text even. It's like, we want to see like the user, the user is sort of like has some sort of request. The assistant is responding to that request. And we do map essentially everything that happens in between that initial user input and to what the assistant actually responds. And then what's the go-to-market for you?
Starting point is 03:04:32 I mean, it's been a little crazy, actually. We've had a lot of inbound. Some of our biggest customers have been inbound. A lot of it has been, when we first launched, I guess this was like six or seven months ago now, agents weren't as big of a deal. And so I think in the first month or two, we had a lot of customers who were like, okay, I have evals.
Starting point is 03:04:52 I think we'll need this for some point. But it didn't really make sense for them. And a lot of them came back in the last like a month or two after that. And we're like, holy shit, okay, now I get it. We need you. So it's actually been a ton of inbound. We don't really pay for advertising anything like that. You know, if we see a really crazy failure in the news, we'll reach out to that company, obviously,
Starting point is 03:05:10 and be like, hey, this is something we can help with. Sure, sure, sure. How are you thinking about the, you know, target, like the best, type of customer? Are you segmenting it by size? Do you want to go enterprise up front because they're implementing agents at scale? Or are you more likely to see immediate results of the startup that just kind of gets it and they can hop on really quickly? Like, how are you thinking about prioritizing if you are at all? Yeah, it's a really good question. I think that we really look at the entire range. And I think that we see and have always seen startups as being
Starting point is 03:05:42 a really core part of keeping our company healthy. You know, I heard a while ago that PostHog has this metric where they look at what percentage of YC companies in every batch are using them. Sure, sure, sure. And so that's why we started with startups. They're able to move faster. So, for example, when a new model comes out, to give actually a very specific example, GPT-5 introduced intermediate reasoning, right? It was kind of one of the first models to do this, where it's going to make tool calls, it's going to look at the results of those tool calls, think about it, and then make more tool calls, take that, think about it, and write, you know, more tool calls.
Starting point is 03:06:16 It sounds small or subtle, but actually it kind of means that if you architected your system, your pipelines, in the wrong way, you just couldn't use that. And it really helped. So whereas startups will just throw everything out the next day, right, and ship a whole new thing in a week, if you look at the biggest enterprises, they're not going to do that. Sure, sure.
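The pattern Ben is describing, where the model makes a tool call, reads the result, reasons, then decides on the next call, is roughly the agent harness sketched below. The stubbed model and tool are hypothetical stand-ins, not any provider's actual API.

```python
# Illustrative agent harness: the model interleaves reasoning with tool calls,
# reading each result before deciding on the next step. The "model" is a stub.

def fake_model(messages: list[dict]) -> dict:
    # Stand-in for a model call; a real harness would hit an LLM API here.
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "tool": "search_docs", "args": {"query": "timeout config"}}
    return {"type": "final", "content": "Set request_timeout in the client config."}

TOOLS = {
    "search_docs": lambda args: f"Found 3 results for {args['query']!r}",
}

def run_agent(user_input: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_steps):
        decision = fake_model(messages)  # reason over everything so far
        if decision["type"] == "final":
            return decision["content"]
        result = TOOLS[decision["tool"]](decision["args"])     # execute the tool
        messages.append({"role": "tool", "content": result})   # feed result back in
    return "step limit reached"

print(run_agent("How do I configure timeouts?"))
```

The point relevant to the conversation is that a pipeline built around a single model call per turn cannot represent this back-and-forth loop, which is why rigid architectures could not take advantage of the new behavior.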
Starting point is 03:06:41 So you can learn really fast with startups. That being said, on the flip side, I think the problem we're solving is actually most painful for enterprises, right? The most critical, high-stakes environments are where failures cost the most in every single sense. Yeah. How much of the... Categories of agents that you're excited about that are maybe under-hyped today? Oh, interesting. Coding agents are sufficiently hyped, I think.
Starting point is 03:07:08 Coding agents are, and for good reason. And for good reason, but, like, and maybe, maybe... Maybe they're deserving of more hype. Yeah, yeah, yeah. But what other category, you know, I think people have been sold on the AI, BDR. Yes. Haven't exactly, maybe companies are getting a ton of value from it, and they're getting so much value.
Starting point is 03:07:28 They don't want to come on TBPN and talk about it because they don't want their competitors to know. And then obviously, like, CX feels sufficiently hyped. But what else are you seeing? Man, there's so many different things. I think, you know, Speak, for example, language learning. As models get better, that experience just actually starts to become really, really viable. So that's an example of something where, yeah, it existed a year ago, it existed two years ago.
Starting point is 03:07:57 But as voice models get better, as the models themselves get better... You know, if you try to use ChatGPT, for example, to learn a language, you sort of can. But if you ask it to critique you, for example, it just never will. If you say something wrong, it just isn't going to stop and be like, hey, look, actually... It's still glazing. You're absolutely right.
Starting point is 03:08:16 Yeah, exactly. Dónde está la biblioteca is the most complicated Spanish sentence. It will, it will, right? You're fluent. You're fluent. It's like, yeah, you're pretty much good to go.
Starting point is 03:08:26 And even if you can get it to the point where, like, if you can really, really, like, prompt it into critiquing you, it'll just like start critiquing everything. You know, which is also not what you want as, like, you're learning a language. So, like, it turns out, and I think we see this with a lot of products
Starting point is 03:08:37 that, like, getting something right is actually a lot of details and really, really understanding that domain. So I think, we're seeing that in literally every domain, like, whether it's, like, marketing, whether it's like, even just, like, the idea of having a personal assistant, like, notably we don't have that yet, which is crazy, right? Like, we have these assistant models, but then none of us actually have an assistant. We can just chat and be like, hey, send this email. Yeah. Right? I don't.
Starting point is 03:08:59 How are you thinking about, um, just, I don't know if, like, if you're Sentry for AI agents, does Sentry actually handle this? But just types of AI failures that happen for more infrastructural reasons. So, the GPUs are on fire. Or there's just not enough GPUs in this particular cloud, and you just see a spike in demand,
Starting point is 03:09:25 and so you just can't provision more. Those types of more tactical errors, do you help with that? Sort of, would be the answer. So I think it's actually really interesting. One thing we realized about evals is that they don't catch those sorts of issues. You know, you're kind of testing just the model: what is the model responding?
Starting point is 03:09:41 But then there's all of these things that happen in between. I remember really, really early on when we launched, one of the issues that a customer caught was like their file upload was broken. So a bunch of users all started complaining about like, oh, like the file uploads taking too long. It's like, okay, well, it's not like an AI problem, but it is. And so we see that with like tool calls.
Starting point is 03:09:59 We saw one of our customers had an issue sort of like what you're saying, which is that, they have their own GPUs, they started having an infrastructure error, and it was mixing up responses between users. And so users all started complaining, like, hey, that's not what I asked, what are you talking about? That's not my sister. It was like an increase in that. I don't know if you're talking about Meta, but I think that happened at Meta. It wasn't Meta. They're not one of our customers yet. But there was a situation where people could share,
Starting point is 03:10:24 It was not that bad, but it was something like, I could share my chat with you, but if I shared it with you and I didn't know that I was sharing it, it would go out everywhere. And so, yeah, stuff like that happens. Totally. There's all these sorts of things. So you can actually catch those sorts of problems, and one of the things is that ground truth is actually really, really important. Because if you just see a few errors, let's say your agent calls tools, yeah, it's going to error once in a while, right? That might not be the biggest deal. But if you can see when it actually starts to affect users, that's really, really powerful.
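A small sketch of that "ground truth" idea: only alert when tool errors co-occur with real user impact, rather than on every transient failure. The event shapes and thresholds here are assumptions for illustration, not Raindrop's actual product logic.

```python
# Illustrative alerting rule: infrastructure errors alone may be noise, so
# require that they coincide with negative user signals before paging anyone.
from collections import Counter

def should_alert(events: list[dict], min_errors: int = 5, min_user_impact: int = 3) -> bool:
    counts = Counter()
    for e in events:
        if e["type"] == "tool_error":
            counts["errors"] += 1
        elif e["type"] == "user_feedback" and e.get("sentiment") == "negative":
            counts["impacted_users"] += 1
    # Alert only when errors co-occur with real user pain.
    return counts["errors"] >= min_errors and counts["impacted_users"] >= min_user_impact

events = (
    [{"type": "tool_error"}] * 6
    + [{"type": "user_feedback", "sentiment": "negative"}] * 4
)
print(should_alert(events))  # True: errors are actually reaching users
```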
Starting point is 03:10:55 Yeah, that makes sense. What about degradation of models under the hood? I don't know if it's just a meme. I've noticed it here and there. I'm not benchmarking everything every night, like some big companies. But it does feel like that sometimes, right? It feels like sometimes, wait a minute, it used to respond in this many tokens, now it responds in this many. It used to look HD, now it looks standard definition. Like, I know, I agree with you. I think it's real. I think it's real. I know that I can't say too much. I know that at least on one occasion, I think people were led to believe that there wasn't a thing, and I know that there was.
Starting point is 03:11:32 Okay. So that's it. You know what I mean? I can't say who. It is a big company. And because I noticed this, I can't say whose hands were caught red-handed, but I saw some red hands. Yeah, exactly. And it was like, I thought it was a Cursor problem.
Starting point is 03:11:45 It was like some really absurd behavior. And then I went into ChatGPT and it was doing the same. Oh, I just said it. But anyway, yeah, I think the reality is that every single one of these providers is having these sorts of problems. They're trying to optimize costs, they're trying to make changes. So I think it's natural.
Starting point is 03:12:00 And some of them I understand, where I'm like, oh, okay. Well, yeah, realistically, I haven't used that in a long time, I came back, sure. Yeah, yeah, 100 percent. I don't really mind that you put me on the lower tier. Yeah, yeah, yeah. I just hope that for the people that actually went and built businesses around
Starting point is 03:12:12 this that are using at the API level that are hopefully paying for the service at a high gross margin to you. You're not degrading the service behind their backs. 100%. Right. So, anyway. Who did the deal? Anybody we know? You want to hit the gong?
Starting point is 03:12:27 You want to hit the gong? Oh, let's do it. Yeah, yeah. Hit the gong. Tell us how much you raised. How much did you raise? How much did you raise? We raised, so we raised $15 million total.
Starting point is 03:12:39 From whom? Lightspeed. Who did the deal? Bucky. Yeah, let's go. Let's hit it again for Bucky. Let's hit it again for Bucky. This one's for you.
Starting point is 03:12:47 This one's for you. Yeah, we're big fans of Bucky up here. I just wanted to get him a shout out. Us too, us too. I think the moment we met him, we were like, okay, he matched our energy. Great vibe. Yeah, yeah. How's building the team going? It's going, it's going. I think we're really, really picky, we've realized, and so it's really hard. And I think hiring in San Francisco is really hard. We have a great team. It's honestly really, really small
Starting point is 03:13:14 still. If you want to get out of San Francisco, you could book a Wander, with inspiring views, hotel-grade amenities, dreamy beds, top-tier cleaning, and 24/7 concierge service. It's a vacation home, but better. You can do an offsite there. We could do our offsite. That's beautiful. I once used a team offsite as a recruiting tactic.
Starting point is 03:13:31 I said, we were going on an offsite in two weeks. Okay. I posted a picture. Oh, yeah. You want to come? We got an amazing. We'll do it. I'll do it.
Starting point is 03:13:39 It creates some urgency. We're doing it. So if you're watching right now, I'll post the picture soon of the house. Okay, fantastic. But we have an amazing team. Yeah, I figure if you're picky, and you're in San Francisco, it's like the most ruthless, like, talent war constantly. You know, the other thing is that I think when you hire amazing people,
Starting point is 03:13:58 Yeah, I figure if you're picky and you're in San Francisco, it's like the most ruthless talent war, constantly. You know, the other thing is that I think when you hire amazing people, they have zero tolerance for working with people that are not amazing. And so you can't even fool yourself as a founder: if you start slipping, whatever it is, if it doesn't fit, everybody knows and feels that. Have you had to bring anyone soup? Are you familiar with this? I'm not familiar with this. Okay, so apparently, this is from Ashley Vance, this is a scoop that just
Starting point is 03:14:19 dropped on Core Memory, the podcast. So he had Mark Chen, OpenAI's research chief, on the show as part of a post-Gemini 3 sit-down to get the update from OpenAI. And he said, I knew the AI talent wars were rough, but not this rough. Zuck is out there, apparently delivering handmade soup. Wow. And OpenAI has soup counters. And so I guess, wait, they count how much soup there is? Sorry, a soup counter? I don't even know what this means. Oh, I see, I see, like, they count. No, no, I think it's like a counter. It's just like a cafeteria.
Starting point is 03:14:54 Right? Like, aggressive. What exactly is this tit for tat? We can play this on the show later. But, uh, well, but yes. I mean, my partner has said you do need to come with some meals for someone. Yeah, yeah, you have to come up with creative ways.
Starting point is 03:15:06 Like, that sort of thing works. Um, yeah. You know, we do typewritten, I'll write a note on a typewriter, you know, when we do our offer letter, right? So that adds a little something to it, you know. Are you messing with that? I love it. I love it.
Starting point is 03:15:17 I love, no, I'm serious. I love typewriters. No, I like that. It's just a way to prove, I can't fake this message. All the text is AI-generated, I'm sure. Of course, yeah. I'm just copying from ChatGPT.
Starting point is 03:15:29 No, I think it's like a little bit of a proof-of-work thing. You're not just another new hire. You're a revelation. This is a statement. Yeah, yeah. Well, that's great. Congratulations on all the progress. Very excited.
Starting point is 03:15:41 I'm sure you'll be back on the show soon. I will. Giving us plenty more updates. And it's been fun because, I mean, I believe we started tracking your journey via your viral joke post, about being right from the very beginning, or something. We've always had fun featuring your posts. It's great to have you here live in person.
Starting point is 03:15:58 Live in person. One year ago today, I remember, just roughly one year ago, I was sitting in a parking lot and I was listening, the first time I ever heard of you guys, and you were reading one of my tweets. Yeah. It was just so surreal that, like, people from the internet are reading my tweets. One of our customers sent it to us, actually. You had printed it out. Yeah. So I called my mom today, I was telling her, I was like, hey, I'm going to be on the, I was like, you're not going to know what it is, but remember those guys that were talking about that tweet? This is the whole, this was the whole shtick: little love letters to Silicon Valley
Starting point is 03:16:26 folks, just little messages of, hey, we found something that you did fun. Because anyone can read a post, you know, it's easy to send a small thing. It's very hard to actually print it out, sit down, talk about it. But we appreciate your posts, and we appreciate you coming on the show and hanging out today. So, thanks so much.
Starting point is 03:16:43 Thank you. We're going to close out the show, and we'll talk to you in just a second. While he's walking off, let me tell you about getbezel.com. Shop over 26,500 luxury watches. Super intelligent. Fully authenticated in-house by Bezel's team of experts. I also need to tell you about 8Sleep.com. Exceptional sleep without exception.
Starting point is 03:17:03 Fall asleep faster, sleep deeper, wake up energized. I had a rough night. Kids have been all over the place, but I still got a 92. Look at that, John. 98. 98. 98. That is remarkable.
Starting point is 03:17:17 Well, there's a bunch of, yeah, we'll see if there's breaking news. BuccoCapital Bloke is on the timeline: You can feel the panic behind the urgency and intensity with which people are defending Nvidia. It feels visceral and quite intense. You can tell how much there's riding on this. It makes a lot of sense. What else did you want to cover?
Starting point is 03:17:41 I thought it was notable. PagerDuty has fallen to a $1.1 billion market cap. And there's a good reason: 500 million of ARR, and they're not growing anymore. They're trading at 2.1x ARR. It's profitable, according to Jason Lemkin over at SaaStr. So yeah, rough time out there if you're not growing, regardless of the revenue scale.
Starting point is 03:18:04 Two days ago, we shared that back on November 29th, 2001, Nvidia replaced Enron in the S&P 500. I saw this post go out from our incredible team, and I immediately Googled to fact-check it. I was like, there's no way. Someone has made a terrible mistake on our team, and we are doing fake news unironically now. We used to have some fun, but apparently this is real. It's real.
Starting point is 03:18:31 It's real. Jensen was like, I'll take that spot. November 29th. Obviously, that's not how it works. It is much more mathematical than that, I believe. Standard & Poor's picks the largest companies, and after certain ebbs and flows of the market, they swap folks in and out.
Starting point is 03:18:49 But this went pretty viral, 5,000 likes. What is really interesting is, of course, the Nvidia-and-Enron comparisons are just so silly to me. Obviously, the discussion is, will it go from being the best business in the entire history of the world to being, you know, somewhat competitive and having to deal with minor competition from other people?
Starting point is 03:19:13 It does not seem like it's some ridiculous Enron situation that's so insane. People are just having fun with that headline. But what is incredible is this branded shirt he's wearing. Look at this thing. Fantastic. So awesome. I love it. Not enough people trying to go snipe vintage Nvidia merch.
Starting point is 03:19:33 It's a great shirt. It's a great look. And I feel like it's got to make a comeback. The button-down: this is the pre-Silicon Valley "I'm just in a T-shirt" era, but it's post-suits. You know, it's like, we're not suits, we're working in technology, we're still going to throw on a collar, but we're going to dress it down a little bit. No tie. Guys, scroll up, scroll up on this for a second. Yeah, I'll keep going, keep going. Oh, who's not, who's not, who's not
Starting point is 03:19:58 Tyler, you've got to follow the account. No, this is not my account. I think this is more of a burner account situation. Oh, it's a scraper that we use. It is, it is. You've got to correct that, Tyler, come on. Gorkem over at fal had an absolute banger. This was a chart showing ASML sells fewer than 500 units per year and generates 37 billion in revenue. Is there any company in the world with a wider moat?
Starting point is 03:20:26 And Gorkem says, Series A pitch meeting. Sorry to cut you off, but what happened in December 2024? Since there's like a slight dip in the chart. Yeah, what did happen? Why did their revenue drop in 2024? I actually don't know. Is it just so much pull-forward from 2023 or something? Maybe they were developing some hubris,
Starting point is 03:20:52 decided to get complacent. Yes, I mean, I certainly understand the concept. Okay, according to the CEO, customers in Taiwan had delays and weren't ready to take delivery yet, and orders got pushed back at the same time. China raced to get as many machines as possible before export controls tightened. Okay, that makes sense.
Starting point is 03:21:07 Let's hope so. Sosh Zatz says Oxford Dictionary didn't get the memo, apparently rage bait was named word of the year. I think it... No, no, no,
Starting point is 03:21:19 I think they're actually right that it would be the word of the year, but it is so funny that you posted this and then Oxford Dictionary
Starting point is 03:21:30 Yeah, so this is true. According to the BBC, rage bait was named Oxford Word of the Year 2025. It certainly feels that way on the timeline.
Starting point is 03:21:46 Your post, 1 million views on this, 3,600 likes. People really, this really set the agenda for a little bit. Wow. Congratulations.
Starting point is 03:22:01 What a banger. Should TBPN do a word of the year? I like that. Or motion. Motion. Motion might be our word of the year. Motion's a big word of the year.
Starting point is 03:22:25 Motion, named Word of the Year 2025 by TBPN. If you have it, you'll know. You'll know. We'll call you. Tyler has motion. In other breaking news, Keith Rabois is taking shots at Airwallex. Airwallex is now on the other side of a billion dollars in ARR. What I love about this chart, and this is the founder, Jack Zhang, isn't that we hit a big milestone.
Starting point is 03:22:45 It's how fast the business is accelerating. It took more than six years to get to 100 million ARR. What does Airwallex do exactly? I think they provide payment rails for a bunch of American fintechs. They handle international. Oh, okay. Okay. And so Keith Rabois, who's been on the show multiple times, says: Cool growth chart.
Starting point is 03:23:20 Have you disclosed to U.S. customers like Rippling, Bill.com, Brex, Navan, that you're quietly sending their customers' data to China? Airwallex has become a Chinese backdoor into sensitive American data, like from AI labs and defense contractors. You must already know this. But your China-based ops, infrastructure, and investors create legal obligations to assist with CCP espionage upon request. Through Airwallex, Beijing can access supplier payments for AI labs, so they could know who's using what models, payroll data for defense contractors,
Starting point is 03:24:00 touch production payment systems. You are subject to Chinese law that requires Air Wallach's employees to support CCP intelligence request and quietly hand over data when asked you hid this from your customers, but you are well aware of your obligations to China, and that's why you insist on protection of Chinese data access to your contract. Thanks to you, the Chinese government now
Starting point is 03:24:21 touch production payment systems. You are subject to Chinese law that requires Airwallex employees to support CCP intelligence requests and quietly hand over data when asked. You hid this from your customers, but you are well aware of your obligations to China, and that's why you insist on protections for Chinese data access in your contracts. Thanks to you, the Chinese government now
Starting point is 03:24:52 has direct, covert, legally enforceable access to sensitive financial information. This is a big story. This is a crazy scoop from Keith Rabois. And I will be interested to see where this goes, how quickly they can remedy this. This popped up a couple of years ago during the Clubhouse era. The Clubhouse back end, I believe, was at one point, you know,
Starting point is 03:25:17 was working with a Chinese company. Or maybe it was that there was a company that did peer-to-peer audio streaming that was based in China, so if you were building a competitor, you might use that company. Anyway, I became familiar with Airwallex through the 20VC episode that Harry did with Jack, the founder. Is it ripping? I mean, it seems like the business is doing really well. Yeah.
Starting point is 03:25:29 Hang out with us today. We'll see you tomorrow. Please leave us five stars on Apple Podcast and Spotify. The break. The Thanksgiving break was absolutely brutal for us, I will say, every single day. But hopefully you had a great Thanksgiving. Wake up and just twiddle my thumbs, wishing we were podcasting. It's great to be back.
Starting point is 03:25:46 Hope you had an amazing break or a little holiday, and we'll see tomorrow. See you tomorrow. Cheers. Goodbye.
