TBPN Live - 🔴 CODE RED 🔴, AWS CEO Joins, Tae Kim Tells All | Matt Garman, Tae Kim, Tarek Mansour, Matt Mullenweg, Jason Fried

Episode Date: December 2, 2025

(00:19) - Papperger Pushes Rheinmetall to Top (13:35) - 𝕏 Timeline Reactions (16:11) - Anduril Says Failures are Part of Development (20:21) - 🔴 CODE RED 🔴 (38:35) - 𝕏 Timeline Reactions (56:20) - Stratechery: Gemini Spurs New AI Infra Race (01:30:37) - 𝕏 Timeline Reactions (01:42:23) - Matt Garman, CEO of Amazon Web Services (AWS), discusses several key announcements at AWS re:Invent 2025, including the introduction of frontier agents designed to enhance software development, operations, and security through autonomous capabilities. He also highlights the launch of Nova 2, AWS's latest frontier AI models, and Nova Forge, a tool enabling customers to integrate their own data into pre-training checkpoints to create customized models. Additionally, Garman announces the general availability of Trainium 3, AWS's new chip aimed at accelerating training and inference processes for customers. (01:55:08) - Tae Kim, a senior writer for Barron's and author of "The Nvidia Way," discusses Nvidia's unique corporate culture under CEO Jensen Huang, emphasizing its blunt communication style, agility in decision-making, and meritocratic approach to talent recruitment. He highlights how these factors have contributed to Nvidia's sustained success and ability to outmaneuver competitors. Kim also addresses the company's strategies in navigating challenges such as competition from Google's TPUs and geopolitical issues affecting sales in China. (02:23:05) - Tarek Mansour, co-founder and CEO of Kalshi, a leading prediction market platform, discusses the company's recent $1 billion Series E funding round, which elevated its valuation to $11 billion. He highlights the mainstream adoption of prediction markets, attributing this shift to factors such as declining trust in traditional media, the legalization of such markets, and their integration into daily activities like sports viewing. Mansour also addresses Kalshi's strategic partnerships with platforms like Robinhood and Coinbase, emphasizing their role in driving user engagement and expanding the platform's reach. (02:42:16) - Matt Mullenweg, co-founder of WordPress and CEO of Automattic, discusses the recent live release of WordPress 6.9, highlighting its development by over 900 contributors worldwide. He emphasizes the importance of freedom in technology, advocating for open-source licenses as a "bill of rights for software." Mullenweg also introduces Beeper, a service that consolidates various messaging platforms into a single interface, aiming to enhance user experience across different networks. (02:53:06) - Jason Fried, co-founder and CEO of 37signals, is renowned for developing web-based productivity tools like Basecamp and HEY. In the conversation, he discusses the launch of Fizzy, a new Kanban-style project management tool designed to be simple, colorful, and open-source, aiming to bring vibrancy and ease of use to the software industry. Fried emphasizes that Fizzy is built for their own needs, reflecting their philosophy of creating products they personally find useful, and offers it at a straightforward price of $20 per month with unlimited users and usage.
TBPN.com is made possible by:
Ramp - https://ramp.com
Figma - https://figma.com
Vanta - https://vanta.com
Linear - https://linear.app
Eight Sleep - https://eightsleep.com/tbpn
Wander - https://wander.com/tbpn
Public - https://public.com
AdQuick - https://adquick.com
Bezel - https://getbezel.com
Numeral - https://www.numeralhq.com
Attio - https://attio.com/tbpn
Fin - https://fin.ai/tbpn
Graphite - https://graphite.dev
Restream - https://restream.io
Profound - https://tryprofound.com
Julius AI - https://julius.ai
turbopuffer - https://turbopuffer.com
Polymarket - https://polymarket.com/
fal - https://fal.ai
Privy - https://www.privy.io
Cognition - https://cognition.ai
Gemini - https://gemini.google.com
Follow TBPN:
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://www.youtube.com/@TBPNLive

Transcript
Discussion (0)
Starting point is 00:00:00 You're watching TBPN. Today is Tuesday, December 2nd, 2025. We are live from the TBPN Ultradome, the Temple of Technology, the Fortress of Finance, the Capital of Capital, Ramp.com, baby. Time is money. Save both. Easy-to-use corporate cards, bill pay, accounting, a whole lot more, all in one place. That's right.
Starting point is 00:00:19 Why is no one talking about Armin Papperger? He's the CEO of Rheinmetall, and they've been on an absolute tear. We're, of course, going to get to Code Red. We're going to talk about OpenAI, but we talk about OpenAI every day, basically. And I thought it'd be interesting to meet the CEO behind the world's fastest growing defense company. It's on the cover of the business section of the Wall Street Journal. When I think high-growth defense companies, I usually think Anduril or, you know, Saronic or there's so many other companies that are growing very fast in defense tech. Rheinmetall's been on an absolute tear. They're now basically the same size as Lockheed Martin and
Starting point is 00:01:00 general dynamics. And it was a small company just a few years ago. So Ryan Bataal, they make, you can see the gun that they make in that picture. They make a massive cannons. They make artillery shells. They've been very important to the Ukraine war. So in the last three years, they've been on an absolute tear. They've gone from roughly $5 billion in market cap three years ago to $80 billion in market cap. We've got to ring the gong. We've got to warm up the gong. So bring the dog. 80 billion market cap. They've been on a tear, but they had, and there's been like three, there's been basically
Starting point is 00:01:38 three key drivers to the growth, to the story. We'll tell the story in three acts, as briefly as we can. And while we, while we do, we will say thank you to Gemini 3 Pro. Three-act story about Ray and Mattal, three-act Gemini, Google's most intelligent model yet, state-of-the-art reasoning, next level vibe coding, and deep multi- modal understanding. So, first, they had a head start. This company, they actually started it over a century ago, 1889.
Starting point is 00:02:08 Can you believe that? Very, very old. So they spend their first 25 years basically just stacking up ammo for the German Empire. This obviously comes to a head in 1914 when World War I breaks out. And at the time, the company was one of the largest arms manufacturers. Like they were pretty big after 25 years of just stockpiling ammo, growing, growing, growing, growing as a defense company. World War II breaks out.
Starting point is 00:02:34 But then after the war, they got a pivot. They got a pivot because the treaty of Versailles forces them to switch to non-military products. They say, hey, you got to build. Make some trains. They get fixated on trains. And also typewriters. Not the first group to get fixed. Fixated on trains.
Starting point is 00:02:52 Happens to the best of them. But they have a good run. They stay in business. They keep making trains. locomotives, particularly, you know, they're making big stuff. And then 20 years later, it's the mid-30s, 20, it's 1935 around there. They are, they're starting to get back into weapons and ammo production. They can't stay away.
Starting point is 00:03:13 Uh-oh. Who are they rearming? The Wehrmocked. And World War II, obviously, it's massive for production. They're printing. They're making lots of weapons. But by the end of the war, their facilities, have basically been destroyed by air raids.
Starting point is 00:03:27 They need to rebuild the company from scratch. So after the second war, they get banned from making weapons again until 1950. And so they have to go back to making typewriters. They keep getting relegated to typewriters. You guys, no more guns. That's enough. You have to make some typewriters. And so they get back into defense tech in the 50s, 60s.
Starting point is 00:03:48 The German armed forces gets reestablished in 1956. And by 1979, Ryan Matal is making 120 millimeter. guns that go on leopard tanks that you've probably seen in that image roughly. And so there's lots of M&A, lots of diversification over the next few decades. They expand into automotive and electronics. And that kind of brings us to the second act of the story, which is the Ukraine War. So Russia invaded Ukraine on February 24th, 2022, about three years ago. Ryan Mottal was around $5.5 billion, market cap then.
Starting point is 00:04:21 And three days later, Olaf Scholls, the Chancellor of Germany, gives what's known as the Zeitnwende speech, which is literally translates to turning point. So he says, this is a turning point. Europe has been invaded. We now have a foreign army on European soil. Even though Ukraine's not part of NATO, it feels like, you know, Russia is expanding. If they keep, if they just keep going in the same direction, they're eventually going to be in our hometown. So we got to do something about it. And what does he propose?
Starting point is 00:04:51 He doesn't just say, hey, this is a big deal. He says, no, we're actually going to invest $100 billion, like off-balance sheet from some fund into defense tech. We're going to spend more money. And then, of course, there's a whole bunch of other initiatives that happen. There's the Trump negotiations around how much Europe should pay as a portion of GDP on defense. But basically, it's this major turning point where Europe goes from spending, you know, sustainment levels. Okay, we're going to spend this much every year to. We are going to double or triple or, you know, exponentially grow our spending.
Starting point is 00:05:22 And it's all going to be net new. So you can go and fight for it. And that's what Ryan Mattel does. And so revenue has grown. Helsing is sort of born out of that era. Helsing is like the newer version of Ryan Mattel. Ryan Mattel is like the old, you know, roll up. It's been around for over 100 years.
Starting point is 00:05:37 Helsing, I think, was starting. Helsing was 2021, was most recently in the news because they raised, I think, $600 million from Daniel Eck, right? Yeah, sparked controversy, of course. A lot of people in the Spotify world, the world of music, just think that defense tech is defied back. Oh, I didn't realize that. There was actually backlash.
Starting point is 00:05:57 Totally. I didn't see that. You know, if you're an artist and you, you know, believe in peace at all costs, you're going to probably be against that. Well, it depends. Maybe if you're, you know, POD or you're some other. musician that was played during the war on tear, you could be very pro. The Helsing investment, just events.
Starting point is 00:06:23 But yes, I understand overall. So, revenues grown 50% since 2022, and they are now guiding for sales. I think they do maybe around like 10 billion euros. I was kind of going back and forth on Euros, USD. But they're guiding for sales of $58 billion in an operating margin of more than 20% by 2030. So they have like almost AI growth. level numbers of everything. It feels very similar where there's a, there's a structural change in the way their business is going to work. Same thing as Eli Lilly, same story. There's a couple of
Starting point is 00:06:56 these stocks where there's now sort of a mega trend and they are in position to capture a ton of value as long as they can execute. The big question is, you know, what winds up happening? But the third leg of the stool, the third important piece in this story is the current CEO, the man no one is talking about, until today, Armin Papyrger. He's been called a white-haired Goliath. I love that. CNN. Can we pull up a picture?
Starting point is 00:07:24 Just randomly threw that in. There he is. There's some other photos. And last year, he was targeted in an assassination plot by the Russians. What? So the CNN reported that Russia had made a series of plans
Starting point is 00:07:37 to assassinate several defense industry executives all across Europe. who were supporting Ukraine's war effort. And they also were planning to set up fires in different, there was an IKEA that got lit on fire. There were a number of different attacks. But fortunately, American intelligence discovered the plot and informed Germany in time to stop the attack.
Starting point is 00:08:00 And now the white-haired Goliath is- Ryan in the chat says, this feels like a paid ad. For who? I can assure you it's not. John woke up this morning, we were at the gym, and he's like, why is no one talking about Rhymetol? and decided to write about it in the newsletter today.
Starting point is 00:08:16 That's funny. No, he's on the number of the Wall Street Journal. And so they stop the attack. And so Russia clearly sees Armin Pappinger, Popperger, as a critical to the European defense ecosystem. But separately, there is a debate over where the business goes over the next few years. Because on the one hand, like the NATO inventory requirements are growing a lot. That's going to drive a lot of net new demand for military equipment purchases. And the market's been historically undersupplied.
Starting point is 00:08:50 But on the flip side, Ryan Matal may or may not be able to absorb as much of the demand as they're planning, too. They have lots of integration to do between all their different acquisitions. And also with a potential end of the Ukraine war. Yeah, it feels like the stock would just immediately trade down on news of a peace deal. And it has, even on rumors of a peace deal. Yeah, exactly. Yeah, it's down 15% of the last month. Yeah.
Starting point is 00:09:15 But they're scaling up in the Wall Street Journalists. Earlier this year, Armin Papberger opened a new factory that will allow his company to produce more of an essential caliber of artillery shell than the entire U.S. defense industry combined, surrounded that day by dignitaries, including the head of the North Atlantic Treaty Organization, NATO. The Ryan Mattel CEO is riding a wave of post-Cold War military spending. that is reshaping the global arms trade. Ryan Matal is now the world's fastest growing large defense company and a key player in Europe's quest to rearm its home country. Germany is shedding its post-war reticence on military spending to lead the charge to capitalize.
Starting point is 00:09:55 Papagar has pushed the once obscure gun barrel maker into almost every part of the battlefield from satellites to warships. And that's what people are kind of saying about, there's a lot of acquisitions, there's a lot of new projects, there's a lot of new deals. like, you know, the gun barrels, they've been doing that for 136 years. Satellites, they're kind of newer to it.
Starting point is 00:10:15 Do they have the lineage? Do they have the experience? Can they stick the landing on those contracts? The money's certainly there, but is the expertise there? That's the big question. So his goal is to create a go-to defense company with the heft and breadth to rival the American giants that have dominated the industry since World War II. And if he's writing any software, he's got to get on graphite.
Starting point is 00:10:38 Dot Dev. Code review for the age of AI. Graphite helps teams on GitHub ship higher quality software bastard. Ryan Mittal's stock is up 15x since Russia's full-scale invasion of Ukraine in 2022, giving a market cap of 80 billion, roughly on par with U.S. rivals. And when he took over the job, he started this job in 2013. So he's been CEO of Ryan Mattel for 12 years. The company was $1.6 billion. Overnight success. success is right. Sort of like what happened, Lisa Sue. You know, she's 10x that stock. I mean, the funny thing is that Jensen's also 10X to NVIDIA in that time. But the Lisa Sue story is a little bit more impressive because AMD was really like down in the dumps and she has turned
Starting point is 00:11:22 that company around fantastically. But back to Ryan Mottal. This month, Ryan Mottal set out ambitions to quintuple sales by the end of the decade to the equivalent of roughly $58 billion. dollars. Papberger reflecting on his long tenure at the company told investors that seeing such figures was like a wonder world. It's a wonder world. I love when an executive is speaking a different language and it just doesn't quite translate. Is that what we say? Kind of get the gist. I get the gist. He's happy. I'm happy for him. You know, good job. We had a, we had a buddy of ours who, actually, I'm just going to, I'm going to name, I'm going to name. I was going to keep them anonymous, but it's just too funny.
Starting point is 00:12:06 Somebody's proposed, 2.6 is proposing the TBPNX standard oil X, Ryan Mattel, collab. So we were talking about, we were texting with Sean Frank and Connell McDonald at the Ridge yesterday about how their Black Friday, Cyber Monday went, and they shared a bit on it. And Sean ends it and says, bro, the future is beautiful. And I am so happy to be alive with this, like, an incredible reaction to a successful Black Friday. glad it went well for them. I mean, the Wall Street Journal did report that on a busy Cyber Monday outage at Shopify Haltz transactions. Shopify experienced an outage on Cyber Monday that interrupted transactions for some merchants, but it sounded like rage was not affected? Well, so it was the real issue is the admin panel went down. Okay. It freaked a lot of people out because you're
Starting point is 00:12:55 not able to log in and like see what's happening. Yeah, of course. And also like even if, even if everything's working as standard as expected, like that's the day you're just refreshing the admin panel all day, like, because you're just like, how much money am I making? Like, this is really critical, right? Yeah. So, yeah, there were, there were some reports that a handful of merchants had actually had features on their site go down, but I didn't see any. I didn't see a ton of people saying, like, I lost my Cyber Monday, but obviously, the good
Starting point is 00:13:24 folks at Shopify will obviously be working extra hard to resolve any of this and provide a proper postmortem. Overall, it does seem like Cyber Mondays and Black Friday. Cyber Monday broadly, just went very well. Like, it just seems like consumer confidence was up, revenue was up, spending was up. Harley had a post, he said, total global Black Friday, Cyber Monday sales by Shopify merchants over the last five years. 2021 was $6.3 billion.
Starting point is 00:13:51 2022 was $7.5 billion. 20203 was $9.3 billion. 2024, $11.5 billion. And then 2025, $14.6 billion. So a combination of execution at the, at the, at the. company level and execution at the Shopify level, and then obviously the market plays a big role as well. Yeah, it really did seem like things are just broadly going well, or at least okay. I think everyone's sort of like nervous with crypto up and down, and is there an AI bubble
Starting point is 00:14:26 and how big of a bubble, what will happen? Somewhat of a coat red going on. It has been a Code Red. Before we jump into that story, Profound, get your brand mentioned in chat, GPT. You reach millions of consumers who use AI to discover new products and brands. I was going to say, kind of the Anderol was covered in the Wall Street Journal. The Wall Street Journal has been doing quite a lot of defense tech coverage. Anderl, they had been talking about their approach of not using government funding for testing purposes, which historically a company would get a contract and then they would work to actually make it
Starting point is 00:15:00 The government was effectively funding R&D. Anderil has a more traditional, like, venture-style model where they raise VC dollars. They spend that money to test and develop products. And so they gave a quote that was like, we do fail a lot. But the extra context that was necessary was that that's not happening on the taxpayers' dime. And it's part of their approach of doing rapid iteration. And sort of like going according to plan, of course, that was taken out of context and turned into a headline that was, we do fail a lot. And not so dissimilar from what has happened to Open AI in the last 24 hours where sounds like an internal meeting was leaked. We were wondering, like, what is it? Yeah, let's actually, let's actually reel a little bit more on the, on the Wall Street Journal and all thing.
Starting point is 00:15:57 Because I think it's interesting, and I want to go into some of the response, like how they respond to this. First, let me tell you about cognition, the team behind the AI software engineer, Devin, crush your backlog with your personal AI engineering team. So, Wall Street Journal came out with this story. We do fail, dot, dot, dot, a lot. It's a very funny quote. I actually think it's an awesome quote. I'll get into it. I think they should put on T-shirts and hats.
Starting point is 00:16:22 I think it's actually a very... It's their next campaign. I think it's a very key cultural. I think it's like a don't work at andrel type moment. It makes a ton of sense in terms of like the culture of like fail fast. This is this is not new in Silicon Valley. And yet it's still being reframed as new. I'm surprised that there's like alpha here still.
Starting point is 00:16:41 But let's see. Palmer Lucky says, the valid reasons for slowness are bureaucratic BS and cowardly executives who cater to snide analysts and public market outlets like WSJ that have nothing to say about years late programs and everything to say about a fire that covered 0.0002% of our test site. I'm not even exaggerating.
Starting point is 00:17:02 That's the real number. It is exactly what anyone would expect from testing a system that violently blasts lithium-powered drones out of the sky. This is what weapons development should look like. Heck, Camp Pendleton has over 200 fires per year on their training range, and that is with fully mature weapon systems.
Starting point is 00:17:21 Going on and on about this for paragraphs is so pathetic. Oh, no. Getting close to a fire every single week. day. Yeah. They obtained satellite imagery that reveals the damage to the grass on the weapons test range. The other
Starting point is 00:17:36 examples of the story are similarly absurd. John, you're telling me that the grass was damaged at the explosives testing ground? Yeah. It's very funny. You're telling me there was an explosion at the weapons testing facility. Yeah, it was wild.
Starting point is 00:17:52 The other examples in this story are similarly silly absurd. Oh no, an engine sucked in a piece of fad. Stop the presses. Autonomous boat behaves exactly as designed and stops moving when it receives a faulty command and are all hit by pattern of setbacks. It's just so pathetic. The type of thing that can only be written and taken seriously by people who have no idea how hardware development actually works. And of course, a few other folks in the ecosystem chimed in. Mainly, there's a good post here from Blake Scholl, founder of Boom Supersonic. He says,
Starting point is 00:18:25 If you plan to pass every development test, you'll move slowly and expensively. It's optimal to fail many dev tests. Selective, quote, outtake into headlines suggest a hatchet job, not an honest report on an attempt to do things differently and better. Yeah, I still just think the we do fail a lot is just, it's so ripe for a billboard campaign, a t-shirt, a hat or something, because if you, like the whole thing with Silicon Valley, is that you should fail 99 times and succeed once. Because if you succeed once and fail 99 times,
Starting point is 00:19:01 it's a million times better. It's infinitely better than zero failures, zero successes. Like, you will take a ton of failure for one success. And that's the whole, that's the whole ethos. That's the American ethos. Yeah, it's the American ethos. It's the technology ethos. There's a lot there.
Starting point is 00:19:21 Anyway, back to, oh, Actually, we can wrap up with, you can go read the Wall Street Journal report on Armand Pappager, if you, Pappager, if you want. We've got to figure out how to pronounce this name. He did have one fun line in here, which was, what did he say? He said something like, he said, referring to, so now he's, you know, basically the same value as Lockheed Martin and General of Dynamics. And he said, on the U.S. companies, he said, they come to me 10 years ago. It was a different story. And so he's just flexing the fact that, like, he's now big enough that he, he, he requires, like, you can go visit him because he's, like, made it.
Starting point is 00:20:04 Always a good sign. Yeah, he's, he's taking a little victory lap. And there's some other funny things in here, but you can go read that. Let me tell you about linear. Meet the system for modern software development. Linear streamlines work across the entire development cycle from Roadmap to Release. So let's head over to Red Alert Territory. Gavin Baker responding to the reporting says October, 1.4 trillion in spending commitments.
Starting point is 00:20:33 November, rough vibes. And December, Code Red. Life comes at you fast. It certainly has felt fast ever since that fateful podcast. Yes, that was a crazy turning point. Although there was, there was, there was, plenty of conversation, you know, prior to that around, um, yeah, around, uh, uh, of what the trajectory of open AI would actually look like. Yeah, it's, it's hard to actually understand
Starting point is 00:21:03 the full nuance here. Like somebody in the, in the replies, a rational analysis. Insane, at insane analyst. What a crazy handle. So does debt obligations come at you fast? And it's like, that's not really what's happening here. Like, like the, the, the, the, the, the information reported it was clearly like some sort of all hands that sam altman was uh you know holding a town hall with the rest of the open a i team and he's kind of just saying like lock in that's what he should have said you never say code red you got to say lock in brothers lock in don't say rough vibes don't say code red say lock in say we're taking that hill we're storming their fortress we will grind google jemini team into paste with and we will crush our enemies we will see
Starting point is 00:21:51 them driven before us. Yeah, hospitals learn this lesson. They used to say code red. That meant there was a fire in the hospital and that you would probably want to figure out a way to get out. Yes, yes. They started, is it code blue? Yeah, now they will say code blue.
Starting point is 00:22:06 So if you hear code blue in a hospital, you need to be worried. You need to worry. But maybe, maybe, okay, steel man, steel man here, maybe Sam Altman was using code red in the hospital sense. He didn't say code blue. if he had said code blue we should be really worried but he said code red so he's saying it's not that bad but don't you think they just retired they said i think they just retired yeah so he's saying i'm i'm using retired phrase i'm not i'm not saying code blue if i was saying code brown which
Starting point is 00:22:39 is a hazardous which jemnon three spilled on the timeline it's very hazardous we got a code brown yeah we got code brown um that's a crazy is that real or is that something I'm like meme jokes. No, this is, no, no, I'm reading the hospital emergency code. Okay, okay. Well, anyway, let me tell you about Restream. One live stream, 30 plus destinations. If you want to multistream, go to Restream.com.
Starting point is 00:23:04 Chiching. No, I think, I think if you're, if you're a CEO who's under incredible scrutiny, like you're Sam Alpman, and you have beat reporters at this point who are texting your employees every single day, hey, what's going on, what's on the ground? Give me a quote. What happened? Yeah, so to give people context, a beat reporter might reach out to,
Starting point is 00:23:29 they will actually adopt the strategy of just trying to wear someone down where they will send hundreds of messages to individual people on the team, just over and over and over, relentless, like email, cell phone, Instagram, DM, LinkedIn, just like constantly, constantly, constantly flooding. Hoping that at some point this person just says,
Starting point is 00:23:49 like, fine, like I'll, I'll, Well, the name beat reporter comes from them trying to beat you down. That's the whole point. Is that true? That's where it comes from. No way. You're, are you messing with me?
Starting point is 00:24:01 Yeah. Okay. Okay, okay, okay. I have no idea. But I like the idea of it. It's like, they just try and beat down your employees. It's like that.
Starting point is 00:24:08 They're trying to beat you down. I got a beat reporter on my team. I was like on my tail. Yeah. Yeah, no. Um, I mean, it's certainly what it's, what it's become. There is a little bit of it. No, no, I think.
Starting point is 00:24:21 I think there's beat reporting, there's gumshoe reporting. Gum shoe reporting is where you report, and you're actually walking around the town so much that you get gum on your shoes. That's the idea. It's like you're on the ground reporting, you're walking around the city, you're talking to people.
Starting point is 00:24:35 And then I think like a beat cop and beat reporting is like you're on a beat, like it's a drum beat. Like every day you report on the same thing. And so it's about consistency. It's not, it's not, it's not, what are you laughing at now? Ryan in the chat, if you work in an AI startup
Starting point is 00:24:51 and you aren't drinking Mountain Dew Code Red every day, you aren't going to make it. What if Sam was talking? What if he was just saying we got to lock in? We got to lock in. I bought us a bunch of code red.
Starting point is 00:25:01 I want you all drinking it every day. Yes. It's time to really focus. Yes. And of course, that snippet got pulled out. I just want to know what what person on the Open AI team thinks it's in their best interest
Starting point is 00:25:16 to be, be in a meeting like that and then just go share inflammatory quotes on said meeting? Just leave. Just go make 10 times as much money in a different lab. If you don't like your employer,
Starting point is 00:25:29 just bounce and make more money. Why are you sitting there leaking and just dragging your company down? Don't you have stock options? Yeah, I'm so confused. It's a mole. There's a mole. There's someone inside the organization who's working against them or something.
Starting point is 00:25:45 I don't know. It seems rough. Anyway, there is some praise for Open AI on the timeline, which we should get to from none other than Blake Robbins. Blake says, Open AI is operating on a different level. Play that sound cue, Jordy. The amount they have shipped in the past few weeks and months is incredible. Beals like we are witnessing a generational run. This was on October 6th. Okay, this was on October 6th.
Starting point is 00:26:12 SORA was, I think, number one in the charts at that point. Yes. It's now 21. Yes. Pulse got some excitement early on, but I think people are a little bit not feeling like as excited. Yep. Atlas launched. Yep.
Starting point is 00:26:30 And then it's hard to really gauge what adoption has been like. I know some people that love it. So Eric Sufert on October 6th, quote tweeted Blake Robbins and kind of summed it up. I think he said, indeed, impressive. But the scattershot nature raises questions about the company's discipline and ability to support these disparate initiatives. Is Open AI a frontier research lab, a social network operator, a commerce engine, a hardware company? Because it's hard to do all of that well. And then Eric goes back and finds his old...
Starting point is 00:27:10 And they're still very much care about competing in CodeGen, right? It wasn't even listed. And so if you go back to the BG2 interview or just the BG interview, Sam's answer to the question of how are you going to support the $1.4 trillion of commitments was we're automating science and we're making and we're making like consumer electronics. And the reason that that didn't, to me that was kind of like a concerning. answer because Google has been doing those things for years. Yeah, but they've earned the right because they have 25 years.
Starting point is 00:27:52 Yeah, they're funding it with massive cash flow. Yeah, hundreds of billions of dollars of revenue and so much cash just to go around. And like, it's always been this like academic lab and this sort of like environment where they do side projects. But they've just, they, I think before they started any of that, they had firmly established themselves as like the go-to search engine. They were funding that with – so they were funding these initiatives with cash flow. I believe so.
Starting point is 00:28:18 And even though they've been doing it for this long, it's not like Sundar is going out there and saying, guys, we're actually going to do it. We're going to do an extra $100 billion next year because we're automating science. And we're doing this new consumer electronic device. Yeah, no, no. It is crazy. Let's continue. First, let me tell you about Privy. Privy makes it easy to build on crypto.
Starting point is 00:28:39 Rail securely spin up white label wallets, sign transactions, and integrate on-chain infrastructure all through one simple API. I have a plan. And this comes from the chat, of course. If Sam Altman really wants to set the record straight, everyone's saying, Code Red, oh, Code Red, it's so bad. He needs to come out with a statement, we're going to Baja Blast Gemini out of the App Store. If he says our plan is to Baja Blast Gemini
Starting point is 00:29:06 and Anthropic into the minor leagues of AI research, I think he just wins completely. What do you think? I think people are underestimating the possibility that code read. It was actually red was past tense of read. Oh, okay. So they're talking about the code that was read by the model. Oh, yes, yes, yes.
Starting point is 00:29:25 Oh, what was the code red in this scenario? It might have been RE80. They were talking about the next agent model that was doing code. I have read the code and we're ready for the next pre-training run. I listened to Mark Chen on Ashley Vance's core memory podcast. It's very good. You should go listen. Also, Ashley has a new, a new,
Starting point is 00:29:45 YouTube channel for Core Memory Podcasts. So if you want to find it, head over there. And it was interesting. Mark Chen, I really liked the way he runs that organization. I liked a lot of things he had to say. He had some funny, funny takes, some funny anecdotes, basically just saying, you know, he's extremely competitive. He doesn't want to lose.
Starting point is 00:30:05 He's, he's, you know, going all out right now. And that one of the ways he's dealt with the talent wars is to just go to everyone on his team and say, hey, I'm not going to match dollar for dollar with meta. Like, if you want to make 10 times as much money, yeah, you're free to leave. Like, you can just go. But we are on a mission here. We're a team. And we think what we're building is so big that in the long term, we will be better.
Starting point is 00:30:30 We will be better. We will be bigger. And he also clarified, interestingly, that although there was a big raid and a lot of people from Open AI did go to meta, he was saying, like, there's been, there's been a He was basically like, there's a lot of poaching that's happened from Open AI generally. Like, whenever someone starts a new lab, they always go to Open AI. They're like, we need at least one Open AI guy to know how they do it, right? It makes a lot of sense.
Starting point is 00:30:55 He also said he didn't lose a single direct report. I don't know exactly how many direct reports he has, but he was saying that he didn't lose a single direct report. So maybe that's like, maybe he's trying to say, okay, there were people that were two-loos down. Yeah, yeah, his lieutenant stuck around. It was sort of interesting. But he did say that he also, he sort of echoed Shalto and said that he believes that pre-training, there's still low-hanging fruit there, that Open AI will be doing new pre-training runs, that they have seen that scaling is holding, that there's no plateau.
Starting point is 00:31:29 He also said they have models internally that outperform Gemini on benchmarks. Yes. And obviously he caveated that by saying benchmarks aren't the only thing that matter. So I do think, I mean, it's worth sharing some more. Let's actually play this clip from Ashley Vance here. It says OpenAI has seen Gemini 3 and is both moved and not. We sat down with Open AI's research chief Mark Chen 90. Yeah, yeah.
Starting point is 00:31:58 So to speak to Gemini 3 specifically, you know, it's a pretty good model. And I think one thing we do is try to build consensus. You know, the benchmarks only tell you so much. And just looking purely at the benchmarks, you know, we actually felt quite confident. You know, we have models internally that perform at the level of Gemini 3, and we're pretty confident that we will release them soon, and we can release successor models that are even better. But, yeah, again, kind of the benchmarks only tell you so much. And, you know, I think everyone probes the models in their own way. There is this math problem I like to give the models.
Starting point is 00:32:39 This is funny. I think so far none of them has quite cracked it, even the thinking models. So, yeah, I'll wait for that. Is this like a secret math problem? Oh, no, no, no. Well, if I nod to hear, maybe it gets trained on it. It's going to get so saturated. To speak to Gemini theory specifically, you know, it's a pretty good model.
Starting point is 00:32:59 And I think... I think this is looping. One thing we... Yeah, loop. Having a secret math problem that you give every model to assess it is pretty elite. I keep reflecting on like, so let's read what Prins is saying here. So new interview with Mark Chen from Open AI.
Starting point is 00:33:22 Ashley Vance, the interviewer, has apparently been spending a lot of time at Open AI, including sitting in on meetings. He seems to be writing a book. And he seems to think that Open AI has made some huge advance in pre-training. Pre-training seems like this area where it seems like you've figured something out. You're excited about it. You think this is going to be a major advance. Mark doesn't spill the beans, though.
Starting point is 00:33:39 He says, we think there's a lot of room in pre-training. A lot of people say scaling dead is dead. We don't think so at all. Big question about what that means. Is that scaling RL? Is that scaling dollars in? Is it, oh, yeah. If you invest $100 trillion, you can give it one more IQ point.
Starting point is 00:33:56 It's like, yeah, that would be an example of like scaling, holding, but like no one's going to make that trade off. No one, no one is going to be like, yeah, I'm down. Totally. Spend the $100 trillion. Okay. Okay, so what Sam said in the internal Slack memo. Oh, it was a Slack memo.
Starting point is 00:34:12 Yeah, because he was directing more employees to focus on improving features of ChatGPT, such as personalizing the chat bot for more than 800 million people. And again, we've seen them launch more functionality around this. I think the theory is that this could be a very, like, make the product really, really sticky. Whether or not that's true generally is still unclear. It's certainly people have been very loyal to 4-0.
Starting point is 00:34:41 Altman also said this is in the information piece. Did he mention Baja Blast? He hasn't specifically said Baja Blast, but I think he's kind of alluding to it. He's warming up to talking about Baja blast. Other key priorities covered by the code read include image gen, the image-generating AI that allows users to create a variety of photos. You would include it in your newsletter last week
Starting point is 00:35:06 that you've been going over to Gemini specifically for nanobanana. Yes. So I wonder if this is a broader trend. Does this actually matter? I think it does. I think that the image generation functionality, like fundamentally what LLMs are doing,
Starting point is 00:35:26 what these chatbots are doing is they're basically instantiating full web pages. They should be able to instantiate anything that you could possibly land whether it's a video, an image, a blog post with images embedded, an audio format. Like, it should be able to, like, not just understand everything and give you the answer, but it should be able to contextualize that answer in any format. And so I do think being able to generate images at the top shelf, top tier way.
Starting point is 00:35:52 The big question we were talking to Tyler was, should they say, hey, we're just going to use nanobanana, which is like a crazy thing, but, you know, there is a world where they say, like, hey, yeah, like, we're not going to focus on that. We're actually going to just bend in nanobanana, but we are going to be the front door, the aggregator, and we're just going to be the actual. Yeah, use runway in the background, right? Yeah, yeah, yeah, hand it off to a different team potentially.
Starting point is 00:36:19 I don't know. It seems like that's probably a little bit too close to home, but Ben Thompson has had this claim for a while that potentially Open AI has a strong hold on the consumer market to the point where if they swapped out the underlying model
Starting point is 00:36:36 they would still accrue tons of the value because people don't really know what model is which like I think the average user doesn't do it but first Tyler has a Yeah I mean I think that
Starting point is 00:36:46 especially makes sense in the context of images and video because they're just so expensive Yeah like I think a nano banana pro image is like I think it's like 10 cents No way it's really or okay
Starting point is 00:36:57 that might be per like a thousand or something but it's still it's still they're really expensive Yeah, yeah, yeah. Videos are even more expensive. Videos are, like, really, really expensive. Oh. So I think it makes more sense in that scenario because you would imagine that it's just, like,
Starting point is 00:37:10 so expensive to vend it yourself. It's like you're spending so much resources on that. Yeah. We have to look at this. I believe this is nanobanana. Let me see if I can find this. This nanobanana pro image that, let me see if we can pull this up.
Starting point is 00:37:26 It's from John Gregor Chuk. It says, architects are cooked. AI is coming for you. Prepare accordingly. Have you seen this, Jordan? I did see that. You did see this one? Yeah.
Starting point is 00:37:39 Did you look at the image closely? No. Okay. So, is it? It's one of the funniest images I've ever seen. So basically, this image, it's like a, it has a walkway with like a 40-foot drop to the ground. I mean, it's not quite that bad, but it's close. Yeah, I just, I didn't, I don't buy the,
Starting point is 00:38:00 the theory that architects are cooked just because you can generate like a floor plan or designs for a home, just because the actual process is you're dealing with a city, basically, right? And you're trying to get things permitted. Yeah. It's not like the problem that's just making pretty designs, right? It's the classic, let me see. I'm trying to put this in the chat. It's the classic, like, you know, is the radiologist's job just to look at images and detect cancer?
Starting point is 00:38:37 No, it's way more than that. Okay, so this is the image, and I was actually crying, laughing, because the tagline is, architects are cooked. AI is coming for you. Prepare accordingly. And you see this, and it's like this AI generated image, and it looks like remarkable. Looks like a floor plan. It looks like a floor plan.
Starting point is 00:38:56 It looks amazing. Like, it looks like, okay, yeah, that's like, all the lines are straight. We used to be in the era of, like, any text would be typo, and there would just be crazy lines everywhere. But you zoom in, and it's, like, one of the funniest layouts ever, because you realize that it's just, it's just one massive room with, like, three or four. Okay, so first off, okay, so you come in through the two-car garage, then there's a powder room.
Starting point is 00:39:24 The mud room. So, so first off, there's this mud room, mudroom and laundry with two bath tubs in it scroll up to the right okay just go yeah right there so why do you have two bath tubs next to your coat closet in the mudroom in the mud room in the mud room and then and also like you can't go normally you come out of garage you go straight to the mud room but here you have to go into the main area which is the gallery hall and then you go from there into there and so scroll to the left a little bit so we can see the what is the coat what is the coat bathroom and then there's the coat bath and two toilets And why are there two toilets next to each other?
Starting point is 00:39:59 Remember we were touring that facility and it had two bathrooms right next to each other with no line next to it? Yeah, yeah, yeah. We were in the office, in the crazy office that had the machine, one of the bathrooms just had, it was like, it was like meant to be a private bathroom and it just had two toilets there. We were like, why the two toilets? Yeah, so it's like, so you come in through your main foyer, then there's a master bathroom. Then there's a coat bathroom with two more toilets. And then there's a huge walk-in closet, which isn't even directly attached to anything else. So you have to, like, go through this corridor to get to the rest.
Starting point is 00:40:39 And so this master's suite has three toilets. But then it gets better, dude. It gets better. So go over to the top right hand side. So look at bedroom number two. It's just like off the center. Then bedroom number three is there. Then there's a Jack and Jill bath.
Starting point is 00:40:55 then scroll down. Wait, three, three sinks? Three sinks, no toilets. And then there's another bedroom. And then there's a third bathroom with a third bathroom with a two with five sinks. This is, you might not like it, John, but this is, this is, this is, this is architecture at its best. You have, you have five, you have five, five sinks next to your two. bedrooms, which do, and then also bedroom two doesn't have a, doesn't have a bed.
Starting point is 00:41:29 Anything. It just connects it. It opens into the gourmet kitchen. But then if you scroll down, if you scroll down, you can see that there's like this huge walk guest suite. What is the huge walk guest suite? And then you have this like massive dining room that just makes no sense. And then down at the bottom to kick it off, there's, of course, like the great room that's directly tied into the kitchen with just the most open floor plan. You can pop. possibly imagine. And then if you scroll down, you'll see that there's like just these windows that like, like all of a sudden. Trevor in the chat says bathroom scaling loss. Like, like, why, why are, why are all of a sudden the doors like vertical instead of, this is
Starting point is 00:42:08 supposed to be a top down image? And now I'm looking at these doors and they're like present. What did the comments say? Do the comments say like, hey, buddy, why, why'd you put, you know, three sinks in that one bathroom? Well, everyone, everyone gets that it's a joke. Oh, it was meant to be, it was meant to be a joke. Yeah, yeah, yeah, yeah. This John guy, like, totally thinks it's so funny and is just, like, joking around. And so everyone's just like nightmare fuel, like this is crazy. And John's making the same jokes. It's super convenient off the open floor plan. No kitchen toilet. Like, you know, people, like, and then people just joking about all the different stuff. And I don't know. I mean, you know, is, is AI going to help with, you know, architectural design? Of course. Is it is, is nanobes? Is nanobes? Is it, is nanobes? Is, banana going to randomly one shot, like the perfect floor plant? No, also no. Uh, but, you know, of course there's, there's stuff that's, that's the funniest image. It's so funny. So funny. Anyway, I was, I was actually dying laughing at this thing. Um, okay, back, back to the code red.
Starting point is 00:43:13 Dant. Uh, automate compliance and security, AI that powers everything from evidence collection to in continuous monitoring to security reviews and vendor risk. Uh, yeah, D-DOS. is adding fuel to the fire. Fuel of the fire. He says, this is why OpenAI is in code red in the two weeks since the Gemini launched. ChatGPET unique, daily active users, a 7-day average are down 6%.
Starting point is 00:43:36 He is sharing, to be clear, web traffic data. These traffic sources are so rough. I just feel like people use apps. Like the web traffic is probably a good proxy. It's probably a decent proxy. But even then, I just, I just, I don't know how high intent those users are. Because it's like, do you think you're being tracked by similar web that effectively?
Starting point is 00:44:02 Like, I would hope that I don't have that much spyware on my Chrome browser that knows exactly where I am. Maybe it does, but I would think that, you know, OpenAI and Gemini and Google would be like, yeah, we're not, we're not letting you put a pixel on our site. Do you know, do, do we should have, we should have someone from similar web on the show, explain it to us. Tell us your sources. Tell us everything.
Starting point is 00:44:26 How do you actually calculate all this stuff? Because, I mean, you could just poll people. You could just ask a million people. Hey, what are you using? I don't think that's how this works. But the app store. I wonder if any of the Chrome extensions sell your data. I'm sure a number of them.
Starting point is 00:44:44 Yeah, I have this Chrome extension installed right now. TBPN Timeline viewer. It was vibe coded by someone sitting over there. He doesn't even know what. programming language was written in. This is not true. Did I ever tell the story on the show? I don't know.
Starting point is 00:45:03 Tyler doesn't want to tell it. So Tyler gives us this we use a Chrome plugin to to like track the show when we're sharing posts between us. And this Chrome plugin, he like vibe coded it and he sends it over and I unpack it to install
Starting point is 00:45:17 it. And I'm like, why are there like Node modules here? Like that's usually for like Node.js, JavaScript. on the back end. And he's just like, what are you talking about? And I was like, you don't know that you're using no JS? It's a good extension, sir. Because I think it was just so...
Starting point is 00:45:32 I trust Claude. I trust Claude to make the right decision. I don't even specify what programming language it uses, which is like pretty sick. It's actually extremely bullish for Claude and Cloud Code. It's really good. Anyway, the part of the Code Red, of course, is that OpenAIs Sora app has fallen out of the top 20
Starting point is 00:45:51 most downloaded apps in the United States. on both the app store and Google Play. And so things are falling. I actually opened up SORA today. I looked at it. And there was some cool stuff happening. This is a little bit of a hot take. Like, it was not, there was still a lot of slop,
Starting point is 00:46:10 which I would define as like the, you know, it's a POV video of a bus driver with a bunch of cats on the bus. And it's like cute and funny or like, you know, it's a chipmunk, water skiing, like that type of stuff. That's why you were late for the gym today? No. But I was sneaking a peek at SORA while I was driving. If I die and crash, because I'm looking at Slop, this would be extremely depressing. But... Out of stop sign?
Starting point is 00:46:36 Yeah. I stopped. But there was one cool one, which was more like pixel art, actually. And it was interesting because you remember the Open AI Super Bowl ad? Yes. If you prompt Sora to make that type of content, it actually is really cool. And you can remix it in a very interesting way. And so, like, Sam had taken, somebody else had done, like, a bunch of geometric shapes pulsing to, like, electronic music. And then Sam was able to take it and say, make it orchestral music and make them pastel colors.
Starting point is 00:47:11 And he was able to, like, remix off of that. And that felt like, okay, maybe we're getting into Suno territory. Very odd that Suno and Sora are so close in names. I don't know how that happened. Maybe they should team up or something. Wouldn't be the first time OpenAI has named something. Well, who, who, oh, yeah, I know. I know.
Starting point is 00:47:30 I don't know. But, uh, I don't know. I, I was, I was seeing like, I, I don't think it's fully over, you know. I think it's like, it's in a, uh, it might be in just a trough of disillusionment. You know, no. This could be, this could be a trough of, this could be a trough of a trough is in a trough of disillusionment. The trough is in the trough. It's entirely possible.
Starting point is 00:47:50 But clearly, uh, the vibes are rough and people, are taking shots. Terminally online engineer says just put the ads in the chat little bro in the chat bot because Sam Alman says OpenAI is making a very aggressive infrastructure bet with new partnerships. Okay, but to be fair
Starting point is 00:48:07 this clip was from a long time ago. That was like at least a couple months ago. Yes, yes, yes. And also you can do both. What is interesting is that it's maybe... It sounds like they're delaying ads. Which is, which feels odd because I personally was, I'm maybe the only person that's really excited about ads in Chachapit.
Starting point is 00:48:28 I think it's a good thing for the business. I think it makes a ton of sense. And I was excited to see where that rolls out. I hope that they don't delay it. I think that that's where they should be running. But if they really are losing ground to Google and Gemini and like the Gemini app so quickly, I'm shocked because it feels like the Gemini 3 News, like the launch went well. People were excited about the model.
Starting point is 00:48:49 The model card looked good. The benchmarks look good. but you still have to be pretty tuned in to understand the nuances of the model one way or another. Like, it's just not like the big model smell and the vibes. Like, your average AI user doesn't care if the model responds with, it's not just this, it's that. Like, most people, clearly, that's why it wound up getting RLed into the model. Most people are like, wow, contrastive parallelism. This is epic.
Starting point is 00:49:23 I love it. Thank you. Like this is really... Contrastive parallelism. Yeah. Antithetical parallelism. Like, I've never... This is like a big, big phrase, big word.
Starting point is 00:49:33 Like, this is amazing. And so I'm shocked that there would be such a... I'm not shocked by like a vibe shift in on X and in teapot with regard to how people have been skeptical of the Open AI financing. And so they've been looking for a... a crack to show, and Gemini coming out and leapfrogging a little bit, even if it's just on some obscure benchmark that the end user might not even care about. I was really interested in, I understand that, like, X would jump on that narrative, but I'm surprised to see, if it's true, this idea that, like, there's actually some sort of consumer shift.
Starting point is 00:50:15 And, I mean, it seems like with the red alert comments, like, maybe, maybe it is, maybe it is. Do you think they have to explain the funding gap at this point, or can we all just agree that maybe everyone got a little too excited? Yeah, I don't know. I don't know. I feel like everyone's sort of repriced everything already with the Oracle round-tripping and just this idea that, you know, some of the equity investments, like they are circular, but it's basically just like a discount on their purchases. and, you know, these things probably aren't as binding as we think. And so I feel like the open AI is going to blow up the economy narrative. I feel like that was really oversold and is much – it should be fading, in my opinion. But I don't know. Bucco Capital bloke has been digging into the funding hole.
Starting point is 00:51:13 Apparently, ChatGPT is also down right now. I just tested it. It's not down for me, but the chat says it's down. Well, it's... And X is saying it's down. Oh, really? Oh, wow. Time to Baja blast those servers back online, brother.
Starting point is 00:51:30 It's time to rock. We need a pump-up speech that doesn't include any negative phrases that can be taken out of context. We need to be Baja blasting. We have to Baja blast. We have to Baja blast our way to the top of the app store. SORA team, I need you to Baja blast. Bill, it's time to, it's time to Baja blast to the top of the app store. You have to Baja Blast at the top of the app store. You have to, and we're going to have to Baja blast some funding into this company because apparently there's a $270 billion
Starting point is 00:52:06 funding hole here. This is from a podcast between Ranjan, who writes Margins, and Alex Kantrowitz at Big Technology. They did a podcast together. And here's the quote from Bucco Capital Bloke. It says, squaring the total, it leaves OpenAI in a $270 billion funding hole. The math doesn't work. Maybe OpenAI should release to the world here's how the math can work, because I haven't seen anyone state how this can actually work. And so even if you get there, OpenAI does fall $207 billion short of the money it needs to continue funding its commitments, right? So it has, in 2030, OpenAI free cash flow will be about $287 billion. That's, like, insane. This feels silly to me, because if you're in a situation
Starting point is 00:52:57 where you have 287 billion dollars free cash flow like you can't raise more debt on that. Like I feel like math tends to work out when you go from a nonprofit. to a $300 billion cash flow a year in 10 years. Like, it's just, everything just forms in front of you. Like, yes, you are building the bridge as you're driving, but, like, that tends to happen when you're on that much of a tear. The bigger question is, like, can they actually free cash with their $287 billion in 2030? So Amazon's free cash flow for 2024 was $38 billion.
Starting point is 00:53:39 And let's see what Google's is. Yeah, this is like 72. So they're saying they're going to do three times more than Google and Amazon. And the HSBC report is modeling $386 billion in annual enterprise AI revenue by 2030. Enterprise AI revenue, huh. These are just huge numbers. It's almost not worth analyzing. I still think the biggest thing is just understanding how significant, how tied up these contracts are. Well, let me tell you about Fal, the generative media platform for developers, develop and fine-tune models with serverless GPUs and on-demand clusters.
Starting point is 00:54:27 So what else is going on? We should read through Ben Thompson's latest piece because he's provided a lot more context on Google, NVIDIA, and OpenAI with his latest post. And we thank Ben Thompson for always having an even keel. Highly recommend subscribing to Stratechery. It's a fantastic publication if you're not subscribed already. And he's a former guest of the show.
Starting point is 00:54:55 So let's read through his latest Monday piece. He says a common explanation as to why Star Wars was such a hit and continues to resonate nearly half a century on from its release with everyone except Jordy Hayes, who hasn't seen. seen it? He hasn't seen any movies? I've seen Star Wars, John. How many Star Wars have you seen? I have to have seen all of them except some of the more, it's been some recent. All of them except some of them. Well, no, the more, the more recent ones. Like, it hasn't there been like a new Star Wars in the last? How many Star Wars are there, Trudy? Is there, there was, is there six?
Starting point is 00:55:30 Six? There's six Star Wars. That's how many movies they've made? Six like real Star Wars. They're six. There's six. I'm gonna, I'm gonna, hasn't there been, like some like everyone calls it the septilogy yeah there's six wait uh aren't there are six there's nine there's three trilogies there's the there's the original trilogy okay prequel trilogy and then the sequel trilogy and then there's also two spinoffs okay so i didn't watch i didn't watch any of like the the the like new ones but you watched the prequel trilogy the sky walker what's about george lucas directed yeah okay okay so so he's a lucas head i'm uh yeah exactly okay so you've seen a new hope. You've seen Empire strikes back. I look at those as like real Star Wars. You've
Starting point is 00:56:13 seen Return of the Jedi, and then you've seen The Phantom Menace and Revenge of the Sith and... return of the something. I actually don't know that much of the Star Wars. But anyway, you should know enough to follow along with this analogy from Ben Thompson. He says, you have Luke, bored on Tatooine, called to adventure by a mysterious message borne by R2-D2, that he initially refuses, the refusal of the call. This is the classic, this is the classic hero's journey. So he refuses the call. A mentor in Obi-Wan Kenobi leads him to the threshold of leaving Tatooine, and he faces tests while finding new enemies and allies. He enters the cave, the Death Star, escapes after the ordeal of Obi-Wan's death. Spoiler alert. Ben, what are you doing,
Starting point is 00:57:02 brother? What if somebody hasn't seen it and they don't know that Obi-Wan dies? It's crazy. And carries the battle station plans to the rebels while preparing for the road back to the Death Star. He trusts the force in his final test and returns transformed. And when you zoom out to the original trilogy, it's simply an expanded version of the story. This time, however, the ordeal is the entire second movie, The Empire Strikes Back. The heroes of the AI story over the last three years have been two companies, OpenAI and NVIDIA. The first is a startup, called with the release of ChatGPT to be the next great consumer tech company. The other was best known as a gaming chip company characterized by boom and bust cycles, driven by their visionary and endlessly optimistic founder, transformed into the most essential infrastructure provider for the AI revolution.
Starting point is 00:57:56 Over the last few weeks, however, both have entered the cave. They're in the cave. This is the cave of disillusionment. And are facing their greatest ordeal. The Google Empire is very much striking back. And I believe, didn't Anjney over at A16Z coin that, like, the Empire strikes back?
Starting point is 00:58:17 Formerly A16Z. Now independent. Oh, he's independent? Yeah. Oh, I had no idea. So is this what Sam meant when he tweeted the picture of the Death Star? I feel like we never really figured out
Starting point is 00:58:28 what he meant by that. That was before GPT-5, I think. Yes. I think he wanted the 2025 vague post of the year. No. It's so vague that even after the release, no, no, no, I think this is it.
Starting point is 00:58:41 I think this is it. It's Google's the empire and he's launching the thing that will take a... It still doesn't make sense at the time because, like, OpenAI was, like, clearly in the lead of the models. Like, 2.5, I think, I think 2.5 was the best. Not on cash flow. You know, who has more soldiers, who has more researchers, who has more TPUs, right?
Starting point is 00:59:00 Like, you know, it would be fair to characterize, it'd be fair to characterize Google as the empire the whole time. Yeah, I mean, I guess at the founding of OpenAI, then Google is definitely the Death Star. Interesting. Isn't that kind of the origin story of OpenAI? Yeah, yeah, yeah, it was. They were worried about that. Anyway, I enjoy the Star Wars-based analogies almost as much as I enjoy Numeral.com.
Starting point is 00:59:29 Compliance. handled. Numeral worries about sales tax and VAT compliance so you can focus on growth. So Google strikes back. The first Google blow was Gemini 3, which scored better than OpenAI's state-of-the-art model on a host of benchmarks, even if actual real-world usage was a bit more uneven. Gemini 3's biggest advantage is its sheer size and the vast amount of compute that went into creating it. This is notable because OpenAI has had difficulty creating the next generation of models beyond the GPT4 level of size and complexity. What has carried the company is a genuine breakthrough in reasoning
Starting point is 01:00:05 that produces better results in many cases, but at the cost of time and money. Oh, time and money. Throw in a ramp.com ad right in the middle of the Stratechery article. I love it. Gemini 3's success seemed like good news for Nvidia, who Ben listed as a winner from the release. Quote, this is maybe the most interesting one.
Starting point is 01:00:28 Nvidia, who reports earnings later today, is on one hand a loser because the best model in the world was not trained on their chips, proving once and for all that it is possible to be competitive without paying Nvidia's premiums. On the other hand, there are two reasons for Nvidia's optimism. The first is that everyone needs to respond to Gemini, and they need to respond now, not at some future date when their chips are good enough. Did you know that Sundar was, like, people were claiming that he had used the phrase code red back in 2022 at the ChatGPT launch, when Bard was popping. And so I'm remembering that now, but I missed it. Sundar came out and said he didn't use that exact term, but there was reporting that he did. And I heard a rumor that he also said that he wanted to Baja blast Sam Altman out of San Francisco, out of the atmosphere with a Death Star laser. I want to try to find more historical examples of Baja blasting, folks. We got to Baja blast. You got to Baja blast sometimes. So, Google started its work on TPUs a decade ago; everyone else is better off sticking with Nvidia, at least if they want to catch up. Secondly, and relatedly, Gemini reaffirms that the most important factor in catching up or moving ahead is more compute. This analysis, however, missed one important point: what if Google sold its TPUs as an alternative to NVIDIA? We're going to talk to Tae Kim, author of The
Starting point is 01:02:04 NVIDIA way about that. He's going to tell all. He's going to tell all. He's breaking his silence. So that's exactly what the search giant is doing. First, with a deal with Anthropic, then a rumored deal with meta, and third with the second wave of neoclouds, many of which started as crypto miners and are leveraging their access to power to move into AI. So a lot of those neoclouds, they found a bunch of power, and they don't really have the right chips yet, or maybe they're upgrading their chips. They might be in a new cycle, and so TPU could be at the top of the menu for them. Suddenly, it is Nvidia that is in the crosshairs with fresh questions about their long-term growth,
Starting point is 01:02:40 particularly at their sky-high margins. If there were, in fact, a legitimate competitor to their chips, this does, needless to say, raise the pressure on Open AI's next pre-training, run on Nvidia's Blackwell chips. the base model still matters and Open AI needs a better one and Nvidia needs evidence that it can be created on their chips what is interesting to consider
Starting point is 01:03:01 is which company is more at risk from Google and why. On one hand, Nvidia is making tons of money, and if Blackwell is good, Vera Rubin promises to be even better. Moreover, while Meta might be
Starting point is 01:03:13 a natural Google partner, the other hyperscalers are not. They're not going to be selling... You know, we're going to have the CEO of Amazon Web Services, Matt Garman, on the show in just 30 minutes. And AWS announced a new chip. And I don't think AWS is going to be buying TPU anytime soon, but we will be asking him that question exactly. And I want to get to the bottom of it. So OpenAI, meanwhile, is losing more
Starting point is 01:03:36 money than ever and is spread thinner than ever, even as the startup agrees to buy ever more compute with revenue that doesn't exist yet. That's a wild sentence. And yet, despite all that, and while still being quite bullish on NVIDIA, I still like OpenAI's chances more. Whoa. Oh, Ben Thompson likes OpenAI's chances.
Starting point is 01:03:59 Indeed, if anything, my biggest concern is that I seem to like OpenAI's chances better than OpenAI itself. Whoa. NVIDIA's moats. Wait, he wrote this before, he wrote this before the red alert news. Interesting. He's really got a
Starting point is 01:04:15 crystal ball over there. So, NVIDIA's moats. If you go back a year or two, you might make the case that Nvidia had three moats relative to TPUs: superior performance, significantly more flexibility due to GPUs being more general purpose than TPUs, and CUDA and the associated developer ecosystem surrounding it. OpenAI, meanwhile, had the best model, extensive usage of their API,
Starting point is 01:04:35 and the massive number of consumers using ChatGPT. The question, then, is what happens if the first differentiator for each company goes away. That, in a nutshell, is the question that's been raised over the last two weeks. Does NVIDIA preserve its advantages if TPUs are as good as GPUs, and is OpenAI viable in the long run if they don't have the unquestioned best model? So, NVIDIA's flexibility advantage is a real thing. It's not an accident that the fungibility of GPUs across workloads was focused on as a justification for increased capital expenditures by both Microsoft and Meta. TPUs are more specialized at the
Starting point is 01:05:11 hardware level and more difficult to program for at the software level. To that, end to the extent that customers care about flexibility, then Nvidia remains an obvious choice. The interesting thing about the flexibility is that isn't SSI a big TPU buyer? I feel like they, I feel like SSI was maybe going big on TPU. And I think of SSI is very much like, we're going to experiment. We need maximum flexibility. By default, I would assume that they're a heavy consumer of GPU because they want as much flexibility as possible. But maybe, the nature of Ilya's research is flexibility within, that
Starting point is 01:05:50 is still afforded within the TPU ecosystem. They're using TPUs through Google Cloud. Yeah. But there's been no... Oh yeah, they're not buying them. But still, it just means... That implies more flexibility because you can just turn it on or off. Yeah, I'm talking about the actual, like, the TPU is an ASIC. It has, it is like
Starting point is 01:06:08 literally fewer features than a GPU. Like a gaming GPU and, like, the TPU doesn't, I think it doesn't support FP4, right, or something like that. There's some type of math that is harder to do on a TPU because it's making tradeoffs, right? And so even though I don't understand it,
Starting point is 01:06:26 fully, I understand that TPUs are more specialized at the hardware level. And so if you were to be in like the era of research, maybe you would want something that's less specialized because you'd be like, I'm going back to exploring all sorts of different types of math
Starting point is 01:06:41 that aren't necessarily optimal. Again, if you're buying TPUs through the cloud, you're effectively just buying cloud services. You do have more flexibility because you can say, hey, we want to use more of this or we want to use less. You're not, like, buying a bunch of servers and chips. On the scale, on the scale thing, that makes perfect sense. Yeah, I think also, like, historically, TPUs have definitely been more restrictive just because, like, the software was just not as good, or it was closed source or whatever. And then, you know, yesterday Dylan Patel was talking about how Google is slowly trying to open source more and more stuff from the TPU. So you would imagine that in the future, it should be much easier to use TPUs, generally.
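A minimal sketch of the specialization point above, not from the show: which low-precision formats a chip actually supports is a hardware property, so the same numerical code can behave differently, or fail outright, depending on whether the default backend is a CPU, GPU, or TPU. The dtypes probed here are only illustrative.

```python
# Hypothetical illustration: probe which low-precision matmuls the current
# accelerator backend will accept. A dtype may be unsupported on one backend
# or silently computed at a different precision on another.
import jax
import jax.numpy as jnp

def probe_matmul(dtype):
    """Run a tiny matmul in `dtype` and report what actually happened."""
    name = getattr(dtype, "__name__", str(dtype))
    try:
        a = jnp.ones((8, 8), dtype=dtype)
        out = jnp.matmul(a, a)
        out.block_until_ready()  # force execution on the device
        return f"{name}: ok (result dtype {out.dtype})"
    except Exception as exc:  # unsupported dtype/backend combinations land here
        return f"{name}: unsupported ({type(exc).__name__})"

if __name__ == "__main__":
    print("default backend:", jax.default_backend())  # 'cpu', 'gpu', or 'tpu'
    for dt in (jnp.float32, jnp.bfloat16, jnp.float16):
        print(probe_matmul(dt))
```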
Starting point is 01:07:09 And then, you know, yesterday I Don't Patel was talking about how Google is slowly trying to open source more and more stuff from the TPU. So you would imagine that in the future, it should be much easier to use TPUs, generally. Sean, in the chat says, flexibility in terms of, hey, I have this new architecture, do I need to write kernel code from scratch, or is there a nice CUDA module I can use just time, right? Flexibility is engineering work from NVIDIA. Yeah, yeah, no, that's a really good point. So CUDA, meanwhile, has been a critical source of Nvidia lock-in,
Starting point is 01:07:36 both because of the low-level access it gives developers, but also because there is a developer network effect, Dylan Patel was talking about this, You're just more likely to be able to hire low-level engineers if your stack is on NVIDIA. The challenge for NVIDIA, however, is that the big company effect could play out with Kuda in the opposite way to the flexibility argument. While big companies like the hyperscalers have the diversity of workloads to benefit from the flexibility of GPUs, they also have the wherewithal to build an alternative
Starting point is 01:08:04 software stack, that they did not do so for such a long time is a function of it's simply not being worth the time and trouble when capital expenditure plans reach the hundreds of billions of dollars. However, what is worth the time and trouble changes. A useful analogy here is the rise of AMD in the data center. That rise has not occurred in on-premises installations or the government, which is still dominated by Intel. Rather, large hyperscalers found it worth their time and effort to rewrite extremely low-level software to be truly agnostic between AMD and Intel, allowing the former's lead in performance to win the battle. And so, AMD, better performance, better efficiency per dollar, but didn't have the best
Starting point is 01:08:52 software. And now, but now because there's so much on the line, the spending amount is so high, companies will go and work around all the bugs, develop new software that allows them to take advantage of AMD's better performance. In this case, the challenge NVIDIA faces is that its market is a relatively small number of highly concentrated customers with the resources, mostly as yet unutilized, to break down the CUDA wall as they already did in terms of Intel's differentiation. It's clear that NVIDIA has been concerned about this for a long time. This is from Nvidia Waves and Moats, which he wrote at the absolute top of the NVIDIA hype cycle after the 2024 introduction of Blackwell. This takes this article full circle. This is from the previous
Starting point is 01:09:41 Ben Thompson article in Stratechery. It says, in the before times, i.e. before the release of ChatGPT, NVIDIA was building quite the free software moat around its GPUs. The challenge is that it wasn't entirely clear who was going to use all of that software. Today, meanwhile, the use cases for those GPUs are very clear, and those use cases are happening at a much higher level than CUDA frameworks, i.e. on top of models. That, combined with the massive incentives towards finding cheaper alternatives to Nvidia, means both the pressure to and the possibility of escaping CUDA is higher than it ever has been, even if it is still distant for low-level work, particularly when it comes to training. NVIDIA has already started responding. I think
Starting point is 01:10:24 that one way to understand DGX Cloud is that it is Nvidia's attempt to capture the same market that is still buying Intel server chips in a world where AMD chips are better, because they have already standardized on them. NIMs are another attempt to build lock-in. In the meantime, though, it remains noteworthy that Nvidia appears not to be taking as much margin with Blackwell as many have expected. The question as to whether they will have to give back more in future generations will depend on not just their chips' performance, but also on re-digging a software moat increasingly threatened by the very wave that made GTC such a spectacle.
Starting point is 01:11:03 So Blackwell margins are doing just fine, I should note. He's back to the original article, the Monday article. As they should in a world where everyone is starved for compute. Indeed, that may make this entire debate somewhat pointless. Implicit in the assumption that TPUs might take share from GPUs is that for one to win, the other must lose. The real decision-maker may be TSMC, which makes both chips and is positioned to be the real brake on the AI bubble. Interesting. So, ChatGPT and moat resiliency. I can read through this one.
Starting point is 01:11:39 ChatGPT, in contrast to NVIDIA, sells into two much larger markets. The first is developers using their API, and according to OpenAI anyways, this market is much stickier and reticent to change, which makes sense. Developers using a particular model's API are seeking to make a good product, and while everyone talks about the importance of avoiding lock-in, most companies are going to see more gains from building on and expanding
Starting point is 01:12:01 from what they already have. Always Baja blast. And for a lot of companies that is OpenAI. One thing I would caveat here: we were talking to a founder yesterday, off the show,
Starting point is 01:12:13 who was saying that as soon as Gemini 3 launched, he immediately spent like 12 hours just moving over to Gemini from OpenAI. So depending on the product, I don't know that API is always going to be super
Starting point is 01:12:31 sticky. I say winning business one app by one app by one will be a lot harder for Google than simply making a spreadsheet presentation to the top of a company about upfront costs and total cost of ownership. Still, API costs will matter, and here Google almost certainly has a structural advantage. The biggest market of all, however, is consumer, Google's bread and butter. What makes Google so dominant in search, impervious to both competition and regulation. is that billions of consumers choose to use Google every day, multiple times a day, in fact. Yes, Google helps this process along with its payments to its friends, but that's downstream from its control of demand, not the driver.
Starting point is 01:13:07 What is paradoxical to many about this reality is that the seeming fragility of Google's position, competition really is a click away, is, in fact, its source of strength. And then there's an excerpt from the United States. We can skip this one and continue at the bottom. The CEO of a hyperscaler can issue a decree to work around CUDA. An app developer can decide that Google's cost structure is worth the pain of changing the model undergirding their app. Changing the habits of 800 million people who use ChatGPT every week,
Starting point is 01:13:36 however, is a battle that can only be fought individual by individual. This is ChatGPT's true difference from Nvidia in their fight against Google. And so this, I think, is the most important takeaway: Ben Thompson created aggregation theory, this idea that it's so important to aggregate demand in the modern internet world. It's potentially the only thing you can do. You can't really monopolize supply, it's very hard to monopolize supply, but monopolizing demand is something that happens. And the strength of habits is significant. Like, we're watching this stuff every single day, so we can take the time to, okay, yeah, we should test out this other model.
Starting point is 01:14:24 We should daily drive this app. But for a lot of people, if they have an app that's installed and they've been using it for a year, they're never changing. Even if the model is slightly better over there. They're just not even going to hear about it because they're just like, this is the thing that I use to plan my vacations.
Starting point is 01:14:38 And the thing that I've heard come up multiple times is people that, when Gemini 3 launched, they switched to Gemini 3 on desktop, but they stayed using ChatGPT on mobile. Yep. And, I mean, to be completely transparent, like, the Gemini mobile app is really, really struggling to stay connected.
Starting point is 01:14:57 There's something where, when you fire off a prompt, it doesn't, like, save it locally and cache that and then send it off, inference it, and then come back. Like, unless you keep the app open, it will just give you, like, a server disconnected error. Like, I've gotten, like, dozens of these. And that's going to be a real, like, I think it should be something that they should be able to fix in, like, a weekend.
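A rough sketch of the save-then-send pattern just described, not any real Gemini client: persist the prompt locally before sending so a dropped connection can be retried instead of surfacing a "server disconnected" error. The endpoint URL and payload shape are placeholders.

```python
# Hypothetical illustration of an offline-tolerant prompt queue.
import json
import time
import uuid
from pathlib import Path

import requests

QUEUE_FILE = Path("pending_prompts.json")  # local cache of unsent prompts
ENDPOINT = "https://example.invalid/v1/generate"  # placeholder, not a real API

def enqueue(prompt: str) -> str:
    """Write the prompt to disk first, so it survives the app being backgrounded."""
    pending = json.loads(QUEUE_FILE.read_text()) if QUEUE_FILE.exists() else {}
    prompt_id = str(uuid.uuid4())
    pending[prompt_id] = {"prompt": prompt, "created": time.time()}
    QUEUE_FILE.write_text(json.dumps(pending))
    return prompt_id

def flush(max_retries: int = 3) -> None:
    """Try to send every cached prompt; keep anything that still fails."""
    if not QUEUE_FILE.exists():
        return
    pending = json.loads(QUEUE_FILE.read_text())
    for prompt_id, item in list(pending.items()):
        for attempt in range(max_retries):
            try:
                resp = requests.post(ENDPOINT, json={"prompt": item["prompt"]}, timeout=30)
                resp.raise_for_status()
                pending.pop(prompt_id)  # success: drop it from the local queue
                break
            except requests.RequestException:
                time.sleep(2 ** attempt)  # back off, then retry
    QUEUE_FILE.write_text(json.dumps(pending))

if __name__ == "__main__":
    enqueue("Plan a three-day trip to Tokyo")
    flush()
```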
Starting point is 01:15:21 But, you know, hopefully it's soon. But for a lot of people... Logan. Yeah. Well, they'll get it. They'll get it. But back to the moat and the map, the moat map and advertising. This is, I think, a broader point.
Starting point is 01:15:37 The naive approach to moats focuses on the cost of switching. In fact, however, the more important correlation to the strength of a moat is the number of unique purchasers or users. The strength of the moat is increased by
Starting point is 01:16:00 Yeah. And ChattGBT has a billion buyers. And once it has advertised. Or 20 million. How many? Yeah, advertisers might be even more, right? Yeah. So this is certainly one of the simpler charts I've ever made because it's literally
Starting point is 01:16:11 just one line. But it's not the first in the moat genre. So he talks about the moat map. I argued that you could map large tech companies across two spectrums, the degree of supplier differentiation from Facebook where the suppliers completely commoditized, just your friend on Facebook, to Microsoft and Apple, where the suppliers are somewhat more controlled. Yeah, there's the...
Starting point is 01:16:35 What a chart. The more unique buyers of your product you have, the stronger your moat, because it's hard to convince each one of them. And then second, the extent to which a company's network effects were internalized or externalized. Internalized network effects are Facebook again, and then externalized is Microsoft. And so putting this together gave the moat map. So who has the network effect versus the suppliers map? If we scroll down there, you can see them.
Starting point is 01:17:04 What you see in the upper right are platforms, the lower left are aggregators. Platforms like the App Store enable differentiated suppliers, which allows them to profitably take a cut of purchases driven by those differentiated suppliers. Aggregators, meanwhile, have totally commoditized their suppliers, but have done so in the service of maximizing attention, which they can monetize through advertising. It's the bottom left that I'm describing with the simplistic graph above,
Starting point is 01:17:27 the way to commoditize suppliers and internalize network effects is by having a huge number of unique users, and by extension, the best way to monetize that user base and to achieve a massive user base in the first place is through advertising. It's so obvious the bottom left
Starting point is 01:17:40 is where ChatGPT sits. I wonder what do you think, then, about them potentially kind of... Delaying ads? Yeah, delaying ads. Probably punching the air. Him and Eric Seufert would probably
Starting point is 01:17:52 ugh. No. And I'm right there with them. I completely agree. Boom. Launch the ads product. Launch the ads product.
Starting point is 01:18:02 Get it out. Come on. Don't delay that. That's the most important thing. So at one point, it didn't seem possible to commoditize content more than Google or Facebook did.
Starting point is 01:18:15 But that's exactly what LLMs do. The answers are a statistical synthesis of all the knowledge the model makers can get their hands on and are completely unique to every individual. At the same time, every individual user's usage should, at least in theory, make the model better over time. It follows, then, that ChatGPT should obviously have an advertising model. This isn't just a function of needing to make money. Advertising would make ChatGPT a better product. It would have more users using it more, providing more feedback. Capturing purchase signals, not from affiliate links, but from personalized ads, would create a
Starting point is 01:18:48 richer understanding of individual users, enabling better responses, and as an added bonus, and one that is very pertinent to this article, it would dramatically deepen OpenAI's moat. Yeah, I keep going back to this idea that OpenAI needs personalized ads, like Instagram ads. Like Sam said when he was interviewed multiple times on his ad strategy, he was like, you know, ads could be bad, but these Instagram ads are pretty good. And now people are reading this as backtracking. It's like, no, that's fine. Do the proper business model. Like, please implement the correct business model. I'm happy about that. But the interesting thing is that Instagram does not serve you ads necessarily when you search for something. Like,
Starting point is 01:19:36 if you're on a video for, like, a Ferrari, you don't just immediately get an ad as your next thing for, like, Ferrari of Hollywood or Ferrari of Beverly Hills. No, you get an ad for the toaster that you were about to check out. I actually do, like, half the ads I get on Meta are local dealerships in LA. Yes, but importantly not when you're, like, searching. It's not tied to search. And so ChatGPT can do the same thing, where they can clearly show you something that you are about to check out. You're shopping for Christmas for this thing. You're searching for the Roman Empire.
Starting point is 01:20:20 Let's show you the ad for the thing that you're shopping for right next to it. It's fine. And so I think that can work very well. Anyway, let's go to Google's advantages. There is no question that Google can win the fight for consumer attention. The company has a clear lead in image and video generation, which is one of the reasons why I wrote about YouTube being the tip of the Google spear. I mean, Google's advantage in data is insane.
Starting point is 01:20:40 Like, YouTube. So massive. And that's got to just compound. And I mean, we're uploading another three hours of video to YouTube today. You're welcome. This one's for you, Sundar. This one's for you, Demis. But the flip side is, like, they also see the entire internet because of the way the
Starting point is 01:20:58 Google bot scrapes. Like, the Google searching in Gemini is such a killer feature. Like, it's such a killer feature if they can keep that on and they can actually surface that. And the AI search results are obviously going to get good. They're going to figure out how to surface it. I think I'm still pretty optimistic. But let's see what Ben Thompson has to say. And let's also tell you about Attio, the AI-native CRM.
Starting point is 01:21:19 Attio builds, scales, and grows your company to the next level. So Google is obviously capable of monetizing users, even if they haven't turned on ads in Gemini yet. It's also worth pointing out, as Eric Seufert did in a recent Stratechery interview, two of the best collabing. We'd love to see it.
Starting point is 01:21:42 that has undergirded all of Google's innovative. over the years, and it is what makes them such a behemoth today. In that light, OpenAI's refusal to launch and iterate on an ads product for chat GPT, now three years old, is a dereliction of business duty. He's calling him out, particularly as the company signs deals for over a trillion dollars of compute. What are you doing? Sam, get the ads out.
Starting point is 01:22:03 Put the ads in the chat bot. We love it. Just put the ads in the chat bot. And you got to Baja blast some ads into that app. You got to. I want ads in ChatGPT. Please. On the flip side, it means Google has the resources to take on ChatGPT's consumer lead with a World War I-style war of attrition. Rheinmetall callback. OpenAI's lead should
Starting point is 01:22:28 be unassailable, but the company's insistence on monetizing solely via subscriptions, with a degraded user experience for most users and price elasticity challenges in terms of revenue maximization, is very much opening the door to a company that actually cares about making money. To put it another way, the long-term threat to NVIDIA from TPUs is margin dilution. The challenge of physical products is that you do actually have to charge people who buy them, which invites potentially unfavorable requirements. So, yeah, both Gemini and ChatGPT will have ads eventually, right? You can bet on that. Who will ramp ad revenue faster? It's hard
Starting point is 01:23:09 not to bet on Gemini, even with a smaller user base, because they have the ad network. They have all the customer relationships already. They can just say, like, hey, here's a pop-up, do you want it? Here's $10,000 of free ad credits, try it out, right? It's like native. It will already be in AdWords. The example that I use is like how, you know, Zuck was able to take Instagram, which had a lot of users, plug it into the Meta ads platform, and just scale revenue like crazy. And then do it again with Reels. And so, yeah, very, very clear that they both will have ads. And, again, if Gemini can really ramp that quickly, they could, again, like, I do feel like we're moving towards a world where every consumer will be able to get the best LLM for free, right? I don't necessarily believe that every American will be paying for an LLM in five years.
Starting point is 01:24:07 And so if Google can get there first and then keep Gemini on the frontier and deliver the best, free, fastest text and image model, that's going to be very, very difficult to compete with. And so, again, like getting to ads faster, it feels like it makes more sense. I like that point. I have a rebuttal. First, I'm going to tell you about Figma. Think bigger, build faster. Figma helps design and development teams build great products together. So, getting to ads first is an advantage.
Starting point is 01:24:41 That's your take. I like it, but there is a little bit of a risk with launching ads first, because you could forever be branded with, you're the ads one. And we saw this when Aravind from Perplexity came on the show, and he mentioned this idea of ads in LLM queries. We all agreed in that discussion that ads were going to come to AI tools, because that is the way to get the most people using them and make intelligence free.
Starting point is 01:25:14 And you got in a debate with Mark Cuban over this, for example. And it seemed very logical. But there is the fact that the first major chat app to put ads in their app is going to be a massive news cycle. It's just going to be, like, national news. Sam Altman. Exactly. It'll be OpenAI has ads now, or it'll be Gemini has
Starting point is 01:25:40 ads now. And so, you don't necessarily, like, it's much easier to be the second mover there, because it's going to be less of a news cycle. And so you kind of do want to, there is a little bit of an advantage to being the second mover there, right? Because you're going to get
Starting point is 01:25:56 sort of branded as like, oh, that's the ad supported one. And the other one can add ads and people will be like, oh, yeah, like, I guess. But that's like standard, yeah. there's going to be like a backlash and people will be like oh no I don't like this company blah blah blah blah like yeah I mean the harder the harder thing is just how do you do it right certainly like I do feel like it's different you're going to an LLM people are going to an LLM for advice and recommendations that's different than going to Google and searching and seeing ads at the top
Starting point is 01:26:23 I mean there's plenty of surface area I don't know I'm not saying there's no surface area but I'm just saying like the right way to do ads in LLM is not clear yet Um, yeah, yeah, I mean, they will need to do some experimentation, but I mean, just starting with, um, you know, like Google already has retargeting information. I actually went to, uh, Gemini with a question and it clearly knew everything about me. And it said, you're the host of TVPN and you've started, you founded these companies. And it had, it already knew that probably just because I authenticated with something else. I don't know. But, uh, it should know, okay, hey, we could retarget you with this. Let's put this in. Um, and, and the same thing. with Open AI, like with ChatGPD. There's plenty of spaces where it's like, you're waiting for it to give you the answer. Okay, we're generating you the image. Why don't you show me other images of ads right there?
Starting point is 01:27:14 There's tons of surface area. I agree that there will be a whole bunch of iterations on like what the ideal ad looks like. But yeah, you can clearly get out. So let's go back to Ben Thompson, close this out. Says the reason to be more optimistic about Open AI is that an advertising model flips this on its head. Because users don't pay, there is no ceiling on how much you can make from them,
Starting point is 01:27:36 which by extension means that the bigger you get, the better your margins have the potential to be, and thus the total size of your investments. Again, however, the problem is that the advertising model doesn't exist yet. So he started this article recounting the hero's journey in part to make the easy leap to The Empire Strikes Back. However, there is a personal angle as well. The hero of this site has been aggregation theory and the belief that controlling demand trumps everything else. Google was my ultimate protagonist. Moreover, I do believe in the innovation and velocity that comes from a founder-led company like NVIDIA. And I do still worry about Google's bureaucracy and disruption potential making the company less nimble and
Starting point is 01:28:17 aggressive than OpenAI. More than anything, though, I believe in the market power and defensibility of 800 million users, which is why I think ChatGPT still has a meaningful moat. At the same time, I understand why the market is freaking out about Google. Their structural advantages in everything from monetization to data to infrastructure to R&D are so substantial that you understand why OpenAI's founding was motivated by the fear of Google winning AI. It's very easy to imagine an outcome where Google's inputs simply matter more than anything else, which is to say one of my most important theories is being put to the ultimate test, which perhaps is why I'm so frustrated at OpenAI's avoidance of advertising. Google is now my antagonist. Google has already done this once. Search was the ultimate example of a company winning an open market with nothing more than a better product.
Starting point is 01:29:10 Aggregators win new markets by being better. The open question now is whether one that has already reached scale can be dethroned by the overwhelming application of resources, especially when its inherent advantages are diminished by refusing to adopt an aggregator's optimal business model. I'm nervous and excited to see how far aggregation theory really goes. fascinating. That's his baby. Yeah. It's a, it is, I agree. It is the correct, it is the correct framing. It'll just be very interesting to see. I, I really wonder, who's going to, who's going to take the leap first? Who is going to, who's going to jump and, and put ads in, in the, in the app first?
Starting point is 01:29:52 It feels like Google should do it. It feels like Google will be able to do it. Yeah, nobody's going to be like, what? Google is putting ads in a product. It won't be that surprising. So they should probably move faster. We have some breaking news. What's the breaking news? Jason Fried is joining the show at 2 p.m.
Starting point is 01:30:10 Surprise guest. He's launching Fizzy today. Yes. Kanban as it should be, not as it has been. We'll wait to talk about this until he joins in an hour and 20 minutes. So a little surprise guest appearance
Starting point is 01:30:26 from a legend. He's calling out his competitors directly. I love when the founders do that. Founder mode. Founder mode. Should we talk about John Giannandrea leaving the company? He's out.
Starting point is 01:30:39 I quit. I quit. You like that one. I think that's probably been used before. Some headline, but. Amar Subramanya. So, of course, Mark Gurman has the scoop, I believe. The Gurmanator.
Starting point is 01:30:54 The Gurmanator's at it again. He says Apple AI chief John Giannandrea is leaving the company. Amar
Starting point is 01:31:27 spent 16 years at Google, where he was head of engineering for Google Gemini. Wait, oh, I guess at the end of that, because Gemini's not 16 years old. This is bearing the lead. He joined Microsoft AI four months ago. Wow, what a crazy turn of that. LinkedIn says six months ago, but who's counting? Yeah, who's counting? That's pretty fast. And so, but this makes, this makes sense, considering Apple is partnering with Gemini. And not a lot of people are going to be in a better position to help integrate that into Syria. Sure, sure. than Amar. Yeah.
Starting point is 01:31:57 I mean, I don't know. Maybe there's something to, you know, just having a taste of all the different big tech companies. Oh, yeah, I've been at Microsoft. I know how they work. I've been at Google. I know how that works. I'm ready to rock over here.
Starting point is 01:32:11 German does need to come back on ASAP. I agree, Ragov. He was fantastic. We'll get him in person. Hopefully before. So, German continues. He says, Strange hire for a number of reasons, but it's hard to argue that the Apple job is a bad one.
Starting point is 01:32:25 Anything is an important. improvement at this point. So the bar is as low as it comes. Easy to lay up on the resume. So that'll be fun to see. I'm personally just excited to actually test drive what Gemini, how it works in Siri, how seamless that is. Because if it really is just raise, press the button, get Gemini and it's linked up properly and it doesn't have timeouts and it gets back to you pretty quickly, like that's going to be a pretty powerful experience. That's definitely going to cut down on chat gpte app usage for iPhone users i would imagine underrated threat i would think so like the reason like the there are so many moments where people are you counting out
Starting point is 01:33:08 apple it's not yeah i don't i don't even know that i don't even know that apple will benefit massively from this it's not like they're going to sell twice as many iPhones they're so big it's yeah i don't think i don't think it's necessarily like especially bullish for apple it's an underrated threat for open AI. I would think so. There's a lot of queries that all hit open chatGBT on mobile that are not even like super economic
Starting point is 01:33:35 but just a lot of my usage around like, hey, just trying to learn about something or research or product, et cetera. And if that's just like, again, one tap and you're in there. Yeah. I mean, yeah, the original promise of Siri was, you know, not just, hey, what's the weather today?
Starting point is 01:33:52 But really asking anything. and Gemini clearly solves that for 99% of knowledge retrieval queries. I would be, I think I'm going to be using that a lot unless they really botch it. And I don't know how they're going to botch it. Yeah, I mean, I think the... If anything's possible, yeah. Apple's like, hold my beer. They can be like, every, for privacy reasons, every time you press the button, you have to e-sign.
Starting point is 01:34:17 And it's like, why are we doing that? Yeah. Like, the original, like, Siri kind of vision was like this very conversational. AI, right? But I don't think Gemini has a real-time voice model yet. Like, I'm pretty sure Open AI is the only one that has that. Really? Huh. I don't think that matters at all. Really? Yeah, I really think that... Because that form factor feels like that would be the best thing to be on. Yeah, yeah. You press the button on your iPhone. And then it's just on. And then it's just on. Yeah. Yeah. I mean, I wouldn't be surprised if they can, if they can, like, get that model,
Starting point is 01:34:48 that version of the model out, because it's really just, like, distilled a little bit faster. It's not, some like uncanny breakthrough that the Gemini team will not never be able to crack right so they just have to build that but honestly like I don't know that that's certainly not how I how how I would use it for most things most things I would say okay like like I want I've I've one question get me an answer within a reasonable amount of time and maybe read it off to me or produce like an article that's you know a pretty readable article summarizing the answer to my question and then yeah Maybe there is like a back and forth, but I don't know. Oh, you're getting truth zoned in the chat.
Starting point is 01:35:29 Gemini does have a real-time voice feature. Gemini Live. Yeah, I think it's on the app. Truth zone. I've used them. Try that. Yeah, I don't have a map. So.
Starting point is 01:35:37 Large journalistic force headed door to you. Stand by. Tyler. Anthropic is acquiring Bun. What do you have to say? Yeah, I mean, this is definitely in line with their, like, you know, focus on dev stuff. Yeah. Okay. What is bun?
Starting point is 01:35:55 Bun is a, it's like a, I don't know, it's like a bundler for JavaScript. It's like a very dev tool. It has dramatically improved the JavaScript and TypeScript developer experience. They're going to make Claude Code even better. I like that Gabriel from OpenAI is in the chat saying OMG, which is like a pretty crazy thing to say. But I appreciate it. Claude is one of the world's smartest and most capable AI models for developers, startups, and enterprises. Claude Code represents a new era of agentic coding,
Starting point is 01:36:29 fundamentally changing how teams build software. In November, Claude Code achieved a significant milestone. Just six months after becoming available to the public, that's crazy, it's only been six months, it reached a $1 billion revenue run rate. We were always struggling to understand what that meant, right? Yeah, well, so there are two ways to pay for Claude Code. There's either with your Claude subscription, where you get, like, Claude Pro,
Starting point is 01:36:52 Claude Max, and there's a certain amount of tokens you can use. And then there's also, you can just directly wire up API calls essentially to Claude Code, and then you're being charged directly based on usage. So that's probably what that revenue is from. Yeah. And then, yeah, I also have thought about this thing where, like, oh, you can break down the number of tokens from the subscription. So it's like, your $20 subscription, three-fourths of your tokens are on Claude Code.
Starting point is 01:37:16 I mean, three-fourths of your $20 counts as Claude Code revenue. Okay. Yeah, yeah. That makes sense. I'm not sure exactly. Yeah, I mean, I'm sure that they can account for it. So Bun was founded by Jarred Sumner in 2021. Bun is dramatically faster than the leading competition.
Starting point is 01:37:31 You say it's a breakthrough JavaScript runtime. Does it compete with V8? I'm very interested in, like... It's Node.js. It competes with Node. The thing that Tyler needs to learn. What was that company that OpenAI acquired earlier this year for like a billion? I know the one you're talking about.
Starting point is 01:37:47 Isn't that analytics or something? Yeah, but again, I remember at the time people were like, oh, OpenAI's competitors are not going to be happy about this acquisition. So that comment from Gabriel. Well, congratulations to the Bun team.
Starting point is 01:38:03 Congratulations to Anthropic and everyone on the Claude Code team. Very excited that you're getting to work together for your massive deal. Speaking of other massive deals, Alfred Lin. Alfred Lin. Hit the, get that gong ready, John.
Starting point is 01:38:18 You were built like that. What did he do? Alfred Lin, who comes in on the board of DoorDash, buys $100 million of DoorDash. He's calling it Linsanity. He's not done stewarding DoorDash. He's continuing to steward the company with a $100 million buy. And, of course, it sends the stock up almost 6% on that. Pretty remarkable. We are excited about it. He ripped.
Starting point is 01:38:47 He ripped. What is this another 4.6 million to donate it to shrimp welfare? Okay, so they, yeah. Basically, the story is anthropic. They were doing some, like, research about smart contracts. And so they had cloud code try to figure out, like, you know, issues in smart contracts. And then I'm not sure exactly where the, like, money came from. Maybe it was for, like, bounties.
Starting point is 01:39:18 but there's some way in which Claude Code basically generated like $4.6 million in, like, cash from finding these exploits. So then they just... Did it actually generate real money, or is this, like, hypothetical? This is simulated testing.
Starting point is 01:39:34 Okay. Huh. Simulating. So yeah. So it probably means that they could have, like, basically stolen $4 million from people, but they don't want to do that. Maybe they should have if they really wanted to... Anyway, in other Anthropic news, David Sacks says he's still waiting on Dario's support after the New York Times piece was
Starting point is 01:39:55 published. Sam Altman, of course, came in and said, David Sacks really understands AI and cares about the U.S. leading in innovation. I'm grateful we have him. Of course, Dario and Sacks are not the biggest fans of each other. So I don't expect that one coming through anytime soon. While we wait for our first guest, Matt Garman, let's pull up this clip from Huberman Lab, if we can play this. Rob Moore is highlighting that Dr. Jeffery tells Huberman that LED lighting in buildings is a public health crisis that could be on par with the use of asbestos. Many building contractors slash designers are coming to him worried they're going to be sued and asking how to start fixing the issues. Let's pull this up when we have a second. Because I am very concerned about the amount of short wavelength light that people are exposed to nowadays, especially kids.
Starting point is 01:40:52 The group of us that are shuffling around, some of them are saying this is an issue on the same level as asbestos. This is a public health issue, and it's big. LEDs came in, and people won the Nobel Prize for this, very rightly at the time, because they save a lot of energy. The LED has got a big blue spike in it, although we tend not to see that. And that is even true of warm LEDs, and there is no red. The light found in LEDs, when we use them, certainly when we use them on the retina, looking at mice, we can watch the mitochondria gently go downhill. They're far less responsive.
Starting point is 01:41:34 Their membrane potentials are coming down. The mitochondria are not breathing very well. Can watch that in real time. under LED lighting. And LED lighting at the same energy levels that we would find in a domestic or a commercial environment. This is why I want to rig the studio with incandescent, incandescent light. Incandescent.
Starting point is 01:41:55 We're going back to candles. Candlelight. Let's do candlelight. How about a hearth? This is the way... If we put a hearth, so we have lights above our heads, I'm sure, or LEDs, killing us slowly and softly, if we put like somewhat a bonfire right above us. That would be the way.
Starting point is 01:42:11 And then we just, when the wood kind of burns out, the show's over. We just go until the... That would be good. I like that. Let me tell you about turbopuffer, serverless vector and full-text search, built from first principles on object storage. Fast, 10x cheaper, and extremely scalable. And our guest, Matt Garman. And turbopuffer is at re:Invent.
Starting point is 01:42:29 Amazing. Fantastic. Well, we are joined by the CEO of Amazon Web Services. Matt Garman, thank you so much for taking the time to come and chat with us. How are you doing? Hi. Thanks, you guys for having me. Uh, please take us through, uh, some of the high level announcements.
Starting point is 01:42:45 Obviously, it's, uh, it's reinvent. Very exciting. Congratulations on the progress. Uh, would love to know, uh, what's at the top of your mind. What's on the top of your presentations over the, over the, over the, over the course of the event. Yeah. We had a couple of really exciting announcements today. A couple I'd highlight. First, we, uh, introduced these idea of frontier agents. Yeah. And these are agents both, uh, in Kiro, uh, for software development as well as in operations and security. And these frontier agents are meant to accomplish much, much more than customers were ever able to do in the past where we have these autonomous agents that can help
Starting point is 01:43:20 customers really turbocharge their software environment. So super excited about that. We had some announcements around Nova, which is our frontier AI models that we announced. We announced Nova 2 and our new sets of models. And one of the things I'm in particular really excited about is Nova Forge, which allows customers to actually bring their own data to pre-training checkpoints, mix in their data with Amazon data, finish training the model, and at the end of it, have a custom model that deeply understands their own enterprise data and is just for them. So that's another thing that I'm excited about. And then the third thing is we announced a new chip, Trainium 3, to really turbocharge
Starting point is 01:44:00 the next generation of training and inference for our customers. and so quite excited to get that, and that went GA today as well. That's very exciting. Let's go back and start with the first one. Let's talk about coding agents and your own proprietary models. How are you thinking about positioning those to potential buyers? Do you like the benchmarks these days? Do you think that we're sort of like post all the benchmarks,
Starting point is 01:44:29 or do you think those are still useful tools for a buyer who's making a decision? is it about integration? Is it about cost? How are you positioning them? Yeah. When you think about software development, it's not about pure benchmarks. It's really about what is going to allow you to get the most amount of work done. And when we think about our offering, which is called Kiro, it's really focused on in an enterprise or environment where somebody's doing high-velocity development, they actually need more structure. People love vibe coding and it's exciting, but you can actually get down a path where you get stuck. And you'll often find actually that you spend
Starting point is 01:45:03 just as much time trying to get back to where you were before as if you had just coded it from the beginning. We have this idea of specs that gives you structure to what you're trying to build. And so you can have agents go and start to build around those specs together with you and your team. And it gives you the structure that allows you to go really fast and undo if you need to, can make sure that you're hitting your design requirements. And it really allows you and the agents to operate in conjunction with each other and move really, really fast. And we're starting to build these much more capable agents that can go and actually do long-running tasks for you on your behalf, but all of it is kind of ties into this structure. And we view that as a way to deliver
Starting point is 01:45:40 kind of real development that's going to be meaningful on a large code base with large teams in enterprises where they have existing things, not just kind of single individual people sitting there kind of doing vibe coding, which, you know, you can do vibe coding on Kiro as well, by the way. We think that that's just not sufficient for what real development's going to need. Yeah. Talk to me about what it actually looks like to set an agent off and say, hey, I got a task for you, come back to me in a few days, which it sounds like that's where we're going. We've been tracking the meter benchmark, and it seems like we've been seeing doublings there. But again, a lot of those have been,
Starting point is 01:46:17 the benchmark has been, how long would it take a human to do this task? The actual agent might have done it faster. And so it's not necessarily that you're actually letting something cook over the weekend. What's the experience been like and what have people been reporting about these long-running agents? Yeah. I think the first and actually most important thing is thinking about how you actually kind of have a mind change on how you think about software development, where you think about not about do this task, get it back, look at it, do this task, but how are you thinking about directing a lot of agents to go out there and do lots of different things and let those run for long periods of time where they can kind of have amorphous tasks.
Starting point is 01:46:56 Like, instead of go write me this function, like, try to go solve this problem for me. And then it'll come back and then you can, but if you send out two or three or 10 or 20 or 50 of those things, then your job as a software developer and as a product leader is actually much more around coordinating those when they come back, troubleshooting, making sure, you know, directing them, course correcting, et cetera. And so I'm excited about that. We've already seen these processes go off and work for multiple hours at a time on particularly, like, really hard, tricky problems. And we think those things are going to continue and be more the norm of how software developer teams change what they accomplish.
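A rough, hypothetical sketch of the fan-out workflow being described here: a developer writes lightweight specs, dispatches many long-running agents against them in parallel, and then reviews what comes back. This is illustrative Python only; the Spec, run_agent, and dispatch names are assumptions for the sketch, not Kiro's actual API.

```python
import asyncio
from dataclasses import dataclass

@dataclass
class Spec:
    """A lightweight spec: the structure an agent builds against."""
    name: str
    goal: str
    acceptance_criteria: list[str]

async def run_agent(spec: Spec) -> dict:
    """Stand-in for one long-running coding agent working a single spec.
    A real agent would plan, edit code, run tests, and report back."""
    await asyncio.sleep(0.1)  # placeholder for hours of autonomous work
    return {"spec": spec.name, "status": "needs_review",
            "summary": f"Drafted changes toward: {spec.goal}"}

async def dispatch(specs: list[Spec]) -> list[dict]:
    """Fan out many agents at once, then collect results for human review."""
    return await asyncio.gather(*(run_agent(s) for s in specs))

if __name__ == "__main__":
    specs = [
        Spec("billing-retries", "Add retry logic to the invoice job",
             ["unit tests pass", "no duplicate charges"]),
        Spec("search-latency", "Cut p95 search latency below 200ms",
             ["load test passes", "no relevance regression"]),
    ]
    for result in asyncio.run(dispatch(specs)):
        print(result)  # the developer's job shifts to reviewing and course-correcting
```

The point of the sketch is the shape of the loop: the human's time moves from writing each function to defining specs up front and triaging many agent results afterward.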
Starting point is 01:47:33 We think Kiro is going to be the engine that's going to drive a lot of that. Yeah. Yesterday we were talking to Vincent from Prime Intellect, and they do some of this like fine-tuning on smaller models. And he has this thesis, I think that you share, that a lot of businesses will need to take a pre-train and then bring their own data, fine-tune it. not just because it's important from performance and output, but also from cost. But I'm interested in understanding how you think the market will shape out.
Starting point is 01:48:05 Do you see implementation partners and like consulting firms coming in and doing that? I was asking him like, you know, there's a lot of tech startups that are going to be able to do that. They're going to understand I need to build an RL environment around my app. But for larger legacy companies, they might not understand. So how are they going to wind up? using that tool in particular. I think they will. And I actually just want to highlight one piece there
Starting point is 01:48:29 where some of what we announced today is a little bit different. We announced this idea. It's an open training model with Nova. And so the difference of what you just said is people take a pre-trained model and they'll do RL after the fact and they'll try to do some fine tuning,
Starting point is 01:48:43 which is great. But there is actually limits to where that does. In fact, if you do too much post-training, oftentimes those models will forget what they've done at the beginning. They'll start to lose some of their reasoning and their core intelligence. I mean, this is an unsolved problem, except when you go and insert your data in the pre-training phase.
Starting point is 01:49:00 And so what we do with NOVA is we expose checkpoints. You can take a 60-point-percent trained or an 80-per-trained model, insert your data into that pre-training phase, mix it in. We then expose actually Amazon training data to you via an API that you can then mix it together. And so it's like you said, here's all my corpus of corporate data, here's everything that I need to know about my industry. we then mix that in and then finish pre-training the model so you get a pre-trained model that totally understands your company and your data. And then you can go do fine-tuning. You can go do reinforcement learning gyms.
Starting point is 01:49:34 After that, you can shrink them down and distill them. You can do all those things, but on a pre-trained model that deeply understands what your company does. And is that called mid-training now? Is that the right buzzword for that? It's not like we're, and mid-training is a different thing. This is the first time that anyone's ever exposed this idea to deliver pre-training checkpoints where we can mix in your data. No one's ever done this before today.
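A minimal sketch of the general "open pre-training checkpoint" idea described above: resume from a partially trained checkpoint, mix the customer's corpus with provider data, and finish pre-training before any fine-tuning or distillation. This is generic PyTorch for illustration; the function names, dataset shapes, and training loop are assumptions, not the Nova Forge API.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader

def load_checkpoint(model: torch.nn.Module, path: str) -> torch.nn.Module:
    """Resume from a partially pre-trained checkpoint (e.g. 60-80% of the way through)."""
    model.load_state_dict(torch.load(path))
    return model

def continue_pretraining(model, provider_dataset, customer_dataset, steps=1000):
    """Mix provider data with the customer's corpus and finish pre-training,
    so the domain data is absorbed before any post-training happens.
    Assumes each dataset yields (tokens, labels) pairs the model can consume."""
    mixed = ConcatDataset([provider_dataset, customer_dataset])
    loader = DataLoader(mixed, batch_size=8, shuffle=True)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    model.train()
    for step, (tokens, labels) in enumerate(loader):
        if step >= steps:
            break
        loss = torch.nn.functional.cross_entropy(model(tokens), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model  # fine-tuning, RL, or distillation would come after this
```

The design point Garman is drawing is the ordering: inserting domain data during pre-training, rather than piling on post-training later, is what keeps the base model's reasoning intact.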
Starting point is 01:49:54 Very cool. It's a first. Great. Well, then, yeah, on market structure, do you think it's self-serve enough that, you know, large corporations will do it? Or do you need an AI lab? Do you need an AI scientist? Do you need someone who can, you know, write TensorFlow or PyTorch or something to
Starting point is 01:50:13 implement this? Or is it something where, you know, just a normal software engineer at a large company could go and pull this off the shelf and implement it? Yeah. Well, we'll see. I think we're going to keep working on the tools today. I do think that for some enterprises, they'll want to have some consulting folks that help them with this.
Starting point is 01:50:28 I think we'll have some people where you have some experts that should come and teach how to do this. And I think we'll quickly get the tools to a point where, you know, it's not somewhere where, you know, a non-technical person is going to go do this for sure. But it may be a software developer that tends to be a little bit more on the AI or ML side that we hope is going to be able to go do this without having to have a whole bunch of expertise
Starting point is 01:50:50 about how to go pre-train a frontier model. Yeah. On the cost side, obviously you're working, you announced a new chip. I imagine that there's, you know, the emergence of some synergies across the models that you're developing, the software you're deploying in the cloud, and then also the chips. How are you positioning the, like, the Trainium ecosystem? Is this something that you're planning on really doubling down on across the entire stack? Or do you want to be more chip agnostic?
Starting point is 01:51:21 Are we going to see you buying TPUs in the future? No, we definitely, well, a couple of things there to unpack. The first is we're very excited about Trainium and think it has enormous potential, and we absolutely think there's a benefit to optimizing every single layer of that stack, where we have the best cost performance that we can deliver with Trainium. We have optimized models for you to use and applications and agents at the top of that that we talked about. So we think that whole optimization of that stack is going to be critically important. And, of course, we're going to support choice for our customers as well.
Starting point is 01:51:55 And so we'll continue to offer GPUs from Nvidia as an example. And we have a very tight partnership there. But we do think, and we're quite excited about what Trainium 3 is going to offer for customers. And I do think that we're going to see an explosion of that ecosystem as more and more people get access to those chips and are able to take advantage of the pretty significant cost performance benefits that you can get from running on Trainium. How are you thinking about open source, the open source ecosystem that you need to build around Trainium? That's the big discussion with the TPU right now,
Starting point is 01:52:30 the question of, you know, Google has some amazing folks. They have some amazing software folks, it seems like. They don't necessarily need to open source everything. And so a lot of people are waiting to see how much the industry, you know, builds open source alternatives independently, versus how much does Google just give away? What's your thought process on building an open source ecosystem
Starting point is 01:52:53 or even just giving developers access to closed-source software to run efficiently on Trainium? Yeah. No, we're all in favor of having an open set of software to run on Trainium. In fact, we have our Neuron SDK, which is open source today and allows everyone to contribute to that. We think that the more we can collaborate on that software ecosystem, the easier it is for people to use the chips.
Starting point is 01:53:18 And we, of course, support the broad set of open frameworks, whether it's PyTorch or others, as well. So we collaborate across the industry on that and are big advocates of contributing to and supporting that open ecosystem. Jordy? I'd love to get your insight on just, like, general constraints for AWS as a business, what you guys are doing on the power side.
Starting point is 01:53:43 Is that a real constraint? Anything that you can share there. Yeah. You know, like, it's, as we're scaling incredibly rapidly, we've, you know, we recently announced that we've added 3.8 gigawatts of data center capacity in the last year alone, which is just an insane amount of data center capacity. Thank you. Oh, you're welcome.
Starting point is 01:54:05 And so it's ramping incredibly fast. And it is a constraint. You know, we have more demand than we have supply today for AI. Sure. And as we ramp up the supply chain, we think about all of the constraints. We think about chip constraints. We think about networking constraints. We think about power constraints.
Starting point is 01:54:22 We think about networking constraints, data centers, et cetera. And so we're working really hard to try to remove every single one of those. And with an industry that's growing as rapidly as the AI one is, there's always going to be some constraint. And we work really hard to keep removing blockers every time so we can keep growing fast. Makes sense. well we have a hard stop so thank you so much for taking the time on such a busy day to come chat with us we'd love to have you back on the show and go way deeper but thanks so much and congratulations in all the massive releases we're excited to dig in deeper and keep chatting about them but have a great
Starting point is 01:54:58 rest of your day. We'll talk to you soon. Thank you. Goodbye. Cheers. Let me tell you about Public.com: investing for those that take it seriously. They've got multi-asset investing and they're trusted by millions. We have Tae Kim, author of The Nvidia Way and a Barron's senior writer, joining the show. Tae Kim, the author of The Nvidia Way. Thank you for
Starting point is 01:55:20 joining the show. And I'm sorry it took us so long. We've exchanged posts on X many times, and we wanted to have you on the show earlier, but I'm so glad we got to ask right away. Because it's the perfect time to talk to you. Do you have roommates? Not.
Starting point is 01:55:36 No roommates. No roommates. Long time listener, first time caller. I'm so excited to be on this. I'm so excited to have you here. Reminder, everyone, go buy the book. Seriously, Christmas is coming. I can't imagine a better gift for everyone.
Starting point is 01:55:52 For a 10-year-old? For a 4-year-old. I know what my son's getting. He's getting The Nvidia Way. Jensen Huang. Multiple copies. Multiple copies. No, seriously.
Starting point is 01:56:02 Get 10 copies. A lot of teenagers have read it. It's amazing, and they reach out, and it's an inspirational, entrepreneurial book that a lot of parents are giving to the kids. I love your headset, by the way, because we're actually developing our own TBPN over here headset, because this is just like, this is the ideal set of it. I was telling my friend, I don't care if I look like a dork, my hearing is going. So, like, I get locked in when I have the headset, I could hear everything. No, and it's wired. The worst is AirPods.
Starting point is 01:56:37 Air pods have a tiny lag. If you're doing Zoom calls, it just, it ruins it. But we feel like you're right here at the table with us. It's not Code Red over there. He's Baja blasting. Yeah. You know he's Baja blasting. Anyway, let's, let's, I mean, we have some time.
Starting point is 01:56:53 Let's run through, I want to know a little bit more about your perception of Jensen, your perception of Nvidia, and just set the table for us. We know how... what is his management style? He has all the direct reports, he reads everyone's to-do lists every day, they have tons of employees, they never fire people. Like, what makes Nvidia's culture unique? Set the table for us so then we can go into the opportunities and challenges with that framework in mind. So the first thing I found out about their culture is it's very blunt. Like, I think in most companies, and you guys have done startups, but I don't know if you work for large corporations, bureaucracy builds a process that gets ossified.
Starting point is 01:57:37 Nvidia is the complete opposite. If things are not going well, he'll chew you out in front of the whole company. And that kind of blunt mentality, I think, you know, sparks better performance because you don't want to be embarrassed in front of Jensen in front of the whole company. But also, it just sparks agility. Like, when I talk to people at Intel or Google, like, the biggest problem they have is meeting paralysis and you need to get sign-offs from like five different executives.
Starting point is 01:58:07 And Nvidia, like, you have a meeting. Jensen makes a call. He seeks out the right information and you move. So there's agility at NVIDIA. The other thing is just a meritocracy. Even from the beginning, like 30 years ago, Jensen's always asking, who's the smartest person that you work with? Who should I try to recruit?
Starting point is 01:58:25 And from the beginning, like, Dwight Dirk's, who's one of the top people right now, he recruited him because he talked to this other guy and he said, oh, Dwight's really smart. Like, I really enjoyed working with him. And he just almost ruthless in the sense. He just goes after him and then recruits them and bring people aboard. So this meritocracy, agility, speed, and just getting rid of the internal politics, I think, really separates him. How has most of the team becoming millionaires affected the culture, if at all? I think they have that thing in the first 10, 15 years. People started getting sports cars and putting them in the parking lot.
Starting point is 01:59:08 But a lot of people... Let's go. That's the best news of you. That's fantastic. Sounds like it's had an incredible impact on the culture. Jobs finished. You don't need to respond. You don't need to say anything more.
Starting point is 01:59:19 I think winning culture breeds winning, and people want to stay with winners. Yeah. Right? You want to win on the track then. A lot of the people I meet in terms of colleagues, they work for a chip company for five years and the chip doesn't work out. Yeah. And you just wasted five, seven years of your life. So you want to stay with a winning company, and Nvidia's been winning for 30 years. So it's kind of like winning begets winning, you get the talent, and then the talent stays.
Starting point is 01:59:44 Like there's so many top executives that NVIDIA that stayed there for 25, 30 years. Yeah, and there's also, there's, there's some benefit of people when, if, if somebody doesn't have a scarcity mindset, right? And they're just playing to win. Like, they're just like, they're, they're no longer thinking like, oh, if we can just get to that next milestone and get this secondary sale. And if I, if I can participate and if I can vest my two years and sell into the next tender offer, then I'll, you know, they're just like, we're good. All that matters is just being as elite as we possibly can be and just, uh, and doing it, uh, for the love of the game, basically. And a lot of these people that, like, they made it.
Starting point is 02:00:24 They could retire. They could have retired 10, 20 years ago. But they stay and still work 80 hours, 100 hours a week, because that's what's expected. Like, even the marketing people at NVIDIA are working 80, 85 hours a week. And that kind of mentality, I think, is different from other companies. Okay. Let's shift into the competitive dynamic. I mean, NVIDIA has been, I was revisiting the performance of the MAG-7 since the dawn of
Starting point is 02:00:49 ChatGPT. It's been three years exactly. Nvidia, by far the winner, up 10x on market cap. The next closest company, I think, maybe 4xed by comparison. And so the clear AI winner in the
Starting point is 02:01:05 public markets, the most obvious AI trade that just completely ripped. Now, you know, there's this whole narrative of like, how strong is their moat? What is the TPU mean? Is the TPU going to be significantly competitive? is there going to be margin compression?
Starting point is 02:01:21 How have you been processing this new narrative that Nvidia might face serious competitive threats because they're so on top of the world, everyone owes them so much money that people are saying, I've got to get a discount from somewhere, and I'll go to Google maybe. I want to talk about this 10x move. It hasn't been like straight up to the right. There's always been, every three, six months, there's always a reason to sell Nvidia. like the H-100 problem, the transition to Blackwell, China, ASIC, Brockcom competition.
Starting point is 02:01:55 like the H-100 problem, the transition to Blackwell, China, ASICs, Broadcom competition. So this stuff has been happening, this entire 10x move up, and the media loves to latch on to the latest thing to worry about, right? We had DeepSeek earlier this year, where the entire media establishment was like, it's over for the AI trade, AI models have become so efficient, when it was actually the opposite because of the reasoning models, and there was an exponential demand for compute. So I find it amusing, like, the whole world kind of discovered that Google had a really good chip in the TPU. Which they've been working on for a decade, too.
Starting point is 02:02:28 They've had it for 10 years, right? They've offered it to clients in 2018. This is nothing new. The Ironwood specs, you know, which I always take with a grain of salt with specs, even in Vidiya, they talk about 25x improvement with Blackwell ones, more like... I'm really just focused on the name Ironwood is goes pretty hard. It's pretty good. They got some good names over there in Google. Ironwood came out, all the specs came out in April.
Starting point is 02:02:55 Like, this is not, all this stuff is, isn't new. Yeah. And another thing I want to say is TPUs, their chips, Morgan Stanley estimated a huge decline in TPU shipments in 2025. And Google, at Google Cloud, Nvidia GPUs took more share than TPUs this year. Like, no one talks about this, right? And now everyone's going, Ironwood is going to,
Starting point is 02:03:18 take over the world and Nvidia's in trouble. Is that just because we're still early in the Gemini 3? No, it was a Gemini 3 thing. People are like, Gemini 3 is the best model in the world. It was the best model in the world for a whopping six days. And ChatGPT, it's still number one on the App Store. Yeah, but I mean, six days, it was unseated by Claude, which was also Anthropic, which is also TPU potentially in the future.
Starting point is 02:03:41 Yeah. And ChatGPT, Microsoft, the head of Microsoft AI Cloud said that OpenAI is training their next models on the GB300 NVL72. That just went live in October. Actually, earlier today, the NVIDIA CFO said it's going to take six months. So I was a little disappointed in that. Wait, six months for the training run? Yeah. Well, she said the first models on Blackwell, like on the superclusters, are going to take
Starting point is 02:04:08 six months. So it's going to take time. It's going to be another six months. OpenAI is going to get there. The Claude and Gemini benchmark gains with pre-training, that's the most bullish thing for the whole AI industry, right? AI adoption, the scaling laws
Starting point is 02:04:27 are intact, everything's going to work out, and OpenAI is going to get there when they do their next training run on the next model. So, going back to TPUs, thank goodness, a shout out to SemiAnalysis. They do the
Starting point is 02:04:41 best channel work in the industry. Everyone freaked out on Friday, right? They read that SemiAnalysis piece. No, oh, no, total cost of ownership, TPU v7, they're going to destroy it. But, like, for people that actually know the industry, it was flaming bullish for Nvidia.
Starting point is 02:04:57 Like, it just, it was, like, so obvious in my face because Dylan and the SemiAnalysis team, they said, wait a minute, the next TPU v8 is not going to be that great. They lost a ton of people, and the step function up in performance is not going to be that great. So, you know what's going to be great? NVIDIA's Vera Rubin, which comes out at the
Starting point is 02:05:20 end of next year. So no company is going to switch. People are saying it's the Rick Rubin of chips. Are they related? Anyway, so Vera Rubin is going to be dramatically better at the end of next year. And even if, you know, Ironwood, which just became generally available and they're ramping right now, it's hard to port workloads from CUDA and Nvidia GPUs and put them on. There's always problems when you put them on a new chip. Yes.
Starting point is 02:05:51 And let's talk about TPU customers, right? Everyone freaked out that META might spend a few billion dollars in 2027. That sounds like a lot, right? That's less than 1% of Nvidia's expected revenue. Sure. It's nothing. And Ben Thompson was very smart and astute. He's like, who's going to buy the TPU?
Starting point is 02:06:10 Who are the biggest buyers of AI chips? They're the hyperscalers, right? So maybe Meta will put a portion of their workloads on TPUs, 1% of Nvidia's revenue. Is Amazon going to buy TPUs? Well, John just asked the CEO of AWS if he was going to buy TPUs. He dodged
Starting point is 02:06:26 that question. He didn't say yes. There's no way in hell they're going to buy TPUs. They have their own trainium. They're not going to support their number one, like, one of the number one... Their arch rival. Yeah, they're not doing that. Is Microsoft going to buy TPU? You should have been like, yes, I'm going to buy
Starting point is 02:06:41 one, so I can, like, study it. Microsoft's not going to buy TPUs. They're the number two player in cloud computing, and they're not. Are the neoclouds going to buy TPUs? Now you're going to say, yes, Google has some neoclouds. You know what happened with those neoclouds? They're financially backstopping those neoclouds.
Starting point is 02:06:59 So Google is financially giving money and backstopping the debt for those neoclouds. So there's a handful of small neoclouds, but is CoreWeave going to buy TPUs? Probably not, right? Who are the other customers of AI chips? Enterprises, companies, sovereign
Starting point is 02:07:15 AI. Yeah, anybody that wants to run like a fine-tuned model, some small model, something like that. Bairns. With a bite. So, like, if you just go down on our first principles basis and look at the customers of AI chips, they're going to stick with Nvidia.
Starting point is 02:07:33 The millions of developers know CUDA. So you don't have to, you really need, like Dylan talked about this, you really need, like, top-notch, software-sophisticated engineers that can, like, work with TPUs and learn JAX and all that stuff. So most people don't have those crackerjack engineers, right? So they're going to stick with NVIDIA because everyone's used to NVIDIA. NVIDIA is backwards compatible and forwards compatible.
Starting point is 02:07:59 So like 20, 30 years of this stuff. And if you buy it, NVIDIA's CFO, Colette Kress, talked about this this morning. You can use it for training and you can use it for inference, and it's all on the same architecture, and it's going to work. I talked to an AI startup CEO a few months ago. He tried AWS Trainium. It looked a lot cheaper, total cost of ownership. But then it crashed. There were bugs, reliability issues. They couldn't figure out what happened. And they just threw up their hands. I give up. No one is going to, like, if you have reliability problems, bugs, crashes, the best thing about NVIDIA is all that stuff
Starting point is 02:08:38 has been ironed out over the last 10, 15 years. If you have a problem, you can figure it out. Yeah, it's like giving an F1 driver, like, a car that is unreliable and saying, like, hey, go race. Go race. Have fun out there. And then it's, like, you know, DNF, DNF immediately. Specs are awesome. It seems great. But then when you actually build your business on it, you put the future of your business onto
Starting point is 02:09:03 something. The number one thing, it's not price. It's like, it better... Oh, it better work. It better work. And with Nvidia, it works. Yeah, but react to this idea that Dylan Patel was joking about, that TPU is a stalking horse. So this idea that Sam Altman is already saving 30% on his Nvidia purchases effectively
Starting point is 02:09:26 because just the threat of going to TPU is enough to get NVIDIA to make an investment or slightly discount in one way or another. I don't think that's reality and I don't think that math actually works because I think he's confusing the AMD deal where AMD gave, you know, free warrants to open AI, right? First of all, the deal's not done. Sure. It's a letter of intent. None of these deals are done. They're not done. AMD's done.
Starting point is 02:09:55 They signed an agreement where they're giving away a percentage of their company through these free warrants. The Nvidia deal hasn't been signed yet. And they actually, there's language in the 10-Q that it might not happen. But doesn't OpenAI have to buy AMD chips in order to get the warrants? Yes, yes, yes. And so it still could be that they don't actually end up going through with the purchase, and then they wouldn't get them. That's a good point. Yeah, yeah.
Starting point is 02:10:17 So the whole like, let's do a size step here with the circularity and all that stuff. All this stuff, it's like one gig at a time. There's milestones on Open AI. There's milestones on AMD, technical milestones. They have to achieve certain targets. So all this talk about, you know, everyone loves the, the big number that adds up five years of KAPX. A lot, like that, it could get, the leverage could be up or down depending on how things
Starting point is 02:10:46 happen each year along the way. So, you know, it might not be that big number if OpenAI doesn't come out with an amazing model or AMD isn't able to hit the milestones they set for their next MI450 chip, right? So, like, you know, don't worry about five years. Take it one year at a time. Right now, demand is off the charts. Now, going back to the NVIDIA 30% discount, like, that's not how equity investments work. If NVIDIA does invest $10 billion, $10 billion at a time up to $100 billion, say, Nvidia gets ownership of the company. It's not like a freebie, right? You're giving away ownership of your company. So it's not really a discount. You're getting, you're getting
Starting point is 02:11:27 ownership of the company. So I don't really believe in this 30% discount thing because Nvidia, Jensen will say they're investing to accelerate OpenAI and they're looking forward to OpenAI giving back. I mean, it's definitely creative. It's definitely a new structure. I'm just trying to, I would steelman that:
Starting point is 02:11:48 if I'm an entrepreneur and somebody comes to me and they're like, I'm going to invest $100 million in your company over a series of milestones and you're also going to buy something from me. I'm like, yeah, I'm taking some dilution but realistically like this is a way less of a headache. Like, where else was I going to get $100 billion from in Open AI's case?
Starting point is 02:12:08 Like, it's a great, it's a great source of funding that, yes, it will be diluted, but the whole structure is all diluted all the time because of all the different ownerships. Except for this kind of sentiment thing that we had the last few weeks, OpenAI hasn't had any problem in raising money from venture capital. Yeah, yeah, yeah, it's true. It's not like Nvidia is the only source of funding for OpenAI. Like, everyone wants in. That revenue run rate, it's like $5 billion to $20 billion at the end of this year. So what was your take on the code red? What do you think about the code red?
Starting point is 02:12:41 So I saw that you showed that Ashlee Vance... Yeah. Yeah, Mark Chen. Mark Chen was talking about how they kind of focused a little too much on reasoning and their pre-training muscle wasn't there. Reasoning, we could talk about this later. Reasoning is like the biggest kind of accelerant of
Starting point is 02:13:02 AI demand in the past year. So I think it's actually really good. And supposedly Ilya was doing the research for reasoning. Reasoning is awesome, right? But they kind of focused on reasoning the past year with o1 and o3. And now they're like, okay, we have to go back to pre-training. So OpenAI knows that pre-training still works because Gemini 3 had great pre-training results and Claude Opus 4.5 did.
Starting point is 02:13:27 So now they're going to do the pre-training. So they have their focus on one thing. and now they're going to do the other thing and make their model much better. I do agree that OpenAI has been a little too maybe diluted. They're doing apps.
Starting point is 02:13:43 They're doing hardware. They want to compete in AI infrastructure against Microsoft and Oracle. They want to compete in AI chips against Nvidia. I thought it was really interesting. Like Satya repeatedly said
Starting point is 02:13:57 he wouldn't name who he's talking to, but it's like, I think it's important that we realize this is not a zero-sum game and this could be a win-win partnership. That was during the Anthropic NVIDIA deal with Microsoft, right? He said that a couple times. And I think the person that he's talking to is Sam Altman. Right?
Starting point is 02:14:18 Like, let's, let's, Nvidia and Microsoft made OpenAI successful. They were the partners. Like, why are you competing with the partners that brought you to the dance? Right. Let's go back, focus on making the best AI model in the world,
Starting point is 02:14:34 and don't compete with Nvidia and Microsoft. Maybe five years from now, but it seems a little aggressive to compete with them right now. Let's talk about China. There's been a ton of debate over
Starting point is 02:14:48 NVIDIA selling chips to China, legacy chips, older chips. We've gone back and forth on it so many times. What's your current thinking about the best policy for
Starting point is 02:15:01 Nvidia exporting chips to China generally? I think the best thing is to keep it one or two generations behind the current state of the art. This is a really nuanced policy that people, you know, everyone's either hawkish or dovish.
Starting point is 02:15:17 I don't want to use the word that Howard Lutnick used that got China very upset and forced China to, like, tell its companies not to buy the H-20. But the best policy is to get China still on the NVIDIA stack. So NVIDIA gets $50 billion of revenue per year that can help fund R&D and make
Starting point is 02:15:43 the chips even better. Like, Nvidia and the U.S. already won, right? They have 95% market share. Like, why are we going to give $50 billion of oxygen to Huawei and all these other Chinese AI chip companies? Now the Chinese companies that need to buy AI
Starting point is 02:16:04 chips are going to buy Chinese AI chips. Like, why not keep China on the NVIDIA tech stack, one or two generations behind, don't give them the best stuff, but maybe one generation by it. I think that would be the best compromise for both sides. But I don't know what's going to happen.
Starting point is 02:16:20 Where do you place a likelihood that the Chinese market has opened up again at some point in the next 12 months. Maybe 50-50. I mean, that sounds like a cop-out. Like, I was much more positive six months ago, but you know, this has been just so crazy. First, the Trump administration banned H-20. Then they didn't ban it. They said it was fine. But then China was like, no, you hurt our feelings. We're not going to let companies buy the H-20. And then they ban it. Yeah. And then maybe Trump is going to let NVIDIA sell the H-200
Starting point is 02:16:54 or a Chinese-specific version of Blackwell that's kind of hobbled a little bit. Who knows? Like, Nvidia needs to convince the Trump administration and then China to buy the chips. The worst part of it
Starting point is 02:17:09 is China was willing to buy the H-20 and just all the kind of geopolitics and hurt feelings that ship has sailed. So I don't know what's going to happen. But I do think the ideal situation, situations, NVIDIA could sell one generation behind, make $50 billion a year, and keep the
Starting point is 02:17:30 competition from Chinese AI startups out of the way. And even understanding that at some point in the future, China is buying effectively zero chips from NVIDIA, but it would be five, ten years in the future? It was like you have to assume that like... Right now is there. Right now, The amazing thing is it's 0% right
Starting point is 02:17:59 The amazing thing is it's 0%, right? And Nvidia's revenue accelerated for the first time in two years. Nvidia's revenue accelerated in this latest quarter, and this is, like, not talked about enough: this is the first quarter that the NVL72 AI server has been available in volume, and then revenue just skyrocketed, without China, which is incredible, right?
Starting point is 02:18:20 And that's why I'm so bullish over the next few years, because next few quarters, let's say that. Because this product cycle is going to last at least three, four quarters. The key tell is the revenue acceleration first time in two years. And the other key tell is that the networking segment for Nvidia was up 162% year over year. And typically, a lot of these data centers and these neoclouds buy the networking stuff six months ahead of time.
Starting point is 02:18:46 So the next six months from now, Like the GPU numbers for Blackwell and the NVL 72 server is going to be, it's just going to be bonkers. It's going to be off the charts. And people don't talk about these NVL 72 AI servers. They're $3 to $4 million, right? There's 72 GPUs, 144 dyes, one and a half tons, 5,000 cables. And the prior version was 8 GPUs. So these AI servers, I call it the iPhone 3G moment.
Starting point is 02:19:15 Do you guys remember the iPhone 3G? This is big for the Christmas shoppers out there. If you want a gift, it's a step up from The Nvidia Way, I reckon. Or you want to bundle something, picking up an NVL72. Yeah, it's only $4 million.
Starting point is 02:19:29 Yeah. But for the right 12-year-old in your life. Or the potential intern. I think Tyler won't put it under the tree, but it could be fun. Keep it in the garage. It's like a Lexus. You put it in the driveway
Starting point is 02:19:42 With the bow on top. That's the way to do it. With the forklift, you have the forklift come drop it off. Exactly. I got you, I got you an NVL72. Enjoy. And then we have the reasoning model thing
Starting point is 02:19:55 where exponential compute, and companies are actually seeing huge gains. Cursor, 40% productivity gains. H. Robbins, 40% shipments. Rocket Mortgage, 80% reduction in paperwork
Starting point is 02:20:09 processing costs. This is like the next year: because of AI reasoning, because of the NVL72, that's why Amazon and Microsoft said, every quarter this year, they raised their CapEx. And everyone's like, they're going to cut their CapEx. No, every single quarter they raised their CapEx.
Starting point is 02:20:27 That's because they're seeing the demand, and that's why Amazon and Microsoft are going to double their data center capacity over the next two years. I mean, that's crazy, right? In the September quarter, there was more leasing, there were more data centers leased than in the entirety of '24. This is like an exponential step function up. And people aren't talking about it.
Starting point is 02:20:48 They want to talk about TPUs, like, destroying NVIDIA. What about some of the kind of demand guarantees that have been happening? Is that a concern at all? Do you think about it much? Is it not really? I mean, demand guarantees, like, you're talking about NVIDIA and Coreweed. Like, when that happens, analysts every quarter are on the conference call, like, did you use that, you know, demand guarantee? Like, it's not happening yet.
Starting point is 02:21:18 CoreWeave's 5-year-old GPUs that everyone says are useless are 100% utilized, right? H-100, a massive cluster before it expired, probably a three-year contract, they got like 95% of the pricing. This is, like, unheard of. And the reason is there's overwhelming AI demand, and there's not enough capacity. Overwhelming AI demand, not enough capacity. and people just are just they don't care about
Starting point is 02:21:48 what's happening in the real market. This is real life: facts, evidence, numbers. Nvidia going from 56% revenue growth to 62% revenue growth on $57 billion,
Starting point is 02:21:59 with zero China revenue. I mean, it's bonkers. We talk about the stock price being up 10x; the revenue is up 10x in like two years. This is like beyond history,
Starting point is 02:22:12 the last 30 years of following technology. I love how you're the, you're the only person without NVIDIA fatigue. You're just like, you're not bullish. David Goggins. You're the David Goggins. They can't keep running. They can't keep running. This is not me just, like, making stuff up. The numbers are there, right in front of everyone. You make good points. You make good points. I like it a lot. We'll have to have you back on the show soon. This is a lot of fun. Yeah, let's do it. Let's, let's make this a regular thing. Yeah, this is fantastic. To have you on finally. Thanks so much.
Starting point is 02:22:43 Thanks for all your reporting and your time. The book is The Nvidia Way. Get it wherever books are sold. Get 10 copies. Give it to everyone in your life. Also, give the gift of Fin.ai. The number one AI agent for customer service. Automate your most complex customer service queries on every channel.
Starting point is 02:23:08 Up next. Oh, yeah, we can just go straight into our next guest. Let's bring in. We have TARC from CalShare. with some massive news. Tarek, great to see you. Locked in. From the off the floor.
Starting point is 02:23:19 Stream. Good to see you. Hey, guys. Thanks for having me. Very excited to be here. You are locked in. Look at that backdrop. Fantastic.
Starting point is 02:23:26 Please introduce yourself. You've been introduced. Give us the update. What's the news? Let's ring the gong. Well, we just raised our series E. We just raise a billion dollars. 11 billion dollar valuation.
Starting point is 02:23:40 Great wind up. Great wind up. I was waiting for the gong. A fucking sick moment. Yeah, great to have you on. I was thinking over the, I don't know if anybody had a crazier Thanksgiving holiday than you. It was, there's a lot, there's a lot going on last week. So nice to come out of that with a, with a big announcement.
Starting point is 02:24:02 But yeah, maybe kind of just update us on, I think everybody has been following the prediction market wars. The more important story, I think, is just how much. Some people are calling it bloodbath, actually. Yeah, I mean, just like it's been a battlefield on the timeline. But yeah, I think like what's happening in the background is like this explosion of this, you know, new asset class that, you know, again, I think in your announcement earlier, you were saying a few years ago, there nobody really cared at all. And now, you know, you and the industry broadly have millions of users. So it's pretty unprecedented. But yeah, what's been the latest on your mind?
Starting point is 02:24:45 I mean, I think the thing that's happening right now is prediction markets, I think, have gone mainstream. I think every inch of evidence is pointing towards that. And I think that the thing that we're seeing is there's sort of one of these rare shifts in consumer behavior that you don't see often. Like, they don't happen. Like, changing the behavior of a customer, the habits of a customer, is a rare thing, and it's unique. And when you see it, you have to really go after it with all
Starting point is 02:25:11 your might. And it's, you know, there's like a number of things that have to align for that to happen. And I think they're aligning for prediction markets. I think it's happening. And I think there's, you know, one factor is the fact that people are not really trusting the sort of legacy media and legacy sources of information, and they go to prediction markets to get smarter. The other one is that they're legal now. You know, Kalshi took on this sort of battle over years to legalize this entire market and, you know, set it up as a legitimate financial asset so that anyone can participate. And three, I mean, I think we're all kind of, we sort of caught wildfire this, this year. I mean, I think the, we're seeing people, there's a little bit of this
Starting point is 02:25:52 phenomenon where you cannot watch a sports game without looking at the Kalshi odds live and the Kalshi charts. You cannot debate a topic about the future without, you know, telling somebody to put a position on Kalshi on the app. So it's a huge announcement. We're very excited about it. And honestly, it really feels like we're just scratching the surface of what prediction markets can be. One thing I've noticed when
Starting point is 02:26:16 I'm watching a sports game is there's sometimes an integration with Kalshi, sometimes with a competitor. What's actually going on? Legacy sports book. Yeah. What is actually going on? I feel like a lot of people who are just passively observing the timeline are seeing a lot of like announcements and partnerships with the partnership economy the part
Starting point is 02:26:36 and people are joking about it like what's actually going on what's at stake with some of these partnerships what have you done and what does it actually mean because it feels like if you do a partnership with a specific league that doesn't necessarily mean that I can't get odds on that event somewhere else so what what is actually going on with the partnership economy I mean I'll tell you kind of our approach to this so we are building you know our focus is building on a business. It's very metrics driven and sort of for context. So we're doing a billion and a half of volume a week now. Wow. And, you know, we're market leader by a meaningful margin. I think depending on sort of how you measure it, so we're something around 80 to 90%
Starting point is 02:27:15 market share now. And I think any partnership we do, we bucket them in a bunch of categories, but they're all focused on, like, actually driving legitimate volume and legitimate use cases into the product. So our partnership with, you know, platforms like Robinhood. I mean, Coinbase leaked. It's coming in December, and PrizePicks, Webull are kind of in that bucket. Then we have a series of partnerships coming around news. One of them leaked this morning in the New York Times article. But they're also very... One sentence, 10 leaks.
Starting point is 02:27:48 Everything leaks these days. You know, we're just like, nothing is news anymore. It's like, sort of, you know, it's all leaks. But the point is we're focused on things that drive legitimate use to the products and then drive legitimate utility to the partner. And so, you know, whether it's a broker, obviously, you know, this could be a big revenue line for them. And if it's a news network, it's a complement to the reporting that actually makes the reporting more accurate. And, you know, reporters love truth, and prediction markets bring truth.
Starting point is 02:28:18 So you could see the synergies in how they fit. Okay. Yeah, yeah, yeah, that makes a lot of sense. Yeah, what, yeah, I think the, some of the big news had allowed. week is that Robin Hood is entering and kind of potentially trying to verticalize the product experience on their side what can you say about the I guess like how you see the structure of the market evolving you guys are in exchange Robin Hood is a brokerage sounds like they're trying to actually build an underlying exchange themselves how much should how much should sort of observers of the
Starting point is 02:28:57 industry look to how the how stock trading and stock markets, stock exchanges have evolved versus prediction markets. Like, what does this market kind of look like in five years, 10 years, as much as you can kind of pull out a crystal ball for us? I mean, maybe the basics is like, and you've seen this a little bit in AI, right? After you see the success we've had, it's basically indicative of like, okay, there's a massive market opportunity ahead of us. And when that happens, I think you're going to inevitably see a ton of competition.
Starting point is 02:29:32 And generally in those markets, like the sort of massive surge of competition, whether it's brokers, there's some of the sports books like Daxons and Fanduel coming in, it's just usually a sign that there's a lot of good things to come for that market. It's a sign that, like, you have big companies reprioritizing their entire roadmaps to go all in after this. And that's a positive for us. Like we're market leader in a market that, you know, everybody is starting to believe is going to be ginormous. In terms of the specific question of market structure, I mean, like, you know, we have obviously the exchange, we also have our direct products in some ways
Starting point is 02:30:06 are competitive with some of our partners. And I think, you know, the same way that we're working with a lot of different brokers, over time some brokers are going to sort of diversify and work with a number of different exchanges. And that's how these sort of market structure evolve over time. And the only thing that matters, the kind of the thing that stands out is similar to any other market, it's product and product velocity is, are you putting out products faster than anybody else, and are you putting out products better than everybody else? And I think Kashi has had a pretty incredible track record of setting the pace in the
Starting point is 02:30:34 industry. At least if you look at the last year, we've set the pace on the industry and everyone's following. And I feel pretty good about us continuing to do so in the next 24 months. How do you think about the market structure? I think everyone's wondering, like, obviously this is a new market. It's unlocking entirely new sort of asset classes. And it's it's obviously big. Everyone's excited about the numbers, but is this natural monopoly? Is this duopoly? Like, how many winners will there be? How do you even think about the market structure? Is there some return to scale? It's interesting. I kind of like don't think much about that. I think investors love to sort of, investors do this thing where they're sort of going
Starting point is 02:31:13 rationalize all of it in five years. You know, everybody's going to be super smart about how they're like all figured out. But like, look, I think that like it's a very nascent thing, right? It's, it's, You know, like, it has some similarities to ride share. It has some similarities to the Draftings Fandau era when that happened. It has some similarities to the online brokerage industry, and it has some similarities to financial exchanges, like CME. So where does it fall? It probably somewhere in between all of these sort of buckets.
Starting point is 02:31:40 And probably not exactly the same as any of these other buckets. And I think that you'll see more of, I think with enough scale in financial services, but also true for any industry, everyone gets into everyone's territory. And so the only thing that matters, again, is sort of what companies are going to rise above the others in terms of product velocity and product quality.
Starting point is 02:32:01 And that's just what we're narrowly focused on. There's a question from the chat. Can you explain how external market making works on Kalshi? That's been a, for some reason, a hot topic recently, but market makers are part of any financial market. You kind of need them to basically have liquidity in markets. And actually, Kalshi and prediction markets have less customer to market maker flow than traditional markets. If you look at options, for example, it's like the vast majority is, you know, Jordi to a market maker like Citadel.
Starting point is 02:32:32 Whereas on Kalshi, actually, the vast majority is, you know, Jordi versus John. And then some of it goes to market makers. And it's an open, transparent orderbook where everyone is competing on price. And we have actually a separate company called Kalti trading that trades on the exchange, but they're a very small percentage of any liquid markets. Really, their function has been, for new markets are a little bit weird. Yeah, that makes sense because if it's some really, really niche thing, who's going to put in the first 500 bucks? Exactly.
Starting point is 02:32:58 And they're not very profitable. It's actually, we really, like, they're really focused on providing a good customer experience that we bootstrap markets rather than like any meaningful part of the business model today. And if we took it out, it'd be actually a worse experience. So I think it's definitely not positive for the ecosystem. But it's a bit like Uber, you know, when the adverse interest got impacted. the taxis. They were coming out with all these reasons, right? Like, you know, about all these kind of random reasons, but I don't think there's much truth to it. Yeah, so, so I'm sure you can't comment on any specific lawsuit. There's a number of them. But what has been the, I think there's quite a lot of prediction markets experts that have looked at some recent lawsuits again against prediction markets and said they clearly don't understand how this works.
Starting point is 02:33:46 like, can you comment at all in some kind of like misunderstandings broadly? Yeah, so we, what Calci has done is first regulate prediction markets as a financial instrument under this agency called the CFTC. People have been hearing more about the CFT recently because it also regulates crypto. And that's one of the main financial agencies. There's the SECC that does stocks and CFT that does commodities. And then we did the same thing with elections and now we did the same thing with sports. And the way that it works is, like, like financial markets, those are regulated at the federal level. And so the law around these markets is just federal.
Starting point is 02:34:24 They kind of report to a federal government and federal regulator, not a state government and state regulator. And there's a bunch of reasons for that. But it's kind of how the constitution was formed, which is some stuff makes sense at the federal level and some stuff is more local and makes sense at the state level. And we are one of these things that fall under the federal level. And federal law preempts state law. So if you are okay on the federal side, state law doesn't really kind of
Starting point is 02:34:46 apply to that exchange. And that's why we have one regulator, which is the CFTC, our federal regulator. And again, like, I think it's normal with, like, when something so disruptive happens to an industry, the people that are adversely impacted are going to come after it and come up with all sorts of arguments for why it shouldn't exist or why, you know, Airbnb was terrible and all these different things. But at the end of the thing that drives it long term is, is this a great product on our consumers loving it and using it? And the answer is yes, in this case. Off of the success of Kalshi and Polly Market, there's been a bunch of net new prediction market startups that are created.
Starting point is 02:35:24 Is there a possibility that this market like ends up having these sort of like niche, maybe more like vertical marketplaces? Or do you think that the platforms at the greatest liquidity and the deepest liquidity will ultimately just absorb those submarkets? It depends on how narrowly we define prediction markets versus broadly. Like I really think of prediction markets as kind of just like a next general, like expansion of financial markets to touch to anything. Calci means everything in Arabic. But really, if like if markets kind of progressively grew over time, calcium, what we did is just like kind of widened that set dramatically over what it could touch.
Starting point is 02:36:08 So I could see some, you know, startups innovating on like specific verticals over time. and doing reasonably well. But there is real concentration of liquidity and concentration of volume that happens in these types of markets that is hard, I would say, to battle with. And so I think at least from that aspect, like I think the cards are probably mostly shuffled already.
Starting point is 02:36:32 That makes sense. Last question. There was a viral clip of you talking about Donald Trump Jr. Do you have anything more to share on his involvement? Because I was watching that and I was like, yeah, it's kind of like, hey, where are we going with this thing? It seems like politicians have a deep insight on how campaigns use these prediction markets, but can you share anything more about his involvement in the company? Yeah, I mean, look, first of all, that clip is a clip and, you know, how these clips are taken. But, you know, who needs context?
Starting point is 02:37:05 Yeah, it's like, we don't need context. It's a completely, you know. Anyways, look, I think that, you know, you know, we have done, like, one of the main products that took us mainstream was an election market. And that brings a lot of attention from politicians on both sides of the aisle. And you see it, you know, trauma at the time was using his prediction markets all during the election. And actually, Mamdani more recently was using his calcium odds pretty consistently during his election. And so in some ways, like you're going to see a lot more, like prediction markets are going to touch financial markets,
Starting point is 02:37:35 are going to touch the news and they're going to touch the political process because they bring more truth to all of the above, all of these categories. And in some ways it's good that we get more and more, I would say, like, politicians involved and, like, engage with these markets. The one thing I'll say about this, and, like, again, it's very, it's in the same bucket as the, you know, as the other things that we discuss where, like, there's industry dissidents that are against prediction markets that find all these different reasons for why prediction markets might be bad. But the thing that happened is not this administration necessarily, even though this administration is pro-innovation, is we won that lawsuit on the election market, which has really redefined, the landscape, with the boundaries of what the financial market is. And that lawsuit was one, you know, is in the court of appeals in G.C. with relative, it's a very progressive panel. It was a panel of Democratic judges where we won three zero. So people want to make it out of it to be a partisan issue, even though I don't think truth needs to be a partisan issue. It's just, you know,
Starting point is 02:38:31 these markets work. People love them and they generate a lot of insight out of them. And I think that will win the way, win the day at the end of day. Last question from my side. How does the CFTC view when a market participant has some type of alpha or non-public information and they're betting on a market, you know, based on that information. From my view as somebody who, like, gets data from, you know, we work with polymarket, we look at, we use polymarket data on the show. If somebody has insight information and they're trading on that information, it actually makes the markets more accurate. So in some ways as a user who's just like viewing markets, it's, I want people that have inside information on global events to be trading
Starting point is 02:39:21 so that the markets actually better reflect reality. But what is like the CFTC's view on that type of activity? Because like things get thrown around all the time insider trading this or that, but I don't actually know like what the actual law says. Yeah, that's a great, like that's actually a great question. It's a point of debate in this land. But I think there's some distinction. So Cali is a regulated exchange. So everything we do, in some ways, a lot of the laws and the rules are very similar to what you would expect in New York Stock Exchange and some of the traditional financial markets. The question of insider trading is interesting because what you just said could also apply to the stock market, right? Like if you want to accurately price
Starting point is 02:40:03 the stock, maybe we should let that insider trading happen. And the reason why it's actually not allowed is because it makes the game unfair. It makes the market unfair. And if the market is unfair, liquidity dries up. People just stop participating. And that's why you have to have reasonable rules of the road, where people can reasonably expect to be treated fairly in the marketplace, where there's no kind of asymmetric or structural advantage for one participant versus another. And we take a similar approach here. So if you actually have insider information, which is information that you're not supposed to reveal to the public, you're not supposed to trade on it, because trading on it is a way to reveal it to the public. And so that
Starting point is 02:40:46 makes for a more balanced, more fair marketplace, and I think we're very focused on that. But it's a very interesting question. It's one the industry is battling with, but we take a hard stance on insider trading. Yeah, because if somebody goes and they vote in a local election, and they see, like, okay, I talked to somebody there and they said they were voting this way, and I talked to another person and they also said they were voting this way, and then somebody trades on that information, like, is that... you know, how do you define that type of activity, right? It's like, anybody could go down to the polls, or the voting center, and
Starting point is 02:41:28 just ask the same question, right? So, anyways. Well, I was going to say, it's the same as the stock market, right? If you go and sit in front of Walmart and count everybody that's going in and out during the day and forecast their sales from that, that's actually fair game. Now, if you call your cousin at Walmart and ask them for information they have internally that they're not supposed to reveal to the public, that's insider trading. And I think we have a very similar line here. Yeah, yeah, that makes sense. Very cool. Well, super helpful. And yeah, congrats to the whole team. It's a pretty massive milestone. Huge. And yeah, great, great getting the update.
Starting point is 02:42:03 Thanks so much for taking the time. Thanks for having me. We will talk to you soon. Talk soon. Have a good one. AdQuick.com. Out-of-home advertising
Starting point is 02:42:11 made easy and measurable. Plan, buy, and measure out-of-home with precision. Our next guest is Matt Mullenweg from Automattic. He is in the
Starting point is 02:42:20 Restream waiting room. Let's bring him in. They are the parent company of WordPress.com, Tumblr, I think... are going to remain... Welcome to the show,
Starting point is 02:42:30 Matt. How are you doing? I think we have you on a hot mic. We might have you on a hot mic, hopefully not. Welcome to the show. About to come on the screen. Yes.
Starting point is 02:42:39 All right. Well, a little drum roll. You're all about to experience TBPN for the first time. So, Matt, our audience is live. We're streaming on them. They're streaming on us. How are you doing? We're doing.
Starting point is 02:42:52 Fantastic. How are you doing? Good to meet you. Howdy. Thank you so much for... I know this is a little non-traditional. We're about two hours into our big annual address, the State of the Word. It's kind of like our State of the Union speech. But thank you so much for allowing us to connect. A lot of folks in the room have never heard of or seen
Starting point is 02:43:12 TBPN before. So this will bring a lot of new folks into your world, and I'm excited for some of your world to learn about WordPress. Yeah, give me the state of the word. And then also, I want your personal word of the year. We've been debating what the word of the year should be over here. Ooh. So state of the word, and I'll say the state of the word is strong. Okay. That's good. Let's hit the gong. We're hitting the gong for that one. A strong state of the word.
Starting point is 02:43:44 We actually just did a live release of WordPress 6.9. So WordPress does major releases three times per year. We were able to do it right here on stage. We had a little button that we pushed. We've got to get a gong in there next time. That was pretty fun. Don't worry. I didn't just ship it again.
Starting point is 02:44:01 But, you know, one of the first things about WordPress is it's not just built by one company; it's a community. WordPress 6.9 had over 900 contributors from all over the world, different countries, different languages, different companies, all coming together. And so that was pretty exciting. My word of the year, and actually a thing we were just talking about, is I'm going to choose freedom. So powerful. As technology starts to influence more and more of our lives, you know, how we travel, who we date, the things we learn, the news we're exposed to, you know, the sort of
Starting point is 02:44:33 freedoms that are embedded in an open source license. I like to refer to open source licenses as sort of like a bill of rights for software. It gives you inalienable rights that no company or person can take away from you. And that freedom and agency, I think, is really, really important and something that, as technologists or builders, we should try to embed into everything that we do. Give us an update on Beeper. I was super fascinated by that product. I love walled gardens. I also love tearing down the walls of gardens. It seems like a good shot across the bow of the iMessage walled garden. How's the progress going there?
Starting point is 02:45:13 Are you using the service personally daily? Are we going to see a lot of growth there? Well, obviously, I'm using it daily. So I would think of it not as replacing a walled garden, but more like allowing your gardens to come together. So I'm sure you, like me, have friends on lots of different networks. And some of them always love
Starting point is 02:45:38 to use WhatsApp, and some of them always love to use, you know, Instagram or LinkedIn DMs sometimes. I even get some interesting stuff there. And I hate it when I miss these messages, you know, because, you know, checking all the different apps sometimes, or the notifications, I might miss something. So think of it not unlike how email clients, you know, can bring in lots of different email accounts. Beeper takes all the different networks where your friends already are and maps them together. Now, the plus and minus is that it's not going to replace the networks. Like, I still keep all the different sort of specialized messaging apps because, for example, if someone sends you an Instagram story, when you click on that, you're going to want to load Instagram. So I think of it as complementary and hopefully even increasing the usage in a very small
Starting point is 02:46:14 way right now. It's pretty nascent. But in the future, think of it as sort of a different interface. So you might still have the dedicated apps, but then having this all-in-one inbox that you can sort of manage everything in, tag people, have folders. And it does cool features like scheduled messaging across all platforms, or even just weird heuristics that are pretty simple to do, but like, don't just show me unread, show me all the people I've messaged that haven't messaged me back yet. Oh yeah, sure, sure. We've talked to some young hackers, some startups, who are building, you know, sort of Beeper competitors, and their whole value prop is, like, we've figured out a way to get it into the iMessage ecosystem. Do you
Starting point is 02:46:55 think that we need a new regulation there, or some sort of law change, or some result to actually open up iMessage? Or do you think that with enough tricky hacking it can be done? Well, technically it's, it's not hard. Well, it is hard, but it's very possible to reverse engineer these networks. Yeah. However, as we saw with sort of a previous iteration of Beeper, if the network really, really doesn't want you to do that, it's probably not good to pick a fight with a trillion-dollar company. So perhaps these things might happen through open source or something, but as a commercial company, I think ultimately you have to be somewhat respectful and try to complement these networks. So how Beeper works today
Starting point is 02:47:42 is we don't support iMessage on mobile or Android. In theory we could, but Apple has indicated that's something they don't want. We do support it on the macOS clients. We have a way to integrate with iMessage using some APIs that are available on macOS, and so on macOS we can bring in your iMessages. But again, I'm building this for the long term, and we are a commercial company as well. Sure. So, you know, we want to work with the networks. And, you know, perhaps there can be regulations like the European DMA, things that can encourage interoperability.
Starting point is 02:48:17 But ultimately, I think that the sort of people who run these networks have to see a longer-term benefit for them. And for things like, you know, some of the other networks I mentioned that Beeper works with, I think their business model and everything, the increased usage is really useful for them. I think for today, Apple's business model, particularly in the U.S., kind of the lock-in effect to the device business, which is, of course, where they make a lot of money from iMessage, probably indicates that, unless forced to, I doubt they will adopt sort of iMessage interoperability. But who knows, sort of like they used Lightning for a while and eventually got USB-C and all of our lives got better.
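As a rough illustration of the "messaged, but no reply yet" heuristic Mullenweg described a moment ago: a minimal sketch of how such a filter could work over a unified inbox. This is hypothetical, not Beeper's actual data model or API; the Message record, the network labels, and the sample data are invented for illustration only.

```python
# Hypothetical sketch of a "messaged, but no reply yet" filter over a
# unified inbox that merges several networks (not Beeper's real API).
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Message:
    network: str        # e.g. "whatsapp", "imessage", "instagram" (assumed labels)
    contact: str        # the other party in the conversation
    outgoing: bool      # True if I sent this message
    sent_at: datetime

def awaiting_reply(messages: list[Message]) -> list[str]:
    """Return contacts whose most recent message in the thread is one I sent."""
    latest: dict[tuple[str, str], Message] = {}
    for m in messages:
        key = (m.network, m.contact)
        if key not in latest or m.sent_at > latest[key].sent_at:
            latest[key] = m
    # If the last message in a thread is outgoing, they haven't replied yet.
    return [f"{contact} ({network})" for (network, contact), m in latest.items() if m.outgoing]

# Example usage with made-up data:
inbox = [
    Message("whatsapp", "Alice", True, datetime(2025, 12, 1, 9, 0)),
    Message("whatsapp", "Alice", False, datetime(2025, 12, 1, 9, 5)),
    Message("imessage", "Bob", True, datetime(2025, 12, 1, 10, 0)),
]
print(awaiting_reply(inbox))  # -> ['Bob (imessage)']
```

The same "latest message per thread" index would also support the other heuristics mentioned, like surfacing unread threads, with one more field on the record.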
Starting point is 02:49:00 Who knows what will happen in the future? Talk about links on the internet. I feel like we're at a point in time where social media platforms are trying to keep users in their own applications so that they can monetize them to the full extent. Meanwhile, you have LLMs, which are ultimately doing a lot of the same thing. They're taking content from all over the internet, trying to keep users in the individual applications. It feels like WordPress in many ways is making moves to kind of like almost fight back against that. I might have that incorrect, but I feel like it's important. If you're
Starting point is 02:49:38 running a business independently online, it's great to have people on your own website so you can develop a deep relationship with them. But what is your view on that? We're very much anchored around X as a business. Obviously X has had issues with links or, you know, chosen to demote them in the algorithm over the last couple years. But give us kind of the state of the union on links. That's a broad one. Well, I will say X is actually a great example. And I've talked to Nikita about this. So they've shifted some of the balancing of links, and they now have this really nice sort of in-app browser. You've probably noticed that now, that when you load a link, you actually still have the ability to like
Starting point is 02:50:20 and re-blog and everything. And I think that's kind of the future. So I do think that you can have things that are complementary, because so much of the great content and everything is more on this open web. It doesn't have to be fully embedded in the app. But that is sort of a technological change. So I would actually point to X as someplace where I think things are going in the right direction. Although I do agree that sort of time when links got really de-boosted and everyone had to do it as like a reply was kind of weird and sucked. So for WordPress publishers, you know, we support so many different types of websites. And different types of websites, I think, might have different motivations.
Starting point is 02:50:56 So, for example, a popular plugin for WordPress is called WooCommerce. It's the e-commerce plugin. About 8.9% of all websites in the world are now running this e-commerce plugin. You can think of it like an open-source Shopify. And if you're selling something, a merchant, you just want to sell the product. You might not necessarily care that someone comes to your website to buy it. So some of the new things that are happening in partnership with OpenAI and others, where we're allowing products to actually be browsed and bought inside of the LLM, are pretty exciting.
Starting point is 02:51:28 I also think that the incentives of these open source chatbots in particular are very complementary to the open web. So, for example, if you're on Amazon, Amazon really wants you to stay, or eBay or Etsy or something like that, they want you to stay in their marketplace, on their system. But when you think of how Google works and sort of the growth of Google and the open web, you know, they have their search pages, but they also would link out. And that was the whole part of their business model and how they grew. We're seeing that with the chatbots as well. And in fact, something I talked about a little bit earlier
Starting point is 02:51:59 is that the traffic from bots, both from them crawling but also from user-initiated actions, is exploding and has already surpassed sort of human traffic, and it'll be interesting to see where that goes in the future. So, you know, there's never been a better time, I think, to invest in having a domain, but also to invest in publishing. And, you know, just like you might have a direct relationship. Like, for example, I suppose I could get, you know, ChatGPT to summarize today's
Starting point is 02:52:24 TBPN episode, but it's more exciting to watch it. I think that creators developing a direct relationship and brand is going to be part of the future as well. Very, very cool. Well, there's so many more things that I want to ask, but I know you're in the midst of your own presentation. So thank you for tuning in. Come back on soon, and thank you for having us. The view is spectacular as well. It's a pleasure to meet you.
Starting point is 02:52:49 I'd love to come down and hang out when I'm in LA next. That'd be so much fun. Thanks so much. We'll talk to you soon. Great chatting. Have a good rest of your day. See you.
Starting point is 02:52:56 Bye. That was the... Eightsleep.com. Exceptional sleep without exception. Fall asleep faster, sleep deeper. Wake up energized. That's our sponsor. 94 last night, John.
Starting point is 02:53:09 I think I smoked you again. I lost your phone. Well, we have Jason Fried in the Restream waiting room. Let's bring him in. There he is. Jason, how are you doing? Good to see you. Good. You? Congratulations. Massive news today.
Starting point is 02:53:23 Break it down for us. What's up? Was there big news today? I missed the news. What was the news? You're just calling out everybody. You are the news. Trello. Naming names. He's naming names. A lot of people don't do that. A lot of people say, oh, the competitors, the best-in-class solutions, the Gartner hype cycle. No, you call them out. You put them on the map. We had some fun. Yes. We launched a new product today called Fizzy, which is kind of a fresh take on kanban. An old idea. Obviously, it's been around for a long time.
Starting point is 02:53:49 But we have a different spin on things, a different take on things, and felt like it was time to do something new and kind of bring it back to the basics, and also add some fun and color and vibrancy, which is missing in the software industry. I feel like people might be colorful in a sense, but the products are very much the same. And so we wanted to do something different. And that was what we did today. Why a new name, why not a new tab in an existing product? Right. Well, Basecamp, which is our biggest product, has a kanban in it. We call it Card Table there.
Starting point is 02:54:21 But, you know, the thing is that Basecamp is very popular, but, you know, let's say there's 100,000 accounts, right? 100,000 companies use it. It's a small number in the end. And there's a lot of people who can use something like Fizzy that are not going to use Basecamp. Basecamp is a much bigger system. It's for bigger projects.
Starting point is 02:54:39 And there's a lot of small things that people need to do and organize and track. And so building a small standalone thing just feels like it makes more sense, frankly, for this kind of thing. So, I mean, do you have an idea of, like, who is the target market? Startups, individuals, like, you use this to plan your Thanksgiving dinner? Yeah, I mean, the target market is me and us. Basically, we build things for ourselves. I don't think about who we're making things for because we're making things for us, always. And the idea is that, you know, actually, let me just say this. I find the best products in the world are made by
Starting point is 02:55:12 the person who's making them for themselves. That's been my experience, like enthusiast products. And then other people find them, and other people discover them, and you find out that you're like other people and other people are like you, and they kind of dig it, you know? And so I said to someone this morning that I feel like TBPN is that way, where sometimes when I'm driving home, I want to watch TBPN, but I'm like, we just made it. I just lived it. I should probably watch something else. So I never watch the show myself. Sometimes you just call me and say, hey, we've done a podcast today.
Starting point is 02:55:47 We do a podcast on the fly. Let me just talk about tech news more. I'd love to know about the actual process for building the product. Who was staffed on the team? How many people? What time period? When did you start? Do you have a designer, developer?
Starting point is 02:56:04 Is it all just, what's the prompt? I imagine you just use one prompt for this? One prompt. All it was. All you needed. Yeah. So, you know, what's interesting is we actually also open-sourced this. So this is fully open source. It's a SaaS product and fully open source. So you can run it yourself for free, which means you can go into GitHub, actually, and look back at the very first commit about 18 months ago and see everything we did along the way. All the changes we made, all the dead ends, all the starts and stops, exactly who was involved on our team over time. And it's changed. Typically we have one or two designers on something. Then there's other people who chime in here and there, who jump in here and there, different programmers who jump
Starting point is 02:56:45 in at different times, but it's fully documented, which is very rare. You'll almost never, ever see this in commercial software, basically almost never, sometimes, but almost never, especially going back to day one. What ends up happening is you can do this thing where, on launch day, you can clear the log, basically. And then from that point on, people can see what you're doing. But we opened it up from day one, about 18 months ago. So it's actually all in there. The team size in total, probably about six people worked on it here and there over 18 months. But for the most part, it's usually two or three people working on something at a time. How do you think about pricing this? Yeah, I feel like, in 37signals
Starting point is 02:57:25 fashion, pricing will be opinionated. So I'm excited to hear how you guys approach this one. You know, we don't really, well, we have a price, but I don't know if it's the right price. Never do. It's 20 bucks a month. Unlimited users, unlimited usage. One price, no chart, no table, no contact-us, just a price tag. Like if you wanted to buy a pair of jeans or peanut butter, it'd be like, how much is it? Talk to the sales rep? They're going to look you up and down, they're going to say, well, how much should this person pay, right? What watch are you wearing? All the things, right? So it's 20 bucks, but we give you a thousand cards for free. So there's no time limit on the trial. You get a thousand cards for free, and if you never use... a card is like a, you know,
Starting point is 02:58:06 like a to-do item or something. Sure. If you never use them up, it's free forever. Okay. And you can also run it for free if you want to run it yourself, open source, of course. Yeah. Yeah. So we're basically just serving as a host. If you want to just turn it on, sign up, and be going, we'll host it for 20 bucks. Currently, look, this is an introductory price. We could change the price six months from now. If we do, we'll let people lock in where they were. We're not going to change prices on them. But we might raise it. I don't even know what we'll do. But we wanted to pick a number that was fair. The other thing I wanted to do is price this more like an accessory. This is not the only tool. The software industry is interesting because it thinks that whatever it makes is the only thing anyone ever needs, right? The thing is, people need a lot of different things, and so Fizzy's not going to be the only thing you have. It might be one of the many things you use. And so we kind of priced it that way. It's like an accessory, 20 bucks a month, kind of a no-brainer, unlimited users, cancel any time, no upfront anything. And it just feels like that's the right place to start. We'll see where we end up, but that feels good for now. If I pay you to host it, where is it hosted? Um, China.
Starting point is 02:59:11 We have a few different data centers, so it's not the, well, it's on our hardware. It's like, it would be easy to just throw this on AWS, but you're the one company that doesn't just do that. That's right. So we have a data center in Chicago. We have one in Amsterdam. We have one in North Carolina. So we're in a few different spots. And it's all on our hardware, in other people's data centers where we rent space.
Starting point is 02:59:40 That said, again, if you just don't trust us or don't want us to do it, you can also put it on your own stuff, including like a simple droplet, like a DigitalOcean something, whatever you can find that will host something basic will work for this as well. I mean, you actually can host it in Alibaba Cloud if you want. It's open source. That's the whole point of open source. I could put it on... So, someone does. There's an AI company that recently had a code red. Have you ever had a code red? Ever? Once? Not like that, not like a competitive-pressure code red, let's make sure we kind of focus on this competitor. But we've, like, screwed up, yeah, and had all hands on deck to fix something. I mean, there was a moment, I think... Did you ever learn the, like,
Starting point is 03:00:26 did you ever get overly fixated on a competitor and sort of, like, learn that? Because there's that classic law, right? Like, don't overly focus on competitors. Like, you're probably not going to die as a company because of your competitor. You die because of, I think they say, like, indigestion or something like that. Right. Most wounds are self-inflicted. Yeah, exactly. But sometimes you have to actually have a lesson be fully ingrained. You have to learn it the hard way. I'm curious if that was the case. I think there was one time, way back. We used to have a product, we sell a new product now called Campfire, but way back in 2006 we launched Campfire, which was a real-time chat, group chat.
Starting point is 03:01:13 Yeah. And back then, we could not shove this down people's throats. Like, nobody understood group chat for a business. It just was very, very hard to sell and to move, and it was a very small product for us. And then Slack came out, and I saw it, and I remember, oh shit, like, yeah, they nailed it. It was crazy, because them nailing it, it was IRC. Like, I used IRC back in the day, and the hashtag channels, everything, like, all the primitives had been battle-tested in IRC. The other thing is Slack doesn't feel like that to the outside world. I mean, I'm sure you have opinions on Slack's, like, design, but it doesn't even feel like that. Like, you guys probably could see that and be like, oh, that, like, the design was opinionated and, you know,
Starting point is 03:01:59 fun. Slack felt fun. I mean, IRC, of course, is very geeky and whatever, but yeah, the fundamentals were there, but Slack had a wonderful onboarding experience. It felt fun. They had great integrations. They just kind of totally leapfrogged us in that world. And that was, like, fine, but it did. It was the first time I felt that sort of nervousness in my stomach. Now, I didn't feel it against our business, because Basecamp is a very different kind of product and it was fine. But it was Campfire specifically, because I was frustrated. I was trying to figure out how to make it better. And then I saw them come out, like, oh shit. Like, yeah, that.
Starting point is 03:02:41 that, that's how you do it. So that was one time, but I just don't think there's any reason to focus on competitors. You can't control them. You don't know what they're going to do. You don't know if they're going to be around in three months or three years. You don't have the same economics as they do. So it doesn't really make sense. Like, for example, I'll take HEY, our email service, hey.com. We have forty-some-thousand paying customers for HEY, right? Which is, if you were Gmail, it'd be an absolute abject failure to only have 40,000 paying customers. It would have been shut down. It would have been shut down years ago.
Starting point is 03:03:08 In seconds, right? But for us, it's a multi-million dollar business because we have 60 people here. So for us, it's a great business. So, like, I can't go, well, Gmail is killing us. They're not killing us. They're doing their thing. We're doing our thing. So I think you've got to, in my opinion, the only person you actually compete with
Starting point is 03:03:23 are your own economics. Like, that's not a person, but the only thing you compete with are your own economics. If you can make it work, if you can make it viable, you're fine. If you can't... Is your revenue more than your costs? Your costs. You compete with your costs. Competing with your costs, yeah.
Starting point is 03:03:35 Every business needs an AI note taker. What are your opinions on AI note takers? If they join the call, are you admitting them, or are you letting them stay? I'm pretty harsh. I always let them sit out in the cold. I never let them in. We don't have meetings. So I don't even, I couldn't even invite one in if I wanted to.
Starting point is 03:03:54 We just, we don't do that. But I will say, I have been in a few calls recently that other people have set up and there's been like an AI transcript. And it has been quite handy. It's really pretty impressive when it works really well. Strangely, Apple can't seem to get voicemail transcriptions to work at all. Have you seen? Apple is just struggling with all the basics on transcription, even just talking to your phone,
Starting point is 03:04:16 and like, Whisper works. It works in the ChatGPT app. It works everywhere else. Apple just has not implemented it properly. And it's not crazy AI god stuff. Like, it's literally just, take the words that I'm saying and write them down verbatim. And that's a huge benefit, because if you're in a business call, sometimes I just want to search the actual transcript.
Starting point is 03:04:36 I don't even need you to summarize it or put action items or go do things for me. Not agentic, none of that. Just actually write down exactly what I said so that when I say, you know, we had, you know, when I say AWS or whatever, I can go search for when that happened in the transcript. And a lot of, a lot of companies just haven't been able to implement that. It's been weird. I agree. I think, frankly, that is one of the best use cases.
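The "verbatim transcript you can search" workflow described here is easy to sketch with the open-source Whisper model. A minimal example, assuming the openai-whisper package and ffmpeg are installed; the audio filename and the search term are placeholders, not anything from the show:

```python
# Minimal sketch: transcribe a recording with open-source Whisper, then
# search the transcript for a term and print roughly when it was said.
# Assumes `pip install openai-whisper` and ffmpeg; "call.mp3" is a placeholder.
import whisper

model = whisper.load_model("base")          # small, CPU-friendly model size
result = model.transcribe("call.mp3")       # returns full text plus timed segments

term = "AWS"                                 # placeholder search term
for seg in result["segments"]:
    if term.lower() in seg["text"].lower():
        # seg["start"] and seg["end"] are offsets in seconds
        print(f"{seg['start']:7.1f}s  {seg['text'].strip()}")
```

No summarization or agentic behavior involved; it is just speech-to-text plus a substring search, which is the narrow use case being praised.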
Starting point is 03:05:00 It's not even AI, though. It's just, it's great transcription software, which is very, very handy. And I think, like, this is the thing: transcription software's been around for a long time. It's gotten better and better and better, but it's not, like, AI, really, you know. I mean, it is very useful. In other ways, it has been AI for 20 years.
Starting point is 03:05:16 It's been the original AI in many ways, you know. It's like, throw a bunch of data at it and try and estimate what things are. Even, like, OCR, these similar things, they're not AGI. They're AI in the sense that they're narrow. It's the recommendation algorithm on YouTube or TikTok or Netflix, or, you know, this specific thing: you take a picture of a receipt, does it understand the text in there, even if it's kind of a dark photo? Yes. That's specific, narrow AI, and that's great, but we need to actually get those things working on our phones. I agree. Someday.
Starting point is 03:05:49 Yeah, we left AI out of Fizzy, by the way. We made a conscious effort. We actually had some in for a while, and pulled it out, and had it back in, and pulled it out. Yeah, I'm just like, I want to remove stupid from this. I don't want to add intelligence. I'm going to remove stupid, yeah, from the software. So it's just so straightforward that it just works. And you don't even feel like, God, I wish I had AI for this or for that. So V1, no AI. We'll see what happens down the road. Again, it's open source. People can contribute. So the really interesting thing with Fizzy is that there is a world where you can just actually sit back and do nothing on AI. And if AI is real and valuable to your users, they will get it stuffed down their throats via their OS, via their
Starting point is 03:06:26 browser, because Atlas is going to be trying to jump in. Perplexity Comet is going to try puppeteering their Fizzy. And the rest of the system that they're using, whether it's their phone or their laptop or their desktop, it's going to bring the AI to bear with computer use. And so you might never have to build it. Yeah. I think so. In fact, this is actually interesting, really quick. Recently, OpenAI added a Basecamp connector to ChatGPT. And we didn't even do anything. So they did all the work. And they just sent us an email saying, hey, we're launching this Basecamp connector in, like, a few weeks. Like, great.
Starting point is 03:07:01 I'm like, this is fantastic. We don't have an MCP server. They just did it. And so I just think more, to your point, I think more and more of that's going to happen, which is it's going to be available in the OS or someone else is going to do it or whatever. And to spend all this time to build it
Starting point is 03:07:15 into the product specifically, I just don't feel like it's the best use of... An initial V1 should be focused on the product itself and not the other things that it could possibly do. Again, later on, maybe there's stuff that comes in, maybe people, via the open source version, submit some PRs that have some AI stuff. We'll see where it goes, but we didn't need it for V1.
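For context on the MCP server mentioned above: the Model Context Protocol is the plumbing that lets a chatbot like ChatGPT call tools a product exposes, which is roughly what a connector does. A minimal, hypothetical sketch using the official MCP Python SDK follows; the "list_cards" tool and its in-memory data are invented for illustration and have nothing to do with Fizzy's or Basecamp's actual connectors.

```python
# Hypothetical sketch of a tiny MCP server exposing one tool, using the
# official Python SDK (pip install "mcp[cli]"). The "list_cards" tool and
# its fake in-memory data are illustrative only, not a real product API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("kanban-demo")

CARDS = [  # stand-in data a real server would fetch from the application
    {"id": 1, "title": "Write launch post", "column": "Doing"},
    {"id": 2, "title": "Fix onboarding bug", "column": "To do"},
]

@mcp.tool()
def list_cards(column: str | None = None) -> list[dict]:
    """Return kanban cards, optionally filtered by column name."""
    if column is None:
        return CARDS
    return [c for c in CARDS if c["column"].lower() == column.lower()]

if __name__ == "__main__":
    mcp.run()  # serves over stdio so an MCP-capable client can connect
```

The point Fried is making still stands: a vendor (or the community, via open source) can wire this up around the product after the fact, so it does not have to be part of a V1.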
Starting point is 03:07:34 Yeah. There's just always a question of where the AI lives. Like, do you need to go and pre-train your own model to answer questions? Or, if you set up a good knowledge base, will you just get sucked into the next pre-training run automatically? And then we can just go to ChatGPT and ask about you and you'll be there. Anyway, Jordi, I want to keep hanging out for an hour, but we do have to wrap the show because we're going to look at a studio. We have one last question from David Senra?
Starting point is 03:07:58 Yeah, one last question from our mutual friend, David Senra. Could we get a wrist check? What watch are you wearing on launch day? I might be the only person who coordinates their watch with their software. It's possible. So I'm wearing today, I'm just wearing a, I'll take it off because I don't know how to quite hold it up otherwise. This is just a vintage Heuer from 1974, which is a birth year watch. Let's see, hang on.
Starting point is 03:08:22 Whoa. Oh, birth year watch. Hang on, hang on. I love the, yeah. Hang on, let me see. The focus is hard, but there you go, there you go. There we go. I love the, I love the orange.
Starting point is 03:08:33 Thank you. It's colorful. Fizzy is colorful. Fizzy's full of color. It's the most colorful watch I own for the most colorful product we've ever made. I knew, I literally said, I knew you were going to match. I knew it was going to be intentional. This is so good.
Starting point is 03:08:48 It's a little sad, a little bit. It's a little bit sad. It's fun. It's joy. This is fun. It's joy. This is amazing. I don't have like,
Starting point is 03:08:56 Like a green... why were you guys wearing a green jacket a few days ago? What was that about? It was Shopify. Black Friday, we were celebrating commerce online. And so we wore... Oh, just green for money? Solid green suits. No, Shopify's signature color is green. Oh, Shop. I didn't even know that. Yeah, Shopify's green. Yeah. Okay. It is a little confusing because we use a dark green in our brand theme. And so it actually paired up pretty nicely. We also have yellow suits for when there's big Ramp news. We will wear solid yellow suits. Yeah, you've
Starting point is 03:09:27 Hard to miss. Well, Jason, open invite to the studio. We'd love to hang out for like a full hour. Everybody. Everybody's loving it. Everybody. Let's do it some time. I'd love to.
Starting point is 03:09:38 I think I got an email about that. So we'll figure that out. Amazing. Appreciate it. All right. Thanks for having me on today. I appreciate that. Talk to you soon.
Starting point is 03:09:44 Very exciting. Thank you. Bye. GetBezel.com. Shop over 26,500 luxury watches. You're not going to believe it, but this is actually the next ad read up. Fully authenticated in-house by Bezel's team of experts. And we've got to close out the show, so I'm going to tell you about wander.com.
Starting point is 03:10:00 Book a Wander with inspiring views, hotel-grade amenities, dreamy beds, top-tier cleaning, and 24/7 concierge service. There are so many more posts that I want to get to, but there's a lot. We'll get to them. There's a new Arena Mag out. You've got to go to Arena Mag, check it out. We are featured in this Arena Mag, issue 006, The Three Martini Lunch. We had Julia on the show, of course, to talk about it, but now it's in print. There's a lot else going on, and we'll be back tomorrow. Sorry to cut it off. We're having a lot of fun.
Starting point is 03:10:28 I would be in a very bad place if we weren't podcasting tomorrow. But fortunately we are. So we'll see you tomorrow. Leave us five stars on Apple Podcasts. Thank you for hanging out. Goodbye. Have a great evening.
