TBPN Live - Grok Deep Dive, 200,000 GPU's in Memphis, There's No Moat?, Time is Money, Tell Me a Joke

Episode Date: February 18, 2025

TBPN.com is made possible by:
Ramp - https://ramp.com
Eight Sleep - https://eightsleep.com/tbpn
Wander - https://wander.com/tbpn
Public - https://public.com
AdQuick - https://adquick.com
Bezel - https://getbezel.com

Follow TBPN:
https://TBPN.com
https://x.com/tbpn
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://youtube.com/@technologybrotherspod?si=lpk53xTE9WBEcIjV

Transcript
Starting point is 00:00:00 Welcome to Technology Brothers, the number one live show in tech. Jordy, lock in, because we are live from the temple of technology, the fortress of finance, the capital of capital. That's right. Today is Tuesday, February 18th, 2025, and this show starts now. We got a great show for you guys. We got a Grok 3 deep dive. We're breaking it down. I got early access. I ran the same deep research prompt through OpenAI deep research and through Grok. We're going to break down each analysis. And the prompt I used is about AI and how these LLMs are progressing and some benchmarking stuff. So it'll be a good educational experience for you. And you'll also get to make your own call on which
Starting point is 00:00:42 one you think is better. But you got to play with these to understand them. We got a ton of other stuff. There's a deep dive in the Wall Street Journal about how Elon Musk and Sam Altman's relationship soured. We'll be putting that in the truth zone if there are any untruths in there. And it's also a massive launch day, because our boy Alexander Karp's book is officially live. Let's go. The Doctor. The Technological Republic: Hard Power, Soft Power, and the Future of the West. It's available now. If you pre-ordered it, it should be on your doorstep soon. It's available on Audible and we highly recommend it. We're big fans of Karp over here at TBPN. And so let's kick it off with Grok 3. I got access, I think on Sunday. I had it do the first prompt, which was: tell me a joke, make sure it's actually funny, original and insightful.
Starting point is 00:01:36 My favorite comedians are Shane Gillis, Norm Macdonald, et cetera. Structure the joke like an actual standup set, not like a knock-knock joke or a basic question-answer type joke. And so here's what Grok said. You couldn't have been more clear. Couldn't have been more clear. And let's see if AI is taking my job as a comedian, news podcaster, tech analyst, venture capitalist. Wordcel. Wordcel. Grok says, you ever notice how AI is supposed to take over the world, but it's still out here struggling to understand my mom's accent when she yells at Siri? Like, we're all worried about Skynet, but meanwhile, Siri's over here going, I'm sorry, I didn't catch that. Did you say turn on the lights or burn down the house? And it's not just Siri, right? All these tech bros are like, AI is going to replace
Starting point is 00:02:22 your job, man. But I'm like, bro, AI can't even replace my fantasy football lineup without benching Patrick Mahomes for a kicker. Meanwhile, I'm getting texts from my buddy who's a product manager, and his Apple Watch is like, meeting starting, and he's sprinting across the office like he's running the Dakar Rally. AI is not taking over the world. It's too busy trying to figure out if we're asking for a weather report or confessing to a crime. By the time it gets there, we'll all be living on the moon anyway, probably because some founder convinced us it's the next big VC flex. So this is so jarring to listen to, because it sounds like it's structured
Starting point is 00:03:04 in a way that it should be funny, but it's just completely not. And so your brain is sort of yearning for the punchline zinger, and it just never comes. You're perpetually in the state of hoping you're going to hear that sweet note, and it just doesn't come. But I do think, obviously, the benchmarks for Grok 3 are incredible. We'll go into all that. There's a lot of good stuff in here. I think what's interesting about this is that Grok 3 clearly trained this on my feed, because in my last 10 tweets, I posted the Dakar Rally, Patrick Mahomes. I also posted something about, in here, I said, product manager and the Apple Watch. Remember that post? You actually fed that
Starting point is 00:03:54 one to me. And so it clearly took that text in and said, oh, okay, this user, this person who's prompting me, likes Apple Watches and product managers, and will get those jokes, maybe. But it didn't really integrate it properly in an interesting way. And I was texting with one of the XAI guys and was like, this is really interesting. Like, I don't know if I like this, but I think this could be really cool for certain prompts. And it could make it so that, sure, sometimes I want a vanilla, clean LLM installation where I come to it, it doesn't know anything about me, and I give it a really robust prompt, and it doesn't assume anything.
Starting point is 00:04:32 And then other times I'm going to want it to really know, oh, this guy's, like, super in-group in this particular niche. He's down the rabbit hole on this. So all of a sudden, if I'm explaining, you know, like, VC fund dynamics, I can be super jargony and super in-group and super detailed. But then if I'm explaining, like, oil and gas to this guy, he doesn't know anything about oil exploration, so I've got to keep that high-level. That's actually a benefit to me. And that would be really cool. So I think fine-tuning the models on the individual person's timeline, whatever information you can get about them, that could be very cool. But in this case, it was very, very silly.
Starting point is 00:05:10 Which is so funny to have this many things that are aligned to the stuff that you post out and botch it this hard I'm getting texts from my buddy Who's a product manager and his Apple watch is like meeting starting because you posted the White Lotus reference of the product manager and he's sprinting across the office like he's running the Dakar rally. It just doesn't combine at all. How does that go with AI supposed to take over the world?
Starting point is 00:05:37 I don't get it. Anyway, Grok3, I don't think there is a good comedy eval right now. They're not even benchmark so benchmark we talked about this what we before we came online the evals are hyper fixated on these sort of ultra complex problems and they clearly massively struggle on like a comedy eval right like that would like somebody should make the comedy eval 100 that basically says i asked know, all these foundation models to make various types of jokes.
Starting point is 00:06:07 This is the funniest joke. I tested it against real humans to see if they laugh. Exactly. And so that's a totally valid way to test a model because being able to produce jokes is a, you know, there's sort of comedic intelligence. Totally. Totally. And so it's not just about solving these sort of esoteric problems that nobody thinks about all day long.
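The comedy eval described above could be sketched as a simple pairwise human-preference harness: show raters jokes from two models at a time, record which one got the laugh, and rank models by head-to-head win rate. Everything below (model names, votes) is invented for illustration, not a real benchmark:

```python
from collections import defaultdict

def rank_by_win_rate(votes):
    """votes: list of (model_a, model_b, winner) tuples from human raters.
    Returns models sorted by fraction of matchups won, best first."""
    wins = defaultdict(int)
    matches = defaultdict(int)
    for a, b, winner in votes:
        matches[a] += 1
        matches[b] += 1
        wins[winner] += 1
    return sorted(
        ((m, wins[m] / matches[m]) for m in matches),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Hypothetical rater votes: each tuple is (joke from A, joke from B, which won)
votes = [
    ("model-a", "model-b", "model-a"),
    ("model-a", "model-c", "model-c"),
    ("model-b", "model-c", "model-c"),
    ("model-a", "model-b", "model-a"),
]
for model, rate in rank_by_win_rate(votes):
    print(f"{model}: {rate:.0%} win rate")
```

This is the same head-to-head preference scheme LMArena-style leaderboards use for general chat, just pointed at jokes instead of answers.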
Starting point is 00:06:30 I think that Grok could potentially get the most extreme product-market fit. And what this specific response tells me they're going towards is: they want to have the current thing happen, and then somebody be able to generate a post that's hyper-relevant to what they talk about, for that specific moment in time. And in many ways, when the timeline is just model on model on model on model, that will be some form of, yeah, you know, comedic general intelligence. Yeah. We were joking about X potentially setting up the @chat handle. I just saw Mike Solana posted, chat, what's going on with these airplanes? And just saying, like, hey chat, like, is this
Starting point is 00:07:12 real? Chat, is this real? is like a common online internet phrase these days. And so I could definitely see Grok be instantiated as a player character, an NPC on X, in a way that it could interact with a lot of posts, or be someone that's integrated. It's also interesting looking at the XAI launch stream, looking at what Elon was pushing the engineers on. So he's like, okay, we have this breakthrough. We have the cutting-edge LLM. When are we going to solve Riemann? He's talking about the Riemann hypothesis, which is kind of like the hardest mathematical problem
Starting point is 00:07:50 that humanity hasn't solved yet. And the XAI engineer is kind of joking: well, as long as we have a random string generator, something that just randomly generates words, and then a validator and enough compute time, you should just be able to brute-force it. And they're kind of joking around, but it's very clear that they are focused on fundamental math, fundamental physics, breakthroughs in science and technology. And that's just a completely
Starting point is 00:08:13 different vector of optimization than comedy. Anyway, let's move on to Ben Thompson, who already posted an update breaking it down. Real quick, let's talk about the actual launch of Grok 3, because I'm just sitting there hammering through some emails last night, and I'm actually watching PMF or Die, and I hear Patty in the cage be like, yo, the Grok 3 announcement's going up. And they just kind of haphazardly throw up a live stream. Yeah. And then just, like, sit there with one camera. Yeah. And it's kind of cool in that it's authentic. They're not overly fixated on production value
Starting point is 00:08:48 or anything like that. There's been a number of sort of model releases that were structured, you know, open AI is continuously drilling into people's heads. We are the Apple of AI. Yeah, more product focused. Product focused,
Starting point is 00:09:00 but also just like the production value of their presentations are at a different level. And so anyways, I thought it was fascinating that they just had one camera showed up, streamed it, you know, basically bounced. It was no bells and whistles. And so cool approach, similar to safe super intelligence, which is basically saying like, we're not even going to release products until we have AGI, which we'll get to. Yeah, yeah, yeah. I mean, the whole AI landscape's fascinating. The model layer's commoditizing, but there are so many different product strategies
Starting point is 00:09:32 on top of the models. And it's very cool to see that XAI is just forking, delivering straight into X. I love that. So Ben Thompson at Stratechery, highly recommend that you subscribe. He kicks off with a quote from Bloomberg. He says, XAI showed off the updated Grok 3 model. They call it the smartest AI on earth across math, science, and coding benchmarks. Grok 3 beats Google Gemini, DeepSeek V3, Anthropic Cloud, and GPT-4.0. They announced this during a live stream Monday. Grok3 has more than 10 times the
Starting point is 00:10:08 compute power of its predecessor and completed pre-training in early January. Musk said the presentation alongside three XAI engineers were continually improving the models every day. It's literally within 24 hours, you'll see improvements. And that's true. They are really rolling out stuff very, very, very quickly. They also introduced a smart search engine, which I tested and you, interestingly, you don't need to click a button to trigger it. Uh, it's just, it's just like, if it thinks that you want, uh, you know, some deep research, it just goes and does it. It's called deep search. Uh, it's a reasoning chat bot that expresses its process and of understanding a query and how, and how it plans its response.
Starting point is 00:10:45 And we'll take you through one of the results from, uh, Grok three is deep search product. Uh, and then they also intend to release a voice-based chat bot as soon as possible. So, uh, Ben Thompson writes Grok three appears to be one of, if not the best performing base model in the world, it is topping the usual benchmarks but not if you include o3 which is interesting that was not in the bar charts which we'll get to uh and tops all categories the highest score in lm arena yeah uh obviously we're xai fans x fans here grok fans but the the sort of selective positioning of new models against models like definitely needs to be called out because people just sort of decide all right who are we going to sort of compare ourselves to
Starting point is 00:11:30 in this very moment yeah because everybody just wants to show you know yeah i mean honestly like there's there's so many different vectors because o3 can be very compute intensive then there's o3 mini then there's o3 mini high and so you know there's this big like value trade-off between yeah if you let the l if you let the llm reason for hours and you spend two thousand dollars per query you can get remarkable results but then yeah is that really a fair benchmark against something that costs a dollar to inference 10 cents and so uh all of a sudden you need to plot these on like an x and y graph and then eventually you need to plot them on some sort of like, you know,
Starting point is 00:12:05 unintelligible tensor graph or something. Anyway, let's go to Andre, Andre Carpathy, the absolute dog who is not a fair observer because he's worked with Elon, but he's also worked at open AI. So maybe he's a little bit more, a little bit more fair than most people think.
Starting point is 00:12:22 I really enjoy his analysis. He says, the impression overall I got here is that this is somewhere around O1 Pro capability and ahead of DeepSeek R1, though of course we need actual real evaluations to look at. The impression I get of DeepSearch is that it's approximately around Perplexity's DeepResearch offering, which is great, but not at the level of OpenAI's recently released Deep Research, which still feels more thorough and reliable. And I found that Deep Research is just better at spitting out 5,000 words. Most of the other products spit out 1,000 words. And so you just get more out of Deep Research. Which is what you want from Deep Research.
Starting point is 00:12:58 You're not looking for a perfectly summarized handful of paragraphs. Exactly. The entire point is to let you kind of figure out what's important about what they- Karpathy goes on. He says, as far as a quick vibe check over two hours this morning, Grok 3 Plus thinking feels somewhere around the state-of-the-art territory of OpenAI's strongest models,
Starting point is 00:13:18 O1 Pro for $200 a month, and slightly better than DeepSeek R1 and Gemini 2.0 Flash thinking, which is quite incredible considering that the team started from scratch one year ago. This timescale to state-of-the-art territory is unprecedented, and that is undeniable. This gets at the first Grok 3 takeaway. It's simultaneously surprising and not surprising.
Starting point is 00:13:36 Start with the latter. XAI famously built a 100,000 GPU cluster in Memphis, and now has 200,000 GPUs, and used 20, 000 of them to train grok 3 and so they're still building the larger cluster so even though you'll see a lot of people posting on the timeline oh you know grok 3 100 000 gpus that's not accurate they they only use 20 000 for this but it's still impressive yeah and just to highlight again this is the largest cluster ever the 20 000 20 20 000 gpus that's not true the hundred thousand is the largest ever uh i believe so so he's saying the largest ever for xai or the largest ever for any foundation model i don't know i asked i asked both chat gpt and grok to pull this uh
Starting point is 00:14:20 grok says gpt4 use 25k gpus while grok3 used 20 000 gpus now chat gpt says that uh that gpt4 was just thousands 20k gpus a100s and so it seems like around 20 20 25k has been the the top to date it's honestly unclear from these but clearly elon knows that he's just he's just willing to sprint towards these yeah big big numbers as fast as humanly possible 100 and so um uh and sure it resulted in a top tier model that's not surprising giving given it is what scaling laws predicted but it is comforting as there was a question as surprising given it is what scaling laws predicted, but it is comforting as there was a question as to whether scaling laws had hit a wall. And people were saying this during the DeepSeek kind of fiasco. What is surprising is that XAI is only 19 months old.
Starting point is 00:15:18 The company has gone from incorporation to the largest GPU cluster in the world to arguably the best model in less time than it has taken OpenAI to go from GPT-4 to GPT-5, albeit with substantial updates along the way, including the big 4.0 update just this past weekend. It's both a testament to Elon Musk and his team, Jess Gass, CEO of NVIDIA, Jensen Wong. So to me, this tells that our read on the Elon bid, which was only last week,
Starting point is 00:15:49 it feels like a month ago at this point, but Elon bidding $100 billion to buy some, not entirely known amount of open AI, means that he believes he either was purely doing that out of spite to toss like a wrench into that process, or he believes that the product layer is what actually matters because he's able to achieve incredible performance purely at the model layer. But even as Karpathy is saying, the product layer is not there yet. And certainly Grok has nowhere near the real consumer usage that OpenAI does. Yeah.
Starting point is 00:16:27 It's interesting. So he goes into competitive implications. He says, I did get access to Grok three, but not the thinking and search models. I actually don't know if I have access to the thinking and search models. It's unclear to me from a product perspective. And he has to say that he finds himself using Grok two quite a bit, almost entirely via its integration into the X app. It turns out being plugged into a pre-existing distribution channel is very useful. And this is a crazy bull case for Elon buying X. But the challenge is that X
Starting point is 00:16:55 is a very specific type of audience that is not necessarily reflective of, certainly not the LinkedIn audience. Like you could argue that OpenAI should go do a deal with LinkedIn and say like, make us your default LLM and just distribute us to like the normie masses because like long-term it's going to be more important to be there. Yeah. Yeah. He also says, I also never use it anywhere else for me. Product matters and chat GPT remains my product of choice. I feel the exact same way. I'm also curious to what extent ChatGPT deep research's advantage over other competitors like Google, Perplexity, and now XAI are due simply to O3 still being the best model versus
Starting point is 00:17:35 OpenAI spending more time crafting a better tool. Regardless, the impetus for OpenAI to focus on being a consumer tech company is as strong as ever. OpenAI is the only AI lab to have organically built its own distribution channel. And they really, really need to get an advertising product out the door to ensure they are serving free customers the best possible models, which makes perfect sense. This also explains why. That has to be pending this quarter potential. Yeah. And there's this question of, use uh microsoft as the partner there i i believe netflix was considering that as well because microsoft has a really big ad network from linkedin and from uh and from uh bing uh even though you don't think of them as a big advertising company they built all the algorithms yeah the structure and all
Starting point is 00:18:18 the inventory and so certainly not going to be meta yeah um and so this also explains why open ai is tying up with oracle and softbank and trying to become a for-profit xai raised six billion dollars last november and actually owns its gpus musk also said on the live stream that xai is building out a 1 million gpu cluster let's go size gone for the one million cluster. Boom. Let's get that in the handier spot. We've got to get this more dialed in. Actually, we need a button to hit on the set.
Starting point is 00:18:50 We have that. It just raises the gone. Oh, it raises the gone. It's hilarious. And then, of course, he ties it to Ilya Sutskever in Bloomberg is raising more than $1 billion for his startup at a valuation of over $30 billion. We'll talk about this later in the show. Vaulting the nascent venture into the ranks of the world's most valuable private technology companies. Green Oaks is in. They're putting in 500 mil, said the person who asked not to be identified. Green Oaks is also an investor in AI company Scale and Databricks. The round marks a significant valuation from the $5 billion that Sutskever's company was worth before.
Starting point is 00:19:28 First off, a disclaimer. Daniel Gross, a frequent Stratechery interview guest, is the CEO of SSI. I never realized. I didn't know that. Is that breaking? Are we scooping? Are we scooping Ben's scoop right now? I'm going to go with the scoop.
Starting point is 00:19:41 It's too much. People get confused. No, I think this was like i think this was actually like publicized like very early on interesting that ilia wanted to be the technical lead more than anything else that's cool but that's very cool dg you know he's going you know you know he seems like for everything i've seen just extremely extremely motivated to win and be in the most important places constantly. Yeah. It's funny.
Starting point is 00:20:07 He also had an accelerator. Yeah. Sam had an accelerator, small accelerator called Y Combinator. So the accelerator to foundation model pipeline is very real. You know, the other path, the counterfactual, the path not traveled here. Daniel Gross sold a company to Apple in AI, got acquired, should have taken over the CEO role. Yeah, yeah, yeah. This is something we advocate here on the show.
Starting point is 00:20:31 If you get your company acquihired, start asking the CEO, hey, what's the succession plan? How can I get in the boardroom? Hang out outside of the board meetings. Try to flag people down. It'll only take four quarters to get a little time with the key players.
Starting point is 00:20:44 And then from there you know just start you know it's uh it's such an underrated strategy right now i think i think like there there's massive value in being a founder and actually going in and going for the top job going for the top job it sounds too ambitious to be true but i think someone will eventually do it and we'll all be like wow it's amazing yeah a lot of a lot of founders end up getting acquihired and then are not super happy with the leadership at the company. So just become the leadership. Exactly. Become the leadership. And it sounds like a joke, but it's not. It's actually a better path than just like resting, investing. And then you're like, oh, what do I want to do next? I'll have to do the same company or like become a VC
Starting point is 00:21:19 and you're kind of lost. Instead of like, wait, I just got acquired by a company that does the same thing that I was doing, but they have a thousand times more resources yeah the only problem here is that i'm not in charge what if i was in charge yeah solve that problem anyway uh that's my little stump speech um and so uh they have pledged this is wild about ssi so they have pledged not to release a product until it has achieved a safe super intelligence. So they're going to raise so much money. Well, XAI explains why it is absolutely viable to have started late, provided you have the funds catch up quickly. And of course, SSI talent speaks for itself. SSI is also I would imagine a big NVIDIA customer. Again, I don't know that to be true. But what else might the money be going towards? Which brings this conversation back to chips. On one hand,
Starting point is 00:22:03 chips remain the gating factor to the easiest route to a frontier model on the other hand this is why deep seek matters they're they're not the only route to something that is at least competitive the debate on the value of an attempted export ban remains an open one anyway uh fantastic article absolutely wild i totally respect ssi saying we're not going to ship until we have we we've achieved our goal which is like the kind of approach you can only really take if you're Ilya and like Daniel Gross where you're basically saying yes we're going to need to raise billions of dollars you're going to just have to trust us that we're going to and just like bet on us that we're going to deliver that one of the things that every foundation like the the war, the foundation model wars right now, which will be studied eventually because it's creating this sort of foundational technology that presumably our entire society will be dependent on at some point.
Starting point is 00:22:54 It is so distracting for these companies and the teams and the investors when there's new models releasing every week. And you can imagine everybody at Anthropic last night was watching the Grok demo live, not working. Everybody at OpenAI was doing the same thing, right? So it's just wildly distracting to constantly be forced to ship and ship and ship when you can tell that they're able to make incredible progress internally without even getting user feedback, right?
Starting point is 00:23:24 They're basically,on's basically elon's basically saying hey i just need another 20 billion dollars and i'm going to make a much better model like yeah i yeah sure we'll make it available through x you know the xai x integration yeah but uh elon wants to solve the reeman hypothesis he is the customer and he has his eval and once it solves the reeman hypothesis he'll be happy yeah and and that's kind of it which is fascinating i think another hot take that we were discussing earlier was uh there's this big push around like the the model layer is commoditizing yeah uh and people say that as like a bear case yeah but i was wondering if you flip it around like people make tons of money in commodities all the time, right? Like oil, big oil,
Starting point is 00:24:07 like oil is a commodity. Still worth producing oil. Hugely valuable. There were massive companies built on it. There were small companies that just did little exploration and just had a little plot of land, a little track. You know, actual infrastructure technology providers. So I wonder if there's a world where, you know, intelligence is so valuable and LLMs and AI is so valuable that, you know, yeah, even if you're like the sixth best model, you can still sell the intelligence if it's at the frontier level to the market at a market clearing price that still generates profit above what the NVIDIA GPUs cost and you still reap the reward and the $30 billion valuation is completely justified in the same sense that, you know, Hey, yeah. If you, you know, you're like, Hey, I got a plot of land in
Starting point is 00:24:50 Texas that I'm going to pull some oil out of. Yeah. You don't have a monopoly on olive oil, but you still made money. Yeah. And so I wonder if that's how it would play out. The, you know, if LLMs, you know, replace a lot of work broadly, knowledge work specifically, it's easy to say, hey, these are trillion dollar markets that we're playing in. It makes total sense that there's five to 10 companies that are going to raise billions of dollars to go after that, right? Because the prize is so great, potential prize is so great that, you know, Ilya raising at 30, you know, and Daniel Gross raising at 30 billion posts is actually, you know, Ilya raising at 30, you know, and Daniel Gross raising at 30 billion
Starting point is 00:25:26 posts is actually, you know, it's, it's maybe it's, it's, it's, it's just ends up looking like a great investment of like, how do you potentially get a, you know, 50 X on, on, uh, on a multi billion dollar check? Like maybe that's how you do it how it is so crazy when you think about it that way like when you think about like uh crypto it's like a big new technology it a lot of people are into it it's just like a cool thing and yeah the market cap of all the crypto combined is around a trillion dollars social networking also like very valuable cool new technology market cap around a trillion dollars phones cool new technology market cap around a trillion dollars. Phones, cool new technology, market cap, around a trillion dollars. Electric cars, cool new technology, market cap, around a trillion dollars. Operating systems, Microsoft, around a trillion dollars. Cloud computing,
Starting point is 00:26:15 there's around a trillion dollars of value. And so it's like, in what world does AI and LLMs not wind up being around a trillion dollars of value i get that like it could go any direction and like how you slice that up it could be very equal it could be one winner take all and you know hey people are going to have their bull case for x or bull case for their open ai anthropic whoever but you know it just feels like like the stakes are definitely one trillion right yeah if we look out in the future uh even even if we, even if we're not being like super, super intelligence pilled and we're just thinking about like, yeah, like this is a new technology and it will be a consumer market or an enterprise market or any of them. Uh, like, will there be an opportunity to make a bunch of money? Probably. Yeah. So anyway, let's move on
Starting point is 00:26:58 to Yaxine. Uh, he is, uh, he's working for Elon over at X, a good friend of the show. He has been he's become a bit of an open AI hater. But let's hear him out and see. We got some poster on poster violence here. Oh, yeah. This is clearly a rune caricature or the same exact character. And, you know, you know, dancing around the data center. And so he's arguing that there's no moat with this. He says, moats are Silicon Valley headcanon, wishful thinking. They need it to exist.
Starting point is 00:27:33 They make you believe that you're doing the thing that would be impossible for others, all to justify the fundraise. When they say moat, what they're saying is proprietary monopoly. I'm here to tell you that the monopoly is a myth the proprietary monopoly that simply does not exist open ai could open source all their models and it wouldn't really harm their business existence you would only harm their ability to fundraise everyone should take peter teal more seriously just be different i mean what what a poster i mean he's he's layering on a bunch of things that people could potentially disagree with in the first few paragraphs uh which positions him well to then extrapolate on all those ideas now that people are kind of riled up a little bit totally so he says is capital emote in previous
Starting point is 00:28:14 times capital was emote uh but as technology progresses everything becomes cheaper skill labor are now being automated levels io can commandio can command a few O10M-valued companies, order-magnitude $10 million-valued companies, spawning them every three months for fun. He doesn't need to raise. I like that. We've got to put Levels in the cage, by the way. He would just be out in a day.
Starting point is 00:28:38 Yeah. Especially with the latest CodeGen slop spitting out PHP server code. It's free. It's free. The software i produce for myself on a daily basis would have taken a team of engineers now it's one hour in in between work and dinner it's post scarcity it's unprecedented yep we are at the point where the idea that capital as a moat is not being sold by the people who need the funds instead the idea is being pushed by the people with the sad, sad capital that has nowhere to go. What is the point of money when legions of engineers who actually just slow you down when you really only need 8X, H100, and two frogs to create outsized impact? Your real competitor is the end user writing their own
Starting point is 00:29:21 software. I used to work at Auth0. We shipped login as a service. It sold itself. Many Seattle homes were paid off by sales commissions on Auth0 contracts. It wasn't Okta. It wasn't PingOne. Our biggest competitor was our customers themselves writing their own login page.
Starting point is 00:29:38 We had to sell them on it not being a good way to spend their time. So he breaks down the serious AGI players, this is personal opinion: OpenAI? Nah, remember Yahoo. Okay, yeah, we know who you're pushing for here. The serious players: vertical robotics manufacturers and MuJoCo shops. I don't know what that means. Oh: DJI, Unitree, Tesla, and, uh, DeepSeek, Jane Street, Palantir, social media platforms, Meta, vertical AI companies, xAI, DeepSeek, Google.
Starting point is 00:30:12 Are your ML researchers wearing knee pads and a sweaty tank top plugging in Ethernet? If yes, you're going to make it. ETH Zurich, I will not elaborate. Distributed decentralized AI. Bootstrap companies with a lot of freedom, people building novel devices. Interesting. What can't be easily replicated?
Starting point is 00:30:32 Software used to cost money, and AI is just software. There is a limit on individual model intelligence that yields diminishing returns, and it is my belief that we will rapidly exhaust any room available. It is my belief that AGI will run on consumer hardware and any proprietary models will only be run as a convenience. Uh, what can't be easily
Starting point is 00:30:51 replicated is knees on the ground, soldering irons, the MuJoCo RL sauce, and godlike product managers yearning. Invest accordingly. This is financial advice for billionaires. This feels like basically 40 posts combined into one article. But one thing that's interesting: when we were talking earlier about intelligence being commodified, some people saying, oh, it's not investable, it's a commodity. If it's a commodity, especially at these prices, you're saying, hey, commodities actually are very valuable. There's a ridiculous amount of demand for them.
Starting point is 00:31:25 The hard part, what he's outlining here is that, you know, everybody knows that there's a big market for oil. Like you can see what the price is. If you can get a barrel of oil out of the ground, you can sell it at this sort of market clearing price for that grade of oil. Right. There's like, you know, oil that turns into jet fuel and whatever. But there's like it's a commodity.
Starting point is 00:31:44 It's sort of priced in the open market. There's fully a world where intelligence gets to that point too, and then the challenge is what Yacine is saying, which is, with AI, what can't be easily replicated is knees on the ground, soldering irons, etc. And so it's the same thing with oil: the challenge with getting oil out of the ground is you've got to get oil out of the ground, right? You need dudes with trucks to go drill for oil and then make sure the drill is running properly. Make sure you have water, make sure you have power. You know, the drill is breaking all the time. And so it doesn't seem like data centers will maybe be as much of a challenge to keep operational, but it's certainly not easy to operate these things highly efficiently, right?
Starting point is 00:32:31 Yeah. Interesting. I don't know. Yeah. If it's a commodity, where is the most valuable part of the commodity? The producer and the supplier producer. Yeah. But is that energy? Is that NVIDIA or is that the LLM deliverer? Right. It's an open question. Anyway, let's move on to some reactions on the timeline and then we'll give you a breakdown of Grok 3 versus OpenAI Deep Research. So Creatine Cycle, who's been on the show before, says,
Starting point is 00:33:02 Sorry, Autismos, it's time to grow some arms and social skills. And Jordy chimes in and says, getting diced continues to be underpriced. And he says, printing this out and hanging it on my wall in Pac Heights. Uh, and he just shows the Grok 3
Starting point is 00:33:17 reasoning data. It beat o3-mini-high reasoning. People were upset about this. Another post two slides later says they
Starting point is 00:33:28 omitted o3 from the chart in the livestream for some reason, so I added the numbers for you. But this was reasonable, because o3 isn't public and those benchmarks can't be verified yet, so I think it's understandable that they didn't include it. But then again, Grok 3 with reasoning isn't publicly available either, so people can't validate that eval. So there's a lot of questions over what counts for an eval that goes on the charts, but I think they both have fair arguments there. Yeah, they're like, we can validate our model; we cannot validate what you're doing because it's not available. Yep, we cooked. And so Sheel Mohnot gives a little context on the Grok Colossus: 200,000 GPUs
Starting point is 00:34:15 in Memphis, Tennessee, built in 214 days. I think SemiAnalysis and Dylan Patel have done a deeper dive here that we'll have to dig into. There's some really fascinating things going on here. Sheel says, why Memphis? They found an old Electrolux factory, added trailers for cooling, a quarter of the mobile cooling capacity of the US, and trailers for power generation. They use Tesla Megapacks to smooth out power fluctuations. The next training cluster will go from a quarter gigawatt to 1.2 gigawatts using GB200s from NVIDIA. There's a fascinating thing that leaked because of Llama. When Meta open sourced Llama 3, or some piece of that, they had a line, like a function, a piece of code in the training code that basically said, what was it?
Starting point is 00:35:06 It was like power plant not blow up. And so basically what it does is when you're training, you're doing all this complex math that's using a lot of electricity. And then all of a sudden there'll be like moments where you're maybe just like, okay, we're done doing the math. We're gonna like save the model for a second.
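If you want to picture what that "random math" patch is doing, here's a toy sketch (purely illustrative Python, not the actual Llama training code) of a training loop that burns throwaway compute during checkpoint pauses so the power draw never crashes to zero:

```python
import random

def dummy_work(size: int = 256) -> float:
    """Throwaway math that exists only to keep the hardware busy
    (a stand-in for the matmuls a real GPU burner would run)."""
    total = 0.0
    for _ in range(size):
        total += random.random() * random.random()
    return total

def training_loop(steps: int, checkpoint_every: int, smooth_power: bool):
    """Toy loop: 'power' is 1.0 while doing math, 0.0 while idle.

    During checkpoint saves the real math pauses; with smooth_power on,
    we burn cycles on dummy_work instead of letting the draw crash to zero.
    """
    power_trace = []
    for step in range(1, steps + 1):
        power_trace.append(1.0)           # real training math
        if step % checkpoint_every == 0:  # pause to save the model
            if smooth_power:
                dummy_work()              # keep the meters steady
                power_trace.append(1.0)
            else:
                power_trace.append(0.0)   # draw falls off a cliff
    return power_trace

# Without smoothing, the trace has hard drops to zero; with it, it's flat.
spiky = training_loop(steps=8, checkpoint_every=4, smooth_power=False)
flat = training_loop(steps=8, checkpoint_every=4, smooth_power=True)
print(min(spiky), min(flat))  # 0.0 1.0
```

The real version would issue large GPU matmuls instead of CPU arithmetic, but the shape of the fix is the same: never let the whole cluster go quiet at once.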
Starting point is 00:35:25 And so all of a sudden your power will just drop to zero. And if the power substation or the power plant is sending tons of electricity and then all of a sudden it doesn't get any, it can blow up or something like that, or it can be bad. And so they built a special piece of code that basically just has it do random math. Can you imagine the reason that got implemented? Oh yeah, these engineers are just tapping away, you know, vibe coding, and then
Starting point is 00:35:57 they just get a call, hey, what are you guys doing right now? Yeah, yeah, yeah. I mean, it would be crazy if there was a major power plant explosion. Yeah. And so just going back to Colossus, the cluster in Memphis. I think we talked about this last year: in many ways, the bull case for Elon building a foundation model company was always that he has the boots-on-the-ground experience to spin up heavy infrastructure and manufacturing facilities, you know, figuring out how to manufacture batteries at scale for Tesla in the United States. And so having the playbook and being able to pull talent from a company like Tesla or SpaceX and say, hey, we need to build this data center in 214 days? Unbelievable. Many people would say, you know, even linking this back
Starting point is 00:36:46 to LA: people are not going to have their homes rebuilt for four years, that's crazy, and they built the biggest cluster ever in 214 days. That's a great point. They just found an old factory and said, we're just doing this right now. Yeah, and that's really Elon's edge here for sure. Yeah, and it is so part of the culture at Tesla. I mean, they're making millions of Starlinks. So the CTO at Deterrence, who was at Tesla specifically working on their battery cell engineering, he's bringing that crazy ethos around speed, his name's Henry, and it's just amazing to witness. That's awesome. So, yeah, let's move on to wordgrammer. wordgrammer says, I'm feeling very similar to how I
Starting point is 00:37:30 felt six months ago. OpenAI did something groundbreaking, now two other labs have replicated it, soon the rest will follow, making it a pretty undifferentiated market. Um, yeah, we've talked about this a little bit. I mean, it's unclear. If OpenAI had not launched deep research and the reasoning model, would DeepSeek have been able to come up with it? Is the idea valuable, or is everyone kind of on the same path? What do you think? It's interesting. Because everybody's competing against many of the same benchmarks to show that they're competitive, they're all focusing on getting better at many of the same things,
Starting point is 00:38:12 when what we really want, and this is why SSI is interesting, is they're basically saying, we're not going to release anything publicly until it's what we want it to be. So we're not going to build based around these benchmarks. And I actually think the competition is good and bad. It's like a double-edged sword, right? It's good in that it's sort of pushing the pace and everybody's motivated to rapidly improve these models. But at the same time, it's not so great because we're getting models that consistently just look more and more like each other. And consumers are not even going to be able to notice the difference, so they're just going to probably default to what they already
Starting point is 00:38:49 use and love, or, you know, maybe you're using Grok in X. Well, here's an interesting twist on that. Paul Calcraft says Elon posted an early Grok 3 screenshot where he asked what Grok 3 thought of The Information, and it said it's garbage. And then Paul Calcraft asked the same question, what's your opinion on The Information, and it told him it's a solid outlet. And what's going on here is it's fine-tuning on the person who queries it, their timeline. And so when Elon prompts Grok 3, it knows Elon doesn't like The Information: let's give him an answer that confirms what he believes. And it's the same thing for Paul Calcraft. And so this is going to raise a very interesting question, where people will kind of be in LLM echo chambers almost, where they're like, oh yeah, it
Starting point is 00:39:44 already told me that, you know, microplastics aren't a big deal because I've been posting about that, or microplastics are the worst thing ever because I've been posting about that. Yeah, this is exactly what already happens on X broadly. The X echo chamber is super real, where, for me, very offhand, randomly, somebody will have like a Bluesky in their profile, and I'm like, oh, I haven't seen that in a while. But someone else is seeing that constantly, and they're probably using Bluesky, like, oh yeah, everybody I love uses Bluesky. Right. Um, yeah, it's
Starting point is 00:40:24 very interesting, and what's going to happen here is that the mainstream media is going to do these tests like what they did on TikTok and Instagram, where they'd be like, I followed a bunch of Nazi content and then it gave me more Nazi content, so it's a Nazi algorithm. They'll probably do the same thing where they'll go into these LLMs, talk to it for a really long time, essentially fine-tune it on the idea that they're crazy. And then it'll be like, it gave me crazy responses. And it's like, yeah, I get it. It probably shouldn't do that. But at the same time, you really twisted its arm, and it's not doing that for most
Starting point is 00:40:58 people. So I don't really have that much of a problem with it, as long as there's some guidelines here and, directionally, it doesn't automatically steer you towards some negative echo chamber. Anyway, wordgrammer has another banger post here, on a roll: Richard Sutton's bitter lesson died January 22 with DeepSeek, and it was born again on February 17th, less than a month later. So this is of course about scaling laws and Rich Sutton's bitter lesson, which we'll get into. Of course, the idea that algorithms, while important, are not as important as scaling and compute power. And so scale is all you need. Let's go to another post from Liron Shapira. He says, people in 2022: Elon can't actually keep these servers online. Elon now: the
Starting point is 00:41:47 bonus tab in the app takes you to the highest-benchmark frontier model. And so, yeah, real narrative violation for everyone that said Elon wasn't going to be able to keep X, or Twitter, online. He's not only doing that, but the reason you know it wasn't an issue is I don't have a memory of wanting to use Twitter or X and not being able to, and I think I would remember that for my whole life, right? It's such a daily habit, if you were going on there and just nothing was loading. Um, yeah, he really mogged everybody that was calling him out on that. Yeah. Well, Andrej Karpathy shared more evals because he got access to Grok 3. He says, on thinking: Grok
Starting point is 00:42:36 clearly has an around-state-of-the-art thinking model, the Think button, and did great out of the box on my Settlers of Catan question, which is: create a board game web page showing a hex grid, just like in the game Settlers of Catan. Each hex grid is numbered one to N, where N is the total number of hex tiles. Make it generic so one can change the number of rings using a slider. For example, in Catan the radius is three hexes. Single HTML page please. Pretty good. Few models get this reliably right. The top OpenAI thinking models, o1-pro, get it too, but all of DeepSeek R1, Gemini 2.0 Flash Thinking, and Claude do not. It did not solve his emoji mystery question, where he gave it a smiling face with an attached message hidden inside Unicode variation selectors, even when I gave it a strong hint on how to decode it
Starting point is 00:43:25 in the form of Rust code. This is like such a, I don't know if I could solve that, Andrej. I'm sorry. That is a very complicated eval, but it makes sense, I guess, and probably valuable. It should break that eventually. The most progress I've seen from this is DeepSeek R1, which once partially decoded the message. It solved a few tic-tac-toe boards he gave it with nice clean chain of thought. I uploaded the GPT-2 paper, I asked it a bunch of simple lookup questions, all worked great. And so he has a whole bunch of evals that he can kind of rip through, and I think that's a really cool way to do it. It's like, you know, do some web programming, build a website, like, solve tic-tac-toe,
Starting point is 00:44:01 you know, decode this mystery, do this riddle. And then he also asked, for an hour I'm assuming? Yeah, exactly, he said he played with it for like two hours. And so he did the same thing with deep search. He asked, what's with the upcoming Apple launch, any rumors? And he liked the answer there. Why is Palantir stock surging recently? He gave it a check. White Lotus season three: where was it filmed, and is it the same team as seasons one and two? What toothpaste does Bryan Johnson use? It didn't get Singles Inferno season four cast and where are they now, which, I don't even know what Inferno is, but, uh, Singles Inferno, I guess that's a show he watches, which I love.
Starting point is 00:44:35 And then, what speech-to-text program has Simon Willison mentioned he's using? So, um, he says, I did find some sharp edges here. The model doesn't seem to like to reference X as a source by default, interesting, though you can explicitly ask it to. A few times I caught it hallucinating URLs that don't exist. A few times it said factual things that I think are incorrect and it didn't provide a citation. It told me Kim Jong-su is still dating Kim Min-su of Singles Inferno season four, which surely is totally off, right? This is hilarious. Andrej, are you into Singles Inferno?
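As an aside on that emoji-mystery eval: the general trick of hiding a message inside Unicode variation selectors (this is a sketch of the technique, not Karpathy's exact puzzle) is to append one invisible codepoint per payload byte after a visible emoji. Bytes 0-15 map into the U+FE00..U+FE0F range and bytes 16-255 into U+E0100..U+E01EF, so the string renders as just the emoji:

```python
def encode(visible: str, payload: bytes) -> str:
    """Append payload bytes as invisible variation selectors."""
    out = [visible]
    for b in payload:
        out.append(chr(0xFE00 + b) if b < 16 else chr(0xE0100 + (b - 16)))
    return "".join(out)

def decode(text: str) -> bytes:
    """Recover the hidden bytes by reversing the mapping."""
    hidden = []
    for ch in text:
        cp = ord(ch)
        if 0xFE00 <= cp <= 0xFE0F:
            hidden.append(cp - 0xFE00)
        elif 0xE0100 <= cp <= 0xE01EF:
            hidden.append(cp - 0xE0100 + 16)
    return bytes(hidden)

stego = encode("\U0001F60A", b"hello")  # a smiley with a hidden message
print(len(stego), decode(stego))  # 6 b'hello'
```

A model has to notice the invisible codepoints and reverse the byte mapping to "solve" the puzzle, which is why it's such a brutal eval.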
Starting point is 00:45:10 I need to get up on this. This is good stuff. The impression I get of deep search is that it's approximately around where Perplexity's deep research offering is, which is great, but not yet at the level of OpenAI's deep research. Random LLM gotchas.
Starting point is 00:45:24 Sadly, the model's sense of humor does not seem to be obviously improved. This is a common LLM issue with humor capability and general mode collapse. Famously, 90% of the 1,000 outputs asking ChatGPT for a joke were repetitions of the same 25 jokes. Crazy.
Starting point is 00:45:41 Got them dead to rights. GPT-5, you're on notice. You better be funny. Even when prompted in more detail away from simple pun territory, example: give me a stand-up, uh, I'm not sure that it is state-of-the-art humor. Example generated joke: why did the chicken join a band? Because it had the drumsticks and wanted to be a cluck star. That's a real knee-slapper. They're honest.
Starting point is 00:46:12 The jokes that are from LLMs are so bad that sometimes they're very funny, because it's just like, wow. It will be amazing when they do get to the point, because I think the general belief is that we'll get there. Yeah, but think about having a comedian in your pocket that knows what you think is funny. Somebody will make a wrapper app that's just a big button, the laugh button, that you press and it just generates a new joke, and it just gets better and better, and it even hears your response to it, so it starts to learn on you. Yeah, that would be really good training data actually. Yeah, they could do that. Uh, okay, so let's move on to Grok 3 and OpenAI deep research. I got your analysis there. So I went to
Starting point is 00:46:58 both Grok 3 and OpenAI deep research and I asked it this really, really long prompt. I said, I'd like you to build me a definitive table of LLM models and break down their evolution, specifically their relative sizes. So I want to know, you know, Grok 3 came out, it uses 20K GPUs. I'm not exactly sure how many flops that is, but I would like to put all of these in the same terms, like they're all an equivalent number of flops. I want to know the progression of GPT-1 to 2 to 3 to 4 for the pre-training runs. And then I want to know the same thing that happened at, I said, Anthropic, it was transcribed incorrectly to entropic, which is wrong. Uh, and I want dates of when these were released. Then give me some uniform benchmark like Chatbot Arena or MMLU. Uh,
Starting point is 00:47:38 I really need you to go pull all this together. I'm like, come on, do this for me, bro. And then I'd like you to investigate how the bitter lesson and Moore's law are interacting right now. Essentially, the bitter lesson states that you always just want to throw more scale at the problem, but it's unclear to me how the benefits of scale are proportional to the capabilities of these models. So my basic thesis is that we're potentially hitting diminishing marginal returns; we're going to see fewer gains from increased scale. And so the real question is, Grok 3 is a 20K GPU run, they're planning to do a 100K GPU run, and does that 5X or 10X increase in compute power increase the model's final capability and usefulness by 1%? 10%? Does it double it? Does it 10X it? And that's what I'm
Starting point is 00:48:20 trying to predict. And I want to see these based on trend lines: what kind of results should I expect? And specifically, I want not just quantitative results, but qualitative results. Like, what do we expect a model trained on a 1-million-GPU cluster to be able to do? Is it just high IQ? Is that the best measure? How should we be measuring these things? And is there an evolution where, you know, we have no top, no max, do we just continue to push on? The final question is, does this continue to get better as we get to a billion GPUs, a trillion GPUs, a googol of GPUs? What are we getting for that? I guess the question is, is intelligence unlimited, even if we assume
Starting point is 00:49:01 that we're on the right algorithmic path? That's the main question. And so basically I just wanted it to dump out a ton of research and give me a bunch. The only thing that could handle that kind of prompt is your secretary, right? It's a lot of stuff. I mean, you just basically dumped, uh, yeah. But what's interesting is if you look at page two and then page six or something: one says GPT-3, 10,000 GPUs, and then GPT-4, 25,000 GPUs, and on the other one it says a thousand and then 20K. And so the big question here is on flops. The real standard metric is number of flops. And the jargon term for this is just using the order of magnitude of the number of flops. So you can think of GPT-3, you call that an E23 model because it had 3.14 times 10 to the 23rd flops.
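The E-notation shorthand is easy to reproduce yourself. A standard back-of-envelope rule from the scaling-law literature is training FLOPs ≈ 6 × parameters × training tokens; plugging in GPT-3's commonly cited figures (175B parameters, ~300B tokens) lands almost exactly on the 3.14e23 number quoted here. A quick sketch:

```python
import math

def train_flops(params: float, tokens: float) -> float:
    """Back-of-envelope training compute: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

def e_class(flops: float) -> str:
    """Label a run by the order of magnitude of its FLOPs, e.g. 'E23'."""
    return f"E{int(math.floor(math.log10(flops)))}"

# GPT-3's commonly cited figures: 175B parameters, ~300B training tokens.
c = train_flops(175e9, 300e9)
print(f"{c:.2e}", e_class(c))  # 3.15e+23 E23
```

The same two-line calculation is why "E25" vs "E26" for Grok 3 is a real disagreement: it hinges on estimates of GPU count, run length, and per-chip throughput, none of which are public.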
Starting point is 00:50:24 And so you call that an E23 model. GPT-4 was an E25 model, because it's not just more GPUs, it's also more training time and more powerful GPUs. And so you can't just say, oh, they used 5X the GPUs, because they might have run them longer, at more power, all these different things. So you really just want to do flops, because that's the most standard. And so DeepMind's Chinchilla is an E23 model, Llama 70B is an E24 model, and Grok 3 is an E26 model. And I don't know how I missed Chinchilla. Oh, Chinchilla's a big one, because that was the paper that defined the scaling laws. So interesting. People talk about the pre-training scaling law, they call it the Chinchilla scaling laws, because it came from the Chinchilla
Starting point is 00:51:10 paper. Got it. And so in here, now this is where it gets weird, because ChatGPT claims that Grok 3 is an E26 model, but Grok 3 claims that Grok 3 is an E25 model. And so I don't know who's hallucinating here, I don't know who's more accurate. They could both be hallucinating, to be clear. No one is necessarily right. And so Grok 3 claims that Grok 3 is less flop-intensive than GPT-4, but GPT-4 claims that Grok 3 is more powerful than GPT-4. And so they're both gassing each other up, I guess, in a weird way. I think OpenAI is more accurate here, but it's very funny. It's all very odd, and I don't have time to fact-check all of this, so I'm kind of just vibe-interpreting it. But let's read some of the analysis here from the bitter lesson.
Starting point is 00:52:12 The bitter lesson argues that AI progress relies, and this is from Grok 3, and then we'll read the ChatGPT version. The bitter lesson argues that AI relies on scaling compute and data over bespoke algorithms. LLMs embody this. GPT-3's success came from brute-force scale, not architectural breakthroughs. It was a very simple algorithm; they just gave it the entire web and trained it a lot, with a lot of GPUs. It was the first big training run, basically. However, your thesis about diminishing returns challenges this: does more compute always yield proportional gains? Moore's law, the law that says transistor density doubles every two years, has slowed, with GPU performance now scaling via parallelism. And so we are scaling compute, but we're doing it by just getting more and more GPUs. LLM training outpaces hardware gains, relying on cluster size
Starting point is 00:53:03 rather than per-chip efficiency. So we're not just doubling the training run every year, we're like 10Xing it, because we're just building these monster data centers. So, scaling laws: Kaplan and Hoffmann show performance scales predictably with compute, parameters, and data. This implies a 10X compute increase yields a 25 to 60% loss reduction, not 10X. MMLU gains reflect this: GPT-3 to GPT-4 is a 160X compute jump for a 50% score increase.
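To put numbers on "a 10X compute increase yields a 25 to 60% loss reduction, not 10X": the Chinchilla paper (Hoffmann et al., 2022) fits loss as L(N, D) = E + A/N^0.34 + B/D^0.28, with fitted constants E ≈ 1.69, A ≈ 406.4, B ≈ 410.7, and finds compute-optimal training uses roughly 20 tokens per parameter. A sketch using those published fits (the 20-tokens-per-parameter sizing is an approximation) shows each successive 10X in compute buying a smaller absolute loss drop:

```python
def chinchilla_loss(params: float, tokens: float) -> float:
    """Fitted parametric loss from Hoffmann et al. (2022)."""
    E, A, B = 1.69, 406.4, 410.7
    return E + A / params**0.34 + B / tokens**0.28

def optimal_loss(compute: float) -> float:
    """Loss at compute-optimal sizing: C ~= 6*N*D with D ~= 20*N."""
    params = (compute / (6 * 20)) ** 0.5
    return chinchilla_loss(params, 20 * params)

# Losses for E23, E24, E25, E26-scale training runs.
losses = [optimal_loss(10**e) for e in (23, 24, 25, 26)]
drops = [a - b for a, b in zip(losses, losses[1:])]
print([round(l, 3) for l in losses])

# Each 10X in compute lowers loss, but by less each time:
assert all(x > y for x, y in zip(losses, losses[1:]))
assert all(x > y for x, y in zip(drops, drops[1:]))
```

This is exactly the diminishing-returns shape the hosts are circling: the scaling law keeps holding, it just pays out less per order of magnitude.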
Starting point is 00:53:36 I still don't really know how to interpret all this. We're gonna have some guests on to break it down. Yeah, and so, I don't know, it's interesting. The main thing is that I think when most people play with this, they're like, it's great, it's good, but it's not a complete step change from what else I'm seeing online. No, the big headline takeaway from last night is that xAI is catching up on raw model capabilities in a very short period of time. In terms of product usefulness, it doesn't seem to be there yet, right? Because even you're seeing, okay, well, you trained on all of my posts, but I actually don't even necessarily like that that much. And so they seem to
Starting point is 00:54:26 be on a collision course with OpenAI from a capability standpoint, especially as they get into these next runs. But again, this is going back to my earlier point. I look at all this, you know, Elon hitting these evals, but then simultaneously still wanting to spend $100 billion on OpenAI when he could just spend that money on xAI. And again, we don't know if he's doing it to throw a wrench in the process, but it seems like he would love to be chairman of OpenAI, is my read of the situation. This is interesting.
Starting point is 00:55:14 The ChatGPT deep research is, I think, much more readable as a research paper. And again, that's them innovating at the product level. And what Ben Thompson said is he's not sure if they just have a better model that's producing better outputs, or they're just refining the actual product layer more. Yeah. So they say increasing model size and compute generally improves performance, but with diminishing returns.
Starting point is 00:55:35 And so that's the real question here: if we 100X the models like we're planning to, and it just gets a little bit better, it might not actually be able to do real work still, because it might still break down and hallucinate a little bit. Yeah, we're all hoping for the model that just never hallucinates
Starting point is 00:55:56 ever, and can do a ton of stuff, and hold everything in memory, which may not ever come. Maybe. I think we will get there, it's just the question of, what if it's a thousand orders of magnitude away? Like, the scaling law would hold, the scaling law would be correct, but it would take a long time, you'd have to build, like, a Dyson sphere for that, basically. Um, early work by OpenAI showed that large language model loss follows a power law. Performance improves predictably as model parameters and data scale up. And this was that picture of, like, GPT-3 to GPT-4: more parameters, more data. And so everyone was going bigger, bigger, bigger. For example,
Starting point is 00:56:34 going from GPT-3, 175 billion parameters, to models like Gopher (280B) and Megatron-Turing (530B) improved benchmarks, but not proportionally to the huge jump in compute required. DeepMind's 280B Gopher outperformed GPT-3 on knowledge tasks, reading comprehension, and fact-checking, yet it saw little gain in logical thinking or common-sense tasks, despite 60% more compute. This suggests some capabilities plateau unless new approaches are used, and that's the reasoning model, right? And so diminishing returns are evident on many benchmarks. As models grow, metrics like MMLU, a broad knowledge test, improve, but approach an asymptote near human performance. And that's
Starting point is 00:57:16 frustrating, because I would want linear progress that just blows through human performance, but really we're kind of, maybe we need new evals, I don't know. GPT-3 achieved only around 50% on MMLU, well below the 90% expert human level. So that's the human-level benchmark. Gopher did 60%. Chinchilla did 67%. GPT-4 jumped to 86%, nearing human experts. However, GPT-4's training used roughly two orders of magnitude more compute than GPT-3 for that last 30-to-40-point gain. So it's still very important, but it's just, yeah, diminishing. Each additional few points towards 90% is increasingly expensive in compute. Similarly, coding and math benchmarks saw massive leaps from GPT-3 to GPT-4, but further gains beyond GPT-4 are smaller. In xAI's Grok 3, pushing to an unprecedented 200K GPU cluster only yielded moderate improvements over the prior state of the art.
Starting point is 00:58:17 Hallucination. They didn't use all 200K GPUs, and here it's saying that they did. Well, yeah, if you look, OpenAI specifically says in their research, up to 200,000 NVIDIA GPUs, 100 to 200K GPUs over four months, but we know that's not true. Yeah. And so it's a mess. I don't know. We're close to something cool. I like it.
Starting point is 00:58:46 But we've got to step on it. We've compared their models. Should we talk about their dynamic? Yeah, absolutely. Let's move on to the Wall Street Journal. Uh,
Starting point is 00:58:53 the inside story of how Altman and Musk went from friends to bitter enemies. Kick us off. On the first full day of the second Trump presidency, Elon Musk was in the White House complex when he got word that his nemesis was about to hold a press conference with the president. This is amazing. Amazing first sentence here, I don't know who's writing this. He turned on the television and watched OpenAI's chief executive Sam Altman and a beaming Donald Trump tout a $500 billion investment in AI infrastructure called Stargate. And this is a hilarious dynamic, if it's actually true, where he's turning on the TV.
Starting point is 00:59:28 He's got X open on his phone. He's just shaking his head. Like, one, Sam is an absolute dog for even making this happen. You've got to be pretty brave to just go into the White House, the lion's den. Elon's got like six of his kids running around, they know you're a target. Um, that's great. So props for even making that happen. Masa, I'm sure, was pulling some magic too. So, despite having rarely left the president's side over the preceding few months, Musk was blindsided by
Starting point is 01:00:02 the announcement, according to people familiar with the matter. Musk fumed to aides and allies about the announcement, claiming Stargate's backers didn't have the money they needed. Which is true, right? It sort of came out that they didn't have it. It was more of like a roadshow, you know, SPAC-announcement-type thing. It was Altman's success navigating Trump world, via a carefully coordinated series of recent meetings in Palm Beach and phone calls with the White House, while keeping the plan secret from the president's first buddy. First buddy. Yeah, buddy cop. They're going buddy cop mode. Buddy cop mode.
Starting point is 01:00:35 But Trump's got a lot of buddies. Altman and Musk co-founded OpenAI in 2015, but their relationship soured when Musk left in 2018 following a power struggle. It worsened when musk responded to the launch of chat gpt by launching his own rival startup xai this week the feud went nuclear when musk followed the stargate unveiling with his own bombshell a hostile 97.4 billion dollar bid for the assets of the non-profit that controls open ai a decade after joining forces they are now fighting for control of the very thing that brought them together in one of the highest stakes and most personal fights in recent business history the outcome could determine everything from the future of a world-changing technology to who will help
Starting point is 01:01:13 set the nation's technology agenda with the new president. Interesting. This article is based on conversations with more than a dozen people familiar with Altman and Musk's relationship over the years, as well as OpenAI and Musk's business and political decisions. We got to go back and watch the Sam Altman, Elon Musk podcast. Have you seen this? They had their own show together? Y Combinator had a series of interviews on their YouTube channel, and Sam went and conducted a number of interviews.
Starting point is 01:01:42 And one of them was with Elon. And so they're sitting in the Tesla factory, just chopping it up about entrepreneurship and wisdom. And it's just like, it's crazy to see. I'll continue. So in many ways, Sam Altman, 39, and Elon Musk, 53, couldn't be more different. Where Musk was beaten up and verbally abused as a child, Altman was a teacher's pet whose parents routinely told him he could be whatever he wanted to be. Where Musk was often abrasive, Altman tended to tell people what they wanted to hear. And while Musk was an engineer, steeping himself in the details of rocket and battery design, Altman is a technology-obsessed intellectual, reading widely across philosophy, science, and literature and penning essays on
Starting point is 01:02:25 how society should organize itself. But both have strikingly similar tastes for power. Let's go. Taste for power. That should be one of our hiring metrics. Yes. John, what do you think this guy's taste for power is? You think he's real hungry? Yeah. I mean, if you're in here, you can tell Ben's got a real taste for power. Absolutely. Yeah, if you're in an interview and somebody asks you, where do you want to be in 10 years, and you just say, supremely powerful, I think you get the job. I think you get the job. Oh yeah. That's the video. Look at this. That doesn't even feel that long ago. It wasn't that long ago. The cinematography was not incredible, but I think it's probably 2016 or something. The Andrew Tate...
Starting point is 01:03:12 Yeah, early 2015. Wow. Yeah, that is wild, seeing them hang out. Just bros, just guys being dudes, doing a pod together. You'd love to see it. It's too bad. I mean, they keep coming back to the fact that mom and dad are fighting, you know. Everyone picks a side, but ultimately these are Americans, they're building technology. I love technology. I love America. Yeah, I wish they could just be on the same team and build something awesome together. I hope that they can have a reconciliation. That's my biggest hope. Well, and so for years, the millennial Altman looked up to the Gen X Musk as a hero, a real-life Tony Stark, who provided a counterexample to the country's technological stagnation that Altman railed against when he was president of the startup
Starting point is 01:04:00 accelerator Y Combinator. Altman met Musk years earlier when Y Combinator partner Jeff Ralston introduced them. Oh, I know Ralston pretty well. Oh, they misspelled his name here. And helped arrange for Altman to tour Musk's SpaceX rocket factory. Interesting. Altman's time leading Y Combinator from 2014 to 2019 put him at the epicenter of power in Silicon Valley. He became known as a fixer with an unrivaled Rolodex who could call in favors for the startups he invested in or punish investors who crossed them. This is 100% true. He was great at what he did during this time. He was fantastic. His special talent was raising money,
Starting point is 01:04:39 which he would do by arriving in his signature uniform of jeans and sneakers, curling his small frame up cross-legged in a conference room chair, and unspooling a vision so grandiose, compelling, and earnest that it often seemed like investors were powerless to keep from funding his projects. It's great. I mean, great storyteller, great fundraiser. And what's interesting is that, yeah, he raised a lot of money for his projects, but he was also an angel investor who got tons of deals done for other people. He set other people up with funds. He was marshalling capital all over the place.
Starting point is 01:05:13 I had heard that he was able to invest in Stripe at a $700,000 valuation. I don't know. It was like 10 on 700K or something. I think he's a billionaire just from Stripe. Yeah. Basically. Somehow. I don't know.
Starting point is 01:05:27 Yeah. But I mean, yeah. Phenomenal investor. So anyways, for anyone listening, if you want to be a billionaire, just put 10K into the next Stripe. Yeah. It's not. Yeah.
Starting point is 01:05:36 That's a good advert. It's actually, people tend to make it really complicated. Yeah. Why are you overcomplicating it? Yeah. Think, oh, it takes years. Exactly. And all this stuff, all this hard work.
Starting point is 01:05:43 No, just take 10 grand and just put it into the next Stripe. Yeah. And I mean, he's even doing it more recently with Oklo, that nuclear startup, which is public, and it's a SPAC, and it's a successful SPAC. The stock's way up. What a narrative. Like, if you see people talking about it on X, the company, they call it Sam Altman's nuclear startup. Yeah. So that is probably the dominant narrative causing it to pump. Yeah. I actually have to check. I own a little bit of it, you know. Oh, you... oh no. Okay, not financial advice here. We are going to check on Public. Yeah, I am down four percent today. So... oh, it's pulling back. It had to. Well, you have to make up your own mind. Go on Public, check it out, and see if it's for you. Yeah. Maybe. Anyway, not that
Starting point is 01:06:35 we certainly do not recommend... No, we don't recommend individual stocks, but we do recommend becoming a multi-strategy, multi-stage, large asset management firm. Yeah. And running your business through Public. Yeah. Become a long-short equity hedge fund. There you go. And it all starts with Public. In early 2015, Musk and Altman began having regular dinners each Wednesday in the Bay Area. Their conversations tended toward the apocalyptic: how the world might end, how they might prepare for it, to where they might have to flee.
Starting point is 01:07:07 A likely cause, they agreed, would be artificial intelligence that grows smarter than humans and impossible to control. That May, Altman suggested they create a Manhattan Project to develop artificial general intelligence, or AGI, that is as smart as humans at most tasks. They wanted to ensure Google, which had a huge lead in developing the technology, didn't end up deciding what it would mean for the human race. Such a fascinating solution. You know, like, there is another world where you're like, we're just going to try and use military might to actually ban the development of this technology. Yeah. Like, you could do that. I wouldn't advocate for that at all. And I think AGI is like,
Starting point is 01:07:46 I think the whole history of this stuff is going very well. I'm very satisfied with how it's going. But, you know, there is another world where you're like, hey, yeah, like, there's this technology, and like,
Starting point is 01:07:55 we need to, you know, steer people away from development of this, and ban this, and control this, and then have international coalitions and, you know, military might to back it up. Like, there are things you can't build. You know, dark web drug marketplaces are a good example. It's worth noting that Sam Altman and Musk had these meetings, and then Altman decided that he was going to get really into building bunkers. Yeah. Sort of doomsday
Starting point is 01:08:22 you know, setup. I would love to have been a fly on the wall for those conversations. And so they joined forces. They raised up to a billion dollars. Musk pledged to supply the lion's share of the money, and they would lead it as co-chairmen. The co-CEO thing is always wrong. Yeah, they should have figured that out early on. It's very clear that they didn't play it out, like, okay, what if this is actually a trillion-dollar opportunity? Who will want to run this? And they were both like, well, obviously it'll be me. There's no question. Their relationship began to disintegrate in 2017. Pretty quickly, I mean: 2015 they're having dinners, 2017 they started disintegrating,
Starting point is 01:09:05 after OpenAI researchers realized they would need far more money than a nonprofit could raise to develop advanced AGI. We talked about this, how if you want to summon the AI god in a box, you got to be a capitalist. It's not happening in a nonprofit. You're not just writing some clever algorithm with some altruistic mission. Although we are interested in taking PETA private. Exactly. There's a lot of stuff. Taking it private. No, it's the for-profit conversion.
Starting point is 01:09:30 The for-profit conversion. For-profit conversion. Then it'll be private, and then we'll take it public. And then we'll probably take it private again. As the private equity guys do. Thoma Bravo will take it public, take it private. Yep, yep, yep. You know the post-it document.
Starting point is 01:09:44 Yeah. That's great. According to one of their emails, Elon demanded majority control and to be CEO. Altman's successful move to block his mentor would mark the beginning of the rupture. He convinced another co-founder, Greg Brockman, to back him over Musk.
Starting point is 01:09:58 Brockman reeled in OpenAI chief scientist Ilya Sutskever to also back Altman. Brockman and Sutskever wrote in an email to Musk that since OpenAI was founded to avoid an AI dictatorship, it seemed like a bad idea to create a structure where you would become a dictator if you chose to. Within hours, Musk wrote back that this is the final straw. By early 2018, he had left the company and Altman took over leadership. This was very under-discussed at the time because OpenAI was such just a research lab
Starting point is 01:10:27 and they were doing little things here and there. This was definitely not in the news in the same way. Like, 2018, people were talking about SpaceX and Tesla mostly. So over the next few years, they focused on research. In 2022, they released ChatGPT. This became huge. It turned out to be one of the most successful and transformative consumer technology products of the century, in the company of the iPhone,
Starting point is 01:10:52 Facebook, and TikTok. As shocked as the rest of the world that AI had gone mainstream, and upset that he wasn't a part of it, Musk began publicly criticizing OpenAI for moving too fast and not taking safety seriously. He signed an open letter calling for a six-month pause on AI development, and within a few months launched... It's funny, a lot of people signed similar letters calling for a pause on new podcast formation. Oh yeah. Of course, that didn't get through. Nope. You can't stop it. The arrow of progress moves but one way. Yeah. In 2024, he attacked Altman in a new venue: court. After suing OpenAI and its CEO that February, he withdrew the suit in June, refiled it in August, and amended it in November. They've been going back and forth in the
Starting point is 01:11:37 courts for a long time. Musk's lawyers declared the perfidy and deceit are of Shakespearean proportions. You know, you gotta pay top dollar for that. Oh yeah, that can hit. That's not less than a thousand dollars an hour. That's a five-thousand-dollar-an-hour line. That's 10K, potentially. An LLM could never. Yeah. And so Altman said Musk was bitter that he left before the company succeeded. As Musk's legal attacks escalated, Altman watched with growing alarm as Musk grew closer and closer to Donald Trump, campaigning by his side and spending hundreds of millions of dollars to support him. You want to continue?
Starting point is 01:12:15 Yeah, I mean, there's lines from Altman interviews where he's like, you know, I would hope that the president wouldn't sort of pick favorites when we're all trying to develop this transformative technology. And to Trump's credit, he seemingly hasn't picked favorites, specifically within AI. He obviously has a good relationship with Elon, but we haven't seen him go out and do any sort of press conference for xAI or the Memphis data center. It's such a narrative violation. Everyone thinks Trump is going to be super corrupt with this stuff,
Starting point is 01:12:49 and he's just like, you know what, he's having his cash bonanza. Yeah, our listeners know that. Yeah, he's a deal guy. Deal guy. Anyway. Altman first mentioned Stargate to OpenAI's board in 2023 as a way to vastly increase the computing power his company could tap to develop and operate AI. He originally brought the idea to Microsoft, asking it to invest upward of $100 billion. But in the wake of an episode in 2023, when Altman was ousted from the CEO perch for five days, the tech giant balked. It says, get it together, guys. We're not putting up a hundred billion for a guy who can't hold on to his job. He gets upset. It's so funny, there's so much drama here. Altman soon found partners. One was SoftBank. Altman had known him since his Y
Starting point is 01:13:37 Combinator days. Second was Larry Ellison, longtime friend of Musk, who was hung out to dry when xAI pulled out of a Texas data center project that Ellison's company Oracle was working on. Altman agreed OpenAI would take it over. The project grew into the foundation for Stargate. I didn't know that. Interesting. That it had originally been an xAI project, but then I guess Musk wanted to do the data center himself and own the whole stack. Isn't Ellison heavily invested in X? I don't know. Yeah, I think so. I mean, they're all super conflicted, and, you know, it's all just like... Not heavily? Yeah. So I think he put in at least a billion. Okay, look that up, and I'm gonna keep reading. In December,
Starting point is 01:14:20 Masayoshi Son played golf with the president-elect at Mar-a-Lago and announced his intention to invest $100 billion in U.S. infrastructure projects alongside Trump and Lutnick. Their press conference effectively previewed Stargate without making any of the details public, which ensured Musk still didn't know about OpenAI's involvement. Here's a great AI overview, a Gemini overview. Okay. Quoting, you know, The Washington Post as a source: Larry Ellison, co-founder of Oracle, invested one billion dollars in X. However, as of September 2024, he lost $720 million of his investment, according to The Washington Post. Which, clearly, he didn't lose his investment. He had a paper loss. And then it was back up. Maybe. But now it's back
Starting point is 01:15:06 up. Yeah. So, very odd, and I have to put The Washington Post in the truth zone. And so Altman goes to the inauguration festivities. He doesn't sit with the other tech CEOs alongside Musk. The real killers were up there, with Zuck and Bezos and Musk, up in kind of a balcony area. And then Altman was down with Theo Von and Alexandr Wang. Yeah, it was a bunch of cool little micro crews. I think Jake Paul was over there for some reason. The next day, Altman and his partners arrived at the White House, where they more fully explained their plans for Stargate to Trump. Trump told the group he wanted to go ahead with the announcement. The new president loved that
Starting point is 01:15:49 they were aiming to invest $500 billion during his term, a number sure to make headlines. It makes sense. He wants a lot of jobs, wants a lot of economic growth, spend the money in America. And so Musk gets upset. He's fuming to aides about how the partners didn't really have the funding lined up for the project. He called it fake on X. Musk was already plotting a counter move and had been considering making a bid for the nonprofit. Musk said he was inspired to make the bid because OpenAI was in the midst of becoming
Starting point is 01:16:18 a for-profit company. And he believed Altman planned to undervalue the asset of the nonprofit, which would become an independent charity with a stake in the for-profit. But Musk's more primal message was for investors: let's go to war with Sam Altman. Altman was at the Paris AI Summit when news of the bid broke. And so he got, like, TMZ'd a little bit, with, you know, the reporter saying, hey, what do you think about this? And he's like, oh my god, I guess I can't deal with this. Basically, he was saying, I just want him to stop really bad. That was the immediate take. And so he said, OpenAI is not for sale, and the board has unanimously rejected Mr. Musk's latest attempt to disrupt his competition,
Starting point is 01:16:56 said Bret Taylor, chairman of OpenAI's board. I thought Bret Taylor's building an AI company. Separately, I guess he is. I think he's probably doing both. Any potential reorganization of OpenAI will strengthen our nonprofit and its mission to ensure AGI benefits all of humanity. OpenAI's rejection comes as no surprise, says Musk's lawyer, Marc Toberoff. Musk had said he wanted to save the company from the dangerous direction in which his co-founder had taken it. It's time for OpenAI to return to the open-source,
Starting point is 01:17:36 safety-focused force for good it once was, he pronounced. We will make sure that happens. Altman responded with his signature brand of nice-guy savagery: probably his whole life is from a position of insecurity, he said on Bloomberg TV. I feel for the guy. I don't think he's a happy person. I do feel for him. Wow. Like, it's wild, because I think Elon would tell you, I don't want to be happy, so don't use that metric to try to judge me. His motivation is to put a data center on Mars. Yeah, that's true. Anyway. Absolutely wild.
Starting point is 01:18:04 Well, we got some breaking news. Thank you to Ben for surfacing it. From Perplexity. Okay. They've been in the timeline recently because they're fighting it out. Everyone's in the trenches. Perplexity's CEO likes to use Sam Altman's replies as a marketing engine for Perplexity. It's seemingly his strategy now. So Perplexity says: today we're open-sourcing R1 1776, cool name, gotta give him credit for that, a version of the DeepSeek R1 model that's been post-trained to provide uncensored, unbiased, and factual information. To keep our model uncensored on sensitive topics, we created a diverse, multilingual evaluation set of a thousand examples using human annotators and specially designed LLM
Starting point is 01:18:45 judges. We compared the frequency of censorship in the original R1 and state-of-the-art LLMs to R1 1776. You can see a chart; Ben, maybe you can pull it up, or we can do it in post. We also ensured the model's math and reasoning abilities remained intact after the uncensoring process. Benchmark evaluations showed it performed on par with the base R1 model, indicating that uncensoring had no impact on core reasoning capabilities. That's cool. I mean, hot take: I don't really care about censorship in LLMs all that much, because how many times are you doing research on the Tiananmen Square Massacre? Like, you know,
Starting point is 01:19:25 it's like, yeah, most of the time I'm like, make this recipe list, or pull some GPU data together, or something that wouldn't even be censored either way. But it's cool that they did it. And 1776, obviously very American-branded, very American dynamism coded, and probably a fun marketing stunt. And it's good, because there was a lot of, is DeepSeek secretly bad and poisoning it? So I like that they did the work to clean it up. I think that's cool. So I think Perplexity can be very useful. Yep. I don't use it a ton. I use it from time to time. I find it can be fantastic.
Starting point is 01:20:08 It's just that the friction of opening Perplexity and searching ends up being high. For a lot of this stuff it's faster to just open Google search, and I got the information I wanted. Even though, like, I asked, how much did Larry Ellison invest in X? And Gemini botched it and gave this highly politicized answer, but I still got the data
Starting point is 01:20:30 that I wanted. So I think perplexity would have done a much better job of that. I should probably just like start, you know, using it more. But overall, like their product strategy right now feels very reactionary. Like it seems like the CEO is sort of like watching what other people are doing, figuring out how to make it about them, right? Hey, here we have our like we basically made R1 into a better consumer product for Americans.
Starting point is 01:20:56 Look at us. And he's launching their sort of deep research competitor, which again, even Karpathy was saying not quite on par in terms of how people are using deep research broadly today. So yeah, overall, you know, just focusing your messaging
Starting point is 01:21:12 on your top competitor, your former boss's reply section. I'm not super, super bullish on that strategy. What's interesting is Perplexity does have another feature, like a tab, that I actually think is really cool, and I was super bullish on, but then I didn't wind up using all that much. It's an algorithmic feed of news stories that are AI-generated. So, like, the top story for me is Grok 3, which obviously I'm interested in talking about. And then you click
Starting point is 01:21:42 on it, and it shows you a bunch of sources and key features. And it's very nicely organized, and it's not super ad-riddled and stuff. And I thought this was really cool. But at the same time, I would rather experience the Grok 3 launch on X, in the messy timeline.
Starting point is 01:22:00 And so I haven't been using that as much as I thought. And we were talking about this yesterday. Like, is there room for a new Hacker News or a new Techmeme? And Perplexity kind of built it. I think they did a great job on product execution in that tab. I think it's quite good. But I think that in 2025, people want a feed that is more chaotic, more combative, more comedic. Entertaining. Entertaining, exactly. So I want to see the actual live stream that I can just go watch, the definitive source. I want to see the community notes. I want to see Karpathy, and then I want to
Starting point is 01:22:38 see Growing Daniel meme it, and I want to see Roon respond, and I want to see Yacine respond. And I like getting my news that way, honestly, more than, hey, there's this sanitized AI summary. That's helpful, but it's not as fun. Yeah, it's interesting. So I'm looking at the free productivity charts right now; a lot of these AI apps are under productivity. So Perplexity is at 26,
Starting point is 01:23:05 Grok's at two, ChatGPT is at one. And so this is just a snapshot.
Starting point is 01:23:12 Where's DeepSeek? I thought they were going to destroy everything. DeepSeek is number three. Okay. And then perplexity is down at number 26. Okay.
Starting point is 01:23:20 Behind things like HP Smart, their printer. No, no, no. The Ringtones maker, the Ring app, random VPNs. Speechify is actually ranking quite a bit higher. My boy. Have you met Cliff? The founder of Speechify?
Starting point is 01:23:37 Absolute dog. Yeah, you had mentioned that. Beast. Beast on the bench press. Perplexity is 26. They have 127 rankings. Chat on AI, which is basically an OpenAI alternative, has 194,000 rankings and is only at 28. This is, like, a seemingly no-name,
Starting point is 01:23:55 you know, sort of the most blatant wrapper that you can think of. So, I don't know. I don't see this model getting them into the top five. So to me, if they're launching R1 1776, do consumers in the App Store actually care about that? Because if you're competing with Google, you need to not be worried about how cool X is going to think your product releases are, and more so how you go from 26 on the charts to top three, right, if you actually want to be competitive. He clearly wants to compete with Sam; every single day he's responding to Sam. So yeah, I think this launch is super cool. I definitely want to play around with it. But, you know, at the end... yeah, I've honestly seen the same thing, oddly, in the nicotine pouch world, where I've seen people be like, oh, I'm launching a new nicotine pouch with an American flag on it. And, like,
Starting point is 01:24:56 okay, it'll get, you know, a thousand likes on X, and then it's like, okay, do you have $10 million to spend with 7-Eleven, Walmart, and, you know, QuikTrip to actually get your product in stores? The thing that actually drives the adoption loop is completely separate from popularity. Totally. And it's just, yeah, it's interesting. I wonder... Perplexity, I think, is cool, because they're not just doing the vanilla, we're-just-doing-another-chatbot thing. There's already 17 of those that have, like, billions of dollars. At least they're trying a different product instantiation. I think that is cool. But yeah, I do wonder how they can get just more... how can I fit it into my life, and how it can really get to
Starting point is 01:25:45 you know, solid traction. And yeah, I mean, maybe speed. You mentioned speed. Like, if it was faster than Google, and it was something that I could make my default search in Safari on iOS somehow, I might be down for that. The challenge: if you're sitting in Google Chrome working, it's always going to be faster to hit command and immediately start searching. Now, if Aravind, the CEO of Perplexity, made a compelling case on X that I should go into my Chrome tab and reroute the default Chrome search to Perplexity, and showed me how to do that, I might do that and try it. See how it goes. But it has to be a faster product than Google. And Google can continue to make that super hard. Or they could just say you're not allowed to build competitive products. And again, it might not be, like, an actual consumer viral growth strategy. Like, it might not.
Starting point is 01:26:38 Yeah. Anyway, should we wrap up Grok? I think we did a full deep dive. Yeah, that was great. Now, you know... where are we on time? We're at 12:20, an hour and a half in. Okay. Well, let's go through some timeline. We got a bunch of good stuff. By the way, I'm loving the new layout. Yeah.
Starting point is 01:26:55 It's great. Yeah. Thank you for the comments on it. We're trying to make the show better every single day in different ways. Sometimes we wake up with a bad sleep score, but we're still... I actually had a... That's a good question. How's the sleep score? I had a rough night. I don't know. I went to bed a little late. I'm sure I'm getting docked. Let's see. I have to give a little anecdote. 82. I got destroyed. I got 83. 642. Can you imagine if
Starting point is 01:27:22 they were just in the background listening, adjusting them to try to make us, like, super competitive? Wait, what'd you get? I got 83. 83? You beat me. I beat you by one. No, but... so I made a mistake this morning. I got up at five. Yeah. My Eight Sleep warmed my bed, yep, at like five, you know, whatever, so that I was sort of waking up naturally. Yeah. And then at five,
Starting point is 01:27:51 And then I just got out of bed and I left. And so I was in the car and she's texting me being like, please turn off your alarm. Which is funny because normally people are like, turn off your alarm because you're still sleeping. Yeah. But turning off your alarm because you're already out. Already in the car.
Starting point is 01:28:09 That's great. But it's connected via Wi-Fi, so I was able to just turn it off. I mean, for what it's worth, I think the Eight Sleep still got me to a place where, even though I didn't have the best night's sleep, it went a little bit further.
Starting point is 01:28:28 Fantastic. I'm ready to lock in tonight. I started that point by saying we actually are genuinely focused on how do we improve the format of the show every single day. There's the format, there's the camera,
Starting point is 01:28:40 lighting, overlay. So thank you to everybody that gives us feedback. We see it in the chat, we see it in the DMs, comments, etc. So thank you to everybody that gives us feedback. We see it in the chat. We see it in the DMs, comments, et cetera. Trying to make it better every single day forever. And yeah, it's a fun journey. Keep grinding.
Starting point is 01:28:56 Let's get into the timeline. Let's go to Paul Graham. Andreas Klinger says, this picture will be framed by dads everywhere. Paul Graham says, every time we come back to Silicon Valley, my 16-year-old son gets a massive dose of cognitive dissonance when he notices that apparently smart and reasonable people seem eager to obtain something he's convinced is utterly worthless. Yeah. Someone says, interesting, what's that? And he says, my advice. It's like he set this up to go viral. Like,
Starting point is 01:29:27 it's like he almost framed the first tweet to be like, yeah, somebody's gonna ask, and then I'll drop the bomb on them. Yeah. Because he could have just revealed that it was his advice, but it wouldn't have been as good, and it would have been maybe cockier or something. I don't know. It was a great post. It was very funny. Great poster. Yeah. And I don't have a 16-year-old yet, but it'll be interesting to see if he enjoys my advice at that point. Well, what we'll do is I'll give advice to your sons. Yeah. You give advice to my kids. Yeah. And they'll respect it, because they're like, oh, this is my dad's business partner. Yeah. I got to respect what he says. So even if they go through that awkward,
Starting point is 01:30:06 you know, teenage period where they're sort of rebelling, we'll still be able to, you know, deliver the... Also, I mean, I'm planning to go full reverse psychology and have all my advice be like, move to Brooklyn, become a DJ. Yeah, exactly. Look, you're 16.
Starting point is 01:30:24 I want you to be doing drugs. When I was your age, I was doing... yeah. And then he's like, Dad, I'm going to the military and I'm starting a hedge fund. Getting a job at Goldman. Exactly. Working 18 hours a day. Exactly, exactly. I don't know, it's funny to see. PG has been posting about his kids for 16 years. Yeah, yeah, posting through it. It's good to see him come back. He had that post that many people were commenting on, the feng shui in his room. Oh yeah. But, you know, he's posting through it. Posting through it. That's great. Let's go to David Holz, founder of Midjourney. He says,
Starting point is 01:31:05 the biggest frustration of a hardcore technologist in San Francisco is how many companies, both tiny and gargantuan, are kind of fake. Investors often can't tell the difference between
Starting point is 01:31:16 story and substance, and armed with a billion dollars, it may take a decade for it to fail. As an observer, when those companies finally fall, you expect to feel some sense of satisfaction,
Starting point is 01:31:27 but actually you only feel sadness and a sense of waste. Dude, I know exactly who he's posting about. Perhaps this is just the price we pay to live in a place that is so supportive of wild ideas and risky ventures. A lot of dumb stuff is going to happen and we just have to be okay with it. It's just emotionally hard. It's like, it's such a funny take on like, oh, I, you know.
Starting point is 01:31:51 No, no, there's some fraud or whatever. And he's like, you know, most people would be like, oh, this is bad for the economy, or wasting money, or, oh, I should be getting the money to build something real. And he's just like, this is just emotionally draining. Yeah, I love how zen he is. It's so good. Anyway, I think the real crazy thing is that, like him, lots of people in Silicon Valley have identified the new crop of frauds. Yeah. Mainstream media,
Starting point is 01:32:22 silent. They're not doing investigative journalism anymore. Yeah, it's kind of weird. Hey, maybe, mainstream media, we need that. Maybe we need some investigative journalism. Where is the John Carreyrou of this generation? Yeah. I don't know if Carreyrou is retired or still working, but you gotta start figuring these things out, mainstream media, because there's some bombshells out there right now. But yeah, I mean, when you think about the high-profile, hard-tech companies right now that are building, that have a lot of hype, that have raised a lot of money,
Starting point is 01:32:56 the challenge is, what he's saying is, if you are a well-versed observer, or you have any type of insight into the company, you can kind of point out what he's describing: great story, not a lot of substance, right? And that by itself is not fraud. There are plenty of companies that have great stories and just not a lot of substance. Maybe they're building something, and they're building just not a very good version of it, but they still continue to raise money. Yeah. I mean, and so in the post-dot-com world there was a guy named Barney Pell who started a company called Moon Express who was
Starting point is 01:33:32 trying to, basically, build a moon colony. Great story, raised a bunch of money, didn't get anywhere, it failed, and it wasn't a fraud. All the investors were obviously bummed, but they were like, yeah, I wanted you to try to build it. It might have been a legal thing, I need to actually deep dive the company, but I'm pretty sure what happened is, as an investor, there's a lot of times when you put money in a company and you're like, hey, I wanted you to try this, and I know that it's a 10% chance that it works out, and if it fails, no hard feelings. Nikhil in the chat says, Hindenburg Research meets The Information. Oh yeah, that would be good if somebody does that. You slap that behind a
Starting point is 01:34:14 paywall and you'll have some nice revenue in no time. Yeah, it's great. Nabil says, well articulated. It's not ideal, but the alternatives feel worse. There does have to be a way to improve. Anyway, let's move on to Mateo over at Eight Sleep. He says, it's so exciting to see Eight Sleep on TV during the final of the Premier Padel in Riyadh, supporting the number one team in the world of Coello and Tapia. I don't know much about padel, but I think it's on the ground. No, padel. Padel is the cool version of pickleball. Not to throw too much shade at Palantir's pickleball paddle. I think it's great if you're doing paddle-esque sports, but yeah, it's just a faster, higher paced, more athletic, more intense version.
Starting point is 01:35:05 Boom. Boom. Got your Eight Sleep hat on. There you go. Thankfully, I caught that. That would have been deeply embarrassing. Yeah. I never would have recovered.
Starting point is 01:35:13 But yeah, I mean, it's cool that Eight Sleep is, I mean, it's such a fun brand. Because like, yeah, it's a consumer tech company, hardware company. But they get to go and play in F1 and pro sports teams. And it's a lot of fun. Anyway, getting eight sleep, it's a no-brainer. We love eight sleep here. Speaking of other sponsors we love, let's move on to Ramp. Aaron says, this is why I love Ramp.
Starting point is 01:35:38 And he's quote posting Ramp's official account. It says, last Friday, our team attended the Eagles parade in Philadelphia for some on-the-ground journalism. What we found might shock you. And they did this funny video of people with Ramp signs at the Philadelphia celebration. It's great. They're getting so much out of the Super Bowl ad. It is crazy. I guarantee this thing is ROI positive, which is more than you can say for a lot of Super Bowl ads. And it shows taking that scrappy approach of saying,
Starting point is 01:36:07 hey, we've made this big investment, but let's make the most of it. I guarantee you Doritos ran their ad, and they were like, cool. Pat on the back. Nice, we're done. We'll see you next year. And Packy's a big Eagles fan. I think he grew up outside of Philadelphia. He'll correct me if I miss that,
Starting point is 01:36:25 but yeah, love to see Philly fans get a win. Having fun. Yeah, that's great. Well, let's move on to a Bezel deep dive. We got just ad on ad on ad. Now, these ads are brought to you by AdQuick. I can mix in some reviews that have ads in them. Okay. Yeah, let's do that. Let's go to some reviews of the ads. The ads in the reviews are presented by AdQuick, by the way, to be clear, the best way to buy out-of-home ads for your startup. So I got the first one. Honestly, every time I read these ads, it just makes my heart sing. They get better every time. So I haven't seen these. Zero to One, But Unhinged, by somebody named John. The Technology Brothers podcast isn't just a show. It's a founder's dojo,
Starting point is 01:37:05 a 10x brain gym, and a capital allocator's confessional all wrapped into one. Every episode is a masterclass in thinking from first principles, moving fast, and breaking every norm except the ones that print cash. Amazing. John and Jordy don't just talk tech. They talk trajectory. Listen long enough and you'll either build something great or realize you never had the stomach for it in the first place. But let's talk execution. You know what else requires ruthless efficiency? Managing your finances when the system wasn't built for you. That's where Purple comes in.
Starting point is 01:37:36 So this is a really cool startup. Purple is the first banking and benefits platform built for the disability community, because fintech forgot millions of Americans who actually need financial tools built for real life. Checking accounts, EBT integrations, ABLE savings, all in one place. Purple built the Mercury for people with disabilities, because dealing with Social Security makes raising a Series A look easy. Check it out at withpurple.com. That's cool. That makes so much sense. Yeah, it's kind of like all of tech wanted to focus on, how do I reinvent the Amex, you know, the Platinum Card, how do I make this cool credit card that has restaurant reservations. Meanwhile, there's a massive opportunity sitting here with Purple to just build tools for this narrow subset of people who will switch from their primary. And you imagine that
Starting point is 01:38:26 most of the people that are feeling that frustration are not immediately like, I gotta go build a startup. Whereas, obviously, Ramp is in a highly competitive market. They need IMO gold medalists and the best venture capitalists in the world and tons of money and team members, because everyone who's built a company has thought, you know, I need better CFO software, right? And expense management and a corporate card. So that's going to be a very hot market. This is a place where you could actually probably go in, break in, make a statement, and then grow your business off of that. So I just think that's fascinating. And that's just something that would never come up on a market map. It would
Starting point is 01:39:03 never come up in a brainstorming session. You're not going to hear some thread guy say it. Once he sells it for a billion, some VC will be like, we need the market map for disability tech. So awesome. Thank you, John, for the review. I got another one. Okay, and then we'll jump into some other ones. Relentless Alpha, by username I'm Having Fun. Five stars. The Technology Brothers are at it five days a week, consistently delivering fresh alpha right out of the oven on the most relevant news in tech. Let's go. It's like taking a peek into the elite group chats you've never been a part of, with two to three hours of top-tier daily content from the brotherhood. I no longer have time to catch the latest flop guest
Starting point is 01:39:40 rotation of the week. That's brutal. But there have been some flops lately. Even though Naval went on, I thought that was cool. Oh, yeah. I got to watch that one. This review is sponsored by Psychedelic Science, the premier psychedelic conference globally, hosted in Denver this June. Don't miss 300 speakers covering the latest breakthroughs in psychedelic research alongside 12,000 attendees. Marc Andreessen can yap about ayahuasca one-shotting founders all he wants, but just because it's no longer contrarian to be into psychedelics doesn't mean they deserve newfound hate from Silicon Valley. I think that's correct. They're not without risks and they're not for everyone, but there's no debating their immense potential for
Starting point is 01:40:17 healing, creativity, and, let's be honest, fun. Register today to learn about the latest psychedelic research, policy, and culture this summer. Great. He sort of predicted any pushback from our audience and just nailed it. It was good. Sounds very cool. This seems like something Tim Ferriss would attend or speak at. I wonder if he has already, but very cool. Thank you to I'm Having Fun. Great username for a business like that. Yeah. Very interesting. Any others?
Starting point is 01:40:46 We got more. I'll just rip through them again. This one's also sponsored by AdQuick, correct? Okay. Let's go. This review, and then there's also another ad in there. Okay. Separate business. Cool.
Starting point is 01:40:56 This one's from Cody Ames. No wonder everyone I know told me to watch this. If they said X is ahead of the world, John and Jordy are ahead of X. They take the largest feats and most interesting developments from startups and VC to broader technology and business, then extract it into an engaging and easy-to-listen-to format.
Starting point is 01:41:11 Couldn't recommend this podcast enough to anyone who needs a quick and easy way to digest the biggest news you won't find anywhere else. I also heard they're the most profitable podcast, which is no shocker. This is the obvious spot
Starting point is 01:41:22 to advertise your company or any news you have. So actually, Cody didn't put an ad in here. That's my only critique of this review. He was talking to us about OpenAX, his company, which is very cool. And I believe Cody is also a car guy. He's maybe rebuilt it. I don't want to get it wrong because it would be very offensive.
Starting point is 01:41:42 But AMG Hammer. He's a hammer guy. He's a hammer. Yeah. And so this is like the most legendary Mercedes, kind of like Mercedes muscle car, essentially. I mean, a fantastic driver's vehicle. And I think he's restoring, rebuilding, modifying, recreating one. But he's invested a ton of time and obviously deserves the respect of the tech community for his incredible automotive innovations.
Starting point is 01:42:09 Yeah, and I got the last review, which is short, but perfect from username DudeWhat'sMindSay on Friday. He says, FPJorn of Tech Podcast, five stars. Nothing more needs to be said. Fantastic review. Fantastic. Do another review, put an ad in it. I feel like we need to give back for that. Well, speaking of watches, we got a promoted post from Bezel.
Starting point is 01:42:37 Let's talk about the Rolex Day-Date, the President, in 18-karat yellow gold with a factory diamond-set bezel and green lacquer dial, just landed on Bezel. This isn't just a watch, it's a status symbol, a power move on the wrist. Let's break it down. The watch of the elite: launched in 1956, the Day-Date was the first watch to display the day and the date in full. And so if you don't know about watch complications, you can kind of think about it like this: there's a hierarchy here. You add each complication up. You start with time only, just got the time on your wrist. Then you add a date complication, a Datejust, which just tells you the date, just the number. Then a Day-Date is going to tell you it's Friday the 26th, it's Tuesday the
Starting point is 01:43:18 18th. Then you can get into perpetual calendars, which keep the date right for years and years and years, annual calendars, which are right for 364 days of the year and then you have to reset, and there's even an eternal calendar now that goes out like a thousand years or something. And then there's moonphase complications, all sorts of complications. But the Day-Date is great
Starting point is 01:43:40 because you just look down, you immediately know, hey, it's Tuesday the 18th. And this is the watch of the elite. It's been the go-to for presidents, CEOs, and anyone who needs their watch to say, I make decisions that matter. I love it. Why is it called the president? The first U.S. president to rock the day date, Lyndon B. Johnson. By the late 60s, it had earned the nickname the president's watch.
Starting point is 01:44:05 That's pretty good marketing. Yeah, it's so sick. It was worn by world leaders, moguls, power players. This watch runs things. Rolex doesn't make the Day-Date in steel, only precious metals. That's the rule. This one, 18K yellow gold,
Starting point is 01:44:18 forged in Rolex's in-house foundry because even their gold has to be better than everyone else's. It has a factory diamond set bezel, pure opulence. You know, we love opulence here. There's a difference between aftermarket diamonds and Rolex diamonds. Rolex hand selects and sets every stone under strict standards, meaning this bezel isn't just flashy, it's flawless. Green lacquer dial. This is Rolex's power color. Rolex green equals money, prestige, and legacy. The deep lacquer finish on this dial
Starting point is 01:44:44 gives it a richness that pops against the gold. And so a lot of people see these watches and go, oh, Rolex, it's just so expensive, and it's just such a flashy thing. I enjoy this stuff because I'm a nerd for it, and I think it's really cool that even the most minor little details have a story, and it's as engineering-minded as anything else. And so that's why I'm excited about this stuff. I think you need a Day-Date soon, because the first time we met, I distinctly remember asking you at some point, I was like, wait, so do you want to be
Starting point is 01:45:13 like the president someday? I was just trying to figure out what you wanted to do. Turns out it was be a podcaster. But you know, that's a well-trodden path. The reality-TV-to-the-White-House path is well established at this point. But yeah, also, if you've seen Glengarry Glen Ross, the watch that Alec Baldwin wears is a gold Rolex Day-Date Presidential, and he
Starting point is 01:45:37 pulls it out. There's this iconic scene where he says, look at this watch, this watch costs more than your car. He says, coffee is for closers. Always be closing. That's where that comes from. And that was another movie, I think in the 80s or maybe 90s, that popularized the presidential Day-Date even more. And the presidential bracelet is particularly special here. It's the ultimate flex. It's the three-link President bracelet.
Starting point is 01:46:01 It was designed specifically for the Day-Date. It's smooth, seamless, and also hides the clasp, because true luxury is effortless. It's one of the most comfortable bracelets Rolex makes. So would you wear it? Let us know in the comments. I think you can pull it off. Head to the Bezel app, download it. And, yeah, there you go. Oh, look at that. I mean, it's just so iconic. Jordy, do you know how you should tell whether you should wear silver or gold? Are you familiar with this? The veins thing? No, I'm actually not. So there's this thing where a lot of women know about this with makeup. There's this idea that you have warm undertones or cool undertones, and for a woman, you should match your foundation to that.
Starting point is 01:46:46 And so there's this critique going on on Instagram right now that I randomly saw, which is, I guess, the left critiquing Republican women for doing their makeup wrong. And a lot of what they get wrong is that they have cool undertones, but they're using warm foundation.
Starting point is 01:47:03 And for men, oftentimes, if you have cool undertones, you're better off with silver, and if you have warm undertones, you're better off with gold. And the way you figure this out is you look at your veins. If your veins look blue, you have cool undertones, and if they look more green, you have warm undertones. And so the classic example is, you know, I'm obviously Swedish and very Northern, and so I have blue undertones. Blue blood. A more Italian guy is gonna have greener undertones.
Starting point is 01:47:33 And that Italian guy is gonna look good in gold, right? And so when you think about the, you know, the mafia guy, the Italian guy, the Tony Soprano, he's gonna have more green undertones and that's gonna to lend itself to being able to rock a gold watch. Now I recently bought a, not even a real gold watch, just a, just a watch that happens to be gold. I think it's maybe gold plated. Uh, but, uh, just to try it on and see if I can pull it off because I want to dip my toe in before I really try and go full
Starting point is 01:48:00 gold. But for now I've been sticking with silver, and I've been very happy, and I think it matches my tones and my colors and my style. Same here. I'm glad, though I'd never even heard of that rule. Yeah. But fortunately, I'm blue. I mean, it makes sense. Like, look at you. And we're both rocking silver. Exactly. Nice. Well, let's move on to the next promoted post. You got to stage these out. Well, I mean, yeah, it's an ad for Public, but it's really a blog post that we would talk about on the show anyway. It's from their founder and co-CEO, Leif Abraham. He says, we ship a lot in all areas of business.
Starting point is 01:48:38 By the way, he has a very unique Casio, which we'll have to have him on the show sometime to show it off. But anyways, he should have used the Casio in the picture here. Oh yeah, totally. Yeah. We'll have to get him on bezel and see what he picks out. And so he says, we ship a lot in all areas of the business. It's obviously cultural. And one of the philosophies we embrace is what we call pace management. And I thought there was an interesting little management tip for founders because Leaf is obviously running like a very high performance organization and he probably has some interesting stuff to share.
Starting point is 01:49:12 And so he says, time is the most precious resource we have and we must ensure that we manage it well. A new method we will be talking about frequently that you all should embrace is pace management. And this was actually just an email that he sent out to the team. And so it's kind of cool that he like published this after the fact,
Starting point is 01:49:27 this is a good example of going direct. He's not becoming an influencer, he's just taking content that he's already writing. Exactly. And then just publishing it. Yeah. So he says, as a manager, you are responsible not just for the quality of the work, but also for the speed at which it gets delivered. Time savings compound dramatically over time. Letting time slip away costs everyone money, not just the company, but every one of your coworkers. Here's how to manage pace. You got to take ownership of the pace, drive the pace of the project, despite knowing that the urgency you instill might make some people uncomfortable at times. You got to break the project into small, manageable pieces. You need frequent deadlines
Starting point is 01:50:04 with quick check-ins. "Let's regroup next week" is the enemy. Check in on the group's progress at least every 48 hours. I like this. This is the Chris Sacca thing. People ask, oh, when do you want this? Q2, Q3? Q tomorrow.
Starting point is 01:50:16 Isn't that what he said? Q Friday. How about Q Friday? I like that. That's fun. And this is very similar. Yeah, check in more frequently. Don't throw things.
Starting point is 01:50:24 Yeah, next week. That was fun. And this is very similar. Yeah. Check in more frequently. Don't throw things. Yeah. Next week. Keep teams small. Make sure every person knows exactly what they are responsible for and that everyone else on the team does too. And then project specific channels. Sometimes it helps to create project specific channels. Dedicated space allows for focused discussions, even on the smallest details.
Starting point is 01:50:44 So some good advice from Leif over at Public, if you're looking for a portfolio. You're building a company and you're wondering if you're going fast enough? Well, we went from one day a week to two days a week to three days a week. The pace is right. Eventually we'll be doing eight, nine, ten days a week. Ten days a week, let's go. Let's move over to Sam Altman. He's thinking about open source and stuff. He asked X: for our next open source project, would it be more useful to do an o3-mini level model that is pretty small but still needs to be run on GPUs? I thought you were saying he asked Grok. I was like, yeah, I thought Sam was asking Grok for advice. That'd be very funny. Very meta. Or the best
Starting point is 01:51:32 phone-sized model we can do? And this was sitting right at 50-50. And then Dylan Patel chimes in from SemiAnalysis. He says, I can't believe X users are so stupid. Not voting for o3-mini is insane. You can literally already distill 4o, Claude 3.5, DeepSeek V3 into sizes that will run on phones. And it's just funny. Yeah, obviously, the hackathon kills, just kind of generally.
Starting point is 01:51:55 Continues to be a dominant post. Phenomenal. Phenomenal. And should we report a murder? Yeah. This one needs to be reported. All right. So this is from BC Braggs.
Starting point is 01:52:09 This one needs to be reported. All right. So this is from VC Brags. He says, I'd like to report a murder. Delian says, I've ended up leading more rounds in the last four months than the prior three years combined. Not sure what it means, but I guess we are so back, question mark. Chamath says, means you've decided for whatever reason that time diversity doesn't matter in portfolio construction. Spoiler alert, it does. Delian says, ha, thankfully not too many 2021 SPAC shares in my portfolio construction either. Quick 10x ratio, quick work. This one, I'm surprised that
Starting point is 01:52:42 maybe Chamath was down to get ratioed and just felt like becoming a supporting character in the timeline for the day. Coming for Duncan is the most dangerous thing. You're playing the Cobra on the timeline. Yeah, Cobra. And then also trying to critique somebody for time-based portfolio construction
Starting point is 01:53:04 where Chamath basically spearheaded the largest investment into this novel sort of financial product, into highly speculative companies, in a very condensed time period. So you just kind of set yourself up for that. Also, is Chamath a seed investor? Has he done seed-stage investing? I don't think of him as a seed guy. He was in the other Grok, the G-R-O-Q, at seed, I think. Okay, he incubated it, didn't he? I purely think of him as growth stage, and like a SPAC guy now. But maybe he does. But yeah, Delian, you know, I don't think, if you're doing seed deals, the time-based
Starting point is 01:53:45 portfolio construction applies. Yeah, so he put $10 million into Groq, G-R-O-Q, which was a Google team that spun out. Yeah, I saw a funny thing about Groq with the Q. Everyone's talking about Grok 3, which has a K, and Harry Stebbings is like, here's my interview with the Grok CEO, and it's the GROQ one. So he knew that he should drop that to farm off of the energy, or something that people would pull over on. I thought that was funny. Anyway, let's move on to the new president of Syria. I thought this was funny. This guy had quite the path. BuccoCapital Bloke says: intern, Al-Qaeda; then associate, Al-Qaeda; then senior associate, Al-Qaeda; then insurgent at various; and then president of Syria. And this is from an interview the new president
Starting point is 01:54:41 of Syria, Ahmed al-Sharaa, did with, I guess, some small YouTube channel. And he says, I joined Al-Qaeda because I was a 19-year-old and there wasn't any other venue to take part in politics. Also because I wanted skills and experience. I already built all government institutions in Idlib before the offensive. This ensures we can immediately take over and prevent anarchy. And yeah, I mean, obviously the American propaganda during the war on terror was very much like, oh, it's a bunch of terrorists in caves, they're just completely running around. But it's like, no, they have timesheets, they have time cards, and they have to-do
Starting point is 01:55:20 lists and tracking and inventory management. Yeah. And what's the big one? What's the NetSuite?
Starting point is 01:55:29 Oh, ERP, basically. And yeah, I mean,
Starting point is 01:55:37 these large terrorist organizations, he's obviously joking with the intern, associate, senior associate thing, but there are career paths and tracks that you can move up in. Yeah. It's worth noting that if you look at terrorist organizations or drug cartels, you don't become these sort of nation-state level players or, you know,
Starting point is 01:55:59 multi-billion dollar enterprises without being highly organized and having real leadership hierarchies and stuff like that. So that's why I've joked before about Pablo Escobar being the greatest CPG founder of all time, because even if you told somebody you can break every law under the sun, they would not be able to bootstrap a company to $22 billion of annualized revenue in a decade, right? Oh, we got to put deep research on this for the fans. Jordy and I were on a call with David Senra last night from the Founders Podcast, and we were debating how big was
Starting point is 01:56:35 just the cocaine industry, I guess. How would you benchmark that against big oil, big tobacco? What's the market size? I was arguing that it was probably over a trillion dollars in market cap if you applied some sort of price-to-earnings multiple on however much money is going through it. What was the total revenue? How big of a player was this guy? And should you, I don't know why I'm blanking on his name, who's the greatest CPG founder again? Pablo Escobar. Should you hang his jersey in the rafters? I mean, certainly one of the greatest criminals of all time, but
Starting point is 01:57:12 you know, how big was his enterprise? And I wonder how many employees he had working for him, if he really mapped out the entire org. Is it thousands or tens of thousands? But imagine if you put the same revenue multiple on that $22 billion figure. He would be at hundreds of billions, and potentially, what comes after a trillion? Quadrillion. Quadrillion, yeah.
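To put some structure on the back-of-the-envelope debate above, here's a minimal sketch of the arithmetic. Every input is an assumption: the roughly $22 billion annualized revenue figure is the one cited on the show, and the net margin and price-to-earnings multiple are purely hypothetical placeholders, not real data about the business being discussed.

```python
# Hypothetical back-of-the-envelope valuation math from the discussion above.
# The $22B revenue figure is the show's claim; the margin and the
# price-to-earnings multiple are made-up illustrative inputs.

def implied_market_cap(revenue: float, net_margin: float, pe_multiple: float) -> float:
    """Earnings = revenue * margin; implied market cap = earnings * P/E multiple."""
    return revenue * net_margin * pe_multiple

# $22B revenue, a guessed 50% net margin, a guessed 15x P/E
print(implied_market_cap(22e9, 0.50, 15))  # 165000000000.0, i.e. ~$165B
```

More aggressive assumptions on the margin or the multiple are what push this same arithmetic toward the trillion-dollar figure tossed around on the show; the point is only that the answer is entirely a function of those two guesses.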
Starting point is 01:57:36 Anyways, I got another post here from Arfor Rock. If you're not following Arfor Rock, great poster. He's come out of the gates sharing sort of behind-the-scenes on rounds that are getting done. He'll typically talk about a round two to three months before it gets announced. So his point of view is that he does it to help make rounds more
Starting point is 01:57:54 competitive. Because you can imagine, if you're a big capital allocator and you see Arfor Rock post that such-and-such round is getting done, you hit up your associate and you're like, why haven't we seen this yet? Get on the phone with them. And so, definitely, I'm sure we're gonna do some sponsored posts with them. Momentum. No, I think we should have him on the show for a segment where
Starting point is 01:58:13 he speaks through an anonymized voice, you know, make him sound like Steve Jobs or something like that, and just talks about all the stuff he's seeing. But he's got an update here. This was underreported. USV came out with a new core fund in 2024. They have historically taken the approach of staying small, focusing on returns, focusing on the real craft of venture, and they're not sort of a high-volume investor. They're making a handful of new bets, typically doing them very early. And so he outlines their total fund performance,
Starting point is 01:59:01 which shows their first fund, in 2004, was about a $100 million fund. It ended up grossing $1.5 billion, so close to a 15x there. And then you just go down the list, pretty much every single fund, except their 2019 opportunity fund. For those that don't know, an opportunity fund is more, hey, we have a bunch of early-stage companies that have done well, so we raise a later fund to double down on them. And the 2019 opportunity fund is the only fund on this list that isn't top decile, basically. It's sitting at 1.3 percent IRR. And you have to imagine they sort of deployed that fund at the absolute top, into companies they already knew were good companies that just were kind of a little bit frothy. So anyways, could still turn around. Who knows?
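The fund math being read off the post is simple enough to sketch. A hypothetical helper (the function name is ours, not any fund-reporting standard), using the two figures cited for the 2004 fund:

```python
# Gross return multiple for a venture fund: total proceeds / committed capital.
# The numbers below are the ones quoted from the post about USV's 2004 fund,
# used purely as an illustration of the calculation.

def gross_multiple(fund_size: float, gross_proceeds: float) -> float:
    """How many times over the fund returned its committed capital, gross of fees."""
    return gross_proceeds / fund_size

# ~$100M fund grossing ~$1.5B
print(gross_multiple(100e6, 1.5e9))  # 15.0
```

Note that a gross multiple says nothing about timing, which is why the show also mentions IRR: the 2019 opportunity fund's 1.3 percent IRR reflects when the money went in and came back, not just how much.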
Starting point is 01:59:41 But, you know, one of the goats, certainly the New York City goat, almost deserving of a place in the Holy Trinity. But hasn't quite scaled and had the impact. Yeah, hasn't scaled yet. Cool to see, though. You got to leave it as a trilogy. You can't make it quadrilogy. Quadrilogy. It never works.
Starting point is 02:00:04 Well, let's move on to a promoted post from Wander. We got a great spot up in Big Sur. Jordy, have you been to Big Sur? I have. I've driven through it. Some of my fondest memories as a kid are really Big Sur. I went up there with my family, went inner tubing down the river. I remember that really fondly. Founders Fund just had a big event out there for all the defense tech folks. It's really California at its best, right? It's coastal. It's rugged.
Starting point is 02:00:33 You've got the mountains meeting the sea. It's totally fantastic. So there's a beautiful photo here. What's cool about the Wander is that you know there's going to be fast Wi-Fi. So I feel like we can go stay there and still stream and not be worried about reliability. We're actually not in a Wander for PMF or Die, unfortunately, and that's been like the number
Starting point is 02:00:56 one issue: we're trying to stream HD video, and so showing up to an Airbnb and not having quality Wi-Fi is a terrible experience. Wander fixes that, among a bunch of other stuff. And so Zachary Slayton, MBA, says, what a great weekend spent in Big Sur. Not only were the views and driving incredible, but our Wander house made the experience just that much better. Great experience, great service, amazing amenities, highly recommended. Take a look. He shares some beautiful videos and pictures. And this is so funny because, you know, 18 likes, pretty small post. We love it, it's a great post, but now you're going to get a video reply from this podcast. Zachary, welcome to Technology Brothers. Basically, going forward, now that we're partnered with Wander, it's like anyone
Starting point is 02:01:44 who posts about Wander is going to get random clips from us. But we love the post, and thanks for sharing, because this is a place where I might stay, and I'm glad to see it on the timeline. Amazing. I actually have some more breaking news. We got a company called Hightouch raises $80 million at a $1.2 billion valuation. They came out of YC. Okay, we'll have to ask Gary about them. This one came out of nowhere. I don't know what their last round was done at, but I'm looking at their site right now and they're already working with Spotify,
Starting point is 02:02:14 Aritzia, Warner Music Group, PetSmart, TripAdvisor, Whoop, Cars.com, Plaid, Ramp. So they work with Ramp, so they must be great. Calendly, GitLab, DocuSign, Greenhouse. And they do like marketing personalization tools with AI. Anyways, absolutely massive. It's happened a number of times recently, and maybe it's a sign of a little bit of frothiness, or just that these companies are growing really quickly, but you hear about a company for the first time when they're raising at north of a billion. Yeah. And for somebody that's highly tuned into the sort of flow of new companies, that doesn't happen that much. Yeah.
Starting point is 02:02:50 Yeah, I mean, you go back to, like, hearing about Anduril in 2016. Their first round, I think they raised $10 million, and that was a lot, but when they raised it, everyone in tech heard about it. I don't remember the first valuation. It was probably like 10 on a hundred or 10 on 80 or something. And now it's like you're learning about companies when they're already unicorns. But also,
Starting point is 02:03:13 you know, B2B, kind of under-the-radar founders, that's kind of the nature of these things.
Starting point is 02:03:18 But good luck to them. And I'm sure there are some powerful metrics underneath that raise. I guarantee it, because this stuff's hot and there's a lot of marketing automation to do. So we'll have to do a deep dive on them. That'd be cool. Maybe have them on the show.
Starting point is 02:03:32 Let's go to Blake Robbins. He says careers are shaped by people, not logos. And he posts, he's been changing up his format. I like this. Some screenshot essays, more or less bringing stuff to the timeline. He says, at 21, I joined Ludlow Ventures
Starting point is 02:03:48 with no experience in venture capital. Instead of giving me a strict set of rules, Jonathon and Brett did something that shaped my entire career: they trusted me to chase my curiosity.
Starting point is 02:03:56 They gave me agency when they had every reason not to trust a new grad. That trust rewired how I think about ownership, risk, and possibility. I got lucky.
Starting point is 02:04:06 The conventional wisdom in tech is clear. Join a high growth startup to accelerate your career. The logic makes sense. Rapid growth creates opportunities. Responsibilities expand faster than org charts. And you learn by doing rather than watching. But this advice misses something fundamental. Who you work for matters more than where you work. I completely agree with this. It's like if you have a good person that you're working with, it's way more valuable than just the logo. Yeah. And they define your mental model for excellence.
Starting point is 02:04:35 Tech loves to worship pedigrees. The other thing is, when you come out of school, it's so hard to evaluate if people are truly world-class. Like for me, I came out of school and was working on a bunch of YouTube stuff, working with Sean and Connor at the Ridge. I love them as people. They were clearly very talented, but it actually took me five years, and working with a bunch of other people, to realize how good they were. Yeah. And it's been awesome to see. But again, it's so hard to figure out, because obviously there are normal distributions and bell curves all over,
Starting point is 02:05:16 even high-performing organizations. Like, the dumbest Harvard grad is going to be way dumber than the smartest, I don't know, community college grad, right? Because there's going to be overlap in these normal distributions. And that happens within companies too. He says tech loves to worship company pedigrees: ex-Stripe, ex-Ramp, ex-Anduril. These labels imply excellence, but they mask the real story. What matters is who mentored these people. Great companies don't create great talent by default.
Starting point is 02:05:47 It's the leaders within them who nurture ambition, challenge thinking, and build environments where people thrive. Yeah, there's a lot of people that are like, oh, I was early at this company. And then you see some of the early photos and you're like, why aren't you in that photo? I've gotten absolutely cooked hiring for logo pedigree.
Starting point is 02:06:03 Totally. Like, not every time, but almost 50% of the time. And the challenge is, somebody joining as, you know, the thousandth employee at an iconic company, they could be great. Yep, they could be great, but not even have an impact, right? Like, they could just be great and sort of exist within the structure. And so you just got to really press people and not let them slide into roles based on having worked at relevant companies. Yeah, I mean, you think about the path of, like, Lachy Groom, who was not just, like, at
Starting point is 02:06:36 Stripe. I don't even know if he was that early at Stripe, but he went into Stripe and then very quickly became like the money guy for Patrick. Yeah. Right. And it's like he was running their ventures team, more or less, or doing something with investing. And then Sam Altman set him up with a fund, and very quickly he was on a massive, high-growth thing and got a ton of AUM. And that's very different from, oh yeah, I went in and I was kind of on a team that wasn't super high-performing and I just kind of worked nine to five. What does Jeremy Giffon talk about?
Starting point is 02:07:09 He talks about being off the org chart, running the ventures program at a company where the founders care about that program a lot. Yeah, no meetings, no bosses, but employed by the company. That's like an ideal scenario, if possible. Yeah, it's fascinating. Big news, massive promoted post from Sotheby's. A lot of people want to check this out. Banksy's Crude Oil, from the collection of Blink-182's Mark Hoppus,
Starting point is 02:07:42 will headline the Modern and Contemporary Evening Auction at Sotheby's London on March 4th. So if you're headed over to the auction house, check out the collection of Mark Hoppus from Blink-182. I thought this was fun to share. I know a lot of you guys in the community are art collectors and often at Sotheby's, so you want the heads up, and now you know. Let's move over to... Bo. By the way, Bo just texted me and said it cracks me up how little John knows about UFC,
Starting point is 02:08:13 because he saw that we talked about it yesterday. So we're going on record right now. John, you got to learn about the number one sport in America. And we got to go to a card. I was playing it up a little bit, but it was very funny. That's great. Like, who's this schmuck calling out my friend,
Starting point is 02:08:35 Bo? I don't know who this is. It's like, of course it's a UFC guy. Anyway. Great. Anyway.
Starting point is 02:08:44 We love him. Let's go back to Wander. Kyle Tibbitts. This is a big promotion episode, lots of ads. Sometimes you do a lot of bucket pulls and we did a lot of deep dives, sometimes they do a lot of bangers. That's the whole... well, after this, let's slam through a couple posts. Okay. So Kyle says, one of the best designed app launches I've ever seen. Stoked to be an investor. Great job to the Protector team. So Nikita Bier has been advising a company called Protector, which allows you to book armed agents. They're debuting in Los Angeles and New York City at number three on the App Store. And so this is kind of the latest Nikita project. He pumps out a lot of stuff. What's your take? I think this is a company he's advising. He's advising.
Starting point is 02:09:28 Yeah. Potentially through Intro. So I thought this was interesting, because back in 2014, Uber was really hot and there was an Uber for everything. And there was a company that went through YC with basically this exact idea. It was called like Bouncer or something, I forget what it's called. It was like Uber for bouncers. I sent it into the chat at some point. But, you know, maybe it wasn't the right time for that idea, and maybe this is the one. What's
Starting point is 02:09:55 interesting is that disintermediation is always a risk with these platforms. Like, that was the problem with the dog walker apps: once you find a good dog walker on a dog walker app, you just say, hey, let's not use the app, just come every week at this time. And so that's always been a risk. Whereas with Uber, even if I have a guy who's reliable to take me to the airport, or, oh yeah, I know a guy who gave me his card, and if I'm going out with friends for a long time and I want to rent the limo for the whole weekend or something, I'll call him. But, you know,
Starting point is 02:10:27 I'm not just going to have a guy follow me around all the time. Yeah. It really comes down to trust. So I'm sure there's an opportunity for Protector, right? Just the pure novelty of, oh, I just called this armed security guard to where I am. Jeremy talks about this, right? There's lots of opportunity in taking things that the ultra-wealthy love and making them accessible to the masses through these sort of on-demand platforms.
Starting point is 02:10:54 So I'm sure there's a market opportunity for this. I'd say overall, again, you call an Uber, you don't care who picks you up as long as the car is generally clean and they're fast. Whereas things like this, security is ultra high trust, right? So you would much rather have somebody that you know well and that you know has like great training and that you have a good relationship with because if they're needed in any way, you know, you want to know that you can rely on them.
Starting point is 02:11:20 I think that maybe the app can actually do a good job there, because if they're onboarding agents and they have a very rigorous process, it's totally possible to have an app. At the same time, if the customers don't demand that, you know that the apps and the platforms will coalesce towards something that's a little bit sloppier. But that's fine, because if you're on a marketplace app and you're just like, look, I just want to be in Tulsa and I want it to be the cheapest thing ever, and I don't care if the furniture is from Temu, like, yeah, there's an app for that.
Starting point is 02:11:52 But, you know, it's kind of up to them how they curate the marketplace.
Starting point is 02:11:58 It will be interesting. I have no idea how he got this to number three on the App Store so fast. He's a master. Because there can't be that many people. You're telling me there's more people booking armed agents than booking VRBOs? There's ways. So here's a potential way: you use these sort of algorithmic feeds to generate views on your content. Let's say you're producing a bunch of organic content and running ads. There's ways to do these sort of pre-downloads, almost like pre-ordering the app, where you can get somebody to say, I want to download this, and then when you release it, you can drive all that, let's say a hundred thousand in one day, and then
Starting point is 02:12:37 you'll rank. Sure. Even though, you know, maybe the next day you won't. Because, I mean, I love the idea of armed agents being on demand. I think it's probably a very good thing and will increase safety and reduce crime, which I think is good. But I would hate to live in a country where an app for booking armed agents is more popular than nice vacation rentals. You know, that's like a very bleak,
Starting point is 02:13:01 like, our society is truly degrading if it's like, oh yeah, the gun store is more popular than the luxury goods store. Anyway, good luck to the team. Awesome launch. Love to see that Nikita is working with it. Very interesting to follow along and see where that goes. Anyway, we got a promoted post for a Koenigsegg.
Starting point is 02:13:20 There's a 2021 Koenigsegg Regera up for sale. Obviously, you're going to have to bid against Sam for this one. Yeah, you think they posted this knowing that Sam would see it and not be able to help himself? For sure. For those that don't know, Sam Altman loves Koenigseggs. Loves all sports cars. He's got a McLaren F1.
Starting point is 02:13:38 Absolute fiend. Fiend. He loves them. As he says, he's doing OpenAI because he loves it, and he's also buying cars because he just loves cars. He's a simple guy. I'm very pro Sam Altman, the car guy. That's one of the main reasons I'm rooting for him.
Starting point is 02:13:53 He was saying that he was using deep research to find this sort of obscure Acura. Yeah. I guess he was successful at it. Obscure? It's an NSX, bro. It's like one of the most famous Acuras. He could have just said, I need to find an NSX. Sure, yeah, that's true. Yeah. Anyway, zero to 60 in less than three seconds, top speed of 250 miles an hour. If you're looking
Starting point is 02:14:13 for a new daily driver, maybe you work in AI, maybe you just raised a billion and did some secondary, pick up a Koenigsegg. Yeah, maybe you work at PETA and you see the money coming from the for-profit conversion and you're like, yeah, I'm going to be rich. Pick this up. Let everyone know that you're serious about the for-profit conversion. And when you pull up to the investor meetings, they're like, really, PETA? And they're like, well, he's already got the Koenigsegg Regera, so yeah, he must be the real deal. He must have a really good plan for how he's going to monetize this going forward. So we'd love to see it. Anyway, we quote-tweeted this on the PMF or Die feed,
Starting point is 02:14:52 but we thought we'd cover it here just to throw a reply towards Starter Story. If you're not familiar with Starter Story, it's a YouTube channel that does video interviews and profiles on independent builders, hackers, entrepreneurs, and they did one on Blake, our boy in the cage. And so Starter Story says he taught himself how to code with ChatGPT. He built three iPhone apps, made 10 million in revenue. We called him up and
Starting point is 02:15:20 asked him to break it all down. They talked for hours, but cut out all the fluff to give you all the alpha. I love it. And so there's a 20-minute video. Blake is an absolute beast. He breaks down a bunch of different business ideas. And if you want to know more about player one in the cage, you can watch this 20-minute video to get an idea of who Blake is. An absolute dog. An absolute dog. It is 1:07. How are we doing on time? We're good. Cool. I think we should wrap. Yeah, let's wrap pretty soon. Let's go do a Duolingo. Oh, this is good. Trae Stephens is doing an event on March 6th. If you're in San Francisco, go check it out. Leigh Marie
Starting point is 02:16:12 Trace Stevens where he's discussing one of my fave posts ever. Choose good quests. Link in comments. Hat tip to his co-author, Marky Wagner, who's an absolute dog as well. Trey is one of the most mission-driven people I've ever met, as well as an inspiration for practicing Christians in tech. When Trey talks about both building great companies while also finding purpose, I listen. If you haven't read Choose Good Quests, it's fantastic. I highly recommend it. I think he should turn it into a book. I hope he does. but you can go hear him talk about it at this event. And so go find the Luma and go, uh, go to this event. If you're in SF on March 6th, anyway, I think that's a good place to close out. Oh, we got to talk about your bags. Uh, you've been, you've been killing it, uh, with Nvidia. Uh, uh and somebody somebody had the same idea as you uh mckay
Starting point is 02:17:06 Wrigley says, can we talk about how unbelievably stupid the market reaction to DeepSeek was? Compute got more useful and everyone sold. Literally today, for Grok 3, Elon goes, oh yeah, we bought another 100K H100s, and mentioned a new 1.2 gigawatt data center. Nobody is going to buy fewer GPUs. And this shows: the price of Nvidia is exactly what it was a month ago, right before the DeepSeek drop. And I think you bought the bottom, actually. Yeah, when DeepSeek launched Sunday and had their fake run-up in the App Store, everybody was talking broadly about Jevons paradox and all this stuff. For a company to go down 15% like that,
Starting point is 02:17:51 it just seemed way oversold. And I have no idea what Nvidia stock does over the long term. But yeah, I just bought right as everybody was panicking, and it's back up 15% since then. Making it look easy, Jordy, making it look easy. But we are in the news business, we're in the broadcasting business, not the financial advice business. That's right. Good luck out there. Stay safe. Avoid doing anything that your mother would not approve of. Well said. Have some good etiquette. Focus on your manners. You got to coin that. Have you coined that? I'm working on it.
Starting point is 02:18:25 What would your mother say? Yeah. Too many letters. That's my current philosophy for how you should behave yourself, especially on the internet. What would your mother say? Make your mom proud. What would your mother say about you launching that meme coin and rugging it immediately? What would your mother say about you getting in a petty fight with another technology leader publicly on X?
Starting point is 02:18:47 What would your mother say about you asking your date to split the check? Embarrassing. Don't do that. Embarrassing. You got to dive into that one next time. Yeah, we'll cover it eventually. We're going to be doing a big etiquette deep dive, big manners update, giving you how to use each fork, how to tie a tie. We're going to be working on that.
Starting point is 02:19:07 How to drive stick. These things are important. How to make your bed. Anyway, leave us a five-star review on Apple Podcasts and Spotify. And keep watching us and expect a lot of new updates. We're cooking. We are cooking. Thanks for watching.
Starting point is 02:19:20 We will see you tomorrow. See you tomorrow. Bye.
