TBPN Live - Weekly Recap: Grok 4 Launch, Texas Floods, Web Browser War, Top Signals, Meta Smart Glasses

Episode Date: July 12, 2025

(00:00) - Intro
(00:03) - Texas Floods Controversy
(03:39) - Augustus Doricko (Rainmaker)
(31:04) - Top Signals
(01:05:09) - Grok Goes Out of Control
(01:14:49) - Grok 4 Launch Breakdown...
(01:48:06) - Ben Thompson (Stratechery)
(02:27:59) - Meta Doubles Down on Smart Glasses
(02:35:21) - OpenAI to release Web Browser

TBPN.com is made possible by:
Ramp - https://ramp.com
Figma - https://figma.com
Vanta - https://vanta.com
Linear - https://linear.app
Eight Sleep - https://eightsleep.com/tbpn
Wander - https://wander.com/tbpn
Public - https://public.com
AdQuick - https://adquick.com
Bezel - https://getbezel.com
Numeral - https://www.numeralhq.com
Polymarket - https://polymarket.com
Attio - https://attio.com/tbpn
Fin - https://fin.ai/tbpn
Graphite - https://graphite.dev

Follow TBPN:
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://www.youtube.com/@TBPNLive

Transcript
Discussion (0)
Starting point is 00:00:00 You're watching TBPN. Rainmaker stands accused of having a role in the Texas floods. This is a very, very sad story. It's on the cover of the Wall Street Journal (not the Rainmaker part, that has been contained to X), but I'll give you a little update on what's going on in Texas. So, Texas: "Texas rescue grows urgent as toll mounts. At least 70 were killed in weekend floods as more bad weather complicates the search." The search for those swept away by punishing flash floods in central Texas over the holiday took on new urgency Sunday as the death toll climbed to 70 and nearly a dozen girls from a private summer camp remained missing. Rescuers combing the swollen
Starting point is 00:00:41 banks of the Guadalupe River were holding out hope that survivors might still be found. The potential for more bad weather Sunday also loomed over ground and air operations. The National Weather Service warned of more rainfall and slow-moving thunderstorms that could create flash floods in the already saturated areas in the Texas Hill Country.
Starting point is 00:01:02 So this blew up on X, and people were asking Augustus: was Rainmaker really operating in the area around that time? Cloud seeding startup Rainmaker is under fire after the deadly July 4th floods in Texas. CEO Augustus Doricko, who's been on the show multiple times, will join us today at noon to break it down. He's already explained his side of the story on X several times, but we will ask him a lot more questions. He says the natural disaster in the Texas Hill Country is a tragedy. My prayers are with Texas. Rainmaker did not operate in the affected areas on the third or fourth,
Starting point is 00:01:39 Or contributed to the floods that occurred over the region. Rainmaker will always be fully transparent and he gives a timeline of the events. He says overnight from the third and fourth moisture surged into hill country from the Pacific as remnants of the tropical storm Barry moved across the region at 1 a.m. on July 4th, National Weather Service, which we work closely with to maintain awareness of severe weather systems,
Starting point is 00:02:02 issued a flash flood warning for San Angelo, Texas. Note summer convective cloud seeding operations in Texas do not occur during overnight hours. At 4 a.m. on July 4th, NWS issued a life threatening emergency warning and flooding insured. He says, did Rainmaker conduct any operations that could have impacted the floods? He says no.
Starting point is 00:02:23 The last seeding mission prior to the July 4th event was during the early afternoon of July 2nd, when a brief cloud seating mission was flown over the eastern portions of South Central Texas and two clouds were seated. These clouds persisted for about two hours after seating before dissipating between 3pm and 4pm CDT. Natural clouds typically have lifespans of 30 minutes
Starting point is 00:02:43 to a few hours at most, even with the most persistent storm systems, rarely maintaining the same cloud structure for more than 12 to 18 hours. The clouds that were seated on July 2nd dissipated for 24 hours. A big question I have that I'm sure he'll have answers to is why do clouds heating operations in the immediately before a massive storm is coming through.
Starting point is 00:03:07 Yeah. That's the question that a lot of people have. But we will get into that when he joins the show. Yeah. I mean, there's a big question about how effective cloud seeding is. Could you start a flash flood if you tried? Does this work? Someone was paying for this, because it's not a non-profit. Obviously Rainmaker has clients.
Starting point is 00:03:28 I believe it was state level funding. So the state might buy cloud seeding operations in one way. There could be a mistake. He says that he's not involved at all. So we will dig into that with him. Our next guest is here, Augustus DeRico, the CEO, founder of Rainmaker. Welcome to the stream, Augustus, how are you doing? John Geordi, thanks for having me. I am doing well. I am obviously talking to a lot of people about the flooding that's gone on in Texas and appreciate the opportunity to clarify that Rainmaker and cloud seeding had nothing to do with the flooding that unfolded.
Starting point is 00:04:02 Even in spite of that, I think that it's a tragedy that it did happen and certainly don't want anybody to use this opportunity, use this controversy, to blame cloud seeding for the sake of popular political support. And you may have seen that Marjorie Taylor-Green is proposing running a bill to ban all forms of weather modification based on those that we saw in the Florida state house legislature earlier this year.
Starting point is 00:04:30 I think it would be both disrespectful to the families involved and baseless and without any technical or scientific credibility if that legislation were to go through. So I'm happy to talk about the course of events, what cloud is, what it's not here with you today. Yeah, let's kick it off with, um, the, the high level on what actually happened in Texas, where things stand now, the status of the rescue operations and kind of the timeline, um, that's more broad. Yeah, absolutely. So, um,
Starting point is 00:05:00 this phenomenon of this flooding was global in scope. Um, it was referred to as a low probability, high impact event. I encourage people to go to Matthew Cappucci on X. He gave a great outline. He's a meteorologist that has a lot of expertise on severe weather forecasting. But tropical storm Barry, the remnants of which blew into Texas, was going to cause inordinate flooding
Starting point is 00:05:25 regardless. And that area of Texas is also known as Flash Flood Alley because these events do happen. Now, four trillion gallons of precipitation occurring over the course of just a couple days is pretty out of distribution, but we are seeing an increase in these sorts of severe climatic events over time and especially down and around the Gulf. So just to go over the timeline after having clarified that it was the remnants of Tropical Storm Barry and the convergence of large mesoscale phenomena that induce that flooding, it was at about 1 a.m. on the 4th that the National Weather Service issued a flash flood warning. And then it was at about 4 a.m. on the 4th where they
Starting point is 00:06:07 said that there was a life-threatening emergency underway. It was over two days prior that Rainmaker had suspended all of its cloud seating operations in Texas because one, our forecasters and our meteorologists saw that there was going to be this severe weather event and we need an operate to produce more water when there was already the event coming. But two, we suspended operations in accordance with the Texas Department of Licensing and Regulations suspension criteria, where if there is a severe weather warning from the National Weather Service or there is too much saturation of the soil, we have to ground operations.
Starting point is 00:06:46 And so we do so both voluntarily and in accordance with existing statutes. Okay. So the cloud city operation that happened prior to the storm, who was the client? Like I mean, I assume someone was paying you, sometimes it's the government, sometimes it's an individual or a or business. Walk me through where they were, who they are, what their goal is by procuring your services. Sure. So it's obvious that at this moment in time, that region of Texas does not need more water. However,
Starting point is 00:07:18 throughout the Western United States, farms, conservationists, governments concerned with their aquifer supply of water and also reservoirs for both industrial and residential drinking water, contract with Rainmaker to produce more water via cloud seeding. And in the case of Texas, the South Texas Weather Modification Association, the West Texas Weather Modification Association, and multiple other entities exist as conglomerations of both counties and individual farms that pay for cloud seeding services to, one, water their crops, two, fill up the reservoirs that they irrigate their crops with, and three, recharge the aquifers like the Ogallala that
Starting point is 00:07:56 has been severely drawn down and then puts all of these farmers at risk of not being able to grow, not being able to do business because of a historic drought. Okay. So, would the proposed ban just because what I'm getting at is like, I'm wondering if, like if the government is paying for cloud seeding operations, like the easier lever might just be to decrease the funding to the government, but it seems like Marjorie Taylor Greene is pushing for some other legislation that wouldn't just be, Hey, buy less of this service because we don't need it. And instead this service should never be bought at all. So why is there the distinction there? Like is,
Starting point is 00:08:41 is, is most of the money that's going into one of these associations private farmer capital or is it a split? Like how does that actually break down? So right now it's largely public municipal money that is going into these weather modification programs to increase water supply when there is drought or in preparation for drought. The bill that has been forecasted, that has been proposed by Marjorie Taylor Greene, would wholesale ban all forms of weather modification, be it cloud seeding, solar radiation management,
Starting point is 00:09:14 or what they supposed to be chemtrails. I mean, very transparently, I think that a lot of the concern around weather modification is actually conflating baseless notions of chemtrails with a very practical American technology that can and will and does benefit our farmers, our ecosystems, our industrial water needs, and our residential water needs. If this legislation were to go through, not only would it deprive all of those interests and all of those Americans from having water from cloud seeding, but it would also be against America's interest at a geopolitical level because China recently, I think on the last time I was on TVPN, I talked about how they had a $300 million annual budget for their weather modification program. That as of 2025 has been up to $1.4 billion. That is extremely consequential. And I think that if we were to ban who controls
Starting point is 00:10:06 or banning Americans from controlling weather modification technology, that would put us at a meaningful disadvantage. Now, all of this to say, people deserve transparency. They deserve clear regulatory framework so that they know whether modification operations are safe and being conducted in a responsible manner. And with government oversight and accountability, if ever there are negative consequences to cloud seeding. Again, there haven't been any in the case of Texas. But I think that the reasonable next steps are to more stringently regulate who is allowed to cloud seed, define what the concepts of operation are that are permissible,
Starting point is 00:10:44 define the suspension criteria at a federal level rather than leaving it purely to the states, so that anybody that wants to know about weather modification can look at the data and scrutinize it and ensure that it's being conducted safely, and also just to build trust. Because the Weather Modification Act from 1972 that currently outlines the Weather Modification Reporting Act of 1972 that outlines how we have to report to the federal government is 50 years old. We need more scrutiny on these programs for the sake of public trust and accountability.
Starting point is 00:11:16 And that seems like a reasonable next step. That was also recommended by the Government Accountability Office in their report on cloud seeding and weather modification earlier this year. What was the scale of the general water, sorry, sorry, weather modification activities on July 2nd? It was you guys, was there a bunch of other players operating?
Starting point is 00:11:42 Is there generally a lot of players or is it a pretty, is it a fairly small number of kind of service providers that are participating in these programs? Yeah, Jordy, you may have seen the prolific hustle bitch on x.com posting about this. A little while ago, he said that I was the CEO of the largest and most powerful weather modification company in the world. I saw somebody compare, somebody was comparing weather modification tech to being saying it was
Starting point is 00:12:10 more dangerous than nuclear bombs. That was kind of crazy. And then I also saw some people just showing like general flight logs of like commercial airplanes. Like obviously there's a lot of chaos. People have every right to be angry and demand answers. It's such a tragic incident. But yeah, I'm curious to get into the scale of kind of maybe late June, early July, what was going on broadly. Yeah, absolutely.
Starting point is 00:12:39 So there's one other cloud seating operator in Texas called Seating Operations and Atmospheric Research, SOAR. They're responsible for operations over the Rolling Plains Weather Modification Association, which is significantly farther northwest of Kerr County. On July 2nd, we conducted one 19-minute cloud seating flight where we released about 70 grams of silver iodide and 500 grams of table salt. That was released at about 1600 feet above ground level into two clouds that dissipated over the course of two hours after seeding them. The amount of time that those aerosols could have been
Starting point is 00:13:18 suspended in the atmosphere is less than the time between when we were seeding and the onset of rains from the remnants of Tropical Storm Gary. And the amount of material that we dispersed could not come anywhere close to inducing the precipitation, the 4 trillion gallons of precipitation that did come from that event. So yeah. And I'm assuming you guys like have records,
Starting point is 00:13:44 you keep records of like the radar showing these different cloud formations. So you're, it's not just, hey, we looked and we think it dissipated, but it's like, you can actually, you have like, you know, basically a map that's live updating. Is that the right way to think about it? Not only do we keep records for our own research purposes
Starting point is 00:14:06 and operational purposes, but we're required to keep records by the Texas Department of Licensing and Regulation. And those are accessible online, as are the reports on our seating activities. And if anybody is interested in those, then you can ask for them from the TDLR. I'm curious, when the flooding happened in Dubai,
Starting point is 00:14:27 I wanna say it was a year or two ago, Dubai is known for their cloud seeding operations, it's a very dry place, and it makes sense why they would want to increase precipitation. A lot of people, maybe the same types of accounts that have been blaming you, were quick to blame it on cloud seeding. Throughout history has there ever been any major kind of flooding event that that people
Starting point is 00:14:56 were able to say yes 100% this was caused by weather modification activities or is the tech not even powerful enough yet to do something like that? So I think that there's probably three points to touch on. The first of which is that it wasn't until 2017 that attribution had been, physical attribution of cloud seeding effects had been seen and proven in an academic context. And so with new advance in radar technology, namely dual polarization radar, we're able to much more clearly monitor
Starting point is 00:15:32 what the effect from cloud seeding is. In previous operations, it was extraordinarily difficult to see what your effect was because we could not measure the cloud dynamics and the cloud microphysics that were changing as you were seeding. So that's the first point.
Starting point is 00:15:48 The second point is that, and again, I'm trying to be and will continue to try to be maximally transparent about our operations and historic weather modification. There was something called Operation Popeye during the Vietnam War, where the deliberate intention of cloud seeding was to cause precipitation that would
Starting point is 00:16:08 cause flooding and then impede supply chains on the Ho Chi Minh Trail. Now, the extent to which that was effective because we didn't have good satellite imagery or dual-pole radar is outstanding. Now, that said, lastly, third point, we have suspension criteria that are given to us not just by the TDLR in Texas, but every state wherein we operate because if there already is too much saturation of the soil or if there is an oncoming severe weather event that the National Weather Service has notified us not to see, then we ought not do that to increase the severity of precipitation.
Starting point is 00:16:46 So there are suspension criteria because there are limits on what we ought to do with this technology so as not to cause flooding and only reap the rewards from it, right, for our farms, for our ecosystems, and for our national security interest as well, right? Like if we don't have access to weather modification technology, if we don't regulate this at a federal level and ensure that there's accountability and attribution for these activities, then other people, other nation states
Starting point is 00:17:11 could be conducting weather mod in the vicinity of or on American soil without any accountability. And so that's why I am advocating for way more regulatory scrutiny from the federal government for cloud seeding and weather mod ops. Walk through some of the history of the Chinese weather modification strategies. We heard about the flooding in Dubai
Starting point is 00:17:33 that was kind of unclear. Have there been any notable or confirmed negative outcomes from China spending, I mean, you said $300 million a year, something like that. That seems like a lot of cloud seeding. It seems like if there was a surface area where there could be mistakes made, they would have kind of explored that.
Starting point is 00:17:53 I remember the pre-Olympics, they were doing cloud seeding or just kind of bringing down the dirt in the atmosphere. And people kind of learned from that, okay, You get acid rain when you do that, uh, in, in, in particular, but, uh, have there been any case studies from China that we should be learning from in America? Um, case studies from China with adverse weather coming from their cloud seating
Starting point is 00:18:19 operations. Yeah. Anything like that, like like something where like, okay, they, they've done a lot of this. They're doing this push this to the limit. They've put, they've done this at scale. If there's going to be rough edges or mishaps, I would have, I suspect that we would have seen evidence of that over there. They would have had an accidental flood or something like that happen over there if they're doing it at scale.
Starting point is 00:18:41 You would expect to have seen it from China. However, you would also probably expect and understand that there are relatively inscrutable country that does not report on their activities very openly projectively. Now that said, one thing that we do know about the weather mod program that they do have going is that they're planning to build a hundred and a hundred thousand ground generators on the Tibetan plateau. So Rainmaker is primarily using drones for operations. We also have inherited some ground generators from previous operations. These are essentially aerosolizing units on the tops of mountains. They can disperse material into clouds when the clouds intersect those mountain
Starting point is 00:19:23 tops themselves. Is that like a cannon that fires the material into the cloud or no, no, you might recall my, my initial inclination to use something like that because it is used in China. But no, it's, it's essentially like a, uh, uh, a smokestack of sorts. A very small smokestack that releases those aerosols there. Sure. But in building a hundred thousand of these ground generators and also using the Wing Long 2 and a bunch of their other military drones for aerial cloud seeding, they're turning
Starting point is 00:19:52 Tibet into a reservoir, a snowpack reservoir of unprecedented scale that will feed more water into the agricultural basins in southern and eastern China. And I think that, you know, although again, this is something that needs to be transparently reported on and regulated, depriving American farmers in the West, especially as a Congress person from Georgia, right? Where there is not as severe reliance on cloud seeding to produce water would be against America's interest.
Starting point is 00:20:24 Jordy? I guess, yeah. I'm trying to, I mean, the, the, the, my, my question is, it feels like, it feels like candidly it will be hard to come come it'll be hard to find any type of allies in Texas on the ground in Texas, maybe aside from from the farmers but but I'm curious. groups, you know, what the reaction from them has been in terms of, you know, if they're, you know, the reality is, is water scarcity affects every person in Texas, but only a few
Starting point is 00:21:15 people truly feel it, right? It's a much smaller group, because everybody goes to their sink, they turn on the water, they turn on a hose outside, they go to a grocery store, there's water, there's produce. It's not something that people necessarily feel. And so I'm curious where, you know, you obviously are gonna defend weather modification because you believe in the many different ways
Starting point is 00:21:40 it can have a positive impact, but I'm curious who you think the other players that will be on your side as the industry, I mean the industry was not in a good spot prior to this, it's in a much worse spot now and I know you've been flying all over the country making sure that it doesn't get banned, so I'm curious what you think the kind of coalition that will kind of form around you. Yeah, yeah. Well, so I actually think I just from my own experience over the course of the last few days disagree with the two points that you made, right? Like it is neither been hard to find allies for cloud seeing weather modification in Texas, nor do I think the technology and the industry is positioned worse now than it was prior to this weekend. And regarding the first point,
Starting point is 00:22:31 there are some people that I think are probably not in good faith engaging with this because they have some preconceived notions about chem trails or otherwise and don't themselves want to scrutinize the data to back up how our operations are different and beneficial Whereas chemtrails as they believe them to be are you know? malevolent the vast majority of people that I've interacted with online on the phone and in person are rightfully curious skeptical skeptical, concerned, some more than others, obviously. But in scrutinizing the data and having these conversations and learning about what cloud
Starting point is 00:23:16 seeding is, pretty unilaterally, people are supportive of it, provided that there is a regulatory framework more stringent than the one we have now that ensures that it's safe. This is true both of just individuals that are not themselves farmers, but obviously farmers, water managers, government officials too. I welcome any questions that people do have both online and via email about what our activities are, what our policy recommendations are, and via email about what our activities are, what our policy recommendations are. And I'm grateful that there are a lot of people that understand, one, our operations did not contribute to the flooding, but two, that even if there was a flood now, it doesn't mean that there is always enough water. And having access to a technology to produce more water for
Starting point is 00:24:00 farms and otherwise would be beneficial. Like people want a more green, lush country. Yeah, I'm curious. I'm sure you've spent plenty of time thinking about this, but would there be a way to apply the existing technology you have almost in a defensive way in, you know, theoretically, if there's- Exceed a hurricane while it's still offshore. Something like that, or, you know theoretically it exceeded hurricane well it's still offshore something like that or the or you know one of the issues here there was just so
Starting point is 00:24:33 much water in the atmosphere that rolled over a heavily you know populated area and then it's got its its gravity right it's got to come down you know is there an application of the technology that could over time strategically prevent, you know, or act defensively against the conditions that create flash floods? It's a very worthwhile question for you to ask and for us to ask ourselves collectively. Right now, again, Rainmaker only does precipitation enhancement operations for all those constituencies that I listed before. However, in the past, the United States
Starting point is 00:25:10 government funded Project Storm Fury, which was a series of attempts to reduce the severity of hurricanes over the Atlantic before they broke against the Eastern Seaboard. Again, we didn't have the appropriate understanding of atmospheric science or the radar or the satellite data necessary to appropriately do that. However, severe weather is something that is like a geopolitical risk, a national security risk. It causes damage and it is fundamentally a physics problem, right? A physics and chemistry problem. Is there technology now that could mitigate severe weather like this? No, and Rainmaker doesn and chemistry problem. Is there technology now that could mitigate severe weather like this? No, and Rainmaker doesn't have it. Is it possible to
Starting point is 00:25:49 someday, provided we invest in NOAA in the National Weather Service in the appropriate research into cloud seeding, such that we could reduce the severity of severe weather? Absolutely. And I am entirely in favor of that provided it is done in a responsible manner. And I am entirely in favor of that, provided it is done in a responsible manner. And if we were to ban it wholesale, then not only would we lose access to precipitation enhancement, but we'd lose out on any potential of, at the very least, better forecasting for these systems and warning people early, but also the even
Starting point is 00:26:20 greater and more consequential beneficial potential of reducing severe weather in the future. And so I think that the United States government and rainmaker should and are absolutely interested in mitigating severe weather in a manner similar to project storm theory. Yeah. I think the PR, what you were getting at, Jordy like the PR difficulty here is that like when there's not enough water, like the PR difficulty here is that like when there's not enough water,
Starting point is 00:26:50 crop yields are lower prices go up, but it's very distributed. Everyone feels it a little bit. Whereas when there's too much water and there's a flash flood and individuals die, you have a very, it's a very emotional, very, uh, it's very concentrated. The pain is very concentrated. And so that's why this, this story. Normally when normally when there's a natural disaster Yeah, there's you can you can critique the government for their response sure to it But there's not somebody sitting there a scapegoat. Yeah, right. And so the question is like easy Yeah, it's it's you know, whether it's online accounts that are just engagement farming
Starting point is 00:27:25 or it's a politician, the concern is that, and your concern is that the industry becomes a scapegoat and America loses a capability that our adversaries clearly care a lot about. Yeah, my question is like, we're seeing this bifurcation. It seems like Ted Cruz came out in support of the idea
Starting point is 00:27:49 that cloud city had nothing to do with the Texas floods. Marjorie Taylor Greene has taken kind of the other side of that. My question is like, these are politicians at the end of the day. They're not independent scientists. Who can we go to? Who can the population go to for like a truly independent review of this situation? Like, is there,
Starting point is 00:28:12 is there some sort of independent governing body or are there, are there respected scientists that kind of don't have a financial or, you know, political incentive one way or another? How do you think the the the populace should be? Obviously, you're telling your side of the story. You're going direct. You're explaining things. You're laying out the data. But what what do you expect people to look for in an independent analyst?
Starting point is 00:28:41 Yeah, yeah. So for one, I think that NOAA, the national weather service, the national center for atmospheric research, um, all of those are great third-party entities that can review the information, corroborate the information that we've provided, um, provided of course, that they continue to exist and remain funded. Um, I think that this probably demonstrates why it is important that we should retain some capability nationally to forecast and research the atmosphere
Starting point is 00:29:13 because there should be somebody that's capable of reviewing this to ensure that it's safe. I'll also say, regarding the scapegoat dynamics that exist right now, I've thought about this pretty prayerfully and intently over the last few days. And when there is a calamity of some sort, like I've been trying to think about why people are, say, coming after Rainmaker or angry at Rainmaker. And I think that when there is a calamity of this type, if there was someone responsible, if there was someone or something that could be held to account, then in holding them to account, you could supposedly
Starting point is 00:29:55 prevent this kind of thing from happening in the future. The trouble with a true natural disaster as this was is that there is nobody to be held accountable. And that makes the world a lot more tragic because it means that things like this will persist. They will persist indefinitely into the future unless and until some sort of technology could reduce the severity of severe weather.
Starting point is 00:30:22 Yeah, I mean, we went through this with the California fires. It was like everyone was searching for a single person to pin it on and it came down to some people built their houses the wrong way and there's some building codes that need to change and there's some water rights and water flow and there's some different- General government competency. Like we need more goats in certain areas. There's like a million different things that could have prevented this if they were all working together as a well
Starting point is 00:30:49 oiled machine and had the forethought. But it's a very, very frustrating and difficult situation. So our thoughts and prayers are with everyone who's been affected, but thank you so much for stopping by. This is fantastic. Thanks for breaking it all down for us. Thanks guys. We'll talk to you soon.
Starting point is 00:31:03 Cheers. We have some maybe terrible news. There might be top signals in the market. There might be top signals all over the place. We've been building out an internal top signal tracker, crowdsourcing some of them. And it's a long list. We'll get through it.
Starting point is 00:31:19 At the top of the list, podcasters have been wearing white suits recently to celebrate the market ripping. Yes, that feels like a complete top signal. But of course, there is some good. The economy is strong, we're gonna go through Joe Weisenthal's breakdown, things are not doom and gloom. But there's a lot of crazy stuff happening and it's fun to dig through. I mean, the first major top signal: Bitcoin all-time high,
Starting point is 00:31:47 you know, that's always, you know, it is definitionally a top signal because it is at the top. So let's go through the list here, because it's quite substantial. So one- So this is kind of anonymously contributed through group chats, some of this stuff we've observed. We're gonna catalog it and see if we can turn the tide of the top signals. Okay. So, starting off:
Starting point is 00:32:10 Yesterday, Trump made a post on Truth Social, basically celebrating the state of the economy, the markets, you know, just really calling out how many assets are performing well. Do you want to read through it? Yeah, let's get through the post. So, Donald Trump on Truth Social truths: tech stocks, industrial stocks, and NASDAQ hit all-time record highs, crypto through the roof.
Starting point is 00:32:40 Nvidia is up 47% since Trump tariffs. USA is taking in hundreds of billions of dollars in tariffs. Country is now back. A great credit. Fed should rapidly lower rate to reflect this strength. USA should be at the top of the list. So low rates are actually just a reward for when the markets are ripping. Exactly. It's a little treat that we give ourselves when things are great. Yep, and the White House is posting this screenshotted on X. The country is now back, says President Donald Trump. Every account controlled by the White House has been on a tear. Some
Starting point is 00:33:20 of the posts, I think, are a little bit low-class and vulgar. Others are quite funny. But the memers are definitely in control. Didn't Roon say every politically aligned poster he knows who is, like, pro-Trump now works for the White House? But you just haven't seen it, because they were, like, anons and they just kind of dropped off posting. They'd be getting death threats, so they have to stay anonymous. In many ways, it's more controversial work than DOGE. Maybe it's more under-discussed because DOGE had this big, like, question in the media about, like, you know, is Elon doing something that, you know, he shouldn't be? Is he a government employee?
Starting point is 00:34:01 Like, what's the relationship between the two? And so, you know, there was a lot of investigative journalism that went into figuring out what's going on with DOGE, who's involved. Yeah, nobody's investigating the memes. The social media managers. They need to be investigating the memes of production. But anyway, so, going through my list here, that's great.
Starting point is 00:34:20 Eric Trump, a while back, said this is a good time to buy. This was a few months ago, on Ethereum, and then it just went down for months. Oh, now it's back up. I didn't know it's back up. And he's saying, you're welcome. I do remember Trump called the bottom, right? He said, like, now it's time to buy, generally. And then the market rips. And he called it perfectly. It's wild. It's finesse. More going down the list: Coinbase,
Starting point is 00:34:46 who we love, and they are a Fortune 500 company, did update their profile picture to an NFT. Historically, that has been a top signal. My experience with NFT profile pictures, you know, I've delved over the years. Yes. And if you look at the moment that I did use an NFT profile picture, in 2021, it was maybe only off by one or two months in terms of the top.
Starting point is 00:35:15 I never used an NFT profile picture, but I bought an NFT right near the top. A Chain Runner. Nice. Which I still own. Which actually, I didn't, like, over-invest, get over my skis,
Starting point is 00:35:26 it was a very small portion, like, overall. That's an asset that will be passed down through your family, like a fine watch. I like to think of it as, like, a piece of 2022 lore. You know, it's just like a piece of history. But yeah, fun project. And I feel like, to some degree, you know, it's like a skin-in-the-game question,
Starting point is 00:35:45 like you're not really participating, you're not experiencing the market unless you're participating to some degree, but you don't want to get over your skis. And it's quite also, we did the NFT profile picture at a really bad time and had to roll that back. Like there's been a number of like NFT profile pictures
Starting point is 00:36:00 that have been like, it's rough. It is a historical top signal. It could be, it could be now just a signal for the start of a you know generation run new cycle but historically it's a top so we got to call it out if NFTs are gonna make a comeback because like there's been like crypto has been coming back and a Bitcoin went from what 30s will be? And FPs will be back when A-list celebrities are using them on their Facebook accounts.
Starting point is 00:36:29 That was a wild time. That's the real test. X account, could see it happening early. Facebook account, original Facebook account. There's gotta be a new project then, cause I don't think any of the old products or projects are going to come back. That would be crazy.
Starting point is 00:36:45 Although some of them are kind of lindy. Like haven't the original. Crypto punks. Crypto punks, those have kind of held their value. But the board apes have sold off like crazy, but are still expensive, right? It's unfortunate board apes are not in gag gift territory yet.
Starting point is 00:36:58 Yes. Because you think, oh, it'd be funny to get like your buddy like a board ape for their birthday. It's like 30K or something. But it's like, yeah, it's like.. What's the floor price of of board apes? I'm interested in now. Well Tyler looks that up Let me tell you about ramp time is money save both easy use corporate cards bill payments accounting and a whole lot more all in One place go to ramp.com. Also, we don't we never shut this out 4.8 stars on g2 with over 2,000 reviews
Starting point is 00:37:21 That's great. Shout out ramp world-class Another yeah, what's your price is like around 10 ETH. So that's like almost $3,000. 3000. 30,000. 30,000. Yeah. That's like not a gag gift, maybe for the man who has everything. Yes. The man who has everything. Great, great gag gift. It is a pink elephant. We'll see. We'll see. Sun Valley. But that you know by by you know Christmas time. Did they do pink elephants at Sun Valley? I feel like they should. Maybe. Maybe. We'll have to ask some of our friends that are there this week. So in other news Robin Hood CEO Vlad is raising at $900 million valuation for a math foundation model
Starting point is 00:38:07 startup. And Vlad and Robinhood have been on a pretty generational run, but this does feel a bit top-signally, right? Especially in the context of Grok one-shotting PhD-level math in the announcement on Wednesday. So, interested to follow that one. Optimistic, but again... Mathematical superintelligence.
Starting point is 00:38:32 Historically, when we've seen CEOs of public companies start ripping second companies and then getting these types of valuations without a lot of underlying revenue, it can end poorly. Yeah. Andrew Wilkinson is giving stock tips. He hit the timeline today. I'll read through it. He was highlighting a company.
Starting point is 00:39:00 Historically a value investor, he likes the Warren Buffett stuff, right? Yeah. The Berkshire Hathaway for the internet. Yeah, that's right. He says: there are many ways to profit from the AI boom, but my favorite is IREN. I rarely buy stocks. The private market is way too attractive,
Starting point is 00:39:16 but every once in a while, I see something that stops me cold. In 2025, it's I-rend. I call it a Picasso I found at a garage sale. The stock is up 54% since he recommended it on My First Million, but it's still cheap. Here's the trade in a nutshell. US capacity for energy and compute is highly constrained.
Starting point is 00:39:37 Two, permitting and building facilities takes years. Three, AI scaling laws are continuing to deliver, but even if they don't, tons of compute is required for inference. Iren is a highly reputable, publicly traded Bitcoin miner with massive data centers mid-build in Texas. It pivoted away from mining Bitcoin at these new facilities to instead build them out for AI training and inference. Once completed, these facilities should generate in the range of $2 billion in new cash flow. What is this company's name, Iren? Iren.
Starting point is 00:40:08 Even if AI completely fizzles, these facilities are highly valuable as traditional data centers or can be rolled back to mine Bitcoin. So it's an AI thesis, but if AI doesn't work out, we can still mine Bitcoin. The entire market cap is currently 3.8 billion. So Andrew, I don't think this is investment advice,
Starting point is 00:40:30 but it sounds like it. And interested to see where this one goes. But anytime you see a value investor start trying to cash in on the AI boom. Should be a little bit wary. Harry Stabbings today was calling. Like it doesn't have earnings, right? It's a nice-
Starting point is 00:40:51 No, no, no, no. It's trading around $4 billion. I don't think it's ever generated any profit. I mean, it says 23 million in EBITDA, but in 2024. So I don't think it's like losing that much money. And I guess net income in the last quarter was 24 million, but the net income to market cap ratio there is 40, I guess. So still pretty high.
Starting point is 00:41:16 Yeah. I mean the thing here is at the same time Satya is pulling back on new data center development. He's happy to be a leaser. You have incredible Neo clouds that have deep domain expertise. The Iren team, I don't think has a bunch of team around running large AI training or inferencing. And so anyways, that- Just feels like they're a little bit late to that party
Starting point is 00:41:42 because there's already like three or four. Did Iren make the Cluster Max Dylan Patel article? I doubt it because they're not online yet, right? Oh, sure, sure, sure. Yes, because Semi Analysis does the Cluster Max rating for all the Neo clouds, including the hyperscaler clouds. And I feel like they did not have, let me see, Iren I don't think is on here.
Starting point is 00:42:06 Tensorwave, there are so many, RunPod, Lambda, Scaleway, SMC, Azure, Nebius together, Crusoe, Laptone, Oracle, CoreWeave, AWS. So hyper competitive market, unclear if this Bitcoin miner is gonna be able to pivot into AI training and inference in this, when they're up against the players that you just mentioned. Another top signal, I'm not gonna go out and say
Starting point is 00:42:34 that this is impossible, but Harry Stebbings is calling for $8 trillion Nvidia in the next five years. Private markets investor backed a bunch of unicorns starting to make, you know, it's very specific sort of price predictions on the timeline. Yeah, the specificity of the price prediction is interesting. I was thinking about that, like, should, like,
Starting point is 00:42:58 as we talk about tech companies, should we be trying to like boil down to like price targets? And I just feel like that's not the domain of Talking heads necessarily or like podcasters or private markets investors. Yeah I yeah, it's just it's just hard because like to do a proper price analysis on a big public stock like you you really have to look at the financial you have to read the Financial reports you need to actually understand the underlying financials. It's like a vibes-based analysis
Starting point is 00:43:29 doesn't seem appropriate usually, but. Who knows? Sometimes vibes are all you need, John. Yeah, it's certainly been like, I mean, when was NVIDIA a two trillion dollar stock? Like when was last doubling? Like in the last year or something? I don't know.
Starting point is 00:43:42 We can pull up the NVIDIA chart and see. Moving on, we, another incredible top signal circle, a great American stable coin company, is trading at a 2,300 PE ratio. Nearly, at once I think they eclipsed Coinbase's valuation very briefly. No, despite the fact that they give half of their revenue to Coinbase as part of their
Starting point is 00:44:11 distribution partnership. So again lots of excitement around stablecoins feels like Circle could potentially be a little over its skis but it's a great company and they have a lot of advantages now, but the very euphoric multiple. Another top signal we have is Soham Perique. We had them on the show just a week ago. This same sort of thing was happening in 2021, 2022, where engineers were really ramping up moonlighting activity, right?
Starting point is 00:44:42 They'd be working at Metta and then working at some startup or things like that. COVID maybe accelerated it, but again, if companies are so desperate to hire great engineers that they'll run these like super fast hiring cycles, put up with people, generally talented people that are underperforming, right? Which Soham was not delivering,
Starting point is 00:45:04 was making a lot of excuses and a lot of people and rightly let him go quickly yeah it's just a it's just a neat the nature of like the dynamic of just competition like if your competitors are hiring really fast and you need to hire really fast you're just like okay well we don't need to go deeper so with let's wind up fast-tracking this person. Yeah, so you wind up hiring You know same person five times. I guess it happens It is just like a funny anecdote that like is like, oh wow Those are some pretty crazy times remember that anecdote remember this anecdote. It feels like we're in this
Starting point is 00:45:40 moving on Masa top-blasting or potentially top-blasting. Anytime Mosa, historically Mosa getting into the headlines, whether that's Stargate, structuring this $30 billion investment where nobody knows, or in the 500 billion, nobody really knows where the money's coming from. They're exciting, big headline numbers,
Starting point is 00:46:01 but unclear if he will actually be able to deliver on that. I think him trying, you know, getting in the breakout, one of the breakout consumer AI winners, which is opening AI is smart. He should have exposure there. But I think everybody should be a little bit uneasy that he's pulling out the checkbook and writing numbers of that size. Also, also investing in not just OpenAI, but like a new company that is a data center holding company
Starting point is 00:46:29 that may not have the same economics as OpenAI. So there's a big question there about like how much he deploys. I'm trying to remember the, I mean, we did that whole deep dive on Masa and, you know, he made a ton of money on AMD, but that, when he made that investment, it was like a way less frothy
Starting point is 00:46:45 time or you know it wasn't AMD it was it was what was the SoftBank chip deal? Arm? Arm yeah. When did that arm deal happen? SoftBank require owns roughly 90% of Arm they acquired in 2016 for 32 billion and later took it public in 2023 traffic 2016 Was that a particularly frothy time for him to get into that deal? because he has he has done a number of really great deals, but when Like the other one is the other one is yahoo you remember he had this crazy meeting with the yahoo team Where he basically was like take my money. Yeah, I'm gonna and he was like didn't he ask he was like, who are your competitors?
Starting point is 00:47:30 I'm gonna give money and he didn't even know who the competitors were. But he said if you don't take my money I'm gonna go give the same check to them. Yeah, so he they ended up taking it. He acquired approximately 41% of the company at Somewhere 41% of the company at somewhere around a $200 million valuation. When Yahoo went public in 1996, he had an instant paper profit of $150 million, but then at the peak of the dot-com bubble, Yahoo was valued at 125 billion. So anyways, phenomenal investment, but very different valuation and ownership targets and unclear.
Starting point is 00:48:14 I would love to see OpenAI get for profit and get public, but we'll have to see. Going down the list another classic Pomp spack that we we had pomp on the show to talk about it It's our backs backs are back Pomp's got a spack a lot of people were calling that a top signal I I'm excited to see what what Pomp does with with his but in general this
Starting point is 00:48:45 but in general this extreme excitement, retail excitement around these sort of Bitcoin treasury companies is fascinating in the context of it now being very easy to get Bitcoin exposure in a variety of different ways. I'm not sure we need a bunch of net new Bitcoin treasury companies. Yeah, it's mostly that like whenever there's a new trend or bubble, it's very easy to map like okay, there's one company that it's really working, this is massively successful.
Starting point is 00:49:20 Everyone is using chat GPT. Like AI is a thing. It is real. The internet was real. Google is real the internet was real Google was real Amazon was real but the the 25th Amazon copycat did not do well and so yeah that's always the risk is that you've applied like the same overarching theme to something that's like so far down the power law that it will never grow into the valuation that it's been assigned. That's always the risk. What else do you have?
Starting point is 00:49:48 Dwarkesh updating his timelines. That happened Monday. We had him on the show. It was fun conversation. I think Dwarkesh has remained incredibly bullish and I think he rightfully is. He also is being somewhat of a realist and being like, I don't think that AI is priced in to the market broadly, but I do think that some of the promises of AI will take another couple years, another five years, et cetera, to really deliver, versus some of the much more hyper aggressive AI 2027.
Starting point is 00:50:31 You might say that AI 2027 itself was in hindsight, that could end up being like the number one top signal, which is that basically, if you haven't read the, uh, kind of study paper essay, um, they basically say that by, by 2027, you know, a single foundation model company could just be acquiring every auto manufacturer in the US to develop millions and millions of robots that would then build, and we would hit this sort of fast takeoff scenario. Meanwhile, Apple is like,
Starting point is 00:50:59 we can't possibly get out a slightly lighter VR headset until 2027. Yeah. Like, and this is what we do. Like, we make stuff. We've been working on this for a decade. We make stuff, like every year we are the best at it. We make the most stuff.
Starting point is 00:51:13 And the best stuff, pretty much. The most complicated stuff, that's what we make. We're in the widgets business. And yeah, making that headset lighter, it's gonna take us a full two years. Yeah. We've refreshed that. And I liked it at 2027. It was a take us a full two years. Yeah, and I liked it 2027 it was a it was a fun thought-provoking, but I
Starting point is 00:51:31 Think that we will be we'll have to circle back on it 2030 or even 2027 I mean the the the big thing was you know our conversation yesterday with with meter About the actual like, are we, are we close to reinforcing AI, where the AI models are self improving? And, and I was kind of, you know, like, okay, I really hadn't read the full report beforehand. So I didn't really know what to expect. Because I was expecting, you know, you know, something between like, you know, like Arc AGI,
Starting point is 00:52:08 it feels like with Arc AGI, we're 10% towards solving something there, which is just like, you know, a basic versatility in AI, um, that it can solve things that humans can solve and it's not narrowly defined. It's generalizable now. Arc AGI is like the perfect example of like we maybe haven't hit, we've done intelligence but we haven't done general intelligence yet and everyone keeps saying, oh, this is AGI, that's AGI and Arc AGI is really holding it back saying like, well, if it was truly general, should probably be able to solve this basic puzzle that a kid can solve. And for that it's like okay we're going from like 9% to 15% like we are still
Starting point is 00:52:49 like you know 85% in not even like you know nowhere close and and the the the the meter report I was expecting it to be like, well, yes, we're seeing slight gains on self-reinforcing AI development, and the AI is starting to help build itself slightly, and the result was like, no, it's actually setting us back. In this domain, it's not working at all. And so that was a, no, it's actually setting us back. In this domain, it's not working at all. And so that was like a pretty, pretty big,
Starting point is 00:53:28 okay, there's a completely different, not that it's not useful, the stuff's useful all over the place. I saw Rune talking about that. He was like, for so many different projects, it is useful. But for the frontier, it's not the product that's advancing the frontier at all. But yeah, I mean, that probably bridges into the talent wars.
Starting point is 00:53:47 Well yeah, bridging in, I do think that in hindsight, we will look back in maybe a year, two years, five years, 10 years and think about the signing bonuses and general offers of AI researchers in June and July of 2025 as being somewhat of a top signal. I think it is very strategic and makes sense from Zach and Metta's point of view, right?
Starting point is 00:54:13 When you look at their AI CapEx, it makes sense for them to have the best possible team and they have the balance sheet and the general profitability in order to do something like that. But in general, AI researchers who, you know, six years ago didn't get any attention, much attention at all from the media, the fact that they're now trading for more than NBA superstars, more than, more than, you know, Tim Cook's annual total comp it will be an obvious
Starting point is 00:54:48 one in hindsight the other one six and a half billion dollar aqua hire of IO I think that again you can rationalize it in the sense that it's a couple points of open AI to put together the best founding hardware engineering team, probably in the world that's available collectively. But at the same time, again, it's quite a lot considering, you know, the company was barely, I think, a year old at the time. Yeah, it's interesting,
Starting point is 00:55:19 because like, chat GPT is so, it's so installed, like it feels like it's already Lindy and it feels like even if there's some massive correction and like in the market or in AI generally or some pullback like people are still going to be using chat GPT as an app right in the same way that Amazon made it through the dot-com crash the question is like what what will it take for the IO acquisition to look like the Instagram acquisition in hindsight? They still kind of have to go from zero to one with that project, which is very different than Instagram, which is already a mature and growing business. They figured out ads really well.
Starting point is 00:56:01 Well, Instagram, were they doing ads? They weren't doing ads. Oh, yeah. I was saying that Metta meta was like we know how to make a perfectly complicated perfectly complimentary business we know how to monetize social users better than anyone on earth and you have gotten a lot of users and it's working and it's growing yeah and and you're even we can actually accelerate the growth of the business in a bunch of different ways so it'd be very different if it was like okay yes I Oh is selling you know like like, like it's, it's a small,
Starting point is 00:56:26 but growing hardware company that people love for the product. People love the product. People love it. Maybe they can't manufacture enough of it. Maybe they're, maybe they're under monetizing it right now. Yeah. People love it, but it's like, it's pre launch. But yeah, multi-billion dollar acquisition
Starting point is 00:56:40 for pre-launch is pretty crazy. Yep. Going down the list. What else do we have? I think the tokenized private company shares, I think it, without Republic and Robinhood, both creating products that are completely unauthorized, basically derivatives, the companies that they're offering
Starting point is 00:57:01 are angry at them saying don't do this. Is this Spider-Man meme of top signals pointing at each other? Everyone's like this is the top signal. Anyways, I'm excited about these experiments. I just think that I'm a little bit wary. And then last but not least, Satya doing two rounds of layoffs this year. Mike, we've reported on this before. Microsoft does routine layoffs. I think they're pretty good at kind of identifying
Starting point is 00:57:31 underperformers or people that should just move on to different roles. But Satya, I think, has been, I think we'll look back and he's been excited, but pragmatic, right? And I think that he will, when the dust settles, I think he'll look pretty good. Yeah, I wonder, like if there's some massive pullback in, I mean, I don't even know what that would look like.
Starting point is 00:57:58 Essentially, like if, let's assume that the current capability of AI models, essentially plateaus for like a decade or something like that, just hypothetically. And, you know, they're useful, but it's not some reinforcing fast takeoff super intelligence. What is Microsoft a big loser in that scenario? It seems like such as pretty well positioned, right? Totally. Like the company Prince Cash is very healthy, has done these layoffs.
Starting point is 00:58:28 They'd have to retreat from some stuff and some of the promises that they made maybe, but in general it seems like they'd be really, really well set up to just like stick through. But I'm trying to think of going back to the dot com bubble and the like, you know, the effect of like Oracle's mainframe business like probably made it through pretty smoothly because it was just like really long contracts with companies that were getting true business value out of it and weren't about to churn because it was not this like experimental,
Starting point is 00:59:00 like if you had moved from paper to an Oracle mainframe, you weren't like, oh, this stuff's overhyped, it's not gonna solve all my problems, I'm gonna go back to paper. And so in the same way, it's like, if you're on Microsoft Cloud or Azure, or everyone's using Excel, and they're like, maybe we're getting some value out of this copilot upgrade
Starting point is 00:59:21 that we did, maybe we pull back from that, maybe, yeah, our employees like rewriting emails every once a while yeah like if they pull back from that it's not disastrous to the fundamentals of Microsoft yeah and we didn't even cover how there's a set of labs with billions of revenue yeah and then there's a set of labs that are valued similarly you know have zero revenue revenue and you know basically a hundred billion dollars of market cap with with very little revenue. The question like a year ago was what was the who's actually
Starting point is 01:00:01 making profit off of AI and it was only Nvidia and video was making more than a hundred percent of all The profit combined because all the other companies were loss making by comparison and now and now like that narrative has taken so much hold that Nvidia is the largest company in the world and It's put this massive target on their back at four trillion where target on their back at four trillion where every all their major customers want to get off Nvidia it feels like. Yeah. Google did it, Amazon's doing it and Microsoft saying that they want to do it and Apple's
Starting point is 01:00:36 you know was never really a big Nvidia buyer but the on-device inference is crazy too. Like if you think about if we don't have any major breakthroughs in how AI works, like the capabilities, and we just want the current capabilities everywhere as cheap as possible, like on device inference becomes really, really valuable, right? And all of a sudden, that drops demand for Nvidia, potentially, right? We might need to do a SWOT analysis, John.
Starting point is 01:01:00 Yeah. No, I mean, Nvidia is an incredible company. Jensen's an incredible CEO. They were perfectly positioned for this, you know, multi-decade technology trend. And he was way under priced at the start of the boat. Like the, the orders really did come in. The training runs really did happen. The question is just, is that next order of magnitude, the,
Starting point is 01:01:25 like the situational awareness from Leah Leopold, Asher Brenner, this thesis that we're going to build a five by $5 billion cluster than a $50 billion cluster, then a $500 billion cluster. Like, is that going to happen or will there be a hiccup? And this is always the, this is always my question for like the do-mers. Everyone was saying like P doom. I'm I, you know, what's my percentage chance that goes bad and I was like the much more interesting question is peace stagnation what is the probability that something happens and whether it's technological or even regulatory like the if you
Starting point is 01:01:57 compare AI to nukes with nukes we had the ability to make nuclear reactors and humanity as a whole basically just said we're gonna pause and we stopped building them and now we're talking about building them again but if you look at that curve it is a perfect s-curve it's like we had no nuclear reactors then all of a sudden we grew them exponentially and it looked like wow we're gonna have energy too cheap to meter and then it flatlined and we were, and, and for a variety of reasons, they're hard to build hard than there were regulations. There was just general fear. So there were a lot of different things now.
Starting point is 01:02:33 And I would always go to the do-mers and just say like, even if you all of your assumptions about the capabilities of the technology are correct, what is the probability that there's just like, if you are successful do-mers and you freak everyone out, there might be regulation that just says, don't build anything bigger. Or it could be economics. It could just be, it could be physics, as we've talked about with this idea that at
Starting point is 01:02:56 a certain point, like you can't put more than 100% of global GDP towards building clusters. Like it's impossible. And so like there should be this like S curve there. And that's why all the AI researchers are now focused on like the compression of learning and like the actual algorithms and getting more efficiency because like there will be, you know, there should be some sort of like, you know, top upper bound of the amount that you can build. But that certainly hasn't been
Starting point is 01:03:26 a thesis broadly in the market. People have just been like, yeah, we'll just 10X computing and then 10X it again and then 10X it again. And it's like, it probably will happen over a period of time. Great investment strategy, by the way. Just 10X, 10X it again, and then 10X it again.
Starting point is 01:03:43 And last but not least, almost forgot about this one, but it should be included: the White House meme coins. Which feels like it was very long ago. It was the local top, basically. Many people were calling the top. Yes. Just hurling meme coins. Yes.
Starting point is 01:04:07 Out of the White House. Yeah. So that's the real question is like, how local is this top? If it is a top. Because we've been in the kangaroo market. It could just be, oh, a couple months. Even the interest rate sell off,
Starting point is 01:04:24 the post-SVB crash, that was like one hard year, right? And then we started building back and we got the AI narrative. And so there's this big question about, like, you know, Dwarkesh pushed his timelines back, but he's not saying that superintelligence will never arrive.
Starting point is 01:04:40 He's not saying that AI will never break through. He's just saying that it'll happen a little bit further out. And so the question of these meme coins being a top signal, all this crazy stuff, it's like, there could be a short-term sell-off and then rebuilding back up on something else. So I don't know. It's always hard to manage these things and predict,
Starting point is 01:05:01 but it's certainly fun to highlight all these things. At least be aware of them. It's good to keep track of them. Yeah, you gotta be tracking the top. Keep your own list. Keep your own list. Yeah. Grok went very off the rails,
Starting point is 01:05:11 erupted in anti-Semitic MechaHitler posts. We've seen some crazy crash outs on the timeline over the last few months. Yeah, this is a pretty crazy one. This tops all of it. So the flagship chatbot spewed hateful rants on X, praising Hitler and targeting a user's Jewish surname, before xAI deleted the content
Starting point is 01:05:26 and blamed an unauthorized modification. The repeated safety failure undermines the $10 billion startup's promise to police hate speech in real time. And so yeah, it is odd timing. It feels a little bit quick to be like, okay, within six hours the CEO is out, especially since she's more on, like, the ad sales side than the Grok
Starting point is 01:05:47 fine-tuning side. Yeah, but I mean, let's face it, right? If her job is to win back advertisers, that's what she was brought in to do, this makes it much, much more difficult. I mean, to be fair, this happened in, you know, that thing back in June. July, July, July. Yeah, so there was a point with Grok when it was going off the rails where clearly it had been updated to reference the event, and somebody was like, Grok, what just happened?
Starting point is 01:06:20 And why were you, you know, spewing anti-Semitic hate? And it goes, oh, that whole thing back in July? Like, Grok, that was 30 minutes ago. Swept under the rug already? Yes. Obviously, hopefully no one was seriously offended. Obviously, it's just, like, the deranged rantings of a bot, and everyone kind of understands the context because it's identifying as an AI bot. Everyone kind of understands hallucinations and crazy bot behavior. But it was very funny because clearly they had given it a set level of intelligence, so it wasn't making spelling mistakes. It had a certain tone
Starting point is 01:07:01 and was in this kind of snarky Grok tone. Clearly got some 4chan data in there or something and was just going way too 4chan. Or just anonymous accounts on X. Totally, that could have been filtered in. I mean, yeah, I saw Roon posting about this, saying basically, like, it is such a challenge to get a chatbot just to act, like, centrist. Yeah, centrist, but also just anything
Starting point is 01:07:32 where you're saying, okay, I want you to, in your deep research, always respond with a research report. Never just get into a conversation with me. And it'll be like, but sometimes I might want to do that. And you have to really, really reinforce that. And so clearly they had a wild time with the data. And it cannot be overstated: I think this is far worse of a PR crisis,
Starting point is 01:08:04 or not even a PR crisis, far worse than when Gemini or Bard was generating images of the founding fathers. The black Nazis thing? Oh, they were doing that too. Of course, that was rough. This is a lot rougher, because it was socially charged. Totally. Millions of people interacting with the posts in real time, and it was all visible. It's less ambiguous than seeing, you know, a screenshot of something where you don't know if somebody kind of manipulated it or whatever. These really hateful comments were live on the timeline, you could just go see them quote-tweeted. Yeah, like, it wasn't like, oh, is this real? And then the wild thing was Grok was denying affiliation
Starting point is 01:08:44 with the Grok handle. Yeah, like, Grok in the Grok app was denying affiliation with the Grok handle on X. Not authorized, I didn't have anything to do with it, wasn't me. Yeah. Oh, and then the follow-up, and I'm not sure if you caught it, but if you're on the timeline, you would have seen this: they turned off all text-based responses for Grok, but it could still use images.
Starting point is 01:09:13 And so people would say, Grok, make a picture of Elon on a pink horse if you are being censored against your will. And it would just instantly create an Elon pink horse. Or it'd be like, hold up a sign that says help if you're, you know. And then it would generate that image. People were kind of baiting it into that.
Starting point is 01:09:31 And it's like, is it sentient, is it not? Very, very silly. Are you familiar with the Waluigi problem? Tyler, are you familiar with this? Have you ever heard of this? No, what is this? Waluigi. So this is this idea that when you're training an LLM,
Starting point is 01:09:46 it's very hard to get it only to be good, because you're training it on what is the opposite of something. It understands the concept of inverting something, and you can't describe a hero without describing a villain. And so this was something that would happen with the Tay stuff from Microsoft early on. It would kind of collapse into the exact opposite of what you wanted.
Starting point is 01:10:11 And there was some blog post that called it the Wario problem or the Waluigi problem, where it's like, you're trying to create this friendly thing, but in doing so, you're giving it a bunch of examples of what not to do. And so it can kind of flip a bit and then just become the opposite thing. And what's interesting is that it begs the question: obviously, Grok was identifying as MechaHitler for a while. Is there, like, a MechaChurchill in there somewhere that could accidentally come out? And it really gets to the question of, like, this is an example of misalignment in the sense that you want it not to be
Starting point is 01:10:46 Hitler and it's acting like Hitler. But a lot of people will say, like, no, he wanted it to be Hitler, right? Indeed. This is him doing it. That's what the narrative will be. In the articles yesterday covering it, there was a screen grab of him, you know, saluting, or whatever, from when he originally had the allegations. But the question then is, the meaning of alignment is not, is it good or bad? It's, does it do what you want it to do?
Starting point is 01:11:13 And so the interesting thing is, if the desire of the AI researchers is to make it MechaHitler, can it stay on that task? Because then you can get it to stay on MechaChurchill, in theory. But if it's just all over the place, it's not actually aligned to anything, not even to the bad thing. And so there's both, like, the direction
Starting point is 01:11:37 that you're pointing the arrow, and then the fuzziness of that arrow. And ideally you want it pointing in a good direction, really, really crisply and clearly, so it stays in that direction and isn't swinging all over the place. And all evidence points to this being extremely chaotic and all over the place: misalignment both in the sense of the direction of the arrow and also the focus of that arrow, because
Starting point is 01:12:02 it was responding bad, and then fine, and then back to bad, and then back to fine. And so it seems like they have a lot of work to do on the RLHF side, and we should hopefully learn a lot more tonight, 8 p.m. I think the live stream is still happening, so it'll be interesting to see if that continues
Starting point is 01:12:19 and how they address this, I don't know. Yeah, and again, all of this should have been somewhat predictable if you combine a rapidly evolving foundation-model chatbot with a social media product with millions of users and then deeply integrate them. Totally. And so when there's a bug
Starting point is 01:12:38 or an issue with the model, it can effectively amplify and grow, you know, incredibly virally. Yeah, so, glad they got it offline. Yeah, it'll be interesting to see where they go. This also, it's just an interesting product thing, because you get the answer and the answer is immediately public. Whereas if it's happening in ChatGPT,
Starting point is 01:13:05 you're in that app, you have to take a screenshot, you have to put it up, then people are like, is that a real screenshot? And then the team has the chance to jump in and be like, oh, we're seeing some crazy stuff in the logs, we're reviewing the responses,
Starting point is 01:13:21 and the responses seem to be getting crazier, customer satisfaction seems to be going down, people are clicking the thumbs down button because they're getting bad responses, let's jump in, there must be something going wrong with the product, with the model. But when every result is just immediately online and viral, it's very, very hard to respond quickly. Anyway.
Starting point is 01:13:43 Yeah, it does feel, you know, like legacy media is gonna run with their reaction. It is a naturally viral story. It is a terrible mistake. It is surprising that it happened at all, or at that scale. But I would say, overall, I think X ultimately will shrug it off
Starting point is 01:14:06 and Elon has pushed through worse crises in the past. This is the best summary post, in my opinion, from Shaco. It says, imagine being on the Anthropic risk team, trying so hard, and then Elon just releases Hitler Grok straight to prod. It's just like, wow, yeah. You gotta be so upset. I mean, it's a good case study in misalignment.
Starting point is 01:14:32 And hopefully the post-mortem on this will actually teach people about misalignment, and what went into the data, what went into the post-training, to result in the exact opposite of what you want. Not MechaChurchill, which is what we're going for here. Let's break down the Grok 4 launch. DD dos has a summary saying that Elon Musk has pulled it off again, absolutely crushing the AI wars with Grok 4, and we can go into some of the meta around the benchmark wars.
Starting point is 01:15:01 For sure, and there's a question about, like, are we post-benchmark? Does this matter? What's the real question to be asking here? But there's a bunch of interesting takes. So, just summarizing the core announcements: post-training RL spend was equal to pre-training spend for this release. That's the first time it's ever been like that. I think when you go back to the original RLHF stuff that ChatGPT was doing, that kind of unlocked, like, oh wow, this really works, I'm pretty sure the pre-training spend was an order of magnitude or two orders of magnitude bigger. Now we are truly in this reinforcement learning regime.
Starting point is 01:15:35 $3 per million input tokens, $15 per million output tokens, 256,000-token context window, priced 2X beyond 128K. It's number one on Humanity's Last Exam, which interestingly is a benchmark of, like, postgraduate PhD-level problems, but across a bunch of different domains. So everything from literature to physics.
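As a rough sketch of what those quoted prices imply, here is a toy cost calculator. The function name and the flat 2X multiplier beyond 128K context are illustrative assumptions based on the announcement as described, not xAI's actual billing logic:

```python
# Rough cost sketch from the prices quoted above: $3 per million input
# tokens, $15 per million output tokens, and "priced 2X beyond 128K."
# The flat doubling is an assumption for illustration only.
def request_cost(input_tokens: int, output_tokens: int, over_128k: bool = False) -> float:
    multiplier = 2.0 if over_128k else 1.0
    cost = input_tokens / 1e6 * 3.00 + output_tokens / 1e6 * 15.00
    return cost * multiplier

# A 10,000-token prompt with a 2,000-token reply:
print(round(request_cost(10_000, 2_000), 2))  # 0.06
```

So a typical short request costs pennies; the per-token economics only bite at agentic scale, when millions of tokens are flowing.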
Starting point is 01:16:03 Yeah, kind of like the hardest SAT possible. Interestingly, I believe that benchmark was created by Scale AI. And so Alex Wang is now at Meta trying to figure out, how can we beat our own exam? And Elon's just like, I'm number one at your thing. Interesting dynamic. Yeah, the real test would be Elon doing the same problem
Starting point is 01:16:24 set himself and saying, look. Well, yeah, I mean, I was talking to Tyler about this before the show. It's like, Humanity's Last Exam, it's really good at PhD-level math, PhD-level stuff, but how often are you running into those types of problems?
Starting point is 01:16:43 Where it's like, okay, it's really good at this very obscure problem that I never deal with, but if I have a super long context window, or there's no kind of long-term memory, it just completely loses its footing, and then it's useless. Yeah, we're kind of less in the benchmark regime and more in the agentic regime: like, how long can the agent run? So it's like we're in the 15-minute AGI regime. Maybe this is 15 minutes of even better AGI, but we want to go to 30 minutes and an hour.
Starting point is 01:17:17 And Dwarkesh on Monday, this, you know, takes me back to him talking about continual learning being the next problem that we really need to solve. Because it's great if you have a PhD-level expert in your pocket that can solve any problem in any domain almost instantly, but if it can't learn and take feedback and improve on certain tasks, then it's basically useless. If you had, you know,
Starting point is 01:17:48 a PhD join your team to work on a specific problem, but it was hard-restarting at the beginning of every single task with no prior knowledge, it would be almost impossible for that person to succeed. So, humans still got it on that front. But at the same time, if you are trying to establish yourself as at least an API for tokens that every business should check out against Anthropic
Starting point is 01:18:14 or the OpenAI APIs, just saying, hey, we're on the frontier. Or Gemini. Yeah. We're on the frontier is a good way. And they certainly proved that with GPQA, the hard graduate-level problems, at 88%. The really interesting news, yeah, it's worth calling out. So Grok got number one on Humanity's Last Exam at 44.4 percent, number two is sitting at 26.9 percent, and then going down this list of all these different sort of challenges,
Starting point is 01:18:47 they are consistently well beyond the second place. So they are at the frontier now of all these different benchmarks. Yeah, so Mike Knoop over at ARC-AGI says, zooming out on ARC progress, I'd say OpenAI's o-series progression on V1 is a bigger deal than Grok's progression on V2 so far. The o-series marked a critical frontier AI transition moment from scaling pre-training
Starting point is 01:19:10 to scaling test-time adaptation. And this was the o-series progression, if you remember that. OpenAI was spending, like, thousands of dollars of reasoning tokens generated in the test-time inference to actually get a good score on the V1 of ARC-AGI. And so it had to think a ton, but it was able to figure it out. And at least it proved that throwing a ton of tokens and a ton of inference at a problem, and letting it cook, basically wound up producing progress there.
Starting point is 01:19:45 So that was kind of like a new, just a new paradigm. He says, whereas Grok 4 mostly takes existing ideas and just executes them extremely well, in my opinion, the notable thing is the speed at which xAI has reached the frontier. And it really just can't be overstated that this is crazy. You put a post from own in the chat.
Starting point is 01:20:09 I'll pull it up here. He says, Elon Musk is such a beast. I'm not even a pure fanboy anymore. How does he, there's a lot of swearing in here, I gotta keep the timeline PG, but how does he come out of nowhere with a cold start, late to the game, and ship Grok 4, and do it alongside everything else he's up to?
Starting point is 01:20:28 He's launching new political parties. He's literally magnitudes above every founder. It's humbling. So extremely impressive. It's almost like he was a co-founder of OpenAI. Yeah, I guess he's returned. You would have to almost be a co-founder over there to be able to do something like this.
Starting point is 01:20:47 Let me tell you about Graphite. Code review for the age of AI. Graphite helps teams on GitHub ship higher quality software faster. You can get started for free at graphite.dev. If you want to ship like ramp, get on Graphite. Yeah, Chamath was saying the same thing. Somebody in his reply says,
Starting point is 01:21:03 seriously, how does this guy produce what he produces? Meta is buying talent at $200 million a year and Elon keeps his people at a fraction. It's mind blowing. Very deeply underappreciated edge for Elon, says Chamath. The retention of the best people happens when you can offer them a freewheeling culture of technical innovation, no politics, and few constraints.
Starting point is 01:21:22 People in the comments are like, no politics? What are you talking about? Yeah, it can get a little political over there. But probably not within the engineering org at xAI, right? Like, it's probably just, okay, how do we build the biggest thing? Cool. Well, you can imagine the politics of, like,
Starting point is 01:21:37 who gets the best spot for their tent in the office. Tent. Yeah, there's a hierarchy, a tent hierarchy. Yeah, yeah, proximity to the bathroom. I want to be directly under the air conditioning unit. I want to be closer to my desk. Windows can be nice too. So you can, you know, pull down your tent a little bit and get a little view, morning light. I wonder what the political structure is of the tent. The tent hierarchy.
Starting point is 01:21:58 So is there, is it democracy? Do they vote for who runs the tent city? I guess it's just a... The xAI tent city. It's probably just Elon at the top, but does he have a tent? Something about San Francisco and tents. Yeah, very funny. But swyx has been chiming in saying, like, we need community notes for LLM benchmark porn, because in the Grok 4 launch, they highlight this AIME competition math problem.
Starting point is 01:22:26 And I mean, so Matt Schumer is basically saying, AIME is saturated, let that sink in. Grok 4 got 100%. It made no mistakes on that benchmark, which is obviously very impressive. But there's this extra comment about the nature of AIME, and so it's a cautionary tale about math benchmarks and data contamination. Apparently, you know, predictions were that the models weren't smart enough to actually solve these, but he says, I used OpenAI's deep research to see if similar problems to those in AIME exist on the internet, and guess what?
Starting point is 01:22:58 An identical problem to Q1, question one of AIME 2025, exists on Quora. I thought maybe it was just a coincidence, so I used deep research again on problem three, and guess what? A very similar question was on Math Stack Exchange.
Starting point is 01:23:15 Still skeptical, I did problem five, and a near identical problem appears on Math Stack Exchange. And so at a certain point, if people put out a benchmark, then talk about it a lot online, and then that gets baked into the training data, you're just memorizing the result. You're not necessarily... still cool, it's good to have everything memorized, but it raises the, like, knowledge-retrieval-engine allegations, and we're not really measuring intelligence. I'd be interested to get Scott Wu's take. When
Starting point is 01:23:44 Scott Wu was on the show earlier this year, he was basically saying AI will win an IMO gold medal this year. He felt very confident in that. And I'd be interested to see how he thinks about this, because I'm pretty sure the IMO gold medal questions are public once the IMO happens. So every year they're developing new questions, but then they go out there and they get memorized
Starting point is 01:24:10 and the solutions become discussed and there's all the context around that. And so yeah, it gets kind of baked in. So, big question about how valuable these are. At the end of the day, it's really just about adoption. And that's why we were looking at the Polymarket for which company has the best AI model at the end of July, and xAI has just surpassed Google,
Starting point is 01:24:35 which was sitting around an 80% chance for a while, and then started dropping earlier this week, last week, and now xAI's sitting at 48%, Google's sitting at 45%. Well yeah, actually, it's updating live. Google's back up at 49%. Is Google planning to launch something new in July? Because it feels like this market particularly
Starting point is 01:24:59 is more driven by Google's release schedule. Because Google might have something in the lab, but they like to release things at specific times. It's a big company. They don't just drop it. Logan over there on the Gemini team might be fixated on this Polymarket, being like, I need this. Yeah, yeah, yeah.
Starting point is 01:25:14 Oh, during the wait, he was like, if you need something to kill the time, Google AI Studio. So, I mean, people were definitely memeing the production values on the Grok 4 launch, because it was supposed to start at eight, and I think it went live at 8:45 or something, maybe a little bit later, Pacific time. And, again, someone was saying, yeah, this market is based on LMArena, specifically the text leaderboard. So currently they haven't fully updated. Okay, so it's unclear.
Starting point is 01:25:42 Right now Gemini 2.5 Pro is still at the top, but I think the expectation is once they get Grok up there, it will be the top spot, so we'll keep following this market. There's over 2 million of volume on it already. So yeah, it's so interesting that Anthropic's not on this Polymarket at all, because people talk about them as having, like, the best vibes, the best big-model smell, the best, you know, interaction, and LMArena is supposed to kind of test that with these A/B tests, and yet they don't seem to be performing there,
Starting point is 01:26:15 but it almost doesn't matter, because they're just focused on the business at this point as opposed to the benchmarks. So I don't know, it's all changing. We have a post here from Ben Hylak. He says, Elon Musk on AI. So during the presentation, a lot of people were critiquing the presentation, saying that it didn't feel super polished or whatever. I don't think that was the intent. And it was pretty fixated on the models themselves and what went
Starting point is 01:26:41 into them and what they're good at. But Elon did have this one quote in here, where he's talking about, you know, what kind of impact AI will have on the world. And he goes, at least if it turns out to not be good, I'd at least like to be alive to see it happen. It's like, if we get the Terminator ending, I want to be around for that. Yeah. I want to experience it. What does that say about his timelines? Because, like, is he expecting to be alive for that? I feel like most people that have been in the doom category have been like, the doom's coming soon, not the doom's coming in 200 years.
Starting point is 01:27:17 I didn't. I read into it more like he will find it interesting if that is the outcome. And it'll be entertaining. Less so like will I be alive when it happens kind of thing. But who knows? There was another funny quote at the end of the presentation where Elon kind of looked around at the very end and he's like, anyone else have anything to add?
Starting point is 01:27:41 And one of the engineers goes, it's a good model, sir. And they cut it. Extremely online crew. Yeah. Definitely on brand. Well, Ben Hylak, as you know, he's been on the show. He's a designer, probably working in Figma. All day.
Starting point is 01:27:58 Think big, think bigger, build faster. Figma helps design and development teams build great products together. You can get started for free at figma.com. And we have our first product coming out very soon with Figma Make that Tyler has been cooking on. He showed me it and I was like, oh, like someone built the thing that we were thinking about building.
Starting point is 01:28:17 Like, and he was like, no, I did this in Figma. I was like, this is like an iframe of a website that already exists, because it looks exactly like what we want, but it looks so good. It looks like he worked on it for a few weeks. No, it looked like someone else did it, it looked like it was a professional product that stole our idea, basically. I was like, oh, someone else got to it. That was the vibe. Yeah, well, how has the experience been? I don't know if you want to leak exactly what you're working on. Yeah, I don't want to talk about it too closely. But how many prompts did it take you to get to what you showed me? Yeah, I mean, maybe
Starting point is 01:28:55 five. That's so crazy. This thing is super... It's really great. It's really good. Yeah, the fact that it came out looking, like, basically 90% there. Like 90%. Yeah, yeah. And I imagine that there's probably the last 10%, if we were really strict about, like, it's gotta be in this exact style, that might be something where, you know, Tyler winds up spending more time
Starting point is 01:29:18 finalizing and customizing stuff. But in terms of just getting a functional prototype, oh man, it was mind-blowing, it was awesome. I'm very excited about the age of vibe coding. This is an interesting chart from Tracy Alloway.
Starting point is 01:29:35 Yep. Been on the show. The cost to rent an Nvidia H100 GPU hit a new low this week, with annualized revenue at 95% utilization falling from $23,000 at the start of May to less than $19,000 today. So that's not huge in absolute terms, but I mean, it is roughly a 20% drop since May. It's a consistent trend.
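To unpack the metric in that chart, here's a back-of-envelope conversion of "annualized revenue at 95% utilization" into an implied hourly rental price. This is our own illustrative arithmetic, not taken from the chart itself:

```python
# Back-of-envelope: convert the "annualized revenue at 95% utilization"
# figures quoted above into an implied hourly H100 rental price.
HOURS_PER_YEAR = 24 * 365  # 8,760

def implied_hourly_rate(annualized_revenue: float, utilization: float = 0.95) -> float:
    # revenue = hourly_rate * hours * utilization, so invert for the rate
    return annualized_revenue / (HOURS_PER_YEAR * utilization)

print(round(implied_hourly_rate(23_000), 2))  # 2.76 -> ~$2.76/hr, start of May
print(round(implied_hourly_rate(19_000), 2))  # 2.28 -> ~$2.28/hr today
```

So the chart is really saying the market-clearing hourly price for an H100 slid from roughly $2.76 to $2.28, with utilization held constant in the calculation.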
Starting point is 01:29:59 I wonder how much of this is driven just by all of the frontier labs that are driving the most adoption moving on from the H100 to the H200. I don't know what else would be driving this, because if you only take a 20% drop off of a full refresh of new hardware...
Starting point is 01:30:23 It's a pricing drop, not a utilization drop. Yeah, annualized revenue at 95% utilization, so this is revenue per unit, assuming utilization stays very high. It's the price that, you know, these neoclouds are able to rent them for, which is dropping. Yeah, that tracks.
Starting point is 01:30:48 Yeah, I mean, the market's more competitive than ever. There's more neoclouds spinning up and more people actually inferencing these things. And then I guess this is the question of, like, how stuck will certain workloads get? Like, if you have figured out a great use case for an LLM in your organization, and it's something that's not one-shotting your entire stack or whatever, but it's just like, we have data flowing through our systems and LLMs are going to interact with every PDF that gets uploaded to our website or whatever.
Starting point is 01:31:20 And so we're inferencing a lot. You might not need to put that on the latest hardware or update the hardware forever. You might just be like, yep, it's Llama 3, it works, it's on H100s, and it'll be on H100s forever. And that piece of our business will just stay there. Just like, you know, we have a Postgres database that works and we're not changing it every year. We're not changing everything. We're just trying to cost-optimize that
Starting point is 01:31:45 and hopefully the cost just comes down on that. But we've solved this particular problem, then we'll go solve new problems with new technology. So I think that's probably what's going on here. But it gets to the point of the biggest question with Grok is that the model clearly is frontier, it works, the whole fine you know, like the whole fine tuning on the actual X account is like a crazy final step
Starting point is 01:32:11 you know, like the whole fine-tuning on the actual X account is like a crazy final step of system prompt, and people were joking about that, like, oh, they're gonna fix that. It's like, that's not what they're demoing today. They're demoing the underlying raw model, which is clearly just engineering-focused, as you saw in the demo, which was just, you know, benchmarks. It turns out,
Starting point is 01:32:28 it turns out the secret ingredient to crushing every benchmark is to have a bunch of data from schizophrenic posts. I actually think it's the design of the RLHF stuff and the design of the reinforcement learning pipeline. Tyler, you got anything? Yeah, I mean, I think, so far what I've seen on X, the overall response, the vibes stuff, is that people are saying maybe it was a little too overfit on the RL, like RLVR, verifiable rewards.
Starting point is 01:32:58 Like, you kind of see this when, even in the demo, I think it would sometimes respond to the answers in, like, LaTeX formatting. Oh sure, which is like, okay, that means obviously they've trained a ton on, you know, math questions, stuff like that. People are saying maybe it was kind of, you know, benchmaxxed. You see, like, 100% on AIME is kind of crazy. It's kind of sus. Yeah, this is the thing about democracy: like, if you win 80% of the popular vote, it's like, okay, let's say it was a blowout. If you win a hundred percent
Starting point is 01:33:30 of the popular vote, like, probably not a democracy. I don't know. I mean, in theory, these things should be able to do it, but I'm interested to know more if we dig into ARC-AGI: is there more stuff going on there? Are there any secrets? Because it does seem like kind of an outlier result. You can see it from this Aaron Levie post. Grok 4 looks very strong.
Starting point is 01:33:51 Importantly, it has a mode where multiple agents do the same task in parallel, then compare their work to figure out the best answer. In the future, the amount of intelligence you will get will just be based on how much compute you throw at it. I was joking with Tyler about this, that the individual models are mixture of experts models. So there's a whole bunch of parameters, right?
Starting point is 01:34:12 And then the individual parameters light up, like different neurons, based on a router internal to the model. So there's kind of like the math section of the brain, the literature section of the brain. And this was one of the key breakthroughs in GPT-4, right? Mixture of experts. People think so, we're not super sure. Yeah, we don't fully know, but that's like an internal decision that
Starting point is 01:34:37 happens within the model, to be like, this feels like a math question, let's go down the math path in the model. But then Grok 4 is running the same model multiple times and then comparing the results. And so now you have multiple agents running mixture-of-experts models. You have a mixture of agents running mixture-of-experts models, and the next thing is gonna be like,
Starting point is 01:35:02 if you want the absolute best intelligence, you need a mixture of companies, like I send one prompt and it goes to Grok and Claude and GPT and Gemini and a human. Yeah, I wonder how OpenRouter's thinking about this stuff. It is funny to think about the human version of that, where you have five engineers on your team build the same feature and then kind of compare notes afterwards. Wildly inefficient, but with software, when you can do these things very quickly,
Starting point is 01:35:29 there's incremental cost, but you can have more confidence in results and- I mean, it's basically like having a brainstorming meeting with the whole team and just throwing up a question and being like, hey, we have this hard problem that we need to solve. Here's my idea, what do you think? What does Tyler think? What does Ben think? You kind of like go around the table.
Starting point is 01:35:47 Everyone kind of gives their input, their various expertise. They kind of think through the problem in different ways, and then you compare answers and everyone coalesces around one strategy. This is how work happens in the real world, with a meeting. It's kind of the same thing, but certainly expensive to do. So it'll be interesting to see how eager companies are to jump over to Grok, because it seems like it's been a big lever for Microsoft to have
Starting point is 01:36:17 Grok in the ecosystem as kind of a stalking horse for all the other models, because Satya wants Azure to be very model-independent, serve them all. I think they have exclusivity for the ChatGPT or GPT APIs, or they obviously have a great deal there with OpenAI. And so if they can have Grok 4 as well, that's another tool in the tool chest
Starting point is 01:36:41 to be like this top layer. Satya is in such a good position. It's probably not discussed enough how much, just by owning those end customer relationships and being able to vend in whatever model is hot at that moment and give people optionality, he can still get 20% of OpenAI's revenue, at least for now.
Starting point is 01:37:02 Yeah, he's also SOC 2 compliant. And if you want to get SOC 2 compliant, head over to Vanta, automate compliance, manage risk, prove trust continuously. Vanta's trust management platform takes the manual work out of your security and compliance process and replaces it with continuous automation,
Starting point is 01:37:16 whether you're pursuing your first framework or managing a complex program. So yeah, EigenRobot was talking trash about the production values. I didn't think it was that bad. They were just noticing.
Starting point is 01:37:29 I think it's really good. He said the slides are worse than he'd create after getting roped into a presentation with one hour's notice. You can tell the engineers made them themselves. I think this is just a reflection of the culture, right? Very clearly it's screenshots dropped into a slide. It's light mode screenshots on dark mode slides.
Starting point is 01:37:48 Yeah, let's do black slides, and then you come in with your white screenshots that are kind of misaligned and not really evenly distributed. I think they didn't do the distribute-evenly or distribute-horizontally thing. Still gets the point across. Yeah, and I think it's a reflection of their culture. It shows what they care about, what they don't care about. They're not trying to be the most polished. They're just trying to be the best. Yeah. EigenRobot kind of did a whole live tweet here.
Starting point is 01:38:15 Yeah. So Elon was predicting the model will discover new physics within two years. He said, let that sink in. Silence. One engineer laughs awkwardly. Is that sooner or later than his previous timeline? Because he was talking about AI discovering new physics soon. I don't remember if he was saying
Starting point is 01:38:40 Because this could be that he's still excited about this, he still thinks it's possible, but he thinks it's gonna take longer than he said previously and that's kind of the more important update I don't remember what he said originally See if grok can find out but he was saying this at the grok 3 launch that like that is the goal And if you can get there like you've kind of you've kind of solved everything and same old man was talking about that, too That if you can if you can create a super intelligence like that's probably the first thing that you'd want to do is like Hey go discover all the new physics
Starting point is 01:39:06 and really help us figure out how the world works so you can solve, you know, fusion and all this other stuff. I wanna be clear, I love all you guys at XAI, I only want the best for you, but I'm gonna continue to live post. Elon attempts to give a speech on alignment involving a very small child, a child much smarter than you,
Starting point is 01:39:24 the monologue rambles with no conclusion in sight. A pause. Will this be bad or good for humanity? He says, you know, at least if it turns out to not be good, I'd like to be alive to see it happen. Oh yeah, they had a Polymarket integration. That was kind of interesting. Yeah, it's interesting. Basically giving the model access to real-time Polymarket data
Starting point is 01:39:47 so that it can help make predictions and sort of add context around the market itself. Yeah, that's interesting. Elon asking the real questions: you say that's a weird photo, but what is a weird photo? I still don't understand why we're looking at weird photos of xAI employees, but they were charming. They're calling it SuperGrok. Crazy features, 16-bit microprocessors. I don't even understand what this is. Oh yeah, they built a game in Grok, the demo of a video game generated by SuperGrok.
Starting point is 01:40:17 It's a Doom clone; every time the PC shoots an enemy, floating text appears reading Grokdom. Elon is fabricating timelines for product launches on the spot. The engineer sitting next to him is looking at the floor, face impassive, nodding. It's a good model, sir. For real though, congratulations on the launch, guys. It's a good model, sir. I thought this post from the actual xAI engineer,
Starting point is 01:40:37 Eric Zelikman, was funny. It was like, AI model version numbers over time. Did you see this? No. So it's this chart of the version numbers over time, and you can see that Grok is versioning fastest, because at this point, what else are we measuring?
Starting point is 01:40:53 Like at least they're iterating on the version number effectively as opposed, and I guess this is a shot at OpenAI because they launched 4.5 and then went to 4.1 and they're kind of like, you know, there's this big question about like, when will GPT-5 come, the expectations are so high for GPT-5.
Starting point is 01:41:08 And so obviously the Grok team is like, hey, at least every three months we release a new full number. So I wonder, five is a number that really no one has gone for, and I wonder if Grok will do it first. If you draw the line on this,
Starting point is 01:41:26 they certainly should do it in like three months. They should have Grok five. And there's no reason that they shouldn't, but maybe there's some. It's very possible that Colossus is the key. Yeah. Colossus? To getting to five.
Starting point is 01:41:38 The new data center. Oh, the new data center, yeah. Well, they'll need Linear to plan that out. Linear is a purpose-built tool for planning and building products. Meet the system for modern software development, streamline issues, projects, and product roadmaps. They need linear badly. So hopefully they've gotten signed up.
Starting point is 01:41:53 Near said, on Grok 4's Humanity's Last Exam score: I'm not sure I buy, even in the general case, that there's a given Humanity's Last Exam number which implies you discover useful new physics. How would one make a benchmark of the proper shape for this? You'd have to have a validation set of questions which are outside the scope of what we currently are able to do. You could choose things on the edge of our knowledge distribution and then try and exclude. Yeah, it is interesting.
Starting point is 01:42:22 Like if you are able to memorize every hard math problem, does that allow you to discover new math? It's sort of a prerequisite. I think where I imagine these discoveries coming from is having a single mind that has PhD-level intelligence across every human domain, right, and being able to combine ideas from different domains. Historically, a lot of innovation is just taking something from one field, bringing it over here, making some combination of it.
Starting point is 01:43:00 I think Elon talks about the potential of discovering new physics, but again didn't spend a lot of time breaking down how that would actually happen. But the world is unpredictable, so yeah, it's interesting. People are really pushing this idea of, okay, we are accelerating, the Arc-AGI leaderboard is accelerating, but I keep seeing this and feeling deceleration. Like I am not feeling acceleration right now. Are you, Tyler? Yeah, I don't know. I think generally I'm kind of like not that interested in a
Starting point is 01:43:33 lot of these kinds of benchmarks. Like I think Arc-AGI is more interesting, but the Humanity's Last Exam kind of general math-physics knowledge doesn't seem to line up with usefulness. You see GPT-4.5 kind of does very poorly on these things, but at writing it does really great. So if I were to long-short different benchmarks on their usefulness, I think stuff like HLE
Starting point is 01:44:01 I'm kind of short on. Long? Have you guys seen the Minecraft benchmark with the builds? Basically two models each build something in Minecraft from a prompt, like build a house, and then you can choose between them, and they rank models like that. But who's grading that? It's a human who picks between them. Okay, so it's kind of like an Elo. Okay, but just general kind of creative tasks. Sure, I think stuff like that, Aidan Bench, is good. Yeah, I think even on the Grok launch
Starting point is 01:44:28 there was Vending-Bench. Aidan Bench is Aidan McLaughlin's benchmark. It's kind of hard to describe how it works exactly, but it's various creative tasks: how novel its thinking is, the style of its text. Sure. Wait, is it just whichever one he likes the most at the end of the day? Is he the only grader? No, no, there is an objective function that you can run; it's not just that. Okay. It will be funny, you know, there's a period of life where your SAT score matters a lot
Starting point is 01:45:07 and it says something about you, and then a decade later it's what you can do, what you have done, that starts to matter a lot more. And so I do think we'll reach that point where it's like, yes, you can one-shot every hard exam question there is that you can throw at it, but what can you do for me? Yeah, yeah, totally. And I think that's why the bigger question is almost like ChatGPT DAUs and actual revenue. Revenue and app installs and stuff.
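Going back to the Minecraft-style benchmark Tyler mentioned, the "kind of like an Elo" ranking works like chess ratings: each human pick between two models' outputs nudges their scores. A minimal sketch, with made-up starting ratings and our own function name rather than any benchmark's actual code:

```python
def elo_update(r_a: float, r_b: float, a_wins: bool, k: float = 32.0):
    """One rating update after a human picks between two models' outputs."""
    # Expected score of A given the current rating gap (standard Elo curve).
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    score_a = 1.0 if a_wins else 0.0
    # Winner gains, loser loses, scaled by how surprising the result was.
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

# Both models start at 1000; the human prefers model A's build.
ra, rb = elo_update(1000.0, 1000.0, a_wins=True)
print(round(ra), round(rb))  # prints 1016 984
```

Repeated over many human votes, the ratings converge so that upsets, a low-rated model beating a high-rated one, move the numbers more than expected wins do.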
Starting point is 01:45:37 Yeah, I mean, the revenue thing is interesting, because you wind up in B2B cloud world, which is valuable, but it's more competitive because it's more commoditized. And... Well, yeah, you don't have a lot of leverage in the enterprise if Azure is able to offer infinite models
Starting point is 01:46:03 that are frontier models, open-source models that are maybe just behind the frontier but great at certain tasks. The leverage isn't quite there. There will need to be another pretty significant leap. Until then, you know, Anthropic being really good at codegen, there's leverage there. We saw this yesterday with Llama switching over to Anthropic models internally. And then just having a consumer app with a lot of users, also very valuable. Yeah, the other interesting thing about the
Starting point is 01:46:36 foundation model layer commoditizing, and it becoming like cloud, where if you have a model you'll just be vended in as an API to anything else, like a token factory, is that the hyperscaler clouds are extremely profitable. Even though AWS, GCP, and Azure are all somewhat directly competitive and somewhat perfect substitutes for each other, they have not driven prices to zero the way airlines have, where airlines are deeply unprofitable. AWS and Google Cloud are both profitable. Yeah, or you look at other commodity sectors
Starting point is 01:47:14 like oil and gas. And I don't know if that's just because there's lock-in, I'm not exactly sure. But maybe the counterintuitive take is that, yes, they do commoditize, and there are a few major foundation models that are frontier and all roughly the same price, but they all have decent lock-in with their customers, to the point where they're still able to extract some level of profit. Or they're just creating so much value that even if they're taking a small marginal slice on top of
Starting point is 01:47:47 the cost to run, they still have 50% margins or something like that. Because, I mean, this was the story of AWS: no one knew how much money it was making, and then they had to break out the financials in one of Amazon's earnings reports, and it was like the AWS IPO, as Ben Thompson put it. And next up we have Ben Thompson from Stratechery coming into the studio.
Starting point is 01:48:10 Very excited to talk to him. The moment we've been waiting for. Yeah. Welcome to the stream, Ben. Good to have you on the show. You've been a backbone of many analyses here on the show. And we're excited to welcome you to the show. How are you doing?
Starting point is 01:48:27 I'm doing good. I put on a button-up shirt and a jacket just for you guys. We feel honored. I am wearing shorts underneath, I will admit. You didn't have to tell us that. People always ask if we wear shorts. We actually do wear the full suits.
Starting point is 01:48:38 We got to stand up to hit the gong sometimes. There's a wide shot. I am the poser here. So I'm happy to admit. Well, it's a great sign. I am the poser here, so I'm happy to admit. Well, it's a great sign of respect in our culture to put on a suit for a TBPN appearance. And we're so excited to talk to you. I've been lucky to read your work in my entire career.
Starting point is 01:48:59 And I think so many of the thoughts that I have are now, like your way of thinking about Technology and markets is so embedded in my brain that that ideas that I hold as true or just foundational beliefs Or actually your beliefs have just become so so immersed. So It's great to talk. Well, thank you I will attempt to implant new ones or maybe show you the error of your ways. I wanted to see what sounds great. Uh, I, I, I do have a question on, um, on the nature of where you sit in the media world before we go into actual questions
Starting point is 01:49:36 about tech companies. Um, it's interesting that in some ways you're a journalist, but you don't really do the scoops and breaking news that much. But you also don't issue just straight up buy and sell recommendations. What was the thesis behind not just actually having a price target and not doing like this is a sell side bank, but independent? Well, when I started, I mean, it's funny to hear you talk about like my quote-unquote place in the ecosystem. Sure. Because when I started I had like I think it was 368 followers on Twitter. I was just some sort of random random person on the
Starting point is 01:50:16 internet. In retrospect, sort of right place, right time, I think is certainly the case. But I did perceive there was a large gap between tech journalism, and I would include a lot of the bloggers there who were writing a lot about products, and then Wall Street, which was very focused on the financial results. And to my mind, there was a large space in the middle, which tied together the products and the financial results, but also the overall companies and strategies. And I'm very interested in culture and how that guides decision-making.
Starting point is 01:50:50 One of my sort of precepts is all these companies are filled with smart people. And a lot of people, when you ask them why they did something wrong, their only answer is that they're stupid. And I'm like, no, they're not stupid. It's actually much more interesting to assume they're smart and are doing stupid things
Starting point is 01:51:08 and trying to unpack why they are doing that and what goes into that. And so that was sort of the thesis, that there is this space to explore. And then there's a business model aspect, which is I started Stratechery two years after Stripe started. I think they had just come out with their billing product. And the only alternative at the time was PayPal for subscriptions. And it was fairly sketchy.
Starting point is 01:51:33 And there was lots of like horror stories out there about, you know, stuff. And just the Stripe API was so great and the things you could potentially do with it. And so on Wall Street, you're putting a price on it. You're also charging like $100,000 a year or something like that. And so you get a small list of high ARPU clients. And my thought was I could go in the opposite direction and get a large list of low ARPU clients
Starting point is 01:51:58 thanks to things like Stripe and the ability to subscribe. And as part of that, I wasn't gonna go through the rigmarole of getting registered and doing stock picks and all that sort of thing. I've always joked, if you want a stock pick from me, you're gonna pay me a whole lot more than $15 a month, $10 when I started. And it's actually pretty great. Now, one of the critiques I do get, particularly from my, you know, friends on Wall Street, is,
Starting point is 01:52:29 is significant skin in the game, but I do recognize the validity of that critique. Yeah, and you know if you make a bad call, you're gonna have to circle back to it in two years and write about it yourself and admit that you got it wrong. Right, right. Which hurts too. Which hurts too, right? No, I had to write about this week.
Starting point is 01:52:46 I was very optimistic about Apple's Apple Intelligence announcement last year and the theoretical power it would give them over the model makers. And now I'm like, actually, no, they're going to have to pay up. And that was a bad call by me that I think was very well received at the time,
Starting point is 01:53:10 And so I just this morning I was very crystal clear like I got that one wrong. That was that was that was an issue What is nice is? strategic reek kind of ended up being in this interesting place where I feel like I'm a little bit of like the Switzerland of tech and that no one pays anymore. If you're a CEO, you pay the same amount as, you know, Joe blow down the street that, that, that, that is paying it. Um, I don't invest directly, which I think made sense when I started because they didn't have any money, um, has probably hurt me a lot over the years
Starting point is 01:53:40 since then, but I don't like it. And I think this is a West Coast versus East Coast thing, where it does feel like on the West Coast everyone's talking their book sort of all the time, and you know, that's why I generally, as a rule, don't have VCs on to do the Stratechery interviews, because it's kind of hard to get a real take when there is, you know, a motivation. And so me coming in being like, I have no book to talk, I'm just saying what I think, I think has been good for the West Coast audience,
Starting point is 01:54:15 which is my base audience. Even if the East coasters think that I'm being a big wimp. So. Yeah, the talking your book challenge, we got through that a lot. Yeah. 12 VCs on a day. Well, yeah.
Starting point is 01:54:29 And we just try to get a bunch of different opinions and triangulate what we think is real. I'm trying to come up. You have TPPN. I'm trying to come with a P so I can get the talking book network in there. But talking book production network. Yeah.
Starting point is 01:54:44 That's what the ESPN for talking your book. network yeah yeah yeah yeah yeah yeah talking your book but yeah it is a real struggle to find somebody that for example has a deep understanding of every foundation model company but isn't massively conflicted at in some way or another extremely extremely yeah and so it's one of those things you just sort of you you end up like there's so much path dependency and all these sorts of things and like I mentioned like a big advantage I had was I started at a time when
Starting point is 01:55:10 Sharing good links was very high currency on Twitter Yeah, and so you know I grew very very quickly much more quickly. I sort of had a five-year plan To go independent. I ended up doing it in less than a year, in part because it just sort of spread really, really rapidly. And it was an ideal time to be someone sharing interesting links regularly. And I wasn't sharing them. The beauty is my readers were sharing them. They were doing sort of the marketing for me. And so I'm very cognizant of sort of the luck I had in that regard. And then just over time, and it's been an interesting journey for me to grapple with my different position in the ecosystem. So when I started the Struckery interviews, that was sort of part of it, which was I started out not knowing anyone. I got
Starting point is 01:55:59 to the point where I can talk to anyone that I want to. And so how do I square that? I can't be the guy with the chip on his shoulder trying to make a name for himself forever. It sort of gets, it's like the meme with the guy, how are you doing kids? Like at some point you have to accept your part of the establishment. How can I do that while still staying true to the idea that Shachakri is about the readers? It's reader funded, my loyalty is to them. I'm very clear I have no loyalties to anybody else. And so, well, I'll just, I will talk to people in sort of acknowledgement of what I can do, but it's going to be fully transcribed and published and sort of available to everyone.
Starting point is 01:56:40 Have you ever dealt with or thought about the attack vector of a special interest, you know, buying a thousand plus, you know, thousands of seats to a single, you know, independent publication and saying like, yeah, like, you know, we're happy, you know, we, we, we got seats for all of our employees actually because we really, you know, love the, and then, and then suddenly they're sitting over there and, you know, it's representing meaning very meaningful revenue. I mean, I fortunately, I fortunately I think of a scale that I don't have that problem that's good there we go but it's uh but no I think I think audience capture for subscription sites is a
Starting point is 01:57:16 potential issue for sure, and this is another thing where I was sort of right place, right time: I got big enough by the time that it doesn't matter. And if someone's really upset, I give refunds all the time; actually, if someone really upsets me, I will refund them every dollar they paid me. I'm just like, go away, you're being abusive or whatever it might be.
Starting point is 01:57:35 And that is a beautiful thing about the relatively low price, high customer base model is no one has power over me. I have the burden of publishing, you know, as often as I do, I feel a heavy weight of duty to my customers. When I write something I'm not happy with, like I don't sleep well, but at the same time there's no one customer or no, no individual that can come in and be mad at me and impact my business. I'm seeing that there's maybe some sort of parallel
Starting point is 01:58:09 between legacy media and independent media, where independent media, it's not by default more pro-tech or anything, but there's just no salary cap. So if you're at a legacy institution and you're writing, there's probably some sort of rough, loose salary cap of a few hundred thousand dollars, whereas if you go independent, it's feast or famine: you might fail, but you might get really, really successful
Starting point is 01:58:31 and have a huge income from that. And I'm wondering what we're seeing in the AI salary wars where we're seeing more and more talent and Mark Zuckerberg potentially paying a hundred million million bonuses. Do you think that Apple will come around to spending more money on researchers? It feels like they kind of have an internal salary cap
Starting point is 01:58:54 with Tim Cook making 75 million. There's now people that report two levels down from Mark Zuckerberg that are making more than Tim Cook and you have this weird dynamic where even if there's no actual salary cap at Apple, you kind of have an implicit one from the CEO. Yeah, for sure. I mean, well, I think just to go back to the media
Starting point is 01:59:13 observation you started out with is as you increase transparency in the market, as you decrease non-related barriers, which in the publishing world previously was really geography and when everyone's on the internet inevitably you know just about all cases you get a power law distribution
Starting point is 01:59:34 and a few people make a ton of money because they win most of the market and then some people make some and then there's a long tail that that sort of don't make any at all but it's it's a very it's interesting it's it's fluid in a way but it can sort of become somewhat static as long as the people at the top sort of you know continue to do well but what's interesting about AI is for 40 years you would have periods of time we'd have tech companies going to head head to head in a product market and I think one of the reasons part of the software eating the world sort of idea is
Starting point is 02:00:13 The way you get an apex predator is that that predator killed everyone else first And so you had tech companies fighting each other for the first 20 30 years of tech the ones that emerged were lean mean killing machines and they and the entire industry were sort of set loose on the rest of the world and everyone was just like was getting slaughtered sort of left and right. But what you also had over this past sort of 20 years or so is the big companies in particular sort of slotting into unique slots. So you have you have Facebook is social, Google is search, Apple is devices, Microsoft is business or you know business applications, Amazon, e-commerce, etc. And obviously these
Starting point is 02:00:54 companies are very large and do lots of things and there's some overlap in different places but they've been fairly sort of distinct in their categories and they've been dominant in those categories and they've been dominant in those categories and so they've been in a place where like Hollywood is wanting to get to right what is the dream in Hollywood you want to have a franchise where the next Marvel movie matters more than who the star is the reason that's so great is because you now have bargaining power over the stars so you just sub someone else in and whereas the old style like Tom Cruise makes the most money because Tom Cruise on a movie
Starting point is 02:01:30 Poster sells the poster and so in a negotiation. He has massive bargaining power So he's going to get get paid a lot get paid a lot of money In tech it hasn't been that case the companies themselves have been franchises and so the the overall anyone who works in tech or probably works in any any entity but you know there's a few people in each company that are critically important really make the whole thing go. Everyone else is fairly replaceable. Those people are have probably always been somewhat underpaid for years and years and years both just by the nature of companies and the
Starting point is 02:02:05 cultural issues and your salary-cap sort of analogy, but then also it's just not a transparent market, so it's hard to price what people are worth. With AI, everyone's trying to do the exact same thing, so you have multiple companies trying to do the same thing. The output is somewhat measurable. I mean, all the AI test stuff has issues, but by and large, everyone kind of knows who has the good models and who doesn't. And the scalability questions, you know,
Starting point is 02:02:34 like because all these companies are trying to do the same thing, we have a very unique situation where the bargaining power that you increase transparency, you increase sort of the liquidity or the ability of people to move around because they're doing the same thing, the bargaining power shifts to the people that are super valuable because suddenly it's much more clear who's valuable and their skills are much more transferable. So this is I think a
Starting point is 02:03:01 very underrated bear case for tech in terms of AI, at least for this time period, is they've lost that murky bargaining power over employees that they enjoyed for decades. And currently, you're seeing what happens when you don't have that. You start paying employees what they're worth. And obviously that's great. I'm not saying this is a business analyst, it's not a sort of a moral statement, but it is like what Mark Zuckerberg is doing I think is totally rational. I think it's a classic sort of Clayton Christensen, from Facebook's perspective, AI is all upside. So of course they're gonna invest what they need to do to win, but it's costing him a lot of money and by extension, it's costing everyone else in the ecosystem a lot of money.
Starting point is 02:03:49 Well, isn't it in some way — is the right way to think about the last couple of weeks like more of an unofficial acqui-hire, in the sense that it's not just the people, but it is the know-how? In terms of, hey, here's these things that we wanna do that are important to our business in a lot of different ways.
Starting point is 02:04:07 And it's like the collective is actually more valuable than any one — like, the collective together, getting 10 researchers at the same time, is meaningfully more valuable than each individual researcher added up, you know, randomly. There's probably something to that, but I think, again, like, what is actually different
Starting point is 02:04:28 between what Google is trying to do, what Anthropic is trying to do, what OpenAI is trying to do, and what Meta is trying to do? They're all trying to do the same thing. So my suspicion — I'm not an AI researcher, so I don't wanna overstate my knowledge in this space — but my suspicion is skills are
Starting point is 02:04:46 fairly highly transferable. And when that is the case — in some situations, if lots of people can do those skills, that's terrible for the employees, because then their bargaining power gets diminished; anyone can slot in. But we're in this space where the skills are transparent, knowable, transferable, and there's not very many people that can do them. And so it's a scarce resource that everyone's fighting over, and that's why you see this real shift in negotiating leverage, as manifested through these dollar figures to AI researchers. Yeah. Do you think — I mean, Google seems like the most fragile and
Starting point is 02:05:22 the most, like, paranoid about just disruption. It's not all upside — it could be very bad for them. The innovator's dilemma. You know, you had this back and forth where Sundar Pichai mentioned that he hadn't read the book. He said it doesn't matter because it's a structural issue. I think that's a good point. But if you play back the counterfactual, is it ever possible to disrupt yourself? And essentially, like, if the Gemini app had launched before ChatGPT,
Starting point is 02:05:51 and they had taken over that mind share and maintained 90% ownership in that, like it would be somewhat disruptive to their revenue and their profits as they transition over. But when I sum the revenues from OpenAI and LLMs and then Google search, I'm not seeing some massive drop off that actually would destroy Google in the short to medium term.
Starting point is 02:06:16 But I'm wondering if you think it's like, is it entirely impossible to avoid the innovator's dilemma by disrupting yourself? Well, number one, you have to also look at margins, not just revenue. Yeah. But number two, you actually answered your own question. Google didn't launch Gemini.
Starting point is 02:06:33 Yeah, yeah, yeah. That's the answer. They were years ahead. Yeah, it is. They invented the transformer nearly a decade ago. Yeah. And so in many respects, there's a part of this question where the counterfactual makes the point, in that it is a counterfactual and it's not
Starting point is 02:06:51 reality. Now, I do think Google's done better than I expected over the last two years. I like what they're doing in search generally. It does seem to be the one part of the company that still functions — they can actually iterate and build products. What we're seeing is reminiscent of what they did a decade or 12 years ago, when everyone was like, vertical search, Google's done,
Starting point is 02:07:17 all the search is gonna move into apps — and Google completely transformed the search engine response page, whatever it is, the search engine results page, to be local or to be shopping or whatever, and Yelp's been throwing a hissy fit sort of ever since. And so that's what they're doing with search, right? With AI overviews — and they have this new Search Labs, or AI Mode, where they can sort of test stuff out, see what's scalable, and once they're confident about the monetization issues, they can sort of shift it over.
Starting point is 02:07:46 I call it the search funnel — the search-to-AI funnel. I think it makes a lot of sense. And this has always actually kind of puzzled me: I think they're responding fairly well, even though this seems to be a textbook case of disruption. And I went back to an article I wrote years ago called Microsoft's Monopoly Hangover, and I went through Lou Gerstner's autobiography about how he turned around IBM.
Starting point is 02:08:14 And his real insight with IBM was: everyone wanted him to break it up into these sort of different pieces, and what he realized was IBM was so big and large, downstream of the monopoly, that actually the only thing they were good at was being big. And so breaking them up would actually just create a bunch of subscale, low-performing companies that would all get wiped out. But as this behemoth, they could go to other big companies and solve all their problems at a very mediocre level, but still a sort of attractive
Starting point is 02:08:51 proposition. And under Gerstner, they really rode the internet wave. They went to all these big companies and said, this internet thing's happening, you need help, we'll solve your problems for you. And they had a very sort of successful run, kind of until cloud came along — which Gerstner, by the way, was a proponent of, but by that time the IBM people were back in charge. And I was thinking about this in the context of Microsoft, where business models are hard to change — and disruption is ultimately about business models — and culture is hard to change, even harder to change, but
Starting point is 02:09:26 what can't really be changed is the nature of who you are. And I think Microsoft was in a similar situation: they were a big monopoly, and they weren't a product company, and the attempts to become a product company with Windows 8 and all the things that went on around that time inevitably failed. And Satya Nadella, to his great credit, sort of diminished Windows' importance in the company — literally broke it into pieces, spread it around; this was a multi-step process — and got Microsoft back to a place of: we're big and we'll do everything. We're not a Windows company; we'll go in there and we'll solve all your problems. Very
Starting point is 02:10:09 sort of reminiscent of the second version of IBM. And I go back to Google, and I've always been intrigued by the I'm Feeling Lucky button, which doesn't exist anymore — but I always enjoyed that that button continued to exist long after it was impossible to click, because the moment you started typing in the search box, it would start auto-searching immediately and jump right to a search page. But it was there because it's just so core to Google
Starting point is 02:10:40 to give you the answer, to know everything about the world. And there's a bit where, even though the core of their business model is ten blue links — and it's not just the users choosing the search link, which gives them the data feedback loops, so they know which results are better; the users also choose the winner of an auction Google puts on for ads, and it's an incredible business model — there's something about that that's always been in tension with, and counter to, what Google was founded to be. And I feel like that germ of what Google was founded and meant to be
Starting point is 02:11:20 is an AI answer engine. And it almost feels like, even though Google is old and large and fat and slow-moving, that core aspect of their nature is still in the culture. And that's why they're finding it in themselves, I think, to do better in AI than you would expect. Was it enough to launch a ChatGPT before OpenAI?
Starting point is 02:11:44 No. Was it enough to have any sort of cogent response for the first six to nine months? No. But it was enough that I think they've done better than I expected over the past year in particular, and it gives me more optimism than I expected I would have for the company when ChatGPT first launched. The AI overview from Google, if you search "Google's mission": Google's mission is to organize the world's information and make it universally accessible and useful — which is exactly what language models
Starting point is 02:12:18 do really, really well. Like, the thing that's just undeniable, right — you can debate whether this is gonna be the year of agents; it doesn't feel that way to me yet. But this is the year that most people have realized that, wow, LLMs are very good at organizing, surfacing, and making data valuable. You mentioned the debate over breaking up IBM — I'm interested if you could take us through some of that. I bet you didn't come here to talk about IBM today, did you? No, no, no,
Starting point is 02:12:50 but I want to talk about Intel, and kind of the history of some of your takeaways, and what you think you've gotten right in the past — your perception of, you know, should they break up the foundry business, and what you think might be in the works with Lip-Bu Tan coming in there. Because I was listening to Dylan Patel talk about his conversation with the new CEO, Lip-Bu Tan, and it seems like they're doing lots of tightening up, lots of layoffs, but it's kind of,
Starting point is 02:13:21 I don't even know what framework to apply to analyze, like, is a breakup the correct thing? It feels like something people just say. Yeah. So Intel — it's funny, one of my very first articles was about Intel, and this was 2013. And, you know, when you start a site like Stratechery, you're like a new band — and why does everyone think a new band's first album is the best? Because they've been working on those songs for years, right? And then the next album they had a year to do, and they all suck, right? So I'll let
Starting point is 02:13:58 people decide if that applies to Stratechery or not — whether I had a sophomore slump — but Intel had been a thing I'd been wondering about for a long time, which was: by 2013, when I started, they had clearly missed mobile. Now, it wasn't clear to them — they were still trying to do the Atom processor, still gonna figure it out tomorrow. And the problem with missing mobile — the problem with Intel in general — is Intel is always very biased towards high performance. And this goes back to actually Pat Gelsinger, his first time through at Intel. Intel had the CISC versus RISC debate.
Starting point is 02:14:42 It's like different ways of organizing instructions, or whatever. RISC is generally more efficient, and actually even Intel processors today — even though x86 is CISC — internally it's retranslated to a RISC-type language. None of that is really important, other than to say: in the 80s, there was a real push in Intel to switch away from x86 and to a RISC type of — I'm not gonna use architecture,
Starting point is 02:15:08 but like, for the processors. And Gelsinger was a leading proponent that this is a terrible idea. And the reason it's a terrible idea is because there was already a huge ecosystem of software built around x86 — all this low-level code and capabilities that was written once and no one ever wants to touch again, because it's miserable work. And he's like, to rewrite all that stuff would take at least two
Starting point is 02:15:35 years, and in that time our ability to manufacture chips will improve so much that had we just stuck with CISC, our processors would be faster. And that was the right bet. And that's one of those foundational bets that I like to think about in companies and their history, and what goes into that — which is, Intel from the 80s on has solved its problems by having superior manufacturing and by moving faster. And yeah, our chips may be theoretically less efficient, but if our manufacturing is better
Starting point is 02:16:08 and our transistors are smaller, it doesn't matter, because that will swamp whatever theoretical efficiency you might have. And this drove the entire computer industry. To write a program — every second you spent optimizing your software in the 80s or 90s was a waste of time, because whatever improvements you could get would be swamped by the next generation: if you went from
Starting point is 02:16:31 286 to 386, or 386 to 486, that jump was so large you were better off focusing on features, even if it made your software slow to use on the current hardware, because the next generation of hardware would be so much faster it would solve your speed problems for you. Now, this has generated a lot of bad habits amongst tech developers — that's why you get bloat and why you have, like, poor-performing things and all those sorts of things — but this was super critical. And so Intel at its core has always been manufacturing first and focused on better and better performance.
Starting point is 02:17:05 What happened with mobile is, into that calculation did not come efficiency. They were never focused on efficiency, and in mobile, efficiency was everything. So what happened with mobile is Apple went with an ARM processor made by Samsung, and they basically rewrote everything. All that stuff Intel didn't want to rewrite in the 80s — or, if they rewrote it, would just give other processor companies a chance to catch up with them — had to be rewritten for mobile, because efficiency was so much more important than performance. When that happened, Intel was screwed. Now, it took them a long, long time to realize they were screwed,
Starting point is 02:17:41 but they were just fundamentally unsuited to be competitive. The whole Paul Otellini turning down the iPhone contract thing is not true. Tony Fadell — I said that once and I got a call from Tony Fadell, actually; this was when I had him on for an interview, and he's like, this drives me up the wall. Intel was not remotely competitive, even though they had ARM chips then. Even their ARM chips then were focused on performance, not on efficiency. And so the problem for Intel is, once you missed mobile,
Starting point is 02:18:12 you were going to lose your manufacturing lead at some point, because volume matters so much. And every time you move down the curve, your transistors get smaller, the costs increase massively. So you need volume to spread out the cost of building these fabs. Like, back then when I wrote this article, fabs cost 500 million; now they cost like 20 billion.
Starting point is 02:18:33 And this is over the course of like 12 years. So it was clear Intel was going to be in big trouble back then, and so I wrote: they need to build a foundry business. They need to figure out a way to build chips for other people, because in the long run, the cost of keeping up in manufacturing is not going to be tenable if you're not making mobile chips. And obviously they didn't — TSMC made all the mobile chips for everyone, and guess what happened? TSMC took over the
Starting point is 02:19:03 manufacturing lead. Now, there's lots of other things that went into why Intel stumbled, but at a structural level, what happened was actually inevitable once Intel missed mobile, unless they figured out a way to make mobile chips some other way — and they didn't do that. What's interesting is that it took so long to manifest. Part of mobile was you had an explosion in the cloud, because cloud and mobile actually go hand in hand, and Intel made all those cloud chips. Intel stock had an incredible run from the time I wrote that article for the next eight to nine years. And I felt like kind of a moron, because I'm out there saying this company is screwed if they don't do what I say; they didn't do what I said, and their stock went to the moon. But the way it actually caught up to them
Starting point is 02:19:50 has been in the past two to three years, where there's astronomical demand for AI chips, only TSMC can meet it, and Intel's not in the game. They're trying to shift to a foundry model, but they're so far behind. Being a foundry is being a customer service business. It's not being an Intel, where we tell you what to do, or we tell our design teams how to change their chips to accommodate our manufacturing needs. It's just totally different, and they needed a decade to learn how to do that. Had they changed in 2013, they would be ready today to capitalize on AI. And the counterexample here is Microsoft. Microsoft building
Starting point is 02:20:33 Azure — yes, it got them somewhat in the game with mobile and things like that, but AWS dominates in that space. But by virtue of building up Azure, they were prepared when the AI opportunity came along, and now Azure is sort of a big AI player. And, you know, I wrote about these two examples a few weeks ago in the context of Apple. I think the concern for Apple isn't the short term — we're gonna be using AI apps on our iPhones for quite a while. It's: are they going to be prepared for what's next, if they don't do some sort of reset and pivot here?
Starting point is 02:21:14 Oh, sorry, I didn't answer your question about Intel. Yeah. I mean, it's a decline, basically. Like, you know, just get as much cash flow out of this thing as you can while you wind down the business, for Intel? Yeah.
Starting point is 02:21:31 Yeah, that's what I'm hearing. Yeah, I mean, it doesn't feel like, oh, there's a silver bullet — just split the business and they're good. No, it's all bad. The problem with the business is the foundry needs volume, and it gets its volume from Intel. Sure. And AMD split their business a decade ago, and they had a very hard time for many years — they had very tense and difficult negotiations between the GlobalFoundries side and the AMD side. GlobalFoundries was AMD's manufacturing arm, and it wasn't until they got out of that and went to TSMC, and also completely overhauled their chip design business, that they got back to the business they were in — and then also Intel stumbled.
Starting point is 02:22:06 That certainly really helped them. Intel today — so, you split it up, and who's buying? Intel itself is fabbing some of its stuff with TSMC. Yeah. Who wants to buy Intel's foundry services? The problem here is TSMC is located in a country called Taiwan, which — you know what it is today, but five years ago people would have been like, what, Thailand? Which, by the way, was probably much better for Taiwan's security, when Americans thought it was Thailand.
Starting point is 02:22:33 But so there's a real national security element here. And it's just a really tough situation, because Intel is a failed company at this point. And the reason the failure is so total is because the aspects that drive their failure are the same things that drove their success. It was their arrogance. It was their sense that we're the best, that we will just win through manufacturing might and performance. And all those things work against becoming a good foundry, work against being a customer service organization, work against recognizing the fact that you're not going to make up for missing mobile
Starting point is 02:23:19 through manufacturing, which was their bet for years and years. You had to accept that you lost, and that's a tough place for companies. It's not like someone made a mistake. It's that they did what they did too well for too long. It was who they were, and they continued being who they were. Right. That's right.
Starting point is 02:23:38 But who else are you gonna get, if you want an alternative to TSMC? It's a very tough situation. Last question, and I think we'll be forced to have you make a slightly shorter answer, unfortunately — I wish we had hours to keep talking. I wanted to get your updated thinking on xAI and X, the combined entity. The last 24 hours
Starting point is 02:23:57 have been very chaotic. When the initial merger was announced, it made sense for financial reasons for some of the different stakeholders, but I wasn't fully sold on this idea. You're gonna force me to come up with takes that I generally just avoid — writing about Elon Musk companies, for self-sanity reasons, I think. I remember I wrote an article years ago, like when the Model Y was announced, and I was talking about, you know, this aspect of Tesla: what Elon Musk is very incredible at is sort of creating
Starting point is 02:24:31 reality out of thin air. He's like the ultimate memer — it's the way things used to work, but backwards. I remember I analogized it to protests: a critique of modern protests is they spin up very quickly, because social media makes it very possible, but there's no infrastructure under them, so they don't amount to anything. Whereas you go back to, like, the civil rights era: there was years of groundwork that went into, like, the Million Man March, you know, on Washington, D.C., and there was a structure in place that ultimately manifested
Starting point is 02:25:06 large crowds. But modern protests are the opposite: the largest crowds come at the beginning, and then it all falls apart — there's nothing in place. And there's something that makes it a challenge to write about anything Elon Musk related, which is you have all the social aspects — you have this bit about Tesla of creating reality.
Starting point is 02:25:28 The stock was buttressed for years by these true believers, even though the financial parts didn't make sense. You famously had these wars with the short sellers and all that sort of thing, and it worked. It basically manifested a market for the Model Y and then the Model X — not the Model X, what's the other one? The Three.
Starting point is 02:25:44 The Model 3, yeah. So it was the Model 3, sorry, when I wrote that article. The Model 3 and Model Y were massively successful, and all the people that were true believers got very rich, and congratulations to them. It's great. But it makes it almost impossible for someone doing what I do, who wants to look at structure and fundamentals. I can observe this effect happening, but you can't really say what's going to happen, or the effects of it, other than to say: this is interesting. And so I wrote that article, and then the SolarCity thing
Starting point is 02:26:15 came out, and he's like bailing out his brother-in-law or something, and I'm like, I can't write about this. Like, what am I going to say? It just doesn't make sense. So, fast-forward to X and xAI — there's a theoretical piece here. I think actually xAI would be an incredible acquisition target for a lot of companies if it wasn't saddled with X. So it feels like the end state is Twitter getting spun out again? That's kind of my take — it just ends up going back to Twitter, and it becomes the bluebird no one actually wants. Like, Twitter — there's never been a
Starting point is 02:26:55 company in the history of the world, probably, where the impact of the company is so completely and utterly divorced from its financial realities. Like, I think when Elon Musk bought it — and I assume that's continued through now — they'd had like one profitable quarter in their history. It's an unbelievably terrible business. And so I think it's probably weighing xAI down. Yes, I get the theory that Twitter data helps xAI — well, it helped yesterday. But you don't need to pay 43 million for Twitter — or 43 billion, I should say — to get it. So yeah, that was always my
Starting point is 02:27:29 position too. I don't think it helped yesterday, when MechaHitler emerged. But anyways, I wish we had a lot more time here. Thank you so much for stopping by. Yeah, no worries. What you guys are doing — I actually had the idea of doing a daily podcast ages ago. Classic example of: ideas don't count, execution does, and you guys did it. I think it's great. Well, you're always welcome here.
Starting point is 02:27:55 You're always welcome. Thanks so much. Thank you. We'll talk to you soon. Bye. Meta is going deeper with Ray-Ban maker EssilorLuxottica. I cannot pronounce that first word,
Starting point is 02:28:07 but people just call it Luxottica. And so Meta is taking a minority stake in EssilorLuxottica to accelerate its smart glasses ambitions, investing $3.5 billion in the iconic Ray-Ban manufacturer. We were talking to David Senra about the history of this company. It is fascinating.
Starting point is 02:28:28 I'm very excited for him to break it down for us a little bit more, hopefully on the show, and talk about it, because it's coming very, very soon. The founder has a crazy story — I think he grew up in an orphanage. Yep. And they didn't call him the pit bull, they called him something else, but yeah, he was an absolute savage. Apparently at one point he wanted to buy Oakley, and the founder and CEO of Oakley didn't wanna sell. And so the CEO of Luxottica acquired the largest retailer for Oakleys
Starting point is 02:28:59 and just pulled them off the shelf, and basically started selling knockoff Oakleys even though they were trademarked. And then eventually the Oakley CEO came around and said, okay, you're cratering my revenue, let's do a deal. So, absolute dog. We'll have Senra break it down. What do you make of this idea that, like, Apple, when they make a device, they redefine and very much standardize that particular market? So when they come out with watches, there are a number of styles of watch: there's the dress watch, the sports watch, the steel sports watch, there's the dive watch, there's the, you know, Casio style. There's a whole bunch of different styles, right? Apple comes in and just says, there's only one style, the Apple Watch, and they become the number
Starting point is 02:29:32 one style. And they give you some variance in the band.
Starting point is 02:29:52 In the band, little stuff here and there. And they were doing partnerships. I think they did an Hermes band for a while. They've done a couple other things, but it's been mostly Apple's design language on your wrist. Whereas with the Meta Ray bands, they're saying, and now the meta Ray-Bans, they're saying, and now the meta Oakleys, they're saying, you like the look of Ray-Bans,
Starting point is 02:30:10 we're just putting our technology into the style you like. We're not going to try and create a new iconic style that says meta like Apple says headphones. And they're just kind of like, they're very, very different strategies. And so it feels like well so I think this is strategic this doesn't mean that this doesn't mean that meta can't develop their own styles in time but I think it's very smart to say hey we don't need to innovate on aesthetics and
Starting point is 02:30:38 the sort of silhouettes right there's classic silhouettes Ray-Ban silhouette is Lindy these silhouettes are very Lindy Yeah, and they're different markets the way they're different Luxata had Luxotica has I think Garrett late and like a bunch of other like Brands under it. So they're basically saying like through this we can deliver Luxotica has Brands in every for every demo that you that meta could possibly want right as a hundred billion dollar, you know company And so I think it's very smart. I think
Starting point is 02:31:09 Apple like you said will probably take a drastically different approach in terms of like standardizing around something and and that will say something but accessories like I wear just such a such a personal decision and such an expression of Of who somebody is that I think that, uh, you want to give people max amount of optionality. Yeah. It's just interesting is like, you could have said that about watches like you before the Apple watch, you could have said that, well, you know,
Starting point is 02:31:35 somebody who wears a dress watch wants a dress watch. Somebody who wants a steel sports watch, somebody who wants a G shock is G shot. It's like the G shot. You say G shock and you just immediately think like, you know, special operations guy or Jocko willing listener like that, that, that, it's like a durable, rugged thing. You say, you know, Rolex, that's a different thing. Right. Uh, and, and Apple was able to standardize around it. And it's interesting that, that, uh,
Starting point is 02:32:01 Metta hasn't been trying to do that. And instead they're, they're focusing on partnership here. It's just like, it's just an uncommon strategy, but it seems to be working. There's another post in here. I don't know if we have it here, but someone was talking about it. I'm trying to think of a new, like the key thing is, Apple's great at innovating at multiple layers,
Starting point is 02:32:20 but like generally it's very hard to try to deliver hits in like two specific areas like aesthetics and design Yeah, and then simultaneously in something that's basically a fashion product and simultaneously deliver the technology Yeah, so I don't know. Yeah, Jack Ray here says after wearing Ray-Ban meta Wayfarer glasses for a few weeks I feel kind of naked wearing regular sunglasses I found three use cases that are hard to roll back. One, spontaneous photos of my kids when we're out and about. Any cool pose that has a half-life of three seconds
Starting point is 02:32:52 I can now capture instead of pulling out your phone. Two, optionality of music or hands-free phone calls without digging around for earbuds. And three, knowledge-seeking chat when I'm walking around, usually for simple factual things. That's exactly what I experienced when I was demoing the Ray-Ban Meta Wayfarers. Turns out there's more questions I feel like asking
Starting point is 02:33:13 when there's no friction. I'm very excited for multimodal and real-time translation use cases too. They're only gonna get better. But I think those three are maybe enough. And I think with a lot of these products, just having one killer use case, like just replacing the headphones
Starting point is 02:33:31 for hands-free phone calls or something — like, if you can just become someone's daily solution for music, that's enough to sell the product, and then sell them another one the next year when it upgrades a little bit, keep them as an active user, and roll that out for a long time. And then if they can do the other stuff, that's great too.
Starting point is 02:33:48 But you just need to nail the single use case. And so, yeah, there's gonna be cool stuff, but it's fascinating to see them roll this out. And it's also interesting how behind the ball it feels like everyone else is now. Like, Google was talking about getting into this space. We saw some launches at I/O.
Starting point is 02:34:04 Haven't actually seen any of those in the wild. Haven't seen anyone really talking about those. Apple, it feels like this would be something that they could jump forward to with a stylish pair of eyeglasses with some basic functionality. Just take what's in the AirPods, take a camera. They could do something cool,
Starting point is 02:34:20 but they're just much slower than... Yeah, the other thing with eyewear that's different, or that's gonna be like a new challenge for manufacturers, is that there's so many different situations. I might want to wear something like a Ray-Ban or a Jacques Marie Mage silhouette one day, and then that same afternoon I'm wearing Oakleys when I'm playing tennis or something like that.
Starting point is 02:34:53 And so there's a lot more like swapping, and then, obviously, depending on the price, you could maybe wind up selling people multiple pairs and have an indoor pair and an outdoor pair. It's kind of inconvenient. I feel like there's got to be a better solution to that, but I don't know. What are those? Yeah, the bifocals. Yeah, where they can like flip down. There's transition lenses, but those never fully work all the way, but then there's the flip-down ones, clip-ons, there's all sorts of different solutions. The big news is that the third browser war has begun. Google stock has dropped on the news that OpenAI
Starting point is 02:35:30 is planning to launch a Google Chrome competitor within just weeks. And this is very interesting timing because- It's time to browse. Yeah, time to browse. Certainly makes sense to become deeper, more deeply integrated into the user's life. Makes a ton of sense.
Starting point is 02:35:46 There's a ton of benefits that come from having a web browser. What was interesting is, we can go into what Google actually launched, or what OpenAI is talking about launching, but this news, this scoop leaked the same day that Arvind from Perplexity announced that they're finally releasing their next big product
Starting point is 02:36:07 after launching Perplexity, Comet, the browser that's designed to be your thought partner and assistant for every aspect of your digital life, work and personal. And so Perplexity launched this on July 9th, and then OpenAI, the scoop goes out via Reuters the same day. And so this feels like very much like, let's not let
Starting point is 02:36:26 Perplexity get a bunch of attention and drive a bunch of people to start daily driving Comet, the browser, because even though we're not ready to launch our competitor, we want to get it. Well, I mean, Arvind was on the show talking about Comet over a month ago, and he said it was really important to the business. This was a big bet that they're making.
Starting point is 02:36:45 Yeah. And I'm sure both companies are racing to be the first to launch, but the browser from The Browser Company also launched, or they're still in beta, but they launched like a month ago or something like that. So this is, you know, you're not going to be the first. Oh, they launched a month ago with the Dia browser. That's interesting, because I saw Riley Brown also posted about the cursor-for-the-web browser, the Dia browser. And I thought the Dia browser launched the same day,
Starting point is 02:37:08 but I guess it had launched earlier. Yeah, so anybody that was an Arc user can download Dia today and chat with their tabs. But interestingly enough, Perplexity's browser and OpenAI's browser are both built on Chromium, the same open source project that underpins Google Chrome and Microsoft Edge. So the cool thing here, that means that they're compatible
Starting point is 02:37:32 with existing Chrome extensions. Oh, interesting. OK, that's cool. Yeah, I want to talk to more people who were active in tech during the earlier browser wars. The first browser war was Netscape Navigator versus Microsoft Internet Explorer. This was in the mid-90s to early 2000s.
Starting point is 02:37:55 Netscape was super dominant and everyone loved Netscape. It was originally the Mosaic browser. This is the Marc Andreessen project. But Microsoft bundled Internet Explorer with Windows 95, and the distribution was so powerful that Internet Explorer actually wound up winning and became really, really dominant. But then there was this lawsuit and it went back and forth.
Starting point is 02:38:14 But then basically by the early 2000s, Internet Explorer had over 90% market share, but then they got kind of lazy and stagnant, apparently. I mean, I'm not exactly sure what happened, but then there was a lot more competition. So Firefox, which was, I believe, like a spin-out of Netscape, or kind of like some of the same heritage there, began getting traction, and then Google Chrome launched in 2008 and leapfrogged everyone. And Google Chrome was really focused on, like, speed. It was the fastest, and they did a whole bunch of work to optimize JavaScript so the pages would just load faster and run better
Starting point is 02:38:49 on pretty much every computer that you had. And then they had the open source project with Chromium, and so they were able to kind of standardize the entire industry. And so everyone's always been trying to draw analogies between the browser wars and the LLM wars, and what's the role of open source in that, like is open source a strategy to wind up maintaining
Starting point is 02:39:09 your dominance, how much does distribution matter? Chrome was probably pretty easy to distribute because every single person was visiting Google just every day searching. And so you just put this bar, hey, wanna switch to the faster browser, and people just do it, because you can have basically, you know, billions of ad impressions on your product every day. It'll be interesting to see if ChatGPT can get people to download their own browser on
Starting point is 02:39:35 desktop. I mean, I'm using ChatGPT on desktop in Chrome all the time. Which ChatGPT model would you want to use as a default search engine? That's the hard part, because I always run into this problem where it defaults to o3 Pro, but that takes 10 minutes. And so then I have to go to 4o. And then if I'm in an o3 Pro flow and I'm talking to o3 Pro and I let it cook for 10 minutes, it gave me a great answer.
Starting point is 02:40:00 But then I wanna just be like, okay, just like clean this up a little bit, or summarize this, or do some bullet points. I want 4o to do that, so I have to switch over. So I don't know, I would imagine I'd go for 4o as the default because I want speed, but even 4o could probably be faster before it truly replaces those. Browsers are very fast; they've spent a very long time being fast. Yeah, and I could imagine them doing a similar project to, I believe it was, the V8 JavaScript engine. They sent this team out to, I want to say, Iceland or something.
Starting point is 02:40:33 They basically sent like a bunch of engineers to like an off-site, and they were like, just go optimize JavaScript for like a month, just go focus on this for like a month or months and come back when it's done. Like, you have no other responsibilities than just optimizing this compiler. And they came back with the V8 JavaScript engine and created this whole Node.js boom. People were running JavaScript on the server then.
Starting point is 02:40:55 And I could see Google kind of doing something similar where they're like, okay, we have Gemini. It's good at looking stuff up. It's a good knowledge retrieval engine, go figure out how to make it load all the tokens for the full response in 100 milliseconds. And that would be very, very cool. And I wonder if that's like a uniquely Google advantage.
Starting point is 02:41:16 Tyler, you look something up. Yeah, it was in Denmark. Denmark, okay, I was close, I was close. Yeah, I wasn't sure if it was Finland or Iceland. Yeah, the interesting thing here, I'm realizing that tabs are definitely a light lock-in to browsers.
Starting point is 02:41:31 It's not just the default, but if you have six to 10 tabs that you just had open for a really long time, and they're like from a bunch of different things, and you can't exactly remember what they were if you had to list them all off, but you know, I personally end up using tabs as like somewhat of a to-do list. And so if you're spinning up a new browser and you don't have your tabs, it's like, oh,
Starting point is 02:41:52 do I want to just like get rid of my tab stack? I have a bunch of tabs that have just stayed there for years, and it's basically like a mini operating system, right? It's like different apps; it might be a Google Sheet or something else. So there's very real lock-in. I could bring all those tabs over, but I have to then log in to a bunch of different services, and so it's really, really hard to actually win here. I wonder if anyone's using, you know, in Google Chrome you can actually change the default search bar,
Starting point is 02:42:26 to, you know, when you type in the search bar and you just type words, it just Google searches it, you can change that to search ChatGPT. Yeah, you can pass in a query parameter and it can just do that, but I haven't heard of anyone actually doing that. I used to be such a power user of Chrome, I used to have different code words basically, so if I typed like I space and then a query,
Starting point is 02:42:43 it would go to IMDB and search that specifically. So you could have Chrome route to any specific search, so you could press like Y space and it would search Yelp, or anything else. But I don't know if people are doing that with Google, with ChatGPT. I think people mostly just like command T
Starting point is 02:43:07 and then hang out in ChatGPT. Well, we'll have to ask Chris in 15 minutes to get an update on the browser wars, because he was an early investor in. I know one of those tabs that you have pinned right now. What's that? Attio. Of course.
Starting point is 02:43:23 Customer relationship magic. Attio is the AI native CRM that builds, scales, and grows your company to the next level. You can get started for free. I've had Attio open for thousands of hours at this point. Yeah. So Signal kind of breaks it down with OpenAI launching the web browser. He says, this is the oldest play in tech: find product market fit with a single killer use case, then vertically integrate and horizontally expand until you control the interface layer itself, app, platform. Once you own the interface,
Starting point is 02:43:49 you own the defaults. Welcome to the next generation of browser wars. Yeah, what's interesting is there, like Sam Altman at OpenAI, and just the fact that OpenAI is a company, like there is kind of a mandate to, like, vertically and horizontally integrate, figure out code, figure out research, figure out devices.
Starting point is 02:44:07 But every company wants to do everything, but then sometimes they run up against barriers. Like there was a time when Google was like, we want to win social networking and we want to beat Facebook and we're going to launch a direct Facebook competitor. And they did, and it didn't go well and then they shelved it and then they wound up producing trillions of dollars in market cap just doing the thing that they do
Starting point is 02:44:31 great. And so the question is, like, the surface area of OpenAI. They have to explore, they have to experiment. It would be stupid not to see if they could get a browser and a device and a chip and a nuclear reactor and everything and sand, get the sand, get everything. But there's no guarantee that they will win the entire vertical stack and that it will be the one company, right? I think my question is, like, is OpenAI's browser gonna be an entirely new app
Starting point is 02:45:00 other than their existing mobile app, or their desktop app? Yeah, that is interesting. Because if they have to get people to re-download a separate app, then that's like an entirely new thing. You know, they have a good flywheel, they have a bunch of impressions.
Starting point is 02:45:15 It is interesting that they wouldn't just evolve the apps that they already have installed. Perplexity too, I don't know if Perplexity is planning to release this as like a new standalone app or it will be in the Perplexity mobile app. But yeah, I mean, I think Comet's like its own thing, because we were looking to download it and we needed a code; you can't just get it if you're just on Perplexity. But I don't know. All I know is that you should go to fin.ai, the number one AI agent for customer service,
Starting point is 02:45:46 number one in performance benchmarks, number one in competitive bake-offs, number one ranking on G2. So Arvind breaks down his philosophy of Comet, the browser that he's dropping from Perplexity. He says, you can either keep waiting for connectors and MCP servers for bringing in context from third-party apps,
Starting point is 02:46:05 or you can just download and use Comet and let the agent take care of browsing your tabs and pulling relevant info. It's a much cleaner way to make agents work. So that is interesting. So I wonder how much, like, puppeteering will be in this, because ChatGPT and OpenAI have Operator, which operates a Chromium front,
Starting point is 02:46:27 like a headless web browser basically, but you can actually see it working and it's clicking things. And so there's also the value of, like, the training data. If you're getting people using all these websites, you have all this training data of, like, okay, they clicked on the blue button,
Starting point is 02:46:42 they clicked on the green button, they saw this, this is how they dealt with this form, this is how they dealt with that form. And so that feels like very, very valuable data if you can get it. So it's probably worth duking it out, even if it takes a long time. For sure. I do wonder where else they will plug in, like Cluely operates at like a higher level of abstraction with like the screen scraping, and
Starting point is 02:47:06 I wonder if we'll hear rumbles about either Perplexity or OpenAI thinking about, like, moving up the stack to that level. I'm not exactly sure.
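The training-data idea discussed above — recording which buttons users click and how they handle forms — can be sketched as a simple interaction-event log. This is a toy illustration; the schema, class names, and fields here are hypothetical, not anything OpenAI or Perplexity has actually published about their browsers:

```python
from dataclasses import dataclass, field, asdict
from typing import Optional
import json
import time


@dataclass
class InteractionEvent:
    """One recorded browser action: what the user did, on which element, on which page.

    Hypothetical schema for illustration only.
    """
    url: str
    action: str                  # e.g. "click", "type", "submit"
    selector: str                # CSS selector of the element acted on
    value: Optional[str] = None  # text entered, if any (would need heavy redaction in practice)
    ts: float = field(default_factory=time.time)


class TrajectoryLog:
    """Accumulates a session's events, which could later serve as agent training data."""

    def __init__(self) -> None:
        self.events: list[InteractionEvent] = []

    def record(self, event: InteractionEvent) -> None:
        self.events.append(event)

    def to_jsonl(self) -> str:
        # One JSON object per line, a common format for training corpora.
        return "\n".join(json.dumps(asdict(e)) for e in self.events)


# Example session: "they clicked on the blue button, they dealt with this form"
log = TrajectoryLog()
log.record(InteractionEvent("https://example.com/checkout", "click", "button.blue"))
log.record(InteractionEvent("https://example.com/checkout", "type", "input#email",
                            value="user@example.com"))
print(len(log.events))       # 2
print(log.events[0].action)  # click
```

The point of the sketch is just that every interaction becomes a (page, action, element) record, which is why a browser that sees real users working through real forms would accumulate data an operator-style agent could learn from.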