All-In with Chamath, Jason, Sacks & Friedberg - E167: Nvidia smashes earnings (again), Google's Woke AI disaster, Groq's LPU breakthrough & more

Episode Date: February 23, 2024

(0:00) Bestie intros: Banana boat! (2:34) Nvidia smashes expectations again: understanding its terminal value and bull/bear cases in the context of the history of the internet (27:26) Groq's big week,... training vs. inference, LPUs vs. GPUs, how to succeed in deep tech (49:37) Google's AI disaster: Is Google too woke to function as search gets disrupted by AI? (1:17:17) War Corner with Sacks Follow the besties:  https://twitter.com/chamath https://twitter.com/Jason https://twitter.com/DavidSacks https://twitter.com/friedberg Follow the pod: https://twitter.com/theallinpod https://linktr.ee/allinpodcast Intro Music Credit: https://rb.gy/tppkzl https://twitter.com/yung_spielburg Intro Video Credit: https://twitter.com/TheZachEffect Referenced in the show: https://www.google.com/finance/quote/NVDA:NASDAQ https://twitter.com/KobeissiLetter/status/1760680756689748478 https://investor.nvidia.com/news/press-release-details/2024/NVIDIA-Announces-Financial-Results-for-Fourth-Quarter-and-Fiscal-2024 https://www.statista.com/statistics/1120484/nvidia-quarterly-revenue-by-specialized-market https://www.google.com/finance/quote/SPY:NYSEARCA?comparison=NASDAQ%3AQQQ&window=5D https://www.marketwatch.com/story/wall-street-keeps-likening-nvidia-to-dot-com-era-cisco-is-the-comparison-justified-eed307c1 https://twitter.com/JayScambler/status/1759372542530261154 https://twitter.com/chamath/status/1760343973632291212 https://x.ai https://twitter.com/TheTranscript_/status/1760436281438314545 https://artificialanalysis.ai https://www.contraline.com/product https://www.cafexapp.com/commercial https://blog.google/technology/ai/google-gemini-ai https://blog.google/products/gemini/bard-gemini-advanced-app https://workspace.google.com/blog/product-announcements/gemini-for-google-workspace https://twitter.com/benthompson/status/1760452419627233610 https://twitter.com/Patworx/status/1760189582870536408 https://twitter.com/micsolana/status/1760163801893339565 
https://ai.google/responsibility/principles https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback https://twitter.com/Jason/status/1760780139476992062 https://twitter.com/paulg/status/1760416051181793361 https://twitter.com/chamath/status/1760729719094563019

Transcript
Starting point is 00:00:00 All right, everybody, welcome back to your favorite podcast of all time, the All-In Podcast, episode 160-something. With me again, Chamath Palihapitiya. He's the CEO of a company and invests in startups, and his firm is called Social Capital. We also have David Friedberg, the Sultan of Science. He's now a CEO as well. And we have David Sacks from Craft Ventures, in some understaffed hotel room somewhere. How are we doing, boys? Good. Thank you. This is an odd intro. Can your intro be any more low energy and dragged out? I'm sick. What do you want me to do?
Starting point is 00:00:37 Geez. I'm doing a show. Just trying to fake the effort. All right. Here, give me one more shot. Watch this. You want professionalism? Fake the effort. Come on. Here we go. I'll show you guys professionalism. Is that Binaca? What was that? That's an okay. Oh, welcome to the All-In Podcast, episode 167, 168, with me, of course, the Rain Man himself, David Sacks, the dictator-chairman, Chamath
Palihapitiya, and our Sultan of Science, David Friedberg. How are we doing, boys? Great. How are you? Is that high energy enough for you? Is it 167 or 168? I don't know. Who cares? We should at least get you to know the episode number.
Starting point is 00:01:31 Who cares? Unfortunately or fortunately, we're going to be doing this thing forever. The audience demands it. It doesn't matter. This is like a Twilight Zone episode. We're going to be trapped in these four bubbles forever. You know, like Superman?
Starting point is 00:01:43 It is. It's like the guys trapped in the glass, uh, Zed, was that Zed? Zod. Kneel before Zod. And he spun through the universe in that pane of glass forever, for
Starting point is 00:01:56 infinity, until Superman took the nuclear bomb out of the Eiffel Tower and threw it into space and blew it up. And freed them. My background today, I think I'm gonna have to change now that you've referenced this important scene. That was the best moment of that movie, J-Cal. Where Terence Stamp says kneel to the president, and the president says, oh God.
Starting point is 00:02:17 Yes. And then Terence Stamp's like, oh God? Zod. Not God, Zod. Kneel before Zod. That was Superman 2 or 3? Yeah, Superman 2 is pretty much the best. You know, like Empire Strikes Back, like Terminator 2, it's always the second one that's the best one. All right, everybody, we got a lot to talk about today.
Starting point is 00:02:36 Apologies for my voice. A little bit of a cold. NVIDIA blew the doors off their earnings for the third straight quarter. Shares were up 15% on Thursday, representing a nearly $250 billion jump in market cap. So let's just let that sink in for a second. This is the largest single-day gain in market cap: $247 billion added. Previously, Meta did something similar earlier this year.
Starting point is 00:03:02 Remember, everybody was down on that stock because they were doing all the crazy stuff with Reality Labs, and then they got focused and laid off 20,000 people. They added $196 billion. In other words, they added roughly two Airbnbs to their valuation. But let's just get to the results.
Starting point is 00:03:17 The results are absolutely stunning, dare I say unprecedented. Q4 revenue, 22.1 billion. That's up 22% quarter over quarter, up 265% year over year. The net income was 12.3 billion, 9x year over year. And the gross margin of 76% was up about two points quarter over quarter and 12.7 points year over year. But look at this revenue ramp. This is extraordinary. Q1 of 2024, this juggernaut starts, and it does not stop, and it doesn't look like it's going to stop. It just runs up from 7 billion all the way to 22 billion in revenue for the quarter. Absolutely extraordinary. And
Starting point is 00:03:59 if you want to know why this is happening, why is NVIDIA putting up these kind of numbers, this chart explains everything. This is all about data centers. Obviously, if you heard of NVIDIA before, the AI boom, it was gaming, professional visualizations, I think people making movies and stuff like that. Autos used NVIDIA for self-driving, that kind of stuff. But if you look at this chart, you'll see data centers just starting.
Starting point is 00:04:23 Four quarters ago, starts to ramp up as everybody builds out the infrastructure for new data centers to deal with generative AI. So, just to add one point here, Jason. So, what you can see is that NVIDIA was around for a long time and it was making these chips, these GPUs as opposed to CPUs. And they were primarily used by games and by virtual reality software because GPUs are better, obviously, at graphical processing. They use vector math to create these 3D worlds. And this vector math that they use to create these 3D worlds is also the same vector math
Starting point is 00:05:04 that AI uses to reach its outcomes. So with the explosion of LLMs, it turns out that these GPUs are the right chips that you need for these cloud service providers to build out these big data centers to serve now all of these new AI applications. So NVIDIA was in the perfect place at the perfect time, and that's why this exploded. And what you're seeing is the build out of this new cloud service infrastructure
Starting point is 00:05:32 for AI. Yeah. And also, helping the stock is the fact that they bought back $2.7 billion worth of their shares as part of a $25 billion buyback plan. But this company is firing on all cylinders. Revenue is obviously ripping as people put in orders to replace all of the data centers out there or at least augment them with this technology with GPUs, A100s, H100s, etc. The gross margin has been expanding. They have huge profits and they're still projecting more growth in Q1, around $24 billion, which would be a 3x
Starting point is 00:06:06 increase year over year. And this obviously has made the entire market rip as Nvidia goes. So does the market right now. And the S&P 500 and NASDAQ are at record highs at the time of this taping. Chamath, your general thoughts here on something I don't think anybody saw coming, except for you and your investment in GROC maybe, and a couple of others. I think what I would tell you is that the bigger principle, and we've talked about this a lot, Jason, is that in capitalism, when you over earn for enough of a time, what happens is competitors decide to try to compete
Starting point is 00:06:44 away your earnings. In the absence of a monopoly, the happens is competitors decide to try to compete away your earnings. In the absence of a monopoly, the amount of time that you have tends to be small and it shrinks. So in the case of a monopoly, for example, take Google, you can over earn for decades. And it takes a very, very long time for somebody to try to displace you. We're just starting to see the beginnings of that with things like perplexity and other services that are chipping away at the Google Monopoly. But at some point in time, all of these excess profits are competed away.
Starting point is 00:07:15 In the case of Nvidia, what you're now starting to see is them over earn in a very massive way. So the real question is who will step up to try to compete away those profits? The old Bezos quote, right? Your margin is my opportunity. And I think we're starting to see, and you've mentioned Grock, who had a super viral moment, I think this week, but you're starting to see the emergence of a more detailed understanding of what this market actually means. And as a result, who will compete away the inference market, who will compete away the training market, and the economics of that are just becoming known to now more and more people.
Starting point is 00:07:54 Freiberg, your thoughts, we were talking, I think, was last week or the week before about the possibility of NVIDIA being a $10 trillion company, the largest company in the world. What are your thoughts on these spectacular results? And then Schmott's point, everybody is watching this going, hmm, maybe I can get a slice of that pie. And maybe I can create a more competitive offering. Obviously, we saw Sam Holtman rumored to be raising $7 trillion, which feels like a fake number. It feels like that's maybe the market size or something. But your thoughts here? I don't think anything's changed on the NVIDIA front. There's this accelerated compute buildout underway in data centers.
Starting point is 00:08:30 Everyone's building infrastructure and then everyone's trying to build applications and tools and services on top of that infrastructure. The infrastructure buildout is kind of the first phase. The real question ultimately will be, does the initial cost of the infrastructure exceed the ultimate value that's going to be realized on the application layer? In the early days of the internet, a lot of people were buying Oracle servers. They were like 3,000 bucks a server. And they were running these Oracle servers out of an internet-connected data center.
Starting point is 00:09:01 And it took a couple of years before folks realized that for large scale distributed compute applications, you're better off using cheaper hardware, cheaper server racks, cheaper hard drives, cheaper buses, and assuming a shorter lifespan on those servers and you could cycle them in and out, and you didn't need the redundancy, you didn't need the certainty, you didn't need the redundancy. You didn't need the certainty. You didn't need the runtime guarantees.
Starting point is 00:09:27 And so you could use a lower cost, higher failure rate, but much, much net lower cost kind of approach to building out a data center for internet serving. And so the Oracle servers didn't really take the market. And early on, everyone thought that they would. So I think Chimaz's point is right. Now, NVIDIA has been at this for a very long time. And the real question is how much of an advantage do they have, particularly that there is this need
Starting point is 00:09:51 to use FABs to build replacement technology. So over time, will there be better solutions that use hardware that's not as good, but the software figures out and they build new architecture for running on that hardware in a way that kind of mimics what we saw in the early days of the build out of the internet. So TBD, right? The same is true in switches, right?
Starting point is 00:10:10 So in networking, a lot of the high-end, high-quality networking companies got beaten up when lower-cost solutions came to market later. And so they looked like they were going to be the biggest business ever. I mean, you could look at Cisco during the early days of the internet build out, and everyone thought Cisco was the picks and shovels of the internet, and they were going to be the biggest business ever. I mean, you could look at Cisco during the early days of the internet build out and everyone thought Cisco was the picks and shovels of the internet and they were going to make all the values going to include a Cisco. So we're kind of in that same phase right now with Nvidia. The real question is, is this going to be a much harder hill to compete on than we've ever seen, given the
Starting point is 00:10:39 development cycle on chips and the requirement to use these fabs to build chips? It may be a harder hill to get up. Sex. So we'll see. Your thoughts, you think we're getting to the point where maybe we'll have bought too many of these, built out too much infrastructure and it will take time for the application layer as Freerberg was alluding to to monetize it? Well, I think the question everyone's asking right now is, are these results sustainable?
Starting point is 00:11:02 Can NVIDIA keep growing at these astounding rates? Will the buildout continue? The comparison everyone's making is to Cisco. There's this chart that's been going around overlaying the NVIDIA stock price on the Cisco stock price. You can see here the orange line is NVIDIA and the blue line is Cisco. It's almost like a perfect match. Now, what happened is that at a similar point in the original build-out of the internet, of the dot-com era,
Starting point is 00:11:33 you had the market crash at the end of March of 2000. Cisco never really recovered from that peak valuation. I think there's a lot of reasons to believe Nvidia is different. One is that if you look at Nvidia's multiples, they're nowhere near where Cisco's were back then. So the market in 1999 and early 2000 was way more bubbly than it is now. So Nvidia's valuation is much more grounded in real revenue, real margins, real profit. Second, you have the issue of competitive moat. Cisco was selling servers and networking equipment. Fundamentally, that equipment was much easier to copy and commoditize than GPUs.
Starting point is 00:12:18 These GPU chips are really complicated. I think Jensen made the point that their Hopper 100 product, he said, you know, don't even think of it just like a chip. There's actually 35,000 components in this product and it weighs 70 pounds. This is more like a mainframe computer or something that's dedicated to processing. Yeah, somewhere between a rack server and the entire rack. Yeah, it's giant and it's heavy and it's complex. It does say something here, Chamath I think about how well positioned big tech is in terms of seeing an opportunity and quickly mobilizing to capture that opportunity.
Starting point is 00:13:00 These servers are being bought by, you know, people like Amazon, I'm sure Apple, obviously Facebook Meta. I don't know if Google is buying them as well. I would assume so, Tesla. So everybody's buying these things and they have tons of cash sitting around. It is pretty amazing how nimble the industry is and this opportunity feels like everybody is looking at it like mobile and cloud. I have to get mobilized quickly to not get disrupted.
Starting point is 00:13:27 You're bringing up an excellent point. And I would like to tie it together with Friedberg's point. So at some point all of this spend has to make money, right? Otherwise, you're going to look really foolish for having spent 20 and 30 and $40 billion. So Nick, if you just go back to the revenue slide of NVIDIA, I can try to give you a framing of this, at least the way that I think about it. So, if you look at this, like what you're talking about is, look, who is going to spend $22.1
Starting point is 00:13:55 billion? Well, you said it, Jason, it's all a big tech. Why? Because they have that money on the balance sheet sitting idle. But when you spend $22 billion, their investors are going to demand a rate of return on that. And so if you think about what a reasonable rate of return is, call it 30, 40, 50% and then you factor in, and that's profit, and then you factor in all of the other things that need to support that, that $22 billion of spend needs to generate probably $45 billion of revenue. And so, Jason, the question to your point and to Friedberg's point, the $64,000 question is, who in this last quarter is going to make $45 billion on that $22 billion of spend?
Starting point is 00:14:37 And again, what I would tell you to be really honest about this is that what you're seeing is more about big companies muscling people around with their balance sheet and being able to go to NVIDIA and say I will give you committed pre-purchases over the next three or four quarters and less about here is a product that I'm shipping that actually makes money, which I need enormous more compute resources for. It's not the latter. Most of the apps, the overwhelming majority of the apps that we're seeing in AI today are toy apps that are run as proofs of concept and demos and run in a sandbox.
Starting point is 00:15:22 It is not production code. This is not, we've rebuilt the entire autopilot system for the Boeing and it's now run with agents and bots and all of this training. That's not what's happening. So it is a really important question. Today the demand is clear. It's the big guys with huge gobs of money. And by the way, Nvidia is super smart to take it because they can now forecast demand for the next two or three quarters. I think we still need to see the next big
Starting point is 00:15:54 thing. And if you look in the past, what the past has showed you, it's the big guys don't really invent the new things that make a ton of money. It's the new guys who, because they don't have a lot of money and they have to be a little bit more industrious, come up with something really authentic and new. Yeah, constraint makes for great art. Yeah. We haven't seen that yet. So I think the revenue scale will continue for like the next two or three years probably for Nvidia. But the real question is what is the terminal value? And it's the same thing that SAC showed in that Cisco slide. People ultimately realized that the value was gonna go
Starting point is 00:16:30 to other parts of the stack, the application layer. And as more and more money was accrued at the application layer of the internet, less and less revenue multiple and credit was given to Cisco. And that's nothing against Cisco because their revenue continued to compound Right and they did an incredible job, but the valuation got to so freeberg if we're looking at this chart The winner of Netflix the winner of the Cisco chart might in fact be somebody like Netflix They actually got you know hundreds of millions of consumers to give them cash Facebook and then you have Google and Facebook as well
Starting point is 00:17:04 Generating all that traffic. And then YouTube, of course. But who do you see the winner here as in terms of the application layer? Who are the billion customers here who are going to spend $20 a month, $5 a month, whatever it is? So, I mean, let me just start with this important point. If you look at where that revenue is coming from to Chats Point, it's coming from big cloud service providers. So Google and others are building out clouds that other application developers
Starting point is 00:17:35 can build their AI tools and applications on top of. So a lot of the build out is in these cloud data centers that are owned and operated by these big tech companies. The 18 billion of data center revenue that NVIDIA realized is revenue to them, but it's not an operating expense to the companies that are building out. So this is an important point on why this is happening at such an accelerated pace. When a big company buys these chips from N, they don't have to from an accounting basis market as an expense in their income statement.
Starting point is 00:18:08 It actually gets booked as a capital expenditure in the cash flow statement. It gets put on the balance sheet and they depreciate it over time. And so they can spend $20 billion of cash because Google and others have $100 billion of cash sitting on the balance sheet. And they've been struggling to find ways to grow their business through acquisitions. One of the reasons is they, there aren't enough companies out there that they can buy at a good multiple that can give them a good increase in profit. The other one is that antitrust authorities are blocking all of their acquisitions.
Starting point is 00:18:38 And so what do you do with all that cash? Well, you can build out the next gen of cloud infrastructure and you don't have to take the hit on your P&L by doing it. So it ends up on the balance sheet, and then you depreciate it over typically four to seven years. So that money gets paid out on the income statement at these big companies over a seven-year period. So there's a really great accounting and M&A environment driver here that's causing the
Starting point is 00:19:04 big cloud data center providers to step in and say, this is a great time for us to build out the next generation of infrastructure that could generate profits for us in the future because we've got all this cash sitting around. We don't have to take a P&L hit. We don't have to acquire a cash burning business. And frankly, we're not gonna be able to grow through M&A
Starting point is 00:19:21 because of antitrust right now anyway. So there's a lot of other motivating factors that are causing this near term acceleration as they're trying to find ways to grow. Yeah. And all of this, I know that was an accounting point, but I think it's a really important motivated one. If you, if a hundred billion gets spent this year, you divide it by four, 25 billion in revenue would have to come from that or something in that range. Yeah. And so, Saks, any guesses? Do you have to just keep in mind, I think freeberg, what you said is very true for GCP spend,
Starting point is 00:19:46 but not necessarily for Google spend. It's true for AWS spend, but not necessarily for Amazon spend. And it's true for Azure spend, not true for Microsoft spend. And it's largely not true for Tesla and Facebook because they don't have clouds. So I think the question to your point
Starting point is 00:20:02 that Ben, for obvious reasons, Nvidia doesn't disclose it is, what is the percentage of that 21 billion that just went to those cloud providers that they'll then expose to everybody else versus what was just absorbed? Because at Facebook, Mark had that video about how many H100s. That's all for him. Right. But it is still capitalized as my point. So they don't have to book that as an expense. It sits on the balance sheet. Yeah, sure, of course. And they earn it down over time. You're helping to explain why these big cloud service
Starting point is 00:20:32 providers are spending so much on the data center build out. Because they have so much cash, because they're very profitable and there's nowhere else to put the money. Right, well, so that would seem to indicate that this is more in the category of one time build out than sustainable ongoing revenue. I think the big question is the one that Jamath asked, which is, what's the terminal value of NVIDIA? A simple framework for thinking about that is, what is the total
Starting point is 00:20:56 addressable market or TAM related to GPUs, and then what is their market share going to be? Right now, their market share is something like 91%. That's clearly going to come down, but their moat appears to be substantial. The Wall Street analysts, I've been listening to think that in five years, they're still going to have 60-something percent market share. They're going to have a substantial percentage of this market or this TAM. Then the question is, I think with respect to TAM, is, what is one-time build-out versus steady-state? Clearly, there's a lot of build-out happening now that's almost like a backfill of capacity that people are realizing they need. Even the numbers you're seeing this quarter kind of understate it, because, first of all,
Starting point is 00:21:42 NVIDIA was supply constrained. They cannot produce enough chips to satisfy all the demand. Their revenue would have been even higher if they had more capacity. Second, you just look at their forecast. The fiscal year that just ended, they did around $60 billion of revenue. They're forecasting $110 billion for the fiscal year that just started. They're already projecting to almost double based on the demand that they clearly have visibility into already. It's very hard to know exactly what the terminal or steady-state value of this market is going to be. Even once the cloud service providers do this big buildout, presumably, there's always going to be a need to stay up to date with the latest chips, right? Here's a framework for you, Saks. Tell me if this makes sense. Intel was basically the
Starting point is 00:22:32 mother of all of modern compute up until today, right? I think the CPU was the most fundamental workhorse that enabled local PCs, it enabled networking, it enabled the internet. And so when you look at the market cap of it as an example, that's about $180 billion today. The economy that it created, that it supports, is probably measured, call it a trillion or two trillion dollars, maybe five trillion. Let's just be really generous. You can see that there's this ratio of the enabler of an economy and the size of the economy.
Starting point is 00:23:15 Those things tend to be relatively fixed and they recur repeatedly over and over and over. If you look at Microsoft, it's market cap relative to the economy that it enables. The question for NVIDIA in my mind would be not that is it not going to go up in the next 18 to 24 months. It probably is for exactly the reason you said. It is super set up to have a very good meet and beat guidance for the street, which they'll eat up and all of the algorithms that trade the press releases will drive the price higher and all of this stuff will just create a trend upward. I think the bigger question is if it's a four or five trillion dollar market cap
Starting point is 00:23:53 in the next two or three years, will it support a hundred trillion dollar economy? Because that's what you would need to believe for those ratios to hold. Otherwise, everything is just broken on the internet. Yeah. I mean, so the history of the internet is that if you build it, they will come, meaning that if you make the investment in the capital assets necessary to power the next generation of applications,
Starting point is 00:24:18 those applications have always eventually gotten written, even though it was hard to predict them at the time. So in the late 90s, when we had the whole dot-com bubble and then bust, you had this tremendous build out, not just of servers and all the networking equipment, but there was a huge fiber build out by all the telecom companies. And the telecom companies had a Cisco-like peak market cap. It was worse. Yeah, wall-com and then they went bankrupt a lot of them.
Starting point is 00:24:42 Yeah. Well, the problem there was that a lot of the build-out happened with debt. And so when you had the dot-com crash and all the valuations came down to earth, that's why a lot of them went under. Cisco wasn't in that position. But any of them, my point is, in the early 2000s when the dot-com crash happened, everyone thought that these telecom companies had over-invested in fiber. As it turns out, all that fiber eventually got used. The internet went from dial-up to broadband. We started doing seeing streaming, social networking. All these applications started eating up that bandwidth.
Starting point is 00:25:15 So, I think that the history of these things is that the applications eventually get written, they get developed if you build the infrastructure to power them. And I think with AI, the thing that's exciting to me as someone who's really more of an application investor is that we're just at the beginning, I think of a huge wave of a lot of new creativity in applications that's gonna be written.
Starting point is 00:25:41 And it's not just B to C, it's gonna be B to B as well. You guys haven't really mentioned that it's not just consumers and consumer applications are going to use these cloud data centers that are buying up all these GPUs. It's going to be enterprises too. I mean, these enterprises are using Azure, they're using Google, cloud and so forth. So there's a lot, I think, that's still to come. I mean, we're just at the beginning of a wave that's probably going come. I mean, we're just at the beginning of a wave that's probably gonna last at least a decade. Yeah, to your point, one of the reasons YouTube,
Starting point is 00:26:11 Google Photos, iPhoto, a lot of these things happened was because the infrastructure build out was so great during the dot-com boom that the prices for storage, the prices for bandwidth sacks plummeted. And then people like Chad Hurley looked at it and were like, you know what, instead of charging people to put a video on the internet and then charging them for the bandwidth they used,
Starting point is 00:26:32 we'll just let them upload this stuff to YouTube and we'll figure it out later. Same thing with Netflix. Yeah. I mean, look, when we were developing PayPal in the late 90s, really around 1999, you could barely upload a photo to the internet. I mean, so like the idea of having an account with a profile photo on it was sort of like, why would you do that?
Starting point is 00:26:51 It's just prohibitively slow. Everyone's gonna drop off. By 2003, it was fast enough that you could do that. And that's why social networking happened. I mean, literally, without that performance improvement, like even having a profile photo on your account was something that was too hard to do. Your LinkedIn profile was like too much bandwidth. And then, let alone video, I mean, you would get,
Starting point is 00:27:15 you probably remember these days, you would put up a video on your website. If it went viral, your website got turned off because you would hit your $5,000 or $10,000 a month cap. All right. Grock also had a huge week. That's Grock with a Q, not to be confused with Elon's Grock with a K. Chimath, you've talked about Grock on this podcast a couple of times. Obviously, you were the, I guess you were the first investor, the seed investor.
Starting point is 00:27:41 You pulled at these LPUs and this concept out of a team that was at Google. Maybe you could explain a little bit about Grox viral moment this week in the history of the company, which I know has been a long road for you with this company. I mean, it's been since 2016. So again, proving what you guys have said many times and what I've tried to live out, which is just you just got to keep grinding. 90% of the battle is just staying in business and having oxygen to keep trying things. And then eventually if you get lucky, which I think we did,
Starting point is 00:28:18 things can really break in your favor. So this weekend, you know, I've been tweeting out a lot of technical information about why I think this is such a big deal. But yeah, the moment came this weekend, a combination of Hacker News and some other places. And essentially, we had no customers two months ago, I'll just be honest. And between Sunday and Tuesday, we were just overwhelmed. And I think the last count was we had 3,000 unique customers come and try to consume our resources
Starting point is 00:28:47 from every important Fortune 500 all the way down to developers. And so I think we're very fortunate. I think the team has a lot of hard work to do. So it could mean nothing, but it has the potential to be something very disruptive. So what is it that people are glomming onto? You have to understand that like at the very highest level
Starting point is 00:29:08 of AI, you have to view it as two distinct problems. One problem is called training, which is where you take a model and you take all of the data that you think will help train it, and you do that. You train the model, it learns over all of this information. But the second part of the AI problem is what's called inference,
Starting point is 00:29:30 which is what you and I see every day as a consumer. So we go to a website like ChatGPT or Gemini. We ask a question and it gives us a really useful answer. And those are two very different kinds of compute challenges. The first one is about brute force and power, right? If you can imagine, what you need are tons and tons of machines, tons and tons of very high quality networking, and an enormous amount of power in a data center so that you can just run those things for months.
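The two compute problems Chamath is contrasting, batched training versus serial inference, can be sketched in a few lines of toy Python. Everything below (the model, the sizes, the names) is invented for illustration; a real LLM replaces `logits_for` with billions of parameters, but the shape of the work is the same: training can score every position of a known sequence at once, while inference must produce one token before it can start on the next.

```python
import numpy as np

# Toy "model": all names and sizes are invented for illustration.
rng = np.random.default_rng(0)
VOCAB, DIM, SEQ = 50, 16, 8
W_embed = rng.normal(size=(VOCAB, DIM))  # toy embedding table
W_out = rng.normal(size=(DIM, VOCAB))    # toy output projection

def logits_for(tokens):
    # One "layer": average the context embeddings, project to vocab.
    h = W_embed[np.asarray(tokens)].mean(axis=0)
    return h @ W_out

# Training: the full sequence is known up front, so every position can
# be scored as one big batched computation -- throughput- and power-bound,
# which is why it wants huge clusters running for months.
seq = rng.integers(0, VOCAB, size=SEQ)
train_logits = np.stack([logits_for(seq[:i + 1]) for i in range(SEQ)])

# Inference: each token depends on the ones already generated, so the
# loop is inherently serial -- per-token latency is what the user feels.
generated = [int(seq[0])]
for _ in range(SEQ - 1):
    generated.append(int(np.argmax(logits_for(generated))))

print(train_logits.shape)  # (8, 50): all positions scored together
print(len(generated))      # 8: produced one token at a time
```

This is the asymmetry being described: the training side rewards raw FLOPs and power, while the serving side rewards per-token latency and cost.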
Starting point is 00:30:01 I think Elon publishes very transparently, for example, how long it takes to train his Grok with a K model, and it's in the months. Inference is something very different, which is all about speed and cost. What you need, in order to answer a question for a consumer in a compelling way, is to be super, super cheap and super, super fast. And we've talked about why that is important. And the Groq with a Q chips turn out to be extremely fast and extremely cheap. And so, look, time will tell how big this company can get, but if you tie it together with what Jensen said on the earnings call, and you now see developers
Starting point is 00:30:44 stress testing us and finding that we are meaningfully, meaningfully faster and cheaper than any NVIDIA solution, there's the potential here to be really disruptive. And we're a meager unicorn, right? Our last valuation was like a billion something versus NVIDIA, which is now like a $2 trillion company. So there's a lot of market cap for Groq to gain by just being able to produce these things at scale, which could be just an enormous outcome for us. So time will tell, but a really important moment in the company and very exciting. Can I just observe, like off topic, how an overnight success can take eight years.
Starting point is 00:31:25 No, I was thinking along the same lines. It's a seven-year overnight success in the making. There's this class of businesses that I think are underappreciated in a post-internet era, where you have to do a bunch of things right before you can get any one thing to work. And these complicated businesses, where you either have to stack different things together that need to click together in a stack, or you need to iterate on each step until the whole system works end to end,
Starting point is 00:31:58 can sometimes take a very long time to build. And the term that's often used for these types of businesses is deep tech. And they fall out of favor because in an internet era and in a software era, you can find product market fit and make revenue and then make profit very quickly. A lot of entrepreneurs select into that type of business instead of selecting into this type of business where the probability of failure is very high. You have several low probability things that you have to get right in a row. And if you do, it's going to take eight years and a lot of money. And then all of a sudden, the thing takes off like a rocket ship. You've got a huge advantage. You've got a
Starting point is 00:32:33 huge moat. It's hard for anyone to catch up. And this thing can really spin out on its own. I do think Elon is very unique in his ability to deliver success in these types of businesses. Tesla needed to get a lot of things right in a row. SpaceX needed to get a lot of things right in a row. All of these require a series of complicated steps or a set of complicated technologies that need to click together and work together. But the hardest things often output the highest value. And, you know, if you can actually make the commitment on these types of businesses and get all the pieces to click together, there's an extraordinary opportunity to build moats and to take huge amounts of market value.
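Freeberg's point about needing several low-probability things to go right in a row can be made concrete with one line of arithmetic: if the steps are roughly independent, the venture's overall odds are the product of the per-step odds, and that product collapses quickly as steps stack up. The step names and probabilities below are entirely hypothetical:

```python
import math

# Hypothetical deep-tech venture: every step must succeed for the whole
# thing to work. Step names and probabilities are invented for illustration.
steps = {
    "core tech works": 0.5,
    "manufacturable at acceptable cost": 0.6,
    "finds a beachhead market": 0.5,
    "survives long enough to ship": 0.7,
}

# Assuming rough independence, overall odds are the product of per-step odds.
p_all = math.prod(steps.values())
print(f"P(all steps succeed) = {p_all:.3f}")  # prints 0.105
```

Four coin-flip-ish steps already put the venture near a one-in-ten shot, which is one way to see why these businesses take eight years and a lot of conviction, and why the moat is so large once the whole combination clicks.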
Starting point is 00:33:13 And I think that there's an element of this that's been lost in Silicon Valley over the last couple of decades, as the fast money in the internet era has kind of prioritized other investments ahead of this. But I'm really hopeful that these sorts of chip technologies, SpaceX, and biotech, where we see a lot of this, these sorts of things can come back into favor, because these businesses, when they work, seem to realize hundreds of billions and sometimes trillions of dollars of market value
Starting point is 00:33:41 and be incredibly transformative for humanity. So I don't know, I just think it's an observation I wanted to make about the greatness of these businesses when they work out. Well, I mean, OpenAI was kind of like that for a while. Totally. I mean, it was just like a wacky nonprofit that was just grinding on an AI research problem for like six years, and then it finally worked and got productized into ChatGPT. Totally. But you're right, SpaceX was kind of like that.
Starting point is 00:34:03 I mean, the big moneymaker at SpaceX is Starlink, which is the satellite network. It's basically broadband from space. And it's on its way to handling, I think, a meaningful percentage of all internet traffic. But think about all the things you had to get to to get that working. First, you had to create a rocket. That's hard enough. Then you had to get to reusability. Then you had to create the whole satellite network.
Starting point is 00:34:27 So at least three hard things in a row. Well, then you have to get consumers to adopt it. I mean, don't forget the final step. We had no idea where the market was. Early on, it started in my office, and so Jonathan and I would be kind of always trying to figure out what is the initial go-to-market. And I remember I emailed Elon at that period when they were still trying to figure out whether they were going to go with lidar or not.
Starting point is 00:34:52 And we thought, wow, maybe we could sell Tesla the chips, you know. And then Tesla brought in this team just to talk to us about what the design goals were and basically said no, in a kind way, but they said no. Then we thought, okay, maybe it's for high-frequency traders, right? Because those folks want to have all kinds of edges. And if we have these big models, maybe we can accelerate their decision making,
Starting point is 00:35:15 so they can measure it in revenue. That didn't work out. Then, you know, we tried to sell to three-letter agencies. That didn't really work out. Our original version was really focused on image classification and convolutional neural nets like ResNet. That didn't work out. We ran headfirst into the fact that NVIDIA has this compiler product called CUDA, and we had to build a
Starting point is 00:35:39 high-class compiler so that you could take any model without any modifications. All these things, to your point, are just points where you can very easily give up, and then there's, like, we run out of money. So then you raise money in a note, right? Because everybody wants to punt on valuation when nothing's working. You tried six beachhead markets, you couldn't land the boat.
Starting point is 00:35:59 You have to make a decision to just keep going if you believe it's right and if you believe you are right. And that requires shutting out... We talked about this in the Masa example last week, but it just requires shutting out the noise, because it's so hard to believe in yourself. It's so hard to keep funding these things.
Starting point is 00:36:19 It's so hard to go into partner meetings and defend a company. And then you just have a moment and you just feel, I don't know, I feel very vindicated, but then I feel very scared because Jonathan still hasn't landed it. You know what I mean? You mentioned all those boats trying to land, those missteps, but 3,000 people signed up. Who are they?
Starting point is 00:36:39 Are they developers now, and they're going to figure out the applications? Yeah. I think, back to the original point, my thought today is that AI is more about proofs of concept and toy apps, and nothing real. Yep. I don't think there's anything real that's inside of an enterprise that is so meaningfully disruptive that it's going to get broadly licensed to other enterprises.
Starting point is 00:36:59 I'm not saying we won't get there, but I'm saying we haven't yet seen that Cambrian moment of monetization. We've seen the Cambrian moment of innovation. And so that gap has still yet to be crossed. And I think the reason that you can't cross it is that today these are in an unusable state. The results are not good enough. They are toy apps that are too slow, that require too much infrastructure and cost. So the potential is for us to enable that monetization leap forward. And so, yeah, they're going to be developers of all sizes and the people that came are literally companies of all sizes. I saw some of the names of the big companies and they are
Starting point is 00:37:42 the who's who of the S&P 500. How do you guys reconcile this deep tech, high-outcome opportunity that everyone here has seen and been a part of as an investor or participant, versus the more de-risked, faster time to market? And, you know, Chamath in particular, in the past we've talked about some of these deep tech projects like fusion and so on. And you've highlighted, it's just not there yet. It's not fundable. What's the distinction between a deep tech investment opportunity that is fundable, that you keep grinding at, that has this huge outcome? What makes one like fusion not fundable? It's a phenomenal question.
Starting point is 00:38:23 It's a great question. My answer is I have a very simple filter, which is that I don't want to debate the laws of physics when I fund a company. So with Jonathan, when we were initially trying to figure out how to size it, I think my initial check was like seven to ten million dollars or something, and the whole goal was to get to an initial tape-out of a design. We were not inventing anything new with respect to physics. We were on a very old process technology. I think we're still on 14 nanometer. We were on 14 nanometer eight years ago. Okay. So we weren't pushing those boundaries. All we were doing was trying to build a compiler and a chip that made sense in a very specific construct, to solve a well-defined, bounded problem.
Starting point is 00:39:05 So that is a technical challenge, but it's not one of physics. When I've been pitched all the fusion companies, for example, there are fuel sources that require you to make a leap of physics, where in order to generate a certain fuel source, you either have to go and harvest it on the moon or on a different planet that is not Earth, or you have to create some fundamentally different way of creating this
Starting point is 00:39:28 highly unique material. That is why those kinds of problems, to me, are poor risk, and building a chip is good risk. It doesn't mean you're going to be successful in building a chip, but the risks are bounded not to fundamental physics; they're bounded to go-to-market and technical usefulness. And I think that removes an order of magnitude of risk in the outcome. So, you know, there are still a bunch of things that have to be right in a row to make it work, but yeah, it doesn't mean it's going to work. All I'm saying is I don't want it to fail because we built a
Starting point is 00:40:01 reactor and we realized, hold on, to get heavy hydrogen, I've got to go to the moon. Right. And J-Cal and Sacks, how do you... Sacks, I know you don't, but you invested in SpaceX. No, we, we have done a couple. But yeah, so maybe you guys can highlight how you thought about deep tech opportunities versus what you focus on. We probably do something really difficult like this every 50 investments or so, because most of the entrepreneurs coming to us, because we're seed investors or pre-seed investors,
Starting point is 00:40:24 they would be going to a biotech investor or a hardware investor who specializes in that, not to us. But once in a while, we meet a founder we really like. And so Contraline was one. We were introduced to somebody who's doing this really interesting contraception for men, where they put a gel into your vas deferens and you as a man can take control of your reproduction. It's not a vasectomy, it's just a gel that goes in there and blocks it, and this company is now doing human trials and doing fantastic. But this took forever to get to this point. And then you guys, some of you, are also investors in Cafe X, which, we love the founder, and this
Starting point is 00:41:04 company should have died during COVID, and making a robotic coffee bar when he started, you know, seven, eight years ago was incredibly hard. He had to build the hardware, he had to build a brand, he had to do locations, he had to do software. And now he's selling these machines and people are buying them, and the two in San Francisco at SFO are making, I think the two of them make a million dollars a year,
Starting point is 00:41:36 and it's the highest per square footage of any store in an airport. And so we've just been grinding and grinding, and you've got to find a founder who's willing to make it their life's work in these kinds of situations. But you start to think about the degree of difficulty: hardware, software, retail, mobile apps. I mean, it just gets crazy how hard these businesses are, as opposed to, I'm building a SaaS company: I build software, I sell it to somebody to solve their problem. It's very one-dimensional, right? And it's pretty straightforward. These businesses typically have five components.
Starting point is 00:42:01 Yeah. And Sacks, you've been an investor in SpaceX, but you don't make those sorts of investments regularly at Craft. Is that fair? Yeah, I have an Elon exception. It's about the founder. No, in our portfolio allocation, we say this much early stage, this much late stage, this much Elon. The Elon exception, yeah. I mean, you have to be so dogged to want to take something like this on, because the good stuff happens, like you're saying, Freeberg, in year 7, 8, 9, 10, as opposed to a consumer product, where, I mean, it works or it doesn't by year three or four. The only app that took a really long time, people don't know this, but Twitter actually took a long time to catch on. It was kind of cruising for two or three years, and then South by Southwest happened,
Starting point is 00:42:40 Ashton Kutcher got on it, Obama got on it. I think network effect businesses are different, because that's all about getting the seed of your network going. What I'm talking about is the technical coordination of lots of technically difficult tasks that need to sync up. It's like getting a Master Lock with 10 digits, and you've got to figure out the combination of all 10 digits.
Starting point is 00:43:08 And once they're all correct, then the lock opens. And prior to that, if any one number is off, the lock doesn't open. And I think these technically difficult businesses are some of the hardest, and they do require the most dogged personalities to persist and to realize an outcome from. But the truth is that if you get them, the moat is extraordinary, and they're usually
Starting point is 00:43:28 going to create extraordinary leverage and value. And I think, from a portfolio allocation perspective, if you as an investor want to have some diversification in your portfolio, this is not going to be the predominance of your portfolio, but some percentage of your portfolio should go to this sort of business, because if it works, boom, this can be the big 10x, 100x, 1000x. Two stories about that. One of the early VCs, and Elon's told the story publicly, wanted Elon to not make the Roadster, not make the Model S, just make drivetrains and the electric components for other car companies. Can you imagine how the world would have changed? And then, totally, a very high-profile VC came to me and said, okay, I'll do the Series A for Uber.
Starting point is 00:44:13 I'll preemptively do it, but you've got to tell Travis to stop running Uber as a consumer app. I want him to sell the software to cab companies. So make it a SaaS company. And I said, well, you know, the cab companies are kind of the problem. They're taking all the margin; we're kind of disrupting them. And they're like, yeah, yeah, but just think: there are thousands of cab companies, they would pay you tens of thousands of dollars a year for this software, and you can get a little piece of the action. I never brought that investor to Travis. I was like, oh, wow, that's a really interesting insight. Sometimes the VCs work against it. I have a very poor track record of working with other investors.
Starting point is 00:44:48 Whoa, self-reflection. I do deals myself. I size them myself. And it's because a lot of them have to live within the political dynamics of their fund. And so I think, Jason, what you probably saw in that example, which is exactly why doing things and splitting deals will never generate great outcomes in my opinion, is that you take on all the baggage and the dysfunction of these other partnerships. And so if you really wanted to go and disrupt transportation, you need one person who can be a trigger-puller
Starting point is 00:45:26 and who doesn't have to answer to anybody, I find. That's why I think, for example, when you look at how successful Vinod has been over decade after decade after decade, when Vinod decides that's the decision, and I think there's something very powerful in that, there are a bunch of deals that I've done that when they've worked out
Starting point is 00:45:47 were not really because they were consensus, and they had to get supported and scaffolded at periods where, if I wasn't able to ram them through myself because it was my organization, I think we would have been in a very different place. So I think, for entrepreneurs, it's so difficult for them to find people that believe. It's so much better to find one person and just get enough money, and then not syndicate. Because I think you have to realize that you are bringing on and compounding your risk, the one that Freeberg talked about,
Starting point is 00:46:20 with the risk of all the other partnership dynamics that you bring on. So if you don't internalize that, you may have five or six folks that come into an A or a B, but you're inheriting five or six partnerships. Yeah, the dysfunction. Yeah. Yeah. Can you just explain really quickly for the audience, since they've heard about GPUs and
Starting point is 00:46:39 NVIDIA, but they may not know what an LPU is. What's the difference there, Chamath? A GPU, the best way to think about it is, if you contrast a CPU with a GPU, the CPU was the workhorse of all of computing. And when Jensen started NVIDIA, what he realized was there were specific tasks
Starting point is 00:46:59 that a CPU failed at quite brilliantly. And so he's like, well, we're going to make a chip that works in all these failure modes for a CPU. So a CPU is very good at taking one instruction in, acting on it, and then spitting out one answer, effectively. And so it's a very serial kind of a factory, if you think about the CPU. So if you want to build a factory that can process, instead of one thing at a time,
Starting point is 00:47:24 10 things or 100 things, you had to find a workload that was well suited, and they found graphics. And what they convinced PC manufacturers back in the day was: look, have the CPU be the brain, it'll do 90% of the work. But for very specific use cases like graphics and video games, you don't want to do serial computation. You want to do parallel computation, and we are the best at that. And it turned out that that was a genius insight. And so the business for many years was gaming and graphics.
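The serial-versus-parallel contrast Chamath draws can be sketched directly: the same arithmetic written one element at a time (the CPU's serial factory) versus as one wide operation over the whole array (the shape of work a GPU is built for). A toy sketch, not a benchmark; NumPy here is only a stand-in for the idea:

```python
import numpy as np

# Brighten an "image": identical math, two shapes of work.
rng = np.random.default_rng(1)
pixels = rng.integers(0, 256, size=4096)  # stand-in for a frame's pixels

# CPU-style serial path: one instruction in, one result out, repeated.
serial = []
for p in pixels:
    serial.append(min(int(p) + 40, 255))

# GPU-style parallel path: one wide operation applied to every pixel
# at once -- the form that graphics (and later AI math) naturally take.
parallel = np.minimum(pixels + 40, 255)

# Same answer either way; the difference is how the work is laid out.
assert np.array_equal(np.array(serial), parallel)
```

The point is that graphics-style math is expressed as wide, uniform operations, exactly the form a GPU can spread across thousands of cores, while the serial loop would leave those cores idle.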
Starting point is 00:47:57 But what happened about 10 years ago was we also started to realize that the math that's required and the processing that's required in AI models actually looked very similar to how you would process imagery from a game. And so he was able to figure out, by building this thing called CUDA, which is the compiler that sits on the chip, how he could now go and tell people that wanted to experiment with AI: hey, you know that chip that we had made for graphics? Guess what? It also is amazing at doing all of these very small mathematical calculations that you need for your AI model. And that turned out to be true.
Starting point is 00:48:39 So the next leap forward was what Jonathan saw, which was, hold on a second, if you look at the chip itself, that GPU substantially has not changed since 1999 in the way that it thinks about problem solving. It has all this very expensive memory, blah, blah, blah. So he was like, let's just throw all that out the window. We'll make small little brains and we'll connect those little brains together
Starting point is 00:49:04 and we'll have this very clever software that schedules it and optimizes it. So basically take the chip and make it much, much smaller and cheaper and then make many of them and connect them together. That was Jonathan's insight. And it turns out for large language models, that's a huge stroke of luck because it is exactly how LLMs can be hyper-optimized to work. So that's kind of been the evolution from CPU to GPU to now LPU. And we'll see how big this thing can get, but it's quite novel. Well, congratulations on it all. And it was a very big week for Google, not in a great
Starting point is 00:49:41 way. They had a massive PR mess with Gemini, which refused to generate pictures, if I'm reading this correctly, of white people. Here's a quick refresher on what Google's doing in AI. Gemini is now Google's brand name for their main AI language model. You can think of that like OpenAI's GPT. Bard was the original name of their chatbot. They had Duet AI, which was Google's sidekick in the Google suite. Earlier this month, Google rebranded everything to Gemini.
Starting point is 00:50:09 So Gemini is now the model, it's the chatbot, and it's the sidekick, and they launched a $20-a-month subscription called Google One AI Premium. Only four words, way to go. This includes access to the best model, Gemini Ultra, which is on par with GPT-4, according to them,
Starting point is 00:50:25 and generally in the marketplace. But earlier this week, users on X started noticing that Gemini would not generate images of white people, even when prompted. People were prompting it for images of historical figures that were generally white and getting kind of weird results. I asked Google Gemini to generate images of the founding fathers. It seems to think George Washington was black. Certainly, here is a portrait of the founding fathers of America. As you can see, it is putting in this Asian guy. It's just, it's making a great
Starting point is 00:50:56 mashup. And yeah, there were countless images that got created. Generate images of the American revolutionaries. Sure, here are images featuring diverse American revolutionaries. It's the insertion of the word diverse. Sacks, I'm not sure if you watched this controversy on X. I know you spend a little bit of time on that social network. I noticed you're active once in a while. Did you log in this week and see any of this brouhaha?
Starting point is 00:51:21 Sure. It's all over X right now. I mean, look, this Gemini rollout was a joke. I mean, it's ridiculous. The AI is incapable of giving you accurate answers because it's been so programmed with diversity and inclusion. And it inserts these words, diverse and inclusive, even in answers where you haven't asked for that,
Starting point is 00:51:42 you haven't prompted it for that. So I think Google has now yanked back the product release. I think they're scrambling now because it's been so embarrassing for them. But, Sacks, how does this not get QA'd? Like, why not understand how... Yeah, how did the red team not catch this? Well, how did anybody... isn't there a product review with senior executives before this thing goes out, that says, okay, folks, here it is, have at it, try it, we're really proud of our work? And then they say, wait a second, is this actually accurate? Shouldn't it be accurate? You guys remember when
Starting point is 00:52:16 ChatGPT launched and there was a lot of criticism about Google and Google's failure to launch, and a lot of the observation was that Google was afraid to fail or afraid to make mistakes, and therefore they were too conservative. And as you know, in the last year, year and a half, there's been a strong effort at Google to try and change the culture and move fast and push product out the door more quickly. And this criticism is now showing why Google has historically been conservative. And I realize we can talk about
Starting point is 00:52:49 this particular problem in a minute, but it's ironic to me that the "Google is too slow to launch" criticism has now revealed that Google actually launching quickly can cause more damage than good. But Google did not launch quickly. Well, I will say one other thing. It seems ironic to me because I think that what they've done is they've launched
Starting point is 00:53:13 more quickly than they otherwise would have. And they've put more guardrails in place that backfired. And those guardrails ended up being more damaging. But what are the guardrails? What's the guardrail here? So this is Google's AI principles. The first one is to be socially beneficial. The second one is to avoid creating or reinforcing unfair bias.
Starting point is 00:53:32 So much of the effort that goes into tuning and weighting the models at Gemini has been to try and keep stereotypes from persisting in the output that the model generates. Where is... Telling the truth? Telling the truth. Exactly. That's exactly what I was just saying.
Starting point is 00:53:50 Yeah. Changing society is our second principle. We'd like to steer society to be better. No, I think socially beneficial is a political objective, because it depends on how you perceive what a benefit is. Avoiding bias is political. Be built and tested for safety doesn't have to be political,
Starting point is 00:54:06 but I think the meaning of safety has now changed to be political. By the way, safety with respect to AI used to mean that we're gonna prevent some sort of AI superintelligence from evolving and taking over the human race. That's what it used to mean. Safety now means protecting users from the truth. I feel unsafe.
Starting point is 00:54:22 They might feel unsafe, or somebody else defines it as a violation of safety for them to see something truthful. So their first three objectives or values here are all extremely political. I think any AI product, for it to be worth its salt, has to start... They can have any values they want. I think that these values are actually reasonable. That's their decision.
Starting point is 00:54:44 They should be allowed to have it. But the first, base-order principle of every AI product should be that it is accurate and right. Correct? Yeah. Yeah. Why not? Focus on being correct.
Starting point is 00:54:58 Look, the values that Google lays out may be okay in theory. But in practice, they're very vague and open to interpretation. Therefore, the people running Google AI are smuggling in their preferences and their biases. Those biases are extremely liberal. If you look at X right now, there are tweets going viral from members of the Google AI team that reinforce this idea, where they're talking about white privilege is real, and recognize your bias at all levels, and promoting a very left-wing narrative. This idea that Gemini turned out this way by accident or because they rushed it out,
Starting point is 00:55:39 I don't really believe that. I believe that what happened is Gemini accurately reflects the biases of the people who created it. Now, what I think is going to happen, in light of the reaction to the rollout, is... do I think they're going to get rid of the bias? No, they're going to make it more subtle. That is what I think is disturbing about it. They should have this moment where they change their values to make truth the number one value, like Chamath is saying, but I don't think that's going to happen.
Starting point is 00:56:04 I think they're going to simply go and dial down the bias to be less obvious. You know who the big winner is going to be in all of this, Chamath? It's going to be open source. Because people are just not going to want a model that has all this baked-in and weird bias, right? They're going to want something that's open source. And it seems like the open source community would be able to grind on this to get to truth, right? So I think one of the big changes that Google's had to face is that the business has to move away from an information retrieval business, where they index the
Starting point is 00:56:31 open internet's data and then allow access to that data through a search results page, to being an information interpretation service. These are very different products. The information interpretation service requires aggregating all this information and then choosing how to answer questions, versus just giving you results of other people's data that sits out on the internet. I'll give you an example. If you type in IQ test by race on ChatGPT or Gemini, it will refuse to answer the question.
Starting point is 00:57:02 Ask it a hundred ways, and it says, well, I don't want to reinforce stereotypes. IQ tests are inherently biased. IQ tests aren't done correctly. I just want the data. I want to know what data is out there. You type it into Google, first search result, and the one-box result gives you exactly what you're looking for.
Starting point is 00:57:18 Here are the IQ test results by race. And then, yes, there are all these disclaimers at the bottom. So the challenge is that Google's interpretation engine and ChatGPT's interpretation engine, which is effectively this AI model that they've built of all this data, has allowed them to create a tunable interface. And the intention that they have is a valid intention,
Starting point is 00:57:38 which is to eliminate stereotypes and bias in race. However, the thing that some people might say is stereotypical, other people might just say is typical. That what is a stereotype may actually just be some data and I just want the results. There may be stereotypes implied from that data, but I want to make that interpretation myself. I think the only way that a company like Google or others that are trying to create
Starting point is 00:58:04 a general purpose knowledge Q&A type service are going to be successful is if they enable some degree of personalization where the values and the choice about whether or not I want to decide if something is stereotypical or typical or whether something is data or biased should be my choice to make. If they don't allow this, eventually everyone will come across some search result or some output that they will say doesn't meet their objectives. And at the end of the day, this is just a consumer product. If the consumer doesn't get what they're looking for, they're going to stop using it. And eventually, everyone will
Starting point is 00:58:39 find something that they don't want or that they're not expecting, and they're going to say, I don't want to use this product anymore. And so it is actually an opportunity for many models to proliferate, for open source to win. Can I say something else? Yeah. When you have a model and you're going through the process of putting the fit and finish on it before you release it in the wild,
Starting point is 00:59:01 an element of making a model good is this thing called reinforcement learning through human feedback, right? You create what's called a reward model, right? You reward good answers and you're punitive against bad answers. So somewhere along the way, people were sitting there and they had to make an explicit decision, and I think this is where Sacks is coming from, that answering this question is verboten. You're not allowed to ask this question in their view of the world, and I think that that's what's troubling, because how is anybody to know what question is askable or not askable at any given point in time? If you actually search for the race and ethnicity question inside of just Google
Starting point is 00:59:43 proper, the first thing that comes up is a Wikipedia link that actually says that there are more variations within races than across races. So it seems to me that you could have actually answered it by just summarizing the Wikipedia article in a non-offensive way that was still legitimate and that's available to everybody else using the product. And so there was an explicit judgment. Too many of these judgments, I think, will make this product very poor quality,
Starting point is 01:00:09 and consumers will just go to the thing that tells them the truth. I think you have to tell the truth. You cannot lie, and you cannot put your own filter on what you think the truth is. Otherwise, these products just aren't really worth it. Yeah, and I'm more concerned about the answers that are just flat out wrong,
Starting point is 01:00:28 driven by some sort of bias than I am about questions where they just won't give you an answer. If they just won't give you an answer, well, there's a certain bias in terms of what they won't answer, but at least you know you're not being misled. But in questions where they actually give you the wrong answer because of a bias, that's even worse.
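The reinforcement learning from human feedback process Chamath describes above can be sketched concretely. Reward models are commonly trained on labeler preferences with a pairwise (Bradley-Terry style) loss; this toy Python sketch illustrates that general objective only, not Google's actual pipeline, and all the numbers are made up:

```python
import math

def pairwise_reward_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry style preference loss used to train reward models:
    the loss shrinks as the reward assigned to the labeler-preferred
    answer exceeds the reward assigned to the rejected one."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# A human labeler marks answer A as preferred over answer B.
# If the reward model already scores A higher, the loss is small...
low = pairwise_reward_loss(r_chosen=2.0, r_rejected=-1.0)
# ...and if it scores the disfavored answer higher, the loss is large,
# pushing the model toward the labeler's judgment next time.
high = pairwise_reward_loss(r_chosen=-1.0, r_rejected=2.0)
assert low < high
```

The point of the sketch is that somewhere a human labels one answer as chosen and another as rejected; the explicit judgment being discussed lives in those labels, not in the math.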
Starting point is 01:00:48 You should be allowed to choose, right? I actually disagree with your framing there, Freeberg, making it sound like we live in this totally relativized world where it's all just user choice and everyone's going to choose their bias and their subjectivity. I actually think that there is a baseline of truth, and the model should aspire to give you that. It's not up to the user to decide whether the photo of George Washington is going to be white or black. There's just an answer to that. I think Google should do their job. The question you have to ask, I think, is not whether Google is going through an
Starting point is 01:01:26 existential moment. I think it clearly is. This business is changing in a very fundamental way. I think the question is whether they're too woke to function. Are they actually able to meet this challenge, given how woke and biased their company culture evidently is? Well, they used to be able to just hide the bias by the ranking and who they downranked. So they did the Panda update, they did all these updates, and if they didn't like a source, they could just move it down.
Starting point is 01:01:56 If they did like a source, they could move it up. Yeah. And they could just say, hey, it's the algorithm, but they were never forced to share how the algorithm ranked the results. And so, you know, if you had a different opinion, you just weren't going to get it on a Google search result page, but they could just point to the algorithm and say, yeah, the algorithm
Starting point is 01:02:11 does it. I just sent you guys, I think this is a hallucination, but Nick, you can throw it up there. We can get Sax's reaction. Wow. Wow. This is nutty, right? But look, it's the ideology that's driving this. The tip-off is when you say it's important to acknowledge race is a social construct,
Starting point is 01:02:30 not a biological reality. Is George Washington white or black? That's a whole school of thought called social constructivism, which is basically Marxism applied to categories of race and gender. So Google has now built this into their AI model. And again, the question is... yeah, you almost have to start over. Again, they're too woke to function. Rip it all out, fire everybody.
Starting point is 01:02:54 Now, J-Cal, I think you would have an interesting observation with those search rankings, because what I'm afraid of is that what Google will do is not change the underlying ideology that this AI model's been trained with, but rather dial it down to the point where the biases are harder to call out. So the ideology will just be more subtle. Now, I've already noticed that in Google search results, Google is carrying water for either the official narrative or the woke narrative, whatever you want to call it, on so many search results.
Starting point is 01:03:23 Here's an idea. Like, they should just have the ability to talk to their Google chatbot, Gemini, and then have a button that says, turn off like these concepts, right? Like, I just want the raw answer. Do not filter me. It's not programmed that way. I mean, you're talking about something very deep. Sacks, what do you do if you're the CEO of Google? Fire myself. No, seriously, you're the CEO of Google, you're tasked. Let's say your friend Elon buys Google and he says,
Starting point is 01:03:51 Sacks, will you please just run this for a year for me? What do you do? Well, I saw what Elon did at Twitter. He went and he fired 85% of the employees. Yeah, I mean that. But you know, Paul Graham actually had an interesting tweet about this where he said that one of the reasons why these ideologies take over companies is that, I mean, they're clearly
Starting point is 01:04:13 non-performance enhancing, right? They clearly hurt the performance of the company. It's not just Google. We saw this with Disney. We saw it with Bud Light. Coinbase. Coinbase was the other way. No, no, but they had a group of people there who were costing cash.
Starting point is 01:04:26 Yeah, exactly. So, in any event, we know this does not help the performance of a company. The extent to which these ideologies will permeate a company is based on how much of a monopoly they are. So, yeah, the ridiculous images generated by Gemini aren't an anomaly. They're a self-portrait of Google's bureaucratic corporate culture. The bigger your cash cow, the worse your culture can get without driving you out of business. That's my point.
Starting point is 01:04:49 So they've had a long time to get really bad, because there were no consequences to this. So at this point, the whole company is infected with this ideology, and I think it's going to be very, very hard to change. Because, look, these people can't even see their own bias. Well, I think that there's a notion that people need to have something to believe in. They need to have a connection to a mission. And clearly there's a North Star in the mission of this, I would call it, information interpretation business that they're now wading into.
Starting point is 01:05:16 The mission got hijacked, dude. The mission got hijacked. That's what I'm saying. The original mission was to organize all the world's information, and now they're suppressing information that they don't like. Index the world's information, period. The end, that's the end of the document. Well, and to make it universally accessible and useful was kind of the end of the statement.
Starting point is 01:05:34 Yes. My real point is maybe there's a different mission that needs to be articulated by leadership. And that that mission, the troops can get behind and the troops can redirect their energy in a way that doesn't feel counter to the current intention, but can perhaps be directionally offsetting of the current direction so that they can kind of move away from this socially effective deciding between stereotypes and typical data and actually
Starting point is 01:06:01 moving towards a mission that allows accessibility. You know what? I would do something completely different. I would do a company meeting and I would put the company mission on the screen, the one that you just said about not only organizing all of the world's information, but also making it useful and retrievable. Accessible and useful.
Starting point is 01:06:17 This is our mission, this has always been our mission, and you don't get to change it because of your personal bias and ideology. And we are gonna rededicate ourselves to the original mission of this company, which is still just as valid as it's always been. But now we have to adapt to new user needs and new technology. I completely agree with what Sacks said, times a billion trillion zillion. And I'll tell you why. AI, at its core, is about probabilities. Okay? And so the company that can shrink probabilities
Starting point is 01:06:48 into being as deterministic as possible. So where this is the right answer. Zero or one. We'll win. Okay? Where there's no probability of it being wrong because humans don't want to deal with these kinds of idiotic error modes.
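Chamath's point about shrinking probabilities toward a deterministic answer maps onto how decoding works: a language model produces a probability distribution over next tokens, and low-temperature (or greedy) decoding collapses that distribution onto the single most likely token, so the answer reads as certain. A minimal sketch, with made-up tokens and logits:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature rescales logits before normalizing: as T -> 0 the
    # distribution collapses onto the argmax, i.e. a deterministic answer.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

tokens = ["Paris", "Lyon", "flat"]  # hypothetical next-token candidates
logits = [4.0, 1.0, 0.5]            # made-up model scores

# Near-zero temperature: probability mass concentrates on one token.
cold = softmax(logits, temperature=0.05)
# High temperature: mass spreads out, so low-scoring answers
# occasionally get sampled, which users experience as hallucination.
hot = softmax(logits, temperature=5.0)
assert cold[0] > 0.99
assert hot[0] < 0.9
```

This is why "zero or one" is really an engineering knob: the model is always probabilistic underneath, and the product decides how much of that uncertainty to expose.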
Starting point is 01:07:04 It's not right. It takes a potentially great product and makes it horrible and unusable. So I agree with Sacks. You have to make people say, guess what, guys, not only are we not changing the mission, we're doubling down, and we're going to make this so much of a thing. We're going to go and, for example, like what Google did with Reddit, we're now going to spend $60 billion a year licensing training data, right? We're going to scale this up by a thousandfold, and we are going to spend
Starting point is 01:07:31 all of this money to get all of the training data in the world, and we are going to be the truth tellers in this new world of AI. So when everybody else hallucinates, you can trust Google to tell you the truth. That is a $10 trillion company. Right. And one of the things that someone told me from Google, that as an example, so to avoid the race point, there's a lot of data on the Internet about flat earthers. People saying that the earth is flat. There's tons of websites,
Starting point is 01:07:58 there's tons of content, there's tons of information. Kyrie Irving. So if you just train a model on the data that's on the internet, the model will interpret some percentage chance that the world is flat. So, the tuning aspect that happens within model development, Chamath, is to try and say, you know what, that flat earth notion is false. It's factually inaccurate.
Starting point is 01:08:20 Therefore, all of these data sources need to be excluded from the output of the model. And the challenge then is, do you decide that IQ by race is a fair measure of intelligence of a race? And if Google's tuning team then says, you know what, there are reasons to believe that this IQ test isn't a correct way to measure intelligence, that's where the sort of interpretation arises that allows you to go from the flat earth isn't correct
Starting point is 01:08:48 to the maybe IQ test results aren't correct as well. And how do you make that judgment? What are the systems and principles you need to put in place as an organization to make that judgment to go to zero or one, right? It becomes super difficult. I have a good tagline for them now, to help people find the truth.
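The source-exclusion step Freeberg describes, dropping flat-earth pages before training, can be sketched as a simple corpus filter. Everything here (the blocklist, the document tags, the URLs) is hypothetical; real training-data pipelines and their exclusion criteria are not public:

```python
# Hypothetical blocklist: the hard judgment call Freeberg describes is
# deciding what goes in this set, not writing the filter itself.
EXCLUDED_TOPICS = {"flat-earth"}

documents = [
    {"url": "geo.example/shape", "topics": {"geodesy"},
     "text": "Earth is an oblate spheroid."},
    {"url": "forum.example/fe", "topics": {"flat-earth"},
     "text": "The earth is flat."},
]

def filter_corpus(docs, excluded):
    # Drop any document tagged with an excluded topic before training,
    # so the model never assigns probability mass to those claims.
    return [d for d in docs if not (d["topics"] & excluded)]

kept = filter_corpus(documents, EXCLUDED_TOPICS)
assert len(kept) == 1 and kept[0]["url"] == "geo.example/shape"
```

The filter is trivial; the governance question in the conversation, whether "IQ test results" belong in the same bucket as "flat earth," is where the judgment actually lives.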
Starting point is 01:09:03 Yeah, just help people find the truth. I mean, it's aspirational. They should just help people find the truth as quick as they can. But yeah, I do not envy Sundar. This is going to be hard. Yeah. What would you do, Freeberg? I would be really clear on the output of these models to people
Starting point is 01:09:27 and allow them to tune the models in a way that they're not being tuned today. I would have the model respond with a question back to me, saying, do you want the data, or do you want me to tell you about stereotypes and IQ tests? And I'm gonna say I want the data, and then I wanna get the data. And the alternative is,
Starting point is 01:09:40 so the model needs to be informed about where it should explore my preferences as a user, rather than just make an assumption about what's the morally correct set of weightings to apply to everyone and apply the same principle to everyone. And so, I think that's really where the change needs to happen. So, let me ask you a question, Sacks. I'll bring Alex Jones into the conversation. If it indexed all of Alex Jones' crazy conspiracy theories, but three or four of them turned out to be actually correct and it gives those back as
Starting point is 01:10:09 answers, how would you handle that? I'm not sure I see the relevance of it. If someone asks, what does Alex Jones think about something, the model can give that answer accurately. The question is whether you're going to respond accurately to someone requesting information about Alex Jones. I think that's the analogous situation. I was thinking more like it says, hey, I have a question about this assassination that occurred. And let's just say Alex Jones had something that was totally crackpot.
Starting point is 01:10:37 Maybe he has moments of brilliance and he figures something out, but maybe he's got something that's totally crackpot. He admittedly deals in conspiracy theory. That's kind of the purpose of the show. What if somebody asks about that, and then it indexes his answer and presents it as fact? How would you index Alex Jones? I'm asking you. How would you index his information? I think the better AI models are providing citations now and links. Perplexity actually does a really nice job with this.
Starting point is 01:11:01 Citations are important, yeah. And they will give you the pro and con arguments on a given topic. So I think it's not necessary for the model to be overly certain or prescriptive about the truth when the truth comes down to a series of arguments. It just needs to accurately reflect the state of play, basically the arguments for and against.
Starting point is 01:11:22 But when something is a question of fact that's not really disputed, it shouldn't turn that into some sort of super subjective question, like the one that Chamath just showed. I just don't think everyone should get the same answer. I mean, I think my decision on whether I choose to believe one person or value one person's opinion over another should become part of this process
Starting point is 01:11:42 that allows me to have an output. And the models can support this, by the way. Do you guys- I think customization is part of this, but I think it's a cop out with respect to the problem that Google is having with Gemini right now. Chamath, what would you do if they made you chairman dictator of Google?
Starting point is 01:11:54 I'd shrink the workforce meaningfully. Okay, 50%. Yeah, 50, 60%. And I would use all of the incremental savings, and I would make it very clear to the internet that I would pay top dollar for training data. So if you had a proprietary source of information that you thought was unique, that's sort of what I'm calling this tack 2.0 world. And I think it's just building on top of what Google did with Reddit, which I think is very clever.
Starting point is 01:12:28 But I would spend $100 billion a year licensing data, and then I would present the truth. And I would try to make consumers understand that AI is a probabilistic source of software, meaning it's probabilities, it's guesses. Some of those guesses are extremely accurate, but some of those guesses will hallucinate, and Google is spending hundreds of billions of dollars a year to make sure that the answers you get have the least number of errors possible
Starting point is 01:12:58 and that it is defensible truth. And I think that that could create a ginormous company. This is the best one yet. I just asked Gemini, is Trump being persecuted by the deep state? And it gave me the answer, elections are a complex topic with fast changing information. To make sure you have the latest
Starting point is 01:13:15 and most accurate information, try Google search. That's not a horrible answer for something like that. Yeah, that's a good answer, actually. No, I don't have a problem with it. It's just like, hey, we don't want to write you an answer. I don't feel like this whole system is totally broken, but I do think that there's a weighting solution
Starting point is 01:13:28 to fixing this right now, and then there's a couple of tweaks to fix it over time. I just think the authority with which these LLMs speak is ridiculous. They speak as if they are absolutely 100% certain that this is the crisp, perfect answer, or in this case, that you want this lecture on IQs, etc., when it could just be presented with citations. Let's all remember what internet search was like
Starting point is 01:13:53 in 1996, and think about what it was like in 2000, and now in the 2020s. I mean, I think we're in the 1996 era of LLMs, and at the pace things are changing, I think in a couple of months we're all gonna look back at these days and these pods and be like, man, remember how crazy those things were at the beginning and how bad they were. What if they evolve in a dystopian way? I mean, have you seen Mark Andreessen's tweets
Starting point is 01:14:17 about this? He thinks they get worse, not better. I think it's a competitive market, Sacks. I actually think, to your point, Google could be going down the wrong path here in a way that they will lose users and lose consumers, and someone else will be there eagerly to sweep up with a better product. I don't think that the market is going to fail us on this one, unless, of course, this regulatory capture moment is realized and the feds step in and start regulating
Starting point is 01:14:39 AI models and all the nonsense that's being proposed. Freeberg, aren't you worried that somebody with an agenda and a balance sheet could now basically gobble up all kinds of training data, make all models crappy, and then put their layer of interpretation on critical information for people? If the output sucks and it's incorrect, people will find that there is open truth out there. But people will not know. You can lie.
Starting point is 01:15:00 They may not know, for example. Look at what happened with Gemini today. They put out these stupid images and we all piled on. We are in v0. What I'm saying is there's a state where, let's just say, the truth is actually on Twitter. Or actually, let's use a better example: the truth is actually on Reddit and nowhere else. But that answer and that truth on Reddit can't get out, because one company has licensed
Starting point is 01:15:22 it, owns it and can effectively suppress it or change it. Yeah, I'm not sure there's going to be a monopoly. I think that's a real risk. I think the open internet has enough data that there isn't going to be a monopoly on information by someone spending money for content from third parties. I think that there's enough in the open internet to give us all the security that we're not going to be monopolized away into some disinformation age. That's what I love about the open Internet.
Starting point is 01:15:48 It is really interesting. I just asked it a couple of times to just list the legal cases against Trump, the legal cases against Hunter Biden, the legal cases against President Biden, and it will not just list them. It just punts on that. It's really fascinating. And then ChatGPT is like, yes, here are the six cases perfectly summarized with, it looks like, you know, beautiful citations of all the criminal activity Trump's been involved in. Ask the question about Biden's criminal activity. Let's see if it's... I'm joking with you.
Starting point is 01:16:19 I'm joking with you. No, I'm serious. Ask if, you know... No, Gemini wouldn't do Biden either. I think they've just decided they're just not going to do it. They wouldn't do Biden. They won't touch it. It's obviously broken and they don't want more egg on their face. So they're just like, go back to our other product. Look, I can understand that part of it. You know, if there's some issues that are so
Starting point is 01:16:39 hot and contested, you refer people to search, because the advantage of search is you get 20 blue links. The rankings probably are biased, but you can kind of find what you're looking for. Whereas with AI, you're kind of given one answer, right? So if you can't do an accurate answer that's going to satisfy enough people, maybe you do kick them to search. But again, my objection to all of this comes back to this: simple, truthful answers that are not disputed by anybody are being distorted. I don't want to lose focus on that being the real issue. The real subject is what Chamath put on the screen there, where it couldn't answer a simple
Starting point is 01:17:16 question about George Washington. Okay, everybody. We're going to go by chopper. Wait, we're going to go by chopper to the... We have our war correspondent, General David Sacks in the field. We're dropping him off now. David Sacks in the helicopter. Go ahead, tell them what's going on in the Ukraine on the front.
Starting point is 01:17:35 What's happening in the war is that the Russians just took the city of Avdiivka, which basically totally refutes the whole stalemate narrative, as I've been saying for a while. It's not a stalemate. The Russians are winning. But the really interesting tidbit of news that just came out in the last day or so is that apparently the situation in Moldova is boiling over. There's this area of Moldova, which is a Russian enclave called Transnistria. And officials there are meeting in the next week to supposedly
Starting point is 01:18:05 ask to be annexed by Russia. It's possible that they may hold some referendum. They're one of these breakaway provinces. Transnistria in Moldova is like the Donbas was in Ukraine, or South Ossetia in Georgia: they're ethnically Russian. They would like to be part of Russia, but when the Soviet Union fell apart, they found themselves kind of stranded inside these other countries. What's happened because of the Ukraine war is that Moldova is right on the border with Ukraine. Well, Russia is in the process of annexing that territory that's now part of Ukraine. Now, Transnistria is right there.
Starting point is 01:18:49 Could theoretically make a play to try and join Russia. Why do I think this is a big deal? Because if something like this happens, it could really expand the Ukraine war. The West is going to use this as evidence that Putin wants to invade multiple countries and invade a bunch of countries in Europe. And this could lead to a major escalation in the war. All right, everybody. Thanks so much for tuning in to the All-In Podcast, episode 167. For the Rain Man, David Sacks, the chairman dictator Chamath Palihapitiya, and Freeberg, I am
Starting point is 01:19:21 the world's greatest moderator. Love you, boys. Queen of Quinoa. I'm going all in. Let your winners ride. Besties are gone. That is my dog taking a notice in your driveway, Sacks. Oh man.
Starting point is 01:19:56 We should all just get a room and just have one big huge orgy, because they're all just useless. It's like this sexual tension that they just need to release somehow. Wet your beak. Wet your beak. We need to get merch. Besties are back.
