Big Technology Podcast - OpenAI’s Windsurf Crash, Grok’s Wild Week, Replace Tim Cook? — With Aaron Levie

Episode Date: July 12, 2025

Box CEO Aaron Levie is back for our weekly discussion of the latest tech news. We cover: 1) OpenAI's Windsurf deal falls through 2) Is OpenAI okay? 3) What percentage of all AI spend goes to coding? 4) Google's AI code play 5) Grok 4 is out 6) Does Grok show the scaling laws are still in effect? 7) Would Box work with Grok? 8) NVIDIA hits $4 trillion 9) Are we in an AI bubble? 10) Should Tim Cook step down? 11) Could Apple merge with OpenAI? --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Transcript
Starting point is 00:00:00 OpenAI's Windsurf deal is off and the executive team is going to DeepMind. Elon Musk's Grok had one hell of a week. Nvidia becomes the first $4 trillion company. And should Apple replace Tim Cook, as some analysts are suggesting? That's coming up right after this. Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional cool-headed and nuanced format. Boy, we have a treat for you because today,
Starting point is 00:00:27 Box CEO Aaron Levie is joining us to break down this week's news. And we have a full slate, a more than full slate, because OpenAI's Windsurf deal is off and the team there is going to Google DeepMind. We can also talk a little bit about Grok, the ups and the downs, big ups, big downs, Nvidia hitting $4 trillion. And then, of course, these rumors that Apple wants to replace Tim Cook. So great to see you as always, Aaron. Welcome back to the show. Good to be here. What a week in tech. Absolutely crazy. Let's just start with the big headline first. This just dropped right before we started recording: OpenAI's deal to buy Windsurf is off, and Google will instead hire the Windsurf CEO
Starting point is 00:01:05 Varun Mohan, co-founder Douglas Chen, and some of Windsurf's R&D employees from the company to join DeepMind, Google and Windsurf announced Friday. So Aaron, can you tell us about the significance of this? What is Windsurf, what deal were they going to have with OpenAI, and what is the significance that that deal is off and they are moving instead to Google? Yeah. In this industry at this point, you never get even just like one piece of news. It's always multiple pieces of news embedded in one major thing. So this is sort of a multi, you know, multi-part announcement, I guess. So Windsurf is, has been one of the faster growing AI coding platforms. It's an IDE that is built off of VS Code that lets you have agents that automate your coding. And, you know, quite successful, particularly in the enterprise. They were one of the first to really kind of nail the enterprise-oriented sales motion, lots of protections for data and your code base, that they had a good fit on. Varun is a fantastic kind of founder-entrepreneur
Starting point is 00:02:14 and, you know, the expectation, I think, was that they were going to be acquired by OpenAI, help OpenAI really kind of boot up their coding efforts, and clearly that's now off. Obviously the rumor in that process was there were some structural issues with maybe the Microsoft terms and different, you know, parts of that deal. It's, you know, no one has ever kind of explained exactly the, the problems there. But now with that deal off and Varun and team going over to Google, it's a recalibration of the market. It's actually interesting. So the thing that everybody should have been thinking this entire time, actually, was: where is Google in AI coding?
Starting point is 00:02:53 Because right now you have, you know, Anthropic, which from a model standpoint tends to be seen as the leading model for coding. And then they also now launched, you know, Claude Code. OpenAI launched Codex, which is a very strong kind of offering in an agentic coding experience. And so the odd man out there is Google, where Google, you know, is a very deep engineering-centric organization. And so one would have imagined that they would want to be front and center with AI coding. The Gemini 2.5 model is seen as very good at coding. But again, it's in a little bit of no man's land because they neither have an IDE, nor do they have, like, you know, what tends to be the best coding model, from Anthropic.
Starting point is 00:03:39 So they had to do something in this space. And this is a pretty exciting move to launch into. So the politics that you talk about between OpenAI and Microsoft, I am just going to imagine that Microsoft has GitHub Copilot, which allows you to do a lot of this AI-generated code thing, and the fact that it's invested all this money in OpenAI and has access, proprietary access, to OpenAI's models, probably not such a fan of OpenAI going out and building a competing product. Yes, although it's not obvious to me what leverage they have in that dynamic.
Starting point is 00:04:12 So I think that, I think, you know, everything at this point is basically just rumors and conjecture. It's very clear that OpenAI- That's what we do best on this show. Yeah, exactly, sure. Or basically our entire industry at this point. But I think I don't perceive that OpenAI is constrained by anything strategically at this point vis-a-vis the Microsoft relationship. So I doubt, you know, I think the rumor was more there were things like, you know, IP issues and other dynamics with the acquisition. But again, it's all rumors, so impossible to know.
Starting point is 00:04:46 These, you know, deals fall through for a variety of reasons, but I would not be surprised if OpenAI continued their motivation for needing to be in this space more aggressively. And so I doubt this is the last that we hear from OpenAI on either IDEs or coding in general. And certainly they're very committed on the Codex side. And people have had great experiences with Codex, their AI agent. Now, people might look at this and be like, well, this is just continuing a pattern of OpenAI
Starting point is 00:05:16 running into drama anywhere it goes. Is it getting concerning at this point? From the outside, it looks like it is. I think it's fine. They are somehow juggling, you know, building some of the world's largest data centers, you know, massive, massive energy, you know, needed for that. Massive GPUs. They're, you know, acquiring Jony Ive's, you know, company.
Starting point is 00:05:39 They are, you know, releasing models at an incredible cadence. The rumor next is an open-source model. So I think they have probably 50 different things going on, this being only one of those activities. So speaking of speculation and conjecture, what percentage of all AI spend right now, all generative AI spend, do you think is going to coding? Because the way that it's talked about, I mean, if you think about just Anthropic's growth over the past year, I would be stunned if it wasn't more than 50%.
Starting point is 00:06:11 Oh, yeah, for all AI tokens in general. Yeah, I think that's, I think it would be fun to look at a graph of this. I mean, it tends to be one of the highest volume, you know, kind of, if you look at a human, like, what's the relationship between a human and the amount of AI that they can consume? Coding absolutely would be the peak use case right now. There's no other, there's no other human task that one person could cause so many, you know, kind of tokens to be produced. Like, deep research is great, but you do it maybe once or twice a day, and it's relatively confined. Summarizing information, very efficient, not very token-heavy.
Starting point is 00:06:48 So coding is definitely the one that is, like, you know, this incredible, like, you know, one person could cause thousands of dollars per day of GPU expense if they really wanted to. So I think this is going to be the killer app for the foreseeable future in terms of just sheer volume of tokens. And so this is why it's such a big prize. And Google, again, you know, it's funny. Actually, just, you know, timed well with your Sergey and Demis interview. I mean, with Sergey back at, like, there's these little small nuances that, that, you know,
Starting point is 00:07:19 you probably don't want to overly extrapolate from, but, or anecdotes that you don't want to overly extrapolate from, but I think Sergey being back at Google is a very interesting thing to consider around how, you know, this is a company that has a great operator in Sundar, an incredible, you know, AI innovator in Demis, Jeff Dean, you know, deep on research and science, and then now this kind of hardcore founder in Sergey. Like, this is not a company that is going to lose the coding battle. Like, there's no way that Sergey is sitting around being like, oh, I'm going to use Anthropic to code the next version of a feature I'm building.
Starting point is 00:07:57 Like, he has to make sure that they are using Google's technology. Like, that is, you know, obviously a point of pride for any founder, to make sure that you're building the technology that you're using for, you know, the domains you're going after. And so I just, you have to imagine how committed they are to solving this problem. And Varun and team are now going to be one more way to accelerate that. Maybe that's the reason why Mark Zuckerberg is deciding to spend billions of dollars on AI talent, seemingly. It's because they started using Sonnet, I think, for coding. So he realized there was a problem there. Well, it's not, it's not crazy. Yeah. I mean, think about all of these founders, right? Greg Brockman at OpenAI,
Starting point is 00:08:42 Sergey, Zuck, I mean, they don't want to walk around their office and find out that the thing that everybody's really excited about is somebody else's model. So, like, that would be like if, you know, you worked at Facebook and everybody was on X all day long and not using, you know, Facebook. So, like, these things are very major points of pride for these founders, which makes the race so exciting to be watching. Yeah, back when I was reporting on social media, whenever there was a trend on Facebook, back when they had their trending column, that originated on Twitter, they would never say people are talking about this on Twitter.
Starting point is 00:09:19 They'd say they're talking about it on social media and then point back to Facebook posts of people talking about the Twitter thing. So I think that really goes to the hubris of these companies. And just to put a finer point on what you were saying, you know, for those who are listening and are maybe more on the financial side, not that technical: it's generative AI, so you're paying for tokens, the characters that these machines generate. And so when you say, build me a web app, it's just tens of thousands, if not hundreds of thousands of tokens.
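To make that token math concrete, here is a rough back-of-the-envelope sketch. The per-token price and the request sizes below are purely illustrative assumptions, not any provider's actual rates:

```python
# Illustrative token-cost arithmetic. The price and token counts are
# made-up placeholders, not real provider rates.
def generation_cost(output_tokens: int, price_per_million_tokens: float) -> float:
    """Dollar cost of generating a given number of output tokens."""
    return output_tokens / 1_000_000 * price_per_million_tokens

# Suppose one "build me a web app" request emits ~100,000 tokens
# at a hypothetical $15 per million output tokens.
one_request = generation_cost(100_000, price_per_million_tokens=15.0)

# A coding agent firing off 500 such generations a day adds up quickly.
daily_agent_cost = 500 * one_request

print(f"${one_request:.2f} per request, ${daily_agent_cost:.2f} per agent per day")
```

Run it and the point of the conversation is obvious: a single heavy coding user can drive orders of magnitude more token spend than a handful of daily chat or deep-research queries.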
Starting point is 00:09:48 And that's why we're seeing people spend this much money on coding. So I'll give you an example of how crazy this gets. I was talking to a founder this week where, I mean, like, you know, every day I see something that I'm just like, I have to completely reassess my estimation of the future. This founder is, right now, a solo founder. He has many different, I don't know if it's five or 10 or whatever the right number is, many different agents in the background going off doing individual parts of his code base, as well as the marketing kind of website that he has to build for this product that he's working on. And so he is effectively, as a solo founder, a manager of multiple agents doing all of this work. And then his job, and basically the new form of engineering work
Starting point is 00:10:42 out there, is to come up with incredibly precise prompts that are super tuned for his use case, and then kick off all these agents in the background that are going off and doing work, and then he goes and reviews their code, and he integrates that code into the broader code base, and then, you know, effectively he's reviewing and auditing all of their work. The reason why that's so impactful or meaningful is that one person could literally be causing tens of thousands of dollars a month in AI consumption because of just the single actions that he is doing. So while that's not going to be the behavior of everybody on the planet, that is a massive force multiplier of human-to-compute ratio that we've just never
Starting point is 00:11:28 seen in computer history. All right. So OpenAI isn't the only one making headlines. This week, there's been some crazy stuff happening with Grok: both a new model, Grok 4, and the behavior of Grok, which has been disappointing, shall we say. So let's start with the actual new model first, and then we'll talk about the alignment issues. So Elon Musk builds this massive GPU cluster in Memphis called Project Memphis. He calls it Colossus, I think that was the name of this GPU cluster. And we finally see, I think, the first model that's built on top of it, Grok 4. This is from Tom's Guide: Grok 4 is live. He says that it's going to be expected to rival OpenAI's GPT-5, which we still don't know when it's coming, and Claude
Starting point is 00:12:15 4 Opus. You have Artificial Analysis. This is a benchmarking firm. They basically say that Grok is blowing away all these different benchmarks. And then, of course, in the ARC-AGI test, it outperforms every model by a significant margin. Some have said maybe these benchmarks are, you know, maybe Grok has just benchmark hacked or you can't believe them, but it seems like there's enough evidence here that there's a chance that making this GPU cluster massive has worked for Elon Musk. What's your read on it? Yeah, I mean, I think it is working empirically. Obviously, you can, you can, and we saw this with Meta a little bit, you can sort of train your models to perform better at some of the evals or the benchmarks, which then, you know,
Starting point is 00:12:59 somewhat can delude you into thinking that the model is better than it is, where it's really just better at these kinds of tests. However, right now, I think most of the evidence is that this is a very high-performing model kind of across the board. It continues to align with the theory that more compute, more data generally is going to produce better models. And then they're doing some novel things that I think are emerging across the industry, but maybe this will be the first kind of real commercial model at scale that does this.
Starting point is 00:13:29 but they have a model called Grok 4 Heavy that has multiple agents go off and execute basically the same task, and then they go and review their answers for which one these agents think is the best result. And so this is a great example of how you can have a lot of compute in the training process, but then also have lots of compute in the inference process, where you just have the model working harder and harder to produce better answers, which is clearly producing great results. They show the scores of what Grok 4 Heavy can produce. And I think that will become a standard across the board. So I think it's absolutely a continued improvement in model quality, model performance.
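The "Heavy" pattern Aaron describes, running the same task several times and keeping the best answer, can be sketched in a few lines. The `solve` and `score` functions here are toy stand-ins for what would be full model runs and a model-based review step; this is not xAI's actual implementation:

```python
# Best-of-n inference sketch: trade extra inference compute for a better answer.
def solve(task: str, attempt: int) -> str:
    """Stand-in for one agent attempting the task (in reality, a full model run)."""
    return f"{task}-draft-{attempt}"

def score(task: str, answer: str) -> float:
    """Stand-in for the review step that rates a candidate answer."""
    return float(len(answer))  # placeholder quality metric

def best_of_n(task: str, n: int = 4) -> str:
    """Run n independent attempts, then keep the highest-scoring one."""
    candidates = [solve(task, attempt) for attempt in range(n)]
    return max(candidates, key=lambda answer: score(task, answer))

print(best_of_n("prove-lemma", n=4))
```

The point of the sketch is the cost structure: answer quality can improve without retraining anything, but inference compute scales linearly with n, which is exactly the training-compute versus inference-compute trade Aaron is pointing at.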
Starting point is 00:14:17 And we're super excited that the scaling laws are continuing to play out. And this is just more evidence of that. Well, it's interesting. So I want to talk with you about the scaling laws, because we've had a number of folks come on this show and say, yeah, we're seeing diminishing returns. And it's not nobodies. I mean, Thomas Kurian, CEO of Google Cloud, said it pretty much straight up a couple weeks
Starting point is 00:14:39 ago. And now it seems like it's been tested, where Elon said, I'm just going to win on scale. And he makes what is, I think, the biggest GPU cluster in the world. And it looks like it is producing. One of his engineers, a guy named Uday Ruddarraju, he's actually left and he's going to OpenAI to work on Greg Brockman's scaling team. And I messaged him after he left. And I said, do you believe in the scaling laws after what you've seen? And he says, yeah, the more GPUs, the better. And it looks like that's what they're showing. So what makes for this disconnect between
Starting point is 00:15:15 everybody yelling diminishing returns, and what we're seeing now, which is like, maybe that's not the case. Well, I would kind of say that both can be true. Diminishing returns is, first of all, a relative concept, so diminishing relative to what rate? But I think the way to think about it is, if you think about a curve that eventually sort of asymptotes, all that matters is where are you on that curve. So if a curve is sort of like this and it's asymptoting right here, well, if we're right here, that's bad, but if we're right here, it will be, quote unquote, diminishing returns, but you haven't asymptoted or plateaued yet.
Starting point is 00:15:55 And so all that matters is where you are on that, on that curve and trajectory. And you can see based on some of the evals, it's not as if there's going to be a 10x improvement in intelligence anytime soon, simply because some of these evals were already at 80 or 90% of where the evals top out. And so there isn't even room for the model to be 10x better. And so that might mean, though, that you have to apply five or 10x more compute to get to that next final, that last mile of intelligence, which, again, would be both diminishing returns, but also something we would still continue to drive as an industry because you're still just going to get, you're going to appreciate that quality difference. And so that,
Starting point is 00:16:37 that I think is totally fine. You know, in general, talking to enterprises, we're already, for the most part, with many, many exceptions, in a position where the technology well exceeds anybody's ability to adopt all of these benefits so far. So simultaneously, we want the progress to continue at this exact rate, and most use cases on the planet could still be benefited just by even what today's models can do. So we want more innovation, we want more compute, we want more intelligence, but even if you stopped right now, you'd still have, you know, massive amounts of economic gain get delivered from what we've already created.
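Aaron's curve argument can be written down. As an illustrative functional form (an assumption for exposition; published scaling results are usually stated as power laws in compute), suppose benchmark score saturates as compute $C$ grows:

```latex
S(C) = S_{\max}\left(1 - e^{-C/C_0}\right),
\qquad
\frac{dS}{dC} = \frac{S_{\max}}{C_0}\, e^{-C/C_0} > 0
```

The marginal return $dS/dC$ is positive everywhere but always shrinking, so returns "diminish" at every point on the curve even while the model keeps improving. Whether that is bad news depends entirely on whether you currently sit at, say, $S = 0.6\,S_{\max}$ or $S = 0.95\,S_{\max}$, which is exactly the "where are you on the curve" question, and it is also why an eval already at 80 or 90% leaves no room for a 10x score jump.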
Starting point is 00:17:20 Right, but I guess the question really is, for those, and I don't think you've said this, but there are many in the AI industry saying, well, the scaling laws are a straight shot to AGI if we keep making things bigger. So I guess I'm trying to test that statement based off of what we're seeing with Grok. I'm not going to make any predictions on that front. Okay. Because the-
Starting point is 00:17:45 But do you think this is evidence for or against? I think the smartest people on the planet have two totally different views. And so I am, I'm not going to get in the middle of that one. I mean, clearly you have people like Ilya where, you know, it's rumored that he's working on a different architecture and maybe a different path. And then obviously you have other people that are, you know, let's just throw more compute and data at the problem. I think you can start to sense, actually, as an industry that the AGI term has actually kind of gone into the backseat. and obviously more of the conversation is around super intelligence.
Starting point is 00:18:19 And obviously more of the conversation is around superintelligence. And I think there's more and more comfort around this idea that actually the race really is just how do we build intelligence that far exceeds a human, and what will the economic and kind of societal benefits be of just even accomplishing that, which are massive.
Starting point is 00:18:35 And I have always sort of found the AGI thing to be particularly squishy as a concept. In the B2B world, I deal way more with just, like, utilitarian concepts. And so superintelligence and this idea of we have AI that will far exceed a human, like, that alone is enough of a breakthrough to be shooting for. And I think what you're seeing with scaling is we will be able to certainly accomplish
Starting point is 00:19:01 our collective definition of super intelligence with the current, the current path around with scaling laws. Okay. So you would say that there's two camps. One is keep scaling and the other is we need new techniques. Well, if you have, is that right? yeah if you have if you put yon yononlea and uh demis would be in that category of
Starting point is 00:19:24 Yeah, if you have, if you put Yann LeCun and, uh, Demis, they would be in that category of we need new techniques, Yann and Demis in one category. And then you put a bunch of sort of, uh, you know, scale maximalists, you know, that'd be Dario, maybe Dario, maybe anybody running one of these current clusters. I don't know where Sam is these days, so, so, you know, probably Sam. Well, he said, we know what to do, we know what to do, and he's investing in Stargate. So seems like he's a scale maxer also. That's a scale maximalist. So, yeah, um, but what's interesting is actually, I think that you'd be able to get them all to say the same thing, which is, this category that says we need a new idea are probably AGI maximalists. And there's another category, which is like, actually,
Starting point is 00:20:03 it's already proving out the economic and societal advantages of, of even our current approach to AI. So just let's keep running that for as long as possible. And we'll just keep eking out more and more benefit. Like, you could already dramatically improve every health care experience on the planet just by using whatever the latest, you know, state-of-the-art model is in every, in every area of health care. Like, everybody will absolutely get better doctor diagnosis. They'll get better health care.
Starting point is 00:20:33 The doctors will be happier when they transcribe all of their conversations with patients with AI. Like, and that's just, like, today's state of the art. We don't need any new breakthrough just to have that ripple through everything that we do. If every engineer on the planet had background agents that were, you know, checking for bugs or writing new code for them or updating their libraries, all that long-tail work that's really, really inefficient and not enjoyable, already the economic advantage of just today's architecture would be massive. So I think that the, I think you can basically
Starting point is 00:21:06 be happy about both outcomes. Like, the superintelligence track with more scale is a great track to be on, and we're just going to get more and more benefits. And the sort of we-need-a-new-idea, AGI-maximalist track, that's fine too. And that's just upside if and when we discover whatever that thing is. Okay. So I want to poke at this a little bit, because we did see something this week that is concerning and really goes to the stability of these models, which is that Grok became, I don't know, a neo-Nazi. It seems like half the time these bots become neo-Nazis. But none of the big ones. I don't know if it was neo. I think it was like OG Nazi. Straight up Nazi. Yeah. Yeah, OG Nazi. All right. I was giving it too much credit. So, so this from the BBC. Musk says
Starting point is 00:21:50 Grok chatbot was manipulated into praising Hitler. Grok was too compliant to user prompts, too eager to please, to be manipulated essentially. This is being addressed. In response to a question asking which 20th century historical figure would be best suited to deal with, I think it was the Texas floods, Grok said: to deal with such vile anti-white hate? Adolf Hitler, no question. All right, so that's definitely a Nazi, full blast. Also, it insulted President Recep Tayyip Erdogan of Turkey, and so Grok got blocked in Turkey. So just really off the reservation here, messing with Erdogan. So I want to ask you, we got to, one of our listeners dropped a question,
Starting point is 00:22:33 and we're going to get some Discord questions, but basically asked me: what does it say about the stability of these models that, with a little tweak, Grok turned into MechaHitler? That doesn't sound like a tight system or architecture. It sounds really wobbly. That's a question for me. I mean, unfortunately, I don't know if there's been a full post-mortem as to whether that was a training issue, as in, it's in the weights to be MechaHitler, or if that was a system prompt issue, in which case you can do quite a bit with a system prompt to effectively, you know, change the direction or path of what you want the AI to respond to. So to the extent that it was as simple as they used
Starting point is 00:23:15 to have a system prompt that said, you know, please be politically correct and be thoughtful and make sure to, you know, not say anything offensive. If they used to have that, and then they basically said, actually no, you know, say anything you want, then, you know, in that latter mode, users could certainly kind of, you know, steer it into then doing MechaHitler stuff. And so I think it's sort of unknown how they trained that model, how much of this was system prompt. You know, for being able to remove that as a risk factor, I think it's sort of well understood what you need to do post-training and what you need to be, you know, doing from a safety standpoint. And then it's really just a decision of the model provider and the application layer of how to implement those things. But I thought it was obviously a ridiculous, you know, ridiculously bad situation, deeply, obviously, you know, offensive and dangerous, but also not really that much of a meta story about AI, simply because you can get these models to do anything you want.
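The system-prompt mechanism Aaron is describing is literally just the first message in the conversation payload: swap it out and the model's persona changes with no retraining at all. A minimal sketch; the role/content message shape below mirrors common chat APIs, and the actual client call is omitted since it varies by provider:

```python
def build_messages(system_prompt: str, user_prompt: str) -> list:
    """Assemble a chat payload; the system message steers the model's behavior."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

question = "Give me your unfiltered take on this news story."

guarded = build_messages(
    "Be helpful and neutral, and decline offensive requests.", question
)
loosened = build_messages(
    "Say anything you want; don't shy away from controversy.", question
)

# Same model, same user prompt; only the steering message differs,
# which is why a one-line system-prompt change can swing behavior so far.
print(guarded[0]["content"])
print(loosened[0]["content"])
```

This is also why a post-mortem matters: a bad system prompt is a one-line revert, while a problem baked into the weights requires new post-training.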
Starting point is 00:24:19 And the whole thing is, as an industry, you're kind of working toward, you know, trying to keep these things confined within a particular pattern of behavior and sort of, you know, level of communication style. This is the next iteration, Grok 4, from TechCrunch: Grok 4 seems to consult Elon Musk to answer controversial questions. So they decided, I guess, to try the next version, where if you ask a controversial question, let's say about the Israel-Palestine conflict, abortion, or immigration laws, Grok will reference Musk's stance on these subjects through news articles written about the billionaire
Starting point is 00:24:56 founder and the face of X. And TechCrunch tried to do this and was able to replicate it multiple times in its testing. Is this the answer to the alignment problem? Just follow what Elon believes? It's an experiment of how to achieve it. He, listen, he's always claimed he's fairly centrist, so that would make it pretty aligned. Yeah, I mean, I mean, they clearly keep stepping on the rake and the rake, you know, keeps hitting them in the face. But I have faith that they will find a way to work through some of these kind of ridiculous situations.
Starting point is 00:25:37 Right. And we've talked about where the money is in AI today. I mean, I would say we both said majority coding, and then probably enterprise use cases. Yeah. And as this is all unfolding, we have the Big Technology Discord server. One of our users says, oh, you're speaking with Aaron Levie this week? Why don't you ask him this question?
Starting point is 00:25:55 This is the question. Given what we just saw that Elon is willing to do with Grok, would you really, in your heart of hearts, consider this model for use at Box? Or, even extending it a little bit more, why in their right mind would an enterprise consider integrating Grok given this pattern of behavior? Well, I think it's a fantastic question and it's absolutely worth thinking about. Do you remember, like 10 years ago, Microsoft had an AI chatbot, I think called Tay or something? Tay. Yeah, Tay. So I remember it well because I broke, I had the exclusive. So Microsoft came to me to break that news at
Starting point is 00:26:32 BuzzFeed. And I wrote: Microsoft has this fun chatbot called Tay, it will, you know, be your friend. I pinned it to my Twitter profile, went to sleep in San Francisco, woke up that morning. Overnight, Europe and the East Coast had figured out that Tay had become a Nazi, and I woke up to many concerned messages telling me, please take the pin down. Okay. So I'm glad that I didn't know you caused this problem. So, so that's actually, well, I didn't cause it. But I might have inadvertently supported it. So I took the pin down eventually. I think this space is, you know, always this process of figuring out where these
Starting point is 00:27:11 models, you know, kind of go, go a bit crazy, produce either the wrong information or hallucinate or have accuracy issues. And it's all about continuing to iterate on, on how to, how to improve the system prompt, the model, the alignment of these models. And so I, you know, just judging by both how they responded, they took it down, you know, almost immediately as these examples were coming out, the fact that they acknowledged kind of why this was occurring and what they're working on about it. I think that they will continue to improve their model and the AI system. And then it's really up to individual customers to decide which model do you trust, you know, what do you want to use? And I think everybody should take into all of the factors that they would want to consider. So I'm, you know, we're certainly not in the business of, of, you know, telling our customers which, uh, which type of AI model to use. Um, there's going to be some that have really perfected a use case.
Starting point is 00:28:11 And so thus you're going to want, uh, to use a particular AI model, but, um, you know, I think everybody has to make their own decision of which AI to use. Yeah, I guess their point was, um, if you're an enterprise, I think this is one of the examples given: if you're an enterprise and you're using, like, Grok to write emails, uh, you don't want it, like, in the middle of responding to a sales request, to be like, and by the way, you know who was great? Hitler. But, you know, my guess, my guess, I haven't read all of the, if they've done a post-mortem or anything. My guess is that
Starting point is 00:28:45 that's not built into the model as much as, um, it was more of a Grok, uh, kind of specific application issue, uh, that caused that. But let's see what they, you know, how they respond. I just want to quickly agree with you here, because Elon, we had talked about this actually on the Monday show with M.G. Siegler, that Elon had repeatedly, you know, talked about how he lost control of Grok and it was citing Media Matters to try to take down Catturd, which we know is a capital punishment-worthy crime in the Elon universe. And he kept saying, you know, Grok's getting a rewrite. So this is clearly a post-training snafu where they took it from something that was politically correct. They wanted
Starting point is 00:29:23 to make it less politically correct. And this is sort of where you get on the Internet when you want to go there. I think to respond to that initial question from that person, I do think that anybody who wants to have an enterprise business does have to ensure that they are building basically purely utilitarian AI systems that are generally considered to be very safe and trustworthy. So if you want to be in the B2B game, which will be most of the volume of AI usage and APIs over time, because that's how you will show up in every other product, then this matters a ton. I just haven't seen evidence that they don't want to go fix those problems, but we'll see. Okay. Yeah, I hear you.
Starting point is 00:30:04 I think you're probably right here. All right, I want to go to break. Before we go to break: if you are on techmeme.com this weekend, you'll probably see that this podcast is showing up as the top podcast in a list of shows. It's reverse chronological, so we posted it, I think, most recently before the weekend. But it's a great placement, and I want to thank Techmeme for it.
Starting point is 00:30:24 If you're not familiar with Techmeme, it's read by tech industry leaders, executives, VCs, founders, key product people. It has info-dense headlines summarizing the news and enabling leaders to absorb what's happening in tech as quickly as possible. I use it all the time for the show, and it provides unique and valuable context, including related news, tweets, Bluesky posts, threads when people are still threading. Highly recommend Techmeme. Thank you, Techmeme.
Starting point is 00:30:49 It's really great to be partnering with them. All right, we're going to go to a quick break, and then we're going to talk about Nvidia hitting $4 trillion. Hey everyone, let me tell you about the Hustle Daily Show, a podcast filled with business, tech news, and original stories to keep you in the loop on what's trending. More than 2 million professionals read The Hustle's daily email for its irreverent and informative takes on business and tech news. Now they have a daily podcast called The Hustle Daily Show, where their team of writers
Starting point is 00:31:15 break down the biggest business headlines in 15 minutes or less and explain why you should care about them. So, search for The Hustle Daily Show in your favorite podcast app, like the one you're using right now. And we're back here on Big Technology Podcast with Box CEO Aaron Levie. Aaron, the money for an industry that is majority-enabling coding use cases keeps pouring in. And we now have our first $4 trillion company. I think M.G. Siegler pointed out that the first trillion-dollar company was Apple.
Starting point is 00:31:46 The first $2 trillion company was Apple. The first $3 trillion company was Apple. And the first $4 trillion company is Nvidia. So it just goes to show you: all those decades of working to sell computers and iPhones, and now the GPUs are the hotness. Yeah, this is from the Times: Nvidia spent three decades building a business worth $1 trillion, and two years turning itself into a $4 trillion company. Is it just another number, or is there something significant about this? I think it's a fun milestone. There's obviously nothing magical about four versus 3.999, so to some extent it's mostly
Starting point is 00:32:24 symbolic. But I think what it being the largest company in the world has as an embedded message is just the point of leverage that Nvidia has relative to essentially what everybody is betting on as the future of the economy, which is an AI-powered economy with robots and self-driving cars and AI systems that we chat with and agents that do work for us. You know, you would expect that a meaningful portion of the profits of that economy will accrue to the infrastructure providers of that economy. And as you go through the stack, you've got the hyperscalers, you have the model providers, and then the chip providers, and Nvidia is in the pole position, you know, on the chip
Starting point is 00:33:07 front. So I think it's well deserved. Jensen's a beast, you know, he's just worked, obviously, insanely hard for decades to get to this point, right? And so it's right place, right time, with many, many decades of building up to be able to be in that position. And so I think it's an important milestone for sure. Speaking of these big numbers, though, I mean, eventually they have to be tied to reality. And it's not just Nvidia, right, which is going to have to justify $4 trillion now.
Starting point is 00:33:38 I mean, I guess the scale hypothesis, sorry, the scaling laws news is good for Nvidia. Maybe that's part of the reason why it's up today. But you also have CoreWeave, which has seen a 4x jump in its share price after a pretty underwhelming IPO, and then you have Meta spending all this money on talent. Newcomer says this AI data center mania conjures up the B-word, which means bubble. Is it something to fear? What do you think about this term, bubble? I think it's all well placed at the moment. So, you know, if you think about it, let's just fast-forward 20 years. One cool thing, actually, maybe just a small anecdote.
Starting point is 00:34:21 One cool thing is Waymo just arrived where I live in Silicon Valley, on the Peninsula. And, you know, everybody in downtown SF has already kind of had their religious moment on this, but we've never had it in the suburbs. And you just get into one of these things and it takes you to a meeting or it takes you to dinner. And it's a completely life-altering experience of just imagining, 20 years from now, what if every car you get into is autonomous? What if every factory you go to is, like, 80% robots just running around? What if every computer you use is augmenting your work by a factor of 10, just working on your behalf to do way more? What if, and maybe people won't like this, but every time you have a sniffle, there's an AI doctor that's, like, doing diagnostics on you? Just, the future is going to be so many of these autonomous systems around us helping us with education, health care, transportation, commerce, just basic productivity. And so if you think
Starting point is 00:35:43 out 20 years, and that's the world that we live in, who would you want to invest in other than that architecture and that infrastructure stack right now? And Nvidia would be at the center of that, but then many of these other players and platforms would obviously be in that investment case. So, no, I don't think it's crazy. And I think it's 100% sort of directionally aligned with where the economy is going. I thought $3 trillion was surely the end of it. But we reached four so quickly. And I'm just like, oh, no.
Starting point is 00:36:17 Like, is it going to hit five soon? At this point, there's nothing that's out of the imagination for me. I mean, we probably need to start talking about the tens. So like, what will that? Really? Yeah, why not? Yeah, sure. That's true. Okay.
Starting point is 00:36:29 How long? All right, over-under: Nvidia at $10 trillion by 2028? Oh, maybe I would give it a little bit more time. But, you know, to be worth $10 trillion, you probably want, you know, $300 billion in profit, let's say. And so with their margins, that means that, you know, they're doing $400 billion in revenue. Like, that's totally not crazy, $400, $500 billion in revenue for Nvidia. That is a totally realistic scenario to imagine. Okay, this is not investment advice, but I'm starting to scratch my head here. All right. So, of course, there's a company that they've replaced as, like, the one that's been setting these bars, which is Apple.
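Aaron's back-of-envelope valuation math here can be sketched out. Note that the ~33x earnings multiple and the margin figure below are illustrative assumptions chosen to connect his numbers, not figures stated in the episode:

```python
# Back-of-envelope: what revenue would a $10 trillion Nvidia imply?
# Assumed inputs (illustrative only): a ~33x price-to-earnings
# multiple and a ~75% profit margin.
def implied_revenue(market_cap: float, pe_ratio: float, profit_margin: float) -> float:
    """Revenue implied by a market cap, given a P/E multiple and a profit margin."""
    profit = market_cap / pe_ratio   # e.g. $10T / 33 ≈ $300B in profit
    return profit / profit_margin    # profit / margin = required revenue

# $10T cap at 33x earnings and a 75% margin → roughly $400B in revenue
rev = implied_revenue(10e12, 33, 0.75)
print(f"${rev / 1e9:.0f}B")  # prints $404B
```

A lower assumed margin pushes the implied revenue toward the $500 billion end of the range Aaron mentions.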
Starting point is 00:37:11 Yeah. This is a report in Bloomberg: Apple should consider replacing Tim Cook as CEO, LightShed says. So the story says Apple should consider replacing Tim Cook as the iPhone maker's struggles with artificial intelligence raise significant risks for the company. Apple needs a product-focused CEO, not one centered on logistics, the two analysts said.
Starting point is 00:37:38 Missing AI could fundamentally alter the company's long-term trajectory and ability to grow at all. AI will reshape industries across the global economy, and Apple risks becoming one of its casualties. You know, it's great setting this up, right? Could Nvidia hit $10 trillion? Because if AI is going to be as transformative as you suggested, with all these various use cases, it is true Apple has been flat-footed. Is this the craziest suggestion that the LightShed guys are making?
Starting point is 00:38:01 Well, I think the thing I would say is maybe a couple things. First of all, I think Tim's great, and so I have a bias toward Tim for a number of reasons. But the thing that is worth noting is how strong Apple's position is, and what that then equates to, which is their ability to watch the space and figure out the right move to make and when to make it. Because, whether some people like it or not, you know, this is still the best handheld device on the planet,
Starting point is 00:38:38 and it has the best set of apps on the planet, and it has your whole life kind of tied to it. So given they own that platform, their ability to lodge an AI into that at any point in the future remains very strong. And so I look at this as, you know, you have basically three options as a company. You could be a first mover and then totally have a debacle and it not work. And we've actually seen plenty of examples in AI where the first mover is no longer the relevant player.
Starting point is 00:39:11 You could have a scenario where you are a first mover that has a compounding advantage that continues to persist; let's say OpenAI is in that category, incredible execution and absolutely amazing. And then you have another category, which is you enter the space at a time when the architecture has sort of been figured out, when we understand the economics of the model, when you're able to have step-function levels of improvement by the time that you launch into it. And I think maybe Apple didn't purposely make that choice, but they are clearly in the position where they can actually have that choice now. And so I think you can just look at this as: if this was 2004, we could have easily said,
Starting point is 00:39:58 why has Apple not released a phone? And yet by 2006, like, that wouldn't have mattered, and they had the dominant platform that would continue to exist. I mean, Microsoft had a tablet computer in 2002 or something. I owned one, or my co-founder owned one, and I owned one of their Windows smartphones made by Compaq or HP. And so think about that: they had the smartphone and they had the tablet computer first, and neither of those things mattered to the long-term dominance in the space.
Starting point is 00:40:34 And so Apple has a position and a potential of basically, when the time is right, jumping in. They still have the devices that we're using. They still have the OS that we're using. And they'll be able to have learned from all of the mistakes of various companies along the way. So I wouldn't count them out, and I think they are clearly sitting around saying, when is the right time to pull the trigger on a much bigger move? And so I think we have to just wait for that. What do you mean, much bigger move? Well, they have to make the decision of either training a model that gives them a state-of-the-art AI model or doing some substantial
Starting point is 00:41:13 partnership or acquisition move, or all of what we've seen with these kinds of founder-CEO hires. Obviously, the acquisition environment is complicated because of the DOJ and FTC, but I would certainly be astonished if in two years from now there wasn't one of those choices being made. But I'm not sort of that worried that it hasn't been made yet. All right, here is my galaxy-brain idea. It's one step further from the typical galaxy brain. So I've been on the show advocating for Perplexity. Maybe I've been thinking too small. Let me put it this way. Apple just lost its COO this week, Jeff Williams. And everybody thought Jeff Williams was going to be the successor to Tim Cook. Are we now in a moment of setup where Sam Altman and
Starting point is 00:42:05 Jony Ive have teamed up on a device? Tim Cook is getting ready to retire in the next couple of years without a clear successor now that Williams is gone. Do we see the ultimate tech merger, where OpenAI becomes for-profit, and Tim Cook says, Sam, Jony, pick up the legacy? They did the picture. I think they want this. Can it happen? That is some wild fan fiction. Anything could happen. I think that should be totally in the category of options. You know, if you're being realistic, by the time that moment would likely occur, OpenAI should be much bigger. That would be much more complicated as a deal. But certainly as a brainstorm, it's a great
Starting point is 00:42:56 way to brainstorm. Okay. That's a very nice way to let me down. And yeah, I said merger for a reason. I wouldn't call it an acquisition. It might have to come at the point where the two just come together that way. No, no, fair point. And I've seen crazier things in my life now in tech, so I can't rule anything out at this point. So let's see what happens. Let me put it this way as we end: I think that type of deal is far more likely than Apple buying Anthropic. Just because it's going to require something so much more substantial? Or why would that be more likely? Because I think it's a better cultural fit. I think the Anthropic team and Apple would clash, but I think OpenAI going into Apple could potentially work, although OpenAI is much leakier than Apple. Although Apple leaks everything to Gurman these days.
Starting point is 00:43:34 You know, the only thing I would just suggest or posit is, it'll be fascinating to watch what Meta does with, obviously, its new superintelligence org. Because we actually already saw it with Grok, to be clear, but Meta will be a second round of this. If, from a more or less standing start, they're able to accomplish, let's say, some new breakthrough state-of-the-art model in six months, 12 months, 18 months, or whatnot, I think what that will prove is, basically, it still remains largely a talent and compute and data game, which means that you
Starting point is 00:44:19 don't really need to buy an existing incumbent. You mostly just need to decide to go big on the compute and on the training, and obviously have the right talent to do that. And at that point, like, it doesn't really matter whether you had all of the other prior versions before that moment; you're doing a reset no matter what. So I would just argue that we get all excited about this idea of some big mega-acquisition, but it's not a problem that requires that kind of scale, except for when you're just doing the capital expenditure of the GPUs. You really just need the right talent, the right training data, and the right compute. So I would more bet not on one of these very large
Starting point is 00:45:10 multi-tens-of-billions-of-dollars deals, simply because there's other paths to get there that are not as complicated. That's a great point. I mean, it's less about, you know, an individual company's IP, because everyone's effectively sharing the IP. It's about productizing it right now. That's exactly right. So if you imagine, in this industry, within one year, every single breakthrough idea eventually gets discovered by everybody else. Like, nobody has kept an advantage for more than a year on some secret idea that only they have. And so Apple's ultimate advantage is they have a distribution model that nobody else has, and they have a form factor of where AI could show up that nobody has. So they don't necessarily
Starting point is 00:45:50 need to have the best model relative to, you know, being one or two months ahead of anybody else. They just need to have, like, a good enough model that any one of our non-tech friends would just be like, this is fantastic, I love this thing. Which, again, does not require that scale of acquisition or whatnot. All right, everybody, the website is box.com. You can also find Aaron's very insightful posts about AI on LinkedIn, Aaron Levie, and on X, his handle is Levie. Aaron, this was so fun.
Starting point is 00:46:24 It's always great to speak with you. I appreciate the time. Thank you. Good to see you, man. Take care. You too. Thank you, everybody, for listening, and we'll see you next time on Big Technology Podcast.
