Big Technology Podcast - What Cheaper, Faster, and Smarter AI Gets Us — With Aaron Levie

Episode Date: May 22, 2024

Aaron Levie is the CEO of Box. Levie joins Big Technology Podcast to discuss the implications of AI getting cheaper and faster after OpenAI cut GPT-4o's prices by half and made it twice as fast. We also cover AI's impact on jobs, the evolving AI safety debate, and how companies like Box are harnessing these powerful technologies. It was our first public event and such a blast to meet so many of you! Hit play for a thought-provoking exploration of the AI cutting edge, and what comes next. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here's 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Transcript
Starting point is 00:00:00 Box CEO Aaron Levie joins us to discuss where AI goes next after the latest big releases and what building with this technology looks like as it gets faster, cheaper, and smarter. All that coming up right after this. Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world and beyond. We're joined today by Box CEO Aaron Levie in a conversation recorded live at our first ever public event in front of a packed house in Manhattan last week. It's a really fun back and forth about how far the AI field has come, what companies can build with the technology today,
Starting point is 00:00:35 and where it's heading as OpenAI and the rest of the pack keeps shipping. Here's our conversation. Well, Aaron, welcome back to the show. Yeah, thank you. So do we just pretend that we're podcasting, or like, how does this work? Well, you already have given away the secret, which is that we're back together in a moment of crazy AI news. Yes.
Starting point is 00:00:56 But this time, we're doing it in front of a live audience. It's real. And at the first ever public event for Big Technology. And we're fully sold out with 130 people here with us in Manhattan. This is going to be the first of many. And listen, the audience isn't going to believe me. So I would love it if you guys could make some noise and let everybody who's listening to the recording know that you're here. Let's hear you. And there's simply no way we could have faked that. So that had to be authentic and real. It brings us to sort of the topic of our discussion, which is the latest in AI.
Starting point is 00:01:34 And of course, there's everything from synthetic images, synthetic text, synthetic voice, synthetic video, which we heard a little bit about from Google recently. But let me give you my big picture here, now that the dust has settled a little bit from Google I/O. My perspective is that we haven't exactly seen the groundbreaking stuff that's been promised, right? We were looking at GPT-5 and AI sentience, but we haven't gotten that yet. So am I right in thinking that, yes, we've had some big... You are very impatient if that's your issue. Well, I was... I'm setting you up to own me at the beginning of this discussion. Okay. But I do want to know, like, is there something that we should be thinking about in terms of what's
Starting point is 00:02:16 going on right now that goes beyond what we just saw over the past few weeks? Which, yes, they were impressive, but they're not this sort of godlike AI that everyone keeps talking about and waiting for in this GPT-5 moment. Were you, so you were unimpressed by GPT-4o? I wouldn't say I was unimpressed, but I would say, given that... Did you see that video where the guy was doing like a job interview and then the Her-like voice was explaining like how he should look different? I mean, this is like psychotic technology, I mean, this is incredible.
Starting point is 00:02:48 I don't deny it, but I also think you have to understand where I'm coming from, which is that a lot of people, when they saw... Well, actually, that's now the past. But a lot of people, when they saw this stuff, were just like, reasonably or not, and I'm just trying to channel the audience. Right. Where's the... This audience, the audience on X, which we know is the most reliable audience on tech. The X people are crazy.
Starting point is 00:03:10 Yeah, well, we're going to read some of your posts, so let's see who's crazy at the end of this. But that is... How do you think I know how crazy they are? Oh, that is a good point. Okay. So, but I guess where would you say we are right now? Yeah. And why haven't we had that sort of step change?
Starting point is 00:03:25 This is the thing. Yeah. We talk about the GPTs and it very quickly becomes old. Like, GPT-4 is pretty impressive. Yeah. But it already feels like old news and people want something new and they want to feel like that aha moment that ChatGPT brought. And we haven't had that yet. Right. They're impressive. I just think these people sound like heroin addicts. Like, I just need another breakthrough. Well, they do call tech people users and that's, you know, it sort of fits. It comes with the territory. Yeah. So, I mean, I basically don't agree with the premise. So I think that this is the craziest technology ever, and it's entirely reasonable that we will see this continue in this kind of step-change fashion.
Starting point is 00:04:07 So if you think about it, we're like 18 months into the initial ChatGPT moment. We have seen breakthrough after breakthrough in the past 18 months on AI model performance, on, effectively, AI model intelligence, when you look at the evals that these AI models are put up against. When you think about, you know, one interesting metric is context window, which is the amount of data, basically, that I can put into the AI model or get back from the AI model. And at the start of ChatGPT in, let's call it November of 2022, the context window was somewhere on the order of about 4,000 tokens. Just yesterday, Sundar announced two million tokens
Starting point is 00:04:49 on the latest Gemini model. So when you think about it, there's like not that many technologies literally in the world that see an improvement at the rate of 500x in 18 months. And that's basically what we're seeing in AI. So that's obviously one metric of performance improvement, but you can look across the board, whether it's the cost of tokens dropping,
Starting point is 00:05:11 whether it's the improvement rate that we're seeing. All of this is building the foundation for, now downstream, I think an incredible amount of innovation. And then you look at, just literally on Monday, the GPT-4o Omni model is also another breakthrough. Just in terms of, you know, obviously you always have to kind of look at these demos and say, okay, how much of that was really just like the perfect demo and they kind of knew it would work well in that situation. I think we can chalk up some of the use cases to that. But I'd say ChatGPT and OpenAI generally have been incredibly intellectually honest,
Starting point is 00:05:47 kind of stewards in this space. And so when you see a demo from them, it tends to actually work like that in practice. So that ability to have multimodal experiences where you have video in and audio out or video in and text out in basically real time because it's in the same model. I mean, this is going to produce
Starting point is 00:06:05 some pretty incredible just both personal experiences and then not even touching on what's possible in the enterprise. So I think, first of all, I think we should actually probably be glad that we don't have this sort of breakthrough AGI experience yet, because we actually need some time to sort of just pace ourselves, frankly, in the deployment of this. I sent a video of the interview example. And if you haven't seen this, please, please watch it. This person sort of takes a video
Starting point is 00:06:33 of themselves in real time. They're talking to the GPT-4o model saying, you know, do I look presentable for this interview? And it's giving feedback. I sent that to my parents. And basically they're like, well, we basically don't need humans anymore. And so to somebody like that, this is AGI. Like, we could stop right here and probably be done with AI for like a decade. And you've already solved, like, hundreds of use cases that are these breakthrough, you know, kind of situations. They have another video of somebody who basically uses AI to see the world. They're basically blind, and they see the world and they're able to now communicate with AI to have so
Starting point is 00:07:11 much more capability than they would have had before. So, I mean, these are just incredible technologies, just even in the current form, let alone, you know, when we actually get access to GPT-5 and so on. And the place I thought you were going to go first was really the cost, because that's the thing that you've really harped on over the past few days. I'm trying to anticipate your answer. I'm like, all right, he's definitely bringing up the cost thing. And I think it's worth spending a minute on it, which is that OpenAI has made the cost
Starting point is 00:07:36 of GPT-4o 50% of what it was previously. And for you, you're working with a lot of AI. I mean, just talk a little bit about what that does for the industry, for anyone who's building on top of this stuff, in terms of its ability to be a profitable investment and to be something that is more ubiquitous than it is now. Yeah, so there's probably, I mean, so there's basically three dimensions that AI is going to need to improve on before you even worry about, you know, kind of agentic-like experiences. So even, you know, kind of bookmark that. You have, effectively, let's say, more or less the quality of the models. So how do these models perform on evals? And these evals, you know, basically throw a bunch of problems at AI models, and then you get a sort of a shared benchmark across how different
Starting point is 00:08:27 models perform. So, you know, Llama 3 versus GPT-4 versus Gemini, and you get to see sort of how it does against the LSATs or MBA, you know, courses and so on. So one is sort of model quality. We've already seen that the latest GPT-4 models, and kind of the GPT-4 class, you know, in many respects perform better than humans at a large number of tasks, but in some areas are still deficient and not yet kind of human level performance. So model quality is sort of one vector that we need to continue to get performance on. And, you know, you can imagine, you can just extrapolate and imagine, you know, GPT-6, let's say, is like, we're like at 99%, you know, human level. And then
Starting point is 00:09:09 GPT-7 is like 99.999. So we'll see some type of asymptoting, but eventually you're just going to continue to get better and better model quality. That's sort of, you know, vector one. Vector two is how much data can I put in the model? And this has previously been something that was very limited. Actually, a large reason why, frankly, I think so many of us were not yet sort of figuring out how big of a breakthrough this was, was back in, you know, the GPT-2 days. Like, all you could give it was like a couple hundred, you know, effectively, characters or tokens, and that was basically all you were working with. So it was very hard to kind of imagine, you know, sort of next-token prediction when you can only give
Starting point is 00:09:51 it a limited amount of data in context. And now we sort of have breakthroughs, again, with two million token context windows. That's a massive breakthrough. So number one is quality. Number two is how much data can I put in the model? And number three is cost, and sort of then the performance of the model. I should probably add just one more, which is speed, but cost and speed kind of come in a little bit of the same dimension. So when you have what you saw on Monday, GPT-4o drops by 50%, literally, you can just think about it as: I just took the ability to have intellectual capacity, and one day it cost X amount, and now it costs 0.5x, like overnight. That's pretty
Starting point is 00:10:34 crazy when you think about that. And the fact that we've now done that, or not we, I mean, OpenAI has basically done that, you know, I don't know, four or five times in the past 18 months. So we're already at maybe a tenth of the total cost of a kind of fairly high quality token, just in the past year and a half since the kind of first version of ChatGPT. So this is a breakthrough, because if you had a use case with AI a year and a half ago, it may have been slow, you may have had to hack around it, and it may have been relatively expensive. And a year and a half later, it's much cheaper, you don't have to hack around it because you can give the model a lot of data, and it's much more intelligent. So it doesn't take that
Starting point is 00:11:13 much imagination to say, well, 18 months from now, what am I going to be able to create and build? And so you just sort of watch these curves. And I mean, the implications, I think, are going to be massive for startups. Mostly positive, with a couple kind of question marks. Which is, if you're watching this curve, you probably should be building software for what is going to exist three years from now, as opposed to today. Because you don't want to be building software that sort of assumes that the tokens are expensive and they're not that high quality and they're kind of slow. So we spent a lot of time thinking about, like, are we designing a system that is sort of, you know, kind of just covering up some of the shortcomings of AI right now? Or should we design a system that will work really, really
Starting point is 00:11:52 well in a year and a half from now as we get more of these improvements. And that's, you know, the ongoing battle, I think, of anybody building AI startups: do you build for what you have today? Do you build for what might exist in the future? How do you avoid getting disrupted from the model just sort of basically building in your value proposition directly in the model itself? So many questions. But ultimately, I think it's one of the most, if not the most, exciting times in history to be building software. And I definitely want to get back to this idea of what software companies and startups should be building. But as we talk about the cost of intelligence coming down, it does make me think,
Starting point is 00:12:29 Like, is there a viable business model for all these companies that are spending tens or hundreds of billions of dollars training models and then selling this intelligence at lower and lower rates? And one of the, like, data points that I think about here is OpenAI: in the middle of this whole Sam Altman thing, when he was fired and then brought back in, they were in the middle of a reported fundraise that was going to put them at, what, $100 billion? And we haven't heard anything about that yet. So maybe that's a little bit because they wanted to make sure the board was settled. But what are the economics for these companies? And is this really sustainable for them to keep providing this for less and less cost? I mean, a 50% cost cut is a big deal. Yeah.
Starting point is 00:13:10 Yeah. Well, you know, what ultimately matters is what is their cost. And so one would theorize that they have come up with algorithm improvements and model improvements where their underlying costs of running the tokens
Starting point is 00:13:43 through have now dropped by, let's say, 50% or whatever. Obviously, it's sort of hard to pin down because there's no public information on their gross margins. But in general, I'm guessing that they've done something that has driven efficiency that has made their cost structure lower. And so then they're basically kind of giving us that cost structure improvement as customers. So then their theory is, well, if we drop the price by 50%, do you basically get more than a 2x in usage and volume? And I would argue that basically at every point in AI performance improvement, that sort of trade has basically come true. Which is, if you could, again, kind of wave a magic wand and say, like, we have GPT-5 or GPT-6 and it costs like a tenth of what GPT-4 costs today, I would argue that you'll probably get 100x more usage of AI, not just 10x more usage of AI. And so at some point, maybe that plateaus, but we're like nowhere near the point where a lowering of cost doesn't sort of disproportionately impact what you can now build, which then impacts more volume.
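The elasticity argument here is easy to sketch as arithmetic. The following is a hypothetical illustration only; the prices, volumes, and multipliers are invented for the sake of the example, not OpenAI's actual figures:

```python
# Hypothetical sketch of the price-elasticity argument above: repeated price
# cuts, each met by a more-than-offsetting jump in usage. Numbers are made up.

def token_economics(price, volume, cut, usage_multiplier, rounds):
    """Apply `rounds` successive price cuts, each answered by a usage jump."""
    for _ in range(rounds):
        price *= (1 - cut)            # e.g. a 50% price drop per round
        volume *= usage_multiplier    # usage grows faster than price falls
    return price, volume, price * volume

# Start from a made-up $10 per unit of intelligence and 1M units of volume.
price, volume, revenue = token_economics(
    price=10.0, volume=1_000_000, cut=0.5, usage_multiplier=4, rounds=4)

print(f"price after 4 halvings: ${price}")               # 1/16 of the original
print(f"revenue multiple: {revenue / 10_000_000:.0f}x")  # grows despite the cuts
```

If each halving drives roughly a quadrupling of usage, as the "a tenth of the cost, 100x the usage" framing implies, total spend on tokens rises even as the unit price collapses; that is the bet behind cutting prices.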
Starting point is 00:14:29 This is actually an interesting thing. I only have kind of anecdotes on this, so you should probably do the research. But in the very early days of cloud computing, everybody looked at the size of the server market, and they basically said, well, if these servers are in the cloud, we should kind of take the size of the server market and maybe shave off some of that spend, because as it goes to the cloud, it gets more efficient, because you have shared capacity, so you get less underutilized capacity. And a lot of the kind of total addressable market analysis of cloud computing was looking at the historical usage level of on-prem data centers.
Starting point is 00:15:06 And that's totally fair because that's kind of like all you could really do if you're doing that analysis. But what they didn't realize was as you created sort of on-demand computing resources, it meant that literally every developer on the planet could now actually have access to servers. And you could just like start a company tomorrow and then use computing capacity, which you didn't do when you were a startup, you know, 25 years ago because you just like couldn't, you know, put servers in a data center. So like you just didn't start the company in the first place. So all of a sudden, the cloud computing scale was like 10 times larger than
Starting point is 00:15:38 what you used to do in data centers. And so, you know, similarly, as you get the cost drops, whether the GPUs themselves get cheaper or model efficiency gets better, you'll see just a massive increase in utilization. So I think the business model is still very good for the top, let's say, five or so model providers. Where you will run into a question is, you know, could you be the 15th LLM training company? That seems tough, especially if your job is to do kind of horizontal LLMs. I think that would be less likely to work. You know, you also have this battle of, like, something like a Mistral, which was for a period, and maybe still ongoing, was like, you know, a massive breakthrough in open source AI. And then
Starting point is 00:16:24 Meta one day, you know, decides to just exceed all the benchmarks with Llama 3. So I think there will be some parts of the market where there's going to be a lot of competition, and it'll be hard to kind of figure out what the business model looks like. But in general, I think the business model of providing high quality, cheap, highly scalable tokens, if you're anywhere in the top three, for quite some time is going to be fine. And then ultimately, like, if you're a hyperscaler in the cloud, what you really want is all of our workloads. You just want us to build our full application on your tech stack. So you're not really trying to make that much margin on the AI itself. You actually want the data. You want the compute. You want the
Starting point is 00:17:00 storage. So I think the business models will all continue to be fine, to continue to give away this technology at a lower and lower price. Okay. And so I started our discussion talking about how disappointing this release was. I mean, let's actually talk about the impressive stuff, right? One of the things that I saw over the past week at these events from OpenAI and Google was that these models have an ability to reason, right? They seem to be able to take problems and then break them down to their component parts and then go through step by step. It's not like the traditional ask a question to an LLM and it will give you an answer. It looks actually like something that's smarter. So did you pick up on those reasoning capabilities?
Starting point is 00:17:38 Because we had this whole, like, moment in the Sam Altman thing where, like, people talked about this Q-Star model, which could reason and did math. And I watched some of these demos and I'm like, is that it? Yeah. So I think things like reasoning are still very early on. Anybody now deep in the AI space, you're going to hear us all talk about this idea of agents. And kind of what is this agent-like behavior, or agentic behavior, that you can have in AI models, which really moves from going to an AI model, asking a question, and then just basically getting
Starting point is 00:18:09 the text output or audio output of what that model is producing, to actually giving it a problem that is often multi-step in nature, maybe interacting with other systems, i.e. other tools. And how do you kind of put that all together where a single AI model connected to these tools can actually produce effectively an agent that really can actually execute full tasks and processes? So we saw maybe slight examples of that,
Starting point is 00:18:36 both on Monday in the OpenAI announcement and then yesterday in the Google announcements. I'd say both at a very high level, just because actually so much of this space is still at a pretty high level. If you go and ask, like, 10 AI startups that are doing anything with agents, you'll probably get more or less 10 different architectures of kind of how the agent actually functions. But what is at least similar to all of them is the LLM or the model is really acting as the reasoning engine and basically the brain for kind of coordinating and executing tasks across other systems
Starting point is 00:19:14 and software, which is a very exciting concept because, again, a year and a half ago, I think what we thought, you know, or at least we internally thought and what we saw from startups was, you know, this is like a chatbot wave. And the chatbot was really just like the best, you know, kind of way to manifest AI to get people to see the power of it. But, you know, the chatbot is just like one of, you know, a thousand modalities that we might have with AI. When you start to think about AI as not something that you just, you know, chat back and forth with, but instead it's sort of a reasoning engine for anything that you want software to do, it opens up a very different
Starting point is 00:19:49 world of possibilities. And what about the personalities of these bots that we're going to see? Yeah. I mean, after OpenAI did its release event, Sam Altman tweeted out "her." The bot was just extremely flirty, and then it didn't work the next day, and I looked at it and I was like, oh, if they were trying to build an AI girlfriend, they nailed it. Super flirty in the first interaction, doesn't answer your text the next day. Aaron, what do you think about the...
Starting point is 00:20:15 Do I have to answer that question? Yes, you do. Yeah, what do you think about their attempt to build her? I mean, I do not know the internal, you know, kind of workings of how the voice got kind of tuned to be the most, you know, interactive and engaging voice. That's a fun way to describe flirting, but yes. Engaging as a euphemism. But I mean, I thought about that for about 3.2 seconds when I saw it, and obviously it'll be like a big controversy online. But, you know, the market will effectively decide what voice we want from these things, and I expect, you know, OpenAI, Google, etc. to kind of land on what's the right equilibrium of, okay, a little bit too creepy versus, like, way too robotic and utilitarian. And so, like, you know, somewhere in there is probably a sweet spot, and I think we'll kind of go, you know, and do a little bit of pendulum swinging until we find that.
Starting point is 00:21:13 And you spoke about this in the beginning, and I don't want to gloss over it, this capability for the AI to be a tutor. Yeah. Right? I want you to kind of unpack how important this is, because this can really change the equation for parents. Where, like, you know, everyone has this idea, okay, sit the kid with the laptop. But if you can sit the kid with an actual tutor that's going to work with them, personalized through the notes. Yeah.
Starting point is 00:21:37 And not only that, but Google had this example: it can listen in on a PTA meeting for you and take the notes there and tell you what happened. It's almost like taking AI and putting parenting on autopilot. And everyone's going to be like, that's weird and creepy. But it also is like, in the best cases, this technology gives us more time to do the human stuff. Yeah. Right. And if you have more time to actually be a parent to your kid, like be caring with them, be present with them. Yeah. As opposed to having to go through this work with them. I think that could be a pretty special thing. Yeah.
Starting point is 00:22:12 I don't know, like, statistics on parenting globally, on tutoring, and like how many, you know, parents are good tutors. But let's just say not many, okay? Well, so let's just assume that a significant portion of kids do not grow up with, like, the highest quality tutor access. I got lucky because my dad was sort of into that, and my mom as well. But let's just say that's not the case everywhere. So, like, obviously, if you could make AI freely available globally that was as smart as a human, and you had the interaction paradigm work where I can just interact with it to learn, that is, like, only a good thing for humanity. It would be literally impossible to say, I want to shut that down or I don't want that to exist. We can talk about all the implications of, okay, how do you make sure that it's as available as possible, and bias, and all these other things. But, like, the idea that we could
Starting point is 00:22:56 basically democratize, you know, access to knowledge and tutoring and help and education to everybody on the planet is basically a good thing. And that's just, like, one of the many examples of, I think, the power of AI, especially on the consumer side. You know, take that for health care. Take that for basically any kind of subject that I want to be educated on. Take that for just learning how to code. I mean, the sort of easing of the on-ramp of what we as people spend a lot of time learning lets us explore more spaces to figure out what are the areas and domains that we want to go really deep in. That is just a very good thing for the world.
Starting point is 00:23:36 Okay. It's become a tradition on Big Technology Podcast to do a segment where we read Aaron his tweets and make him explain them. So let's do that now. We can also stop that today. That's not something that has to continue. You know, let's continue it. So let's see. The problem with reading tweets is, like, it never sounds like when I wrote it. It's just, like, a totally different time in the day. It's like a different voice. But go for it. I just don't know. I mean, you're going to read something. Everyone's going to be like, that's not that insightful. And then I'm going to be embarrassed. And then I'll have to explain myself, but go for it. No, no. I handpicked them because I think they're going to start good discussions. Okay. And if this sucks, I'll retire the segment, I promise. Okay, okay, okay. So this kind of goes to the agent thing. A large portion of business problems are constrained by how much time any given problem takes to solve
Starting point is 00:24:29 and the number of people you have to solve it. AI flips this by creating a world where we can solve problems by essentially throwing more compute at them. Yeah. Do we just... Yeah, you riff on that. Oh, riff on that? Yeah. Just riff on my tweet?
Starting point is 00:24:43 Yeah. Okay. It's in the tweet. So, I mean, the only riff I could add is that, you know... The only reason I wrote that was just, classically in business, you sort of have this term of, like, let's throw more bodies at the problem. And obviously that just means, like, how much headcount do you have. Like, we'll throw more bodies at this engineering problem, at the sales problem, at, you know, whatever the thing is. And it's
Starting point is 00:25:11 kind of crazy to think about a world where you would just say, let's throw more compute at the problem. And the equation goes from, okay, I've got to call the HR team, got to make sure we have budget, we have to go hire a lot of people, to now it's like, well, do you want, like, a hundred leads or a
Starting point is 00:25:42 1,000 leads or 10,000 leads? And that's not going to be driven by how many people I hire. It's going to be driven by how much compute I have. Do I want to, you know, test 90% of my software for bugs, or 95% of my software for bugs, or 100% of my software for bugs? It's not, again, how many, let's say, test cases I write or quality engineers I hire, it's how much compute I throw. So you can kind of, you know, work through how much of business now can become a problem where we can throw compute at the problem to basically solve it. And it's just, like, a different way to think about, you know, organizing your company, how you scale your company, and, ultimately, you know, the role of kind of intellectual labor inside of a business. Okay. Yeah. This is a good segment.
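The "throw compute, not bodies, at the problem" framing can be sketched as a toy harness where throughput is set by a worker count you dial up, rather than by headcount. Everything below is hypothetical, invented purely for illustration:

```python
# Toy sketch: the number of cases checked scales with workers/compute,
# not with hiring. The task and all names here are invented for illustration.
from concurrent.futures import ThreadPoolExecutor

def check_case(case_id):
    """Stand-in for one AI-driven unit of work (scoring a lead, testing a path)."""
    return case_id % 7 != 0  # pretend roughly 1 in 7 cases surfaces a bug

def run_suite(n_cases, n_workers):
    """Throughput is governed by n_workers, a dial you turn, not a hiring plan."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(check_case, range(n_cases)))
    return sum(results), n_cases - sum(results)  # (passed, flagged)

passed, flagged = run_suite(n_cases=1_000, n_workers=32)
print(passed, flagged)
```

Wanting 10x the coverage here means raising `n_cases` and `n_workers`, not calling HR; that is the reframing the tweet is pointing at.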
Starting point is 00:26:11 Okay, okay, got it. Okay. You're not biased in any way, but perfect. Okay. I did come up with the segment. Another one. The cool thing about AI is that Zuck is unleashed. There's a brand new platform opportunity.
Starting point is 00:26:25 winners, and it uniquely leverages the strengths of Meta. The breakthroughs in this space will continue to be wild. Great tweet. Yeah, just riff again. I'm telling you. Yeah, go for it. So, I mean, so I'm just supposed to describe, like, why Meta in particular is positioned, and what is Zuck unleashed? Zuck unleashed is just, I mean, did you see his birthday, you know, photos? I mean,
Starting point is 00:27:03 It looked awesome. The guy's unleashed. So, you know, I don't know if anybody was around in, like, the mid-to-late 2000s doing web stuff. But you had this conference called F8, and sort of Facebook was at the center of basically, you know, web software in sort of the consumer world, where they created the social graph. You built your application kind of using their APIs. They were really, you know, driving the web forward. And unfortunately, I think mobile probably sort of slowed that down a bit, because the conversation then really flipped to iOS and mobile platforms. And so, you know, you had all of this energy and technical talent from Meta that was kind of underutilized in a mobile world.
Starting point is 00:27:42 And they really couldn't, they didn't have any platforms. So that is sort of where the Metaverse came from and where Oculus came from was, I think, you know, Zuck's entrepreneurial spirit on like, well, let's build a platform that, you know, that we own and that everybody can build in. And I think just the reality is that at the scale probably that they would have wanted, we're not all in the metaverse yet. So, so you basically have this incredible entrepreneur with insane resources, both in engineering and CAPEX, that has kind of been a little bit held back because he hasn't had a platform to be able to unleash into the world. And now there's this spot that,
Starting point is 00:28:20 that's open, which is, open source AI is not owned by anybody. And so he's got all of the right resources for it. He's got basically the entire industry rooting for it to work, because we all benefit. The cheaper he can make AI models and the better he can make AI models, we all win. Because either that will mean that OpenAI and Google will want to work even harder and lower their costs, or we just literally have an open source AI model that we then don't pay any kind of fees for, which is incredible, other than the cost of the GPUs. So he's got all of this kind of pent-up energy.
Starting point is 00:28:53 This is just me, you know, just, like, imagining, you know, how he's thinking about it. And you watch his videos and you can kind of see, like, he knows he's on to something, which is he can kind of win in this open source AI world, which is going to be a very, very big space to be a part of. And then commercially, I think it's always good if your kind of direct competitors, you know, do not win, you know, a large portion of kind of what the zeitgeist is sort of talking about and focused on. So it's good if he has some way to kind of, you know,
Starting point is 00:29:20 defend against, you know, let's say, how big Google gets or OpenAI gets in this world. And then on the offense side, he can probably just make more money if he has people spending more time on his platform and asking questions and getting, you know, recommendations for things to buy. And all of that will be powered by AI in the future. So I think it's both going to be a commercial success, and I think it's, like, structurally, strategically, something that is going to look, you know, like a very good decision in the long run. All right. Let's go to the other member of the cage match, Elon.
Starting point is 00:29:58 You said, finally got around to trying the latest Tesla full self-driving last night, can confirm it's wild. Is it actually, does it feel like real autonomous driving, or were you still, is there still fear for people's lives when they're in there? Well, those can be the same thing. So it might be that with real autonomous driving, you still fear for everybody's life, because you're just like, I do not know how this works. Like, this is kind of alchemy. This is crazy. But it was definitely, you know, very crazy. It was a very, you know, kind of relatively boring suburban kind of trip, but it was like there was just zero need to ever interject. I mean, you have to kind of show that you're still paying attention. But I mean, it just, it shows again, we're, like, the past year and
Starting point is 00:30:41 certainly for the next couple of years, you get the sense that we're going to see hundreds of these, like, early previews about the future, um, which is just pretty exciting. Like, uh, I mean, I've just never seen a period where, you know, in any given week, you could see two to three things which are just like, obviously that's going to be the future. Maybe it doesn't work perfectly right now, but it's like there's nothing that is stopping it from working perfectly in a world of more compute and just more breakthroughs on the models themselves. And that is kind of where we're at right now. It's pretty cool. It's really cool. It'd be nice if that happens because obviously we have way
Starting point is 00:31:18 too many traffic deaths. All right. Cutting room floor, I won't ask you to react to these, but just for time. But you have, VCs when .ai is in the name, and there's a bunch of people doing some, like, parkour off of buildings. And then you're going to verbally explain visual tweets. This is good podcasting, man. That is aggressive. That is aggressive. Oh my God. And then there's a, there's, I'm looking at screenshots of videos right now. So this is, I don't... No, this is actually a screenshot of... this is a... Oh, okay. This is, everybody wants small government until there's something they want to ban,
Starting point is 00:31:50 and it's Ron DeSantis standing in front of a table of lab-grown meat. Yes. Also, do I need to riff? You get that one. What, did you get that one? You can give it 60 seconds on that. Okay. No, no, I mean, it was just, I mean, the...
Starting point is 00:32:04 I think that was also kind of pretty straightforward. Without, you know, conveying my political views, I just found it ironic that the, you know, party of, like, we want the smallest government and, you know, more libertarian-oriented values is, you know, not going to let, you know, science breakthroughs happen in their state. So it's like, okay, well, maybe it's just only when it's convenient do you want, you know, small government, as opposed to this is a very principled, you know, kind of decision. And so that was literally all that was referencing. So, yeah, no, I enjoyed that one a lot. Okay. We're here with Aaron Levie, the CEO of Box. We're recording live in front of an audience in New York City, our first public event ever. We are going to take a quick break and come back with audience questions, so we'll be back right after this. Hey, everyone. Let me tell you about The Hustle Daily Show, a podcast filled with business, tech news, and
Starting point is 00:32:55 original stories to keep you in the loop on what's trending. More than 2 million professionals read The Hustle's daily email for its irreverent and informative takes on business and tech news. Now, they have a daily podcast called The Hustle Daily Show, where their team of writers break down the biggest business headlines in 15 minutes or less and explain why you should care about them. So search for The Hustle Daily Show in your favorite podcast app, like the one you're using right now. And we're back here on Big Technology Podcast with Aaron Levie. He's the CEO of Box. We're in front of a live audience here. Yeah. In New York. He remains here. He hasn't left
Starting point is 00:33:30 despite the reading of the tweets and the conversation about lab-grown meat and the live ad. So let's see if we can keep going with this. So what we're going to do now is we're going to take some questions from the audience, and hopefully we'll be able to record them and get them on the podcast. So let's do that. If you have a question, raise your hand, I'll come over to you, state your name and where you're from. And I have a plan. I planted a question in the beginning with Ranjan Roy. He's one of our favorites on Big Technology Podcast. He's our... everybody, people who are listening to the show, you guys know Ranjan. Let's give it up for Ranjan. This guy is amazing. He's on with us every Friday,
Starting point is 00:34:14 and I think he has some questions. I was very interested in the whole idea of agentic experiences. Are there any current ones that you already use in life on a day-to-day basis that you've either hacked together or created, or what are ones that you think are the most exciting in the near term? Cool. Okay, so on agent, kind of agent experiences, I don't think I have any that would qualify as agentic in my life right now, partly because these things are just so new. And I actually think most of the use cases will be on the enterprise side. So they're going to be, like, things that we never even see, that are just happening behind the scenes in our technology and in our software kind of every day. I'd say, I don't know how exactly this happened, but something like six months ago
Starting point is 00:35:07 there must have been a memo that, like, everybody read, that sort of set off this kind of agent startup wave. Because in the past month or two, I've sort of increasingly seen new startups that all have a somewhat similar pattern, which is basically defined as: you know, traditionally, when any of us sell software, we kind of say, hey, you know, your employee X does a particular business process, and here's software to let them go and do X, and we're going to enable them to do that thing better. And all these new agent startups are kind of saying, hey, you have a task that somebody does, or maybe you never got around to, so it's not even like you don't even have
Starting point is 00:35:55 anybody doing it. We have software that will do that task for you. Um, QA a website, do outbound sales, you know, generate marketing, you know, translation. And this pattern is emerging pretty rapidly from what I can tell, where I've seen, I don't know, maybe a dozen or two dozen startups like this. But it feels kind of akin to, honestly, like, the early 2010s, almost, where we finally figured out what mobile was going to look like. Like, there were a few years of pretty shoddy, you know, kind of approaches to the mobile wave, like '07, '08, '09, where you're kind of like, that's kind of janky, and there was a web app, probably, and it didn't work really well. And then all of a sudden, it was just like
Starting point is 00:36:44 boom, Instagram; boom, Uber; boom, boom, boom, DoorDash. And we're just like, oh, actually, so your phone is this sort of new command center for just, like, things. And then, like, everybody got the memo, and then we were off to the races. And lots of the startups didn't work, but, like, we at least all kind of knew more or less how this was going to work. I think we're now emerging in the space in AI where we're kind of getting that memo, which is, like, no, it's not going to be, like, 150 different chat applications. Maybe there'll be a couple that kind of make it. But it's actually using AI as more of a brain behind the scenes for really kind of just taking work that you would have done otherwise and automating that. And
Starting point is 00:37:22 that's pretty exciting. I think the one thing that is really interesting about it is, um, that, uh, it does really put a lot of pressure on how you architect, you know, whatever it is you're building. Um, so I have a friend working on a startup, and we'll, like, you know, go through, you know, what he's building. And on any given day, you know, the updates to GPT-4 or Gemini are just, like, basically, you know, solving entire, you know, kind of components of what you would have had to go in and sort of mask or make up for if you were building in, like, a GPT-3.5 paradigm. So, like, pretty wild that just from 3.5 to 4, you do a lot less sort of, you know, kind of constraining the system and preventing it
Starting point is 00:38:07 from doing things, because now you can take advantage of more of that AI model. And so it's almost like, actually, like, shit, maybe should you be building a startup only just anticipating GPT-5, and don't even worry about GPT-4? It kind of almost begs the question of, like, don't launch anything right now, wait till this thing is even more intelligent. But of course, you know, at some point, if you extrapolate that out too much, you just wouldn't launch anything. So it's, like, hard to know exactly the moment to launch. But it does really mean that you need to future-proof your architecture in a world of agents. All right, I'm gonna go there, and then I'll come here. Hi. Hello, my name is Ilia, actually ex-Endeavor staff and current
Starting point is 00:38:45 co-founder of Morphosis. So I wanted to ask, uh, we mentioned at the beginning... Wait, did you say what your name is? Ilia. Ilia? Yes, like the most famous name in AI. Okay, wow. Well, this is she. Okay, yes, different, but yeah, okay. So actually, in this current AI, and of course, like, future AI world we're living in, do we need humans, or what role do humans have? Well, we have to build the AI, so. Yeah, of course, but what skills do humans need in this current evolving AI world?
Starting point is 00:39:19 Yeah, I mean, a great question, like the question, obviously for all of us. I think my answer will be pretty unsatisfying because I think, honestly, we don't know the answer yet. Sorry, let me actually specify. Yes, we need humans. What we should go do about that, I don't know yet. I don't think anybody really knows because, again, the pace of sort of AI development is happening so quickly. But I am not convinced yet, and I've spent hours debating everybody I can on this. I'm not convinced that AI doesn't look fairly similar to prior kind of technological revolutions.
Starting point is 00:39:59 It feels like it's different this time, because it feels like, well, it's coming after the intellectual stuff now. But I'm not convinced that it actually, like, in the grand scheme, at a sort of macro level, looks any different. In the sense that what I expect to happen is that the tasks we do every single day will just begin to look very different. And it'll look a little bit different at first, and then a little bit different thereafter, and then you zoom out 10 years from now, and it'll look totally different.
Starting point is 00:40:31 So it almost won't even necessarily, like we won't even feel it probably because it'll just be these incremental changes that amount to a large amount of change. But if you, like, you know, if I showed what I do on a computer screen to my, you know, to, you know, previously my grandparents, they would be like,
Starting point is 00:40:50 how are you creating value in the world? Like, you're just on a computer screen and you're just sending emails, like, back and forth, and then you're, like, in this Slack thing just chatting, and it's like, that creates value in the universe? Like, it would just be confusing, right? Because, like, they'd be like,
Starting point is 00:41:06 well, why are you not, like, in a room, and, like, you know, with a chalkboard, and talking about a thing and building a, you know... So just imagine, 20 years from now, the version of that, which is, like, the person doing work just says, hey, I need you to quickly analyze this market and all the trends on it,
Starting point is 00:41:25 and come back with an answer about this thing. And, like, five seconds later, it comes back with that thing. Like, that would obviously have compressed, you know, let's say, 20 hours of what a human would have done. That doesn't mean that all of a sudden we're going to not work those 20 hours. It just means that we have the answer to then move to the next step in whatever that particular process is, just 20 hours, you know, sort of sooner. And I think if you just kind of multiply that out against kind of all of our work, I'm not convinced it then sort of meaningfully changes the job equation. Now, of course, this is one of these things which is, like, it would be really bad to look really wrong on this. So maybe this podcast
Starting point is 00:42:04 will be, like, the end of me in, like, 10 years. This is not our goal. Yeah, exactly. Like, I thought the tweet reading was the problem, but it was actually predicting that jobs are fine when we're, like, totally fucked. But I think that in any area where we can bring automation, for the most part, doesn't mean the job won't change or shift a little bit, but for the most part, you generally just get either more jobs or a shift of what the labor was doing as a result of that automation. And my thought experiment is, and again, the whole system is sort of experiencing this, maybe there's some sort of unforeseen factors, but again, I'm still pretty convinced of it. My general thought experiment is, like, a very kind of simple one.
Starting point is 00:42:47 If I could get an engineer within Box to write 20% more code, and let's just imagine it's all perfect code, or a sales rep to be 20% more productive, i.e., for the same dollars, they can sell 20% more in revenue. In both of those examples, the benefit, the improvement gains that we see, I'm going to reinvest those gains back into the business to grow even faster. In either of those cases, am I as CEO, or my co-founder as CFO, going to take those dollars and just be happy with higher profit levels?
Starting point is 00:43:23 Firstly, because we're going to be competing with somebody who will use that productivity gain to compete even better. So it's not like any of us are in a static market. So we will just have to go and reinvest whatever that performance improvement is back into the business, which would mean more sales reps. Because if right now you're paying them X and they can generate X times three, and now they can generate X times 3.5, like, I want as many of them as I can humanly get, probably up to a point, frankly, where it goes back down to three. And just because there's sort of a natural
Starting point is 00:43:53 rate that you expect kind of a sales, you know, person to be productive at. So I think that's going to happen for most jobs. Again, there'll be nuances. So if today you're doing, like, very frontline customer support, where the customer emails and they say, hey, I need to reset my password, and the AI now does that, what does that mean? My hunch still is that actually you'll just move to a higher-level set of tasks that the customer is asking for. But maybe some of those jobs have to shift into more customer success, as opposed to customer support. So, anybody who does B2B software, we can't get enough people to spend time with our customers. Like, there's just a cost equation.
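The thought experiment Levie walks through here reduces to simple arithmetic. A sketch with hypothetical figures, using his example of a rep returning "X times three" lifted to "X times 3.5":

```python
# Hypothetical figures for the reinvestment argument: a productivity
# gain at a fixed cost per rep funds more hiring, not higher margins.
cost_per_rep = 100_000  # fully loaded cost of one sales rep (made up)
reps = 10               # current team size (made up)

baseline_revenue = reps * 3.0 * cost_per_rep  # each rep returns 3.0x cost
boosted_revenue = reps * 3.5 * cost_per_rep   # AI lifts that to 3.5x

# Reinvest the surplus into headcount instead of pocketing it as profit:
surplus = boosted_revenue - baseline_revenue
extra_reps = int(surplus // cost_per_rep)

print(surplus, extra_reps)
```

The specific numbers are stand-ins; the point is that as long as each rep's return stays above the hiring bar, the rational move is to convert the gain into growth, which is why, in this model, the job count doesn't shrink.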
Starting point is 00:44:29 Like, I would like to have more people that can go spend time with our customers. Instead, we have to spend a certain amount of time and dollars on just pure inbound, I-have-to-change-my-password type emails. So I would take those dollars and reinvest them into things like customer success. It would actually be the same person. Like, the skill is relatively transferable. It would just be a different set of work that they'd be doing as a result of what we freed up. Again, there will be examples that are exceptions, but in, you know, every other era of automation, this is more or less what we
Starting point is 00:44:59 get. And I'm not convinced this is that different a kind of automation. Aaron, can I ask, like, what are we working toward? Like, we're building this God-level technology that can do almost all of our work, and in 20 years, we're still going to be working. So why are we going to do that? Well, so it's funny. I mean, you should have invited maybe Sam Altman up here, because he's welcome to come.
Starting point is 00:45:24 His answer will, what's that? He's welcome. OK, good. So I think his answer would just be different than mine. I think he would say we get closer to a higher level of species where we're not having to analyze the market trends. Like the computers are just doing all of that. And, you know, he is much more futuristic on this dimension.
Starting point is 00:45:47 I'm not sort of sure I understand why we wouldn't just sort of ultimately consume all of the work the AI is doing, as people, and then just still want to do more than what the AI did. But, you know, it's very possible that there's some crazy step-function change that I'm not imagining, that, you know, Ilya saw the other Ilya and is, like, you know, at that moment, you know, then everything really kind of, you know, changes completely. But, you know, this was five or ten years ago, I mean, people like Vinod, I think Sam to some extent, you know, had a view that maybe we end up having UBI in the future because AI is doing a lot of these tasks, and then we will just sort of share the benefit of that productivity
Starting point is 00:46:28 back to society and humanity. And I don't even necessarily know if that would be a bad outcome. I just don't necessarily think that's the one that will happen. I think, like, people will just find a way to have other people work that they want to work with to go and produce things and to innovate and find the next kind of set of problems we want to solve. Cool. All right, we have one here. My name is Adam Anzoni. I'm the CTO of Funwall.com. Nice. Nice. Yeah, we do. Great, great user of the platform. Yeah, good to see you in New York. Yeah, we've, definitely at Funwall.com, we've seen the value of doing
Starting point is 00:47:07 things like programmatically extracting information from things like bank statements. And obviously you think a lot about content, and it sounds like you're thinking a lot about, you know, GPT-5. And even if you watch the GPT-4o demo, you saw, like, basically computers now have eyes, essentially, that we don't have to train. Like, in the past, vision models had to have training done to do what we, you know, saw on that demo. Yep. When you think GPT-5 and all the content that Box stores and has, I mean, one thing I'm really excited about is video, but are there other use cases that you see unlocked
Starting point is 00:47:47 on the content layer with these newer, higher-performance models? Yeah, so I think, again, if you go back to the earlier framework of, let's just say, cost, quality, performance, and context window, and, you know, GPT-5 for me is just kind of a shorthand for, like, way better AI. So maybe it needs to be GPT-6 for the thing I'm talking about.
Starting point is 00:48:13 But when you have those factors all improve, so AI is, you know, 10 times cheaper, 10 times faster, 10 times larger context window, 10 times better intelligence, the thing that we think about, you know, given the business that we're in, is, what do people do with their content today? And what if you had, effectively, AI agents do many of those things, you know, on our behalf? And thus, I can, again, throw compute at the problem, as opposed to people at the problem. So that is, you know, some of the most straightforward things, like, just, I want to review every contract in my business and understand, like, all of the risk in my business against all the contracts I have,
Starting point is 00:48:55 or every contract that is up for renewal. You know, in your business, that's a version of just, you know, things like, okay, every single loan, you know, that is coming in, I want to review everything about it and be able to, you know, quickly have some assessment of that information to make a better decision. Right now, you know, we're limited by all the aforementioned things, which is, like, how much data can I put in the window, how intelligent is the model itself, what is the cost for doing that? And if those go away, then we can basically deploy AI agents to do a lot of the kind of, you know, frankly,
Starting point is 00:49:29 very manual, not very strategic, not very differentiating work that either we all spend our time on or our colleagues spend time on, and at a scale that was just never possible before. You know, I can deploy a thousand legal review agents at a problem instead of the one person on the legal team that can spend time on this. This is a totally different way to solve business problems inside of an organization. So you kind of just, you know, put that across everything. And this is also why
Starting point is 00:50:06 I'm just, like, extremely optimistic. You know, as we heard earlier, like, if you're in SF or in New York, let's say, the access you have to, like, the best law firm or the best, you know, marketing agency, this is a fantastic level of access and networking that we have. But whether it's a startup somewhere else in the world, or they maybe didn't get as much funding, or they're not in the sort of flow of what's going on, you know, AI, as an example, and this is, you know, three to five years out for this idea, but, like, if AI can basically do the things, you know, that are usually those first steps to just getting started with the business, that I didn't have access to before, because maybe I'm, like, a three-person startup in, you know, name your country, like, now I can actually have an outbound sales team. Now I can actually scale my engineering more effectively. I think this is a massive boon for any small business, any startup, any team that wants to experiment. And I don't think it's going to take from jobs, because those startups previously, like, they actually were not hiring those people. They were just sort of stuck in whatever they were currently doing, at a certain scale that they were at. So I think this is going to be, I think, you know, an incredible asset for anybody getting started or scaling up. All right, I definitely want to give people more time to hang out and mingle, and we'll still have more pizza and beer in the kitchen in a moment. I also want to say that 15 years ago, I started coming to tech meetups in New York City. I was early in my career,
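The "thousand legal review agents" idea from a moment ago can be sketched as a simple fan-out over a document set. The review function below is a keyword stub standing in for a model call; a real system would send each contract to an LLM with a risk-review prompt, so the names and keywords here are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def review_contract(contract_text):
    """Stub for a model-backed review agent. A real implementation would
    call an LLM; here we flag a couple of keywords so the sketch runs."""
    risky = [kw for kw in ("auto-renewal", "unlimited liability")
             if kw in contract_text.lower()]
    return {"risks": risky, "needs_human_review": bool(risky)}

def review_all(contracts, max_agents=32):
    """Fan the review out across many concurrent 'agents', throwing
    compute rather than headcount at the backlog."""
    with ThreadPoolExecutor(max_workers=max_agents) as pool:
        return list(pool.map(review_contract, contracts))

reports = review_all([
    "Term: 12 months with auto-renewal unless cancelled in writing.",
    "Standard mutual NDA, no unusual clauses.",
])
print(reports)
```

Because each document is reviewed independently, scaling from one reviewer to a thousand is just the `max_agents` knob (and, in practice, an API rate limit), which is the "throw compute at the problem" shape Levie describes.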
Starting point is 00:50:46 Now I can actually like actually scale my engineering more effectively. I think this is a massive boon for any small business, any startup, any team that wants to experiment. And I don't think it's going to take from jobs because those startups previously just like they actually were not hiring those people they were just sort of stuck in whatever they were currently doing at a certain scale that they were at so so I think this is going to be I think you know an incredible asset for anybody getting started or scaling up all right I definitely want to give people more time to hang out and mingle and we'll still have more pizza and beer in the kitchen in a moment I also want to say that 15 years ago I started coming to tech meetups in New York City I was early in my career
Starting point is 00:51:26 and we got a chance to hear from some of the luminaries, people who were really pushing the cutting edge forward in the technology world. And we saw people from those tech meetups end up advancing to places within the big tech companies and in media, and one of them became a really noxious internet troll. But most of them ended up being great and productive members of our society.
Starting point is 00:51:47 And I think that there is a real value in bringing people together. It's so cool to see so many people here, many subscribers of Big Technology, and some people that we're meeting for the first time. And this is going to be a tradition that we're going to start here in New York and around the world, and hopefully we'll come to San Francisco soon.
Starting point is 00:52:02 So thank you all for coming out. Woo! Let's hear it for you. And that being said, I think it's just been such a great privilege, Aaron, to be able to speak with you. I feel like every time we schedule the podcast, some crazy stuff happens in AI.
Starting point is 00:52:19 Now, maybe that's just because things are happening every week, but it feels like we always end up at the peak. We have been well-timed on these. But I also, I really, I mean, we started Box in 2005, and I have never seen anything like this. The amount of just sleepless nights, and you just have to catch up to, like, three different company keynotes in one day. It's just, like, it's insane, the amount of innovation that's happening. I think, you know, 95% of it is good. 5% of it is, like, very stressful.
Starting point is 00:52:50 And, like, oh my God, like, you never feel like you're moving fast enough and you're not catching up to the right thing. But most of it is just like, wow, what a lucky time to be witnessing all of this technology change. Absolutely. And thank you for helping us unpack it and understand it. So thank you, Aaron. Thanks for having me. Appreciate it. Cool. Thanks, everybody, for listening, and we'll see you next time on Big Technology Podcast. Thanks, Aaron. Yeah, thank you.
