Big Technology Podcast - Sam Altman Returns to OpenAI. Now What? — With Aaron Levie

Episode Date: November 22, 2023

Aaron Levie is the CEO of Box. He joins Big Technology Podcast to look ahead at the AI field now that OpenAI CEO Sam Altman has returned. In this episode, we discuss: 1) Whether this is good for the AI field 2) Should we actually be concerned with AI safety? 3) Whether the saga is over 4) How companies are insulating themselves in case of further eruptions 5) The downsides of switching off of OpenAI 6) Does the open source movement rise now? 7) Can OpenAI still lobby effectively with a new board? 8) The EA vs. e/acc fight 9) How Sam let this happen --- You can subscribe to Big Technology Premium for 25% off at https://bit.ly/bigtechnology Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Transcript
Starting point is 00:00:00 Sam Altman is back at OpenAI. But does that mean everything is right in the Microsoft and OpenAI universe? And what happens next for the AI field? Aaron Levie joins us to break it all down right after this. Welcome to Big Technology Podcast, where we've somehow become a daily show about the travails and next chapters of the OpenAI saga. Maybe this is the final one. We're here today with Box CEO Aaron Levie, who seems to always be here when things are
Starting point is 00:00:29 going insane in the AI world. And maybe that's just a function of this industry. But we definitely have a live one to discuss today. Aaron, welcome to the show. Thanks. Good to be on. And I do like that I get to talk about all the drama with you all the time. Yeah, we love having you on. And we appreciate you being here in a very hectic moment in the tech world. Let me start with this. Is this good for AI? Yes, it is. Why? I want to be very clear that anything I'm saying is purely based on reading the outside reports on what's going on, so some of this is a bit of conjecture. But if you look at what has been talked about as what led to the events where Sam had to depart OpenAI, I think this conversation on
Starting point is 00:01:29 sort of AI safety and doomerism actually does need to come to a head eventually. And I think we have to get to a point where we're talking about AI safety in much more, I think, reasonable, practical terms. I think we have let, this is all just sort of my personal view. I think we've let sort of the conversation move more toward, you know, really extreme, very, like, like, just insanely low problem. possibility existential risk type conversations. And when you have that at a board level, and especially if there's multiple board members that are starting to or always have believed that, you know,
Starting point is 00:02:11 AI represents an existential risk to humanity and they're in this position where they're going to save humanity if they stop something from happening, that is clearly going to lead to just undesirable outcomes in an organization whose mission is to build, you know, AI, and advance the state of AI. You eventually will not be able to reconcile those views at some point. And so I think to some extent this was probably always going to have to, you know, blow up at some stage. And this was just the moment that it did. And I think it's going to lead to some much more healthy conversations about, you know, how do we advance AI, do it in a safe way.
Starting point is 00:02:50 But let's start to get a little bit more grounded in how we talk about some of these topics. Do you think Open AI let some of this safety stuff get out of hand on its own? I mean, Sam was talking recently about how it's good that the board can remove him if they think he's taking this in the wrong direction. I think that, you know, it's interesting, right? So I think actually that's a very good sentence to say because it's showing that there's accountability and that you are not some, you know, kind of you can't run the organization in a dictatorial way. and have full autonomy, and there's no oversight. Like, that's actually, like, you would want a CEO to say that about their organization. I think what we're learning, though, is that it wasn't a board that was diverse enough in opinions
Starting point is 00:03:39 and experienced enough in some cases on how you operate a large-scale organization, how you make thoughtful decisions at a board level, and that actually turned out to be the problem. And he probably, you know, at the time of saying that didn't predict that that was, you know, something that would be, you know, would be, would be, you know, plausible as a scenario. I think he was probably imagining saying that is like, if I truly am kind of, you know, running out of control and going crazy, the board has the ability to remove me. And that's a, that's a safety valve, you know, to the extent that people think there's risk out there. I don't think that his intent was, was they would do that, you know, preemptively by 10 years before we've, we've sort of shown any evidence. that we're doing things in an unsafe type way? I've come down pretty hard against the doomers.
Starting point is 00:04:28 Even before this, I wrote a story. Maybe I think it was last week about how the AI doomers were finally getting their comeuppance. And I did get some feedback that said, I think it was kind of interesting to hear it, that said basically like, what if you're discounting something that's serious? And of course, I don't think that,
Starting point is 00:04:48 I personally don't think that we're like heading towards this AI doom scenario. But like, you do hear the way that Sam has been talking about the stuff that they've developed. Let me, and I gave this, I spoke about this on one of our emergency shows as it was going on. But to Lorene Powell Jobs, he said, I think people have in their mind how much better the model will be next year. And it's remarkable how much different it is. So I'm kind of curious, like, do you feel like these fears are completely overblown and given the acceleration and the trajectory that the technology is moving? is it possible that the folks who are concerned about safety might have a point?
Starting point is 00:05:24 I think that it's, first of all, I like the, you know, I, I, I don't really like this term so much in how, and how it generally gets used, but, but I, I like the marketplace of ideas. Like, I actually think, I actually think we should be constantly debating, you know, AI and AI safety and where the risk is and whatnot. Yeah, I think it's a very interesting intellectual, you know, type of conversation. and I have friends that are on extreme ends of the AI safety debate. But then that's still different from board governance, and you're literally on the board of directors of a company whose mission it is and a nonprofit whose mission it is to advance the state of AI. And so then at some point, we have to live in the real world
Starting point is 00:06:10 and be practical about the implications of that. Like don't join that board if your views are, you're doing is that harmful to society if you really deeply have that that innate fear like that's just the wrong board of directors to be on so um it it and so like like you know i like there's an alternative universe uh where on Thursday night the board members that that were thinking that open AI was moving too fast they should have just resigned in protest right done a public done a public post about why they resigned and then we should debate about how fast open AI is moving but you don't think that this that this that we're like at a point where the technology is potentially threatening
Starting point is 00:06:49 to humanity i i i can't imagine how we get from what we're looking at today yeah to threatening to humanity um and so we are we're like total like complete step function discontinuous breakthroughs away from from that um we are we're at the stages of like of like we have some productivity gains across a few sectors right now with generative AI, and we're in the very early stages of trying to figure out the implications of how to incorporate this into our software in a meaningful way. There's simply nothing that relates to what we're looking at now and something that is sort of humanity ending. Right. So looking ahead, Sam's back at Open AI right now. Is this really the end of this? Like, we don't have a board seat from Microsoft. We don't have the results of whatever
Starting point is 00:07:40 investigation is going to happen. The board is apparently, like, it consists of, like, it's a hilarious board right now. What is it, like, Larry Summers, Adam DeAngelo, and Brett Taylor. Yeah. And so, so do you think this is really the end? I mean, they're supposed to fill it with nine more people. It's amazing you even got back in there. What do you think about it?
Starting point is 00:08:02 Well, it's almost by definition not over. But I don't, I think the dramatic part period has, I think if you, like, looked at the Google Trends graph of like how much drama and how many people are following the Open AI saga, I think we're going to see a dramatic, you know, sort of, you know, drop from this point, simply because I think between Brett and Adam and Larry, you know, their charter is very clear at this point. It's, you know, built, again, this is from when I'm reading, build out a strong board that can govern, you know, open AI. And you can almost just like already predict the kinds of names of people that would be on that board. It'll be operators. It'll be, it'll be, it'll be a, you know, it'll
Starting point is 00:08:44 be a, you know, I can only predict a very well-constructed board of, of, you know, strong, strong leaders. And, and then their job will be to provide accountability and sort of oversight of the organization, and Sam and Greg and the team will go and build AI. And that seems like, you know, kind of almost everything's back to normal, but probably with a hopefully a clearer charter of the organization, you know, on a go-forward basis. Yeah, you and I had emailed when this was all getting started and, you know, talking about having you on here when things resolve. And when Sam agreed to go to Microsoft, I was like, is it time to email, Aaron?
Starting point is 00:09:27 And it just felt like the ball was still up in the air. And I, that's when, once he decided he's back, like I was like, okay, maybe this is the time. I hope this is really it. this feels this feels i mean the the the fact that you have the uh either forced or voluntary resignation of the board members that that appear to be at the center of a lot of this um i think we're now entering a new chapter of uh of this and i think it was very clear in the process the quote-unquote leverage that sam had where basically the entire organization was willing to go over to microsoft so so at some point you know it becomes kind of semantics which is like well
Starting point is 00:10:05 what, you know, what is Open AI if there's only 20 people left and, you know, a chat Chabit domain, if everything moves over to Microsoft and they can just go replicate the same thing, then you kind of lose all your leverage because all you've really done is, you know, you've gotten rid of a nonprofit organization essentially, and so you don't, there's not a lot of, you don't have a lot of negotiating leverage in that case. So I think we're kind of, now that we've seen all the pieces on the, on the board, I think it becomes pretty, a much more boring saga at this point, which is a good thing. Right.
Starting point is 00:10:40 Okay. So I want to spend the bulk of our time together talking about what's next because obviously like this is this seismic moment and there's like a very tidy narrative that a lot of people have wrapped up that it's great news for Open AI and Microsoft. And I'm questioning that a little bit. It seems like a lot of people are now kind of evaluating their relationship with Open AI. not saying they're leaving, but saying they're like, wow, have I locked my company into the OpenAI API and like, what can I do to make sure that I am protected in case something like this happens? What do you think?
Starting point is 00:11:16 Yeah. So, first of all, I think other than a very fast-moving startup where you just don't have any time to, you know, have multiple paths that you're investing in, I think most organizations at scale have already imagined a world where, you know, you know, have multiple paths that you're investing in. I think most organizations at scale have already imagined a world where they have to be, they have to have optionality on where their AI comes from. Any CIO we talk to, any CTO of a, you know, let's say a SaaS company above a couple hundred million in revenue, and you talk to them about their AI strategy, either they've already done a mix of investments in AI. They're doing something with Lama, they're playing with Anthropic, they've done something
Starting point is 00:11:56 with Google, or they've done something with Google. or they've at least been building an architecture that supports that. So I think that, you know, that optionality is already existing. But OpenAI has the really important feature, which is they have the most advanced models at the lowest cost per token. And that is a competitive weapon and a competitive advantage that does not seem to be slowing down at this point. And so, you know, despite all of the events of the past week, That advantage is just an objective advantage, and these are not, you know, these are not at this moment yet as far commoditized components that you can just sort of swap in and out.
Starting point is 00:12:39 Maybe they will get there in five years from now. I actually would prefer that they do because we always like to be able to have flexibility with our underlying suppliers, but they're not swappable right now. And OpenAI does have the most advanced and the lowest cost per token models from our testing. And so I think the implication is simply just that, you know, hopefully there will continue to be leapfrogs by other vendors, Google and Amazon and Anthropic and meta. But the reason why, you know, at least I personally felt like it was super important to have the open-AI piece resolved
Starting point is 00:13:14 is they do have the best technology. And so it would be very unfortunate to lose that and have to go to something that is more, you know, less advanced and then thus, you know, produce a less, you know, high-quality experience for our customers. So that's why I really like the resolution with kind of what we landed on. Right. So let me know if I'm getting this wrong, but you do have this Open AI integration where people can effectively chat with their documents, right? Yep.
Starting point is 00:13:40 And so you can't, you feel today that you can't really swap that out for an anthropic, for instance? We totally could. I'm talking about very small, subtle differences. But in the most objective measurement, Open AI is in the lead. And so we could, of course, you know, introduce slightly, you know, more degraded functionality in a pinch. But I prefer an outcome where Open AI, you know, continues to execute as we've seen. Right. So are you building now or are you preparing now to be in a place where you're effectively model agnostic or are you there already where, like, you can very quickly switch.
Starting point is 00:14:23 We are model agnostic. Okay. What are you hearing from? I mean, anthropics out there, inflections there. They must be like right now making a very serious sales pitch
Starting point is 00:14:34 to try to get others on. I mean, what are you hearing from them? Yeah. So I mean, because it's only been a couple of days, I probably can't sort of yet synthesize what,
Starting point is 00:14:46 you know, their pitches have done or how that's changed. I would say, you know, interestingly, in something like Anthropic, and I have a lot of respect for their advancements and their models are incredibly strong, to be very clear. In the case of Anthropic, you know,
Starting point is 00:15:02 interestingly, you know, they are a safety first organization, which is fantastic. Again, in that sort of idea of marketplace of ideas, you want different companies trying out different things, but it's not 100% obvious that they would not have the same kind of board, you know, type event as Open AI at some point if they've designed. Right. They could have the same thing. Yeah, like for the first time ever, like I actually am like, oh, actually I would want to see your board of directors before, before, you know, telling all of our customers to go rely on your, your product. Like, I didn't realize this was a sort of an area of risk previously, and now it's obviously very, very apparent that it is in
Starting point is 00:15:46 this space. I had this CEO on LinkedIn. Drop a comment. on a post that I wrote today and I thought it was very interesting. He said, we have a proprietary AI engine for internal corporate data plus open AI for public data. I changed that strategy last week after the fiasco emerged to look at other point solutions in this space, such as Anthropic, which looks okay for now. I've changed and I will not be changing back. I'm disappointed as we put a lot of effort and cost into open AI, but you cannot run a solution stacked faced with the circus and held ransom every time something goes wrong or people don't agree. It is really bad, and I will not solely rely on open AI. It's too risky. You think that's going to be
Starting point is 00:16:25 the mentality for others? I think that, again, I think that's a good mentality. There's, again, there's upper limits of what, of then how that actually works, because you can be model agnostic, but if one model is literally superior for your use case, then you still are going to, you're still going to default to the best model. And then really what you're saying is that in a, you know, in a, you know, kind of nuclear scenario, you can downgrade to some other model, but that's not like your default path. And that's actually just good business continuity in general. I think, you know, even when you're thinking about purely your infrastructure, you know, if one, if one data center zone goes down, you go down to, you go to another. And that could actually
Starting point is 00:17:12 be an entirely different vendor in some cases. So I think AI, you know, will, have a similar characteristic as you incorporate AI into your products, but there are going to be subtle differences of then, you know, the product experience as you, as you kind of downshift to another AI model. Yeah. Okay. And now I'm going to get to the open source thing. So I did post this on substack, you know, my thoughts about the move to model agnosticism. And someone took it one step further, which is that, and they said it this way, they said, if you're a company that fine-tuned open-AI models for your product this weekend or this past week may lead you to switch to open-source foundation models, basically that if you needed any level of customizability,
Starting point is 00:17:55 you may kind of say, well, screw it, like we're not going to risk any company with any board, kind of like you and you pointed out. We're going to go open source. We're going to, you know, tune the model that works for us. And that indicates that there'll be a rise of open source development. What do you think? That actually, you know, so open source is the, is the best antidote to, and the best, you know, counterbalance to, you know, any kind of private provide technology. So provided technology. So, so I'm, I'm in general always a favor of open source. I think what we've seen from open source is still not as advanced as, as some of the proprietary services. But this event, this event is, you know, just one of like an infinite set of
Starting point is 00:18:43 reminders of why it's always more ideal if you can control your technology. Nothing can ever be ripped out from under you under any circumstance, you know, kind of with some asterisks. And and so, and so I, like, I would always, if I, if I always have the choice of the same technology closed or open source, I will always choose open source. Today, that is not yet a choice that is, is realistic just because, again, where the state of the models are. And, and, and that's where you kind of get into the talent question, right? Opening I had always been able to attract these top flight researchers because it had this dual promise of pursue AGI, but do it like in a way that you could feel safe with, you know, consciously because of this special board.
Starting point is 00:19:27 Now the board's going to look a lot more like a standard corporate board and it might give companies that are trying to develop open source models a leg up because they could be like, hey, listen, like at least this is like publicly available open source. You're contributing to the great or good. In fact, already today, Jan Lecun, who's the chief AI scientist, that meta is amplifying this message and basically, you know, all but telling people to come over to meta and that meta could be the big winner here. What's your perspective? Yeah, so, so I think, you know, I think in the history of software and technology, we always have, we always have some yin and yang of a great proprietary platform and then somebody who counter positions that
Starting point is 00:20:09 with an open platform. And we had it with Android and iPhone. We've had it with Linux and Windows and other platforms. We have it in Oracle versus MySQL. We've had it in buying kind of proprietary infrastructure and then Facebook's Open Data Center design efforts. So there's almost inevitably always they close versus open source battle in technology. And at each in each kind of a product category or class of product, there's sometimes a different winner. So in my personal life, I use, you know, mostly Apple products because that vertical integration that they provide means that you usually have better security or better user experience. But in our data center design, we almost always want open source, you know, infrastructure services because of our
Starting point is 00:21:02 ability to control, control that, you know, control upgrades, you know, get the power of the community developing on it. And so, you know, AI is one of these technologies, which is how much does it benefit from the vertical integration? That's an open question. How much does it benefit from the pure kind of scale and flywheel effect of the company that gets strongest can spend the most on GPUs, which can train the biggest models, which can then have, you know, the better AI, which then gets more revenue, and then that becomes a flywheel. You know, right now, you know, Facebook is sort of doing this. They're somewhat subsidizing the whole effort without kind of like obvious monetization,
Starting point is 00:21:45 whereas when Google did this with Android, they were going to monetize it through effectively search and advertising. And so I mean, I love that meta is investing so much in open source AI. They obviously benefited from probably their ad algorithms or whatnot, but like it's probably hard for them still to compete with, you know, with open AI on some. dimensions because open AI has a tremendous amount of focus. They have a tremendous amount of resources to attack this problem, and clearly they have some of the best talent in the world working on it. But I think, again, if you look at this in 10 years from now or 20 years from
Starting point is 00:22:25 now, you'd almost always believe that open source would be able to catch up just because we've seen that in almost every other era of technology. But there could also be some idiosyncrasies about this where where the commercial, you know, version is able to always sort of stay ahead for some set of reasons. And so I would just say, like, this one is like too confusing to predict at this point. Where's Google in all this? I mean, we keep hearing about this Gemini model. It's nowhere to be seen. Yeah. So Google is, you know, Google I would normally think of as in the, in the, it's actually, you know, interesting because Google usually will do the counter strategy to whatever the incumbent strategy is. That's sort of their, their typical path of disruption. So,
Starting point is 00:23:05 So, you know, in theory, they should be the, they should be the meta of this situation doing the open source, but they obviously haven't pursued that path. So, which kind of leaves them, you know, somewhere in between these approaches. And I think we're all, you know, equally waiting on the Gemini launch and we'll just see kind of what that looks like and where it's at. Sam Altman sort of made the rounds in Washington and around the world and been a very effective lobbyist for the regulation and the safeguards that he wants. because he's had this company at his back that's been this nonprofit AI safety focus company. Can he still do that under a new structure? I mean, he's going to lose some of the sheen and credibility, you'd imagine, that he brought up beforehand. Well, first of all, I don't, I mean, I don't know that they've changed the nonprofit element.
Starting point is 00:23:52 Do you know if that's the case? Well, no, the corporate structure isn't going to be different, but like we all know that it's going to be very different, a very different board. Yeah. You know, in terms of the people on there. And like the Microsoft Association, That's like I said, no more surprises. Yes. I guess I never, I never, I mean, I don't know how like, you know, policy engines think, but like I never saw.
Starting point is 00:24:18 Yeah. I never saw that their board was the reason why I could trust that organization. I trusted them more because they have a working product with leaders that I generally understand their motivations. They have the most advanced technology. They have the best researchers. And so as a result of this change, I would say, I would say, you know, I would actually say, I mean, if actually, if anything, Sam's whole whole sort of commentary is probably even more credible because we've- They did fire him. They did fire him. So like, like, he clearly is, he works at the, you know, at the, at the mercy of the board. But I would probably trust the decisions to fire him much more with a much more stable. organized thoughtful board of directors.
Starting point is 00:25:08 Do you know what the story is behind this war between effective altruists and these e-accelerationists or effective accelerationsists for the life of me, Aaron, I'm trying to piece it together. It's bananas. I think I understand it. What's your take? I don't know. I mean, so here, I'll just give my understanding. I think there are these, you know, these effective altruists, which tend to, like, you know, want to be very cautious about the development of AI and then the accelerationists who are like build as fast as possible. Like it's basically like, you know, the open AI board of last week against Mark
Starting point is 00:25:43 Andrewson's techno optimist manifesto. Is that right? Like, and how big of a factor are these factions within Silicon Valley? Um, so I, well, first of all, I think like, you know, in a very simplistic way, I think you just described it. So I think you do fully understand it. So, um, uh, relief. And, and, and like, the subtleties are like, I think the, I think there's one camp, and I don't want to, like, I'm going to, I'm going to, I'll be label free for a second. So that way I don't, you know, step in any landmines. I think there's a camp that sort of subscribes to, hey, like, let's slow down AI progress. And, and that'll give us time to figure out how we can make it more safe.
Starting point is 00:26:25 And then there's another camp which says, like, like, A, like, that's not even really clear if that makes any sense. like what like how does one how does one like decide what's the right speed like should we type the code like slower should we should we just arbitrarily put fewer GPUs to work like what like like like it's a very it's a very it's sort of like antithetical to everything else we do in in technology progress which is when you get to the next step you learn more information so you actually want to get to the next step as quickly as possible so then you get the next step and so on and then and so then that other camp is sort of like well well actually AI is actually a good thing that for society. So let's actually accelerate as much as we can to be able to get there. And, oh, by the
Starting point is 00:27:07 way, along the way, we'll learn what's not safe or what's not working. And the ecosystem will continue, as we always have in the past, you know, many decades of technology, we'll continue to build the right safeguards as we go along. But, oh, by the way, like, we don't need to be fearful for our AI future. So that's also why we can just have this underlying sort of rate of acceleration. And that's, you know, how I generally view. the two camps. And I think that the, you know, it's interesting, I actually, probably a week ago, I would have, I would have sort of said, hey, yeah, these are like academic conversations that, like, talk about at dinner. And, and, like, you know, these are like, these are just the
Starting point is 00:27:46 intellectual, fun debates of Silicon Valley. Obviously, it turns out that these things are much more significant and real in, you know, at the board level of these AI labs. And I would not have predicted that. So just one thing about AI safety. Like one of the things that I saw, I mean, I had an AI founder who was basically DME and said economics always win. That even if we had like an existential risk with AI, if open AI just stopped producing their thing, their technology like the people that invested in them and the employees with money on the line would find a way to override whatever safety was built into their structure. What do you think about that? Well, The reason why this is hard is my brain has not yet been able to click.
Starting point is 00:28:33 My brain hasn't been able to click into what is the actual specific fear that people are worried about. Right. And then why that fear, if that fear even gets remotely true, we're not even like talking about like capitalism level like, like, you know, intervention. We're talking about government intervention. Like if AI literally is doing things that kill people, then, then like, you know, Josh Kushner will not be able to overwork. ride that, you know, because he did a tender round, that's like literally where the FBI is going to show up and they're going to be like, what have you guys been building? So, so like, that's why I don't know that like two or three people on a board of directors need to sort of preemptively save the
Starting point is 00:29:16 world because they've, they've sort of extrapolated some crazy set of events that are going to play out in 10 years from now based on the decision Sam's doing right now. Like, like, if it gets that far, which for everything I'm looking at, I don't see how it does, but if it does get that far, then that's like, this is just above your pay grade. Like, this is a different level type crisis and problem that we're dealing with. Okay. Last question for you. Sam Altman seems pretty capable. I'm sure he's spent time with him. How do you think he let this happen? I think, so he's insanely capable and incredibly competent and obviously an incredible entrepreneur. I think that, I think like other board drama that we've seen in history, I think sometimes you can maybe misunderstand or get out of step with or not totally see some of the
Starting point is 00:30:18 have some blind spots of where board members have kind of evolved to. And I think even the world's best entrepreneurs, you know, can run into that. So, so I don't, you know, I would just say that, I would say this can happen to the literal, you know, best, best, you know, operators on the planet. Yeah. Aaron Levy, thanks so much for making time. Always learn so much from you from our conversations. Great having you on. Good chatting. See you, man. Awesome. Take care. Thanks everybody for listening. We'll be back on Friday with another show breaking down the news. Until then, we'll see you next time. And thanks for listening. We'll see you on Friday on Big Technology Podcasts.
