Big Technology Podcast - OpenAI President Greg Brockman: AI Self-Improvement, The Superapp Bet, Path To AGI, Scaling Compute

Episode Date: April 1, 2026

Greg Brockman is the President and co-founder of OpenAI. Brockman joins Big Technology to discuss OpenAI’s product strategy, the rise of its coming super app, and why he believes AI is entering a ne...w takeoff phase. Tune in to hear Brockman explain OpenAI's bet on the GPT reasoning model tree over video generation, what the "Spud" pre-training run means for upcoming models, and why he believes AGI is 70-80% achieved. We also cover the competitive landscape, the economics behind OpenAI's $110 billion infrastructure bet, and public skepticism toward AI. Hit play for one of the most revealing conversations yet about where AI is headed and what it means for everyone. Join our June 18 event here: https://luma.com/q1tngaw7https://luma.com/q1tngaw7 --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Discussion (0)
Starting point is 00:00:00 I think it's extremely clear that we are going to have AI within the next couple of years in a way that is still going to be jagged, but that the floor of task will just be almost for any intellectual task of how you use your computer. The AI will be able to do that. The scariest moment at OpenAI was actually after we launched Catch ET. And I remember being at the holiday party and just feeling this vibe of one. I have never felt that. I was like, no, that we are the underdog.
Starting point is 00:00:30 And we always have been. From the moment we launched chat chvety, I remember talking with my team having this exact conversation. I said, how much compute should we buy? I said, all of it. I said, no, no, no, really, how much compute should we buy? I said, no matter how much we try to build, I know we're not going to be able to keep up with the demand. OpenAI co-founder and president Greg Brockman joins us to talk about AI's most promising opportunities, how OpenAI plans to capitalize on them, and what the Super App is all about.
Starting point is 00:00:57 And Greg is with us here in studio today, Greg. Great to see you. Thank you for having me. Well, we're speaking on a time where OpenAI is shutting down video generation and focusing its energies on a super app, which is going to combine business and coding use cases. And I think from the outside, those of us washing this are like, including myself, Open AI is winning in consumer. And now it's shifting its resources. What is happening? Well, the way I would think about this is that we have been an award.
Starting point is 00:01:30 where we're developing this technology, deep learning, to really see, can it have the positive impact that we have always pictured? Can it build, can it be used to build applications that help people, that help them in their lives? And we've separately had a arm that's saying, let's actually try to deploy this technology, whether that's to help sustain the business, to start getting some practice with getting real world impact, those kinds of things. For the time when this technology actually comes to fruition, that it actually becomes the everything that we've imagined that we started this company to try to to have. And I think that we're at a moment now where we've really seen this technology, it's going to work and that we're moving out
Starting point is 00:02:15 of testing on benchmarks and sort of these almost cerebral demonstrations of capability to it actually being the case that for us to develop it further, we need to see it in the real world and get feedback from how people are using it in knowledge work in various applications. And so the way I think about it is that this is a bigger strategic shift because of the phase of the technology. And it's not so much that we're saying we're moving from consumer to B2B. It's really what we're saying is that what are the most important applications that we can focus on? Because we can't focus on everything, right? But what are the things that we can bring to life that will actually synergize together as we build them
Starting point is 00:02:52 and that will deliver meaningful impact and help elevate everyone. And when we look at the list, so there's consumer, you can think of it as many things, but there's a personal assistant, right, something that knows you that's aligned with your goals, it's going to help you achieve whatever it is that you want in your life. There's also creative expression and entertainment and many other applications. On the business side, maybe you can, if you zoom out, it looks more like one thing of just You have a hard task. Can the AI go do it?
Starting point is 00:03:22 Does it have all the context to do all these things? And for us, it's very clear that the stack rank includes two things at the top. One is the personal assistant. The other is the AI that can go and solve hard problems for you. And when we look at the compute we have, we are not even going to have enough compute to fund those two things. And then once we start adding in many other applications, many other things that AI is going to be very useful for and is going to help people with, we just can't possibly get to all of them. And so I think that this is a recognition of the maturation of the technology and the incredible impact is going to have very quickly and our need to prioritize and to actually pick
Starting point is 00:04:01 the set of applications that we want to shine and to really bring to the world. And when I've heard you talk about Open AI's various bets, one of the ways that you described it is that opening I can be a version of Disney or like Disney, where you have this core compelling advantage at the center and then you farm it out in different ways. So Disney has Mickey Mouse. And then it can do the movies and the theme park and Disney Plus. And for Open AI, it's the model and you can do video generation and be this assistant and then help with enterprise and work. So is it no longer possible then to have that sort of central advantage and then be able to farm it out in all sorts of ways? Like have you decided, have you come to this realization that basically like it's time to pick or choose?
Starting point is 00:04:44 Well, I actually think that in some ways that that story is even more true than it's been. But the thing that's important to realize is technologically that the SORA models, which are incredible models, by the way, are a different branch of the tech tree than the core reasoning GPT series. They're just built in a very different way. And to some extent, we're really saying that pursuing both branches is very hard for us to do for these applications. Now, we are actually continuing the SORA research program in the context of robotics, right, which I think is very clearly going to be a transformative. application, which is still a little bit in the research phase, right, that robotics is not really yet mature and to pull it in the way that we're going to see this real takeoff of this technology and knowledge work over the next year. And so it's a recognition of for this moment,
Starting point is 00:05:32 we really need to put the primary focus on developing the GPT series. And that doesn't just mean text. It doesn't just mean cerebral things. Like, for example, bi-directional communication, having a great speech-to-speech interface. That is something that also. is going to make this technology very usable and very useful, but it's not a different branch of the tech tree. It's all kind of one model. And we just sort of tweak that in slightly different ways, kind of like you describe. And so I think there's something about if you branch too far and you have two different artifacts, that is very hard to sustain in a world where there is limited compute. And the reason there's limited compute is because there's so much demand. There's so much people
Starting point is 00:06:10 want to do with every single model that we create. Okay. So talk a little bit then about why your bet is not on this seems like world model version, where the video understands where things go. It's obviously useful for robotics. Why is your bet on the GPT reasoning model tree as opposed to this area which you had been seeing real progress with SORA? I mean, to see the progress of video generation, you know, generation one, two, three was enormous.
Starting point is 00:06:38 So why is your bet where it is? So the problem in this field is too much opportunity, right? It's the thing that we observe very early on in open eye is that every, everything we could imagine works. Now, there's different levels of friction associated with it, different amounts of engineering effort, different compute requirements, all those things. But every single different idea, as long as it's kind of mathematically sound, you actually can start getting some pretty good results. And I think that shows you the power of the underlying technology of deep learning, the ability to really take any sort of problem and to get to the
Starting point is 00:07:12 meat of it, to have an AI that really understands the underlying rules that generated the data. So it's not about the data itself. It's about understanding the underlying. process and be able to apply it in new context. So you can do that in world models. You can do that in scientific discovery. You can do that in coding. And I think that where we are, as we think about the rollout of this technology, is again that the, you know, there's been this debate of how far will the text models go, how far can text intelligence go? Can you have a real conception of how the world operates? And I think that we have definitively answered that question of it's it is going to go to aGI like we see line of sight and that it is at this point we have
Starting point is 00:07:53 line of sex these much better models that are coming this year and the the the amount of pain within opening eye that we've had to decide how to allocate compute that goes up not down over time and so I think that maybe the core of it is that we have a it's about sequencing and timing and that in this moment the kinds of applications that we've always dreamed of are starting to come into reach. Like, for example, solving unsolved physics problems, right? We had this result recently where a physicist had been working on a problem for some time. He gave it to our model. Twelve hours later, we have a solution. And he said, this is the first time he seen a model where he felt like it was thinking that it felt like this is a problem that
Starting point is 00:08:35 maybe humanity would never solve. And our AI solved it. But you see something like that. Like, you have to double down. You have to triple down. Because we can really unlock all of this potential for humanity. And so I think for me, it's not about relative importance of these things. It's more about what is OpenEye's mission of delivering AGI to the world, our vision of how it can benefit everyone, and the fact that we have a tech tree that we see how to just push it, how to do the engineering, do the further science and research to then have that come to fruition. Okay, so I do want to come back to the next line of models that you're anticipating, but I want to press you on this for a moment. I was speaking with Demis Sazas from Google
Starting point is 00:09:16 deep mind earlier this year. And interestingly, he said that the thing closest, that feels closest to AGI for him was nano banana, the image generator that they have. And the reason is because for an image generator or a video generator to create the images and the videos that it makes, it does have to understand the interaction between objects and have at least some conception of how the world works. So is this a potential, I mean, it's a big bet, but does Open AI potentially miss something by doubling down on the other tree, if that's the case. So two answers. One is absolutely.
Starting point is 00:09:51 Yeah. Right? There still is not like in this field, you do have to make choices, right? You have to make a bet. And that's actually where opening I started is. We really said, what is the path to AGI that we believe in and really focused hard on that, right? The sum of random vectors is zero. But if you align your vectors, then you can go in a direction.
Starting point is 00:10:09 But the second point is it's actually image gen is something that has been very, very popular. within chat Chabit. And that's something we're continuing to invest in, continuing to prioritize. And the reason we're able to do that is because it's not actually on the on the world model, like diffusion model tech branch. It's actually based on the GPT architecture. And so there, even though it's a different data distribution, the actual core technology at the core stack, it's all one thing. And that is, that is actually the pretty wild thing about what AGI is, is that sometimes these very different looking applications between speech-of-speech, image generation, text, and text, by the way, itself, many facets of, like, science and coding
Starting point is 00:10:52 and personal, like, wellness, information, those kinds of things. All of that you can do in one technological envelope. And so a lot of what I'm looking at, and what we as a company are looking at from a technological perspective is how to have as much unification of our efforts, because we really see this technology as being something that's going to uplift and how we're the whole economy. The whole economy is a massive thing. And so we can't possibly do all of it, but we can do our part. That's the general part in artificial general intelligence. That's the G.
Starting point is 00:11:22 That's the thing about that. It really is. Speaking of unifying things, what is this super app going to be? So the way I think about the super app is, so it's going to bring together coding, browser, and chat GPT. That's right. So what we want is to build an endpoint application for you that, really lets you experience the power of AGI, so the generality. And so if that's, you think about
Starting point is 00:11:48 what chat is today, I think chat is really going to become your personal assistant, your personal AGI, right, an AI that's looking out for you that knows a lot about you that's aligned with your goals, that's trustworthy, that kind of represents you in this like digital world. Codex, you can think of as right now it's been a tool that we built for software engineers, But it's becoming codex for everyone that anyone who wants to build can use codex and to produce, to get the computer to go do the thing that they want. And it's not just about the actual software anymore. It's really about almost the use of computer, whether it's to set up, like I use it settings on my laptop. Like I forget how to set the hot corners.
Starting point is 00:12:32 Just ask codex to do it. It just does it. Right. That's what computers were always supposed to be is contort to the human rather than me contort to them. And so imagine one application. that anything you want your computer to do, you can ask it. And so there's a computer use browsing built in for an AI to be able to actually use a web browser and for you to be able to oversee what the AI is doing, that all of your
Starting point is 00:12:52 conversations regardless of application, whether it's for chat or whether it's for code, whether it's for general knowledge work, that's all unified in one way, that the AI has memory, knows about you. So that is what we are building. But it's really an iceberg because that's the tip. what to me is actually much more important is the technological unification. And we talked about it a little bit in the case of the underlying models. But the thing that's really changed over the past couple of years has been that it's no longer
Starting point is 00:13:20 just about the model. It's about the harness. It's about how does the model get context? How is it connected to the world? What actions can it take? How does the actual, as you get new context, how does the loop of interacting with the model work? All of that was something that we had multiple implementations of or slightly different. And we're converging it.
Starting point is 00:13:39 We're going to have one version of that and almost end up with this AI layer that can be pointed at specific applications in a very thin way. So you can build a little plug-in, a little skill, a little UI. If you really want something that's great for finance, if you want something that's great for legal. But you generally won't have to because there's been this one super app that will be very broad. This app is for business use cases, personal use cases. So that is really the core is that just like a computer,
Starting point is 00:14:08 like your laptop. Is it for personal? Is it for business? Right? Well, both. And it's for you. It's your personal machine that gives you an interface to this digital world. And that's what we want to build. So I just talk a little bit about from a non-business standpoint, I'm using the super up in my personal life. What am I using it for? How does my life change? So I would think of it as so personal life, just the way that you use chat chbt. How do you use chatypt right now? And people use it for such a diversity of really amazing applications. Sometimes that's just asking for, I'm going to give a speech at a wedding. Can you help me with drafting it? If you give me some feedback on this idea that I have, I'm working on a small business. Can you give me some ideas there,
Starting point is 00:14:52 which maybe starts the bridge between personal and work? There's any of those questions should be things that you can go to the super app for and it answers, but that if you think about what chat, ABD has been. It's already been evolving. It used to not have any memory, right? It's just the same AI for everyone, starting from scratch. It's almost like talking to a stranger. It's way more powerful if it remembers, remembers the interactions you've had. It's way more powerful if it has access to context, right? That it's hooked up to your email and to your calendar and really knows your preferences and has this, this almost deeper set of just, you know, past experiences with you that it's able to leverage to achieve your goals and to you look at things like pulse is a
Starting point is 00:15:38 feature in chat chaghti t right now where every day it surfaces for you things that you might be interested in based on what chat chag chag chb knows about you so i'd say that in the personal capacity that the super app will be doing all of that and will be doing it a much deeper and richer way when you plan to ship it uh so the way to think about it is uh we should we're taking incremental steps to get there over the next couple months, we should have shipped the complete vision of what we're talking about here, but it's going to come in pieces. And the place that we're starting is with, for example, the Codex app today is something, which is a, it's really two things in one. It's a general agent harness that can use tools. And it's also a agent that knows how to write software. That general
Starting point is 00:16:31 agent harness, that can be used for so many different things. You hook it up to spreadsheets, you hook it up to Word documents, it's able to help you with knowledge work. And so we're going to make the Codex app just so much more usable for general knowledge work because it already, what we've seen within Open AI is all this organic adoption of people using it for that. So that'll be the first step, and there are many to come. I was speaking with one of your colleagues yesterday, taking a look at Codex. And he mentioned that someone using Codex had instructed code. to help them with video editing, it builds a plugin for Adobe Premiere, started separating it into chapters, and started the edit. That's what we're looking at. I love hearing that. That's
Starting point is 00:17:13 exactly, exactly the kinds of things that we want this system to be useful for. And it's been really interesting seeing, like the Codex app itself was originally built for software engineers and that it's almost like the current usability of it for non-softer engineers is actually quite low because there's a bunch of little things where when you set things up, you run into some error that a developer knows what it means, knows how to fix it. It's just kind of what we're used to. But if you're not a developer, you're like, what is this? Like this is not something that I've encountered before.
Starting point is 00:17:51 And despite that, we are seeing people start to use this who have never programmed before to be able to build websites, to be able to do exactly the kinds of things you said of like be able to automate different, their interactions with different pieces of software, to be able to get lots of leverage. Like someone on our communications team uses it to, it's hooked up to Slack, to their email, they're able to go through a bunch of feedback, be able to synthesize it very well. So these kinds of tasks, people who are very motivated can jump through the hoops and then get great return from it.
Starting point is 00:18:22 And so to some extent, we did the super hard part of an AI that is really really, smart, capable, can actually accomplish your task. Now we have to do the much easier part in some sense of make it broadly useful and to remove these barriers to entry. And just looking at the competitive landscape, I mean Anthropic, they have the Claude app. You can use Claude the chatbot, Claude Co-Wort, Claude Code. So they have a version of a super app of their own. I'm curious what you think Anthropics saw that got them to this position earlier. And what do you think your chances are of catching up there? Well, I think that if you rewind 12, 18 months, we have always been focused on coding as a domain. We always had the best numbers on
Starting point is 00:19:07 different programming competitions, these very cerebral things. But the thing that we didn't invest in as much was that last mile of usability of really trying to think about, okay, this AI is so smart. It can solve all these great programming competitions, but it's never seen someone's real world code base, which is messy and not. quite as pristine as the world that it sort of has experienced. And I think that is something that we were behind on. But about, you know, maybe mid last year is when we got very serious about that. And that we had a team very focused on what are all the gaps, what are all the kind of messiness of the real world we haven't, we haven't encountered. How do we actually get training data that
Starting point is 00:19:45 build training environments that let the AI experience what it's like to actually do software engineering, be interrupted in weird ways, all those things. And I'd say at this point, we are caught up. When people go head to head for us versus competitors, that people tend to prefer us. We do know we're dragging in front end. We're going to fix that. But this is the general motion that we've been taking is to say that that usability of thinking about the product end to end, not just a model, and then build a separate thing, right? Really think of it as one product when we're doing the research. We're thinking about how it will be used. That has been a motion that we've been changing within Open AI.
Starting point is 00:20:22 And so I think that the way I would look at it is that we have incredible step-up models coming. Like this whole year, I look at the roadmap. It's truly inspiring what will be possible. And then we've been really focusing now on let's also get the last mile usability. Since 2020, Open AI has been like the undisputed leader. And obviously now the competition is intense. It's like, you just use the word, we're caught up.
Starting point is 00:20:52 Is there a different vibe within the company where it's like now instead of the one that's like far ahead on something like Chad GPT in a real fight? I mean, you're seeing it come out of some of the reporting on what's happening within the company, the fact that there are no more, there's been meetings, there's no more side quests at Open AI. It's all focus on this. How's the environment or the vibe changed here? Well, I would say that for me personally, yeah, the scariest moment at Open.
Starting point is 00:21:19 AI was actually after we launched chat chbt. And I remember being at the holiday party and just feeling this vibe of we won. I have never felt that. I was like, no, that we are the underdog. And we always have been, right? The competitors in this space establish companies that have just sort of much more capital, much more, you know, human resources, data, the whole, the whole thing. Why is open eye able to compete at all?
Starting point is 00:21:44 And to some extent, the answer is only because we never feel complacent. where we always feel like we are the challenger. And it actually, for me, has been a very healthy thing to see us start to see that in the marketplace, to see other competitors emerge and do a good job. And that that is, you know, in my mind, you can never fix it on your competitors. If you focus on where they are, then you'll be where they are and they'll already have moved. And I think that that's what's been happening in the other direction, right, is that a lot of people don't focus on exactly where we are and we get to move.
Starting point is 00:22:15 And I think that the it almost gives us this alignment, this unification of the company. And I kind of described how we almost thought of research and deployment of separate things. And now we really want to integrate them. Like that to me is such a wonderful thing. And so I'd say that the world that we're in is one where I've never felt like we were, you know, you're never as good as they say you are. You're never as bad as they say you are. I think it's just been very steady. and that the core of the model production,
Starting point is 00:22:47 that is something where I actually feel extremely, extremely confident in our roadmap, the research investments we've been making. And I think on the product side, we have such great energy that's all coming together to deliver this to the world. You foreshadowed a couple times already that you have some good models on the way.
Starting point is 00:23:04 What is SPUD? The information said that you've finished pre-training SPUD and Sam Albin, the CEO at OpenAI, has told the staff that they should expect to have a very strong model in a few weeks. This was a few weeks ago. And the team believes it can really accelerate the economy and things are moving faster than many of us expected. So what's but? It's a good model.
Starting point is 00:23:31 But I think that it's really not about any one model. Okay. Right. The way that our development process works is you have pre-training. So you produce a new base model that then is the foundation that we build. further improvements on top of. And that that is always a huge effort across many people in the company. And that's where I've actually been spending most of my efforts over the past 18 months, has been really focused on our GPU infrastructure, on supporting the teams that do all of the
Starting point is 00:23:57 training frameworks to scale up at these big runs. But then there's a reinforcement learning process. So you take this AI that has learned lots of things about the world and it applies that knowledge. And then we do a post-training process where you really say, okay, now you know how to solve problems. You practiced it in all these different contexts. And then here's kind of the last mile of behavior and usability. So I think of Spud as a new base, as a new pre-trained, and that we have had this, I'd say it's like we have maybe two years worth of research that is coming to fruition in this model. It's going to be very exciting. And I think that the way that the world will experience it is just improved capabilities. And that for me, it's never about any one release, because
Starting point is 00:24:41 as soon as we have this one release, it'll be an early version of what we have coming. We'll do much more of each of these steps of the improvement process. And so I think that where we're going is it's almost just, we have this engine of progress that just move faster and faster, and the spot is just one step along the way. So what do you think it'll be able to do that today's models can't? So I think it's going to be able to solve both much harder problems. I think it will be much more nuanced. It'll understand instructions better.
Starting point is 00:25:12 It'll understand the context much better. That there's this thing called big model smell that people talk about, where it's just like there's something about like when these models are just actually just much smarter, much more capable, that they bend to you much more. And you feel it, right? When you ask a question and the AI doesn't quite get it, it's always so disappointing. Right? We have to like explain.
Starting point is 00:25:33 You're just like, you really should be able to figure this out. And so I would just think of it as in some ways, just, qualitatively, there will be, but quantitatively, lots of shifts, right? And qualitatively, there will just be new things where you would be frustrated before. You never use an AI for it. And I just use it without thinking very much. And I think that that is what we're going to see across the board. I'm super excited to see how it raises the ceiling, right? We've already seen these physics applications, things like that. And I think we will be able to just solve like way more open-ended problems, way longer time horizons. And then also very excited to see how it raises
Starting point is 00:26:07 the floor where just for anything you want to do, it's just so much more useful for you. It can be kind of tough for everyday users to really feel the change. Like there was a talk about a lot of buildup before GPT5 came out and then it came out. And actually the initial reaction was somewhat disappointment among the public. But then I think people realized that for certain tasks, it was really good. With these next series of models, do you expect that it'll really be felt sort of in the trenches in certain occupations, or do you think it will be a broadly tangible improvement for everyone? I think that it will be a similar story where you, when you release it, there will be people who will try it and be like, this is a night and day,
Starting point is 00:26:51 different than anything I've seen. And then there will be some applications where we weren't necessarily intelligence bottlenecked. And so if you have a model that's more intelligent, maybe you won't feel it right there. But I think over time that you will feel it, because, Because the fundamental thing that shifts is how much do you rely on the system? Like, if you think about the way we all interact with AI, is we have some mental model for what we think it can do. And that that mental model shifts actually fairly slowly, right? As you get more experience, it does something magical for you.
Starting point is 00:27:19 You're like, oh, wow, it can do that. I never imagine that. And we see this, for example, in applications like access to health information, right? that we see people who, you know, I have a friend who used Chachybt to understand different treatments for his cancer and that he was told by doctors that he was terminal, that there was nothing they could do for him. He used Chachachad he to actually research a bunch of different ideas and he was able to get treatment that way. And that's something where you need to have some level of belief that the AI is going to be helpful in that application for you to really put in the effort
Starting point is 00:27:55 to get something out of the machine. And I think what we're going to see is, you know, is that for any application like that, it's going to become so much more evident to everyone that the AI can help you. And so I think it's a little bit of the technology getting better, but it's also our understanding of the technology shifting and catching up to that. And you'll be relying on it more inside Open AI. You have an automated AI researcher in the works. It's supposed to come out this fall. What is that? So the direction of travel right now. We are in this early phase of takeoff of this technology. What does takeoff me? Takeoff is as the AI gets better and better on this exponential. And in part because we can use the
Starting point is 00:28:40 AI to make the AI better. So our development process speeds up. But I also think when I think of takeoff, it's also about real world impact. And in some ways we've been, you know, every technology is an S curve. Or if you zoom out some of S curves that end up being an exponential. And I think that that's what we're encountering right now. So it's the technology development is moving with increasing speed. And it's this engine that's picking up momentum. But it's also in the world, there's all of these tailwinds because there's chip developers that are getting more resourcing into their programs. There's this economy of people who are building on top of it, trying to figure out how it fits into every different application.
Starting point is 00:29:25 And all of that energy is just accumulating more. and more into this takeoff phase of the AI becoming just a kind of side show to being the main driver of economic growth. And I think that that is something that it's not just about what we're doing in these walls. It's about how the whole world, the whole economy comes together in order to push forward this technology and its usefulness together. And the researcher will then, what will it do exactly? Well, so the researcher will be a moment where the AI, which we're building, that right now it's doing a larger percentage of tasks, that we should be able to let it run autonomously and that I think there's a lot of thought that goes into what that means and that it
Starting point is 00:30:05 doesn't necessarily mean that we just let it off on its own and then come back later and see if it did something good. I think that we are going to be very involved in managing it, right? Just like right now if you have a junior researcher, if you leave them on their own too long, they're probably going to go down a path that's not very useful. But if you have a senior researcher or someone who has a vision, they don't even necessarily need to know the mechanical skills, they will be able to provide feedback,
Starting point is 00:30:29 review the plots that this person's, you know, the intern's producing, and to provide direction in terms of the vision of what is it that I want you to accomplish. And so I think of this as a system that we're going to build that will massively accelerate our ability to produce models to make new research breakthroughs happen to be able to make these models more useful
Starting point is 00:30:48 and usable in the real world. Right. And to do that at increasing. So sorry. What's it going to do? Are you going to say go find AGI and it will just try to create? I think the way I think of it is something like that to first order. And at a practical level, I think I would view it as is taking the full end to end of what one of our research scientists does and be able to do that in Silicon. Another way to think about takeoff is their progress in AI goes from incremental to gathering momentum and then sort of this unstoppable
Starting point is 00:31:21 March 2 and intelligence that's smarter than humans. Do you worry that there's just as there's possibilities for things to go right on that front, there's also possibilities for that process to go wrong? I mean, I think that that's absolutely yes. I think that the way to get the benefits of this technology is also to really think about the risks. And if you look at how we've approached technology development from a technical perspective, we invest a lot in safety, security. Good example of this is prompt injections, right? If you're going to have an AI that is very smart, very capable, hooked up to lots of tools, you want to make
Starting point is 00:32:05 sure that it can't be subverted by someone giving it a weird instruction. And that's something that we've invested in quite a lot. And I think have really incredible results, have an incredible team working on. And it's interesting to think about some of these problems where you can make analogies to humans. Like humans are also susceptible to fishing attacks, to being deceived in different ways, to not really understanding the full context of what they're working on. And we bring those analogies into our development process and think about this. Whenever we release a model, develop a model, how do we ensure that it's going to be aligned with people and be able to actually be helpful? And that is something that we care quite a lot about.
Starting point is 00:32:43 I think that there are bigger questions about the world, the economy, how does everything change? How does everyone benefit from this technology? They're not purely technical, not purely something that Open Eye on our own will be able to solve. But yes, I think quite a lot about not just pushing forward the technology, but also really about how do we ensure that we have the positive impact that is its potential. The worry, though, is that this is a race. And what's being done within these walls for at Open Eye headquarters is also being copied by, many of the open source players, which have much less boundaries and barriers and protection on the safety side of things. I think you said this once, that it takes, you know, people getting
Starting point is 00:33:24 a lot of things right to be creative and sort of one person with bad intent to be destructive. And that's sort of where the concern lies, for me at least, is just, when this is, it's clearly a race, it's going fast. Many of your counterparts have said if everybody agrees to stop it will stop it and it doesn't seem like it's going to slow at all. So is the reward worth the risk, basically? I think that the reward is worth the risk, but I think that that is too coarse-grained of an answer in some sense. The way that I think about it is that we've asked from the beginning of Open AI,
Starting point is 00:34:03 what does a great future look like? How can this technology really be something that uplifts everyone? And you can think of there almost being two different angles. One is the centralization view of saying that, well, the way to make this technology safe is that you have only one actor building it. And so then you don't have any pressures, right? You can really think about getting it right and then figure out how to roll it out to everyone when it's ready, those kinds of things. That's a pretty tough pill in some ways. And I think that there's a lot of properties that you can instead think about approaching differently, which we refer to as resilience.
Starting point is 00:34:40 to think of it as this open system where there's lots of players who are developing the technology. But it's not just about the technology. It's about building societal infrastructure that helps this technology really go well. And if you think about how electricity has developed, that's something where lots of people produce it, that it actually has dangers and risks. But we also build our safety infrastructure in a diversity of different ways around safety standards for electricity, around different ways of harnessing it, about how you scale it, that there's regulations when you're at these massive scales that lots of people are able to use it in a democratized fashion.
Starting point is 00:35:15 There's inspectors. Like there's a whole system that's been built around the needs of that technology, the proclivities of that specific technology. And I think that one thing that we have really, I think, seen with AI is that it is something where we need this broad conversation. We need lots of people to be aware if this technology is going to come and change everything for everyone. people need to participate in that.
Starting point is 00:35:41 It can't be something that's done off in secret by just one sort of centralized group. And so this has been, to me, a very core question to how this technology should play out in something we really believe in is this resilience ecosystem that should emerge around the development of this technology. So you said we're in takeoff in the middle of the takeoff process and we, I guess, all of humanity are experiencing this. Nvidia's CEO at Jensen Wong said recently that he believes AGI has been achieved. Do you agree? I think that AGI has a different definition to many people. And I think that there are many people who would say that what we have right now is AGI.
Starting point is 00:36:23 I think you can debate it. But I think that maybe the thing that's interesting is that AGI, like the technology we have right now is very jagged. Like it is absolutely superhuman at many tasks. when it comes to writing code, those kinds of things. AI can just do it, right? And it really removes a lot of the friction to creating things. But there's some very basic tasks that a human can do that our AI still struggle with. And so it's almost to say that where do you draw the cut line?
Starting point is 00:36:54 It's a little bit more of a vibe than a feeling than it is a, than it is science at the moment. And so I think for myself, we're definitely going through that moment. And if you were to show me five years ago, the systems we have today, I did, oh, yeah, that's what we're talking about. But it's just different. It's so different from anything we ever pictured. And so I think we need to adjust our mental models appropriately. So you're not there yet? I think that I'd say I'm basically like 70, 80 percent there.
Starting point is 00:37:23 So I think we're quite close. I think it's extremely clear that we are going to have AGI within the next couple of years in a way that is still going to be jagged, but that, the floor of task will just be almost for any intellectual task of how you use your computer. The AI will be able to do that. And I think that, yeah, right now I have to give a little bit of an uncertain answer. Okay. Because there's some, it's almost like a like an uncertainty principle kind of thing. You can, you can debate it.
Starting point is 00:37:53 For my own personal definition, I think we're almost there. And with maybe a little bit more, we will absolutely be. Okay. Well, we're, we've got to go to a break. But as long as we're on the way to the. break, I want to let folks watching at home know that you and I are going to be talking again June 18th here at San Francisco at SF Jazz. So I will put some information if you want to come join that conversation in the show notes and I do hope you sign up. All right. We'll be back right
Starting point is 00:38:18 after this. I've interviewed a lot of great tech founders on this show. And one surprisingly universal challenge comes up again and again, finding the right domain name. It's something I ran into myself from launching big technology. The names you want are often taken. And it's tempting just to settle and move on. But the founders I respect most don't settle on fundamentals, and your name is one of them. It should immediately signal what you actually build. That's what I appreciate about dot-tech domain names.
Starting point is 00:38:45 It just makes sense. It tells the world your customers, your investors, anyone Googling you that you're building in technology, clean, direct, no qualifiers. And I'm seeing more serious startups lean into it. Nothing.com, Onex.com, aurora.com, CES.com, ultra.com, dot tech, Alice. Tech, neon.com, Blaze. Tech, Phi. Tech, and so many more. If you're building
Starting point is 00:39:06 something tech first, don't settle. Secure your dot-tech domain from any registrar of your choice and make your positioning obvious from day one. This episode is brought to you by FedEx. These days, the power move isn't having a big metallic credit card to drop on the check at a corporate launch. The real power move is leveling up your business with FedEx intelligence. and accessing one of the biggest data networks powered by one of the biggest delivery networks. Level up your business with FedEx, the new power move. And we're back here on Big Technology Podcast with OpenAI co-founder and president, Greg Brackman. Greg, let me just ask you, what happened in December 2025?
Starting point is 00:39:54 Because it seems like it was an inflection point where all this idea of letting the machine code for hours uninterrupted, went from theory to a moment where everyone said, I think I can trust this to keep going for a while. So what exactly happened? So new model races really went from the AI being able to do like 20% of your tasks, like 80%. And that was this massive shift because I went from being kind of a, yeah, it's a nice thing to do to you absolutely need to retool your workflow around these AIs. And for myself, I've very much had this moment where I have a test prompt that I've been using for years of build a website for me. I'd actually built this website back when I was learning to code took me months. It used to be over the course of 25 that, you know, take like four hours, a bunch of different prompts to get it right.
Starting point is 00:40:46 In December, one shot. Just asked the AI one time and it produced it. It did a great job. So how did those models make the leap? Well, a lot of it is about the better. base models. That one thing about Open AI is that we've been working on improving our pre-training technology for quite some time.
Starting point is 00:41:06 And that in that moment, we got to see a little taste of what is going to be coming for the rest of this year. But it's also really about not any one thing. It's about we're constantly pushing on every single axis of innovation. And the thing that's very interesting about these models is in some ways you get these leaps. And in some ways, it's all continuous. It didn't go from 0% to 80%. From 20% to 80%.
Starting point is 00:41:32 And so in some ways, it just got better. And I think that we've actually seen this improvement continue with every single point release that we've had. Like between 5-2 and 53, one of my engineers I work with very closely went from, he couldn't get it to do the low-level hardcore systems engineering he does to it absolutely being created it. He gives it a design doc. It actually implements it, adds metrics, observability, runs the profiler.
Starting point is 00:41:57 improves it to the point that it's the exact thing that he was hoping to produce. And so I think that the way to think about is it's almost a sort of slowly, slowly, slowly, all at once. But it is all indicated by what's kind of working right now, certainly within a year, sometimes much sooner, is going to be incredibly reliable. And it surprised you because I heard you talking on an interview not long ago about how Codex, right, this Auton's coder, was just for software developers. And earlier this conversation, you said, actually, everyone can use this stuff.
Starting point is 00:42:31 Yes. What led to the fact that you sort of changed your perspective on? Well, I think I'd been focusing on Codex and it's got the code in it, right, as really being for coders. And thinking about people within Open AI, because many of us are software engineers, building for ourselves, it's very natural to think that way. But as this technology has been progressing, we've started to realize that the underwriting, that the under. underlying technology we produced is mostly not about code at all. It's mostly about solving problems. It's mostly about being able to manage context and harnesses and think about how an AI should integrate and do work. And that's something that becomes both, even for code, suddenly
Starting point is 00:43:15 anyone can have access because you can manage something that's going to go do work. If you have a vision, you have something you want to accomplish. You can describe your intent. The AI can execute, can get that done. But then it also starts to be like, why am I just focused on code in? Like there's so much just very mechanical skill associated with Excel spreadsheets with presentations. And if the AI has the context, it has the raw intelligence now to be able to do these things at a great level. So if we can just make it more accessible, suddenly it goes from codex is for coders to codex is for everyone. And soon after this moment where we saw all this improvement, there was another so phenomenon in Silicon Valley.
Starting point is 00:43:54 which was OpenClaw, right? Which is, and maybe it's the broader tech community, where people started to trust it in ways that you suggested, giving an AI bot access to their desktop or getting a Mac Mini and giving it access to, like, their mail and calendar there and their files, and then just kind of letting it go run their life. And then OpenAI brought the founder of OpenClaugh in-house. So you talked a little bit more about the AI as something that will help
Starting point is 00:44:24 run your life for you in a way. Is that the vision by bringing the open cloud team in house? Well, I say that the core thing about this technology is that figuring out how it's useful, how people want to use it. What is the vision for agents? How is it going to slot in people's lives? That is a hard problem. And that one thing I've seen across many generations of this technology is the people
Starting point is 00:44:48 who really lean in, who have a lot of curiosity, who have a lot of vision, that's a real skill and that's an emerging, very valuable skill. in this new economy that is emerging. And Peter, who is the open-claw founder, is I think someone who's got incredible vision, incredible creativity. And so to some extent, it's about the specific technology, but to some extent, it's not at all.
Starting point is 00:45:07 It's really about the, how do we take these capabilities and figure out how those sliding people's lives. And so I think as a technologist, it's very exciting, but as someone who is focused on bringing utility to people, that's something that we are doubling down on an invest in quite a lot. You had a pretty interesting quote about this recently, talking about getting these autonomous AI agents to work on your behalf.
Starting point is 00:45:31 You said, when you do it, you become this CEO of a fleet of hundreds of thousands of agents that are completing your objectives, your goals, your vision, and you're not in the wheat on exactly how different things are solved. And in some ways, this new way of work can make you feel like you're losing your pulse on the problem. Is that good? I think that there's a mixed bag. And so I think that what we need to do is acknowledge the strengths of what these tools can deliver and mitigate the weaknesses. And so giving people leverage, agency, making it so that if you have a vision, something you want to accomplish, that you can have a fleet of agents that will go do it for you. But if you think about how the world works, that at the end of the day, there's an accountable party. right? If you're trying to build a website and your agent messes it up and your user is affected,
Starting point is 00:46:25 it's not really the agent's fault. It's your fault. And so you need to care. And I think that for people to use these tools right, you need to realize that human agency, human accountability. That's a core part of the system, how the human uses the AI. That's something that is deeply fundamental. And so I think the important thing is that as a user of these agents, and we do this within Open AI, you cannot abdicate responsibility. You cannot just say, ah, the AI is just going to do stuff. Of course, but you said, feel you're losing your pulse on the problem itself. That's different than accountability layer to it.
Starting point is 00:46:57 Well, to me, they actually are linked together. Because the point is that if you're a CEO and you're too far from the details, right? If you're running this company, you're running this team and that you've lost your finger on the pulse, that is something that's not going to lead to great results. And so the point that I was trying to make there is that not that it's a desirable thing for humans to not have to know about what's going on. There are some details that because you can trust, like if you are working with a team like a general contractor to build a house, there's a bunch of details there that you probably don't need to worry about because you can trust that they'll be taken care of. But at the end of the day, if there are details that are wrong, you should care about it. You should be aware.
Starting point is 00:47:38 And so this is, I think, an important nuance of you cannot. just blindly say, I'm okay with losing my finger on the pulse, that we need to lean in and say, I need to keep it there to really understand the strengths and weaknesses. And that as you disengage from some of these details, these lower level mechanical things, you should do it because you have built trust with a system that it will do a good job. One last question about the models. You've talked a little bit about the evolutions that the models have gone through, pre-training and fine-tuning, reinforcement learning that gets it more equipped to solve problems step by step and go out on the internet and do things. And now we're in this moment where the models have
Starting point is 00:48:22 learned through that process to use tools and correct me if I'm wrong on this one. What is next in that progression? Well, I think that the world that we're in is one of this increasing capability and depth of what the machine can do. And some of this is about we've got this tool use, but now we also need to actually build really great tools. You think about something like computer use, and AI that can actually use a desktop. Then it is really able to do anything that you can do.
Starting point is 00:48:53 But we also have to build a little bit for the machine to think about how does in the enterprise credentialing work, how do audit trails and observability work? There's a lot of technology to build to catch up with what the core model capability is. And I think the overall direction of travel includes things like a really great speech interface. So you can just talk to your computer naturally and just as natural as this conversation. And it understands you.
Starting point is 00:49:17 It does what you need. It has good advice. It's able to surface that I've been working on this thing. I have a problem. You wake up in the morning. It says, here's your daily report of how much progress your agents paid overnight. Maybe it's running a business for you, which I think is going to be a huge application of this technology. The democratization of entrepreneurship is absolutely coming.
Starting point is 00:49:35 And I'll say, here's these problems. There's this customer that's upset. you know, they want to talk to a real human, like you should go talk to them. Like all of that's going to happen. And then I think that the raising of the ceiling of ambition of challenges humanity can solve, that is also a next step for this technology. And we're seeing the leading edges of it. The thing that I am just very excited to see is almost, if you remember, AlphaGo Move 37, right,
Starting point is 00:50:00 this move that no human ever would have come up with. It's creative. Creative. And it changed humanity's understanding of the game. That is going to happen. in every single domain. It will happen in science, in math, in physics, in chemistry. It's going to happen in material science. It's going to happen in biology. It's going to happen in health care, drug discovery. But it may also even happen in literature, in poetry, in a bunch of other fields.
Starting point is 00:50:23 They're going to unlock human creative understanding and ideation in ways we can't imagine right now. Why do you think that hasn't happened yet, given how strong you say the models are? Well, I think that there is an overhang of what the models are capable of and how people are using them. So the application. Well, yeah, it's almost our understanding of what is in these models. That's something that I think is still emerging. So I think that even with no further progress, there's still a massive shift that will happen. The economy being powered by compute and AI is still going to happen.
Starting point is 00:50:56 But I think there's also something where what we've gotten very good at is training models on tasks that could be measured. And so what we started with was math problems, programming problems, where you have a perfect verifier. And a lot of what the progress has been in bringing us to more open-ended problems has been expanding the space of what can be created. And the AI itself can really help with that. If the AI is smart and understands things, you give it a rubric for how well a task goes. And of course, for things like creative writing, like, is this a good poem? That's a much harder thing to grade. and so we've had less ability to teach the AI
Starting point is 00:51:31 and for it to experience and try things out. But all of that is changing in something that we have a lot of sight for. You know, it's interesting. Reflecting on that, Peter Thiel has mentioned, pretty sure this is what he said, that if you're a math person, you're probably in deeper trouble in terms of these models coming from what you do
Starting point is 00:51:48 than if you're a awards person. And you are a member of Math Club back in the day. Are you not concerned about that? Well, I think that it's much easier to see what we lose than what we gain, right? Because we have a deep understanding of, I used to do things this way, I used to do this math competition,
Starting point is 00:52:05 now that AI can do the math competition. But it was never really about the math competition. Right, right? That's not really the thing that drives humanity. And if you think about the way that we do work right now, there's a box, something tight behind a box, we weren't to do that 100 years ago. That's not natural.
Starting point is 00:52:21 That's not this digital world that we all got kind of sucked into. That's not really what being humans about. being humans about being here, being present, connecting with other humans. And I think that where we're going to see is that AI is going to free up so much time to increase human connection, to build more bonds across people. And that's something I'm extremely excited about. Okay. And then as we shift, well, as you shift really to these more agentic use cases,
Starting point is 00:52:51 there's been discussion about whether the bigger training runs really need to happen. and, you know, especially if you like get the model good enough, then you could sort of let it go out in the world and then you can effectively get much of the uplift in areas that aren't the pre-training, which is what these big data centers are needed for. So you work to, you work on scaling here. Lead that process. What do you think about that argument? Well, I think it misses something very important for how the technology development goes. Because it is absolutely the case that every single step of the model production pipeline multiplies. And so you want to improve all of them. And the thing that we see is we prove the pre-training. It makes all the other steps much easier.
Starting point is 00:53:41 And it makes sense because it's a model is able to learn faster. It's a model that is because it already is like more capable to start when it's trying out different ideas and learning from its own mistakes. That process just is faster. It needs to make fewer mistakes. And so I think that the big shift has been from thinking of it as just, you're just training this cerebral system on its own and just make it bigger and bigger to it's also about trying things out. It's also about understanding how people are using it in the real world and connecting
Starting point is 00:54:11 that back into your training. But it doesn't remove the value and the importance of continuing that research. And the thing that I think has also shifted is we used to really just focus on the raw pre-training capability, but not think as much about the inference ability. And that's been a big change over the past 24 months to realize that it's a balance between you can have this model that has all those great properties in the base. But then you really need it to be able to be inferensible because you need to reinforcement learning. You need to serve it to the world. And that that means that you don't necessarily go as big as you possibly could because you also really think about there's going to be all this downstream use.
Starting point is 00:54:49 And you really want the thing that has the best intelligence times that cost and, and to optimize those two things together. Do you still need the Nvidia GPU if things move mostly to inference? We absolutely do. Why? Well, because there's multiple reasons. But one is that even as the balance of how much inference versus training changes, that you cannot get massive scale training through any other way
Starting point is 00:55:20 besides this concentration of compute on one problem. And so I think that the thing that I think will happen is there's some amount of the deployment footprint goes up quite a lot, but that sometimes there will be you have a particular mass of pre-training run and you really want to concentrate a bunch of compute in there. I also think that the Nvidia team is just incredible and does really, really amazing work. And so, yeah, we partner very closely with them. Isn't there going to be a time where people just say we've pre-trained enough, the models are smart enough? I think that that's a little bit like once humanity has solved all problems. in front of us, then maybe we can say that. Right.
Starting point is 00:55:56 But I think that the ceiling of what we want to accomplish. Like, I think that there's just so much ambition that maybe we've, over the past 50 years or so, just sort of backed off from, right? You think about, even problems that seem very clear, like, can we have health care for everyone that is not just, that's actually preventative, not just targeting when people have a problem, but really think about the lifestyle and how to really help people early detect potential diseases before they happen. Like, that's a problem that I think we can actually achieve through more intelligent
Starting point is 00:56:28 models. And there's probably some level where you can totally solve that problem. And then you say, well, do I need a model that's two times smarter? But there's other problems that are going to demand that. Let's talk about the math about building these data centers. It raised $110 billion earlier this year. What's the math behind that? Does that money go right into data centers?
Starting point is 00:56:51 data centers, how do you think about how you're going to return that money to investors? Talk about those calculations. Yeah, so I think it's as simple as the massive expense we see in front of us is compute. But you can think of compute not as a cost center, but as a revenue center. Think of it a little bit like hiring salespeople, right? How many salespeople do you want to hire? As long as you can sell your product, as long as you have a scalable way to sell that product, then the more salespeople you have, the more revenue you will make.
Starting point is 00:57:18 And I think the world that we're in is we have continued. continually found we cannot build compute fast enough to keep up with demand. And I see this very concretely, right? Right now we have to think very painful decisions about what we're launching, about where the compute goes, and that I think we're going to experience this more broadly within the economy. As we shift to this AI powered economy, the question will be, what problems are going to get that massive compute?
Starting point is 00:57:42 How do you scale so everyone can have a personal agent running for them? How can everyone be using systems like codex? Like there just isn't enough compute in the world to be able to, do that. And so we're trying to get ahead of that problem. But it is a new category, right? So you're doing it with real confidence. I mean, in sums of money, the world has never seen put towards a project like this. When you're building a new category, how do you do it with certainty that it's going to work out? Well, I think there's several components that go into it. So the first is there is historical precedent at this point. From the moment we launched ChadGBT,
Starting point is 00:58:16 I remember talking with my team having this exact conversation, where they said, how much compute should we buy? I said, all of it. I said, no, no, no, really, how much compute should we buy? I said, no matter how much we try to build, I know we're not going to be able to keep up with the demand. And that has been true, and that has been true every year since then. And the challenge is that these compute purchases, you have to lock them in 18 months, sometimes 24 months, sometimes longer in advance of them actually being delivered, which means you really need to project forward. And I think that the world that we're moving towards is one where to date most of our revenue has come from consumer subscriptions. And that will always be very important. There's other
Starting point is 00:58:56 revenue streams we have emerging as well. But the opportunity that clearly is emerging now is knowledge work. And we're seeing this very concretely across every single enterprise, realizing this technology, it actually really works. And to be competitive, they need to adopt it. And you can see this organic energy of all these software engineers using it. And then we're starting to see the percolation of people using it for various knowledge work inside of the enterprises and the willingness to pay and the revenue growth that you're seeing in this industry is very clear, right? It's very clearly happening right now and just project that forward. And we look, like one thing we get to see that maybe the world doesn't is the line of sight to how these
Starting point is 00:59:38 models will improve. And all of this together says that the economy, which is a massive thing, right? the economy is just so large, it's almost incomprehensible. All of the growth, like the highest order bit on how this economy grows from here will be about AI, how well you can leverage AI and the computational power you have available to power it. You said consumer subscriptions are your biggest source of revenue right now. Is the projection that will flip and that business will be the biggest source? I think, well, I think that it is very, like very clear how quickly the enterprise. It's not just enterprise because I think,
Starting point is 01:00:14 enterprise is also changing what it means. Right. So really people using it for productive knowledge, work for those kinds of things. And I think that as we think about pricing, one thing, if you look at how Codex works right now is if you have a chatyp consumer subscription, you can use codex. And so I think it's not going to be as well defined as this category, that category. I think it will really be about you as a user are going to have just, again, like your laptop, this portal to the digital world.
Starting point is 01:00:41 And that is what the revenue fundamentally will, will, come from. Dario said, I think about you. There are some players who are yoloing, who pull the wrist dial too far, and I'm very concerned. I think he's referencing your infrastructure bets there. What do you think about that? Well, I just disagree. I think we've been very thoughtful and very much seeing what is coming. And I think that we will see even this year how everyone who is participating is going to be compute strapped. And I think we have been the most forward in realizing that this is coming and building an anticipation of how this technology is playing out.
Starting point is 01:01:21 And I think that what we have seen is that for other players, that they kind of realize that probably late last year and started scrambling to see what compute is available. And there really wasn't any. And so I think that even as people, it's very easy to make statements like that. But I think that everyone has kind of realized that this technology, it's working. It's here. It's real. Software engineering is just the first example of it.
Starting point is 01:01:44 And that we are fundamentally limited by the computational power available. And you said that also that if he's off with his prediction by a little bit, then the company could potentially go bankrupt. Is that the same case for you? I think that, look, I think that there's actually more degrees of off ramp here. If you start to worry about the downside case, which I think is a very reasonable question. Right. But to some extent, what I think the bet is on isn't about. any one company. It's really about the sector. It's really about do you believe this technology
Starting point is 01:02:14 can be produced and can deliver this massive amount of value that we see coming. And again, I'll point to proof points, right? That software engineering, it's just like the degree to which, if you're not a software engineer, you haven't tried codefs, the degree of which is different. Like, it's just hard to describe. And I think that people will experience it very quickly. Like, you know, six months ago, I think that for us, we saw this internally, but there were less proof points out there. Now there's proof points out there. Six months from now, I think that everyone will feel it. And I think that we will all feel the pain of there's an awesome model and there's just no availability because there's not enough compute. Yeah. But as we were looking at our predictions for
Starting point is 01:02:55 26 on this show, we had a conversation towards the end of the year last year where Ron John Roy was on with us was like, 2026 is going to be the year where everybody uses agents. And I said, Yeah, well, I'll believe that when I see it and I'm using the agents. So here we are. Here we go. What do you use it for? I use it to build tools internally for the people who I work with to sort of get on the same page about when videos are coming and what the thumbnails need to look like. And I'm also integrating things from YouTube.
Starting point is 01:03:34 and so we can basically then rank how the videos are doing based off of thumbnail and like a custom built piece of software that I never would have paid for. And that's one of the things that I think is interesting about this moment, I guess, is that software, it scales used by the masses. But when you use it, therefore, there's going to be so many things that are not made for you. And maybe what this does is it allows us to interact with software in a way that's much more natural. I think that is the key. And again, I just think a lot about the fact that the way we've built computers has really pulled us in into this digital world. You think about how much time you just spend scrolling through your phone.
Starting point is 01:04:16 Yep. Right? The amount of time that you spend clicking different buttons and trying to like connect this thing to that thing. Like why? Why do you have to do that? Instead, the AI being about bringing the machine closer to you, personalizing to you, understanding what you're trying to accomplish.
Starting point is 01:04:30 And that we have all of this. pop culture of just computers you can talk to and that they go and do stuff for you. And it's starting to become real. It's starting to become the thing that you can actually do. And I think that the amazingness of that is something where you just have to try it to really understand. So I definitely think it's a very special moment we're in. Yeah. Then I want to know why is AI so unpopular with the public?
Starting point is 01:04:56 Hugo, for instance, says three times as many Americans expect the effects of AI on society to be negative. as they expect it to be positive. I mean, why do you think the reasoning is behind that? And are you concerned about AI's brand? Well, I think that there is something that we need to show the country of why AI is good for them, not just for the broad economy, for growing the GDP and things like that, but how does it help them in their lives? And I think there are actually many very concrete stories that I hear every day. For example, there's a family where their child was having some headache, some medical issues, was denied an MRI.
Starting point is 01:05:39 And they researched the symptoms with ChachyBT, and realized that they could make an argument to insurance to actually get the MRI. They did that. Turns out he had a brain tumor. They were able to save his life because they used Chachyb T to get access to the right information. And that's just one story. There are so many more just like that of people who have been deeply, profoundly, their lives have been improved or saved, through their use of this technology and through partnering with the technology
Starting point is 01:06:03 in a real way. And so that is a story I don't think gets out there. I think that this is happening in so many people's lives, but somehow the story is not yet told. And one thing I notice is that there's, you know, certainly a lot of pop culture from, you know, the 90s, from the historical context that we have
Starting point is 01:06:21 that's very negative on AI that worries about what could go wrong. But when people actually use AI, that they find utility in it, they find value in it. And so I think that, I am definitely very concerned about us not having successfully help people understand why this technology wave is something that will improve their lives, that will help improve human connection. And that is something that's a big focus in my mind.
Starting point is 01:06:43 And if you think about the opportunity here and why AI is so important, I think this will be the source of economic and national security going forward. I think it's going to be about national competitiveness and that there are other countries like China where AI polls in the exact opposite direction. And so I, yes, I think it's very, very important that we acknowledge that and we really understand how to get the benefits for everyone. But we also are in a time that's like politically unstable. There's concerns about, about work. People are, every time I speak with someone about AI, they're like, how long do I have left to work in my job? And then when I think about the data centers, I mean, the polling is even worse than,
Starting point is 01:07:25 you know, AI in general. This is for Pew. far more data, far more people say data centers are mostly bad, good for the environment, home energy costs, and quality of life of those nearby. So we are at this moment where good jobs are tough to come by, and people see these data centers come into their communities. And they say, not good for the environment, home energy, and quality of life. Are they wrong? Well, I think there's definitely a lot of misinformation about data centers.
Starting point is 01:07:52 Good example is water usage. that if you actually look at our Abilene facility, which is one of, if not the biggest supercomputers in the world, the amount of water it uses is the same as a household over the course of a year. So it's really negligible water use. And yet that there's a lot of misinformation that these data centers consume a lot. And so similar on power, we have a commitment that we are going to pay own way to not drive up energy prices for people. That's something that as an industry now,
Starting point is 01:08:29 that people are making these commitments because it is very important that we improve local communities. And when we build data centers, we really try to go into those local communities, understand what's happening on the ground, how we can help. There's tax revenue that are associated with these data centers. And I think that there's jobs that they create. There's a lot of benefits that come from them. And so I think that's one thing where it is about how we show up. And that's a responsibility that we take very seriously. Okay. But also like if their power costs are not going to go up, you have to bring in power,
Starting point is 01:09:03 which means potentially more pollution. Is that not a concern? Well, I think that there's much more nuance in terms of not driving up energy costs. If you look at how the grid works today, that there's actually a lot of just stranded power, power that is there that is not being utilized and that you need to upgrade the transmission systems. And again, that's something where putting that on us rather than putting that on the rate payers, is very important.
Starting point is 01:09:25 Right. There's lots of places where they have queen power that you can, it's actually being underutilized and just being kind of thrown away. And so there's a lot of benefit that comes from having real reasons for the grid, which is aging and obsolete in many places to upgrade. And that's something actually has real benefits of the community. Like we've seen, for example, in North Dakota that people's rates have gone down because a data center has shown up and has helped with improving the utilities for everyone.
Starting point is 01:09:54 One last question on the politics. You gave $25 million to MAGA, Inc., which is a PAC that's pro-Trom. You spoke with Wired about it, and you said, anything I can do to support this technology benefiting everyone is a thing I will do. And, you know, if that makes you one issue voter or one issue, you know, political support, I'll share the one thing I always wonder when it comes to just this, the one issue. camp is ultimately doesn't a stronger country make you, you know, sort of make your goals much more feasible, even if a candidate isn't, you know, fully in support of what you're doing. Like, shouldn't a stronger country, no matter what, be the North Star of any political activity? And if it's the, if that's the case, you know, then is that part of the donation?
Starting point is 01:10:48 So the way I look at this, so my wife and I made that donation. We've donated to bipartisan super PACs as well. I think this technology is one where it's coming quickly within the next couple of years, really going to transform everything, going to be the underpinning of the economy. And it's not popular. And we really want to support politicians who really lean into this technology, really engage with it. And so I think that certainly this technology is about up, uplifting us as a country. I, you know, I am a one-issue donor. This is something where I feel
Starting point is 01:11:24 I have a unique contribution to make. But it's really about just expressing support for this technology is something that we should be leaning into as a country. What would you tell someone who's scared of AI? I mean, if you have a moment here where you can speak directly to them, they might think it's going to take my job, it's going to pollute my community, it will change the world too fast. What's your message to that? Number one thing is try the tools because is to really understand what it can do for you is something that only by experiencing the AI as it exists now, will that really hit home? And we see so much opportunity and potential and empowerment coming from this technology today. You talked a little bit about what you can build now.
Starting point is 01:12:09 Right. People have never built a website before can build a website. If you want to start a small business and you're thinking about all the back-end processing and how to actually manage it, all those things, the AI can help you with that right now. And so I think that in your life, thinking about how it can help you with your health, how it can help your loved ones, how it can help you make money, how it can help you save money, these are all going to be on the table. And I think it is much easier to see what's going to change than it is to see what you're going to gain. But I think that it's worth giving it a fair shot of really trying to understand both sides of the equation. That's the one thing that doesn't get talked about in the polling data, by the way.
Starting point is 01:12:51 It's the people that have seen it used but haven't tried it themselves or the people that have never tried AI are much more negative. And then you get to the power users or even people that use it casually and they're generally pretty positive about the technology. For myself, we've been thinking about this technology for a long time. What I see playing out in front of us is more amazing, more beneficial and going to really have a much more positive impact than we ever imagined. So last one for you, how would you advise someone to prepare themselves for the future? And it has to be more than just get in the tools. I mean, I have friends who come to me and they say, I don't know what's going to happen with my job or the world. And I just need to know what to do with this.
Starting point is 01:13:33 I do think that the number one thing is about understanding the technology. One thing we've seen is the people who get the most out of the technology. And you have to approach it with a curiosity, trying to really try it in your workflows, those really be able to get over that initial hump of you have a blank box. What do I do with a blank box, right, to really develop this sense of agency, this sense of I can be the manager. I can set the direction. I can delegate. I can provide oversight and to really develop that skill because that is something that's going to be fundamental is we're building this technology for humans to help humans foster more human connection for humans to be able to spend more time
Starting point is 01:14:13 doing what they want. And so the question is, well, what do you want? And really trying to crystallize that and trying to realize that with the help of this technology is going to be the most important thing. Greg, thanks so much for coming on the show. Thank you for having me. All right. Thank you, everybody for listening and watching and we'll see you next time on Big Technology Podcast.

There aren't comments yet for this episode. Click on any sentence in the transcript to leave a comment.