TBPN Live - FULL INTERVIEW: Sam Altman Responds to Anthropic’s Attack Ads, Live on TBPN

Episode Date: February 5, 2026

This is our full interview with OpenAI CEO Sam Altman, recorded live on TBPN. We discuss Anthropic's Super Bowl ads, Codex 5.3, why managing AI agents is the next interface shift, and how chips, power, and compute bottlenecks will shape the future of AI. TBPN is a live tech talk show hosted by John Coogan and Jordi Hays, streaming weekdays from 11–2 PT on X and YouTube, with full episodes posted to podcast platforms immediately after. Described by The New York Times as “Silicon Valley’s newest obsession,” TBPN has recently featured Mark Zuckerberg, Sam Altman, Mark Cuban, and Satya Nadella.

Transcript
Starting point is 00:00:00 Well, we have our first guest of the show. Sam Altman, the CEO of OpenAI. He's in the Restream Waiting Room. Welcome to the show. Sam, how are you doing? Welcome back. Good. Thank you guys for having me back.
Starting point is 00:00:09 Thanks so much. Big day. Big day. Kick us off with, where should we start? Should we start with the model or Frontier? Can we start with the model, just because I'm so happy to be done with it? Yeah, absolutely. Break it down.
Starting point is 00:00:21 What did you launch? So we launched 5.3 Codex. It is, I think, the best coding model in the world. We took a lot of the feedback that people had about 5.2 and 5.2 Codex and put it into one model. It is much smarter at programming, but it's also way faster. You can interact with it mid-turn. I think it's got a much better personality. It's really good at computer use. So it feels like a very big step forward. It was funny: as we were deploying it this morning, a couple of, like, extremely expert users of these models noticed and said, man, something's really different with Codex. And I, like, caught it mid-deploy. So I think you can really feel it quickly.
Starting point is 00:00:56 That's great. Oh, you're saying people outside of OpenAI, just everyday users. Yeah, like, within an hour of us putting it out, before we, you know... Talk about interacting with it mid-turn. How does that work? Why is that important? What does that unlock? So people are starting to use these tools for very long pieces of work at one time, you know, multi-hour tasks.
Starting point is 00:01:15 And sometimes you don't specify it correctly, right? Sometimes something's not set up right, or something just screws up. They can do amazing things with no steering, but they can do much more amazing things if you steer them along the way. So this is one of the things that felt most new about this model. So talk about orchestration, how this fits into Frontier, because I imagine... One second. It's notable. Like, if you see a coworker making a mistake and you don't interrupt them,
Starting point is 00:01:40 that's rude. Right. It's deeply inefficient. It is incredible what these models can do without any feedback. Like if you think about a new coworker especially, you know, you train them and you give them a lot of feedback early on and they learn the job and you correct them and they kind of get practice. And the models, they will soon do that. But right now, they don't do that. So we just rely on either they get a right one shot or we collect them, correct them all along the way. Yeah. So, you know, I think there's a lot of people that are running multiple agents and multiple tabs. They're starting to think about orchestration. It feels like frontier is a piece of
Starting point is 00:02:14 that. But if you're interacting with a model that's running mid-turn, like how does the user experience change for developers with 5.3? And then what will it look like in the frontier world? I think we will be heading towards a workflow where a lot of people just feel like they're managing a team of agents. And they'll keep, as the agents get better, they'll keep operating at a higher and higher level of abstraction, which at least watching what's happening so far is a jump that people are going to make pretty well. The models are so good. Now there's such a capability overhang that building better tools to let people do that, the Codex app that we launched on Monday was a great step forward for will be very, very important.
Starting point is 00:03:00 But you will be managing very complex workflows. The agents will keep getting better. So you'll keep working at the maximum of your management bandwidth or cognitive ability to keep track of all the stuff. And the tools to make that easy to do will matter, I think more than intelligence for a little while because there's such an intelligence overhang already. What's the role of a forward deployed engineer today at OpenAI towards the end of the year? capability overhang that feels like raw meat for a for a forward deployed engineer. It's like the like they solve that problem.
Starting point is 00:03:33 Yeah. I mean, look, eventually the models will get so good that they'll help companies deploy themselves and the forward deployed engineers will again get to work at a higher level of abstraction. But for now, you go into a company that is not AI native and say, okay, you've said you want to, you know, they say they want to deploy AI. They really are not sure what to do. How do I hook this up to my system? do I need to fine tune a model on my code base? How do I think about orchestrating agents and using things from different companies?
Starting point is 00:04:03 Most of all, or at least what we hear most frequently, is how do I think about security of my data and how do I know that these like AI co-working agents are not going to go access a bunch of information and share it in ways they shouldn't or get, you know, a context exploit or something like that? So the forward deploy engineers take this incredible new technology and a platform like Frontier and say we will connect your company to an AI platform so that you can use all these agents and workflows and everything else you want.
Starting point is 00:04:30 How important are these metaphors or how temporary are them? I was very interested in reading about Gastown and you have these pole cats and it's this whole Mad Max world and that feels like maybe just a temporary aberration where you're setting up agents for specific tasks, but also that could be incredibly valuable in explaining to a large corporation
Starting point is 00:04:51 of how they're gonna integrate AI across the whole organization. Yeah, I suspect like everything else that's happening when an industry is moving so fast, all of this is somewhat temporary on like a long enough time scale. And you, as these models become more capable, these agents are operating on very long time horizons with the ability to just kind of figure it out and our trust in their robustness keeps going up, then maybe you don't need a lot of the abstractions we need today. You know, maybe you just like have a single AI bot that runs at your company and You can say hey, I'm gonna like launch this new product and it does everything an ambitious person would do But that's not where we are today. So today we have to yeah put in a little more to get the pieces put together. Yeah, how are you thinking about the the the meter benchmark for long task horizon? You're at the top of the charts at the same time. It feels like we might need a new chart if we're talking about agent swarms because
Starting point is 00:05:55 they'll be able to do things that go for weeks, but they will subdivide the work. There's some subdivision that happens within a reasoning model, but it doesn't truly parallelize, at least that I'm aware of. So what does it look like in a world where you go to a model, but now it's spinning up a whole bunch of different models underneath? I think there's two of the key insights of the whole fielder in this question. Number one, the implication that no chart in AI lasts more than a few years is right. And like this one, kind of, you know, we'll see how much longer it's really useful. Yeah.
Starting point is 00:06:27 The second is a lot of people thought that, okay, we're going to just need a super long task and a super long task horizon. So we need a super long context. And definitely what people have already seen with coding agents is by agents breaking up work, orchestrating it well, farming it off to sub agents, even with the current limitations of the technology, we can do something, which should not be surprising because it's similar of how people do things and get amazing amounts of work done. So that's been cool to watch.
Starting point is 00:06:56 I think that will keep going. A joke that some people at open eye make is that soon the chart that matters is just going to be GDP impact. And then the question is, what's the one that comes after that? But everything else, a lot of these proxy metrics, there's now so much economic value in what the models are doing. What do you think could come after? I have no idea.
Starting point is 00:07:17 Do you have an opinion? Happiness? I don't know. When you look back at some of your, when you look back at some of your blog posts from 10 years ago. your predictions were usually pretty on point. Yeah. Maybe it's hard to predict.
Starting point is 00:07:28 Thank you. Thank you. Specifically the merge. Like basically calling. I mean, there's a bunch that are good. Like right now we're getting, you know, thousands of messages in the chat about 4-0, and you predicted in 2016 that people could, would become, you know, very attached to a chat bot. Yeah.
Starting point is 00:07:47 I'm working on like a big prediction blog post for the next 10 years seems too far, but the next five. But because it's like, I'm sure a lot of it'll be wrong and it, you know, it's still fun to try. The, the sort of like relationships with chatbots, clearly that's something that we've got to worry about more and is no longer an abstract concept. Even the question of what comes after GDP, like one reason I think that's interesting is the way we measure GDP now could start going down, even though quality of life goes way up. And we don't have a lot of practice with things like that, but the massively deflationary. Isn't that just Europe? Not just Europe. We want the quality of life going up too. Yeah. Switching gears, what do you think about the work that the, the Neo Lab boom, the research efforts that are happening all over Silicon Valley. It feels like there's
Starting point is 00:08:47 there's an acknowledgement that there's research breakthroughs that need to happen and everyone's taking different shots at those. Do you think that those companies will just find a breakthrough and join a lab or launch their own products? I think it's fantastic, first of all. One of the meta things that we wanted to do when we started with Open AI is when we started Open AI is like there had been a period where the technology industry in Silicon Valley in particular was amazing at new research labs or just doing new research in industry in general. And then it kind of fell apart and there hadn't been a good one in a while. And part of what we hope to show, and this was not only us, like a lot of people were excited about new research labs, is that industry could do research again. So seeing that now
Starting point is 00:09:30 become fashionable and all of these new labs, I think it's totally awesome. Some will succeed, some will fail, some will go into some other effort. But having industry support research in, you know, startup style, I think it's wonderful. Over the next two years, would you expect to acquire more individual product companies or more research labs? Good question. I don't have a strong opinion there. I would bet, well, I'd say the very best ones will often look like a mixture of both.
Starting point is 00:10:06 Like the one that I have in mind right now is something that very much looks like a mixture of both. So maybe the shape of things to come is the really truly extraordinary product work will have more and more research component. it and it'll be kind of a more of a hybrid thing. Is data the new oil? Is there value? Yeah, we were joking a couple days ago. A bunch of data, but they don't understand AI. They don't know how to monetize it.
Starting point is 00:10:30 Well, and that phrase was effectively wasted a decade ago. Yeah. And so to say it now sounds really silly. Yeah. But it feels like it could be more true now than ever. Yeah. You know, certainly, yeah, man, they really did waste it a decade ago. I was just thinking of the kind of people that used to say that.
Starting point is 00:10:45 Yeah. Ted talks. You're not supposed to call them out. They're nice people over, Ted. You know, definitely like the sort of the magic relationship of this last eight years, whatever you want to call it, has been like, you know, that we can put in more and more resources, compute data, new ideas, whatever, into creating an artifact. And it gets like, the log of it gets better. So you can throw in, you know, that's why we have this huge exponential. increase in resources, but we keep getting better and better models. And for all of the concern
Starting point is 00:11:25 people have about it's going to top out or it's slowing down or whatever, like, no one's been right about that. I mean, sometimes they, it looked like they were for a couple of months as we digested a new model or came to a new form factor. But it has been an incredibly smooth last six or eight years of this. What those resources are, there can be some tradeoff between, you know, sometimes it's better to spend your money on better data. Sometimes, on more compute, sometimes something else. On the whole, compute power is the new oil is the statement that feels closest to true to me, but there will be other parts too. Is software dead? It's different. It's definitely not dead, but what software, like how you create it, how you're
Starting point is 00:12:13 going to use it, how much you're going to have written for you each time you need it versus how much you'll want sort of a consistent ux. That's all the going to change. You know, there have been a number of these like big sell-offs of SaaS stocks over the last few years as these models have rolled out. There, I expect there will continue to be more. I expect there will be big booms in software. I think it's just going to be volatile for a while as people figure out what this looks like. The statement that someone said to me that is stuck in my mind most these last couple of weeks is that every company is an API company now, whether they want to be or not.
Starting point is 00:12:52 Oh, yeah. Because agents are just going to be able to... Yeah, we had Dara from Uber on yesterday, and he had a pretty refreshing kind of approach. We were asking about integrating agents with Uber, and he recognized that, yeah, the ad business could potentially be threatened if you can order an Uber and chat ShpT, but he basically said, like, you have to think of the consumer. The consumer wants to order an Uber via their preferred agent. You should let them.
Starting point is 00:13:20 otherwise you're going to have other problems. Yeah, that is the right take for sure. And I don't, or I think so at least. And, you know, we've been through platforms just like this before where you, I mean, Uber wouldn't have existed without one. It wasn't until the iPhone where you could like have it make sense to order an Uber to write where you were as you were out in the world. So I think there will be totally new things that happen.
Starting point is 00:13:41 Other things you'll use in new ways. But definitely as I've started using Kodux, my excitement, excitement about having agents go off and do things for me and still use other services pay other services i'm sure we'll need to figure out new business models and how revenue gets shared around but that will happen yeah talk more about codex desktop one more question on on sass have any public market SaaS companies tried to get a soft landing with open a i and do you think there's any there's any value just let's say no one has no public SaaS companies that i'm aware of have tried that with open area.
Starting point is 00:14:20 Look, I think some of them, some of them, some of them, some of them, some of them, some of them, some of them,
Starting point is 00:14:23 and are on sale right now and, and potentially just need new energy and, and, uh, need kind of a new, an entirely new approach and maybe open AI could provide that. Yeah. Yeah.
Starting point is 00:14:35 I think some will be incredibly valuable. Some do feel like a thinner layer now. Um, but I don't know. Like, I was talking recently to a bunch of SaaS companies and they do not, they do not, they do not feel,
Starting point is 00:14:48 unexcited. Like they're like, we're going to go through a big transformation here. And, you know, yeah, sure, other people can instantly write software now, but so can we. And we got a great system of record. And it seems reasonable. Some won't make it, of course. Yeah, yeah. Yeah, talk more about the Codex desktop rollout. It feels like, you know, successful amount of downloads, but key, like a shift for, you know, people who are maybe lightly technical, but don't have time to set up an IDE and configure and invite. to actually start writing software. I want to know about plans to integrate to the phone.
Starting point is 00:15:24 That was a big moment, I think, for a lot of people with the Claudebot, Molkot thing was like, oh, I can text something and it will go and write code, and that's valuable, and that unlocks a new agentic experience. Like, where do you see the Codex desktop ecosystem going? I am, so codex desktop has been a somewhat of a surprise to me. somewhat of a surprise to me in terms of how much people love it, including how much I love it myself. You know, the, I think it's a great example of 10% of polish of the experience of using these
Starting point is 00:15:59 models, especially when there's so much capability overhang, goes an extremely long way to what you can build and how you interact with this stuff. Of course, we should have an ability to kick off new tasks from mobile and we'll do that. I mean, really what you want is like your single AI that's working for you on a unified backend access to all of your data and your ideas and your stuff and your memory and your all the context and the ability to work across a lot of surfaces and often you'll be at your desktop often you'll be on your phone and you just want to add something in but it is a pretty profound shift in my own workflow not just for coding tasks but more general purpose tasks
Starting point is 00:16:41 it's still kind of hard to use if you're not at least reasonably technical but obviously we'll will find a version of this product that can do other knowledge work tasks and control your computer and things like that where you don't have to be. And it'll bring the magic of building stuff really to a lot of people because even if you never look at code, you'll be able to build something reasonably sophisticated. One of the things that I have built when I was playing around with the new Codex app is this thing I had always wanted, just like this magic auto-completing to-do list. I really work with to-do lists and this idea that I could just put tasks in and it
Starting point is 00:17:15 would try to go do them. If it could complete them, it could complete them. If it needed questions, it would ask me questions. If it needed, you know, if I had to do something I could still do it the old-fashioned way. But an interface like that where, you know, all the stuff you want to do, you just sort of explain to a computer or your AI and it tries to go off and do them. And sure, if you're on your phone, you're going to just add a task on your phone or, you know, if you want to easily import something from email you're going to do that, like it feels really good. So I'm excited about all of the ways that this will just become a general knowledge work agent. Were you unsurprised to see a product like OpenClaw come from open source?
Starting point is 00:17:57 Because it's certainly that kind of user experience is not, I would imagine this is something that you knew would be a thing. And yet, I think part of the magic of Open Claw is that it would be very, very difficult for a large tech company. Peter didn't make many phone calls to hyperscalers to say, hey, I'm going to be integrated with your API. It just went out. And you guys, you know, and when I think back on like the Sky acquisition, this kind of experience was probably very top of mind and things that you're working towards internally. I love the spirit of everything about OpenClaw. Yeah. And you are totally right that it's much easier to imagine a one person open source project doing something like that than a company who is going to be afraid of lawsuits.
Starting point is 00:18:41 Data privacy and everything else. You know, they're like I think this is kind of how innovation works. Something like that starts is clearly amazing. There will be a way to make a mass market version of that product. But letting the builders build the equivalent of the homebrew computer club spirit go here is so important. Yeah, totally. Can we switch to social? I feel like if I Google Sam Altman's social, I get pure AI and SORA and then also demand or predictions about a human.
Starting point is 00:19:11 only social network. Where do you see social going broadly? How do you want to integrate it with it and power it in the future? The mold book thing was like a very interesting social experiment to watch. And I think points to agents interacting in some sort of social space, hopefully on behalf of people, at least in some degrees, could be quite interesting. I don't think we know what to do there yet, but it feels like social is. going to change a lot. And I am interested in the space of what a social experience can look
Starting point is 00:19:48 like when your agent is talking to my agent and coming up with new stuff. Clearly, putting a lot of AI bots on the existing social platforms is just making everyone crazy and not that fun. So that's not the right answer. But I think we can design something new for what this technology is capable of that will feel good and useful. Yeah. Is there a solution? to the bot problem that's just all the labs sort of integrating with all the other platforms. And even if you can't detect its AI generated, you can literally say, we just, we just generated those tokens. Like those exact tokens are in our database.
Starting point is 00:20:25 You probably can't do that because their open source models are like good enough to right things at this point. Yeah, yeah. I am excited about sort of like assertion of humanity instead of trying to like detection of AI as a thing here. I don't know if the social platforms, it's like in their interest to solve this. It is causing, it's creating like, at least in the short term, it creates like a lot of engagement and increased usage.
Starting point is 00:20:48 So I believe they could solve it if they wanted to. I'm not sure it's in their interest. I'm actually not even like it is, I don't like it, but it's, some people do seem to enjoy it. Yeah. Can you talk a little bit more about where SORA as a video generation model is going? It feels like tool use is maybe. under-discussed, you know, adding, adding reasoning. It's not just the diffusion model. It's giving
Starting point is 00:21:15 these models the ability to make linear cuts and overlay motion graphics. And when I scroll, the Instagram rails that I see, they're like vibe reels with cut-outs and it flips negative and it's all color-graded and stuff that like, yeah, you could probably diffuse it all. But it's pretty cool just to teach a model to also use after effects or whatever video, you know, motion graphics suite you want to use. Is that an interesting unlock? What do you see going on? So all of that stuff will happen. And I agree with you the models will get really great at doing that. People love generating videos. I would say people, we have not yet found a way that people really love watching other people's videos. This is true for a lot of other AI. Like they love to,
Starting point is 00:21:59 you know, people love talking to chat GPT or whatever. It's not that compelling. But for most people to like read other people's chat dbt generation. So I think there is something. But isn't that for all writing and all video? Yeah, it seems stronger to me in this case than the general case, but maybe you're right. Maybe this is... Yeah, but if somebody says, hey, I generated a 15-minute video, I'm really excited for you to watch it, and you watch the first 10 seconds and you're not that captured by it. I don't care that it was human-made.
Starting point is 00:22:26 Yeah. Yeah. Maybe you're right, and it's not a special case. Do you see that in the data with SORA downloads? Because I've noticed that I'll generate stuff on Sora, download it, and she'll generate stuff on Sora, it and share it to a group chat. And then it's this little in-joke that me and five other people get. And we see this family group chats of, oh, it's our dog and our kids.
Starting point is 00:22:45 But, like, there's not really like, okay, yeah, this is a business. You know, everyone likes this. Absolutely. I would say that the most common use case is something like that. Sure. Like memes on group chats is a real killer use case. How is the Disney rollout going? I was super excited about it.
Starting point is 00:23:02 Jordy was extremely bullish on it from a business strategy perspective. Yeah, when you look at how image models have grown various LLMs historically, and now you're going to have an image and video model that can do something that no other LLM can do, at least legally. Yeah. And he's Bob Eiger joining Open AI. Bob I love it. No. He's going to be looking for a job.
Starting point is 00:23:26 Free agent. He's a free agent. He's a free agent. He'll pick him up for us. You know, do some recruiting. That'd be great. I think that generating... characters in images and videos is going to be very important to people and they really like that.
Starting point is 00:23:42 Like other, like we were saying otherwise, I don't think many people like want to watch me and some Star Wars character doing something together, but I might think it's cool. It's, you know, there's like a real trend going on right now with chat GPT where it's make a caricature of me and my job based off everything you know about me. Yep. And those kinds of things people actually do like looking at other people's media. Yeah, it's almost like a like a face filter or something. There's like enough of it's the studio Ghibli moment. Like there's enough of the human still in there that it's not it's not you can't yours is not the same as mine.
Starting point is 00:24:16 So it's still personalized. It's personalized and it says something about you and you know the the like a lot of these things. A lot of what's gone viral before the image done. I think it's like if you can make people look a little bit more attractive or cool than they look in than they are in real life. Yeah. Without sort of having to ask for that. Yeah. How are you thinking about the actual rollout?
Starting point is 00:24:37 We were debating between like open the floodgates. You can generate any Disney property versus like it's one character out of time. And everyone's posting Spider-Man. And then it's, you know, Mickey Mouse week. And there's another viral moment. I'm not sure what the team is planning there. I know Disney's had some different opinions about what they want to do and try to be a good partner there. But I'd be excited open the floodgates personally.
Starting point is 00:24:58 Oh, that'd be fun. Cool. Talk about your first, speaking of video, your first Super Bowl ad. It felt like not generated, lots of motion graphics, the black dots coming together. Like, what was the goal with that ad? Who were you trying to speak to? It didn't feel like a direct response QR code download the app. What was the mission?
Starting point is 00:25:21 I love that ad. I think that was such a cool one. It was clearly not meant to be like a mass market or direct response ad. Yeah. But, you know, speaking to the like people who are at the center of this. this revolution and just trying to like celebrate everything that has come before and everything that will come after. Yeah.
Starting point is 00:25:41 It, we didn't hear a lot about it from like average users of chat GPT, but we heard a lot about it from like researchers in the field and a lot of resonance there. It was definitely not generated. It was done the old fashioned way. And you know, it had like a lot of people loved it. A lot of people hated it and then many people in the middle didn't get it and I felt okay about that. That's a great definitely.
Starting point is 00:26:04 I like our ad for Sunday. Okay. It's about codex, no surprise, but. Yeah. Talk about the evolution of the advertising to be more just clear about the actual use case, the value. Like what are you trying to say with your advertising strategy now as it refers, as it relates to like video?
Starting point is 00:26:22 Well, the thing I would most like us to say, and I think this is a new challenge given where the models are is to teach people what they can go do with AI. Yeah. I mean, AI is now unbelievably capable and most of the world, it's still like asking a basic questions on chat GPT. Everyone can go build amazing things now. Everyone can go do all kinds of work. Scientists are going to make new discoveries. And I'd like to, you know, to the degree that advertising we do can teach people how to use this. I think that'd be awesome. Yeah, so the KPI is like reduce the capability overhang broadly. I think that should be a general KPI for us, not just of our ads. Yeah. The products that we, we build, how we teach people to use those products.
Starting point is 00:27:10 Like that feels very important. Yeah. Anthropic also has a bunch of ads in the Super Bowl. It seems like they might run a ton. Damn heard. What do you think that they're getting wrong about their characterization of how ads will roll out in chat apps? Well, it's just wrong. Like the main thing that I think is we're not stupid.
Starting point is 00:27:36 We respect our users. We understand that if we did something like what? those ads depict people would rightfully stop using the product. No one, like our first principle with ads is that we're not going to put stuff into the LLM stream. That would feel crazy, dystopic, like bad sci-fi movie. So the main thing that's wrong with the ads is like using a deceptive ad to criticize deceptive ads feels, I don't know, something doesn't sit right with me about that. I asked, I asked Claude what,
Starting point is 00:28:09 if if what the definition of playing dirty and it said what did it say misleading others about your intentions hiding information or creating false impressions yeah thought it was a little dirty I thought it was well played but it was it was it was well played for sure and it was a funny ad and they you know like this sort of the stuff about the chat GPT personality that most annoys me which will fix very soon I thought they nailed in the ad so that part was funny. Yeah. But I don't know, you know, like I also, like, I think it's great for them not to do ads. We have a different shaped business. I did notice that they said in their thing, like, we may later revise this decision and we'll explain why. So, um, yeah, the, the, the blog post,
Starting point is 00:28:57 was, uh, kind of did a good job of disarming the pro ad people gave themselves an out in the future. Uh, do you think they care? I think it doesn't matter. Like, I think it's a side show, you know, people are excited for a food fight and between companies, but like, the, the amazing capabilities of these models, the product, the kind of like the groundswell of excitement around Codex, that feels way more important. How do you stop the pausing that happens in voice mode? Do you need new hardware for that or is it a model capability thing? We need a new model.
Starting point is 00:29:34 We may need some new hardware too, but mostly we just need a new model. I think we all have a great voice mode. by the end of this year. What's the bigger bottleneck energy or chips? It goes back and forth right now again, it's chips. Chips. Is there anything? But it would be different, different times.
Starting point is 00:29:51 Is there anything we like society, America should be doing more aggressively to increase the supply of fabs? Yeah, I think it is, well, it may get itself on its own. It may like normal capitalism may solve it, but I think somehow, deciding as a society that we are going to increase the wafer capacity of the world, and we're going to fund that. And we're going to get, you know, the whole supply chain and the talented people we need to make that happen would be a very good thing to do. Do you think there's an upper bound on model IQ? Like the race right now is you're smart, but you're not smart for days.
Starting point is 00:30:31 You're smart for hours. Can you go much further and get much smarter? It seems certain. upper bound. I don't know. I don't know how to think about that question yet. Yeah, I can't even reason about what 2,000 IQ looks like. I don't know what that means. It's funny you say, I mean, I can't reason about what it means to think about a problem for like 10,000 human years. That's another good one. Yeah. That's great. But maybe IQ is going to feel even even weirder. I don't know. I feel like I somehow feel like this isn't going to feel as strange.
Starting point is 00:31:08 as it sounds. And the like for a bunch of reasons we're so focused on other people. We're so focused on our own lives. We're so focused. We have such a human centric nature that like, okay, this thing is really smart. It's inventing new science for us. It's running companies for us. It's doing all the stuff. And that sounds like it should be impossibly weird and I think it'll just be very weird. Do you think space data centers will provide a meaningful amount of compute for Open AI in the next two to three years, five years? No. Ten years?
Starting point is 00:31:41 You just keep going. 10,000 years? I wish you'd be luck. Okay. The funny thing about the whole back and forth about ads is that in our world, the criticism is that you didn't launch ads early enough. Is there a world where you wish you launched earlier? How is the actual rollout going?
Starting point is 00:32:04 going, are advertisers happy? Do you have like a really long roadmap? Or do you think you'll be faster at catching up to sort of what's frontier and ad? We haven't even started a test yet. We start the test soon. But, you know, we're going to, it's going to take us some number of iterations to figure out the right ad unit, the right kind of the right way this all works. Do I wish we had started earlier? We have gone from like not a company, you know, three years and three months ago or something like that. We were like a research lab. And now we are like a pretty big company with a lot of products. So there's many things I wish we had done faster.
Starting point is 00:32:41 I think we were correct on the tradeoff here of how we balance things that we need to do. I wish, you know, we launched this very cool enterprise platform this morning, which we had done that earlier too. But I had to like deal with the monstrous growth of chat GPT and codex and all that our stuff. Good problems to have. Last question for me. What happened to that internal writing model that you used to write the essay? that feels like something that was really cool, but we never really saw the light of day.
Starting point is 00:33:08 We're going to get a lot of that spirit into a future model. Again, it's like there's so much stuff happening. We have to make these hard prioritization decisions. I would love a cool writing model. Not as much as I would love a cool coding model. And it's what is possible now for coding for science, that's the thing I'm most excited about. for accelerating all kinds of research, AI, and otherwise, for really accelerating the economy. I think that's like the right thing for us to most prioritize in terms of new capabilities.
Starting point is 00:33:43 But yeah, well, like you want you want a model that can write beautifully because it means it, well, you want to write a model that can write beautifully if it can also think very clearly and express that very clearly. That's just useful in normal work. Yeah, that makes sense. Last question for me. How have conversations been with the broad open AI leadership team? You guys are in a position where any single word or sentence you say in any situation can be spun into a headline immediately. And then you guys have to go on damage control, kind of correcting the narrative. But of course, the original message is often, or at least the original news is often seen more broadly than the correction. It seems like an interesting challenge. It is a strange way to live. And I don't know of any private company that has ever been so in the news and so under a microscope. And at some level, it's frustrating. And we're so squarely in the sights of everybody's anxieties and every competitor trying to take us down.
Starting point is 00:34:48 And everybody's like just what is going to happen with AI to their part of the business or their own lives. that there's like a lot of plasma looking for an instability to collapse on. In some other sense, though, the subjective experience of it is we are so busy on so much exciting stuff that it often feels like there's this crazy hurricane turning around us. And when we sit, it's like fairly calm. You know, the media or Twitter goes insane about something one day. They're talking about a crazy meltdown. We're like, that is insane.
Starting point is 00:35:22 Like, okay, and people talk about it all day. and then later find out it's wrong and sort of seemed like a lot of wasted energy, but we're just like, we have this great new model coming. People are building incredible stuff. Companies are transforming. We're trying to figure out how to get more compute and deal with this compute crunch. And we just kind of like keep going and we're busy. And then if we like open Twitter, pop up our heads and look at the news, it's like, wow, that is an insane, crazy thing happening completely divorced from reality or 99% divorced from reality. And like, okay, someone will correct it, But then we get back to work and people flip out again.
Starting point is 00:35:57 It's it is much, it is weird to watch when we look outside. But it is, uh, it is less chaotic internally than I think you would imagine from reading the media reports. Yeah, that sounds. Well, thank you so much for taking the time to come chat with us. Congrats and great. Thank you. And I'm excited to see the codex ad. Please try it.
Starting point is 00:36:19 I have the app and five three have been like, uh, I think the coolest thing we've done in a while. Yes. With one prompt, I rebuilt the tbpn.com homepage to look exactly like Berkshire Hathaway, and it was just immediate. It was very fun. Interesting choice. Plain text. It was very easy.
Starting point is 00:36:35 Immediately one shot. It did not really push it to its limits. But I'm having fun. So thank you so much for coming on the show. We'll talk to you soon. Thank you guys. Great to catch up. Goodbye.
Starting point is 00:36:41 Cheers.
