TBPN Live - Full Interview: Sam Altman on Sora and the Future of OpenAI

Episode Date: October 11, 2025

A rare live interview with OpenAI CEO Sam Altman and Sora head Bill Peebles. John and Jordi talk through the first 10 days of the Sora app, how AI will change advertising, and what Sora means... for Hollywood.

Transcript
Starting point is 00:00:00 And we are joined by Sam Altman and Bill Peebles. Sam, Bill, how are you doing? What's going on? Hey, guys. Great to see you. Congrats on all the progress. I've been enjoying Sora a ton. Personally, I've been enjoying making them. I had a ton of fun making the collab post yesterday.
Starting point is 00:00:20 And I was wondering. And prompting your cameo feature. John made it so that he always appears as a bodybuilder if anybody is cameoing him. So you guys got to experience that. And it led to some chaotic results. Do you have favorites, or a post that you've been coming back to, or ones that have, you know, stuck out to you as particularly, you know, creative uses?
Starting point is 00:00:43 I mean, definitely all of the ones of, like, me stealing GPUs or doing other crazy things to get GPUs have been funny. In the last few days, at least in my feed, there have been these, like, very beautiful, sort of fantastical scenes that are just not things that could have ever existed without something like Sora, or wouldn't have been easy to make. And watching people build those, and watching the trends sort of flow through, has been pretty awesome. What about you, Bill? Any favorite uses of Sora so far? Oh, man. Mark Cuban came on the platform a few days ago and there have been some hilarious Shark Tank
Starting point is 00:01:17 memes. Those are probably my favorite. Pitching some Sora features to Mark. Also, leveraging the prompting function to always include an ad for Cost Plus Drugs I thought was especially hilarious; considering he's been one of the most vocal opponents of advertising in AI, he is leveraging the feature to the max. Yeah, I think there are going to be all of these weird new dynamics that we see emerge that just weren't possible in previous kinds of video. And this is like a fun period because it's all going to be so different every few days. Man, I'm watching this Polymarket ticker go by, and it's so tempting to, like,
Starting point is 00:01:54 say things to move all those markets. Yeah, yeah, yeah. To be clear, don't worry about the ticker. I don't think we're including, I don't think any of those markets are being featured in the ticker. But, yeah, again, this is the new world we're in. Yeah, you can move the market live on TBPN now. But today, people, I'm sure, will be happy or disappointed.
Starting point is 00:02:15 We're here to talk about Sora, of course. So none of the other topics. But, I mean, I also want to know about ads. Why no ads in Sora on day one? I feel like you've laid out a really great, you know, mental model for how you think about ads on Stratechery and on the Andreessen Horowitz podcast. I'm bought in. Is it a technical thing? Do you need scale? Do you need to think about it more? Why no ads on day one? This is like a 10-day-old product, right? Like, it's hard to get anything to work at all. And we, like, we don't assume success.
Starting point is 00:02:52 We've got to, like, go earn that success, and then we can think about monetization for it. But this is, like, it's gone great so far. It's still very early, and there's still a lot of work to build something that a lot of people are going to love first. What about surprising capabilities of the model? You mentioned that you've seen some fantastical scenes. I'm interested to know about specific breakthroughs that you've noticed that the Sora 2 model is particularly good at. I noticed one about reflections being great. Obviously, people love the cameos.
Starting point is 00:03:23 But what has surprised you in terms of, just like, technically the model can do something now that it couldn't do before? This model is a huge leap forward in terms of physics IQ. So pretty much all past video generation models really struggled with prompts that, you know, involve like backflips, gymnastics routines, et cetera. And this is really the only model that exists today which can reliably handle these kinds of really complicated dynamics. One of the big features that people have really loved on the app so far is the steerability
Starting point is 00:03:51 of the model. So, you know, if you give it like a really simple text prompt that's maybe even only a few words, this model is really good about kind of telling a coherent story with like a beginning, middle, and end, and doing this like automatically in a way that doesn't require a lot of direct steering from the user. If you want to, like, go into, you know, a ton of detail about exactly how your prompt should be laid out and how the story should unfold, it supports that too. So it can kind of meet you wherever you're at in the creative process. But really, this model is just, like, so hyper-steerable.
Starting point is 00:04:17 And its, like, vastly higher physics IQ just makes it able to do things that were, like, not possible a few months ago. Is that all within the model, or is there some sort of, like, reasoning step where you're hydrating or unpacking my prompt and writing a bigger prompt or breaking down the problem in some way? Can you share anything about that? Yeah, it's a good question. So, you know, the intelligence for these text-conditional video models kind of lies both in the core model itself, like Sora, and some amount of it also comes in through the text prompt.
Starting point is 00:04:47 So, you know, however the user decides to kickstart a prompt, you can have, like, a language model under the hood add some details in. But for example, you know, when it comes to things like, again, doing these backflips or any kind of physical interactions, how refraction is modeled, you know, when you're pouring water into a glass, all of these details have to be captured by the core video model itself. So that's intelligence which is really innate to Sora. And certainly you can supplement it with intelligence from a language model as well, but it's not necessarily a prerequisite to get kind of amazing results out of these things. Are there any areas on the physics where you think that the model falls down and you want to improve? I mean, we went through,
Starting point is 00:05:21 the era of, like, six fingers. It seems like reflections in water are solved, but someone was saying something about doors being hard, or, I haven't noticed that one personally, but a lot of the stuff's great. But what have you noticed that the next version is going to be even better at? This is still very early. I think Bill said it, and I appreciate it: this is like the GPT-3.5 moment for video. I agree. And if you went back to use the actual GPT-3.5, you'd be like, okay, signs of great promise, can do the occasional impressive thing, but it was really not until GPT-4 that these text models started providing real value for people. And we know how to go make the GPT-4 equivalent of video models. And we will do that. And then a lot of these things that are
Starting point is 00:06:02 currently annoying, like doors or, you know, once in a while, something goes through something else that it's not supposed to. In the same way that the world, you know, loved to complain for a brief period of time about where 3.5 fell down, and, oh, it's never going to be useful, it's never going to do this, it's never going to do that. And then we were able to just keep making it better and better and better and better. The model's physics IQ is certainly the best I've ever seen, but it is nowhere near as good as it will be in future versions. And I think, I hope, we'll see a similar thing to what happened with the GPT text models, which is people will always demand more and better and they will always find new and better things to use it for. And the world will just make ever more amazing videos. How quickly, go for it. Go for it. Go for it, Bill. Yeah. Like, GPT-1 really was Sora 1 for this modality. And the progress we've made kind of in the last 18 months to get to this 3.5 moment, right, is really compressed compared to how long it took to go from GPT-1 to 3.5 in the language domain. So we're really expecting progress
Starting point is 00:07:06 to continue to be meteoric here in the near future. How quickly do you expect the cameo feature to be cloned? That feels like an equally important part of it, you know. The model's made a leap, but the product is in the experience, and the experience of creating these assets is wildly innovative. We saw stories get cloned. We saw, you know, algorithmic short-form video feeds get cloned. I expect many other platforms to be looking at this functionality and realizing that this might be the future. You guys certainly believe that it could be important. So how quickly do you think?
Starting point is 00:07:49 We're actually totally okay with the world where we do the product innovation and everybody else copies. And I don't think it works for them as well as they think it does. Like, you know, a lot of people have tried copying ChatGPT. You can go look at some of our competitors' apps, and they even copy the mistakes. They even copy the design decisions we really wish we hadn't made. And maybe it's worked well for them. I guess I kind of hope it has. But it's been fine for us. I think, like, the key to this is not any one innovation, but it's repeatedly putting them out again and again and being
Starting point is 00:08:20 first to come up with them and put them into a cohesive offering and, you know, that's what we want to be good at. And if other people want to clone the stuff that works, we also sometimes clone stuff that works. That's fine. But mostly we want to be able to drive the innovation. And I think Bill and his team
Starting point is 00:08:36 have done an incredible job of figuring out how people actually want to use these video models, what the models need to do. Really, they've approached it as a full-stack problem, from how do you train the video model to how do you make this enjoyable for users. But cameos are one out of many ideas
Starting point is 00:08:54 they have from here on their journey to, like, the product that we hope to eventually build. And so if people take some inspiration from us and copy us along the way, I'm sure they will, it's fine. How do you think about the, like, popular claim that we want
Starting point is 00:09:11 AI detection, that we want AI content flagged? Is that a stated preference that's not a revealed preference? Because personally, I don't want bad AI content, but I don't want bad human-made content either. I want great both, and I'm fine when someone comes up with something genius and they instantiate it with a video model. How do you think about it? I think that is the real thing, is you don't want slop, you want great content.
Starting point is 00:09:40 Different people, one man's slop is another man's treasure, for sure. But what you care about is, like, good, original, thoughtful, new, helpful, whatever content, and whether that is generated entirely by a human, or entirely by AI, or, what I expect will mostly happen in the future, which is tool-assisted, human-driven generation, I don't think you care that much if the content is great. There's a lot of, like, you know, stuff that is technically written or drawn or filmed by a human but is completely derivative and much less original than what an AI has generated. And I think that will be what people really care about long term. You just want great content. Now, I also do want some human connection with it. When I read a great book, the first thing I want to do is read about the author that wrote it and what life experience went into that. I don't think that'll go away. But if they're using an AI as a tool to help them make the writing better, sign me up. That sounds great. Similarly, I would rather watch a video about someone I know than some
Starting point is 00:10:34 random AI-generated character, which is part of why I think this was cool to offer. One design decision the team made that I thought was really great, and I was actually pushing them in a different direction earlier on, and then I decided they were totally right, and I thanked them and dropped it, was the fact that the feed is AI only, and not a mix of
Starting point is 00:11:06 AI plus some uploaded videos, I think is a subtle but extremely important design decision in how people are relating to this. Yeah, it was a very weird experience for me. I was thinking about the collab post that I was making, announcing this interview. And my initial thing was like, well, I'm going to have to think of a script, or I'm going to have to think of, you know, what I say, or I should record a piece of this, and then I'll use it. And it was like, no, I just typed the prompt and then I get the front-facing video. It's remarkable.
Starting point is 00:11:32 What are you, what kind of indicators are you guys looking at as Sora transitions from what it was the second it launched, which was a creative tool, into something that's more of a consumption platform, a traditional, you know, social media platform? Like, talk about kind of what you guys are pushing toward, because obviously you're seeding the network with the tool, but it's certainly much harder to turn it into something that people are spending hours a day on and purely consuming content, not creating content. You know, we really wanted to design this from the ground up to be centered around creation, and a lot of
Starting point is 00:12:13 the metrics that we've been focused on optimizing here are really aligned with making sure as many people as possible are actually, like, getting their hands on the Sora 2 model itself and, you know, able to create content with their friends and, like, for the rest of the world. One metric that we're really proud of with this launch so far is that 70% of our users are actually creating content even to this day, you know, a week and a half after launch. And that's, like, vastly higher than on any other social media platform. And I think it really speaks to just how fun creating
Starting point is 00:12:43 can be with the right tool set, right? If you look at any of these kind of legacy platforms, there's just, like, so much friction from, like, getting off the feed and into some creative flow state, right? You have to, like, put the phone down. You have to go get, like, a camcorder, start recording yourself, find your friends, like, do a dance, etc. It's just, like, a lot of work, right? On Sora, like, you can just pick up your phone, find, like, any video you like in the feed, remix it, you know, cameo any of your friends. And I think one insight that was not obvious to us at first, but we've kind of clearly seen as an emergent behavior of this product, is just, like, there's all these people out there who would not necessarily want to be, like, you know,
Starting point is 00:13:20 influencers or something, or have, like, a big social media presence, but the fact that, like, all of their friends can just access their cameo, right, put them in all of these crazy situations, actually, like, kind of gets them onto the playing field in a way that felt really high friction before. And so, you know, we're closing in on close to, like, two million weekly active users now. We're really excited that such a huge percentage of that user base to this day is, like, still creating with Sora, and we're going to continue pushing in that direction and making sure people have even more powerful tools in the future.
Starting point is 00:13:48 Yeah. So 70% of Sora users are creating content. The typical benchmark that people kind of quote randomly is, like, 1% creation, 99% consumption, something like that. And that certainly feels like my experience on Instagram. I post a photo every once in a while, but most of the time I'm just kind of scrolling.
Starting point is 00:14:06 And I'm wondering if you think that that 1% will be much higher on Sora in terms of actual time in the app, time prompting versus time scrolling. If you have any data, that'd be super interesting. But then also, does that make it more of, like, a competitor to video games than traditional social media? Because it's such a lean-forward experience versus just lean back. What do you think? Yeah, it's a great question.
Starting point is 00:14:31 We still need to study this more, exactly how creation versus consumption habits kind of change over time for folks on the platform. It's still pretty early days. I do agree with your point, though, that I think over time, this is going to feel much more immersive in a way that, like, video games kind of do. Like, you have more agency when you're actually using the platform, you know, not just kind of mindlessly scrolling a feed for hours a day. And, like, one interpretation of this product, which I think is kind of interesting, especially from the research perspective, right, is that cameos are in some ways, like, the simplest way where you can
Starting point is 00:15:03 kind of, like, inject yourself into the model, right? So it's a very low-bandwidth communication channel right now. You know, you're only giving, like, a few seconds of video
Starting point is 00:15:26 footage of, like, any given individual into the app. But, like, over time, right, you can imagine these models know more and more about your life. They really, like, deeply understand your friends, how you want to, like, show up in the world. And, like, over time, this can almost become like a little mini alternate reality, right? So, like, you're not just generating, like, videos of yourself with your friends; like, you actually just have, like, digital copies of yourself running in the model on this Sora platform, interacting with other people, with agency. And so I think over time we're really going to see this platform evolve from something that feels kind of familiar today into something that really leans into, like, the full intelligence of Sora 2 in the future, and, like, really leverages all the world-simulation capabilities that we're working on internally. Yeah, I would add to that, if you think of this, like, spectrum of
Starting point is 00:15:55 the kind of entertainment you can have in front of a computer, at one end you have, like, watching a two-and-a-half-hour movie, and you hit play and then you lean back and you don't do anything at all. And then at the other end you have, like, a very intense video game, and you're, you know, sweating and your heart's racing, and it's, like, super, super active. AI is going to push things to be more in between there, so you'll have, maybe you're still watching that movie, but now you can, like, say something a few times throughout the course of it, and it changes what happens as the movie plays out. Or with Sora, you're seeing this amazing new phenomenon where most users are creating in a world where traditionally only one percent
Starting point is 00:16:35 of them did, and so yes, you're, like, watching a video feed, but you're doing a little bit more. And it, at least for me, really changes how fun the whole thing is and how I feel about it. Then maybe you'll do what Bill said, and you'll have, like, you'll be way more actively participating in the Sora feed. And I think you're just going to see that continuum blur a lot more. Did you see Bandersnatch, by any chance, Sam? Have you seen this Netflix thing? It's like a Netflix choose-your-own-adventure. And it was a really cool idea. But ultimately, it never really took off and became, like, something people do again and again and again. And I'm wondering if it was because it was, like, not customizable enough, or people just want to just sit back and see a
Starting point is 00:17:10 director's vision. I don't know. I've never heard of that, but it sounds cool. Yeah. A question for Sam: how do you think about allocating compute to Sora versus the rest of the business? I imagine Bill is constantly in your ear every other hour, but how are you thinking about it? You know, my real answer is I've entirely changed my focus of how I spend my days to just go get more compute rather than have to make the compute allocation decisions. I still do have to make
Starting point is 00:17:39 some short-term compute allocation decisions, but I hope we're heading to a world where I am instead telling people you've got to find a way to use more compute. And we're going to be very aggressive here. It feels like you're doing a great job
Starting point is 00:17:53 of, like, bringing things within your control, within the supply chain. What is outside of your control at this point? I mean, most of it. But I feel like you have great partners all up and down the stack, multiple partners in different parts of the chain. Like, when I think about scaling up Sora, I feel like it's crazy to bet against you. Like, you're gonna get the chips,
Starting point is 00:18:16 you're not going to be GPU-poor next year. It's not so easy. It's funny. How are the conversations going with Hollywood? Oh yeah. Oh, actually, yeah, you take it. Yeah, I was going to say, we've been chatting actually with a few, you know, very notable folks in Hollywood over the last week. You know, I think people's first reaction to this is, like, very understandably going to involve a lot of trepidation and, like, anxiety. When we've gotten to just sit in a room with these folks, though, you know, and really explain what we're building, I've actually been pretty struck by, like, how excited folks in Hollywood
Starting point is 00:18:52 are about this. You know, we were chatting with one actor recently who mentioned that, you know, on Twitter, like a year ago, she saw a deepfake of her generated with one of these, like, open-source models, which really had a lot of nasty content. And when we walked her through kind of all of our safety mitigations, right, how we're making sure that we have this very well-defined model spec, which dictates the behavior that we allow on this platform, and how we are really leaning into, like, full control of likeness, right, more so than any other platform. Like, you have to come in through the cameo process. You can't just, like, upload an image of yourself and just, like, generate a video of it, of, like,
Starting point is 00:19:31 any person. You have to come in through cameo. I think it became clear that we're really setting the right standard here in terms of making sure people are in full control of their likeness. In Hollywood, I think that's where a lot of this anxiety comes from, right? It's just feeling that, you know, some random person can just kind of take videos or images of you and do whatever they want with them and create all of this, like, terrible content that's, like, outside of your purview. But we've really been, like, designing Sora from the ground up to put users in full control of their likeness end to end, from the moment you sign into the app to, you know, needing cameo permissions to, like, access any of your friends' generations.
Starting point is 00:20:06 So, you know, I think we need to engage more with Hollywood, and we're going to continue to do that. But once we really explain the story of Sora, you know, they're very receptive to it. Do you think there's a world... To add something to that. Like, you know, the team asked me before launch if they could make my cameo open access. And I, of course, thought about it for a second and said, absolutely yes.
Starting point is 00:20:26 I had all these Hollywood celebrities then messaging me on the first day being like, you're absolutely crazy. This is insane. This is like the dumbest thing I've ever seen. And then by about the third day, they were like, hmm, that was really smart. You got, like, a lot of free publicity. Maybe we need to be doing that. And I think you're now seeing actual celebrities say, okay,
Starting point is 00:20:44 I'm going to do this, and I expect a lot more of them will. Similar thing on other kinds of characters and IP: I can totally imagine a world where our problem in a year, or six months, or maybe even less, is not that people don't want their cameos or their characters
Starting point is 00:21:00 appearing, but that they think we are unfairly not having their characters or cameos appear often enough. This may turn out to be a really big thing for fan connection. Now, it may be that kind of the previous generation of celebrities don't want to do this and the influencer celebrities all do. I don't know how that's going to go, but I bet this will be, like, a pretty deep kind of new connection. Yeah, it seems like it's been good for DiCaprio in the memes.
Starting point is 00:21:25 Like, he's not directly monetizing those when you show the champagne meme or him pointing at the TV, but, like, you know, it builds his aura in some way. A friend of ours posted something yesterday, this is Jeremy Giffon, he said: the reason we're so upset about slop is because it's obvious we're all going to love consuming it in two to three years. It's not going to be slop for long. Do you agree, Sam? I mean, some of it will be slop to some people and some of it won't.
Starting point is 00:21:53 I remember, like, there was a real reaction like this in the early GPT days, where people were like, I can't believe anyone reads this. It's like total crap. It's full of hallucinations. You know, it's like, it's not useful to anyone. And then it became more useful to some people, but they said, I can't believe anybody, like, ever thinks this thing
Starting point is 00:22:11 writes a beautiful sentence. That's insane. And then with GPT-5, you have authors saying, like, wow, this is a useful tool. It sometimes, like, writes a beautiful sentence. Yeah. And I kind of think it'll follow a similar trajectory.
Starting point is 00:22:22 What do you think about the fact that people feel, at least, I don't know if they actually can, but it feels like you can still clock GPT-5 writing? You know, "it's not this, it's that," the em dash. Like, will we still see these artifacts in three years in Sora 5, where people are like, oh, if you know, you know, you can tell, but most people can't?
Starting point is 00:22:44 Yeah, it's like, what's the em dash of video? Because I don't think it's, like, six fingers. No, no, definitely not. That's the typo, which doesn't happen anymore. Yeah, I think right now the em dash is, like, this slightly weird speech pattern in Sora where it likes to say a lot of words very quickly. These generations definitely have, like, a style to them. I think analogously to GPT, we really want to give users a lot of control over exactly how their videos show up, right, on the platform. Like, if you really want kind of, like, a very soothing experience, right, not a lot of shot changes going on, we want to give users the ability to generate that.
Starting point is 00:23:17 We're going to continue to give more optionality to people. So, you know, there'll be some default kind of behaviors and quirks of Sora outputs for sure, but we definitely want all power users to be able to be in full control. Random question: where did the name Sora come from? Yeah, this is a fun one. So the original Sora came out in February 2024, the OG blog post. We did not have a name for it, I think, until like two days before we, like, revealed the model to the world. We just could not agree on the team what it should be. Did you at least have a code word or something? We just called it, like, video gen. Okay. And so at some ungodly hour,
Starting point is 00:23:58 I, like, just started pumping a bunch of crazy ideas into ChatGPT. And then, like, we basically ran out of, like, English words. So then we switched to, like, Japanese words. Wow. And then Sora came out. I was like, wow, that sounds really nice. It means sky, you know, linked with, like, imagination, like, all the possibilities of creation.
Starting point is 00:24:14 And so then we just, like, last minute, shipped Sora. So, yeah. Yeah, it was kind of a mad dash. Okay. Speaking of Japanese stuff, Sam, you said you were looking for an Acura NSX a while back. It's kind of this throwback car, very much not a Waymo. What do you think the piece of content or format will be that remains loved in an age where everyone's taking the Waymo of video, the Sora video generation?
Starting point is 00:24:39 Well, first of all, I got that NSX and it lived up to all of the childhood hype. I mean, just incredible. That's amazing. That car is so fantastic. That's great. And I don't know. I kind of think there's going to be a lot of stuff like that for people, generated or not,
Starting point is 00:25:00 where you still want the real thing that you have the kind of childhood connection to. You know, someone like a kid today is not going to want the NSX, but whatever a cool car like that is,
Starting point is 00:25:11 they will want. And at some point, like, the fact that they can have, like, a crazy VR experience, they'll still want the real thing and the connection to it and everything they have. So I think there will be
Starting point is 00:25:20 a huge amount of that. In fact, I think the future looks like much more of that kind of stuff, not much less. How quickly do you want to create an economy on Sora? It feels like there would be a number of ways that you could create incentives for creators to create things, for IP holders, for individuals to just be passively monetizing
Starting point is 00:25:41 their likeness. Bill, what do you think for timing on that? I mean, this is like a top priority for the team. You know, there's clearly such an incredible value proposition for celebrities, for rights holders across the board here. We think cameo is, like, a great entry point for this, right? You can imagine, right now we have cameos for people; maybe you have cameos for, like, you know, your character or, like, your brand or something. And so we're actively working on the team right now, coming up with, like, the right monetization
Starting point is 00:26:12 model here to get this rolled out. But it's really important to us, right, that our creators on the platform are rewarded and that there are clear, you know, financial incentives for, like, the incredible work that they're already doing. So this is, like, top of mind for us and we'll have updates here over the coming weeks. This is, like, something we're actively working on. I will, I think it's super important and awesome. I will say I would like to know how many hours of sleep Bill has averaged for the last few weeks,
Starting point is 00:26:35 but I bet it's not enough. So we got a lot of stuff. The team's got a lot of stuff they have to do in a short period of time, and it's going to take a little while. Okay, let me put one more thing on your plate, Sam. I mean, earlier, like years ago, you built Loopt, a location-based product. Have you thought about how AI and location-based content fit together? Like, on most of these social apps, you can tag a location.
Starting point is 00:27:03 That wouldn't even make sense in the current Sora app, but what does the AI maps product look like? I haven't thought about AI and location that much, but I've thought about, like, how AI can really change the social experience for people. We don't have, like, a for-sure answer yet, but we have, like, a lot of interesting threads to pull on. And I have thought back to, like, my days running that startup more.
Starting point is 00:27:31 My instinct is it is possible to make a very interesting new kind of social experience, connecting you to people, helping you find people, that is intermediated by AI in an interesting way. But, you know, we'd have a lot of exploration to do there. What advice are you giving to startup founders these days? I remember in the GPT-3.5, GPT-4 days, it was like, don't build a company that assumes model stagnation. How do you think about it in the age of Sora? That's been really great advice. It really has. It played out exactly like that. There's a bunch of great companies that
Starting point is 00:28:06 aren't built that way, and they've done great. But if you were just, oh, I have a special prompt that tunes up GPT-4, yeah, bad times. But how are you thinking about it now in the context of video and Sora specifically? You obviously do have an API. You have Dev Day. There are people that will build on top of this. Is it a different shape of the problem? Totally. The reaction to the API has been nuts positive. It's at least the fastest-ramping revenue I've ever seen for one of our new models in the API. I mean, maybe there was something faster that I'm not remembering. But the demand there has been just incredible, and people are doing awesome stuff with it.
Starting point is 00:28:46 Bill and I have not had a chance for a one-on-one since launch because it's been so crazy. We're doing one later today. But one of the things I was going to suggest to him was that, given how much excitement there is to build on this stuff, we do something we don't usually do and put out our intended roadmap of the things we're going to prioritize. Because I can imagine really cool new startups that simply were not possible, that will be possible
Starting point is 00:29:10 as each of these new things ships. So I had a question when you guys released Sora 2 via the API, which was that if Sora has the potential to be an Instagram- or YouTube-scale business, why release part of your edge to the entire world, so they can integrate it into other creative tools and then use the model to generate content that doesn't have a watermark, that's not in your feed,
Starting point is 00:29:37 that you're not able to get that feedback loop on that you guys do with the Sora app? For ChatGPT, we also put out a great model in the API, and people can theoretically compete with us on ChatGPT, and some try to, but we are willing; we're never going to build every cool use of the technology, and we want the world to get all that stuff. We're delighted to also get paid on people using our API, but, like, we just want AI to flourish out in the world.
Starting point is 00:30:04 We're not going to build every great use of what you can do with video models either. We'll build one, and I think it's pretty awesome. But people have a lot of other ideas for businesses and products to go build, and we'd like to enable those. Okay, last question. Back to cars. What's wrong with the Porsche 911? Yeah, you said earlier, the timeline was in turmoil. You said, if you were worth, somebody said if you're worth $5 million, would you buy a 911, you said no, you agreed with PG.
Starting point is 00:30:31 What did you mean by that? I mean, it was, maybe it was like a tasteless joke. It was kind of, like, late at night. I was, you know, whatever. But I have an unfortunate proclivity for expensive cars. And the response was, like, would you ever spend $250K on a car? And I
Starting point is 00:30:47 took that literally. Oh, that's amazing. Hit the size gong for taking it literally. No time for $250K cars? Not necessarily, but I, that was probably not my best tweet, you know. I enjoy it now. We all enjoy it now that I have the context. Congratulations on all the progress to both of you. Thank you so much for taking the time to stop by the show. Really appreciate the update, and very excited to see where this goes. Thank you so much. Thank you. Talk to you soon.
