Big Technology Podcast - OpenAI COO Brad Lightcap: GPT-5's Capabilities, Why It Matters, and Where AI Goes Next

Episode Date: August 8, 2025

Brad Lightcap is the Chief Operating Officer of OpenAI. Lightcap joins Big Technology to discuss the launch of GPT-5, how it works, what sets it apart from previous models, and whether it's AGI. We also cover scaling laws, post-training breakthroughs, enterprise adoption, health care applications, pricing strategy, and the company’s profitability outlook. Hit play for a front-row seat to OpenAI’s thinking on the future of AI. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Transcript
Starting point is 00:00:00 GPT-5 is here and OpenAI COO Brad Lightcap is with us to break down the new model's capabilities, what it means for the AI business, and what's next for this promising technology. Brad, it's so great to see you. Thank you for joining us on an emergency episode of Big Technology Podcast. My pleasure. Thanks for having me. All right. So briefly, I just want you to talk a little bit about what GPT-5 is. So maybe within like 60 seconds or so. Can you talk about what it is and how it improves on previous OpenAI models? Yeah, so GPT-5 is, it's our next generation flagship model.
Starting point is 00:00:34 It does something really interesting, which is it actually combines into one model the ability to dynamically choose whether to think hard about a problem and reason about it to give you an answer or not. And so you'll remember previously you had to go deal with the model picker in ChatGPT, everyone's favorite thing. You had to select a model that you wanted to use for a given task, and then you'd run the process of asking a question, getting an answer. Sometimes you'd choose a thinking model. Sometimes you wouldn't.
Starting point is 00:01:03 And that was, I think, a confusing experience for users. GPT-5 abstracts all of that. So it makes that decision for you. And it's actually a smarter model. So you're going to get a better answer in all cases, regardless of whether you're using the thinking mode or not. And it's vastly improved on things like writing, coding, health.
Starting point is 00:01:24 It's much more accurate. It's much faster. And so all around, we think a better experience. And now, for those of us who've been following the hype, I think we probably imagine you would lead with this is an explosive increase in intelligence versus there's a switcher on the model that will go to reasoning or non-reasoning when it makes the most sense. So can you explain like what's the disconnect there and why lead with the usability versus
Starting point is 00:01:51 the intelligence increase? Yeah, because intelligence really is a function of how much time the model is going to be thinking. And so depending on how much you want to allocate thinking time to a problem, you're going to get a better answer. Typically, the longer it thinks, the better an answer it can give you. So when we test the model on certain benchmarks and evals and we allow it to think, it will dramatically outperform any of our existing models by far. Even if you don't allow any thinking time, you still get a typically net better answer than you would for one of our non-thinking models like GPT-4.1. So it is a dramatic improvement in intelligence. It should be, I think,
Starting point is 00:02:27 better quality model across pretty much all dimensions, but that reasoning time and being able to use the reasoning time dynamically to think, we think actually is the important part. It makes for a much better user experience. I'm going to parse your words a little bit. You said that it is a dramatic improvement over previous models. Sam in a press call said that GPT-5 is a pretty significant step over 4o. Simon Willison, who's been using your model for a little bit, says it doesn't feel like a dramatic leap ahead from other LLMs, but it exudes competence. It rarely messes up and frequently impresses me.
Starting point is 00:03:06 I'm just setting this up because I'm curious whether we could say, or whether you would say, that this model is an exponential increase in capabilities or an incremental increase in capabilities. You know, it's hard to measure it that way. I think we're now kind of into this regime of having to measure intelligence across a lot of different dimensions, which isn't a way to dodge the question so much as it is to explain why GPT-5 is such a special model. And so obviously, it's better at the core things that you'd expect it to be better at. It scores better on things like SWE-bench. It scores better on all
Starting point is 00:03:41 the kind of academic evals that we put it through. This one in particular, we actually made a real emphasis to have it score better on certain health benchmarks. So it's better at medical reasoning and other health-related things. But there's a lot of things that go into what makes a model good now because you have a lot of dimensions to play with depending on kind of how that model's trained and how it can think about problems. So if it's faster, for example, we think that's actually indicative of it being better. If it can give you a better answer per unit of time thinking, we think that's an improvement that is an important vector to measure also. If it can do things like structured thinking, problem solving, tool use, all these things are things we
Starting point is 00:04:21 actually measure. And they're kind of invisible to users. You know, if you're just using ChatGPT, you don't necessarily appreciate each of these things happening under the hood. But all those things are better for GPT-5 than they were for our previous models. Right. And the reason why I'm asking is because I think a lot of people have pointed to the leaps from the original GPT to GPT-2, GPT-2 to GPT-3, GPT-3 to GPT-4. And one of the things people have seen is just a general increase in capabilities across the board. There were no caveats of like, and maybe there's a reason for those caveats, but there were no caveats of, you know, there's intelligence increases in this
Starting point is 00:04:58 place, in that place. It was, we trained a bigger model, I'm pretty sure this is what it was, and it's better across the board. So have things changed? They've changed, yeah, from a technical perspective. I think when you go from GPT-2 to GPT-3, three to four, these were really just exploits of what was and is the scaling paradigm of pre-training bigger and bigger models. It's kind of one vector of training, and you get a better model as a result. And that continues to hold true, but we now have this kind of other category of training, which is post-training, and being able to use test time compute in more interesting ways than we used to as almost kind of a second
Starting point is 00:05:39 stage of training. And so we think that that actually gives us a little bit of a boost, a force multiplier on our ability to push the model toward new intelligence levels, and also be able to train into it a lot of the things that you want an intelligent model to be able to do. So using tools, for example, is something that we think is really important for overall intelligence. GPT-2 and 3 couldn't really do that as well. GPT-4 could do it in a more nascent way. And now GPT-5, you get that baked in with the benefit of these kind of multi-step and longer horizon reasoning processes. So yeah, we want to abstract that from users. Obviously, we don't think that you as a ChatGPT user should have to stop and think about that. And in some
Starting point is 00:06:21 sense, I think the model picker being a point of frustration for people was an expression of the fact that people don't necessarily want to have to make those decisions every time they talk to an AI model. They kind of want the model to make those decisions for them. And so that's why we think GPT-5 is a big step. And going back to increasing the scale of pre-training delivering predictable improvements in model performance. Yes, now post-training is in the picture. It's making models better in really impressive ways. But are you of the belief, and does OpenAI have the belief now, that there are diminishing returns from pre-training, given that we're now talking about different forms of training these models? Not at all.
Starting point is 00:07:05 Our scaling laws still hold. Empirically, there's no reason to believe that there's any kind of diminishing return on pre-training. And on post-training, we're really just starting to scratch the surface of that new paradigm. You know, the O series of models, which were kind of the previous reasoning models, were really just the beginning of us starting to explore what's possible in that post-training regime. And I think that's going to be kind of the dominant theme here for the next year or two is continuing to scale in that dimension and continuing to see the gains that you get there, simply because they're so significant.
Starting point is 00:07:41 And so now we're pushing on two axes for how to improve models, and we think that's going to tighten and condense the rate of innovation. Does OpenAI believe that the vast majority of improvements from here are going to be coming from scaling or from algorithms? I think it'll be a combination. It's always a combination, right? It's always algorithms, compute, and data, right? And so we push on all three, and they all play a really important role, I think,
Starting point is 00:08:13 in how we look at the future. And then the hard part, obviously, is having them come together. So being able to train larger models requires typically that you want to train on more data, obviously with more compute. And so that's a delicate balance between those things because just scaling up doesn't necessarily mean,
Starting point is 00:08:30 you know, in all cases that you're going to get kind of the same, you know, corresponding rate of improvement. You have to be able to bring those other pieces also. So it's not like we push one button or the other. We actually make a really conscientious effort to try and kind of pull all of those together. Okay, and you're not calling it AGI, and I have to say, I've lost a bet on this show
Starting point is 00:08:50 because I was listening to Sam on the Theo Von show. He said GPT-5 is smarter than us in almost every way. And I said, all right, well, that sounds like what you would imagine AGI would be. And then, you know, GPT-5 comes out yesterday, or as the release happens, Sam says, I kind of hate the term AGI because everyone at this point uses it to mean a slightly different thing, but this is clearly a model that is generally intelligent. Help me understand what's going on, because it seems like maybe he wants to call it AGI, but you're not yet. So why is this not AGI?
Starting point is 00:09:31 Well, it is a hard thing to define. You know, the joke here is you ask five people what AGI is, you'll get seven answers. And I think the way we kind of look at it is it's a cumulative process, right? It's a system. And I think you have to define kind of what is it that that system is, and what do you expect it to be able to do. And for me, at least, that's a system that is reliably able to learn new things that are kind of out of distribution by virtue of its ability to reason, to think, to solve problems, to use tools, to come up with new ideas. And so do I think we're at
Starting point is 00:10:04 a system that I would call AGI? No. But I think we start to see the traces and the pieces of that overall system for generalized learning start to come together in models like GPT-5, and I suspect in its successors. I don't know if we'll have a point where we are like, okay, we've crossed from a non-AGI world into an AGI world. And even if there were, I'm not sure we'd actually realize it necessarily until after the fact, because one of the things we've learned working with the models that we have is the capability overhang is significant. I think when Sam refers to the intelligence of the models, and having a PhD in your pocket, we haven't yet really exploited that as a thing. In some sense, I think you could
Starting point is 00:10:50 pause AI progress right here for 10 years, and you'd still have about a decade's worth of new products to get built, of new ways that people figure out how to use the models, even with a GPT-5 level model, in interesting products and interesting processes. And one of the kind of interesting things is I think as the models get smarter, they almost demand more from a product building perspective in terms of how you actually plug them into the system. I always kind of roughly analogize it to like you could have a really, really smart intern. And at the end of the day, they're only capable of doing a few things for you. They can take notes in meetings. They can write summaries. They can pull basic analyses together. But if you bring a PhD to work, that person has a
Starting point is 00:11:32 tremendous capability set that they may not be totally effective on the job on day one, but your job is to really figure out how to expose them to enough context, enough information, give them the right tools to make them really effective later on. And that process actually takes longer to get them to their full effectiveness than it would an intern. And I think it's going to be similar with AI models. And so, you know, it is a continuous process. And I don't think it will be linear. But where we are today, I would say, you know, we're probably not quite yet at something I would call like an AGI level system. Yeah. And it brings up such an interesting question, which is, does it really make sense to try to make the models smarter from here?
Starting point is 00:12:10 Or is it about trying to build those ancillary capabilities? You know, I think Sam mentioned this on the media call, but GPT-3, he said, was high school level intelligence, GPT-4 maybe the level of a college student, and GPT-5 an expert. So I guess I wonder, for OpenAI, is the quest to add more intelligence to the mix, or is it to focus on capabilities other than smarts? Some of the things that you mentioned, like memory and continual learning. It's going to be, I think, all of those things. Certainly, there are some unsolved problems.
Starting point is 00:12:40 You mentioned a few here, and I would agree with those: things you'd expect a really smart person to do, that kind of come by default, that our models still struggle with. And so there's open research there that we still have to do, I think, to be able to kind of close the loop on what I would call the full spectrum of intelligence. But, you know, intelligence, like we were talking about earlier in the podcast, expresses in a lot of different ways. And part of it is just your, you know, pure IQ. It's your knowledge of how things work and your ability to recall information. But then it's also your ability to reason about how to use other tools to solve problems. It's your ability to be reflective and to look back on your own chain of thought, your own line of thinking, and actually course correct when you feel like, you know,
Starting point is 00:13:23 I actually went down the wrong path, and maybe I didn't come up with the right strategy to solve this problem. And so that's one of the cool things we see: GPT-5 on those vectors we can actually reliably measure as better than the previous systems we had. And for us, I think one of the real-world things that we really want to understand is how do they actually perform in the real world? How do developers use these models?
Starting point is 00:13:46 How do enterprises use these models to actually apply them to existing problems, real-world problems, and see if the next models kind of do better than the last models. And so for us, I think, the real-world benchmark is increasingly becoming important
Starting point is 00:14:00 as a sign of intelligence relative to the academic benchmarks. And how big of a priority is continual learning within OpenAI? We have a lot of priorities. I think, you know, certainly that's among them. But we feel really good about our research trajectory. Low priority?
Starting point is 00:14:17 You know, the cool thing about OpenAI is the way that we have, I think, systematized being able to do research. And this has really been true from the early days of the company. I joined OpenAI in 2018. We take this kind of highly exploratory approach to research.
Starting point is 00:14:35 And so we're very much not top-down, I think, in how we approach research, where there's one idea and everyone kind of just, you know, gloms on to that one idea, and we kind of do one thing at a time. What we really do is a lot of open-ended exploration in small teams. We explore different paths and see if those lead to new ideas that we then kind of cycle back into the kind of core idea, the main line of ideas, if they work. And if they don't, we recombine those teams into other ideas that seem to be working and then allow other, you know, new ideas to offshoot from there. And so it really is kind of feeling around in the dark a little bit.
Starting point is 00:15:09 And when you find that kind of patch of grass where you're like, okay, we might be on the right path here, you kind of bring everyone to that point and then kind of let everyone feel around a little more. And I think that's kind of how it has to work. I think it's really hard a priori to know these things in advance. I think you can have intuition. And I think our researchers tend to have kind of, you know, better intuition than the average. But it really is still scientific exploration. Now, I want to talk about whether your Plus subscribers, or the people who are using these chatbots, will feel the improvements in ChatGPT. You know, there's an interesting comment from Ethan Mollick, the Wharton professor, who is also experimenting with GPT-5.
Starting point is 00:15:48 He says, I think it's a big step forward, but not an unexpected one if you've been following the curve. He says these models got gold at the Math Olympiad this week. I'm losing track of what massive advances mean. All the models are improving very quickly right now. The question is, if you have a model that's capable of college level biology, and then it goes to graduate level biology, the average chatbot user may not feel that, even though it's gotten much smarter. So I guess I'm curious how you think the increased smarts will be reflected in the average user's ChatGPT experience, and in the experience of the Plus users who've been using these reasoning models for a while.
Starting point is 00:16:36 Is it going to feel any different for them? Yeah. I saw something on X that was akin to what you're describing, which someone basically kind of said: I think for the upper echelon of ChatGPT users, who are probably in the paid tiers, who are active on a daily basis and are really kind of expert-level at using these systems, it's going to feel like an improvement, but maybe a more subtle improvement. But for the average user, for the free user, and we're bringing GPT-5 to our free tier, it will feel like a dramatic increase. If you actually look at kind of the way free users have used ChatGPT, most of them have actually not experienced the power of the reasoning models. They mostly are using GPT-4o. And, you know, they mostly are
Starting point is 00:17:20 kind of using it in this very turn-based, very quick, you know, back-and-forth, almost search-like way that I think doesn't actually express the full capability of the model. And so for a lot of people, this will be the first time using a model that has reasoning capability. And not only will it be, you know, the first time using it with reasoning, but it'll be the first time that they're experiencing a model making a decision about how long to think about a problem and how good of an answer to give relative to how hard the question is. And so we expect that, yeah, for the average user, it will feel dramatically different. Maybe for the kind of upper echelon of power user, it may not feel as
Starting point is 00:17:58 different. So I would agree with that. And I think that's a natural thing. I think that's actually a good thing. You know, if you've been following the kind of rate of AI progress and you're kind of exploiting the frontier at every point, yes, it probably is dizzying, but it starts to feel more continuous than if you've been using what is basically kind of the best model from a year or two ago. Right. I think you're so spot on about the average user using it as like a version of search.
Starting point is 00:18:28 And when they speak to me, they're like, well, what should I use AI for? I'm like, just upload stuff and start talking to it about the things you upload. And I had a friend who was uploading pictures of his son's football practice and asking it for coaching tips. And he was fairly blown away that this thing was giving some real analysis of positioning. I mean, I wouldn't use it as a football coach, but I do think that as the average user gets into these capabilities, it's going to be fairly mind-blowing. Yeah, it's, you know,
Starting point is 00:18:59 everyone's got a little bit of a different entry point, and that's the cool thing about it, is like, it's really personal for everybody. You know, we focused on health a lot with this release because that was one of the consistently common things that we heard from people as a starting point for how they've used powerful AI: when they're navigating a health journey. And so we really wanted to make an effort on making sure that if people are going to be using AI systems for health-related things, that we could serve them the best possible model. And so that was a big push for training GPT-5. Yeah, you brought up health a couple times. Do you want this to replace a GP? I mean, a lot of people are really underserved with health
Starting point is 00:19:37 care, but I kind of worry about handing them a model that can hallucinate and saying this is the substitute. No, I don't think it'll replace GPs, but what I think it helps people do is have more agency in their journey, a little bit more control over, you know, the process of managing care. It gives people also just an awareness of the conditions. So, you know, we hear stories all the time of people managing conditions that, you know, they didn't really understand because no one actually took the time to explain it to them. And that's not
Starting point is 00:20:23 because anyone did anything wrong, it's just because the healthcare system as designed doesn't allow for there to be time for people to understand what it is that they're managing. And so even just giving people that baseline of education of like, you know, this is the condition you're managing, it's this common, it's going to express in these ways, you're going to feel these types of symptoms. That's a huge unlock just in people's kind of psychology for what it means to be managing a disease. And, you know, I think you still have to kind of work with a GP for care or, you know, a specialist for care. But having something that can kind of handhold you through that journey, I think for a lot of people is really comforting. And in a lot of cases, it's actually proven to be helpful. Obviously, like, we want to make sure that model is as accurate as possible.
Starting point is 00:21:00 So being able to kind of push the model capability in that domain specifically has been a big area of focus. And we think now with GPT-5, and obviously with, you know, future models, we've seen consistently the rates of accuracy and the rates of hallucination go up and down respectively. GPT-5, I think, depending on how you measure it, is four to five times more accurate than its predecessors. And that may be more accentuated in health, I don't know off the top of my head. But we have a lot of control, I think, and are pushing in the right direction on being able to make them reliable and accurate. It's pretty interesting. We're talking about things so far beyond the chatbot. Like, of course, there's the chat
Starting point is 00:21:43 function, but there's coding, there's health. And then, of course, there's enterprise, or the way that businesses use these models. And businesses are notoriously slow at implementing this technology. And I'm sure there's so many approvals and reviews, and it's tough to get things out the door. But I do think that when you have better models, this is sort of my belief, you sort of are able to push that forward much faster and much more effectively. So talk a little bit about what a better model in GPT-5 will enable on the enterprise front or business front. Yeah, no, I would agree with your assessment there. I think in many ways, I always kind of say we haven't yet seen the ChatGPT moment, I think, in business for AI.
Starting point is 00:22:28 I think AI was an amazing tool for consumers, where your search space, so to speak, is more narrow and you've got a more constrained problem. You've got, obviously, a much more narrow context that you're processing, and I think, you know, you can kind of take things turn by turn with very, very few kind of external dependencies. And you really just kind of let the model's pure intelligence shine. Businesses are a different category of difficulty. So you've got complex business processes. You've got a lot of multi-user dependency. You've got a lot of context that you have to process. You've got a lot of tools that have to be brought to bear.
Starting point is 00:23:07 Those tools have to be used in succession in certain ways, with certain guardrails. And there's not as much fault tolerance for when they don't work. And so it kind of goes back to what we were talking about earlier. I think you look at models like GPT-5 and the impact that they're going to have in business, and it is that baseline of capability that's moved up. It's their ability to use tools, to think in a structured way, to solve problems, to kind of recursively correct their own mistakes, to do long context retrieval, things like that. These little things do matter on the edge.
Starting point is 00:23:41 And you don't feel them every day in ChatGPT as an individual user, but you will start to feel them as a developer or an enterprise. And we see this anecdotally, too. I mean, we've worked with large enterprises and small startups and the entire spectrum in between on testing these models, and GPT-5 specifically, before release. And we get a lot of feedback from companies like Uber and Amgen and Harvey and Cursor, Lovable, you know, JetBrains.
Starting point is 00:24:11 I mean, all companies that have use cases that are highly, highly sensitive to the model's ability to reliably call tools, to deal with long context, to, you know, problem solve and reason effectively. And so it's a rising tide, I think, across the enterprise. And it's just really going to be on the developers we work with to be able to kind of, you know, understand the difference and the improvement and then implement them in the applications that they're building. Yeah, it is interesting to note that you have already been working with many companies and letting them use GPT-5. So has there been a sort of unified,
Starting point is 00:24:51 we couldn't do this with the previous models, but we can do it now with GPT-5, or is it sort of spread out in terms of the capabilities that it's now enabling? I would say it's been, you know, a rising tide across the board. So all the companies that we work with typically now are pretty accustomed to evaluating and benchmarking performance across all the models that they use. But everyone has kind of reported, you know, consistently higher performance on those evals. There are a few areas in particular where we've seen spikes. So one is coding, for sure. I mentioned companies like Cursor, JetBrains, Windsurf, you know, Cognition and others that we work with, who anecdotally have all said that GPT-5 now feels like the most capable coding model, whether that's in an interactive coding environment or more of an agentic coding environment. And then also, one of the things that we see consistently now is its ability to reason and problem-solve in very technical domains is significantly improved. And so Harvey's a great example of that, where you've got, you know, Harvey AI working with legal firms and law firms and being, you know, very, very reliant
Starting point is 00:26:01 on its ability to reliably, accurately, and consistently portray cases that it's looking at, legal analysis, to provide that kind of level of structured thinking you want when you're doing legal analysis. And so I expect we'll see that carry over. I mean, financial services is a very interesting area, heavy on data analysis, heavy on research, heavy on planning. Those are all areas that we've seen improvement in. And so as we continue to kind of see GPT-5 permeate the market, we'll get more and more
Starting point is 00:26:30 of that feedback and can continue to improve on those use cases. And how about pricing? Because an input token is half the cost of GPT-4o's, and an output token is the same. Are these lower costs going to help enable more use cases? And on that note, I mean, how does lowering costs square with the fact that you've raised like $48 billion this year, or announced $48 billion in funding? Is it really possible to lower costs and deliver on the expectations that the investors are expecting
Starting point is 00:26:49 sync with the fact that you've raised like $48 billion this year or announced $48 billion in funding? Is it really possible to lower costs and deliver on the expectations that the investors are expecting on that front. Yeah. So we've, in opening eyes history, every time we've cut cost, we've seen typically some corresponding increase in consumption that usually outweighs the cost cut.
Starting point is 00:27:11 And so, you know, for as long as that trend holds, we will continue to cut costs on models. We know that there's this complicated dance that developers have to do between latency, model quality and intelligence, and price. And I think, you know, what we've tried to do here basically is take the market's feedback on all three of those fronts, and really place these models, these GPD-5 models, not just the standard model, but also the mini-model and the nanomodel on this frontier of quality, cost, and latency
Starting point is 00:27:40 that kind of optimizes for what we think the market needs to be successful. And so we tried to find a really attractive price target at a very attractive, average latency, and then obviously with the kind of built-in model quality and intelligence you get with GPD-5. And so we will continue to push that frontier, And I think the more we push that frontier, typically, the more we just see people want to use it for more things. And so for, you know, that equation to exist, we're very fortunate and it motivates us to try and make them better.
Starting point is 00:28:08 Are you ever going to be profitable? I hope so. Okay. We'll take it. All right. Brad, before we wrap, let me be the first to ask you. When is GPT6 coming? Well, you're not the first to ask.
Starting point is 00:28:23 I could tell you, but I have to tell you. Yeah, no. So Twitter is quick on the trigger on that one. But no, I mean, like, look, we're, like I said, we think GPD-5 is extraordinarily capable. We think there will be better models in the future. We know there will be better models in the future. For now, we're just focused on how do we get this in people's hands? How do we support the companies that are building with us using this model?
Starting point is 00:28:47 And then we're still in the science of it. I think that's the exciting part is, like, we're in the first inning of it. And we ourselves are just understanding the paradigm we're in. And so this is, I think, an important first step. And you kind of have to understand where you are to understand where you're going. And, you know, hopefully the learning from this will make GPT-6 much better. Well, Brad, it's so great to have you on, especially today on GPT-5 launch day. So whenever GPT-6 comes, we'll have to do it again.
Starting point is 00:29:13 Thank you so much for joining. Look forward to it. All right, folks, GPT-5 is out. You can try it on chat.com. And it's going to roll out to everybody. So give it a look. And we'll be back to talk more about it tomorrow, where Ranjan Roy and I will break down the week's news, especially what the latest
Starting point is 00:29:32 is on GPT-5. Thanks everybody for listening, and we'll see you next time on Big Technology Podcast.
