Limitless Podcast - The GPT-5 Reveal Nobody Expected (Not in a Good Way)

Episode Date: August 7, 2025

OpenAI just unveiled GPT-5—the first numbered upgrade in 2½ years—and we watched the livestream so you don't have to. We break down the three-tier model lineup (GPT-5, Mini, Nano), the new one-model-fits-all interface, and fresh features like customizable "personalities," advanced voice, interactive canvas UIs, plus direct Gmail + Google Calendar integration that promises true agent-style productivity. Live demos wowed us with instant code-generated apps and visuals, yet inconsistent charts, a weird eulogy bit, and early hallucinations left the reveal feeling more underwhelming than earth-shattering. Tune in for our unfiltered takes on why GPT-5 is powerful, but maybe not the "AI Super Bowl" fans expected.

------

🌌 LIMITLESS HQ: LISTEN & FOLLOW HERE ⬇️
https://limitless.bankless.com/
https://x.com/LimitlessFT

------

TIMESTAMPS
00:00 What's New In GPT-5
03:18 Demos
07:26 Chart Fraud?
10:17 Hot Or Not?
15:00 Mid Benchmarks?
21:55 The BEST New Feature

------

RESOURCES
GPT-5: https://openai.com/gpt-5/
Josh: https://x.com/Josh_Kale
Ejaaz: https://x.com/cryptopunk7213

------

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:03 This is the first major model release in two and a half years from OpenAI. GPT-5 is now live. And in the words of Sam Altman, you now have a PhD-level team of experts in your pocket that can write, create, build, code, whatever you could think of. Josh and I literally just finished watching the live stream. In fact, I think it's still going on right now in the background of the screen. Josh, give me a take. This was a huge day. This felt like the Super Bowl of AI.
Starting point is 00:00:30 This was the first numbered model. I mean, GPT-4 came out two and a half years ago. GPT-5 is here today. We just got off the live stream. It's a mixed bag of sentiment. We're going to get into how we feel about it. But I want to just dive into everything you need to know about GPT-5. So Ejaaz, do you want to walk us through exactly the announcements, what they did, what's exciting, the cool new features?
Starting point is 00:00:48 Let's just hear exactly what they announced today. Okay. So here's the quick, high-level 30-second take. There are three new models. Everyone now has access to GPT-5 in some way, shape, or form. You've got GPT-5, the main model, which is really deep in reasoning, learning, coding, all the things we just mentioned. You have a GPT-5 Mini model and a GPT-5 Nano model. We've got pulled up on the official website what the main differences are between these models.
Starting point is 00:01:15 But basically, some are more expensive than the others, and the more expensive ones give you smarter results. But there's one key difference this time, Josh. Typically in the past, you've had to choose between the smart model, where you kind of have to wait a couple of minutes to get a good answer, and the other models which give you quicker answers, like GPT-4o that most people use these days. That is now gone. You can just use one singular model that answers super quickly and really smartly. So this is a stark difference from what people are used to, right? Normally when you use ChatGPT, you have a drop-down menu with a bunch of different models you can choose from. Now there's just going to be one. It's going to be GPT-5.
Starting point is 00:01:52 And what's interesting is a lot of people who are listening to this who might not have a Plus plan or a Pro plan, well, they're actually going to be going from GPT-4 to GPT-5, which is going to feel like a huge step change in quality. So I think for everyone who's listening to this, anyone who's using it, whether you're a free user or a paid user, you're going to notice a big difference in the quality of your responses based on this new model change. And there's no more confusion. There's just one model that does all the thinking for you.
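That auto-routing idea, one front door that silently decides how much compute a question deserves, can be pictured with a toy dispatcher. To be clear, this is a sketch of the concept only: OpenAI hasn't published how GPT-5's real router works, and the heuristic and backend names below are invented for illustration.

```python
# Toy sketch of the "one model decides how hard to think" idea.
# The scoring heuristic and the "fast"/"reasoning" backend names are
# purely illustrative -- not how GPT-5's actual router works.

def estimate_difficulty(prompt: str) -> float:
    """Crude difficulty score: longer prompts and 'hard' keywords score higher."""
    hard_words = {"prove", "debug", "optimize", "derive", "step-by-step"}
    score = min(len(prompt) / 500, 1.0)  # length contributes up to 1.0
    score += sum(0.5 for w in hard_words if w in prompt.lower())
    return score

def route(prompt: str, threshold: float = 0.8) -> str:
    """Pick a backend: cheap/fast for easy prompts, slow/reasoning for hard ones."""
    return "reasoning" if estimate_difficulty(prompt) >= threshold else "fast"

print(route("What's the capital of France?"))                       # prints: fast
print(route("Derive and prove, step-by-step, the closed form..."))  # prints: reasoning
```

The point of the design, as the hosts describe it, is that this decision moves out of the drop-down menu and into the system itself.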
Starting point is 00:02:18 It will determine how much reasoning is needed, how much thinking is needed. You don't have to prompt it with any specific thing. It'll just decide exactly how much time it needs to give you the best answer. And that's pretty cool. And they also teased a few other features that are going to be releasing soon. One was AI personalities, which means that with ChatGPT, you can choose whether it has more sarcasm, or whether it sounds more professional, or whether it sounds more entertaining. This is kind of similar to the AI Grok companions that we've seen come out of xAI. So I'm really excited to see OpenAI's
Starting point is 00:02:47 version of that. They're also releasing an advanced voice mode, Josh, which you and I both love. We speak to the voice mode of ChatGPT so much. And to your earlier point, this is now accessible to everyone. Previously you kind of needed to be on some kind of a premium or maybe even Pro plan to get access to this. Now everyone has access. But Josh, we can talk about these benchmarks, we can talk about these kinds of features all day. But the live demos are where all the fun's at, right? And they had quite a few demos to show us.
Starting point is 00:03:15 What were your thoughts on that? The demos were amazing. I was very impressed, which is a contrast to how I felt about the rest of the presentation. But to start with the demos, I thought they were very good. So the first demo they gave was on the Bernoulli effect, which, I mean, I believe is what happens when wind kind of goes over a plane. It allows planes to fly, and it tells you when they're going to crash. And it was a really simple prompt that asked it to explain the Bernoulli effect, but then it created this entire visual interface on top of it that was fully interactive,
Starting point is 00:03:39 very engaging, looked very pretty. You can see it here on the screen. Basically, there are these toggles you can change, there's a 3D visual element, it's interactive. I thought that was really neat. I think the canvas has been kind of an underutilized part of the ChatGPT experience, and allowing you to prompt it to now create these virtual environments, these virtual worlds. That was pretty cool. The second demo was a web app in French. Basically, it was a way for this guy to teach his girlfriend how to speak French. And what he did is he said, hey, can you make me some flashcards? And can you make me a cheat sheet? And can you make me a game that has a little mouse? And it has the mouse
Starting point is 00:04:15 kind of follow around this path. And it's kind of like the snake game, but with a mouse and cheese. It was cute. It looked great. The actual interface that it generated looked amazing. The quality of the output was really good. What I found interesting is the multimodality aspect of it. So he asked it to create a game. Not only did it create the flashcards and the game, it created a full dashboard with your progress, and it spoke to you. So it had the voice. It can speak actual French out loud. It allows you to create these one-of-one experiences. You could build a game, you could build an app, you could build whatever you'd like in one prompt. I found that to be really interesting and really exciting. Those are the demos that I thought were interesting. After that,
Starting point is 00:04:55 it got much less interesting. Did you have any takes on the demos or anything else about the presentation? Okay, I'll preface this with the positive news first. I, like you, thought that some of those demos were pretty cool. Actually, I think the Bernoulli effect demo that you just described, she created in two minutes, 400 lines of code, which is... Really impressive. Honestly, a pretty impressive feat, but also an ambitious thing to do live in front of millions of people who are watching a live stream. They generally recommend not to do live demos at all, let alone live coding demos. And to have the confidence to do that was pretty cool. But that's when things started going downhill for me, Josh. They
Starting point is 00:05:36 started to demonstrate its creative writing ability, right? And think about this, right? You want to demonstrate how good your model is at creating prose. So you might want to create something fantastical or creative. Hint: it's in the term itself. But they decided to write a eulogy, which was the first of many weird things that they tried. So they wrote this eulogy for the old ChatGPT model. And they thought it might be a fun experiment to compare which model does better. So they had GPT-4o, which is the current model that exists, write a eulogy for itself, I guess. And then they had GPT-5 write a eulogy for that older version of the model, and they were pointing out how GPT-5 sounds way more intuitive. It made people in the audience
Starting point is 00:06:23 laugh, et cetera. But all in all, I thought it was kind of a weird example. And the reason why I say that, the reason why I'm so critical, is that this is meant to be such a magical moment. Two and a half years in the waiting. GPT-5. OpenAI was the one to lead frontier AI models in the first place. I want it to be a magical experience, not really talk about the death of whatever in general. And then the other demos, Josh, I kind of tuned out when it came to the coding stuff. Not necessarily because that's not my forte, but because of how they pitched the coding examples. They were demonstrating pair programming. That's it. Pair programming in software development is a really important thing because
Starting point is 00:07:02 you kind of want multiple people working on the same thing at the same time, so that it frees you up to focus on other things. They didn't really do it in an intuitive way. And honestly, it wasn't that different from offerings that Claude Code or Anthropic already has right now. They had this crazy benchmark, Josh. Did you see this? It was the biggest form of a chart crime that I've ever seen. Let me show you. This was a good one. Do you have the link to pull up here? Yep. Look at this. Notice any differences? That's pretty good. That's pretty good. So it's interesting, because the 30 and the 69 are the same height. And then the 52 is taller than the 69. And the 74 is like much taller than the 69. And the proportions are a little out of whack, and this feels a little
Starting point is 00:07:46 dishonest. I was a bit confused. And this kind of happened throughout a few charts in this presentation, where they weren't really accurate. I mean, granted, I believe I'm looking at this right, this doesn't appear to accurately represent progress. So it seems like there was this trend of incorrect charts. And then Sam actually publicly apologized on X. But then we saw it in a different slide, where the charts again didn't quite match up, in favor of making it look like there was more progress than there really was. So yeah, questionable charting. That was one of the things. In terms of features, there were some fun features. I mean, one thing that I got excited about, it's dumb, but it's cool, is you can now change the color of your chats. So now you could have blue chats or yellow
Starting point is 00:08:24 chats. I like the customization stuff. Because as a user who sits there all day, I want it to be a little bit pretty. I found the companions, the personalities, pretty interesting, just like Grok. So we talked about this on Grok literally earlier this week. They're starting to roll that out now in ChatGPT, where you can have an assistant, you can have a jokester, you could have a storyteller. You can kind of customize the sentiment of the chatbot that you're talking to. And then they had a lot of other use cases that they were demoing. They really had a big moment on health. I think the health section was interesting. They spoke about someone who was suffering from cancer and needed to diagnose her symptoms. And she used
Starting point is 00:08:58 the help of ChatGPT to guide her through and give her agency through this process that would otherwise be really difficult to navigate if you don't understand the health data. It was a little bit of a sad and depressing version of this. And I think that was mostly the sentiment throughout the presentation. Like you said, Ejaaz, this is this magical moment of this superhuman artificial intelligence, and a lot of it was veiled in this eulogy and cancer and kind of this grayscale presentation, very minimalist. It was too serious. Very low energy. And yeah, I think the presentation of these things matters. And it's funny seeing Sam stand on stage. He's kind of doing the hands like this Apple pose
Starting point is 00:09:35 that we see in a lot of these presentations. And as far as the actual presentation goes, that was most of the noteworthy stuff. I mean, if you are listening to this, you should be excited for a few reasons. One, GPT-5 will be available for everyone. Even if you're a free user, you get access to it. It will be smarter. It will do things better. It will allow you to create more cool content. It will basically deprecate the need to be a coder. I mean, this trend feels very obvious now: the coding capabilities of each one of these models is improving so quickly that it seems improbable that a few years from now anyone will really need to write code. I want personal takes, personal opinions. We both
Starting point is 00:10:09 just watched this. We both have our own ideas. What did you walk away from this presentation thinking? Were you mind-blown? Were you a little disappointed? Like, where do you stand on the spectrum here? That's kind of how I feel, if I'm being honest. All right. Okay. I accept that it's blown a bunch of benchmarks out of the water. In fact, not even by that much. I think it's like two percentage points better than Claude Code, which is still, don't get me wrong, as you point out, amazing. But if you're going to come out with a new frontier model and you're trying to win this entire AI race, I think you need to maybe do better
Starting point is 00:10:44 or have a better example. One thing I did like about the coding stuff, actually, now that I think about it, is in the demo itself, the guy spun up multiple tabs of ChatGPT and ran the same prompt, right? And he explained his reasoning behind that, which was like, I just want to see multiple versions
Starting point is 00:11:02 and then pick my favorite, right? Another thing that I liked was Christina Kaplan, who is the head of memory, the memory feature at OpenAI, who we're actually getting on the show in a few weeks to interview. She said that memory and context travel from your older model conversations, as well as across all the new GPT-5 conversations. So that's amazing, right? So if you, in that example, spun up a bunch of different tabs, it'll have context from all the other tabs that you're running. So, okay, that's amazing. But aside from that, Josh, I'm not that enthused.
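Side note: the multiple-tabs trick described above, running one prompt several times and picking your favorite answer, is essentially best-of-n sampling, and you can script it instead of juggling tabs. A minimal sketch; the `fake_generate` stub and the length-based `score` are stand-ins for a real model API call and a real quality judgment.

```python
import random

def best_of_n(generate, prompt, n=5, score=len):
    """Fire the same prompt n times and keep the highest-scoring candidate."""
    return max((generate(prompt) for _ in range(n)), key=score)

def fake_generate(prompt):
    # Stand-in for a real model API call: answers vary in length per call.
    return prompt + " " + "word " * random.randint(1, 8)

random.seed(0)  # deterministic for the example
best = best_of_n(fake_generate, "Explain the Bernoulli effect")
print(best)  # the longest of the five sampled drafts, since score=len
```

Josh's later complaint applies here too: in practice you'd want the `score` step automated (say, a second model judging the candidates) rather than eyeballing five tabs yourself.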
Starting point is 00:11:31 Like I said, this is the first OpenAI live stream where I kind of exited out of it after like 35 minutes. Sam should have just kept it at 30 minutes. This is the longest live stream he's ever done. And I was just like, why are you doing this? Yeah, it's funny. He just sent me a message a few minutes before the live stream even ended. And he was like, I'm done. And I was like, yeah, you know what? I'm done here too. It just got a little boring for my liking. There's this dark undertone to it. And a lot of the outputs didn't match the outputs that I was hoping for. You expect this model to not only be smarter, but you expect it to come with more features that make it exciting. And they did this
Starting point is 00:12:08 a little bit with the canvas improvements, where you can actually generate code and create these UIs. But even the point that you liked, Ejaaz, where you said he opened up multiple windows and gave it the same prompt and then chose his favorite: that feels like a responsibility I shouldn't have to have. I want a mixture of agents. I want you to tell me, and you to decide which one. And then I'll tell you how I want it changed. But even just their own demos of opening up five tabs, they're like, I didn't really like that one. This one's okay. I didn't love this one. I'm like, that's my job? I'm supposed to do this? Like, I use Grok Heavy, and Grok Heavy has this mixture of agents. It's got 10 of these bad boys doing it all at once, and it's giving me one
Starting point is 00:12:47 great answer. And as a user, that's what I want. I want models that will improve the user experience, not make me jump through hoops to optimize the way that I use it. And also, em-dashes are still there. So, I mean, generally speaking, I think sentiment matters a lot. I'm going to compare this to Grok, because xAI had a recent announcement with Grok 4. They're very excited about the future of humanity, the future of intelligence. They're very excited about this truth-seeking AI with this really grand mission statement. And it's very optimistic. It's very enthusiastic. It's very much driven in a way that feels exciting. Whereas the ChatGPT one is like, oh, guys,
Starting point is 00:13:27 this model is very safe. This model is not going to lie to you. This model... And they actually had an entire demo about, what was the word they used for it? Deception is what they called it. They had an entire deception category where, don't worry, guys, the model will no longer try to deceive you. It's much better now. And it feels like they're doing two different approaches. One of them is very defensive and the other one is very inspirational. And it just left me feeling like, okay, well, I'm still going to use ChatGPT all the time. I'm glad I don't have to pick any models now. It'll just do it all for me. But this doesn't really change much for me.
Starting point is 00:14:01 And it actually makes me feel a little more excited about other companies in the space who are seemingly progressing even faster than OpenAI. Because when I think about Google, when I think about xAI, they're both doing really cool things that seem really impressive. I mean, Google, they had the Genie 3 release earlier this week. And that, to me, blew my mind far more than this GPT-5 release. So I really think this was a disappointment. This was a bummer in a lot of ways. This was not the Super Bowl that I had hoped for. I wore this really nice white shirt.
Starting point is 00:14:33 I was ready to go. And they let me down. So that was a little depressing. But I do want to get into social commentary, because the good news is we're not alone in this sentiment. For a lot of people, it also left a lot to be desired. So do we have any posts that we could share of people who... oh, actually, here's one right here. So for the people listening, we're looking at a chart. And it is the ARC-AGI-2 leaderboard, which is generally how we measure the closeness to AGI,
Starting point is 00:14:56 whatever you think that threshold may be. This is a single metric that people use to compare. And what we're seeing here actually is Grok 4 is far ahead of GPT-5, which doesn't seem right. Grok 4 has already been out for a couple of weeks. And yet it is much, much more powerful. Granted, it's a little more expensive than GPT-5, but that is a significant improvement over GPT-5. So if we're just comparing benchmarks in terms of AGI, Grok is winning. Oh, and here's another post from our good friend Beff Jezos. He was on Bankless a little while ago. And he had a comparison of GPT-5's benchmarks on Humanity's Last Exam to the Grok benchmarks. And again, it missed. Grok is actually superior in a lot of these benchmarks. So it leaves a lot to be
Starting point is 00:15:40 desired and a lot to be reconsidered. I think somewhere in here, Ejaaz, is a post from Polymarket. It's a picture from Polymarket, and I really, really adore this chart right here. It says: which company has the best AI model by the end of August? And OpenAI has been the favorite. They've been pinned at, what was that? That was like 90% all month? Just under 80, actually. They were pinned at 80% all month. And that was in anticipation, obviously, of GPT-5. And as soon as this presentation happens, they are now down to sub-20%, with Google actually taking first place at over 80%. So that type of shift, that is a public market sentiment shift. That is a big shift. To go from 80% to 20% over one presentation,
Starting point is 00:16:25 it means a lot. And I think what's fun for us is now we get to reconsider the leaderboards of who is going to be the leader of AI, and how they're going to be the leader of AI. If anything, this probably complicates it more, because, I mean, OpenAI still has the most users. They're still incredible. This was by no means a flop. And I don't want to make people think this sucks. This is an incredibly smart model that is incredibly capable, that I will be using literally every single day personally. So it's great. It's just, I mean, the stakes are so high. We're moving so quickly. There was a lot left to be desired. What do you got? Give me something. Agree, disagree. Any more sentiment to share with the public? Yeah, I mean, I'm not going to try and debate you, because I agree with you on this one, right? One word to describe it is underwhelming.
Starting point is 00:17:12 It is still great. Like you said, I'm not going to use another model right now, because it still has all my memory, which is the most important thing. And the features are technically, I guess, the best. But I'm not really seeing it in practice just yet. There is no kind of magical component. The point around Grok 4 is simply that it performs better than GPT-5 on a number of different tasks.
Starting point is 00:17:35 And the point that's being made in this tweet is that the tasks OpenAI demoed just now on the live stream were selectively chosen, basically cherry-picked. And it just goes to show that you can't get this past everyone else on social media. It is like the vessel of advertisement, and people could sense the inauthenticity, the kind of morgue-like effect when they were writing eulogies. This wasn't really a magical Apple moment. This was just kind of a nothing burger, dare I say, right? I also saw this tweet, Josh. Oh, this is great. Okay, so I was just going to bring this up. This is so good.
Starting point is 00:18:10 To give some context here, Josh just described earlier in the episode their first demo, straight out of the gate: this lady demonstrating how GPT-5 could write code to illustrate something called the Bernoulli effect. And this is, you know, a complicated physics effect, and it's good to see graphically, visually, the wind dynamics and all that kind of stuff. And someone screenshotted the answer that it gave describing the Bernoulli effect
Starting point is 00:18:39 and cross-checked that with how the Bernoulli effect actually works, and it was hallucinating. And the reason why this is so funny is because on the live stream, they spent 10 minutes assuring everyone that this is the model that hallucinates the least. And in fact, they showed a bunch of charts and hallucination rates showing that it hallucinated the least.
Starting point is 00:19:00 So I just found this pretty funny, that you had this live stream showing all these supposedly cool things, and then you had this kind of mirror-like verdict on social media, where everyone was unpacking and exposing the flaws in the demos themselves. So it was a mixed bag. Like you said earlier, Josh,
Starting point is 00:19:17 they're approaching two strategies. One, this inspirational thing. The other is like, hey, but we're also super safe and aligned with humanity. Don't worry, right? And they should have just leaned heavily on one and forgotten about the other. Yeah, listen, models are going to hallucinate. That's okay. But to spend 10 minutes on it and then to show an example of a hallucination live on stream, it's like, okay, well, you've got to pick your battles. I don't think those were the winning battles to pick. But I think a lot of this comes down to the authenticity you mentioned, which I think is a really big deal. It's kind of how you carry yourself as a company
Starting point is 00:19:51 and as a culture when you're delivering this. And when we compare this to Google, Google feels very aligned. Like, I can kind of define Google as a personality, as a company. I could almost imagine the decisions they're going to make around certain issues before they even happen. The same is true with Grok, where I understand it, and a reason why I really enjoy Grok is because it's very unfiltered. When I ask for an unhinged mode, it will actually do that, and it will be direct.
Starting point is 00:20:20 And with OpenAI, it feels like it's trying to satisfy this subset of people, and that doesn't really feel authentic to the mission. So you're getting these mixed results, where on one end, Sam is this feisty, aggressive guy who's building AGI for the world. And then on the other end, it's like, hey, we're not going to lie to you. We're being really careful about this. We don't want to hurt anybody. And there's just something a little off about it. Just disingenuous. But yeah, it's just a little off,
Starting point is 00:20:50 which isn't to say, again, the model's great. This is awesome. It's just not quite as awesome as I hoped. And I think that's probably the overall vibe of this release: hey, this was going to be the Super Bowl, but it turned out it was just another model release. It was a regular bowl. It was a microwavable bowl.
Starting point is 00:21:08 It was a Sunday of football. And that was it. Nothing special here. And in fact, it's like college-level football. Yeah. And it's a shame that in the week of GPT-5, the announcement that most excited me was Google Genie 3. It's this virtual world builder that can create these virtual worlds. You can walk through them.
Starting point is 00:21:23 They're dynamic. They remember where you are. And we'll probably have to do an episode covering it next. That was fascinating. Well, actually, Josh, let me interrupt you there. They actually integrated Google Calendar and Gmail, which sounds like... This is huge. Yeah.
Starting point is 00:21:36 I forgot this part. Yeah. Okay. So it kind of sounds like a nothing burger, but it's actually pretty cool and useful to me. No, this is incredible. Let's talk about it. Yeah, yeah. This was in the section of their memory update.
Starting point is 00:21:47 Again, I mentioned Christina Kaplan. She's coming on the show soon. We're going to interview her. We're really excited about that. But she announced that, for a while, ChatGPT's memory has only been focused on ChatGPT itself. And she said she found that pretty frustrating in her day-to-day, when she's doing other things that weren't related to ChatGPT
Starting point is 00:22:04 and our conversations with it. So she announced these two major features, where GPT-5 would integrate directly with your Gmail inbox and your Google Calendar. And she went through this demo live on stream, where she asked it: hey, I need to train for a marathon that I'm running in four weeks. Can you figure out when is best for me to train, maybe give me an advisable routine and diet to go with it, and anything else I might need to be aware of? And it gave this really concise, structured output, Josh. But here's what I loved about it. It didn't just
Starting point is 00:22:34 figure out where the best place was for me to run, the best timing, and book it as a slot so that I could just see it on my calendar. It also wrote at the end: by the way, there are these two unread emails, which you should probably get to and answer before you go on your run. And I love that. That was intuitive. That was human. That felt magical to me. And I know it sounds lame talking about email and calendar, but I know most of you listeners spend a lot of your time day to day, Monday to Friday, doing exactly that. So if there is a tool that comes in, that feels human, that can save me a bunch of time and make my life easier, I'm all game for it. Josh, thoughts?
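For the curious, the behavior in that demo, scanning calendar free slots, booking runs, and surfacing unread email, reduces to an agent merging two context sources. The sketch below uses made-up data structures throughout, since OpenAI hasn't published the connector API; it just illustrates the shape of the logic.

```python
from dataclasses import dataclass

@dataclass
class Event:
    day: str
    start_hour: int  # 24-hour clock

# Toy stand-ins for what real Gmail/Calendar connectors would return.
calendar = [Event("Mon", 9), Event("Mon", 14), Event("Tue", 9)]
unread = ["Re: race registration", "Coach intro"]

def free_mornings(events, days=("Mon", "Tue", "Wed")):
    """Days with no event at 7am are candidate training slots."""
    busy = {(e.day, e.start_hour) for e in events}
    return [d for d in days if (d, 7) not in busy]

def plan(events, unread_emails):
    """Book runs into free slots, then surface inbox context alongside."""
    lines = [f"Training run booked: {d} 7:00" for d in free_mornings(events)]
    if unread_emails:  # the touch the hosts liked: email context in a calendar answer
        lines.append(f"Heads up: {len(unread_emails)} unread emails to answer first.")
    return "\n".join(lines)

print(plan(calendar, unread))
```

The "magical" part of the demo is just this last step: the answer to a calendar question also draws on inbox state, because both feeds sit behind the same agent.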
Starting point is 00:23:09 This is how you know we're doing this immediately after the live stream. We haven't even prepped, because that, in hindsight, was one of my favorite features that they announced. It's incredible. And of course, I mean, not to brag or anything, but the coolest feature was delivered by our future guest, who's coming on in a couple weeks. So stay tuned for that. But the integration is amazing, because we recently had Arvind on, the CEO of Perplexity, and I think a lot of my problem with his bull case on the browser is that I don't really want the browser form factor. Like, I don't want to have to engage with the browser. I don't even want it to be there. I want to tell my agent what I
Starting point is 00:23:40 want it to do, and it'll go off and do it. And OpenAI has that. They have the agentic feature. And what they just added today is integration into the applications where I most need an agent. And that is my email and that is my calendar. And those two things kind of run my life, right? It's the way I interact with all the people for work. It's the way that I schedule all the things in my life. And now OpenAI has access to that, and it integrates into my entire workflow. And that to me is incredibly powerful, because there's been nothing so far that's
Starting point is 00:24:09 been able to manage my day-to-day life, because it doesn't have the context of what I'm doing. And now that it has my email context and my calendar context, that is a very huge unlock. So I'm excited to try this out, probably the most out of anything. I can't believe we failed to mention this. That's going to be really cool. And when you think about it, and you think about the ways that you engage with the web, I mean, we spend a lot of time on Discord. We spend a lot of time in email.
Starting point is 00:24:34 We spend a lot of time in calendar. Maybe there's a few additional websites. But I'd say a majority of my web surfing time is on a small subset of services that are one API call away from being fully integrated into ChatGPT. And I think that is the unlock we're starting to see here with Calendar and Mail: we are just a couple integrations away from having most of your online productive life integrated into a singular model. And now it has the context of your email, in addition to all of the context you've been feeding it for the last 32 months, since, I think, ChatGPT came out. So that's a big deal. That feels like an exciting seed they're
Starting point is 00:25:08 planting. I am looking forward to more integrations with more services that I use. But yeah, that's going to be a pretty cool one to try out and to actually use in my day-to-day life. Well, folks, you heard it all here literally first. We started recording this, and it's going out live pretty soon, straight out of the live stream. So these are our freshest thoughts. Sorry if they were a little muddled. We weren't prepped as well as we normally are, but we wanted to get this content out to you. We know our listeners want to hear about this, and we're extremely passionate about it.
Starting point is 00:25:35 We are going to have a follow-up episode tomorrow, because, OpenAI aside, there were actually a number of other really cool things that happened in AI. And we're going to cover all of that in tomorrow's episode. But again, if you enjoyed this episode, if you enjoyed all our previous episodes, if you're curious about anything else, hop on the pod, email us, message us, tell us any kind of feedback that you want us to hear. Like, subscribe, and share with your friends. And Josh, we'll see them on the next one. Yeah. We'll see you next time. Here's your homework. I need everyone to go and actually try out the model. Don't let us tell you how to feel. Go try it out. Share your thoughts in the comments
Starting point is 00:26:07 on how you feel. I actually haven't sent a single prompt to the model yet. I'm not even sure it's been rolled out. So we're going to do that. We're going to re-evaluate, discuss, and then we'll come back with even more takes. But yeah, let us know what you think. Hopefully you enjoyed this quick and dirty update on GPT-5. Go test it out, and we'll see you guys soon.
