Big Technology Podcast - The Big GPT-5 Debate, Sam Altman’s AI Bubble, OnlyFans Chatbots

Episode Date: August 22, 2025

Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) Did AI take a step back with GPT-5? 2) Is AI hype going to cool off? 3) GPT-5's switching problem 4) Do we need AI agents? 5) Thinking vs. Doing AI 6) Sam Altman says parts of AI are a bubble 7) Eric Schmidt says the U.S. should stop overindexing on AGI and instead build it into products 8) GPT-6 is going to have much better memory 9) MIT study says 95% of AI projects fail to achieve their goals 10) AI may replace OnlyFans outsourced 'chatters' 11) Is love AI's real use case? --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Transcript
Starting point is 00:00:00 Is the GPT-5 backlash overdone, or a sign of the AI industry finally realizing that the party is over? Sam Altman for one says AI might be in a bubble, but that's probably not going to stop OnlyFans' talent from outsourcing at least some of their flirting to artificial intelligence. That's coming up right after this. Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional cool-headed and nuanced format. We have a great show for you today. We're going to revisit the debate about GPT-5 and whether it's actually as bad as critics say, meaning the end of AI progress, or whether it's misunderstood. We're also going to talk about these comments from Sam Altman that
Starting point is 00:00:41 AI might be in a bubble, and also some from Eric Schmidt that the AI industry and the tech industry should stop focusing on AGI and focus on building. And then finally we'll round it out with a conversation about how AI might be coming basically to the back end of OnlyFans conversations, if it's not there already, and what that means for our lives and for our society. Joining us as always on Friday is Ranjan Roy of Margins. Ranjan, great to see you. How you doing? Good. We took a week off and apparently all AI progress stopped. This was terrible timing for my summer break. I was out in Nepal doing some trekking. And by the way, I have to say we got one of my favorite podcast ratings that we've ever gotten. It said, "May the
Starting point is 00:01:22 mountains treat you right. Their username is Hedgefunk. I'm a Sherpa from Nepal currently living in New York and I'm thrilled to hear you're heading there to Trek and saying that they regularly listen to the podcast the moment it drops on iTunes. So thank you, Hedgefunk. It was a great trip. I was out in the mountains. I was thinking about life. I was thinking about AGI and I was thinking about why GPP5 isn't anywhere close to it. And Rondon, I'm just going to use this moment to after we were probably the most optimistic podcast about GPT5 two weeks ago to talk about how my reflection has led me to come to this point where I believe it is. as disappointing as the critics point out and maybe even more if that's okay i think that's okay i'm
Starting point is 00:02:04 already going to preface i'll take the other side even i did not have a contemplative climbing of mount everest or what other napolee mountain you are on but i have a feeling i'm going to be arguing with you on this one so not ever's this time uh i wonder what the oxygen deprivation would do for my takes on this podcast but antipurton base camp was where we went okay so i i i I do think that this model is, you know, feels like at least a step back for me from 03. And this is kind of my core complaint here. And it goes towards your core belief of this is a good model. And that is that I want my AI models to think.
Starting point is 00:02:45 And the one thing that was nice about O3, the model that OpenAI has deprecated and not brought back, unlike 4-0, is that this thing would think a lot. And Brett Leichap, who was on the show, the CEO of OpenAI, who was on the show as the company released GPT-5, he said, we know that if models think a lot, they're going to be more intelligent. And GPT-5 doesn't give you the option. Now there's this thinking option and there's non-thinking or fast version of it, but it seems to think a lot less than 03 did. And it doesn't give you the option to, like, really go hardcore on the thinking. So this idea of like it, you know, deciding when to use which tool or which model and that being the intelligence, that really to me, after seeing O3 go away and use this like really underwhelming thinking model, I just think that we're dealing with much less capable AI when we're talking about GPT5. And that to me is distress it. So my reaction, especially watching all the complaining and the underwhelming reactions to it,
Starting point is 00:03:56 and I would put this as separate from the kind of terrifying reactions where people miss the sycophancy of GPT of the previous iterations and were able to, where it always told you that you're right and you're great. I think on this tool calling or which direction should I go, it was really interesting to me because we recorded separately from the Brad Lightcap conversation, and Brad and myself kind of ended up having the exact same reasoning as to why this was a different type of intelligence. It's the idea that knowing which tool to call, knowing which direction to go actually is different than traditional reasoning models as they existed before. And I still strongly believe that is where things need to go. The idea, and we've debated this for years now, is it going to be one model to rule them all
Starting point is 00:04:53 or will there be lots of specialized models? And actually, from a starting point, not even the model, the actual platform or the system or the product, knowing where to go to solve a specific problem is where this has to go. Now, GPT5, I think there's definitely some like uncertainty over how it was rolled out in what it looked like and how people are used to using chat gpti but i actually think i think this is still where it's going and i'm actually happy because it felt like removing a little bit of the overall kind of hype cycle and actually letting people take a breath to me is actually the single best thing i've been hoping for for a long time so i'm still pro gptvive release okay well i i'm going to spend a good chunk of the beginning of this episode trying to get you to change your
Starting point is 00:05:44 stance because I have so many issues with this and by the way the first thing that you just said you said that okay it is letting us take a breath as opposed to overhyping I mean in what universe Sam Altman was again hyping this up saying GPT5 is much better than anything he's used before it's better it's better than most humans at almost every task on Theo von basically saying It was everything, you know, that we imagine AGI will be without saying the words. And then it came out and it was clearly not AGI. So, you know, let's just go right there right now. And then we're going to go back to this switcher thing.
Starting point is 00:06:26 Is this really a break from the hype? Or was it just that like is the break actually that the model just doesn't live up to it? No, I think the break from the hype is it did not live up to the hype. And I think that's good. So I agree what Sam Altman was promising on Theo Vaughn and wherever else, this definitely is not that. But why I'm saying I think that's a good thing is it's going for a moment, like GBT5 has been so built up for so long. Now everyone realizes we're not going to get some magical model release that just solves everything all at once. and then AGI comes and robots take over and all white-collar jobs are eliminated or whatever else.
Starting point is 00:07:12 I'm saying the fact that it did not live up to the overdone hype, that's a good thing. Because now, and we'll get into the Eric Schmidt piece, we'll get into this MIT, 95% of projects don't show ROI study. Like, I think now everyone can get back to the real issue is how can we actually build to solve problems. And so I'm going to actually get to that issue in a moment, but I still want to talk about the disappointing nature of the model. And then I'm going to get to the things that I don't like about this agent idea, even though I was bought into it, maybe a couple weeks ago when we talked about it. So look, I mean, we've talked on the show for a long time, right? It took them like, what, two years, took Open AI two years to go from GPT4 to GPT5. And we said for a long time, just name the next model GPT5.
Starting point is 00:08:01 and don't put all this pressure on yourself to release a God model. And so finally, okay, so I think there is something to be said for like, all right, they got the number out there, right? So we'll get GPT6 probably next year. We don't have to keep like, you know, playing up what the next model is going to look like and anticipating that it's going to be a big one. I think Thomas Wolf from Hugging Face in this FT article about whether AI is hitting a wall, put it really quick, put it really well.
Starting point is 00:08:28 He says, for GPT5, people expected to discover something totally. really new. And here we really, we didn't really have that, right? So basically here's the thing. They spent two years working on this new model. They built it up to be, you know, God level. It wasn't a new thing. I don't even know if it's better. I don't think it is better than 03. I'll just say one last thing and then turn it right back to you. Ethan Mollock, the Wharton professor who quote here often, had a very, very good post about this right after the release that I think got left, mostly overlooked. He said, the issue with GPT-5 in a nutshell is that unless you pay for a model switching,
Starting point is 00:09:08 and no, to use GPT-5 thinking or pro, when you ask GPT-5, you sometimes get the best available AI and sometimes get one of the worst AI's available, and it might even switch within a single conversation. So he's basically saying, like, at the high end, you might get this GPT-5 high, which I I mentioned thinks a lot, or you might get this GPT-5 minimal and you don't really know which one you're getting. So like you're not, if you're a regular user this technology, you might actually feel a downgrade happening when your queries get pushed off to this minimal thing versus like you knew you might have to wait a minute or two, but you're probably getting really good stuff from
Starting point is 00:09:50 03. And I think the fact that this doesn't feel like an upgrade from 03 is a very, very big problem. Okay, so on this idea of like, where is the intelligence, where's the value? I want to dig into that a little bit because to me and like for listeners two weeks ago, I was talking about how I work for a startup called writer now and we had launched something called Action Agent, which is very similar in scope that it's, there's a number of predefined tools just from a prompt knowing where to go and what to do with it. To me, the more I've used tools like this, the more I see that's where things need to go. And the way to think about it is, and agreed, this is a new type of intelligence,
Starting point is 00:10:33 even within the chat GPT ecosystem. So it's going to be rough at first, but it's where the world is going. Like, if you think about, you know, what did Apple promise us, like, with a prompt, even just looking up your flight info and then going and doing like a flight status check, that the reason they've not been able to roll that out yet, it seems simple, but the intelligence there is knowing which system to call what to do, what tool to call next, what action to take next. That type of intelligence is where all the promise of AI is, and no one has done yet at scale. And this is the first, this is like the beginning of this new way of intelligence versus within the model itself, it's going to do a bunch of thinking, it's going to go into its own training data.
Starting point is 00:11:23 it might do a little bit of web search and then give you a result. So this is just the beginning of this whole next phase. And so, of course, it makes sense that I guess I'm going to give Sam Altman a little credit here. The fact that they actually still roll this out, knowing it's not going to be mind-blowing, but it's setting the stage for where AI intelligence needs to go. And then maybe one day Siri will actually work and be able to check your flight status just by asking it. Give you a moment here to sort of try to make the case for why we don't need AI agents. So I'm going to try to blow a hole through this entire concept.
Starting point is 00:12:04 This is the hottest take of all time in this. You defend and I will be on offense. I have been using GPT5, as many people have, but my specific use case over the past two weeks has been very intensive. I've been using it all throughout this trip that I've been planning. including trying to figure out like the amount of altitude I should be increasing day by day as I went up the mountain in Nepal. And of course like, you know, didn't spend my entire time going up this mountain hook to technology, but you want to make sure that you're not going to die on the next stop. So when you're out like one of these tea houses, you connect to Wi-Fi and say,
Starting point is 00:12:48 all right, how far should I go the next day? And I felt like the AI was transforming from, and excuse me for using this term, but from a thought partner to an over-eager helper. And not a particularly useful one either. And we had talked about this idea that like GPT-5 just does stuff, which was the title of the Ethan Mollick piece after he had been using it. And GPT-5 is just trying to do things for you and I think that like we can see AI go in two directions from here or maybe it does it both simultaneously but to discard the thought partner type of use case for this over-eager helper you know agentic use case to me really seems misguided like I think you know some of the best ways to use AI is to help you round out concepts in your head or like to say I'm thinking these
Starting point is 00:13:46 things, what do you think about them, and then have it come back with like a pretty interesting and considered answer that you know might hallucinate sometimes, but ultimately helps you sort of expand your mental capacity because you're just able to think about these concepts with higher horsepower. And I think what GPT5 has done is it sort of, it's sort of deprioritized that use case for like trying to go do stuff for you. And like there are times where I really craved having a deeper conversation about the climb or whatever it might be. And instead, what I got back was just like this bot coming back over and over and over again of like, let me make a card for you to, you know, that shows your turnback times and stuff
Starting point is 00:14:31 like that. I'm like, I don't, you know, I don't need that. What I need something is something to be able to like really explore these concepts and thoughts. And I felt that that has been deprecated in this new model. Now maybe that's a selfish criticism. but but I'm going to just throw it out there and I don't understand why this agentic thing which may work really well in like the when is my flight and we want it what when's my flight
Starting point is 00:14:56 coming um for that to be the direction and the route and the rule for AI moving forward I just question whether that's the right move well no so so I don't disagree with you here I think these are two different actually I think that's like a good breakdown when do I want a thought partner, when do I need to do things? And they are very different use cases. And I think like Open AI's mistake here is they're trying to jam it all into one product. And again, that intelligence of which way, how should I interpret this? And I agree, like I had a bunch of those where it would let me make you like a presentation when I just wanted a quick answer. I think that's, it should be two separate products. I want to do stuff. I want to think and have a thought partner.
Starting point is 00:15:45 They're very different systems that are required for those. Or they need to nail the intelligence of which way to route it at the beginning. And I think to me, the biggest issue here is GPT5 is essentially like some whole new GPT1 in this way of intelligence. Like in this mode of intelligence we're talking about around taking actions, calling tools, doing stuff, versus I'm going to call my training data and maybe do a little bit of web search and I'll give you a nice text dancer or an image or something like that. Like in reality, but I'll also push back that like when web search was introduced to all of these tools, that is agentic. It decides to go search the web because it doesn't have the training anything in its training data to answer the
Starting point is 00:16:38 question. And we all have become so used to that even as you're trying not to die and asking and having chat GP, depending on chat GPT to keep you alive to understand your oxygen, your VMO2 Max or whatever it is. I'm not a hiker by any standard. I think like we've already, the agentic has already been happening quietly. Web search was the big one. WebSquare. scraping an operator, like computer takeover, and hasn't really worked well yet. But web search was always agentic, and it was this exact flow that we're talking about. So we're all using it. It's just that this isn't amazing yet.
Starting point is 00:17:20 This is the V1 of this new way of working. And they rushed it out, didn't explain it clearly, and conflated these two very different ways of working and thinking into the same tool. Right. And so my concern is that they are going, because these companies have a need, and this has always been the concern with raising the amount of money that they have raised, they have a need to replace and augment labor, right? And so that's where you get into these agentic use cases.
Starting point is 00:17:51 My concern is maybe that's not the most useful way that we can use AI. And we might end up in service of returning the money to investors, lose some of this like really useful capability that we've seen so far and let me push back on the web search part of it. Is it agentic yes but what is it agentic in service of? I think well maybe in some ways it is like you know I find me the opening hours of this like you know of like restaurants in New York City that these are these are my criteria so like in some ways there's that agentic stuff. But it's also like, these are ultimately, they are filters of knowledge. And the cool thing about it is they have not, maybe not all, but almost all of the knowledge
Starting point is 00:18:39 that's ever been written within them. And web search, it might be agentic, but it's also just like a way to update their mental models and update this filter with the most current information or stuff that's not in their training data. So yes, that's agentic, but it's also in service of, I think, uh, this use case that I've found very valuable. Now, as I'm saying this, I'm like, uh, maybe I am shortchanging the, uh, do stuff part of it a little bit. Um, but I, I am mourning a little bit, uh, the, the direction of, uh, I'm mourning a little bit the loss of, um, the direct, the old direction. And I find it especially interesting that opening, I added back the four O model, which you talked about, which was like very friendly.
Starting point is 00:19:28 It's maybe a little sycophantic, but not 03 because it thought too much. Or maybe not 03, which was the reasoning, the thinking model, has now been replaced by GPT5 thinking. And I wonder if they didn't add it back because it's just too expensive to run. And they can just run this version cheaper, which again gets to these big question we've been talking about of like, was this old direction just financially impossible for the industry to make work? Yeah, hold on. I think two separate issues there. I'll get into the financials of OpenAI and how much of this is done to actually meet some kind of valuation criteria, which it certainly has to be. But I think that first part, I think if we're going to break it down, it's thinking versus doing. And using it for thinking is it has been very good and they're kind of screwing that up. Actually, from like a user interface standpoint, I mean, they really are screwing this up. Because like everything keeps saying thinking, like the fact that it's maybe they should just change that word to doing and it'll start to at least be a little more realistic about what's happening. So so to me I think even more as we're talking, it these are two very separate things people need to do. The doing side of it, it is going to be.
Starting point is 00:20:49 That's how we get to the next like that's the way we actually realize some genuine value from all of this. I mean, whether it's like the labor conversation or it's just asking Siri to find your flight info and do something with it, like contact Delta and change my flight time or look up alternative flights, whatever it is. You know, like there's really, really simple things that should be better. And I think that's the way things, like, it has to go there. Otherwise, the entire industry implodes, but also I think it can work. this is just the first iteration of that, but they hyped it up too much. I think on the open AI financial side, I'll give Ed Zittron had written a good piece on, like, who I generally disagree with on like how this is financially driven a bit
Starting point is 00:21:44 because it's going to go try to go take a cheaper route. But it should in the end, like 03 should not be running when you're trying to rewrite this email for me. You know, like, uh, like different cases should require different models, different tools and getting that intelligence in place. If anything is ever going to be financially viable, it has to work that way anyways. So like, yes, is it financially driven? Sure. But it, but it needs to be. Like I won't hold that against them. Do I think like that this downplay of hype or this backlash could hurt them significantly? Sure. but I'll still give them the directionally strategically correct decision.
Starting point is 00:22:32 Okay, and I will, I will say strategically incorrect decision, and I think this might be a new debate that we have on the show. But there's also, there has been this argument. We should talk about this, that this has been so bad that all of a sudden everyone's running out and saying, yeah, let's stop talking about AGI anymore or, you know, where we are we might be in a bubble and that's that is what sam altman did say uh in his dinner that he had with some reporters uh he says this from cnbc quoting the verge he said altman said when when bubbles happen smart people get over excited about a kernel of truth are we in a phase
Starting point is 00:23:12 where investors as a whole are over excited about ai my opinion is yes is ai the most important thing to happen in a very long time my opinion is also yes uh and this is from the cnbc story his comments add to growing concern among experts and analysts that investment in AI is moving too fast. What do you make of this? I mean, it's kind of interesting to get reporters together in a dinner after the release of the somewhat bungled release of your most important model ever and say that we're in a bubble. Now, maybe the headlines made a little bit too much of it. But this idea that, like, there's a kernel of truth, even though he does say that AI is
Starting point is 00:23:59 going to be the most important thing that we're seeing right now. But a kernel of truth has led to, you know, some sort of bubble. I don't know. The implication is somewhat concerning. What do you think about it? Sam Altman knows how to get a headline better than anybody else. Like, I don't know. I mean, will I agree with, are we in a phase where investors as a whole are over-excited?
Starting point is 00:24:24 Is AI the most important thing to happen in a long time? Also, yes. Like, I'll agree with that. I feel this, I don't know, I'm going to, this one feels like him just kind of saying stuff knowing how to kind of get a rise out of reporters, get the headline, versus there's any genuine thought put into this statement. Well, I think that if you take, you know, maybe we'll, just take it as a data point, right? I don't think Sam got reporters together to try to, like,
Starting point is 00:24:55 let everybody know that AI is a bubble. I mean, he's currently, I think, just, just raised a huge amount of money, and he's probably trying to raise again. And by way, we have another headline that Anthropic is also raising. They're raising, Intox to raise now $10 billion. And the last report was a $5 billion raise at $170 billion valuation. So the valuations keep going. up. But now we're starting to see talk of, you know, maybe this is a bubble from Sam Altman. I think Mustafa Suleiman had a story this week saying, hey, let's not call AI conscious. And I really want to sit on this New York Times op-ed from Eric Schmidt and Selena Shue about why the U.S. needs to stop talking about AGI and why this is going to set the country back.
Starting point is 00:25:44 Because it is somewhat remarkable that Schmidt has gone from someone who, like, gave a TED talk, said, we're like, you know, we're underappreciating how poor an AI is to this like now pouring cold water on AGI. He says, this is his NJU's op-ed. He says, reaching artificial intelligence or AGI is now a singular aim of America's tech giants, which are investing tens of billions of dollars in a fevered race. It's uncertain how soon artificial and general intelligence can be achieved. We worry that Silicon Valley has grown so enamored with accomplishing this goal. That's alienating the general public and bypassing crucial opportunities to use technology, to use the technology that already exists. It's being solely fixated on this
Starting point is 00:26:25 objective. In being solely fixated on this objective, our nation risks falling behind China, which is far less concerned with creating AI powerful enough to surpass humans and much more focused on using the technology we have now. Just to, we're going to get into this argument of like, just use the tech we have now or focus on building AGI. But just a very interesting data point that Schmidt and you add in their story, they say in a recent survey of the association for the advancements of artificial intelligence, by the way, who's that? It's a, they call it an academic society that includes some of the most respected researchers
Starting point is 00:27:05 in the field. But this is interesting. More than three quarters of the 475 respondents said our current approaches were unlikely to lead to a break. breakthrough. While AI has continued to improve as the models get larger and ingest more data, there's concern that the exponential growth curve might falter. I mean, it's just interesting to see Schmidt saying this in the pages of the Times. And Thomas Wolfe from Hucking Face saying this to the F.T. And Sam Altman saying there's a bubble. And Mustafa Suleiman saying, you know what,
Starting point is 00:27:35 let's stop calling these things conscious. Everybody together is coming together and talking about this AI, you know, hitting a wall or reaching a curve, reaching the diminishing returns point of the curve as opposed to the exponential. And so I'm curious, Ronan, what you think about the context of where AI is heading overall, given that this is what we're hearing. Material security is transforming how companies protect their most critical cloud assets, like Google Workspace and Microsoft 365, with modern purpose-built security that actually works the way people do. The biggest cloud threats walk through third. three doors, email, identity, and data. Material was built from the ground up,
Starting point is 00:28:17 fraud day one for Google Workspace and Microsoft 365, not as a retrofit, so it closes all three. It provides continuous protection before, during, and after an attack. You detect problems early, contain them fast, and recover without chaos. Lean security teams scale through intelligent automation rather than adding headcount. Material blocks evasive fishing and impersonation. It protects sensitive content using built-in rules for personal health and financial information or rules you define. It also spots risky applications, unsafe settings, and sketchy verification or password reset attempts, all without slowing people down. So get the overview at material. security. That's material.com security. I mean, I could not be happier. Listeners can't
Starting point is 00:29:07 see how much I'm smiling right now, that Eric Schmidt's on team product, team build. It's not the model. Everyone is coming around right now saying, let's take the technology we have now and actually try to figure out how to build with it. I think, like, again, his whole point that, like, in China, people are focused on actually integrating AI into hospitals and farming and other areas of life and medical diagnostics. And, like, that's what, we need to be doing. That's what we have been saying for for a long time and the kind of overfocus on this one model to solve all of our problems. If it kind of feels like that's going away and I don't know that I'm excited by this, we can actually focus for a moment. So I read this story and I just
Starting point is 00:29:56 had Ranjan in my head because this is almost making the Ranjan case. You know, are you saying I didn't before and now only Eric Schmidt can make my case? I'm saying, I'm saying, I think, I think Schmidt did a better job. Making the product case. I know, I did not get my times up, Ed, but maybe soon if you're listening. But I do think that he's taking this ideological torch from you, Ranjan, and running with it. I mean, he listens, obviously, so. He does, he does.
Starting point is 00:30:28 Well, we might get him on the show. He might. Anyway, so he, listen to this. Let me just read it, because he's talking about it in juxtaposition with China. And I think I haven't heard it made, I haven't heard the case made, made this way yet. So he says, let's look at what's happening in China. The countries and scientists and policymakers aren't as AGI-pilled as their American counterparts. They're talking about a deep integration of AI with the real economy.
Starting point is 00:30:52 While some Silicon Valley technologists issued doomsday warnings about the grave threat of AI, Chinese companies are busy integrating it into everything from the super app we chat to hospitals, electric cars, and even home appliances. In rural villages, competitions among Chinese farmers have been held to improve AI tools for harvest. Alibaba's Quarkap recently became China's most downloaded AI assistant, in part because of its medical diagnostic capabilities. Last year's China started the AI Plus initiative, which aims to embed AI across sectors to raise productivity. This is interesting. It's basically very, you know, and there's a...
Starting point is 00:31:33 another stat here saying how like the best majority of folks in China have said that AI has had a positive impact on their lives whereas in the United States the best majority have not yet recent poll said 32% of Americans say they trust AI compared to 72% in China over three quarters of adults in China said that AI has profoundly changed their daily lives over the past three to five years that's double, that share is the highest globally and double that of Americans. I don't know, maybe it's team Eric and Ron John that have come together to sort of maybe make me believe in the product side of things over here. But, I mean, it is interesting because it's like maybe we, the AI industry has reached
Starting point is 00:32:25 this point where, you know, if AGI was realistic, then that would be a good strategy. But if it's not, you have to sort of like get going full speed as opposed to focusing on developing it. And I think that's the core message here. Yeah. And I would actually add one part to that. He brings up a really interesting point because we've talked for a long time around AI has a branding problem. And like instead of focusing on what it's actually doing, meanwhile, like everyone has or a large percentage of the population in the U.S. uses AI in their data. day-to-day life probably is deriving significant value, probably writing all their emails using
Starting point is 00:33:05 chat GPT. People are even, I mean, when we talk about AI, like having it alter your photo to look better, all that stuff is AI. So people are using it, but instead, I mean, the way like, just the term itself, you have kind of two groups of marketing. You have like when Doge was going through, everyone was like they're using AI to cut jobs and make decisions. just like that very negative connotation or the conversation around job displacement but then on the other hand you just have really bad marketing like remember the google gemini commercials matthew mcconaughey like it just like the industry as a whole has not communicated to people this is how your life has already changed because of AI and like and people aren't even
Starting point is 00:33:53 processing it or realizing it so i think that was a really telling stat if 75% of adults in are saying it has a positive effect, whereas in the U.S. it's only 32%. I think that's more around how the figureheads, the industry, the marketing has just been bad. Right. And I think you're only getting this op-ed. If Schmidt, who's close to this, who's what I think he was the first investor in Anthropic, he's been watching AI since he ran Google, if he didn't believe, if he came to the conclusion that we're nothing close, we're not close to AGI, then it is to to make this pivot. So I think that's an important context. And I think again, if you take that along with what Altman, Mustafa Suleiman, Thomas
Starting point is 00:34:39 Wolf is saying, you're like, oh, shoot, like this might be that wall. Now, again, I'm not, we talked earlier, like, were there going to be overreactions to GPT5? Yes. Or is the fact that, like, you know, is that AI is done, you know, reaction to GPT5, an overreaction. Yes, of course it is this is still very powerful technology even if it stops here only improves incrementally but it is interesting that they're all saying it now and i think that that you're you're totally right ron that like there's a there's a branding problem um and i think there's a building problem right this gets again back to your argument i love it which uh i'm coming towards it because um here's i think it's been under emphasized i'll put it that way and there's a lot of truth to
Starting point is 00:35:28 to the position you've taken. Here's again from this op-ed. It's paramount that most people outside Silicon Valley feel a beneficial impact of AI on their lives. AGI isn't a finish line. It's a process that involves humble, gradual, uneven diffusion of generations of less powerful AI across society. Instead of only asking, are we there yet? It's time we recognize AI is already a powerful agent of change. Applying and adapting the machine intelligence that's currently available will start a flywheel of more public enthusiasm for AI. And as the frontier advances, so should our uses of the technology. But being too fixated on artificial general intelligence risks distracting us from AI's
Starting point is 00:36:12 everyday impact we need to pursue both. God damn it. He wrote it a lot better than I've said it. I know. I think that the conversation here has been building. And you know what? I think it's a good point. And I've been on team model for a long time.
Starting point is 00:36:29 I've been on go build those better models and everything will take care of itself. Reading it in this way does lead me to believe that, yeah, I think this focus on GPD5 and focus on the next model has sort of put the U.S. at a disadvantage where it hasn't been building it into everything because it's just expecting a God model to come in and fix everything. And that doesn't seem to be happening. Yeah, I think the more we're talking today, I think this is a huge inflection point. Like GPT-5 was such a big idea that hovered over the entire industry for so long. And now it's out there and it's underwhelmed everybody. And now everyone's kind of recognized this because I definitely, there's a study that came out from MIT this week
Starting point is 00:37:20 where it said 95% of AI projects within companies have not. seen any kind of value. And I actually, I think this is near and dear to my heart now working in the enterprise AI space in all these kind of conversations. But really what I believe has been happening is over the last few years, that expectations misalignment has been a huge problem, that people are hearing everything can do these gigantic, massive things, and don't even start small and just start learning how to use it and integrate into their company. and just getting AI fluent. But I think also this ties back to, like, the Schmidt op-ed, the GPT-5 backlash, like, could this,
Starting point is 00:38:10 do you think this is a moment where everyone's going to take a breath realizing the last two to three years, call it two years of actual implementing AI, we've had the wrong approach and actually become a little more sensible and start, start it's, well, maybe we'll get another it's time to build an op-ed at the right time. I don't think so. This is why I don't think so because of the funding. Now, think about the amount of money that's gone into this, right? Like the industry needs AGI. They need this AI that can build these like super powerful tasks to be able to justify the valuations that they've gotten. And you're thinking about like the biggest VC rounds ever. These models need to improve more than they are because they
Starting point is 00:38:53 still get things wrong. They're still unpredictable. Is part of the reason why the 95% of projects within enterprise have failed? Is it part of it because of misaligned expectations? Yes. Is part of it because the technology isn't good enough? Yes. I think so. It's still, it's, it can't, we had a reader right in that says it's still, it's too unpredictable to be useful in many corporate settings. And I guess like you could say, yeah, well, you've been using it for sales and marketing, and it's just not good for that. But even in back office cases, like, if these things start, like, I don't know, doing jobs 95% right,
Starting point is 00:39:34 but getting 5% wrong, you're going to end up with, like, a very bad situation for the companies that have implemented it because there can be errors. Well, I would push back that in our current human-led infrastructure, 95% average, accuracy in a lot of spaces is not a is not actually great but but I think one thing like actually so I was reading and it was the the study actually it was by the amazingly named nanda group at MIT one of the other things that found though is it said the biggest problem the report found was not that the AI models weren't capable enough although execs tended to think that was a problem instead the researchers discovered a learning gap people in organizations simply did not understand how to use the AI tools properly or how to design workflows that could
Starting point is 00:40:27 capture the benefits of AI. So to me, this learning moment, people having misaligned expectations and approaching things the wrong way, to me, that has to change. And now it feels like maybe in the last two weeks, we're hitting an inflection point where it will. But here's a thing. who's which AI company is going to go to these enterprises and say so about that AGI thing we've been telling you about that. You know what? Forget about that for now. Let's go with these much less ambitious projects even though we've been telling you that like God AI is is coming around the corner. I think the advantage of China that China has aside from like some central planning that maybe forces companies to do this is that the models there are are open
Starting point is 00:41:18 source. Yes, they cost money to run, but there's typically cheap to run. And so you're looking at just like basically paying the inference costs. And that allows you to like put AI in your refrigerator or your briefcase or whatever it is, as opposed to here where like the companies are selling ambitious projects. I agree. The, the way the funding has been structured at the giant players is definitely kind of rubber meets the road moment's going to happen where they, going to have to decide. And even what you said right there, like in your refrigerator, you know, like the promise of using computer vision to see what kind of foods in there and when it's going to spoil. Like, this stuff should be happening. The technology is there. The models are
Starting point is 00:42:06 there. And it hasn't yet. And it's like felt like where we've just been all waiting. So, So again, I agree there's going to be some genuine issues from, like, how the funding structures of a lot of these companies, but again, it has to happen at some point. Like, you can't just keep promising it, unless you believe the technology, you're still team God model, and GPT6 will actually move us in that direction so we can all just kind of wait for our refrigerators to know what food's going to spoil and not actually just build it. V. Yeah, I mean, I'm not going to say that. I predict the GPD that opening I was going to call GPT5 AGI and clearly like that was wrong. So I'm not going to say GPT6 is going to be it. Although they opening I is talking about GPT6 having a lot more memory, which I think is like a really good way to take this technology to really get to know people and to remember things about their lives. That's good. But this, this, this, this MIT study is is, you know, fairly brutal. This is the report. This is from Tom's Hardware. The report said, only 5% of AI pilot programs achieve rapid revenue acceleration.
Starting point is 00:43:17 The vast majority stall and deliver little to no measurable impact on profit and loss. The fightings are based on 150 interviews, a survey of 350 employees, and analysis of 300 public deployments of AI. And that these 95% do not hit their target performance. But yeah, because generic AI tools like chat GPT do not adapt to the workflows that have already been established in the corporate environment. Yeah, this is going to be an issue. No, I think for me, like this one hit home hard again. Like this is my day-to-day life now learning and talking about and working on enterprise AI. I mean, it's real. It's definitely
Starting point is 00:44:00 real. The way a lot of people approach these AI pilot programs, again, was assuming everything will just work perfectly. I don't have to like fix my, existing broken workflows and I'm just going to layer chat GPT on top of it and everything's just going to work perfectly and it's going to take no effort. I think that this has been another big area as people approach it as typical software deployment where it's a very, very different thing. That's something I'm probably going to write about soon, but like it's just so real that I think the way the entire industry, most organizations have been thinking and approaching it over the last two years, we are really hitting an inflection point now. And GPT-5 might be that canary in the coal mine, but I think it has to change if we're going to let our refrigerators know what our food's going to spoil.
Starting point is 00:44:58 And you're not going to die climbing on the Nepalese mountains. Well, I'm still alive with the help of. mostly folks at tea houses, and I didn't follow Chad Chippy T's advice. I went a little bit higher than I should have, and things were fine. A little ibuprofen goes a long way from your altitude sickness, headache. But yeah, just to go back again to this why the product matters, why the model matters, yes, it would be good if like, you know, this sort of rhetoric around AGI started going away. and you know companies started putting this stuff into place however like one way you can solve a lot of these problems again will be better models but the question is is this technology fundamentally unable
Starting point is 00:45:44 to do it i don't know okay but before we end we definitely should talk about one place that AI is making its way into the workflow or apparently might be according to this new report from rest of world and that is only fans this is from rest of world a hidden network hand handles chats for only fan stars. AI could soon take over. That's from the story. Artificial chat bots are starting to take over from low-wage workers known as chatters who impersonate only fan stars in direct messaging with fans. The adult websites creators rely on these remote operators to flirt with fans, earn tips, and sell images and videos. Chatters in the Philippines a hub for this work told the rest
Starting point is 00:46:28 of world that rising sales quotas have made their work more stressful. One set his company plans to replace the worst performers with AI. The chatterers believe that once AI fully masters sales, their jobs could be automated, but for now, the bots cannot impersonate human quirks fully, they said. These people are using keyboard smashes and intentional misspells and Gen Z slings.
Starting point is 00:46:55 Gen Z sling and AI can't do that yet. Well, Roger, maybe this is it. This is if you've got to start with product, maybe it is the only fans chat by the way it's it wasn't it always obvious that the people on on only fans are we're definitely not speaking to the models themselves isn't this always like on the internet no one knows you're a dog situation but now it may be going to AI so yeah maybe there's hope for the product side I don't know what do you think this is it's the product it's it's attainable use case leveraging the technology as it stands today to
Starting point is 00:47:31 to deliver value in a concrete fashion. It's there. It's real. Apparently what happens is the, when chats come in, these groups of people often in the Philippines are directed by bosses on Discord to like go out and chat and they monitor their activity. And, you know, one of them, the story has one of them freezing because there was this like emotional message
Starting point is 00:47:57 and the supervisor got mad and they know that AI will, you know, won't have that that issue um i mean i don't know if the technology is good enough yet uh to replace these human chatters but i hate saying i hate saying this one if you are an only fans like uh chatter the fake chat person that's the job that's going to get displaced by ai out of all of them I disagreed with Dari Amadeus 50% of white collar workers or whatever it was but
Starting point is 00:48:34 I'm gonna have to go with this one probably now what if it's more than the chatters what if it's the only fan models themselves it's gonna be I mean because here this is this is another part of the story automation and only fans is now moving beyond chat using AI management companies that represent models
Starting point is 00:48:52 can generate photos of them in poses requested by subscribers without any human involvement whatsoever. AI images generated by one of these companies using a stable diffusion reviewed by the rest of the world were so realistic they could not be distinguished from photographs. It's funny because the whole kind of like narrative or meme around like that was written by AI. That's an image generated by AI and it's obvious to all of us. I'm convinced like a lot of content out there is already quietly generated by AI. Like you had, those kind of big splashy PR-driven efforts.
Starting point is 00:49:30 But in reality, behind the scenes, because those are the companies that don't want to advertise it because like the Sheeans of the world or whoever else, like, if you look at their product pages, does not look too natural. So, yeah, I think it's already there. Let me put it this way. We talked at the beginning of this episode that there's like two main uses for artificial intelligence. one is this thought partner thing one is this agent thing kind of left out the other one which is this sort of therapist slash companion I thought you're going to say thinking versus doing
Starting point is 00:50:08 and I didn't want to go there so oh god yes because that's where we started let's move on from this idea let's go with the talk about yeah there's thinking doing and then there's there's friend or companion and it's sort of interesting that like with the agent and stuff, you move away from that. And we also, like, there's this HBR article that we've talked about that's circulating out there that says, like, the number one use case for AI is companionship and therapy. And, you know, I don't know. You don't really need AGI for that.
Starting point is 00:50:42 No, though. Just need better memory. And by the way, that's what they're building towards with GPT6. Memory? This is where this is, it's all just going to head this way. Your agent dream is going to die, man. I know. I know.
Starting point is 00:50:53 My dream of actually. Your agent dream, my thought, partner dream, both going away. It's just going to be AI friend and lover. Yep. Let's just get ready for it. It's where it's going. Are you excited for this future? And somehow Open AI will actually realize its valuation with just companionship.
Starting point is 00:51:13 I mean, maybe that's not an unreasonable thing. Like getting people like the elasticity of demand and just jacking up the price, I think. if it's companionship is probably a lot more valuable than what's my oxygen maximum on a while hiking the Nepalese mountains. I think you're right. I think that's where it's going. And then we'll merge with the AI and I have a companion slash AI lover at all times in our head.
Starting point is 00:51:47 And all war will end and peace will come in our time. and people will be kind to each other. And that will be true superintelligence is the understanding that we're all one as a species and as anything carbon-based. The real AGI was love the whole time. Exactly. I think it's time to go to that.
Starting point is 00:52:12 I'm glad we found a kumbaya moment here at the end. All right, man. Thank you so much for coming on. Great speaking with you as all. all right see you next week all right everybody speaking of merging with AI we have two brain computer interfaces coming up the next brain computer interface episodes coming up the next two Wednesdays next Wednesday we'll speak with journalist Sally Adi about the state of the brain computer interface and the following week we have the leadership
Starting point is 00:52:41 of precision neuroscience coming on to talk about their BCI Ranjan and I of course will be breaking down the news each Friday so stay tuned as we cover next week headlines next Friday. And thank you all for listening. We'll see you next time on Big Technology Podcast.
