Big Technology Podcast - Boom Times For ChatGPT, OpenAI’s Deep Research, AI Super Bowl

Episode Date: February 7, 2025

Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover 1) New data showing ChatGPT's impressive growth since mid-2024 2) Was ChatGPT voice mode responsible? 3) How... important ChatGPT's growth is for OpenAI 4) How seriously DeepSeek challenged ChatGPT's traffic numbers 5) Does brand matter or are bots interchangeable? 6) OpenAI does the Reddit AMA 7) Experimenting with OpenAI's Deep Research 8) Why AI reasoning methods contain so much promise 9) New Gemini releases 10) Weird Gemini naming conventions 11) Big Technology in New York Times' Klarna story 12) Are the AI Super Bowl ads a good idea? --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here’s 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Transcript
Starting point is 00:00:00 New ChatGPT growth numbers come in, OpenAI's built a pretty good research assistant, and the Super Bowl fills up with AI ads. We'll cover all that on a Big Technology Podcast Friday edition right after this. Welcome to Big Technology Podcast Friday edition, where we break down the news in our cool-headed and nuanced format. We have a major week of news to cover and some of our own to break, and we're joined, as always on Friday, in studio, live from Spotify headquarters, by Ranjan Roy of Margins. Ranjan, great to see you in person. Finally, welcome back to the show. Big Technology Podcast Friday edition is all cleaned up today. Alex and I are here at Spotify Studios. We're sounding good. Usually we're both sitting in New York or in some strange locale, having a conversation through the computer screen. But today, we talk in person. And we love to cover the news every week. This week we're going to break some news, or at least share some new data on ChatGPT that I've gotten from Similarweb, which shows ChatGPT's really interesting growth story.
Starting point is 00:00:58 So we're going to start there. And then, of course, we're going to cover Deep Research, which both you and I have spent $200 to try. And then, of course, it's the Super Bowl this weekend. So we're going to talk about why these companies are spending money on Super Bowl ads and not on improving foundational models. And I have a feeling I know where both of our perspectives are going to be on this one, although we might be more aligned than usual. All right. Here's the data from Similarweb. So for quite some time, and I even wrote a story about this last year, ChatGPT
Starting point is 00:01:28 had been flatlining. The growth had just completely stopped. So you see a very, very quick run-up to 100 million monthly users or on the chart that we're looking at now from similar web. They're measuring web traffic, so about 2 billion visits per month. And it flatlines. And it is basically either down or just barely touching where it was in early 2023, so four or five months after chat chiptis released and then there's an inflection point and i'm pretty sure the inflection point is when sam altman tweeted her because the moment open a i releases uh or not even releases announces the fact that they have these superior voice uh chat type of capabilities where you could talk you could interrupt it feels live all of a sudden interest in chat chpt skyrockets and
Starting point is 00:02:20 we can see in the chart that we're looking at, and for those at home, it's just an inflection-point moment where Sam Altman tweets "her," and it goes from 2 billion visits per month to basically 4 billion. And that's when we start to see OpenAI announce that they've gone from 100 million users to 300 million users. So, Ranjan, I'm curious what you think about the boom times for ChatGPT. Mostly just, how important is this for OpenAI, that they've actually found something that's made their chatbot take off? I think this reflects that OpenAI and ChatGPT is the Kleenex or Xerox, the household name, of any kind of generative AI. And the numbers, again, we heard 100 million to 300 million, but seeing this from a third party is actually pretty impressive.
Starting point is 00:03:09 To see it from February or April 2024 to late 2024 almost triple in terms of traffic is incredible. But it makes sense. They're the household name. Every non-core tech person I talk to does not say, does never talk about Claude, has no longer talks about Bing. There's a brief moment they might have been and is only talking about chat GPT. So I think it's both good for OpenAI, but it's also good for generative AI in general. It shows it's becoming more of a regular thing.
Starting point is 00:03:40 So my theory is that this whole brouhaha was Scarlett Johansson when Sam tweeted her and people were talking with Open AI or they thought they could. By the way, they did release it, but just months later generated way more interest in using chat TPT. Now, there's been so many other releases they've done, better models, they incorporated Dahlion, which is image generation. So that might have done part of it. They've also, like, they stopped hallucinating. The responses are definitely better.
Starting point is 00:04:06 But I'm curious. I mean, it's really, really fascinating that Chachapiti just stagnated for almost a year and then picked up. So I'm saying it's the Scarlett Johansson thing. What's your perspective? That's an interesting theory. I'm going to give you, I'll give you that, but I'm still going to disagree. I don't think it's Scarlett Johansson here.
Starting point is 00:04:25 I think this is, again, this is reflective of if I think in throughout 2023, no one outside of tech talked about generative AI. 2024, it became a thing. We've talked about this. That was when the hype cycle kicked in in high gear. That's when everyone started thinking about it. That's when everyone started talking about it. It's every single headline. And chat GPT is the first place people will go.
Starting point is 00:04:50 It literally, it's shorthand for everyone I know for AI right now. So that makes sense. It reflects the industry, not just open AI. One of the thing that's interesting looking at these numbers is just how unevenly distributed the gains in AI have been. So if we're looking at our similar web numbers, again, this is web visits. Bing had $1.5 billion per month in February 2024. It had all of $1.85 billion per month in October 2024. You look at ChatGPT, starts with $1.6 billion, and now it has $3.7 billion per month. So it's left Bing in the dust. And by God, I mean, the rest that you mentioned, Claude, it doesn't even factor. There is no consumer adoption, basically, for Claude.
Starting point is 00:05:35 Question here. Is Bing.com in the data, the search engine as well? Or is it? That's the search engine. Okay. So Chatsyptia has surpassed the search engine. And the search engine really hasn't gotten much of a bump, even though it's delivering so much of the same services. So you're right, it really is the brand that makes.
Starting point is 00:05:54 the biggest difference here. Actually, let's take a moment here to pour one out for Bing. Because remember, I think in 2023, when we would talk, we were Bing boys. Remember, like, Bing was on par with ChatGPT as kind of the face of whatever was going to happen in generative AI. I remember people having, like, just the weirdest, wildest conversations with Bing. No one is doing that today. No one is stress testing Bing.
Starting point is 00:06:20 Microsoft, they just kind of, I guess they went all in on co-pilot and enter. surprise, but Bing consumer, it was a good run. It was a good run, but we tried. We do have cameras with us today, so allow me to just quickly address the audience. Yes, we were Bing Boys, and we apologize for that. And if you're just joining us today or recently, let's wipe that out of our memory, and we're going to pick up as if that never happened. I'm a proud former Bing boy. I'm okay with it. Honestly. Everyone goes through their Bing phase at some point, right? Well, look, you've got to live it out. Bing was at its best when it was trying to steal a Porter's Wives. Once they neutered that capability, it was toast. I mean, look at what
Starting point is 00:06:58 happens. It's really disappointing and a disaster. Yeah. Sorry, Bing, but you're right. To me, oh, man, it almost makes me question my normalcy because I'm on perplexity all day. I'm looking here, Claude, these are the places I'm spending a lot of my day, and no one else is. No one else is. Maybe we're just ahead of the curve. Hopefully. I like to think that sometimes. Me too. But here, look, this is a another thing that we think about coming out of last week, where we talked about how DeepSeek came out. It's about as performance as OpenAI's reasoning model. It's much cheaper, and it shows you the full chain of thought. And, well, actually, we'll get into that in a second. But it's about as performance, and it's much cheaper. And we talked about how
Starting point is 00:07:40 models don't matter. And if you're looking for the optimism about Open AI, is that they have a runaway success as a product in ChatchipT, and the numbers just really push it forward. Yeah, no, no, I think that's correct. And we've talked. And we've talked. To me, still, Open AI's greatest trick in the world, and we've talked about this before, is that in the UI, the way it kind of like let the text stream out to you, when it didn't need to, if you ever call via API, it just gives you a block response, made people feel like this was something magical and it was thinking. Open AI has always been, and we're going to get into deep research, operator is not a good product,
Starting point is 00:08:21 but it's a mesmerizing product. It's a beautiful product. It's just not very good. So they still have a strong team. And now they had a product, Kevin Whale from Instagram and Artifact briefly. Like they're playing the right game in terms of product, I think. I think. Financially, we can discuss separately.
Starting point is 00:08:42 Last week we also looked at DeepSeek's performance and we said, oh, this is bad because they've commoditized open AI's model. But for the data that I got from a similar web, shows another story, which is maybe even more concerning for open AI. So we all saw DeepSeek go to the top of the App Store charts. And for me, it was like, well, the App Store charts take into account hotness. Like, how hot is your app? If your app is super hot, then you're going to go to the top of the charts. But then you look at the traffic. And it's not only that people were downloading, it's people were using Deepseek a lot, a lot, a lot. And this is again from similar web. You see last week, so January 28th, chat chip BT had 139.3 web and mobile visits.
Starting point is 00:09:31 DeepSeek had 49 million. So it cut about like further than any other company has been able to cut into the lead of open AI and it had about a third of the traffic that chat chip BT took years to build overnight. And I think part of this is just because the product, the deep seek product, if you go to deepseek.com and I can't recommend it. because you never know it's going to happen to your data there. But if you go there, you'll see the chat bot right out its full chain of thought, and it's
Starting point is 00:10:02 mesmerizing. You see the reasoning work in a way that you only get bullet points with OpenAI. And of course, there was a lot of media interests which drove this. But for me to see these numbers and to see that it basically built a third of what ChatGPT has, again, taken years to do, that to me might have been the most concerning thing for OpenAI. that all of a sudden there's a challenger that might make ChatGAPT not that verb or noun or whatever you want to call it.
Starting point is 00:10:29 Yeah, but I think the numbers, the more interesting part of that to me is, again, January 28th, 49 million visits versus 139 for OpenAI. That reflects just kind of just how quickly this can rise and fall because that had to be driven by the media hype, curiosity. It also kind of makes me wonder, still how niche is all this behavior, because I don't think tech normal or normal people
Starting point is 00:10:57 are going to DeepSeek. It was all of us going and spending time and testing it against OpenAI. And to get those kind of numbers for that quick, like that bounce, I think still shows that this stuff's ephemeral and like it can, people can go anywhere. People can have a bunch of bookmarks up. They'll switch to the next thing because if Deep Seek came out of nowhere and got to those numbers quickly, and we'll see where it is. a month or two now, I think it shows that no one has a competitive stronghold or any kind of
Starting point is 00:11:27 lock in on this stuff, other than us now paying $200 for Open AI, a chat GPT Pro, which we'll get into. Well, 49 million people in a day, or 49 million visits in a day to a website. That's not just the nerds. That is some part of the general population. If it, okay, if it is just the nerds, then what? the entire usage of chat GPT is nerds times three? That's embarrassing.
Starting point is 00:11:53 That's what worries me. No, no. When I look at this number, I cannot believe any non will go with nerd, but tech forward person was going to deep seek. So that actually, the 139 million visitors to chat GPT, what percentage that is non-early adopters, that does make the kind of addressable market of this a little more questionable. Back to OpenAI. If we were worried about their models commoditizing last week, if their chatbot can commoditize, like you said, you could just go to a different website.
Starting point is 00:12:27 And next thing you know, chat chit is unseated. Shouldn't there be alarm bells going on in Open AI headquarters right now because of what we're seeing? Of course. I think definitely. To me, Gemini is the most interesting competitor in this because or even, I mean, Microsoft, I guess, is it co-pilot now? Or what's the generative chat? It'll always just be Bing to me. It'll just be Bing to me as well.
Starting point is 00:12:51 I think. Once a flame, always a flame. Because where people already are and just injecting the chatbot layer is always going to be easier. And the distribution site, actually, sorry, we haven't even mentioned meta-AI in all of this. And their numbers, I'm sure, they always have, they can always get when you have 3 billion users some dramatic headline number. But having the chapbutt integrated into where people already are is always going to be a natural advantage. And I think this is another case where we have not started to see that level of utilization for Gemini. But Open AI, yes, alarm bells ringing very loudly, I think, should be the case.
Starting point is 00:13:35 And so as the siren went off, the Open AI team of merry band of characters made their way to Reddit to answer questions from the town. It really feels like that's what happens. They all did a Reddit AMA. And they gave some very interesting answers. So it is really clear that DeepSeek put them on their heels, and they said as much in this AMA that included Sam Altman, CEO of OpenAI. Kevin Wheel had a product. Somebody asked, this is, I think we should just read the Reddit usernames because they're fun to say. I always love, I gave a wedding speech once where I found marital advice from Reddit and the best
Starting point is 00:14:12 part of it was reading the entire Reddit usernames out loud to the entire audience. You're still friends with these people? Yeah, one of my closest friends. Okay, see? So folks, what we're about to do is just going to bring us closer. Yeah. So let's go to our good friend, Lulll's inventor. Lowe's inventor says
Starting point is 00:14:27 to the Open AI team, would you consider releasing some model weights and publishing some research? And in response comes a remarkable statement from Sam Altman. Yes, we are discussing. I personally think we have been on the wrong side of history here and need to figure out a different open source strategy.
Starting point is 00:14:44 Not everyone at OpenAI shares this view, and it's also not our current highest priority. We are the wrong side of history on open source coming from the CEO of OpenAI. To me, it's just, it really pushes the point home that whatever happened last week and everybody's been out there trying to sort of bring it down and say this isn't such a big deal.
Starting point is 00:15:05 It put the whole proprietary model industry, the Open AIs and the anthropics of the world on their back feet, understanding that they are about to be passed by open source, and they have to embrace it. Curious how you read this statement. I'm glad we're on video today, so viewers can see me just shaking my head, because this is where, in terms of a company with this valuation, it sometimes still kind of, it amuses me to know that whatever corporate communications people would normally be around are not, because this is just Sam Altman, I feel, just writing out loud. And just at this exact moment, that thought went through his head that, you know, maybe open source, that's the topic du jour. So let's say something big and controversial. But then even qualifying himself saying it's not a top priority.
Starting point is 00:15:53 So that one felt really to me like just kind of stir the pot a little bit. But I don't think that – I don't read too far into that because it can't be their strategy. It literally cannot. So they did financially. Like they cannot – if they open source their modeling, and try to win only on the product in the UI alone, they will never be, what is it, wait, what was the Mossa? $300 billion valuation that they're aiming for.
Starting point is 00:16:18 Yeah, yeah, yeah, $300 billion. You're not going to be a $300 billion company when you're open source like that. Maybe you will. Maybe it's the product that matters and you open source your model and you incorporate the best of open source and you grow that way. Which I do argue regularly, but in this case, the way they have built themselves out, I don't think they will win on that. I happen to like this about Altman. Get on Reddit. leave the comms people in the conference room and just say what you feel even if it's not 100% true and then people like us can sort of break it down and explain to the people at home what we think is is real okay thank you sam thank you and your merry band of open-eyed gentleman singing on to reddit right the nobles of the court bounce their way down to reddit so okay back on the rails so here is from theory sudden 5996 they
Starting point is 00:17:08 They say, let's address this week's elephant deep seek. You know, what do you think about it? Sam Altman says it's a very good model. We will produce better models, but we will maintain less of a lead than we did in previous years. I mean, that to me is the biggest confirmation that what Deepseek did, really, even the playing field and Sam saying it himself. Yeah, no, I mean, on one hand, recognizing and trying, the rational point of view would be they see Deepseek, they see R1, and they're just recognizing that this is the state of the industry,
Starting point is 00:17:38 Potentially, we need to go open source. Potentially, we will not have as dramatic a lead over our competitors. But we're certainly going to get into. Then there's GPT5 comments down the road. And they obviously are still trying to sell this idea that GPT5, whatever it is. And it's not going to be GPT 50. It's just going to be GPT5 is going to be this earth-shattering AGI, whatever it is.
Starting point is 00:18:05 They still have to sell that idea. And they're trying to. I see you've been deep in the Reddit AMA, which makes my heart warm. It really does make me happy. Where else can you get Sam Altman unfiltered? Actually, everywhere. Okay, let's talk about chain of thought. So one of the most interesting things that DeepSeek will do when you go to deepseek.com
Starting point is 00:18:26 is it will show you exactly the way it's thinking through a reasoning problem when you use its R1 model. And OpenAI just give you some bullet points. I've had so much fun trying to work through this chain of thought really seeing how the model thinks and I think whether the model's actually thinking or just computing is like a pretty fun debate that we can have and what is thinking
Starting point is 00:18:46 maybe we'll come to that at another point but the redditors are asking can we please see all the thinking tokens here's Sam Altman yeah we're going to show much more helpful and detailed version of this soon credit to R1 for updating us so okay so here they are literally
Starting point is 00:19:03 admitting out loud that they've been pushed by Deepseek. And Kevin Wheel, the product head says, we're working on a bunch of these to show a bunch more than we show today. The problem is, the more we show, the more that we can get distilled. They're obviously still smarting that in their minds, Deepseek has distilled some of their models and put it into their own. I think this is great for the industry. I think this is really good. Okay. So I had said one of the greatest tricks, UI tricks of all time was Open AI and the text streaming to make you feel like the computer is thinking. I think DeepSeek has taken the next greatest UI trick in terms of
Starting point is 00:19:42 showing the chain of thought processing. And again, as you said, we can maybe save the what is thinking for an ayahuasca retreat or something like that down the road. But I think live on air, live on air, of course. But without getting too philosophical, it's again, kind of a part trick in the sense that these models always go through some logical iterations to get to that output, there's always, and you said in the end it's actually just math. So the text representation of that you're seeing is still some kind of party trick here, let's say. It's still a computation that's happening. But deep seek doing that. And I've seen a writer.com, which is an enterprise generative AI tool that we use, like they had sub questions and it showed you the different types of questions that it was asking. to get to the final answer. So other tools and models have done this. DeepSeek brought this to the general population, and it's brilliant because it makes people
Starting point is 00:20:38 even more attached to these type of tools. Like it makes them really think that there's thinking, which makes them more usable. But in reality, like, I don't know if you tried this, when you saw within that chain of thought something that didn't quite work in the way you wanted to, you can't just tweak that step. You're starting from scratch again.
Starting point is 00:20:59 So, yes, I'm sure there's like some Twitter threat about how to prompt engineer your way out of chain of thought reasoning. But in reality, it doesn't really give you that much help. Yeah, but the chain of thought is really very cogent and so fun to read through. And you see the model be like, nah, maybe that doesn't work. Like, especially one of the cool things about deep seek is just like it's very casual, the language and not so formal. So whatever they did to make that work is pretty impressive. All right, let's talk about Stargate. So Theory Sutton asked how important is the success of Stargate to open AI's future?
Starting point is 00:21:29 future, which is, again, for listeners, it's the 500 billion attempted infrastructure built by Open AI, more likely... Announced 500 billion. It'll be likely, more likely tens of billions, which is still impressive. Kevin Wheel says, yes, everything we've seen says that the more compute we have, the better the model we can build, and the more valuable the products we can make. We're now scaling models on two dimensions at once, basically the traditional LLM and the reasoning models and both take compute, so it is serving products for hundreds of millions of users.
Starting point is 00:22:01 And as we move to more agentic products that are doing work for you continuously, that takes compute. So think of Stargate as our factory for turning power GPUs into awesome stuff for you. Such a product guy response. Such a product. But it makes sense. But it makes sense. Well, he spent years at meta.
Starting point is 00:22:20 He's been on this show. So that's just how he operates. Yeah, yeah, yeah. Even on your episode last Wednesday with the VP of Omniverse and simulation from Nvidia, it's interesting to me that how kind of like dogmatic people are about more compute means, more intelligence, better outputs, better products. And I mean, Kevin here is going down that same path. More compute is better. And I like that people are starting to recognize. And it's kind of a nice way of putting it two dimensions.
Starting point is 00:22:48 There's going to be the like raw compute in terms of getting better output. but also coming up with new techniques and ways to actually drive that output. But in reality, I still, I think Deep Seek shows has shown us. And the number is not $5 or $6 million to actually build and train the whole model, but the actual training part of it was $6 million. We can probably take that at some face value, that the future does not only mean more compute means better products. And I think the industry, at least a lot of people,
Starting point is 00:23:21 but the people with the most vested interests are still living by that rule right now. Yeah, well, I'm a believer. I think it's right. Once I try to tell you a quick story, once I try to get Kevin Wheel to leak me some information from Facebook, I've never been shut down by somebody so quickly in my entire life. So that's just, that gives you some context, disresponse. All right. Let's talk about GPT5.
Starting point is 00:23:45 Ron John, I feel like this brings you great joy. So do you want to take this one away? So, Reddit user Concheria had asked... What do you think that means, concheria? Conk, like the shell, and then turning it into a name. They have a bunny user name. They have a bunny ninja avatar, whatever you will incorporate that into. Meaning, if you're listening, Concharia, concharia, please let us know the etymology of your Reddit username.
Starting point is 00:24:12 Now I'm starting to feel weird reading these names because I'm like, I'm sure we'll find out that it's some dirty term after this. Anyway, let's just read it. We're cancel. But so they ask, will there be an update to advance voice mode? Is this a focus for a potential GPT-5-0? What's the rough timeline for GPT-5-0? I did like that they just kind of by default called it 5-0, showing how confusing. And there's been a lot of kind of like almost hilarious aggregations of what the series of model names has been from OpenAI.
Starting point is 00:24:43 And I think this shows how ridiculous it is. So thank you. Sam responds. And he says, updates to advance voice mode coming, I think we'll just call it GPT5, not GPT5O, don't have a timeline yet. So at least one, they're streamlining the way they're marketing this to GPT5, which I think is a good thing. He doesn't, at least he didn't say AGI. I'll give him credit because actually they need to announce AGI before we get to GPT5. So I think that's why that did not make its way into there.
Starting point is 00:25:14 But I don't know. There was a long period of time where for Open AI to succeed, they had to get to GPT5, whatever that would be. And I think they've actually, to their credit, gotten to a point where that's not necessary anymore. Like the battle of the next year or two could just be in the operator and deep research and whatever other product, which makes me happier than anyone that people are actually competing on product now. But I think it shows that the fact that he's a bit cavalier about this after, what was that tweet about like Night Sky or something like that? Oh, yeah. So Sam wrote this like really weird a crypto, like, cryptic. Crypto, Lord Almighty.
Starting point is 00:26:00 Cryptic poem that, you know, made us think that something big is coming. But I think he was just writing a poem. Or maybe he had Chad TPT write him a poem and he was sharing it with the rest of us. But so whatever Sam was trying to communicate in the past. or at least kind of allude to. Now I kind of like it. That is just no timeline. We'll call it GPT-5, but let's talk about other things.
Starting point is 00:26:22 Yeah. Again, I doubt we're seeing GPT-5 this year, or maybe ever. It'll just be versions of- I think we see GPT-5 this year. Oh, yeah. End of the year, I think if they don't have any killer runaway products, they kind of have to. They have to release something.
Starting point is 00:26:41 And again, like, whatever 4-0 became, 4-0 mini, whatever RASP, they could have just called one of these GPT-5 and tried to, like, build some hype around it, and we'd all go along with it. So at a certain point, if none of those products that they are releasing, and we are both paying $200 a month right now for these new products, so maybe they'll be okay and they don't need to. But if the pressure comes, I think they have to release something. Okay, so you've mentioned multiple times that we were paying $200 a month. I mean, when I spend $200 a month on SaaS. We're talking about it. I'm talking about it. We did it so we could tell you, everybody at home.
Starting point is 00:27:17 For you, for our you. What Open AI is chat chiptipti pro is all about. And so we will skip our plan segment on chat chipiti search and tell you about our experience is giving Open AI. So much money to use chat chip PT. So Open AI now allows you to spend $200 for a few things, unlimited. use of chat chipt their AI agent operator that will go use your browser and do tasks for you and then something that just came out this week which we tease in the open a new chat chiptt agent called deep research by the way amazingly they decided to take the exact name for the similar
Starting point is 00:28:01 product that google has we'll cover that in a bit but i found that fairly shameless and wrong let me read the story. OpenAI is announcing a new AI agent designed to help people conduct in-depth complex research using chat GPT, the company's AI-powered chatbot platform. This is from TechCrunch. I love how reporters still have to write that ChatGPT is the company's AI-powered chatbot platform. Just in case you didn't know, TechCrunch. You're writing to a tech audience. What are you doing? Appropriately enough, the bot is called Deep Research. Open AI said in a blog post published Sunday that the new capability was designed for people who do intensive knowledge work in areas like finance, science, policy, and engineering, and need thorough, precise, and reliable research.
Starting point is 00:28:47 It could also be useful. The company added for anyone making purchases that typically require careful research like cars, appliances, and furniture. We have both attempted this. It is, in my mind, the best research agent that you could potentially use that you could use right now. and Ron John, I know you've been deep in the weeds, so I'm curious what you've been using it for. Last week, you said operator was interesting, not worth the money. Is deep research worth the money? Yes. I will say, yep, yep.
Starting point is 00:29:17 It is, it's fantastic. It is incredible. And I said last week, operator is mesmerizing and useless. Deep research is fantastic. So basically, like market research oriented. questions, asking what are e-commerce trends within a specific category? Look through Reddit. Look through different research reports. Look based on geographies. Even asking an initial question, it will break down and ask you back good questions as though you're talking to an actual research analyst.
Starting point is 00:29:52 And then it will provide you an incredibly well-sourced number of bullet points, paragraphs, with hyperlinks embedded. Like, it does an incredible job with this, and it makes smart arguments the way you would expect from, I don't want to say a PhD-level person, because I don't even know exactly what that would mean in terms of intelligence. But overall, this to me was huge. And it did a good job. My kind of litmus test on all this is, I think there's a lot of generative AI where products come out, and people, rather than looking at what is available today, talk about what it could be in the future. Operator definitely fell into that category. This, on day one, on day zero, actually, delivered what it promised. And, I mean, honestly, you have to figure, if you're in any strategy-oriented role, any business-oriented role, research-oriented role, this becomes incredibly valuable. Yeah, I've used it for a number of interesting things. I was on CNBC Tuesday to talk about Google earnings.
Starting point is 00:30:56 So I asked it to give me an entire, like, prep document about the state of Google. It was, yeah, so this is the cool thing about it. It searches the internet. And if you ask it, it will give you current information. And so it, like, pulled out, like, the projected ad spend. And right now it just has text. But over time, OpenAI anticipates it'll be able to put charts in there, which I think will be fascinating.
Starting point is 00:31:20 And I thought, wow, like, I won't prep for CNBC without this again. It is really, really good. I think they're just going to end with, I won't prep for CNBC. Yeah, no, I do my prep. I work very hard on that and on this. And I also had it give me a prep for the podcast today. And so I actually took last week's prep document. So for folks, we spend the week just kind of dropping stuff in a Google Doc that we find interesting.
Starting point is 00:31:46 And now on our Discord also, which has been quite fun. And I just downloaded the prep doc from last week. And I put it into the query. And I said, use this as a reference. You can now go to the internet and search our show and see what my episodes with Ranjan looked like, and give me some topics to talk about. And at first, like, it went super broad and gave me, like, what I would do if I was doing, like, an AI overview podcast. And I was like, no, I need only information that came after February 2nd. And so, of course, like, the top AI
Starting point is 00:32:20 story of the week is Deep Research. So it gave me, it talked about itself as the top, uh, oh, no, you're good, Deep Research. It's like a, look, it has selfish motivations, yeah, motivations. So it does really feel, you know. That's AGI. That's AGI. That's AGI. That's human. It's AGI. Definitely. And then it really broke down, next thing, Alphabet earnings, which, again, I was on CNBC to talk about. And, you know, it says AI spend soars amid DeepSeek challenge. And it talks a little bit about what we're going to talk about in a bit, just the CapEx that Alphabet is going to go through to try to build AI, sorry, AGI, but AI, right? So I found that to be very good. And then I also asked it to give me a report on, like, how to enroll in health care in New York State, and good luck on that. Yeah. I don't think AGI, I don't think superintelligence will help us with that one. So I have a question. Yes. Is there a moat for this for OpenAI? It's a really, really well-done product. And again, going back to my general thesis that OpenAI's strength lies in the product, and the models shouldn't matter, and hopefully they recognize that too, and if
Starting point is 00:33:32 only they'd invest in the product more. But it's a good product. Can DeepSeek or Google or whoever else do it? I mean, Google has a product named Deep Research. We just don't have access to it. Oh, yeah, we do. Is it public? It is public. Oh, have you tried it?
Starting point is 00:33:48 I tried it today. I had it put together a similar episode plan. Who won? OpenAI won. Google was good, but OpenAI won. So the question of, is it a moat? No, I don't think it's a moat. Yeah.
Starting point is 00:34:01 Okay. I switched my laptop over here because I'm about to read a lot, and I don't want to face away from the camera for the entire thing. I think that's right. And we can proudly see our Apple devices here. That's right. Even though Apple Intelligence sucks. Apple Intelligence.
Starting point is 00:34:15 Yeah. Oh, so I did buy a new Mac computer this week. Of course you did. I went to the Apple Store, and they're like, have you heard about Apple Intelligence? And I'm like, oh, God, yes, I have. By the way, Vision Pros, nobody. Nobody anywhere close to them. Are they up
Starting point is 00:34:32 in the Apple store? But they used to have a special section and now they're off in a corner and legitimately no one cares about that. I don't know what I would do if I walked into the Apple store and the sales rep with a smile on their face came up to me.
Starting point is 00:34:44 Have you heard about Apple Intelligence? I might be arrested. I might be arrested. Calm, cool, and collected. All right, Siri, Siri, calm, Siri, calm. So the post that I want to read is from Ethan Mollick. It's called The End of Search, The Beginning of Research.
Starting point is 00:35:04 He's a Wharton professor who's actually quite good on AI, and he's been on the show, which I have to mention every time we cite his work; it's just part of the contract. Part of the contract. And he makes this point that what we're seeing right now is the combination of a new mode of AI interaction called reasoning, which we talked about, and agents. So let me read some of this, because I do think it's so good. He says, for the past couple years, whenever you used a chatbot, it worked in a simple way. You typed something in, and it immediately started responding word by word, or, more technically, token by token. The AI could only think while producing these tokens, so researchers developed tricks to improve its reasoning, like telling it to think step by step before answering.
Starting point is 00:35:49 That approach called chain of thought prompting markedly improved AI's performance. So that's like the move from traditional LLMs to reasoning. He says, reasoners are capable of solving much harder problems, especially in areas like math or logic where older chatbots failed. The longer reasoners, and this might be repetitive for people who are deep in, but I feel like it's worth reading. The longer reasoners think, the better their answers get, though the rate of improvement slows as they think longer. This is a big deal because previously the only way to make AIs perform better was to train bigger and bigger models. Because reasoners are so new, their capabilities are expanding rapidly. In months, we've seen dramatic improvements from OpenAI's O1 family to their new O3 models,
Starting point is 00:36:33 and that's where DeepSeek factors in. And DeepSeek has, it's R1 model that everyone went crazy about last week was a reasoner. So basically what's going on with this deep research, he calls it, he says, deep research is a narrow research agent built on OpenAI still unreleased O3 reasoner with access to special tools and capability. You can see that the AI is actually working as a researcher, exploring findings, digging deeper into things that interest it and solving problems, like finding alternative ways of getting access to paywalled articles.
Starting point is 00:37:07 And it goes on for five minutes. Sometimes it can think for five, ten minutes. He ended up getting a 13-page, 3,778-word draft with six citations and additional references for one of his queries. This is the point I'm trying to make by reading this. I think what we're experiencing with Deep Research, and the reason why it's even a question whether it's worth paying $200 a month for, is that it is an implementation of these new AI methods that we're starting to see with Deep Research, that we're starting to see with R1.
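For readers following along in text: the chain-of-thought trick Mollick describes, telling the model to think step by step before answering, can be sketched in a few lines. This is a hypothetical illustration only; the `build_prompt` helper and the example question are made up for this sketch, and no real model or OpenAI API is called.

```python
# Minimal sketch of chain-of-thought prompting, as described in Mollick's post:
# the same question is framed plainly, and then with a step-by-step instruction,
# the prompting trick that markedly improved older chatbots' reasoning.
# `build_prompt` is a hypothetical helper; no model is actually queried here.

def build_prompt(question: str, chain_of_thought: bool = False) -> str:
    """Return the question, optionally appending a step-by-step instruction."""
    if chain_of_thought:
        return question + "\n\nLet's think step by step before giving a final answer."
    return question


plain = build_prompt("A bat and a ball cost $1.10 in total. How much is the ball?")
cot = build_prompt(
    "A bat and a ball cost $1.10 in total. How much is the ball?",
    chain_of_thought=True,
)

print(plain)
print(cot)
```

The reasoning models Mollick contrasts with this (o1, o3, R1) effectively bake the same behavior into the model itself, generating reasoning tokens before the final answer without needing the prompt trick.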
Starting point is 00:37:37 And it might be that we're just at the cusp of something very interesting happening in AI with this reasoning moment. What do you think about this? And do you think that I am reasonably excited about it, the same way that Ethan Mollick is? No, I completely agree. And I don't want to be cynical about it. I'm incredibly, incredibly excited, again, watching what Deep Research was able to do and what that means for certainly any kind of, like, just general research-type stuff. But also, OpenAI, very kind of, you know, from a marketing perspective, shoved in that you can research couches, because they want to try to have some more commercial aspect to this, or more consumer-focused aspect. But this is going to happen.
Starting point is 00:38:23 Like this is going to, there's no question to me that these types. of models, these type of actions will kind of reshape what the web is, the way we interact with it, the way we interact with most apps. And I think that's good and that's going to completely rebuild so many areas and so many things. I think the area to kind of maintain some caution is what is the word agent mean? What is the word agentic mean? Is this agentic? Is this something else?
Starting point is 00:38:51 I think that term is still being thrown around a little too cavalierly because like now there, They've kind of gotten it to where a simple chatbop query is an agent, which I don't think is necessarily the case. Just seeing chain of thought processing it from deep seek isn't agentic. But deep research showing you that it's going into a bunch of different websites and showing you which websites it's going to and showing you what it's extracting from those websites and how it's compiling it, I think that's huge. I think that's incredible in terms of showing people this is possible.
Starting point is 00:39:27 To me, the biggest change that needs to happen is letting people interact within that process. Because right now, you kind of, like, put in the prompt, let it think for 20 minutes sometimes, and then get something, and then have to revise it. But imagine you could actually, in the middle of all of that action, say, actually, wait, I don't like that, I like this. I think that will be a huge change in terms of how useful this stuff is. Not only that, it's going to learn your tendencies. And the more you interact with these things, like, right now, memory is just something that they don't have. And that memory is coming.
Starting point is 00:40:03 So they'll learn your tendencies. And next thing you know, you're going to have, like, a research assistant that really knows everything that you want. And just to think about how much room there is to improve, there's already so much going on now. Mollick is pretty level-headed, again, a Wharton professor who's deep into AI. He says these systems are already capable of performing work that once required teams of highly paid experts or specialized consultancies. These experts and consultancies aren't going away. If anything, their judgment becomes more crucial as they evolve from doing work to orchestrating and validating the work of AI systems. The labs, the research labs, he means, believe this is just the beginning.
Starting point is 00:40:45 They're betting that better models will crack the code of general purpose agents, expanding beyond narrow tasks to become autonomous, digital workers that can navigate the web, process, information across all modalities and take meaningful action in the world. It's pretty high praise. It is. I think to me, I was thinking, especially on that shopping side of things and like thinking, okay, management consultants potentially replaced or that industry certainly changes, us having to do lots of research in general, but we have very specific parts of our job and profiles for the larger population.
Starting point is 00:41:23 Like, where does this start to apply? And the shopping thing, it's still weird to me because how much of that does someone really want to be automated? Like, is the process that the agent is going through? Is that actually the joy that a person experiences? Is going around and clicking on different websites and reading through their views? Is that annoying and a pain? Or is that the part of it that people actually enjoy? Do you like online shopping?
Starting point is 00:41:51 I do. I do. Yeah. I think there is enjoyment. And also, like, you'll never feel that emotional attachment to something you get if the bot just got it for you. Yeah, exactly. Like, the act of doing the shopping or doing the research sometimes, am I getting into, it's the journey, not the destination, right? Well, it's also the destination. But there is definitely joy in sort of finding cool stuff to go visit and then going and doing it. Like, if a bot's just doing that for you, then it's just like, all right, well, I could have just Googled it and gone to the first result. Yeah, so I think right now, and don't get me wrong, the research, consulting,
Starting point is 00:42:31 strategy, journalistic, this is a pretty big opportunity in market. I'm not downplaying that at all, but still, who is using this and how, especially to expand outside of that, is still not a trillion dollar market. I mean, to get to that, what are the use cases for agents? Because, again, Apple intelligence cannot find our flight information in our email when you ask Siri, which they pitched us as agentic, and that's my momentary Apple intelligence bashing. But like what are actual agents being used for in everyday life and for normal people? People have not been able to articulate that case well, and I'm still waiting for that to happen. It might have to be humanoid robots going back to the invidia conversation.
Starting point is 00:43:17 All right, all right. That's humanoid robots are always an easy. easy sell, I think, for anything, but everyone's building them. But let me ask you another question about what Ethan is saying, which is basically that consultancies aren't going away and that the orchestration of AI is going to be more important than what the actual reports are. I don't know. I've always been on the side that, like, AI will be creative in the workforce and not destructive. But I think you have to look at this with clear eyes. And that is that there are going to be jobs that just completely go away, even if more jobs are created over time. And it seems to me,
Starting point is 00:43:54 like this stuff is going to, maybe not get people fired, but certainly make a company think twice before hiring. Not 1,000 percent. Actually, I think it was from Goldman Sachs like a couple weeks ago they were talking about how an S-1 financial filing, which is a enormous document, but was always kind of a non-human kind of like really plug-in-play type of document, used to take two weeks and 16 bankers and now can be done in like five minutes. And again, that all makes complete sense to me. You have a bunch of data feeds and AI can aggregate it and you just review the entire thing. Like that's going away. Management consulting, all the research and grunt work goes away and that's good. And I mean, you can imagine out of all the job displacement, the least
Starting point is 00:44:41 sympathetic group when we say the bankers and consultants are under threat, pour one out for Bing and the bankers. Yeah. By the way, one of the interesting things, I'm sure you notice this too, it's way more accurate than it's been, way less hallucinations. You can click, it gives you sources, you can click through to the sources, and the numbers are good. Yeah, actually, that's a really good point. Everything I clicked through was 100% correct, which was almost shocking to me in terms of
Starting point is 00:45:08 of the output. That's huge. This meaningfully changes, especially any kind of job that involved opening a lot of browser tabs and copying and pasting text and synthesizing that text. That is completely changed, and there's no way to argue. I get saying orchestrating and validating will keep certain populations like at least a little less scared, but this is big, this is huge. My internship from 2009 just disappeared. I mean, half my life has disappeared right now. So it's not just open AI. Google also has a release this week. They released a set of Gemini thinking models.
Starting point is 00:45:49 Some TechCrunch, Google is bringing its experimental reasoning, artificial intelligence model, capable of explaining how it answers complex questions to the Gemini app. The Gemini 2.0 flash thinking update is part of a slew of AI rollouts announced by Google this week. Also, talking about the CAPEX, the company is planning to spend $75 billion on expenditures. Like growing its family of AI models this year, that's a considerable jump from the $32.3 billion on CAPEX it spent in 2023. That's a lot of money. $75 billion? It's like when Satya said, I'm good for my $80 billion, right? Sundar saying I'm good for my $75.
Starting point is 00:46:28 That, of course, had me go check where NVIDIA is right now. So it's down 12% since the Deep Seek announcement and it's back up a little bit. The story of compute, the story of NVIDIA, the story of chip demand, I think the one thing that was interesting about the last week or so, I mean, Mark Zuckerberg and META did not show that they're moving away from this KAPX bend. Google's coming out and saying it. So it's clear that the tech giants are still taking this path. And Open AI still wants this path and saying Stargate is very important. So I think it's interesting because the entire big technology industry has a vested interest because if compute and KAPX are critical, then only they can win. So this is going to be really interesting to watch play out that as long as compute and KAPX are critical, they're the winner.
Starting point is 00:47:28 So they're going to say that. They're going to keep spending. And if someone, that's why I still think, and we talked about this, Deep Seek was such a big story and remains a big story. because it showed that that entire narrative can just collapse on its own if smaller players come out and do interesting things. That's right. And by the way, I like went out on CNBC and I said, I think Google is on its way to being the best position company in the AI race.
Starting point is 00:47:53 And of course, they promptly missed their cloud numbers and went down double digits. But I think they do have so much potential. The one thing they really need to fix is the way that they name their models. So if you go to Gemini right now, there's Gemini 2.0 Flash. There's 2.0 Flash thinking experimental. There's 2.0 Flash thinking experimental with apps. There's 2.0 Pro with deep research. At least we know what that means. 1.5 Pro and 1.5 Flash. Do they do this? Is it just a joke to them? I don't change Google. Don't change. I love it. I love it. I want Google to never stop naming model. You know, Open AI. I'm a little disappointed in them. I think they're model naming convention is a little, is not good. I want that from Google.
Starting point is 00:48:41 I don't, if they ever had a perfectly streamlined suite of products with a beautiful name, I would question everything. Remember Bard? None of this stuff works. Remember Bard? Nobody uses Google Chat. They need to spend. No, no, GChat was the greatest product of all time.
Starting point is 00:48:58 And they, I don't even, it's called like hang out chats with meat or something like that right now. I'm ready to put my head right through the table. I am. They're spending $75 billion this year. Could you spend $500 million and buy an ad agency and just name this stuff like normal human beings? Get a subscription to Gemini 2.0 Flash Thinking experimental with apps and ask it to name your models for you. But I don't know.
Starting point is 00:49:22 With all the turbulence and volatility in the world, Google giving its models and products really inconsistent names just makes me feel just a little more at peace. This makes you happy. Don't change, Google. Don't change. So speaking of AI and job loss, there was a great New York Times story about Klarna over the weekend. Klarna is, of course, a payments startup. It says, why is the CEO bragging about replacing humans with AI, ask typical corporate executives about their goals and adopting artificial intelligence? And they will most likely make vague pronouncements about how the technology will help employees enjoy more satisfying careers or create as many opportunities as it eliminates. and then there's Sebastian Simeonkowski, the chief executive of Klarna.
Starting point is 00:50:08 He has repeatedly talked up the amount of work they have automated using generative AI. Okay, yeah, that sounds familiar because he was on the podcast, and he was talking about how much work they automated with generative AI. Oh, that's a story that catches my mind. It catches my eye. Let me scroll down. Okay, so this time story, as usual, cites one podcast and another podcast, and then it says this. When the host of the big technology podcast asked why he was so intent on taking Klarna's AI prowess, C. Mankowski said it partly, it was good for humanity. We have moral responsibility to share what we are actually seeing,
Starting point is 00:50:44 that we're actually seeing real results and that actually having implications that are actually having implications on society today. Then he acknowledged that another part of the motivation was self-promotion for sure. We are regarded as a thought leader. I was pretty stunned to see the times name our show in the story, especially because they had so many podcasts being nameless. So thank you, New York Times, for siding us. And I want to point out this was Ron John's question because we spoke right before and he goes, ask him why he's talking about it. Very interesting question, Ron John. Thank you for that.
Starting point is 00:51:11 Well, I'm glad the Times is finally on it that what is the motivation behind bragging about replacing humans with AI? I think, again, I'm glad. I genuinely am glad that they're asking this question because the marketing impetus behind all these pronouncements have to be questioned. And that certainly applies to Sam Altman and that certainly applies to Open AI our entire AMA discussion from earlier. And I did love that moment that there was this big kind of pronouncement, literally the good of humanity. And it's self-promotion for sure, we're regarded.
Starting point is 00:51:49 And he even called it a thought leader, which is normally I feel maybe some people out there use that seriously. But most people I know do not use that as like a serious. term. And he kind of just was like, yeah, we're a thought leader. We're self-promoting. They are still, I believe, looking at IPO. So yeah, this is a question that if people start pushing on more, start asking more, not why is someone saying this, but, or sorry, asking why is someone saying that, not just the content, but the motivation. I think the whole AI industry needs to ask that question for every announcement. Absolutely. I was stoked to see that story. I was stoked to see the headline be the exact question that you asked and I was surprised and grateful that we were mentioned.
Starting point is 00:52:35 Not by name name, but by podcast. Okay, I will take it. They put podcast name more than host name is what makes me happen. How do they not just say Alex, how do they not just say Alex Cantorowitz from the big technology podcast? It would hurt them. It would hurt. They would really, they would have to cry. Oh, come on time. Yeah. So we are on the cusp of the Super Bowl. If you're listening to this, Either the Super Bowl has happened or it's about to happen or maybe the Super Bowl is happening and you're one of the few humans on Earth that's listening to the podcast as the game's going on, in which case we appreciate you. Thank you for choosing good content. Guess who's going to be in the Super Bowl? Of course, the Chiefs and the Eagles, but also Open AI and Google.
Starting point is 00:53:15 This is from the Wall Street Journal. Open AI is set to make its Super Bowl ad debut. Open AI, the Artificial Intelligence Company behind Chat Chip-T. Again, I just like love the soul coming out of the reporter having to write that explanation. is expected to air its first TV commercial during Sunday Super Bowl. OpenAI's brand took off in late 2020 when it launched its wildly popular chatbot, chat TPT. The big game ad is by far. Open AI's biggest foray into advertising as the race to build the world's most powerful AI technology
Starting point is 00:53:45 and win over users intensifies. And the Hollywood reporter also says about Google. Google bets that the Super Bowl can turbocharge Gemini's ad business. Google is planning a major Super Bowl ad for its Gemini AI product line, including a 60-second ad in the second quarter of the game and purchasing 50 different 30-second ads in every state, each one spotlighting a local business that uses its AI software. That's smart.
Starting point is 00:54:10 I was reading this news, almost instinctively saying, spend that money on buying GPUs and scaling your models. However, I think this is brilliant on behalf of OpenAI and smart on behalf of Google. You've got to get your products in the hands of people, like we talked about at the beginning of the show, people have to use chat chip PT. People have to know chat GPT
Starting point is 00:54:31 and millions of people are going to use and know and talk about chat GPT, especially if the ad is half decent after it's in the Super Bowl. I think this is the game. It's whatever $5, $10 million is really well spent by Open AI. I think for Open AI specifically as well,
Starting point is 00:54:46 I mean, they clearly have been moving towards a more formalized professional marketing function. In December, early December of last year, They had hired the former Coinbase CMO who had been at Meta for 11 years and was global head of brand and product marketing for Instagram. Actually, the whole suite of product. So serious marketer. So I'm very curious to see what they're going to do. The challenge for me is twofold.
Starting point is 00:55:14 One, the AI ads to date we have joked about have been terrible. Apple intelligence, not to go back there. But if we remember, they had all these ads of basically people kind of like not wanting to pay attention to people not as important to them. So summarizing their content in real time. But even Google had a disastrous ad in the Olympics, if you remember, where it's like a little girl wants to write a letter to her favorite athlete and the dad uses Gemini to do it. Like how tone deaf. Like it's still one of the greatest things I saw. I was like, these companies need to just hire one person who's just normal and sits in the corner and does.
Starting point is 00:55:53 nothing but just gets shown the ad and says this is terrible. You love this idea, and I don't think it's a bad idea. No, I literally just in the corner, they get paid a lot of money, and they just sit there and, okay, that is just terrible. Oh, okay, normal people think they'll think that's good. Oh, man, I go back and forth, though, because the other side of this is this generative AI has a branding problem. Like, when I still talk to most non-tech friends and family, they still associate generative
Starting point is 00:56:27 AI content as bad. And, like, that's the whole joke. It's like, oh, that's so chat cheap. Did chat chit-tie write this? Yeah, did chat? No, no, I mean, it's still the entire. Which is a fair insult. Which is, but it's a-
Starting point is 00:56:39 We used to call kids Wikipedia when they were saying generic stuff. Generic stuff. That's what I mean. And to me, like, the actual products, if you. know how to use them are so far beyond that stereotype of AI-generated content is like overly formulaic, and that's like two years ago, chat GPT. So there's a clear branding problem. How do you solve this when people have a negative connotation of the technology, have a negative connotation of you, the company? Like, it's got to be a damn good ad. And I think the crypto bowl of
Starting point is 00:57:17 2022, I believe it was January, with the Larry David ads, the Coinbase ads, remember, the bouncing? They're brilliant ads and they're really well done. But like, I don't think it helped the end. In fact, certainly was not a good moment afterwards for the industry. Well, it certainly got a lot more people just put their money in crypto and then they got the rug pulled. That's true. But this is different. Like, I don't think we're going to have the same scam. No, no, no. To me, that part isn't the scam. But how do you solve this branding problem? I think Like, if you're Katie Rauch, the CMO of OpenAI, you're sitting in a room, you're like, we have this branding challenge. I hope they recognize it.
Starting point is 00:57:52 How do we overcome it? I am very excited to see what this commercial looks like. What's your best guess of what it's going to be? I'll give you mine. I think maybe you're going to have Shaq and Charles Barkley sitting at the inside the MBA desk, and they're like saying nasty insults to each other that they're like typing in chat GPT is giving to them. Or maybe something voice. Maybe it's just somebody like driving. in the car and like having a conversation with chat ch pt and it'll be like a snickers commercial it's
Starting point is 00:58:20 like bored you know not bite into a chat ch pt all right hold on here's do you know the the most successful kind of like not even skeptical AI thing thing that converted in AI skeptics I saw I don't if you saw like chat chpT roast my Instagram profile yeah where you literally just screenshoted your grid and then put it on and it and that was a moment that I think a lot I saw like of people being like, wait, this is genuinely creative. It's not formulaic. It's actually funny and interesting and creative. So I think that would be mine. Roasted Sam Altman or other famous people letting their Instagram profiles get roasted by ChatGPT. That's my ad. Yeah. Okay. So basically we both agree that it's some form of AI roasting humans.
Starting point is 00:59:09 AI roasting humans. And it's got to be funny. It's got to be good. I don't think trying to, trying to tug at heartstrings in any way. Google's going to do that. Google's going to do that. I'm sure it's just going to be like a kid and a grandmother, you know, just trying to communicate with each other and then Gemini will solve it. But the weirdest part, like, I introduced my dad to Gemini Voice, and he really, he has Parkinson's and has trouble typing into his phone.
Starting point is 00:59:38 And it was this emotional moment. Like, genuinely, that could have been the commercial right there. Yeah, it could have been. And they're still, they, that is sitting there and somehow it's still going to get screwed up. They'll mess it up. Somehow. They have made some really beautiful search ads before in the past. The greatest tech ad of all time.
Starting point is 00:59:55 And I realized how old I was when I brought that up to some younger people. Parisian love. It was from 2009. It's a Google search ad where, and it really made Google search emotional where someone goes through the process of studying abroad, falling in love, getting married. It was amazing. If they can pull off. the 2025 Parisian love, I'm betting it all on Google. If they can have a good ad on that.
Starting point is 01:00:19 I wouldn't bet against their ad agencies. Okay. We need to get out of here, but who do you think is going to win in the game? I'm a New England Patriots fan. Yeah. I don't want Mahomes to three Pete, so I want the Eagles to win, but my God, the chiefs somehow, they always do it. So I will grudgingly bet that the Chiefs will win. And I am a Jets fan, and I want Tom Brady's legacy,
Starting point is 01:00:50 especially his Bill Belichick's legacy to fall apart. So I'm taking the Chiefs. All right. I mean, I'll take the Eagles just to take the other side, and that's where my heart lies. What's your prediction on what happens at halftime? We've got Kendrick Lamar coming out. I have this feeling that Drake is going to come out.
Starting point is 01:01:08 They're going to hug. And then they're going to both take out fake guns and shoot them, and it's going to say, bing. Wait, as in, as in. The search engine. As in Bing. That could be the most aggressive call of all time. And if you are correct about that, I mean, it's time to retire. They should hug it out on stage.
Starting point is 01:01:29 I do like this. If they can do it, world peace will happen. That literally, we are the world comes on, just like Stevie Wonder at the Grammys. and Kendrick and Drake sing it together. Canada and the U.S., friends again. Let's bring it. Let's bring peace at this Super Bowl. Peace to all of us.
Starting point is 01:01:45 Yes, as the Eagles and the Chiefs go at it. That's right. All right. Well, Ron John, great to see you in person. This has been so fun. This has been fun. Let's wave to the people at home. All right, everybody.
Starting point is 01:01:56 Thank you for watching us or listening to us. We do this every single Friday, breaking down the week's news. Sometimes we break some news. And we hope you join us. If this is your first time watching the show, you can subscribe to us here, either on Spotify or whatever app you use to get podcasts. And on Wednesdays, we'll do, I'll do one-on-one interviews with people in the tech industry, and then Ranja and I will be back every Friday. So that'll do it. Thank you for listening.
Starting point is 01:02:23 And we'll see you next time, a big technology podcast.
