Big Technology Podcast - Why OpenAI Killed Sora, Did Apple Just Save Siri?, Meta’s Big Loss

Episode Date: March 28, 2026

Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) Why AI-video didn't take off 2) Who wins now that OpenAI is shutting down Sora 3) The real reason OpenAI... shut down Sora 4) What happens now that OpenAI and Anthropic are competing for similar AI assistant customers 5) Anthropic's new 'Capybara' model class is coming 6) OpenAI has a big new model called Spud in the works 7) Apple's Siri fix isn't much of a fix at all 8) Meta and YouTube lose a precedent-setting court case 9) Should Big Tech be liable for teen mental health? 10) Tech stocks tank 11) OpenAI shelves ChatGPT adult mode, probably forever --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 OpenAI is giving up on video generation. Here's the real story behind it. Apple is going to make a bunch of AI assistants available in Siri, and Meta loses a landmark court case that could spell even more trouble ahead. That's coming up on a Big Technology Podcast Friday edition right after this. Fiscally responsible, financial geniuses, monetary magicians. These are things people say about drivers who switch their car insurance to Progressive and save hundreds. Because Progressive offers discounts for paying in full, owning a home, and more.
Starting point is 00:00:35 Plus, you can count on their great customer service to help when you need it so your dollar goes a long way. Visit progressive.com to see if you could save on car insurance. Progressive Casualty Insurance Company and affiliates, potential savings will vary, not available in all states or situations. Welcome to Big Technology Friday edition, where we break down the news in our traditional cool-headed and nuanced format. We have a great show for you today. We're going to talk all about why OpenAI killed Sora, and really the bigger story about what it means for the company's ambitions. We're also going to talk about a couple new models coming out from OpenAI and Anthropic that have people on the inside buzzing.
Starting point is 00:01:13 Is this new upgrade for Apple and Siri going to amount to anything? We'll touch on that. And then finally, Meta lost big in court this week. And that may be news that will harm it in the future. Joining us, as always on Friday to do it, is Ranjan Roy of Margins. Ranjan, welcome. Sora is dead. Erotic chatbots are no more. Apple might be fixing Siri. This feels like a good week. I'm currently in Park City, Utah, and there's no snow. So I'm not very happy about that. But I'm happy about all this news. Well, we will dig into it. And clearly the side quest era is over. And the biggest casualty so far has been the death of Sora, the video platform that we talked about so much, that went to number one on the App Store not long ago, and not only that, but the API as well. This is from the Wall Street Journal.
Starting point is 00:02:05 OpenAI is planning to pull the plug on its Sora video platform, a product that released to great fanfare last year and has since fallen from public view. The move is one of a number of steps OpenAI is taking to refocus on business and coding functions ahead of a potential IPO as soon as the fourth quarter of this year. Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won't support video functionality inside ChatGPT either.
Starting point is 00:02:42 Ranjan, let me just throw, let's look at it micro and then we can zoom out. Let me throw a couple ideas at you in terms of why Sora died. One is maybe it's just that the appeal of video AI isn't that great. Like there's an initial thrill when you generate a video, but maybe most people just want to watch instead of create. And then secondly, maybe it's just that all the videos ended up looking the same, right? That, you know, you take AI, it generates the average of averages, and that's what you got. And then all of a sudden, the utility of this just nosedived. Yeah, I think separating out Sora specifically from just the promise or kind of current state of video AI is important here.
Starting point is 00:03:25 I have to say, I mean, given I'm talking to retail consumer goods type of people on the marketing side all day long, video is still very interesting. And Veo from Google seems to be kind of becoming the industry standard. So everyone is actually very interested in it, especially from like a true production standpoint. So I think with video, I still am surprised that they're actually completely ceding that area and not even making it available via API anymore, not making any video functionality. But Sora the product? I mean, I don't know. I have not opened it in a long time. What were your greatest Sora hits to date? It was me and Jake Paul walking an old lady across the street, very enthusiastically. I went through, actually, just in
Starting point is 00:04:19 memoriam of Sora. And so my son, who's seven years old, I would ask him to give me prompts. And I had, like, a chicken and a horse running around a toilet bowl was one of them. It was basically a lot of stuff like that. And it was funny. And his friends, they would all get a good laugh out of it. I guess that was not necessarily the market validation one needs to address the TAM
Starting point is 00:04:45 of whatever they're going to have to pitch investors for the IPO. But some 7-year-olds are going to be unhappy about this. Yeah, you burned an entire rainforest just to get the horse running around the toilet bowl. Yeah. But that joke really goes to the heart of the matter. And I actually do have some intel on the real reason why Sora has been shelved. I will say, this week I'm in San Francisco, and yesterday I was meeting with Greg Brockman, the president of OpenAI.
Starting point is 00:05:16 And we have an hour-and-15-minute-long interview coming next week. So stay tuned for that on Wednesday. And of course, we begin with this sort of new pivot to enterprise and coding. I won't give it entirely away, but I will share this bit from Greg about why video generation became deprioritized and was looked at as a side quest within OpenAI. So first I'll say, when I spoke about this last week, I thought this was a consumer versus enterprise thing. So maybe they thought that the Sora app was more consumery and they're really focusing on businesses. That's actually not what it is at all. So basically, OpenAI has seen that these GPT-style models are working. And there are other ways to try to
Starting point is 00:06:08 pursue the most powerful AI. And the most famous other method right now is world models. So models that actually understand physics. And that is part of what was baked into Sora. And so I spoke with Greg and said, well, what's going on here? And he said, the important thing to realize is, technologically, that the Sora models, which are incredible models, by the way, are a different branch of the tech tree than the core reasoning GPT series. They're built in just a very different way. And to some extent, we're really saying that pursuing both branches is very hard for us. So I think this is a needed and very interesting focus that we're seeing from OpenAI, where they're basically like, we want to build the strongest, most powerful models that we can.
Starting point is 00:06:59 We're seeing the results in these GPT-style models that do all the things, that call the tools, that do the reasoning, that do chain of thought, that are getting things accomplished. And we have to decide where to put the compute, and to do that in a world model way would really limit the company's ability to make progress in the area they see as most promising. And that is why Sora is being deprioritized. My goodness, that is focus. That is real focus right there.
Starting point is 00:07:34 I have to say, because just on last week's episode, we were half joking about how now all you have to say is world model. It's kind of the trendiest term, and Meta is making a big deal about it. Google's talking about it. Like that's definitely, I think, going to be one of the buzzwords of 2026. So for them to actually acknowledge that that is not going to be another area of investment, I think that, okay, that is a pretty big deal.
Starting point is 00:08:02 Right. So that is the logic. And I think that makes total sense. And of course, Greg and I will speak about it more next Wednesday. So folks, please tune in for that. But now it sort of goes to something interesting, which is like, what does this race look like? Because guess who else hasn't really done world models? Guess who hasn't done the side quests of image generation and
Starting point is 00:08:21 video generation? It's Anthropic. And so you're starting to see a race. The race is taking a very different shape than it had not long ago. Remember, it used to be that, well, OpenAI had ChatGPT and it was winning consumer with all these images and videos and the chatbot. And Anthropic was enterprise, and it was sort of doing the enterprise thing with coding and business and all these applications. Well, what's happened now is that both these companies have centralized on the use case. Maybe you could call it the OpenClaw-style use case, which is what they both seem to be going for, which is that you give the AI access to your desktop, to your phone, whatever it might be. And if you're at work, you have it do work for you.
Starting point is 00:09:08 If it's in your personal life, it would organize your personal life and take action for you. And I think they do see that this, and I'm going to hand it to you again, this potential agentic use case where the tech goes after what you need, whether it's consumer or business, it's the same thing. It's centralizing in this sort of one stack, so to speak. And now it's just almost like a battle to the death here between the two of them to get this right and go after it.
Starting point is 00:09:47 Well, I mean, one, listeners cannot see, but maybe on YouTube you'll see me smiling right now, because, Alex, it always feels good to be recognized. But I have to admit, when I said this kind of autonomous knowledge work, and this is what we started seeing at Writer when we started building this last June, like, it was that magic moment of like pulling files from one place, doing something to it, pushing it to another system, and then thinking about like how that applies to absolutely everything. I will say when I made that prediction, I think in Octoberish probably to start, I did not think OpenAI, ahead of an IPO, would actually be kind of consolidating its entire strategy around that. So I'm going to take the win on this.
Starting point is 00:10:26 One thing I'll note, this isn't two players. And I'm saying that, again, self-interested because this is what we're working on at Writer. But more and more so. Sierra just this week released something called Go-Ebrose. Writer, Notion, they're going into this space. It really, I mean, I'm seeing this very closely firsthand. Everyone is going after it. So I think everyone has recognized that's the prize. So it's not just OpenAI and Anthropic on this. Even more traditional SaaS companies are trying to go in this direction. So I think it's definitely clear that I was right on this one. Yes. But no, no, it's clear. But now I am wondering, like, image generation and video, like, does it, especially on the consumer side? See, I still think there's going to be a big difference between enterprise and consumer.
Starting point is 00:11:18 And, like, does Meta start making moves in here and start kind of filling the gap? Does Google just kind of own it? Because to me, those kinds of functions are still going to be very important in consumer. Right. Well, this is the thing. Like, Nano Banana has been a huge asset for Google, their image generator. And by the way, something that was interesting, I don't want to give away too much of the Greg interview, but I think this is important to discuss.
Starting point is 00:11:42 I'm not going to push you too hard. Because it's newsworthy. Guess what's not going away: image generation in ChatGPT. And when you think about that, what's your initial response? Well, okay, so creating images doesn't take as much compute as creating video, which is true. But what Greg says is basically that the image generation is being done with the same GPT-style technology, whereas the video generation is done with this completely different technology.
Starting point is 00:12:14 So that, and it goes to the generality of the models, where like they can do text, they can do image generation; video generation they can't. But I think you're right that there is this big opening for somebody to do video generation well. And clearly there are like some startups like Runway. But Google, Google is in great shape here. I'm kind of rooting for Runway in this. I don't know, like three years ago, 2023, because I started testing every single generative AI tool available. And I remember Runway was probably the first place that I actually did an image generation. And this is back
Starting point is 00:12:52 in the six-fingered days of, like, you know, four legs, all of that. And then even video, they might have been the first place I started testing and playing around with video. So maybe this opens the door. But yeah, I think one other note, though, is image generation also, it's not just, like, create an image of a cat. I don't know. Actually, one of the best Soras I saw that was circulating around was, I think, a cat with a shotgun shooting a Ring doorbell. I don't know. That was Sora at its finest. I don't know if you saw the tweet, but it was from them.
Starting point is 00:13:29 It was like, we will give credit to everyone who made videos that matter. It was animals running around toilet bowls and cats shooting Ring doorbells. But I think there's also image generation, even in the enterprise, like generating diagrams. It's still visual communication in many ways. It's not just make me a funny image. So I think it makes sense too that they still have to play in that, that it's still important.
Starting point is 00:14:11 Right. And I think the important part also is that it is along the same tech tree, as opposed to something completely different. But you're right, even if it was different, you'd probably want it in your suite of tools. I want to go back to something you said, actually, that it's not just OpenAI and Anthropic. Yes, there are others, but a lot of these companies are working with OpenAI's or Anthropic's technology underlying. So there's a good chance that they'll see the benefit no matter what, even if, let's say, it's Sierra that ends up being the one that deploys this for business. You're coming into my world right now, Alex. I think so. At Writer, we have our own family of foundation models, the Palmyra family. So for us, and actually there was like a very interesting thing,
Starting point is 00:14:46 Intercom, which now has Fin, they announced this week as well that they basically have trained their own foundation model. So I think you're starting to see some kind of combination of using, whether it's like Cursor using DeepSeek, which they just basically didn't say that they were doing, but then were, but it is actually a very,
Starting point is 00:15:08 like a very thoughtful approach to this. I actually think more and more of the companies are going to start taking this approach, that it's not just going to be an API call to OpenAI or Anthropic. So I do think, and I say this with the Notions and the Cursors and the Sierras of the world, that more and more, I think a lot of tools,
Starting point is 00:15:30 to date, it was just that API call. I think more and more people are going to start either customizing or fully training on the foundation model side. Okay, I'm going to come in skeptical here, and if I have to eat crow again, I'll do it. But I do think that the foundational model companies are going to be, well, without a doubt, there'll be big players here. Let me take this to you again. You know, if the battle between OpenAI and Anthropic shapes up to be not the way that it was
Starting point is 00:15:59 previously, but like going head to head on the same use case, which they hadn't been previously. Like, Claude was happy to not have lots of consumer users, and OpenAI was happy to not go after enterprise. Now they're really going head to head. What do you think that means for this race? Or how do you see that shaping up? And who do you think is going to win? I don't think this is the right idea for OpenAI. I think like they had a foothold in consumer. I know the business model for consumer has not been figured out yet. But that was still
Starting point is 00:16:33 where they had the edge. Like, they could start to go after this. We talked about last week what that means between them and Microsoft; it's a very big question in my mind, because remember, when you say enterprise, this is Microsoft's world. And already there's tension around, you know, OpenAI starting to do deals with AWS, and, like, rumors the FT had reported on a potential lawsuit. So it just puts them in such a different
Starting point is 00:16:55 So like it just puts them in such a different. space than they have been. And it's honestly kind of surprising to me that it's like basically we want to be Anthropic is what they're saying. Everything they're doing is let's just try to catch up to Anthropic. And yes, Anthropic had a very good year. But like, remember a year and a half ago, people were leaving Anthropic for dead. Maybe that's an exaggeration.
Starting point is 00:17:22 But we were even, you know, pulling up charts of declining consumer usage and joking that we're no longer Claude heads, we're Gemini guys. You know, like there was this moment, and then they really nailed it. And we called this. We said it was a risk and a bet, but going all in on coding meant something very different. But I just don't think, I think it's too late for them to make this switch, and it's reactive, rather than: we are our own unique business. We have 800 million users. We're going to get to a billion.
Starting point is 00:17:54 We're going to run ads. People are going to search about everyday life. And there's a lot of ways to monetize that. I just, yeah, I don't know. I need to see this in action versus just being reactive. That is interesting. But let me put the counterpoint to you here, which is we all saw the OpenClaw moment, right? And I think many of us, including myself, still haven't fully wrapped our heads around what that can be applied to elsewhere.
Starting point is 00:18:20 Because OpenClaw is basically like, you're going to, you know, create a virtual machine or get a Mac Mini, put this AI agent on it, allow it to control that machine for you, plug it into a couple of services that you use, and then basically have it be an assistant with persistent memory that gets stuff done for you. And so think about, let's say, and again, I think it's important to say that this is not going to be a breakdown of, like, you know, OpenAI goes after enterprise and not consumer. Think about this type of use case. Imagine you're like dealing with a hospital, right, or dealing with an insurance company, right? And
Starting point is 00:19:01 you're trying to get something covered, or you're trying to understand, you know, what your data looks like compared to others. To have this like always-on assistant with persistent memory that can go out and negotiate on your behalf with the insurance companies, or go out and monitor your health situation, like, is that a consumer or is that an enterprise? That's consumer. But it's still in this agentic world. So maybe to sit that out, which seems like it's a bad business
Starting point is 00:19:23 decision. Okay. I mean, fully agreed, given now you're the one coming up with these broad agentic use cases and visions. And again, remember, like, I think this is the exciting part, why everyone's so fired up, is like once you feel that power, just imagining all the possibilities. And I think you put it well: always on, connected to your data, and able to take action. Those are kind of the three foundations of this whole thing. And again, we don't know what, no one has named it yet. I've been like racking my brain. Is it like autonomous knowledge work? OpenClaw?
Starting point is 00:20:09 I don't know if it's going to stick. Maybe that could be the OpenClaw. But like, is it the harness? The hive? Harnessive? Let's take over, I think. But I do think, yeah, I fully agree it's not consumer versus enterprise. Every person, I think, will have a lot of things that they will be able to build and do with it. So I agree it's the central part of the battle. I just mean more like, and we're going to get into the shutdown of the erotic chatbot, but like even the advertising business. I was just at Shoptalk in Las Vegas all week. OpenAI, ton of talk around commerce. Again, it becomes like one of those interesting things that it's consumer, it's also enterprise, because you have the retailers, but you also have the end consumer who will be shopping on it potentially.
Starting point is 00:20:58 So I think I agree there, but I just think that overall, as an organization, to start cutting these very consumer-friendly things, are you going to be able to focus on and operate a large advertising business when you're trying to do everything else? That's where I think there's issues. I think you will be. I mean, ChatGPT is already at a $100 million annualized run rate. Oh, that one killed me. It's been out for six. Can I make one?
Starting point is 00:21:28 Do it. Can no one ever say annualized recurring revenue when a product has been out for six weeks? Like, it's just not ARR at that point. Don't make us extrapolate. Don't do the extrapolation. Just say it's been out for six weeks and we've made, what would that be, like $9, $10 million or whatever it is.
Starting point is 00:22:01 Like, that's just all it is right now. And it could be much bigger. And I mean, that'll be great for them. But reporters, please don't use ARR unless there's some kind of meaningful trend. I think I'll keep doing it on this show just to annoy you, Ranjan, but message heard loud and clear. Let me end this segment with one thing, which is, this all sounds good in theory. But the problem that I have, and the problem that many people have, is a trust problem, where I want the AI to do all these cool things for me, but I do not trust it to have access to my Gmail and calendar and desktop and all these things. And obviously, like, some of the leading models, or the leading providers, even OpenClaw, like, we don't recommend you do this without, you know, some precautions like running on a separate machine.
Starting point is 00:22:45 Do you think that that trust barrier is ever going to be overcome? Yeah, 100%. I mean, I see it myself. Well, actually, hold on, to add nuance to it: I have something that, just at 7 p.m. every day, says here's all the emails that you have not answered that are, like, from today or greater than 24 hours old. And here is a suggested response based on your entire Gmail history. So I get this. I don't have it send the email yet.
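Ranjan's 7 p.m. digest can be sketched in a few lines. This is a hypothetical reconstruction, not his actual setup: the mailbox here is mocked as plain dicts, and a real version would pull messages from the Gmail API and have a model draft each suggested reply.

```python
from datetime import datetime, timedelta

def build_digest(mailbox, now):
    """Return subjects of unanswered emails that are either from today
    or more than 24 hours old, per the rule described on the show."""
    day_ago = now - timedelta(hours=24)
    start_of_today = now.replace(hour=0, minute=0, second=0, microsecond=0)
    digest = []
    for msg in mailbox:
        if msg["answered"]:
            continue  # already handled, skip it
        received = msg["received"]
        if received >= start_of_today or received < day_ago:
            digest.append(msg["subject"])
    return digest

if __name__ == "__main__":
    now = datetime(2026, 3, 27, 19, 0)  # the 7 p.m. trigger
    mailbox = [
        {"subject": "Invoice", "received": datetime(2026, 3, 27, 9, 0), "answered": False},
        {"subject": "Lunch?", "received": datetime(2026, 3, 26, 22, 0), "answered": False},
        {"subject": "Old thread", "received": datetime(2026, 3, 24, 8, 0), "answered": False},
        {"subject": "Done deal", "received": datetime(2026, 3, 25, 8, 0), "answered": True},
    ]
    print(build_digest(mailbox, now))  # ['Invoice', 'Old thread']
```

Note the design choice he describes: the function only surfaces and suggests, it never sends, which is exactly the human-in-the-loop step he has not yet automated away.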
Starting point is 00:23:14 So I've not gotten to where I'm actually like, press button, send email to everyone. But as you start seeing it, and start actually kind of tweaking how you want the responses structured, I do think there is a world where I would just have it send the response. So I think the trust comes with time and quality. And the more you start to see, and the more you also start to understand what questions not to ask, or where data is going to be bad and you're going to get a subpar answer,
Starting point is 00:23:47 that is going to be one of the most, I think, important skills, but also that's how people will build trust. This is why I always get emails from you at about 7:10, 7:12, right? The claw's like, that's wonderful. Let's make sure to schedule at this time. Well, we're about to find out really where this is going to go, because we have two major models coming, one each from OpenAI and Anthropic. Let's start with Anthropic. There's a very interesting story in Fortune this week. Anthropic
Starting point is 00:24:21 acknowledges testing a new AI model representing a step change in capabilities after an accidental data leak reveals its existence. Anthropic is developing, and has begun testing with early access customers, a new AI model more capable than any it has released previously, the company said, following a data leak that revealed the model's existence. An Anthropic spokesperson said the model represented a step change in AI performance and was the most capable we've built to date. The company said the model is currently being trialled by early access customers. A draft blog post that was available in an unsecured and publicly searchable database prior to Thursday evening said the model is called Claude Mythos.
Starting point is 00:25:01 The company believes it poses unprecedented cybersecurity risks. Mythos has also been called Capybara. In a document, Anthropic says Capybara is a new name for a new tier of model, larger and more intelligent than our Opus models, which were until now our most powerful. Compared to our previous best model, Capybara gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity, among others. Wow. So we could be seeing a new class of model. You know, Anthropic, of course, has its three: Sonnet, Opus, and what's the other one?
Starting point is 00:25:43 Haiku. Oh, yeah, yeah. So now we might be getting... We might be getting... Capybara. What's your quick take reaction to this? I mean, a step change in models, now that, when we've been
Starting point is 00:25:59 talking about this whole episode, now we all kind of know what the battle is, I think will be very, very interesting to see. Like, again, it's still hard to understand when you both worry about, say that you're worried about cybersecurity, or recognizing these risks, but then say it gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity. It's still just, I don't know, kind of difficult to actually parse out where they're going with this. What do you think?
Starting point is 00:26:32 What's your prediction on what it will feel like the first time you crank out some work on Capybara? Well, I have been thinking about this, because, you know, we've been like sitting here and we review, like, every update that seems incremental, right? Like, oh, it got a little bit better at this, got a little bit better on this. And it's starting to feel like it compounds. You know what I'm saying? Like, we started with ChatGPT in 2022 and it had all these sorts of flaws. And over time, they've been patched up in a way. And so, like, I just started to think about it this week in terms of, like, all the
Starting point is 00:27:09 AI CEOs say that there is this, like, exponential happening. And maybe that's just the way that you get that exponential, right? Like, you know, with interest, for instance: all right, there's, like, 6% on your investment. And then another 6% on your investment, and on the 6% that you got last quarter. And then all of a sudden, that starts to really grow. And it seems like that might be what's happening with these AI models. I like this. Maybe I'm being overly optimistic. No, no, no. But that's not even optimistic.
Starting point is 00:27:36 That's realistic. That's, like, compounding accrual of value, which is actually the way this is playing out. But the marketing was done before that, like, GPT-3, GPT-4, like everything had to be revolutionary, a step change. And then people were disappointed when it wasn't. So I do think that could be the right way to look at it. And people don't. And maybe it is just a marketing limitation that they have to make a big
Starting point is 00:28:05 And maybe it is just a marketing limitation that they have to make a big. deal, but it would be kind of nice if everyone actually just spoke about it like that. Like, here's our release notes. It's definitely a little better. And then you can do a lot more and that's all we should really focus on. They will never, ever speak that way. No, no, of course not. Guaranteed.
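The compound-interest analogy from the exchange above can be made concrete with a quick sketch. The 6% figure and the quarterly cadence are just the interest analogy used in the conversation, not a measured rate of model improvement:

```python
# Compounding "incremental" gains: each 6% improvement is applied on top
# of all previous ones, so the gains themselves start earning gains.

def compound(start, rate, periods):
    value = start
    for _ in range(periods):
        value *= 1 + rate  # this period's gain includes all prior gains
    return value

if __name__ == "__main__":
    print(round(compound(100.0, 0.06, 4), 2))   # after 4 periods: 126.25
    print(round(compound(100.0, 0.06, 12), 2))  # after 12 periods: 201.22
```

Twelve updates that each look like "a little bit better" at 6% apiece roughly double the starting value, which is one way to square incremental release notes with the exponential the AI CEOs talk about.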
Starting point is 00:28:28 Here's another one. OpenAI has a new model coming out called Spud. OpenAI CEO Sam Altman said the company has completed the initial development of its next major AI model, code-named Spud. He told staff that the company expects to have a very strong model in a few weeks that the team believes can really accelerate the economy. He added, things are moving faster than many of us expected.
Starting point is 00:28:52 You know what's interesting? And now I'm going to really sound optimistic, and I'm trying to check myself, but just as the world implements today's models and is starting to find that they can do things with them that they really couldn't do with the previous generations, and that's leading to this, like, explosion of possibilities, it's like, wait, they're building better models than these. Some that they say are sizable leaps. Like, it is one of those moments where you, like, sit back and just go, this is crazy. I mean, it definitely feels like that. But sorry, I just have to
Starting point is 00:29:33 stop for a moment and say: the name Spud did not jump out to you as, what the hell is going on? And OpenAI, sorry, Anthropic's sitting on Mythos and Capybara. I don't know. That's kind of like, but Spud is the code name for their model? Where is this coming from? Not exactly inspiring. No, no, he's literally saying the team believes it can really accelerate the economy. He's not even saying, like, you'll be able to do a little bit more multi-step reasoning.
Starting point is 00:30:07 He's being, again, as we were just talking about, everything has to be grandiose and can really accelerate the economy. And it's called Spud? I don't know. Maybe. Can you imagine you're at your job and your manager walks over and they're like, got to tell you, we're replacing you with Spud. That would hurt my feelings. But it might happen. Maybe, do you think, it's so bad
Starting point is 00:30:33 that it actually makes me, I will never forget the name, versus Mythos and stuff, which will maybe just kind of fall into, like, Haiku, which I couldn't remember off the top of my head. But Spud? You'll remember. Don't even call it GPT-6.
Starting point is 00:30:47 Just come out with our new class of models: the Spud family of models. Spud. Spud. But look, let's be clear here. It's just a code name. It's not like OpenAI is going to release Spud. No, I'm saying the product name Spud.
Starting point is 00:31:03 And then the code name, but again, most tech project code names that I've ever come across or been part of, everyone has a somewhat ambitious name or like a kind of like strong, grandiose, big name. So that's why this one really jumped out at me. But I kind of like it now.
Starting point is 00:31:22 The same people who are branding the Pentagon operations have come in and started doing the code names. No, but, like, Epic Fury would be more in line with Mythos. Spud? I don't know about that. I don't know. Well, we'll find out soon enough. All right, folks, if you are enjoying seeing Ranjan riled up, well, just wait till the second half where we talk about Siri.
Starting point is 00:31:45 We'll be back with hopefully some better news about Siri's direction, but I can't promise anything. We're back right after this. I've interviewed a lot of great tech founders on this show, and one surprisingly universal challenge comes up again and again: finding the right domain name. It's something I ran into myself when launching Big Technology. The names you want are often taken, and it's tempting just to settle and move on. But the founders I respect most don't settle on fundamentals, and your name is one of them. It should immediately signal what you actually built. That's what I appreciate about dot-tech domains.
Starting point is 00:32:22 It just makes sense. It tells the world, your customers, your investors, and anyone Googling you that you're building technology. Clean, direct, and no qualifiers. And I'm seeing more serious startups leaning into it. Nothing.tech. 1X.tech. Aurora.tech. CES.tech. Pi.tech. And so many more.
Starting point is 00:32:43 If you're building something tech first, don't settle. Secure your dot-tech domain from any registrar of your choice and make your positioning obvious from day one. Starting something new isn't just hard. It's terrifying. So much work goes into this thing that you're not entirely sure will work out. And it can be hard to make that leap of faith. When I started this podcast, I wasn't sure if anyone would listen. Now I know it was the
Starting point is 00:33:05 right choice. It also helps when you have a partner like Shopify on your side to help. Shopify is the commerce platform behind millions of businesses around the world and 10% of all e-commerce in the U.S., from household names like Allbirds and Cotopaxi to brands just getting started. With hundreds of ready-to-use templates, Shopify helps you build a beautiful online store that matches your brand style. Get the word out like you have a marketing team behind you. Easily create email and social campaigns wherever your customers are scrolling or strolling. It's time to turn those what-ifs into with Shopify today. Sign up for your $1 per month trial today at Shopify.com slash big tech.
Starting point is 00:33:45 Go to Shopify.com slash big tech. That's Shopify.com slash big tech. And we're back here on Big Technology Podcast Friday edition. Siri is on its way to a new improvement. This is from Bloomberg: Apple plans to open up Siri to rival AI assistants in iOS 27 update. The company is preparing to make the change
Starting point is 00:34:08 as part of a Siri overhaul in its upcoming iOS 27 operating system update. The assistant can already tap into ChatGPT through a partnership with OpenAI, but Apple will now allow competing services to do the same. The company is developing new tools to allow AI chatbots installed via the App Store
Starting point is 00:34:28 to integrate with the Siri assistant. The chatbots will also work with an upcoming Siri app and other features in the Apple Intelligence platform. I'll just quickly share my perspective here and then, you know, let you riff off it, Ranjan. People were, and I initially took this as maybe Siri is saved, but then I realized that it's just going to be the same disappointing user experience that you use to find ChatGPT in Siri. So in other words, same Siri, new stuff you can access with it, not a Siri that is actually, you know, a distilled version of Claude or something like that.
Starting point is 00:35:06 And that just makes me even further disappointed in what we're going to get on the iPhone in terms of an AI assistant. I will refrain, I will try to remain calm. But this was, this made Spud look like pure poetry of a name, because when I'm reading the announcements, I agree, there's been a lot of like very promising things about Siri that have made me question my hatred for it and think there is a possible future again. Like, this got even more confusing, because Gemini being the kind of base foundation of Siri and actually making it valuable is kind of exciting. This not only, like, when's the last time you actually used the ChatGPT integration in Siri?
Starting point is 00:35:54 I don't use it. I mean, why would I do anything but go to the ChatGPT app? Yeah. Yeah. So, but then, like, the actual technology here, like Siri actually having connected apps, or kind of, I think they called them maybe skills, there was something for many years, like trying to actually integrate specific functionality within apps has existed within Siri. So like just having that as the query,
Starting point is 00:36:21 I guess it'll be a little less friction if it means that Siri will then be able to directly read out the answer to you or have it appear rather than opening up the target app. But like, I don't know. This is, yeah, I agree. This is horrifying. Like, if this is still where their heads are at, Siri has to be able to be good enough to compete with those apps. Like that, it just has to.
Starting point is 00:36:44 It should not trigger some other thing. And I thought that's what they were working on. So the fact that this is going to be some kind of thing. And actually the last line, this approach should allow Apple to generate more money from third-party AI subscriptions through the App Store. That was the most depressing part, that if this is just like, well, buy your Gemini through the App Store and then we're going to take a cut.
Starting point is 00:37:06 This is, this is troubling. I don't know. We'll see. I'm still, I'm going to give them, give them a chance here, but not positive. Two months till WWDC. So let's see. But not very enthusiastic right now about what's going on. Although, go ahead.
Starting point is 00:37:25 I was going to ask, do you think at WWDC they're going to have like a well-fleshed-out vision of what Apple Intelligence is, what Siri is? No, I wouldn't be surprised if we didn't hear anything about Apple Intelligence this year as well. Well, but even the Siri overhaul and stuff, right? Like, you think there, no? Nothing. No. Maybe a few minutes.
Starting point is 00:37:49 They have to release products. They have to let the products do the talking. They cannot tell us again about what's coming. That seems to be. I agree. I agree. Yeah. Okay.
Starting point is 00:38:00 Agreed. Let's go to our other friends over here in Silicon Valley, Meta and YouTube. This is a pretty big deal, actually. The case, a court in California found them both liable for harming a young user with their features. This is from the New York Times: Meta and YouTube found negligent in landmark social media case. It says the landmark decision could open up social media companies to more lawsuits over users' well-being. Meta must pay $4.2 million in combined compensatory and punitive damages, and YouTube must pay $1.8 million. The bellwether case, which was brought by a now 20-year-old
Starting point is 00:38:43 woman who had accused social media companies of creating products as addictive as cigarettes or digital casinos, and said they led to anxiety and depression. And the court found in this person's favor. And there are thousands more of these lawsuits coming through. Now, I was on CNBC just as this happened, and I was like, you don't want to lose these cases because you're going to have others. And the other panelist who was with me was basically like, you know, this is a win for them because the amount was so small. It was only, I think, just a few million dollars, six million total. But when you lose, you open the door for other losses. And some might see the award and say, if they got that, we can get even more. And lo and
Starting point is 00:39:29 behold, Meta stock has just tanked over the rest of the week. So how do you read this, Ranjan, in terms of the potential for successive liability for Meta after this problem, after this loss? But hold on. I'm trying to, because I had also seen that they're ordered to pay $375 million in civil penalties. Yes. So that's a separate case.
Starting point is 00:39:51 They also lost this week in New Mexico, which for some reason wasn't talked about as much. And I have a theory as to why it wasn't talked about. Okay, hold on. Walk me through, because, and just to note, like, Andy Stone, the head of comms over at Meta, I'd seen a tweet where basically he was like, it's not that much, it's a fraction of what the state sought, even for the $375 million, which was just terrifying to me. The state sought $2 billion. But the point is that, like, if you lose, you open yourself up to further losses. So that's, that's to me, so I guess I can sort of give my perspective here. The issue here is that the ruling is basically telling them that you can't use Section 230 as a shield anymore. It's not necessarily that you're being sued over the content on your platforms. You're being sued directly over the way that you design them. And courts are finding that, yeah, you can't hide behind Section 230,
Starting point is 00:40:50 which protects, like, forum owners from the content that people put on top of them. And now we could potentially see thousands of similar cases. That is a problem. Yeah, I think, I mean, and honestly, this is something since 2015-16 I've been hoping would be recognized. So it is, I mean, 10 years late, but still, I am very positive about it. But I think, like, especially, yeah, it not only being liable opening the door, the fact that the smaller case still found that, and the reporting said the finding validates a novel legal theory
Starting point is 00:41:36 that social media sites or apps can cause personal injury. I mean, that's a huge deal. Like, it's that the actual being liable, not like for the actual injury side of it, I think gets really interesting. How do you see this playing out? Like, for at least a decade, every one of these has come and gone and Meta keeps Meta-ing. And Instagram, every single person I know just all day long scrolls on it. Like, they're still doing what they've always done and have been doing a very good job at it. So how do you see this? Do you see this actually affecting their business? I saw a great interview with a law school professor this week where the law school professor, and his name escapes me,
Starting point is 00:42:19 so apologies, was basically like, Meta has to appeal this and they will appeal this, because this is effectively setting precedent in the country for whether Section 230 can work to protect you or not. And the court just found no. So the way that this professor sees it going is that it goes all the way to the Supreme Court, and then the Supreme Court rules, you know, specifically on the boundaries around Section 230. And if the Supreme Court, and I'm just saying, like, let's say it goes the way that this guy thinks, if the Supreme Court rules that Section 230 is not protective, it's not just the thousands
Starting point is 00:42:57 in action now. It could be even more that come in. And, you know, I mean, I guess that's somewhat concerning in terms of like, all right, if you're a content business, are you now liable? Like, you know, we have the Discord, and luckily everybody's pretty well behaved and happily contributing there. But like, am I now liable for everything said, you know, in our Discord instance? It just opens up, you know, this Pandora's box that could,
Starting point is 00:43:22 you know, cause damage to the Internet and especially to Meta's business. And on Meta's business, just one more thought here: they are spending a lot of money this year, $115 to $135 billion, on AI infrastructure. And the reason why the market has, quote unquote, allowed them to do that is because they're making all this money. If you start seeing your margin trimmed by, like, sort of death by a thousand needle pokes from these lawsuits, then all of a sudden your ability to spend on innovation goes down, because your margin comes down and the market doesn't give you the leeway that it might have otherwise. So that's a potential problem there. Okay.
Starting point is 00:44:00 So a couple of other thoughts. See, I guess it's just been so long where they have not been affected that it's just still, it's almost unbelievable for me to think that they will or could be. I mean, it's been years and years. When was Mark Zuckerberg in front of Congress way back? That was like 2017, 2018. Yeah, like, remember good memes back in the day.
Starting point is 00:44:25 But like, I think I was there. Oh, you're, okay. So yeah, it was so long ago. But I think a couple of things jumped out. One, the cigarette analogy is interesting to me, because, like, I saw some interesting, you know, like, is that really the right analogy? I think there was an op-ed in the New York Times on this. Basically the idea is, like, there is good and there's bad. So it's not like cigarettes, which, I mean, maybe you can argue there's good, but like in general, I think most people are not even pretending there's any true good out of it, versus like social media can be a net societal positive. It can also be very negative. To me, though, and I've written about this a lot, like the algorithm is the cigarette or the tobacco. Like it's not the content. It's not even like the core technology of posting a photo. It's just algorithmic-based content. And I think if this finally gets people back to talking about that
Starting point is 00:45:25 is a danger, I'm very, very happy about it. I think it's good. But yeah, I'll, just like fixing Siri, I'll believe there is a material impact on Meta's business when I see it. I know we haven't been in the courtroom, but I'm curious if you agree with the verdict here, because the Meta argument is: teen mental health is complicated. It doesn't come down to one app. You cannot blame everything on a single app. I mean, obviously, in some cases, they're contributing to teen mental health issues. But then again, there is some merit, I think, in saying that, like, there's a combination of factors and not just one culprit. What do you think?
Starting point is 00:46:12 So I will say, and again, this has been a long, long rant. I think it was like 2019, we'd written in Margins five ways to fix social media. One of them I still love, which we'll never get, but it's that the timeline should be reverse chronological by default, so there's no algorithm suggesting the content. Because to me, there is one culprit. It's the algorithmic recommendation of content. That's it.
Starting point is 00:46:35 Like whether it's on YouTube, whether it's on Meta or Facebook, whether it's on Instagram, TikTok, that's the entire platform. That's what radicalizes people, makes them feel bad. So I do think there's one culprit here. I think it is interesting. Wait, hold on. You're saying that this person's mental health issues, you would say, are entirely due to the algorithm? I mean, that's like saying, is smoking responsible for lung cancer? Or could obesity or environmental factors and air quality be? And it's, come on. We all use social media. We all, like, everyone, like, it's just, I don't know, I guess, I do love, like, a lot of the times I have friends who are like, I'm not influenced by social media.
Starting point is 00:47:28 I'm not influenced by the ads. The posts don't actually make me feel like I'm missing out on something or I need to improve my vacation. But to me, I don't know. Is that not the most crystal clear thing to you? Well, I guess this is sort of, I see your point. It's like the counter to Meta's argument: it's not that smoking cigarettes leads directly to cancer, it's that cigarettes are a known carcinogen, so your odds of getting cancer go up. And then, therefore, in many ways, the cigarette companies are liable for sort of the additional cancer cases that they cause, even though you can't draw a straight line one to one. And maybe there's, there's a similarity with, like, are more kids depressed today because of social media? If you can
Starting point is 00:48:18 prove that, and it's tough, then, I'm just again talking through these arguments. Yeah, yeah. No, I think it's a good point. Like, causality is very difficult to prove, which actually, now that we're talking, I do think makes this really a big deal. Because causality, especially in this case with mental health, feels nearly impossible to prove. I don't know. Maybe you could, like, based on their usage statistics, somehow start to draw more of a direct correlation of that specific use,
Starting point is 00:48:55 especially if you're looking at individuals. But I mean, anyone who has clicked on the YouTube right rail of recommended videos, it's, anything, like, it just exacerbates, exaggerates, like, radicalizes in many cases. I mean, it's just designed to make you feel. And the easiest way to make people feel and stay engaged is to make them not feel great. Maybe. But they keep coming. But you, let's say, yeah, people keep coming back. I don't know.
Starting point is 00:49:28 I want to, I, I call it doomscrolling, and I want to doomscroll. Everyone, that's my choice. That's what a good addictive, if you want to sports gamble, bet on sports; if you want to vape; if you want to smoke cigarettes, whatever your vice of choice might be, like, it's similar to me. I know, do you want to know one of my hot takes? Yes. This is a, this is a, so Twitter changed from reverse chronological by default in the spring of 2015, and everyone's default feed was algorithmic. Right.
Starting point is 00:50:11 And what happened through 2015 into 2016, or hold on, let me get the exact date. Yeah, that that is responsible for the political climate globally right now? Yes. Okay. First of all, a couple of things. Number one, what you described, you know, somebody smoking and vaping and sports gambling and spinning through reels, that's basically my weekend, where I've got the vape and this, you know, putting in 20 on the Packers.
Starting point is 00:50:47 No, look, one thing I'll say about the whole algorithmic thing is that was, that's always going to be weird for me personally, because I was a reporter at BuzzFeed at the time, and I got the scoop that Twitter was moving to an algorithm. That was on, like, a Friday. Hold on, February 2016. Yes, even more in line with my theory here. February 10th, 2016. Well, I mean, look, if you could say that Trump, I mean, out of all the candidates, he played social media well.
Starting point is 00:51:40 But I think the thing that really put him in office were those debates where he just, I mean, Shane Gillis has like a pretty good bit on this. Just that, like, you know, he's like, you know, one candidate's like, I'm Rand Paul and I believe in schools. And then Trump was like, you're a complete loser. And everyone's like, you could do that? No, but it traveled more because of the timeline. Yeah. And then it traveled because of the, you're right. That's interesting. Yeah, it's possible. I'm not saying it's impossible.
Starting point is 00:51:56 People would have written it off. That was a weird election year also. I mean, not to bring us all the way back to it, but like I definitely, you know, when I was also at BuzzFeed, I did some reporting on Trump rallies and, you know, got retweeted by this Tennessee GOP account that was like, you know, the mainstream media will never show you this. Which was funny because, A, like, I was part of, I don't know if you call it the mainstream media, but the media at the time, and B, that that account, Tennessee GOP, which was massively followed and influential during the election, was run from St. Petersburg. But that's a different story. Like, we could talk about that another time. What a time.
Starting point is 00:52:35 Good, good times. So I got that scoop that there was going to be this Twitter algorithm on Friday. And then there was a big thing that happened. It was called Rip Twitter. I don't know if you remember that. Like a million people tweeted RIP Twitter over a weekend after my story came out. And that led Jack Dorsey to say we were never planning to introduce an algorithmic feed next week. And then my mentions flooded with people saying, you're a liar, your career is over, how does it feel to have no credibility?
Starting point is 00:53:06 And I thought I was totally gaslit. I thought I was done. And then they made the announcement that Tuesday, the following Tuesday. Jack. Just, I mean, just reporting on that company. That was crazy. All right.
Starting point is 00:53:22 Yeah. So that's my theory. I'm sticking to it. Okay. Should we talk about the tech stocks? Very rough week for the tech stocks. This is from CNBC. The tech stocks suffer their worst week in nearly a year driven down by war worries,
Starting point is 00:53:37 Meta legal woes. Microsoft is 30% off its high. 30%. Do you think this is just war? Or is it like a growing uncomfortableness and unease around the spending and the lack of near-term profits from AI for these guys?
Starting point is 00:54:01 So I do think what is happening this week is very important. And I think the kind of like headline side of it is, yes, like the market's been getting creamed this week. Like tech stocks have been on like epic runs anyways. So like giving a little back feels like a pretty natural thing. But to me, because of all the circular financing that's at the foundation of a lot of what's happening in AI right now, because of all the kind of like follow-on effects, if the tech giants start actually being in a little bit of trouble, what does that mean for the industry writ large? So I think like, and again, Microsoft, I think, could be a whole other segment
Starting point is 00:54:47 is in terms of why are they not doing as well as the others. Give it to us in 60 seconds on Microsoft. Well, no, I mean, I think it's clear. They have fallen behind. There's nothing exciting coming out. They have the install base of everyone using Copilot, but people are not paying up and converting to paid subscribers in any meaningful way. They just replaced Copilot leadership.
Starting point is 00:55:16 So, like, on the whole AI thing, it is pretty crazy that they had, like, OpenAI. They were the partner at the beginning, very early, and now still they're not really anywhere notable.
Starting point is 00:55:32 What's the last exciting thing from Microsoft in AI that you can think of? Bing. I mean, they had Bing. They actually they were the first to, they could have pushed this
Starting point is 00:55:45 through. Yeah. I think everyone's kind of coming around to it. And maybe it'll just be a good wake-up call. I'm sure, given their install base, given, like, who they are, if they figure this out, they will be a force. But I think, like, I think there the market is recognizing it a bit. Okay. Let's end this week with one of our traditional product slash feature funerals.
Starting point is 00:56:11 And ladies and gentlemen, we're gathered here today to pay our respects to the short and quite eventful life of the OpenAI adult mode, which has left our world indefinitely and doesn't seem like it's coming back anytime soon. From the Financial Times: OpenAI has shelved plans to release an erotic chatbot indefinitely as it refocuses on core products following concerns from staff and investors about the effect of sexualized AI content on society. Sam Altman's startup has already delayed the release of its adult mode amid internal discussions over whether to scrap the mode entirely. The sexual chatbot faced growing
Starting point is 00:56:51 pushback over how it could encourage unhealthy attachments to AI systems and expose minors to problematic sexual content. Rest in peace, adult mode on ChatGPT. Perhaps our planet is better off that you never saw the light of day. How do you feel about this, Alex? Companionship has been one of the cornerstones of the conversation about the future of AI. Well, I don't want to be the morality police and say you shouldn't be able to, like, have cybersex with your chatbot. But speaking of businesses that OpenAI shouldn't be in, this seems like one of them.
Starting point is 00:57:30 It just opens up this whole can of worms. I think this is the right choice. What do you think? Man, unquestionably the right choice. Like, we've brought this up. The moment they said enterprise, I was like, you cannot have erotic chatbots running around and, like, then, uh, still pretend that people are going to trust you. But, but again, I don't know. Maybe they could have done something interesting. Maybe this focused all the creativity in the industry.
Starting point is 00:57:59 Are we losing the weirdness of Sora and potentially erotic chatbots now that everyone's just making Claude? Well, speaking of, like, you know, potential competition, it does open up the door for other chatbot providers to use some of the underlying technology and make this erotic chatbot of their own. Just because you can't use it within the ChatGPT interface, maybe you can use, like, a GPT-based adult mode chatbot, and you can make a pretty good startup that way. Question. And with the disclaimer that we are not lawyers and will not pretend to be: given the ideas around liability in the social media use case, should an AI company be responsible for the end content created with its APIs?
Starting point is 00:58:54 They have control over the, and again, I mean, actually this ties back to the Pentagon and the war question, but like, should they be responsible? For adults, no. Like, adults should sign off that they don't know where this is going to go, and they shouldn't be liable. But for kids, absolutely. What do you think?
Starting point is 00:59:13 Well, hold on. You're saying OpenAI, if some other service is calling their models via API, and then it's adults having erotic chatbots, is that OpenAI delivering that service and content? Should they be responsible for whatever happens? And yeah, definitely, if then kids are using this, that's a whole other thing. And should the service be liable? Should OpenAI also be liable? Okay, so two separate questions. Yes, that's great. That is a great question, because it's a little bit different than, like, the comparison is cloud hosting, but it's a little bit different than cloud hosting, right? Because it's like cloud hosting is, you store your stuff here, and that enables you to do what you want to do. Whereas, like, a chatbot is, like, you're using this technology to do what you want to do. No, no. And it's actively generating new content. I think it's the person, sorry, I think it's the person that deploys it. I don't think OpenAI should be liable if somebody else uses their technology. I think they should have terms of service, because they want this technology to have a good reputation. Remember their polling issues. But I don't think they should be legally liable if somebody else deploys it in this way. Okay.
Starting point is 01:00:28 Then in the terms of service, do you think they're going to kind of, like, prevent others from creating erotic chatbots? Probably not. I mean, because if you think about it, it's actually... Are you thinking what I'm thinking? Do we have to call it, like, make our own version of this? No, no, no, no. Because...
Starting point is 01:00:48 Maybe, maybe, but... If we do, you know what we're calling it. Wait, what? WetChat? Oh, no, stop. No, no, no, no, no, no. I was going to say something in a whole different direction about... Imagine if it's a brilliant maneuver
Starting point is 01:01:09 that they could actually, like, see a ton of API-based revenue of basically everyone else creating erotic chatbots, and then put that under the umbrella of kind of like enterprise revenue, and have hockey stick charts about look how fast our enterprise and API business is growing, because that's technically enterprise,
Starting point is 01:01:27 but I can't think anymore, because, I mean, it would be diabolical. A diabolical plan. I think you have to explain, to anyone who missed last week's episode, the context of this app name. All right, folks. So last week at the end of the show, we talked about dry chatting, which is where you practice a conversation with a chatbot before you go in and do it live with a real person. And so, of course, if you don't do that, the actual live chat with a person wouldn't be called a dry chat. It would be called, it would be called a wet chat. And it's just disgusting to think about that.
Starting point is 01:02:21 But I'm just saying it'll be a good name for an adult chatbot app. I've gone over the line. I think we should make this our second episode that ends with that term that I just cannot bring myself to say, and then hope next week it doesn't. Yeah, that will be our base hope, that we can end an episode without bringing that up. But if I had to bet, I would say I doubt it. All right, Ranjan. Thank you so much for coming on.
Starting point is 01:02:54 50-50. All right. See you next week. All right, everybody. Thank you for listening and watching. On Wednesday, Greg Brockman, president and co-founder of OpenAI, is going to come on and share lots of new information about OpenAI. Don't miss it.
Starting point is 01:03:10 Thank you again. And we'll see you next time on Big Technology Podcast. Getting ready for a game means being ready for anything, like packing a spare stick. I like to be prepared. That's why I remember 988, Canada's suicide crisis helpline. It's good to know, just in case. Anyone can call or text for free, confidential support from a trained responder anytime. The 988 Suicide Crisis Helpline is funded by the Government of Canada.
