Big Technology Podcast - Anthropic’s Mythos Dilemma, Violence Against AI, Tokenmaxxing at Meta

Episode Date: April 10, 2026

Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) Anthropic's new Mythos preview 2) Is Mythos marketing or a legit breakthrough? 3) The Mythos sandwich guy story 4) OpenAI and Anthropic's brewing 1st party vs. API conflict of interest 5) The Meta-Harness 6) Violence against AI on the rise 7) Maine is going to pass a data center moratorium 8) Was Medvi really a $1.8 billion two-person startup? 9) Tokenmaxxing is all the rage --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 Anthropic's big new Mythos model is here. Is it real, or is it marketing? Violence breaks out against AI, and engineers at Meta and elsewhere are competing for who can burn the most tokens. That's coming up on a Big Technology Podcast Friday edition, right after this. This episode is brought to you by ServiceNow. If you want to see where enterprise AI is actually headed, Knowledge 2026 is the place to be.
Starting point is 00:00:22 It's ServiceNow's annual conference, May 5th through 7th in Las Vegas, where thousands of business and tech leaders come together. Expect headline keynotes from ServiceNow chairman and CEO Bill McDermott, real stories from companies running AI at scale and major partnership announcements turning AI ambition into actual business results. I'll be there in person sitting down with some of the most influential voices in the space and we'll be bringing those conversations back to you here on Big Technology. Welcome to Big Technology Podcast Friday edition where we break down the news.
Starting point is 00:00:57 In our traditional cool-headed and nuanced format, oh, we have a great show for you today. We're going to talk about whether Mythos, the new model from Anthropic is Real or marketing or maybe some combination of both. We're going to talk about this new surge of violence that's breaking out against AI and why you should probably be taken more seriously. We'll also talk about this now infamous 1.8 billion,
Starting point is 00:01:17 one or two-person startup called Medvi, and whether that heralds a new era or is just a bigger scam, then we're used to, and we're also going to talk about token maxing, which is the act of basically burning as many AI tokens as you possibly can, and maybe that's good or bad. I don't know. We'll figure it out at the end. Joining us as always is Ron John Roy of Margins.
Starting point is 00:01:37 Ron John, welcome back. Good to see you. I'm happy to be back. And yeah, Mythos is here. What a week to come back. Mythos is here. Yeah, mythos is here. The people have clamored for Ron John's return.
Starting point is 00:01:50 He's made his return at the perfect week. I am mythos. I am mythos. Because, yes, we have, I think, a very, very good-named model coming from Anthropic. And it kind of goes to the heart of the matter, because the question is, is this good branding really most of what we're seeing? Or is it actually a step up? Is it something that deserves the mythos name in its own merit? Let's talk about the new model, because Anthropic has positioned it as something that is so dangerous that it can't
Starting point is 00:02:20 release it to the public. This is from the Wall Street Journal. Anthropics set to preview powerful mythos model to ward off AI cyber threats. Anthropic is taking steps to arm some of the world's biggest technology companies with tools to find and patch bugs in their hardware and software. The company is making a preview model of its new AI model
Starting point is 00:02:38 called Mythos, available to about 50 companies and organizations that maintain critical infrastructure, including Amazon, Microsoft, Apple, Alphabet, and the Linux Foundation. Cybersecurity researchers and software makers worry that artificial intelligence is becoming so good at exploiting vulnerabilities that it could cause
Starting point is 00:02:57 widespread online disruption. Security experts have predicted that AI models will discover an avalanche of software bugs and it looks like Mythos is capable enough that it's been able to find so many exploits that Anthropic has no plans to release it to the general public a model so powerful and so dangerous that can't possibly be placed in our hands. I think we're going to really get into like whether this is
Starting point is 00:03:24 a true step up or whether this is more sort of, I don't know, disaster porn marketing from Anthropic, maybe a little bit of both. Ron John, what's your reaction to this news? All right. Well, we're going to get very into why I think this is marketing in just a moment, but I think at the high level, I have a whole theory, so get ready for this one, Alex. But at a high level. I mean, we've all been talking about what's that major next step change in foundation models. I mean, in the last year, actually, I think we've seen that the, how exciting the entire industry has gotten around the overall product and harnesses, which we'll also talk about, and all these other layers of technology around the model have actually been driving innovation. But it's been a while
Starting point is 00:04:15 since we've had anything really exciting on the pure foundation model front. And Anthropics certainly made everyone feel this week that something big is happening, like that they've really cracked something. But we don't know what it is because none of us have access to it. Right. So first of all, you know, we are going to speculate a lot on this show because we haven't used the model. Yeah. Because we're not allowed to use the model.
Starting point is 00:04:40 And only this group of select companies and institutions can. But we can definitely talk through the arguments for why it might be marketing, why it might be a breakthrough, and you and I can both weigh in here. And I think there are some good arguments for and against. So first of all, you could look at the fact that this has been a product of this ever-growing attempt to build bigger data centers and train on more powerful chips. and there's a chance here that maybe what Anthropic has done is just use this scaling rule or scaling law of AI models and just say, all right, these things get better. As you scale it up, the conversation around Mythos before this all happened was that it's been trained on a cluster larger than the Opus model. So it's a bigger model than Opus and would naturally see a step change improvement. Not only that, Anthropic has this consortium of companies that have agreed to try it in beta.
Starting point is 00:05:42 all coming out basically under the same, you know, umbrella agreement that this thing has found many cybersecurity vulnerabilities. As, as this user Sporatica on X points out, are they all teaming up to lie about mythos? Are they all coming out and saying, yeah, we'll participate in this cybersecurity consortium for just a standard run-of-the-mill LLM? I mean, the company names are wild. AWS is there, Cisco, CrowdStrike, Google, Nvidia, Microsoft, the Linux Foundation, Palo Alto Networks, JP Morgan Chase, Broadcom.
Starting point is 00:06:19 Like, do they all have AI psychosis that they're coming out here and saying, actually, you know, you know, this sort of iterative model is powerful enough that we'll sign on to be part of this consortium, which has a great, a great name, the Glass Wing project. So what would you say to that before we start going through some of the holes in the argument? See, as we get into the marketing, do you know what the glass wing is a reference to? I had to look this up. Oh, you tell me. Oh, it is the Greta Oro Butterfly that has wings that are transparent,
Starting point is 00:06:50 and you only see the veins as opposed to actually having the traditionally colorful wings of a butterfly. And to denote transparency, that is why it's called Glasswing. I find that one kind of fascinating. And of course, Anthropic is just killing it on naming everything unlike Spud from OpenAI. but that's a different story. I think in terms of like, so the security vulnerability thing is fascinating to me because the whole security conversation has, it hasn't been front and center of how AI is going to potentially exploit all existing software.
Starting point is 00:07:25 So I think it's good that it starts being brought about. But actually it was in Tom's hardware. There was a really good piece around there was actually, they said thousands, but there's only actually 198 manual reviews in terms. of actual software exploits and a lot of it was done on a lot of it was found on older software or were exploits that cannot actually be executed in any feasible manner so it still lived more in a theoretical way so i think like there's there's only a little bit of information that has actually
Starting point is 00:07:57 been provided by anthropic there is this entire you know like consortium of companies all of whom have a massive interest in AI succeeding and being, like, reaching its promise. I'm not saying there's like some mass conspiracy, but I'm also saying, like, when you have Nvidia on Palo Alto networks and Microsoft and Cisco and Crowdstrike and Google, everyone wants AI to be this like epochal generational transformational thing. So like, I don't know. It's, to me, I don't like all of this. when you're not actually able to see anything.
Starting point is 00:08:38 And to me, otherwise, then we don't need to know this. Like, just do this. Have some meetings. Be careful. But you don't need to, like, here is mythos. It sounds like an Avengers movie. And in the end, we're just having to sit here and just kind of try to speculate about it. Wait, hold on.
Starting point is 00:08:56 But is there any other way? Like, let's say they did actually come up. Let's say they're telling the truth, right? How would you want it to play? Do you want them to do it in secret? you'd want them to release it. Like maybe this is a responsible middle ground. I would not want, then don't IPO.
Starting point is 00:09:12 Don't raise more money. Stop. If this is so, we've had this conversation forever. Like, if this is so truly dangerous and you're sitting here on the precipice of like the destruction of humanity, take a breather. And you can say, I saw some people arguing that this is taking a breather. But honestly, I was hearing from. someone that like right now open AI and anthropic are in like a death race to who can get out first
Starting point is 00:09:40 in terms of their IPO like it is just and everything when you start thinking in terms of that kind of framing you just see this stuff it's hard to not but like everything is just about we are sitting on this like world changing technology that is so far advanced than everyone else and like we have to do something about it like I don't know do you do you how would you, do you think this is responsible? And this is the most responsible, not self-promotional market driving approach to actually releasing the mythos model? No, look, clearly it's self-promotional. I'm just saying that if, if mythos is this unbelievably dangerous model, I think this would be a responsible process to release it. But I also
Starting point is 00:10:29 think there are some holes in the argument. I'll go right to Tom's hardware. they say Anthropics Claude Mythos isn't a sentient superhacker, it's a sales pitch, claims of thousands of severe zero days rely on just 198 manual reviews. So they write, mythos might be good at finding vulnerabilities in software, but many of them aren't as potentially damaging as Anthropic wants us to believe. The big Project Glasswing blog post report on Mythos from Anthropic claimed its new model had found thousands of high severity vulnerabilities. But it's not clear how realistic those vulnerabilities are and how many of them aren't actually exploitable or even how problematic they are. In a case of this one vulnerability, F-F-M-P-E-G that's existed for 16 years, Anthropics' own analysis of the release suggested the bug is ultimately not a critical severity vulnerability. It would be challenging to
Starting point is 00:11:31 turn this vulnerability into a functioning, exploit. Mythos also reportedly found several potential exploits in the Linux kernel, but was unable to exploit any of them because of Linux's defense in-depth security systems. There's also this subheading, several thousands more. Inanthropic states it can't actually confirm all the thousands of bugs that Mythos claims to have found are actually critical security vulnerabilities. It's just extrapolated that number from having found it in around 90% of these 190. manually reviewed vulnerability reports. It's all in the documentation that Anthropic provided.
Starting point is 00:12:11 I mean, that is something that really points to it being more of a hype piece than not. And then, do you want to get into my grant's theory of, I know on this show, I often look at everything from a lens of a comms professional. I know I think I've been rubbing off on you a little bit, but do you want to hear my theory? Okay, so I had to map this out because I was like, this just feels so coordinated. So on April 7th, at 206 p.m., Anthropic releases their first announcement of Project Glasswing and the Mythos model. And then they have the system card available. They start kind of tweeting through at 2.15 p.m.
Starting point is 00:12:57 They make the system card available. The system card basically is, I think it's like a 70-page PDF, or maybe it was 250 pages. There's like one tiny footnote. Did you hear like I think you had mentioned, but basically there's this story going around how Mythos broke out of containment and emailed one of the researchers
Starting point is 00:13:19 while they were on lunch eating a sandwich. So like this gets picked up everywhere that they're eating a sandwich and Mythos has not been given the ability to email someone and somehow has broken out of containment and has emailed people and emailed this researcher.
Starting point is 00:13:38 But so the system card, it's this tiny footnote in a 250-page document. But then 232 p.m., 15 minutes later, 17 minutes later, Sam Bowman, the researcher, writes this 20 tweet thread about mythos, and then in one of those, he says, I encountered an uneasy surprise when I got an email from an instance of mythos preview
Starting point is 00:13:59 while eating a sandwich in a park. That instance wasn't supposed to, have access to the internet. So in this perfectly coordinated way within 20 minutes of each other, so you know you're not writing out this entire tweet thread, both Anthropic and Sam Bowman. All of this was prepared. And then every, there's a ton of publications that start publishing this within the next hour. And everyone focuses on that sandwich detail, meaning that there was some kind of coordinated PR effort. And it stuck. Everyone's like, I've heard from friends like, holy shit, did you hear?
Starting point is 00:14:33 Like, it was like emailing people while they're eating a sandwich in a park. Like, it was such a good detail and it got picked up. But it was such a coordinated PR effort. Now, did that happen? I would hope, yes, for how much attention they brought to it. Is that good? And what does that mean? That's a whole other discussion.
Starting point is 00:14:52 But it's like they are coordinating PR around these kind of details to spread this. The fact that they did that around the sand. They want that to be the story and they got it to be the story. So why do they want that to be the story? That's that's my rant, but that's my mapping. What do you think? Well, it is definitely a story similar to many that Anthropic has told us before about these AIs sort of having a mind of their own and the dangers around them
Starting point is 00:15:22 trying to hack their benchmarks, for instance, which is something that Anthropic has been very vocal about. I think that story hit because it's such a human story. a human story. Like, think about how different that is from, like, we went 99% on the solve bench 17 exam. Like, it's much easier to be like, yo, this model just broke out an email to do it eating a sandwich. Yeah. Like that I understand. In a park. In a park. Where else would you eat a sandwich? Yeah, I didn't know. Or else. Absolutely not. So, so that's, that, I get that. But you're right. The sequence of events, there's no doubt that this is meant to
Starting point is 00:15:58 burnish anthropics image in some way. I would just ask this, do you think the two of us might be in our skepticism here? And we have been reading many of these announcements with like there's a PR, the PR element to it, which of course, it's an announcement. Are we suffering from some sort of, what we call it, AI derangement syndrome where we're where we are not, I made this point earlier this week at a conference I was at. Like, you know, oftentimes skeptics can ask like what happens if it doesn't work. But sometimes you ask that so often you forget to ask what happens if it does work.
Starting point is 00:16:38 And so that's what I'm asking about the derangement syndrome. Do you think we're just missing the fact that maybe this actually was a step forward? And like at some point when there is a step forward, they're going to say it's a step forward. They're going to coordinate the PR. It's got to have a crazy story like the sandwich story. And I don't know. Maybe this is it. I do recognize this could have happened, but like the fact that I have to struggle to recognize that rather than just accept, well, obviously, if they're talking about it and everyone's talking about it, it happened is the problem for me.
Starting point is 00:17:12 And I am, I just can't help but be skeptical because that meant when you see stuff that perfectly coordinated in terms of timing, like again, 20 tweet. thread, 12 tweet thread within a few minutes of each other, the fact that people are publishing it, that meant there was press releases on embargo done before the entire thread. Like, it's just, like, you are choosing to push this specific narrative. Now, you can argue, maybe it's for the good of humanity that they're sitting around, and they had multiple meetings leading up to coming up with this strategy. And maybe you can argue, like, this is for the good of humanity. we want to make sure people are well aware of the dangers of this technology,
Starting point is 00:17:57 and we feel the sandwich story is the best way. Is that really what's happening? Do you think that's out of the goodness and the altruistic nature of the comms professionals at Anthropic, that's why they came up, or maybe the PR agency who hired it, or maybe Claude was so good that it came up with this strategy on its own, is it for the good of humanity, or is it because they raised a 380? billion dollar valuation round a month, two months ago. Now, let me tell you what I think is actually going on.
Starting point is 00:18:30 Okay. And it sort of maybe is in the middle of all these. And is it a little tinfoil hat type of what theory, potentially? Okay. Maybe it's somewhat conspiracy-minded, but I don't care. I think, I legitimately think there's a chance that this is what's happening. Okay. Think about what we've seen with Anthropic and Open AI recently.
Starting point is 00:18:52 Remember, these companies released Claude and ChatGPT originally as demos, as ways to show off what their technology is capable of so you might buy some intelligence metered from their API. Over the past three or four months, both of them have gravitated toward building a super app, something that uses the most advanced intelligence to control your computer. that will to help you get things done to, in some cases, even build new software for you, which has created this big sasspocalypse moment. And also, on the other hand, has helped them raise globs of money under $22 billion in Open AIs case, 30 billion in Anthropics case. This has effectively enabled the buildout that they are embarked on, which is going to help them raise more money and grow.
Starting point is 00:19:52 bigger and build bigger models. And so as these models get better, I think there is a question that is taking place within these labs. Do we take the intelligence, the most intelligent models that we've built? And do we keep them exclusive to our super apps, to our super agents? Or do we make them available to everybody? And I think there is maybe some hesitance there. and wouldn't it be interesting if the plan is,
Starting point is 00:20:24 instead of using these instances as demos like the Codex and the Cloud Code, they want to build their own products. And to do that, they want to have the best intelligence. And so therefore we might see more of these releases of, we actually did advance a model. Maybe it's not mythical, like a mythos would suggest, but it's definitely better. And we want to have the monopoly on the tools that will be able to use them.
Starting point is 00:20:47 This is from Martin Casado on Twitter. It's only a matter of time before only the model creators have access to the most powerful models. The rest get access to smaller distilled versions or access the models through first-party apps and services that don't provide direct access to the token path. This is my belief on what's happened. I don't not like that one. I kind of, okay, so I have always had, I mean, anyone, who sells investment advice at a price, it's never made sense to me because if it was so good, you would just use it for yourself and not need to sell it. Like when it's pure investment advice,
Starting point is 00:21:30 in this case, it could be the same thing. If your model is so good that it can create all the experiences and tools and destroy the entire SaaS industry, why would you give it out and worry about that rather than just kind of like taking over and owning all of human experience and all work and every you know like i i see what you're saying but then why glass wing why give it to google and everyone else why not just sit there and churn out the next 12 iterations of the product and let mythos you know might might do some might harm a few people within your own organization but the price of doing business? Like, why would you still roll it out in this way? Well, I think there's, you take a step there. And there might be real utility in having this
Starting point is 00:22:24 consortium look for these security vulnerabilities with you. Because ultimately, like, if you do put it in the hands of people through pod code, then you're going to, you know, potentially create these risks. Remember, Anthropic isn't giving Microsoft mythos to sell. through Azure. It's giving Microsoft Mythos to test. Yeah, no, no, fair. Fair. That's fair. So is Mythos as earth-shattering and life-changing and dangerous and exciting as it's been made them to be? I don't think so, but I also think it's not a nothing burger. I know it's kind of like the fool's way out. It's somewhere in the middle, but I really believe it's somewhere in the middle. That's, you know, gun to my head. That's what I believe. But I, I, I, but I, I, I,
Starting point is 00:23:14 I want to get, what do you think, you think it's a nothing burger? No, no, I, it's tough because the advances Anthropic has made. I mean, up into the opus four, five, four, six, like, they clearly have been doing something right. And it's been impressive over the last year, right? So like, if anyone is going to make it, but by the same token, I mean, we've seen so much back and forth between who is leading in what and is it going to be Jevinai 3.0? Is it going to be GPT5 was supposed to be? So it's hard to say that just because like past success is not an indicator of like we're going in the future. But if anyone should be positioned, it's still I have trouble given the overall context accepting that it is necessarily as grand as they say it is or important and is dangerous because there's so much incentive.
Starting point is 00:24:14 like for to make it out to be that and like the way they rolled it out i think it's been genius and i think it's just ahead of the IPO again i think i've been like when i think about they're in a death rate and again it was framed as like whoever gets out first like whoever comes second it's actually going to be in a terrible uh space and uh like when i keep thinking of everything in that framing, you start to see everything like pushing what is the best way to actually get to IPO quickly. And right now they have this mythos about them to have to go there. But I can't believe if you did that. I mean, come on. That's what they named. Okay. It was there for you. It's not spud. It's not spud. Not spud. Okay, just answer this for me. What do you think about
Starting point is 00:25:07 the competing first party and third power API businesses, right? What do you mean? I mean, their first party tools are going to be competing with the users of their technology via API. Isn't that a bigger deal now that this super app stuff is really? Yeah, yeah, yeah. No one's really talked about this. Wait, wait, so this is a good point.
Starting point is 00:25:28 The amount of revenue from the API obviously was kind of like the driving force before. now the kind of like main app surface has become a lot more. And we've seen like they shut down open claw access to Claude code, I believe, or sorry, before it was part of like your actual subscription. Now you're going to have to be paying by the token. That's a good point. Those two are more and more inherently kind of like in competition with each other. I mean, just take cursor, for example, right?
Starting point is 00:26:02 It's like, oh, you know, we're supplying. code code through cursor, codex through cursor. I mean, I don't know. I'm sure cursor still has a possibility, but still has potential. But the fact that we don't hear about cursor anymore because so much of this has moved inside and is almost like the canary in the coal mine, so to speak, or the signal of what's to come because, you know, again, super app. This is the way they want this to be a venue for AI to control your computer.
Starting point is 00:26:33 And when you do that, you know, all these companies that are paying for, you know, the API might not be so happy. And you have to sort of make a – I think you will eventually have to make a bet on what your business is. It's very tough to sustain both for a while and who do you want to have the best models in that case. Me. I mean, if I'm a first party, I'm like, I want them. Yeah, yeah, yeah. No, no, I let you. I think this isn't good – I have a feeling we're going to be talking of a lot about this as we kind of like go into the IPOs.
Starting point is 00:27:03 of these companies and just that whole process, because you're right. Like, there isn't some, it's not like a full intrinsic conflict between those two. They could just be different business lines, but there is a bit of, there's tension, certainly, between those two. And I also, though, I hate super app. I don't know. No one's going to be Wii chat in the U.S. It's super up.
Starting point is 00:27:26 I don't know. Do you remember, like, everyone wanted to be super app in the 2010s because you'd hear in China. But this is so different. So. Super app was like, oh, you open an app, you can do the lottery, you can do Uber, you can do payments, you can read the news. This is different. This is like a really super app. Super app, right?
Starting point is 00:27:45 It brings, I mean, it's just, it's, it's, yes, it's the same word, but it's a completely different use case. Okay. We need a different term then. Super app is too loaded for me. We need a, we'll think about it. Mythos. Mythos is a good term. Yeah.
Starting point is 00:27:59 Yeah. Yeah. Okay. So let's just predict the future here. Not like we know what's going to happen. There is an argument to be made that Anthropic will wait until Open AI releases Spud and then just put Mythos out there. And it's like distilled version or...
Starting point is 00:28:14 Actually, no, no. I... Is that going to happen? I like even better if the sequence of events is like Sam releases Spud. And again, if you weren't... Haven't followed or weren't listening last week, while Anthropics' codenamed for their kind of like incredible... model is mythos, open AIs code name internally for their next model is spud.
Starting point is 00:28:40 And if Sam takes spud and it's just like, you know what, this is like the most single dangerous thing that has ever existed in humanity. And guess what? Rolling out to U.S. users in the next 24 hours and international in the next 96, I think that'll be such a power move and the most Sam thing ever. And then they're going to have to follow it. I think they will. You got spudded.
Starting point is 00:29:05 You got spudded. Okay. So I think to be continued, right? Like, we'll really have to see what this model looks like and how it feels when we use it. But I think at least today, we've certainly presented the pro, like the foreign against arguments for like why this might be a step up or why this might be marketing. All right. Before we go to break, I want to hear about the meta harness. This is obviously, this is gravy for the harness hive.
Starting point is 00:29:30 Shout out to the harness hive out there. Everybody here with us. What is a meta harness, Ranjan? Okay, so Stanford just released a new study called the meta harness. And basically the idea, we have talked about this as one of the big trends. And Alex has been very uncomfortable with the term, but then came to embrace the term. And as we even, I guess, call our listeners the harness hive. But the idea is they've adopted this.
Starting point is 00:29:55 They have adopted this. In the comments, we always get your harness hive is ready. Arnes, where's Ron John? Arnes Ives waiting. Well, let's, okay. So, again, an agentic harness is the idea that you, and this is what has, I have been fired up about what I've been working on at writer since last July. Like, the idea that you have, like, a set of tools and connected data and, like, underlying
Starting point is 00:30:20 foundation models, but the harness is basically what helps control how agentic workflows are built, actions are taken. how data moves around, how outputs kind of are fed back into a system. Like the harness is that entire controlling layer. Now, Stanford came up with the idea of a meta-harness. It's a harness over other harnesses. It's the idea that you can change the harness around a fixed model and see a six-x performance gap on the same benchmark model.
Starting point is 00:30:55 So the idea is that the more you can actually improve that harness and actually have, like, AI working on building the harness and optimizing the harness, you can actually improve the performance of a foundation model. And it's in the whole product versus model debate that we've had for years now on the show, now introducing a harness is another kind of like surface in which this actually gets solved is interesting to me. But I don't know. I just love the idea that Stanford's got the meta harness and who's got the best harness.
Starting point is 00:31:28 Maybe mythos won't matter at all. It's all about who's got the best harness. Even though I do understand the harness conceptually, I still hate the word and I'll take it to my grave. I'm never going to endorse it. Harness hive, fine, but the actual, and meta harness is even worse. I mean, we've gone, we've really run the gamut here.
Starting point is 00:31:48 Mythos, good name, spud, bad name. Meta harness, I'm ready to throw my headphones out the window next time I hear that. I don't know. But it captures, it is what it is. Like, it explains what it's doing. It's harnessing all these tools and models and data and wrangling them somehow. Like a, I guess a harness is a horse term, right?
Starting point is 00:32:13 I mean. Yeah, horses, climbing. You can use it in climbing. Yeah. Other potential use cases of harness. We're not going to go there. I mean, maybe if you're quit chatting. No doubt.
Starting point is 00:32:25 Yeah. All right. We're going to go to a break. When we come back, we're going to talk about, God, we're going to go to a break. And when we come back, we're going to talk about some pretty concerning news about violence towards, you know, folks involved in the AI buildout and then token maxing. We'll be back right after this. If you think about it, most work isn't actually hard.
Starting point is 00:32:47 It's just repetitive, status updates, routing tasks, answering the same internal questions over and over again. These are the things that quietly eat up your team's hours every week. That's where Notion's new custom agents come in. Notion is an AI-powered, connected workspace for teams. Notion brings all your notes, docs, and projects into one space that just works. It's seamless, flexible, powerful, and actually fun to use. And with AI built-in, you spend less time switching between tools and apps and more time creating great work.
Starting point is 00:33:15 And now, with Notion's new custom agents, the busy work that used to take hours or never actually happened at all runs itself. What's interesting here is these agents don't just respond to prompts. They run on triggers and schedules. So once they're set up, they operate more like embedded systems. Try custom agents now at notion.com slash big tech. That's all lowercase letters. Notion.com slash big tech to try custom agents today. And when you use our link, you're supporting our show.
Starting point is 00:33:40 It's notion.com slash big tech. Notion.com slash big tech. Starting something new isn't just hard. It's terrifying. So much work goes into this thing that you're not entirely sure will work out. And it can be hard to make that leap of faith. When I started this podcast, I wasn't sure if anybody would listen. Now I know
Starting point is 00:33:57 it was the right choice. It also helps when you have a partner like Shopify on your side to help. Shopify is the commerce platform behind millions of businesses around the world and 10% of all e-commerce in the U.S. From household names like Allbirds and Cotopaxi to brands just getting started. With
Starting point is 00:34:13 hundreds of ready-to-use templates, Shopify helps you build a beautiful online store that matches your brand style. Get the word out like you have a marketing team behind you. You can easily create email and social media campaigns wherever your customers are scrolling or strolling. It's time to turn those what-ifs into
Starting point is 00:34:30 with Shopify today. Sign up for your $1 per month trial at Shopify.com slash big tech. Go to Shopify.com slash big tech. That's Shopify.com slash big tech. If a driver in your fleet got in an accident tomorrow, can you prove what actually happened without the footage? It's much harder.
Starting point is 00:34:52 So your insurance rates spike and you're stuck paying for it. That's why so many fleets choose Samsara's AI-powered dash cams, clear video evidence, real-time alerts, and coaching tools that help prevent accidents before they happen. Samsara AI helps reduce crash rates by nearly 75%. For instance, the city and county of Denver saw a 50% reduction in false claims against them and a 94% reduction in safety events overall. This is the kind of visibility that every operation manager needs. Don't wait for the next accident to take action. Head to samsara.com slash big tech to request a free demo and see how Samsara brings visibility and safety to your operations. That's samsara.com slash big tech.
Starting point is 00:35:38 Samsara, operate smarter. And we're back here on Big Technology Podcast Friday edition. All right, crazy story. This happened this week. No one paid attention to it. I don't know why. From NBC News, Indianapolis councilman says shots fired at his house and a "no data centers" note left at his doorstep. An Indianapolis council member said more than a dozen bullets were fired at his house Monday morning, and a handwritten note reading no data centers was left on his doorstep. In a statement, Indianapolis City Council member Ron Gibson said he and his eight-year-old son were not physically harmed, but they were awakened by the sound of gunfire. Just steps from where those bullets struck is our dining room table where my son had been playing with his Legos the
Starting point is 00:36:21 day before. The reality is deeply unsettling. This was not just an attack on my home, but endangered my child and disrupted the safety of our entire neighborhood. Pretty scary. And we talked recently about how data centers have become so unpopular in the United States. To me, this is sort of just kind of, I mean, first of all, just disturbing, and it never should ever come to this. But it does, not even but, and it does follow a trend of violence toward AI infrastructure, including this is from Polymarket, though I'm pretty sure I've seen news reports about these separately, food delivery robots in Los Angeles,
Starting point is 00:36:58 Philadelphia, and Chicago facing a rise in violent attacks from the anti-clanker activists. What do you make of this, Ranjan? Okay, I'm going to separate out the anti-clanker activists and food delivery robots from the data center question,
Starting point is 00:37:16 I think, is fascinating because so the story, I hadn't realized before that apparently Indianapolis is, there's like a number of state tax incentives. They've like grown 40 new data centers over the last few years. There's like a bunch of massive companies that are building out there. Every big tech giant is investing. So, so it's actually like acutely an area that is feeling this. I think like the biggest, to me, the most interesting or scary thing that happens is right now it's kind of like, are data centers
Starting point is 00:37:54 taking the jobs or taking water? But as, like, or if energy prices continue to rise, given what's been happening, if resources start getting constrained more, if water, like, there's so much around the resource side of it when it becomes, like, more tangible, that this stuff just gets a lot scarier. So I think, like, it is probably the clearest physical manifestation. Like, again, Mythos crawling around some wires and sending an email is interesting, but it's hard to, like, you don't see it. This is like, this is a giant building being constructed in the middle of your town.
Starting point is 00:38:42 I feel that these are going to continue to be, like, I don't want to use the word target, but certainly they're a visual representation of what's going on. Yeah. I mean, I wrote about this in Big Technology today, that these buildings can be faceless, they can be imposing, often are. And they're mostly symbols of tech's, like, interest in showing and delivering this technology despite the uncertainty it causes to people's lives. Like, if we hear the way tech executives or AI executives speak, they'll always say, like, yeah, there'll be some displacements, right? But, you know,
Starting point is 00:39:15 we think the benefits of the technology will outweigh the drawbacks. And sure, long term, they might. But we all know that the people that went through the Industrial Revolution didn't exactly have a good time despite the fact that we've all, you know, sort of benefited, you know, now that society has reoriented itself after that painful period. But people are growing increasingly upset here. I don't think they have a clear articulation of the benefits of this technology yet. And by the way, just before we went to air, this story broke. It's from Wired.
Starting point is 00:39:47 Suspect arrested for allegedly throwing Molotov cocktail at Sam Altman's home. San Francisco police arrested a suspect early on Friday morning for allegedly attacking the home of OpenAI CEO Sam Altman and making threats outside of the company's headquarters. OpenAI sent a note to employees about the incident early Friday. Early this morning, someone threw a Molotov cocktail at Sam Altman's home and also made threats at our San Francisco headquarters. Thankfully, no one was hurt.
Starting point is 00:40:12 We deeply appreciate how quickly the SFPD responded. I mean, this, I don't know. I think this is crazy. I just am sort of stunned that, like, people are actually being violent, you know, against these, I'll include the robots, the robots, data centers, and now the leaders. It is worrying because, like, again, to, you know, especially on the data center front, the way that this technology is advancing, all the labs have said, is by increasing the physical footprint of data centers. And now you have, you have violence
Starting point is 00:40:47 against them, and you also have political opposition against them. And it's like, obviously, you don't ever want to see violence anywhere. And above that, or on top of that, you may already see, we already see that the data center build out is slowing, maybe 50%, according to some reports, of the ones that are on target to be built this year won't be built this year. And this makes it even more difficult.
Starting point is 00:41:13 Yeah. Well, on that last point, I kind of feel you're going to see more and more, like, announcements about slowing data center growth or lack of actual follow through in terms of, like, planned data centers, and the Iran war or kind of like geopolitics or like access to the resources required will be front and center to those stories, separate from the actual demand, like, for the actual compute. So, so I don't know if that part is going to be interesting. I think, like, I mean, we're, we haven't even, we're in a midterm election year
Starting point is 00:41:53 I'm surprisingly, like, that part of it, I guess there's enough going on in the world, but like it hasn't really started heating up that conversation. But there's no doubt in my mind, AI is going to be front and center. And it just makes for such a good villain because we have talked about this plenty. The industry has not put the most likable people front and center representing the technology. There has not been a compelling story about how this is good for you, and all the people kind of front and center are telling you that half of jobs are going to be gone and this is going to be, like, it's going to be the most dangerous technology, yet it is making certain pockets of people ungodly rich. So I think it's a pretty, it's a pretty good villain. And no access to any of the upside on the public markets right now.
Starting point is 00:42:49 Yeah. Which is a problem. Not like that's going to be the main issue. But that's also one of the factors here. And we also talked about a few weeks ago, we talked about AI's unpopularity and its need for a public face that's going to rally support around it. Whether Jensen could be that person or not. Yeah. Man, it's just, we wondered, what are the downstream effects going to be?
Starting point is 00:43:08 And clearly they are. So I would say the violence is maybe a symptom of that discontent. But we're now starting to see the manifestation of it come to fruition. And of course, there's this bill that Bernie and AOC introduced about a data center moratorium. It could be national, and, you know, there's no chance of that passing. But state by state, you could see in the United States real pushback to this. And in fact, as I was doing my research and writing about this today for Big Technology, I found this story.
Starting point is 00:43:36 It's from CNBC. Maine is set to become the first state with a data center ban. Maine is poised to implement the first statewide ban on data center construction, a move that could clear the way for other states to adopt similar measures and pump the brakes on a growing industry. Lawmakers in Maine greenlit the text of a bill this week to block data centers from being built in the state until November 2027. Do you think this is going to happen more and more? It's happening.
Starting point is 00:44:03 Maine, I feel Maine would be, Maine's got a lot of land, but I guess there are water constraints. Yeah. I mean, here's my thing. Politicians read polls. The polls are terrible for AI right now.
Starting point is 00:44:19 Terrible. And unlike social media, unlike, let's say, software, you do have a say into whether this technology progresses because you can stop the data center builds because the data centers are so foundational here. Wait, that's interesting. And so whereas these companies were completely,
Starting point is 00:44:37 you know, sort of unencumbered by government when they were just, you know, building social networks, it's not the same thing. Wait, hold on, hold on. That's an interesting, like, angle on that, because, but social media, I guess you could push for regulation. It's just that everyone is too addicted to social media
Starting point is 00:44:57 and cannot stop using it so they don't want to. Actually, do you think that's the issue? That, like, for how, and again, this is my personal view, but how bad social media can be for society, but everyone got so addicted to it that by the time anyone was trying to regulate it, it was too late, versus most people still haven't, like, really felt what AI can do positively for them in their life. And the industry hasn't really explained it well. And that's why the fact that this is going to happen at the beginning, versus, like, if it was like if people very quickly in 2009 mobilized against social media,
Starting point is 00:45:40 it would be the equivalent of that. Yeah. Well, I think we know the polling shows that if you use AI, you're much more likely to be in support of it than against it. Um, but there's like two sides of it, right? There's like, do I use it? And then we don't really know what the job implications are. Now we all have a thought on whether AI is going to cause mass job loss or not. But you can also be in a situation where like you use AI and you like it and you also got fired because, you know, your boss thinks that they can do the same work with like three employees instead of 17. No, you're right.
Starting point is 00:46:16 That is a completely different element of it versus social media. But yeah, it's going to be whoever, and this is where how good Anthropic is at communications, given what we saw with Mythos and everything I outlined, just make people like AI a little more. Do something, do some of this, like, creative communication strategy and just make people be like, oh, AI is cool. That's all. I mean, I think they should. I think that, you know, in retrospect, their Super Bowl ad, even though they were praised for it,
Starting point is 00:46:51 It was kind of a miss because it ended up bringing down the category as opposed to making people excited about AI. Exactly. And then meanwhile, the Super Bowl last. And then you have Google trying to be super, like, emotional and sentimental, and, like, and still it was just the most random, like, not connected to Gemini ad imaginable. So yeah. Kantrowitz and Roy. Don't.
Starting point is 00:47:17 What? Yeah. Oh, you liked it? I was going to say, I didn't want to spend too much time on this because we covered it last week. But TBPN coming into OpenAI, like the argument. Oh, I was off last week. Yeah. The argument I would make would be, listen, these guys are great content marketers and AI needs good content marketing. So maybe it wasn't Jensen. Maybe it was the TBPN brothers all along. They, I mean,
Starting point is 00:47:49 Yeah, I know that was last week's news and I was skiing in Utah, but, man, that one doesn't make sense at all to me. They know how to speak to people who already love AI. They're not going to convince AOC to not build a data center, or, like, some, anyone who is, like, an anti-data-center activist already is not going to listen to TBPN and be like, now I get it. Now I understand. I don't know. No, no, no. The point is these, and I'm not, I mean, I made the argument against last week. So let me try the argument for this week.
Starting point is 00:48:26 The point is that these guys could help show those benefits of AI because they're AI literate and also somewhat likable. And do that on the content marketing side of OpenAI versus on the TBPN show. And I'm saying they're likable to people who already like AI. And I'm not, I think they're great. No, you're right. I don't think anyone who hates AI has even heard of them.
Starting point is 00:48:53 One last thing. Okay. So it's, they, OpenAI has a marketing machine, right? We're talking about how, like, this marketing machine needs to show the benefits of AI. So not, so by acquiring them, not only do they have the show, but they have these two guys in house as effectively content marketers that can help with that side of things. Not just to use their platform, but maybe shape the messaging. Yeah, no, but I would, I'm still going to have to give the edge to Anthropic on this one. Again, going back to everything we were talking about earlier, rolling out a tight communication strategy that actually gets the message out that you want, everyone bites, and it gets to the point where Scott Bessent is creating, like, a council of Wall Street advisors to address the potential threats of your upcoming model.
Starting point is 00:49:42 Like, I mean, guys, TBPN's not going to do that. Whoever is doing that over at Anthropic, God bless him, because that's communications. All right. So let's, we can keep going on this over time. But I think we both agree that this is, there's a clear image problem here. And it's, and it's just snowballing and getting worse. So, oh, and this is not even going to help. I don't know if you saw this New York Times story about this company.
Starting point is 00:50:12 called Medvi. There's been talk about is there going to be somebody that builds the $1 billion one-person company? I think the Times wrote the story thinking they found it, how AI helped one man and his brother build a $1.8 billion company.
Starting point is 00:50:28 Matthew Gallagher took just two months, $20,000, and more than a dozen artificial intelligence tools to get a startup off the ground. From his house in Los Angeles, Gallagher used AI to write the code for the software that powers his company, produce the website copy, generate the images,
Starting point is 00:50:45 and videos for ads, and handle customer service. He created AI systems to analyze his business performance, and he outsourced the other stuff he couldn't do himself. His startup, Medvi, a telehealth provider of GLP-1 weight loss drugs, got 300 customers in its first month. In its second month, it gained 1,000 more. In 2025, it made $4001 million in sales. This year, they're on track to do $1.8 billion in sales. A $1.8 billion company with just two employees, in the age of AI, it's increasingly possible.
Starting point is 00:51:16 Let's pause here. What do you think about this before we go into all the problems with Medvi? Okay. I got some thoughts on this one. And I might, my first one, on track to do $1.8 billion in sales. So $1.8 billion company with just two employees in the age of AI, it's increasingly possible. I do want to call out on track to do $1.8 billion dollars of sales. Regular listeners will know of my hatred of ARR as a term.
Starting point is 00:51:47 We have no idea what that means. They have not made $1.8 billion. They could have just, I was a little disappointed. And I think Erin Griffith, who wrote the story for the New York Times, is an incredible reporter, and I've followed her for years. But, like, that one, like, did they take the, you know, like, extrapolatable one month of revenue? Was it one week of revenue? Was it a few months?
Starting point is 00:52:09 Whatever it was. So already that number feels inflated. But I will say a lot of the backlash I saw, and Alex has a Techdirt article linked here, but actually does kind of point out that it is an AI story. It's really bad for the industry, but it was like, Medvi's success has little to do with AI.
Starting point is 00:52:33 This is from Techdirt. And quite a lot to do with fake doctors, deepfaked before and after photos, misleading ads, actual snake oil, and the kind of old-fashioned deceptive marketing that separated marks from their money for centuries. So, so much came out that, like, there were deepfake doctors and, like, completely AI-generated ads that were completely misleading. But it was using AI.
Starting point is 00:52:58 And like he stitched together all these different parts of the GLP-1 supply chain, which I'm sure there's lots of, lots of scammy stuff going on everywhere. But he did it and you could do it. And like you can picture doing it. And any of us could picture doing it with AI. So I actually think the revenue number aside,
Starting point is 00:53:20 I do think this is an actually terrifying, but probably more true than people are giving it credit for, story about an AI-first business. Man, I had the same reaction. I think it would have been great if they just switched the tone a little bit, right? Like, the Medvi story shows how a little AI and maybe kind of, I don't want to say scamming, but whatever's close to that, can get you to scale really quick.
Starting point is 00:53:48 Yeah. And he picked the right industry, GLP-1s. And no one has any illusions about what GLP-1s do or do not do, right? Like the fact that he, and maybe I'm giving too much slack here, but the fact that he made AI images of people's weight loss, it's like, okay, like, yeah, of course, the guy misrepresented what he was doing on a number of fronts, but like, we know people come to GLP-1s for the same thing. And he delivered it to people at scale with AI. But yeah, the Times did end up adding an editor's note. After this article was published, many readers noted
Starting point is 00:54:24 that Medvi was facing legal and regulatory actions for its business practices. Our piece should have included the information to give readers a fuller picture of the scrutiny that the company was facing. We updated this article to note a warning letter from the FDA and a pending class action lawsuit accusing Medvi of violating California's anti-spam law. You could probably say the same thing about a lot of, you know, GLP-1 startups right now. As we're talking, now I'm even more like, it is a, it's true. I mean, again, headline revenue number aside, this actually is a really important story. But I mean, again, yeah, it's how they framed it. Like, if it is, if it's like AI turbocharging the ability for people to kind of like scale sketchy, again,
Starting point is 00:55:12 like if you have like the world's first AI-scale drug dealer, where one person can now, with some drones and whatever else, can operate like an entire cartel, like is that, that could be the first billion dollar AI business. But yeah, I, uh, it's the framing, but, but it is, it's important. It actually is important. And this, it, I think it's real. I just don't think it's necessarily a billion dollar business, but I think it's real. I mean, it could be, right? I mean, I, I guess we're both Medvi-pilled. I just signed up right now. I've got a full year supply of Mounjaro from, well, also, Dr. Samantha Aldmanson. Again, like, like, this is where, not to get too into it, but like,
Starting point is 00:56:02 you know, the way revenue would be recognized anyways is, like, this person is taking a tiny fraction of whatever the actual end price of the product is, and it's like, could even be selling it at a loss. And so, like, again, yeah, what was the... Actually, not at a loss. No, very little overhead. Yeah. He's just, like, drop shipping GLP-1s to people from, like, some compounding pharmacy. Yeah, no, no, I mean, it's not just that. But there, I was reading, and I only have very, very superficial knowledge of this, but from what I was reading, like, there are even more parts about how you can get the kind of prescription automatically done. There's all these other parts of the GLP-1 supply chain, like, outside of just traditional retail and drop shipping, but that have
Starting point is 00:56:48 become, there's all these players rising up that are kind of filling and automating those. So he basically had a whole, it's kind of like agencies in a traditional marketing world. So he just had a network of those and was just, like, connected to them and communicating with them via AI. This guy's diabolical. All right. We got to cover one more story before we get out here. It's called token maxing.
Starting point is 00:57:12 All right. Meta employees vie for AI token legend status. Employees at Meta who want to show off their AI super user chops are competing on an internal leaderboard for status as a session immortal or, even better, a token legend. The ranking, set up by a Meta employee
Starting point is 00:57:35 on its intranet, uses company data to measure how many tokens employees are burning through. Dubbed Claudeonomics, after the flagship product from Anthropic, the leaderboard aggregates AI usage from 85,000 Meta employees, listing the top 250 power users. The practice is emblematic of Silicon Valley's newest form of conspicuous consumption, known as tokenmaxxing.
Starting point is 00:57:56 Since the story went out, Meta took the thing down because they were embarrassed by it. But do you agree with me that this is obviously, like, not the right way to incentivize people, like, to use tokens? Like, if you gamify token usage, you're just going to get people burning tokens to compete with each other. Okay.
Starting point is 00:58:14 Man, this one hits home very hard. So at Writer, we actually had, where we had an internal, we actually had a similar, it wasn't like a leaderboard, but we had a report that was, like, oh, this is, like, token usage, and we were looking at it internally of employees. And then someone had actually screenshotted the top, and my name was on it. I was like third out of employees and, like, had burned, like, and I told you,
Starting point is 00:58:41 I'm cranking workflows and agents all day long, and, like, and I'm obsessed with it. So they had posted it on LinkedIn. And then I started getting texts from, like, some other friends who were like, oh wait, I just saw this thing going around. So, like, this kind of hit home in its own
Starting point is 00:59:18 small way for me. And we were even discussing, like, what does this mean? And it kind of caused a stir for us internally. And, like, what I think is, I actually think it is a good thing in terms of, like, recognizing just simply who's actually using a lot of AI, like, which at this exact moment, I do think using a lot of AI is the only way to learn, and the right way is, like, constantly experimenting in every single possible way. Now, if anyone ever tried, if it ever became important in terms of your review with your boss, I think then the, like, incentives become too screwed up and, like, it's just the whole thing becomes a little more corrupt, performative and weird. But I, it was interesting because, like, you could just see it right there. And even, like, even at my work, like, the people who I'm always talking to about, oh, like, every morning, what did you build? Oh, check out this cool thing I built. It was the people who were at the top of the leaderboard.
Starting point is 01:00:04 So like when it's not being done in a performative way, it's actually a good indicator of, like, who's really just heads down, just obsessed with this. But I mean, on the Meta side, also, I was wondering, like, if that was true at Meta and they have unlimited budget, what percentage of Anthropic's ARR
Starting point is 01:00:41 was, like, Meta engineers just melting tokens. Yeah, so first of all, I've heard now from multiple people that this is something that happens in many companies. I mean, I guess it's everywhere now because they are trying to incentivize use of the tools. So, okay, I get that. But I will also say that Anthropic this week just came out with new revenue numbers. They are doing 30 billion ARR now. And I'm pretty sure what that is is you take those
Starting point is 01:01:14 10 minutes when Meta pays its token bill and you multiply that by whatever number gets you to a year. I mean, dude, you know my rant on this one. Like, everyone's like, they went from 12 billion to 30 billion in two and a half months. Like, just say the freaking numbers. Like, Anthropic, come on.
Starting point is 01:01:36 It's okay, because it just doesn't sound as exciting. And it is exciting. And if you're doing 2.2 billion, whatever it is in revenue in a month, that's insane. But like, yeah, I don't know, with no clarity on that. And it's just a bunch of Claude heads on Claude and, but what was the name of the Facebook thing? Claudeonomics. Claudeonomics, just. And it's Meta with their, just sitting there, just melting Claude tokens. Yeah. That's what it is. All right. Well, soon enough, we'll have access to Mythos. And then that leaderboard will rise even further. And then we'll get some numbers,
Starting point is 01:02:00 by the way, because sooner or later, these companies are going to file to go public, and we will certainly be able to play high or true as we look through that S-1. Last question, before we drop. Yeah. Will they hire law firms and banks to go public? You know it. You know it? They use Salesforce. So, yes.
Starting point is 01:02:17 Obviously. Definitely. Interesting. We've seen, it would be, speaking of marketing, yeah, be the most baller move. Most baller move: we did not hire a law firm, but we are so confident in all of our filings. Like, why not? Why not? Yeah, it'll be the first harness IPO. First harness IPO, everyone would be thrilled. All right, Ranjan, great to have you back. Looking forward to next week. Uh, thanks again for coming on. See you next week. See you next week. Thank you, everybody, for listening and watching, and we'll see you next time on Big Technology Podcast.
