TRASHFUTURE - Newly Grinted feat. Ed Zitron

Episode Date: January 28, 2025

Ed Zitron returns to discuss how a Chinese hedge fund created an AI model that accomplishes much of what OpenAI’s frontier model can do, and Sam Altman’s best idea is to apply the “kind of gets it right sometimes” energy of LLMs to running your life. Also, grand theft luggage is rife in Lisbon airport, Masayoshi Son makes a bad investment, and Rupert Grint’s new app “Grinted” makes a splash. Check out Where’s Your Ed At here: https://www.wheresyoured.at/ Get access to more Trashfuture episodes each week on our Patreon! *MILO ALERT* Check out Milo’s UK Tour here: https://miloedwards.co.uk/live-shows Trashfuture are: Riley (@raaleh), Milo (@Milo_Edwards), Hussein (@HKesvani), Nate (@inthesedeserts), and November (@postoctobrist)

Transcript
Starting point is 00:00:00 Hello everybody to the We Were Right podcast. Wait, what am I on? Matt Ford's podcast? Hello everybody. It is Riley the Righteous. The most right we've ever been. Hello everybody to the podcast. Welcome to a podcast that is being described by some people as a victory lap about a lot
Starting point is 00:00:36 of what the American economy was looking like recently. Yes, that's right. It is the We Were Right podcast. I'm Righteous Riley. I'm joined of course by He Can't Be Wrong, Hussein, No Incorrect Opinions, November, Milo Edwards, and of course- Nothing.
Starting point is 00:00:51 Oh, okay. All right. Just to zoom in. No additions. Oh, I could be wrong, apparently. That's fine. I'm never wrong, okay? Look, we may all have CTE,
Starting point is 00:01:02 but that actually makes us stronger. It makes our opinions more powerful. How about this? Me? I'm right. Milo Edwards. And of course, the end of incorrectness and the start of truth, Ed Zitron. That's right. The best, some are calling it the best epithet that's ever been applied after the Trojan Wars. Did I just fuck up saying flawless? I was like flawless. Thank you. Epithet Zitron.
Starting point is 00:01:22 Yeah, that's right. They're calling it the podcast. That's a long victory lap, but two of the wheels have fallen off the car. Yeah. Careened into the side. We're running across like the tape on the finish line, arms up with just obvious pit stains, pants around our ankles, but we're fucking winning. Uh, it is Ed Zitron from the Better Offline podcast. Ed, thank you for joining us today. How is it going? Ed Zitron It's going great.
Starting point is 00:01:48 So I flew in from Las Vegas to New York, to my place in New York. And I got here and it was a red eye, which is a mistake because I'm 38 years old now and I can't do it. So I slept like two hours. And then I was just like looking at this DeepSeek thing and just being like, oh, I should probably get ahead of this. And I started getting to understand it. I'm like, oh, this is big. And I just mentioned it on Chapo a few days ago. And then everything blew up just as I kind of worked out what was going on. So very good timing for everyone involved. Really. I see-
Starting point is 00:02:18 Mid-victory lap. Mid-victory lap, but I've been getting texts all day from people being like, you must feel good about yourself. I'm like, haha, never. That's the podcaster's promise. Yeah. I feel bad all the time. And nevertheless, this has been such an interesting day because we can... I'm sure you'll have questions, but just the reaction from the AI pigs over DeepSeek has been extremely satisfying.
Starting point is 00:02:45 The oinking and squealing from them. Being like, oh, the Chinese, the Chinese are here! I don't like, the Chinese are in my large language models and I don't like it! Yeah, this thing that we said would be extremely difficult to do has been achieved trivially, it seems. Like, really easily by a much smaller team with worse tech. Yeah. This puts me in mind of an amazing video I saw from Nigel Farage recently, where he was
Starting point is 00:03:11 advertising a company called Bullion Direct. And he, on their website, it says Nigel Farage recommends gold bars and coins. But if you go on their website, you can watch a video where they have footage of the Ukraine war. And he's like, well, gold's really good because, you know, there's a war in Ukraine. And then it cuts, from previously images of tanks and stuff, to just an image of the Chinese flag and he goes, and China, who knows what they're gonna do? You definitely need gold. No further explanation. You know what, he makes a great point, like, neither Putin nor Xi will be able to find
Starting point is 00:03:44 and dig up the, uh, like, gold bullion that you've buried in your allotment. Mmm, yeah. Jeremy Corbyn is burying his gold. Of course, you purchased it from a political huckster at a 9,000% markup, obviously. Oh yeah. But look, I wanna start by sounding the deep fake alarm, because a new colleague of the podcast, a politician we support wholeheartedly from a party in Portugal that has been unfairly maligned as far right just because of some
Starting point is 00:04:11 probably also deep faked, you know, neo-Nazi connections has, I'm afraid, Miguel Arruda of the Chega party in Portugal has been targeted in fact by deep fake fakers. According to several very fake media news outlets, police questioned Miguel Arruda on Tuesday at Lisbon airport and charged him with what is being described as grand luggage theft. Deepfakes are getting better and better all the time. Grand luggage theft. Was he just taking stuff off the carousel? Was that what he was doing? Yeah. Well, that's what the deepfake- What makes it grand?
Starting point is 00:04:44 That's what the deepfake video showed, November. That's what, you know, George Soros would have you believe, was that there was like very crisp, like, 4K video of this, uh, like, Portuguese politician just making off with people's suitcases and scurrying into the exit. How the hell did he do it? I mean, you could just take him, you know, so long as you- Yeah. It's like playing
Starting point is 00:05:05 with that, right? You make a bet that the person whose suitcase it is is on the other side of the carousel so they don't see you grabbing it and running. Does he think that the luggage carousel is a kind of like a bring-and-buy type situation? Like the bowl of car keys at the swingers party. You bring a suitcase, you take a suitcase. It doesn't have to be the same one. Bringing back the much-derided format of supermarket sweep, but in an airport context. I have more information for you here.
Starting point is 00:05:31 Local media reported that the police had camera footage allegedly showing Miguel Arruda taking his suitcase and then a second, smaller one off the baggage carousel and putting that second one into his own. The 40-year-old politician is- 2.33 a.m., going back for more suitcase. I mean, that's a flawless plan and it's more involved than I imagined it would be. I thought he was just kind of like grabbing one and scurrying. And it's more suspicious to put a suitcase inside your suitcase than to just have two
Starting point is 00:05:58 suitcases. It speaks to premeditation. Was he bringing in, was he bringing his suitcase empty with room for- What makes this grand? What a value of suitcase. Is it just one suitcase? You take one Louis Vuitton case and there's like a monetary value you're exceeding there, you know? Let me tell you, it is not one suitcase. I will read some more. Ooh.
Starting point is 00:06:18 It's a series of smaller suitcases. Stacked within- Russian nesting suitcase. Miguel Arruda is said to have confessed to stealing several items when he was searched in Lisbon, and a huge amount of objects had been scattered around his house in a chaotic manner. Clothes and junk scattered all over were found by authorities during searches. He has sold over 200 items on Vinted, including sweaters, trousers, ties, and hats.
Starting point is 00:06:40 Oh, there's the grand. Okay. Oh yeah. Okay. Listen, is it a crime to like have an entrepreneurial spirit? Yeah. They hate a player trying to make some money these days. Even the fascist politicians need to have like side hustles. We're not here for your jumper's hats and ties. We're here for the airline's jumper's hats and ties. Your jumper's hats and ties are insured by the federal government.
Starting point is 00:07:06 No, but he very much is there for your jumpers, hats and ties. You've got travel insurance. Don't be a hero. I want a thriller where this guy accidentally steals like a suitcase full of cocaine and is being pursued by the cartels. Trying to flog it on Vinted. The first complaints arose late last year after several bags disappeared from view of passengers on the flight between Sao Miguel and Lisbon. It was then noted that every flight that contained complaints of stolen luggage also contained Miguel Arruda. Well, listen, there's a lot of statistical fallacies that you can use here.
Starting point is 00:07:40 Yeah. Correlation doesn't equal causation. Yeah. Yeah. Correlation doesn't equal causation. Yeah, correlação causação. No, it doesn't. Meanwhile, the 40 year old denied any wrongdoing, issuing a statement to the Portuguese broadcaster TVI saying, I have been crucified on the public square, but I am innocent until proven otherwise. Speaking, speaking from underneath eight ties and 14 hats, Mr. Arruda was quoted as saying. He's like the Flava Flav of menswear. He's just got like, he's got it all on. No one can accuse this man of being dramatic. I have been crucified. Arruda then went on to say that the video surveillance footage could have been generated
Starting point is 00:08:25 by his political opponents using artificial intelligence. And I mean, if you're a left-wing Portuguese politician, what a thing to do to be like, what if? Doing one of those like international symposium things where it's like what we can learn from the Portuguese left and how to frame all of our opponents for luggage theft. Yeah. In England, it would probably work in the sense of, you probably could get Keir Starmer to resign by doing a deepfake video of him taking out the bins on the wrong day.
Starting point is 00:08:53 It would cause... Yeah, I feel like you could probably do some political damage to him. That's what's going to start the next national wave of riots, is the Starmer bin deepfake. I like it in America though, Donald Trump could just blow up several houses and be like, it needed to happen, and everyone would be like, well, we don't have laws anymore. This guy, I like this guy, he's taking luggage, he knows he's making a deal, the other people that left the luggage unattended, okay, I think, well. They don't appreciate it. Exactly, he knows he's a businessman. He's on Vinted. He's selling it off. He's grailed. He's saying he's grailed.
Starting point is 00:09:29 More grailed than anyone's ever been. He's got eight ties. Amazing. He looks great. Have you seen Back to the Future 2? They're wearing two ties. This guy's wearing eight. Crazy. Kind of rough from first principles. We don't need roads. This is our new TF action item. Support Miguel Arruda, a man unfairly accused by the perfidious left of petty crime. Jail support needed for an incarcerated comrade in Portugal, weirdly. We are going to need you to hang around outside the police station in Lisbon airport. We're all gonna have to work together to de-arrest Miguel Arruda so he can go back to doing his
Starting point is 00:10:10 like, defunding his anarchist commune by stealing luggage and selling ties on Vinted. I hate the Lisbon airport police station, there's nowhere to sit down, a bottle of water is like five euros. There's nothing to wear, everyone's nicking everything. My question is, what was his seller rating like when he was reselling all of this stuff, you know? ZACH Very conscientious, he was always sending it on time. ALICE I hope he does good business on that end, you know?
Starting point is 00:10:36 Like, if you're getting your ties stolen by this guy, you want to at least know that they're getting shipped promptly and so forth. ZACH It would be really funny if he was a three star seller. He's just like, not terrible, but shipped late. He's just like, the box wasn't great. He's really sloppy with it. Imagine a guy who's bought a full new wardrobe of clothes from Miguel Arruda's Vinted page, and he's so pleased with them, and then he's just got to hop on a quick flight to Lisbon. And then he arrives. He's dismayed. Like, Milo.
Starting point is 00:11:11 The circular economy. God damn it. It sounds like you've just described the business model of certain AI hyperscalers, he said, changing the subject. Ooh. Thank you. So. Well, they're stealing luggage.
Starting point is 00:11:24 Yeah, because as I understand it, what has happened is a few years ago before the sanctions came in, one Chinese man took a suitcase full of A100 chips, which were not yet illegal to export to China, on a flight to Beijing. And it was pure happenstance that it was him and therefore China rather than Miguel Arruda and Portugal that became the country that did this new AI. Sure. Yeah. Okay.
Starting point is 00:11:55 That's the episode. There we go. Just Miguel Arruda like stealing a suitcase full of Nvidia chips and yeah, it just goes straight on Vinted. We could have had like Brazilian AI. We could have had AI favela rap, but no. I would prefer Brazilian AI.
Starting point is 00:12:10 I think quite strongly actually. You ask the AI what its name is and it takes like 40 tokens just to give you all its middle name. Still have the favela, et cetera, et cetera. We could have had J.I. Bolsonaro. Oh, that's good. Yeah. There's something J.I. Bolsonaro. Oh, that's good. Yeah.
Starting point is 00:12:27 There's something there. Okay. Yeah. What we're talking about, of course, what we talk about upfront is this development that's been brewing for, as Ed indicated, a couple of months now, which is that Chinese AI labs, thriving under conditions of chip sanctions, which is, by the way, kind of a repudiation of the whole Biden-Sullivan China containment project, it didn't seem to work, have apparently one-shotted the Magnificent Seven of tech companies that comprise, what, like 30% of the US stock market
Starting point is 00:12:56 now by putting together an AI model that is competitive with OpenAI's closed source frontier model for... I mean, lots of numbers have been thrown around, I don't know if any of them have been verified, but for what we can know to be a fraction of the cost and not requiring, hilariously, half a trillion dollars of Masayoshi-san owned graphics cards. ALICE So yeah, Tony Stark built this generative AI model in a cave with a box of scraps.
Starting point is 00:13:23 RILEY So Ed, how close is that to correct? So it's very close to the suitcase thing, is where I'd like to start. And in all seriousness, what happened was, and there are various Sinophobic ways that people are going about this, where the first reaction was, well, the sneaky Chinese are cheating somehow. And the CCP, they've said that they're doing voodoo from China. What's actually happened is there is a company called DeepSeek which came out of a hedge fund called High Flyer, I want
Starting point is 00:13:50 to say, and they have some amount of GPUs that were the pre-sanctioned ones. And I think those are the H100s, I believe. Nevertheless, they claim to have trained a model called V3 for about $5.6 million. That part is questionable because we don't know the price of electricity or anything like that. But they've published papers and these papers and these models are open. They're open but not open source because they don't publish their training data. But in effect, because these chips had less memory bandwidth, they were forced, they were constrained to find a way to make large language models that they could train and do inference. Inference being the thing that happens when you prompt him being like, write me a poem about Garfield
Starting point is 00:14:33 doing 9-11. I don't know what you use these things for. But they managed to find like numerous quite intricate ways to massively drop the cost of inference and training. Putting aside the training, because that's the one that's kind of the hardest to grab a hold of right now, because they claimed like one of their models, the V3 was $5.6 million and they trained it in two months. But the actual inference is like V3, DeepSeek V3 is the equivalent of OpenAI's GPT-4o model.
Starting point is 00:14:59 And it's competitive with them on all the benchmarks, which are kind of wank, but everyone's playing in the same wank field there. What's really scary for OpenAI is their R1 reasoning model, which is like 30 times cheaper than OpenAI's. And also it's all open. So anyone can, it has a commercial, an unlimited commercial license. So people can just take these and build their own shit on top of them. And now we're waiting for an American company to put one of these models live. Because right now to access DeepSeek, you have to basically give money to a non-specific vendor in some country somewhere, which is probably China.
Starting point is 00:15:36 And their current prices are, we don't like this, clearly subsidized by something, but to be clear, all generative AI is effectively subsidized by someone, but in this case, people fear China, whatever. But the big thing is, is that these models are so much cheaper and they're just out there. Their research is out there. You can also, and this is a very specific thing, see the chain of thought. So when you ask OpenAI o1 to do something, it usually has like, it won't show you exactly what it's thinking, because of the competitive advantage. R1 shows you everything. So right now you have this cheaper, pretty much as good model. Like, so much cheaper, so much
Starting point is 00:16:14 cheaper that, on some level, why the fuck do you need all the new GPUs? And indeed, why do you need OpenAI at all? In fact, what is going on here? Because what have the American model developers been doing this whole fucking time? And the answer is they got fat and happy. They didn't have any constraints. And thus they were forced to do nothing and just, like, make the biggest, most hugest model and buy more chips so that they could make an even bigger, more huger-er model. And then that model would then get even bigger-er. Now, the sneaky Chinese were forced to do the thing that Silicon Valley claims they do, which is efficiency stuff. It's just what Americans are meant to do, but apparently not this time.
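To make the visible chain-of-thought point above concrete, here is a minimal sketch, assuming you are serving an open-weights R1-style model behind an OpenAI-compatible endpoint. The base URL, port, and model name below are placeholders, not anything specified in the episode; R1-style models conventionally emit their intermediate reasoning between <think> tags, which is the "shows you everything" behaviour being described.

```python
# Minimal sketch only: assumes an open-weights R1-style model is served locally behind
# an OpenAI-compatible endpoint. The URL, port, and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused-for-local-server")

response = client.chat.completions.create(
    model="r1-style-open-weights-model",  # placeholder model name
    messages=[{"role": "user", "content": "Write me a poem about Garfield."}],
)

text = response.choices[0].message.content

# R1-style models typically wrap their intermediate reasoning in <think>...</think>,
# so the chain of thought is visible in the output rather than hidden behind the API.
if "</think>" in text:
    reasoning, answer = text.split("</think>", 1)
    print("Visible chain of thought:\n", reasoning.replace("<think>", "").strip())
    print("\nFinal answer:\n", answer.strip())
else:
    print(text)
```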
Starting point is 00:16:53 And if you're OpenAI, you absolutely hate to see a guy in rural China on TikTok pouring four kilos of microchips into a big cauldron, showing how you can make a delicious AI for cheap, stirring it with a big boat oar. NARES Why have I just seen a peasant from Hunan drink 19 beers back to back and then come up with a better AI model? SAAVIL There's a one armed monkey that's built an AI in that Chinese monastery. NARES To build on one thing Ed said, the American approach, the hyperscaler approach, has been to add more compute and more compute and more compute and more and bigger training data and who
Starting point is 00:17:28 cares if that training data also came from AIs, it just has to get bigger and bigger and bigger and bigger. Mason- They genuinely tried to do the USS We Built This Yesterday thing, which historically has been a winner for the US, but like, not if you're kind of throwing it into this money pit of bullshit. And it's really a testament to how directionless the American tech industry, and even just Western tech industry has got. Because everyone's been doing the same lossy bullshit the entire time. Meta, Amazon, Google, Microsoft.
Starting point is 00:17:58 Microsoft has access to all the IP of OpenAI and effectively owns all of it as well. And none of them were like, well, we should probably make this cheaper because everyone was kind of agreeing. Yeah, let's do a bunch of lossy bullshit. And they're so pissy right now. They're like, oh, well, send your data to the CCP. Ah, off it goes to China, you dumb fucks, instead of sending it to us where we will definitely use it for training. And it's just very, it's pathetic, but really, and I've seen multiple people trying to calm this down and be like, it's not that bad. It's actually good. It's good for AI that this is happening. It is not good
Starting point is 00:18:34 for OpenAI that this is happening. It's bad for all of them. Every single one of them. A huge day in the wax cylinder industry as the first like vinyl record drops. I mean, there's only going to be one way to stop this, right? Which is that we're going to have to turn the Chinese AI woke. Yeah. Well I mean, I think that all of the guys who are big into AI here kind of think that it's woke already, right? Yeah.
Starting point is 00:18:58 One of the sort of strangest pissing contests about AI is when people try to high road one another by asking the model questions they know it will refuse to answer and then everyone accuses everyone else's models of being woke. What are the flaws of Deng Xiaoping thought? I knew you'd have no answer. But the other thing I was going to mention is that this approach, the big, as you say, lossy approach of, well, all we have to do is make it impossible for anyone to build a gaming computer ever again because GPUs are the new oil, or whatever, right? Well, that's not strictly true.
Starting point is 00:19:31 The GPUs that they're using for generative AI are not consumer grade. They are these giant $30,000, $40,000 units. Throwing one of those into a case so I can finally play any current gen game. I can have 11 tabs in my chrome browser. I'm running the Chinese AI mod of Hearts of Iron 4. Crazy shit is happening. That approach didn't even produce GPT-5. OpenAI seems to be hitting a big wall.
Starting point is 00:19:58 I mean, a great wall, you could say. Yeah, exactly. It seems that you have gone in against China and then run into some kind of large wall. Unprecedented. Okay. How about this? Big John's Chinese AI sit down squash the beef, the delicious, the delicious mushroom. Biggie weren't able to do it. They weren't able to make GPT-5. And there's no amount of bigness. It's good to put a caveat in here, right? Which is we're all talking on their terms, right? We're talking about things failing by their own terms, not criticisms of AI in
Starting point is 00:20:34 general, right? Because that's a whole other conversation. But I mean, it also makes sense, right? If you think of this as analogous to the development of a person, right? You don't get smarter by just learning more facts that you can sarcastically parrot back to people who ask you questions. Speak for yourself. Yeah, that's true. But you get smarter by learning how to reason and understanding which facts to query rather than just having an infinite variety of equally available facts, which was sort of the American approach versus the former, which was the Chinese approach.
Starting point is 00:21:05 All of this is to say that that paradigm, which has become central to the technology industry, which is infinite growth from this thing now forever, infinite growth of the picks and shovels that are made to produce this thing now forever appears to be well, less certain. So the Magnificent 7, Google, Amazon, Apple, Meta, Microsoft, Nvidia and Tesla, which make up about 30% or so of the Nasdaq, with Nvidia alone making about 10%, that is crashing at the moment, right? Yeah. They got the news that maybe we wouldn't need to do a kind of Fallout-style small reactor in every house in America to fuel the bullshit engine, and
Starting point is 00:21:48 the stock has just deflated, it seems. It's that, and this whole thing has been predicated on a few things. Number one, this idea of, well, this thing is awesome, but the basic thing of, this shit is so expensive. It sounds strange, but it being expensive was actually key to the narrative, because it was so expensive, which required more money, so they could buy bigger chips and justify these data centers. They needed this to continue. This is good because the tech industry has no other growth things, so it was very important this stuff stayed expensive. And also the other thing is, oh, o1, the reasoning model from OpenAI, was
Starting point is 00:22:25 pretty much the only new thing that this company had left, I would argue. Reasoning models, they have o3 coming, but o3 costs like $1,000 for 10 minutes to run. So that's really good. And what came out of it? Well, not much, but anyway, moving on. The whole thing was that OpenAI had this egregiously expensive reasoning model they were so proud of. It was codenamed Strawberry and it was a whole fucking thing. And now those sneaky Chinese with their dastardly voodoo have found a way to make it cheaper and more efficient and actually better in some ways.
Starting point is 00:23:01 And well, that's cheating because of, well, it's not really obvious how they cheated, other than the paper that they published in full detail explaining exactly how they did it, both in the training and the inference, and indeed open-weighting it, meaning that anyone can use it but they don't publish the training data. And in fact, now anyone can basically use this and build their own reasoning model with all of the work done already. And the reasoning model tells you exactly how it works. So now OpenAI has several problems. Number one, that their only really important product is no longer particularly important. And also, it's priced
Starting point is 00:23:37 in a way that no one should reasonably pay for pretty much the same thing. And you also have the other benefit if you don't work with OpenAI that you never have to look at Sam Altman's fucking face ever again. So I'm not really... Like this is why... And this whole narrative is so scary because this falling apart, this whole thing, like this whole bubble has been based on the fact that this shit was expensive, magical, and required more money always. We need more data centers, we need more chips, we must have these. What DeepSeek and R1, their reasoning model in particular, but V3 is also equivalent to 4o, nevertheless, these models basically are able to do most of what OpenAI already does.
Starting point is 00:24:17 Some of it on device, on like a MacBook Pro usually, or like a Mac Studio versus a 3,000 pound rack of 74, $40,000 chips all running at 200 degrees centigrade, like just boiling and boiling a lake by the data center. Instead of that, you could use older gear, which means that Nvidia no longer can just be like, well, now everyone will buy from us always, we'll always be sold out. Now it's kind of like, why is anyone giving Nvidia money at all? We could use the old shit. And Nvidia's also had a bunch of trouble with their Blackwell chips. It's not great. And there's probably going to be a bunch of like sneaky Chinese style stories. Like, well, the Chinese adopt to it again. But the other narrative I'm seeing is saying, actually this is great for AI,
Starting point is 00:25:01 because now everyone will have access to it. Which is otherwise known as commoditization, which is extremely bad for everyone who's... It's also not great if all of your money is thrown into the sort of lake boiling chip arrangement. Yeah. And on top of that, they don't have a backup plan. I have been talking to various people about this, whether they have a plan, whether they're like, hey, like, what do we do if this doesn't work out at Microsoft? Microsoft's plan is that they'll turn the GPUs into something else, except GPUs don't really do anything else, other than they've bought these specialist chips for this one thing, and now the sneaky Chinese have come along and proved that they don't need them quite as badly. It was a
Starting point is 00:25:43 terrible fucking plan, but, and I kind of mentioned this earlier, it's just indicative of the herd mentality of the tech industry. If they were, if they had any sort of sense, one of them would have done this already and been like, I can undercut all of them. But no, because they don't really compete with each other, they all tug each other off at fucking Davos. They don't do anything of note, they don't build products for people. They build shit to grow. Except- Wait until the Chinese invent a more efficient way of wanking each other off, then they'll really be smart. I bet they got some real advanced wanking over there.
Starting point is 00:26:14 Remembering the Chinese dick sucking machine once again. Is that a recurring? Oh it's a thing. It's a real thing, yeah. It's like, you know those telepresence robots they used to put Edward Snowden in when people still cared about him? It's like one of those with a flesh- They should make him suck people up.
Starting point is 00:26:30 It's like one of those, but like a Fleshlight at mouth level. I don't know if they ever used them, but they did do tech demos of them to be like, it's a medical, like, semen extractor for use in hospitals or whatever. Yeah, and the Israelis bought so many of them for some reason. I mean, they're better at green and clean technology, they're better at solar, they're better at electric vehicles and batteries. Apparently they're better at AI, they're demonstrably better at social media, and they also built a robot that sucks you off.
Starting point is 00:26:58 I don't know what we have. Amazing things are happening in China, you know? Amazing things are happening in China. I'm looking forward to these various kind of semi-racist articles about the sneaky Chinese coming out to be, to be followed by a devastating response from Chinese state media that says something like this Panda eats more bamboo in a day than any other. And look at him smile. So Donald Trump answering a related question, he hasn't reacted to DeepSeek yet, I don't believe, at the time of recording. He said, well look, artificial intelligence is,
Starting point is 00:27:27 it's where all the smart countries want to be going. Great at coming up with cures to disease, great at coming up with anything, but there's always risks. And it's the first question, how do you absolve yourself from, et cetera? Because it could be the rabbit that gets away, but we're not going to let that happen. But it's going to be the biggest field, maybe the biggest in the world. And we're going to be equipped to take it and we're going to be leading. I mean, we're going to be leading very shortly. We're going to be leading by a lot. What they do need is tremendous energy.
Starting point is 00:27:49 Thank you, President Trump. Very cool. What they do need is tremendous energy. They need electricity at levels that no one's ever seen before. But I'm going to use emergency powers. I'm going to use them. Vintage. I'm going to use emergency power to give all the electric.
Starting point is 00:28:04 Wow. Wow. One of the most electric we've ever seen from the perspective of currents. We're gonna rock down to Electric Avenue. We're gonna take it higher. Okay. We got it. They're telling me now they're saying Mr. President you gotta hear about DeepSeek is Chinese. I said last I heard Se sick was Indian. Okay. Get get me back in here. I want him on this.
Starting point is 00:28:31 Getting to the bottom of it. This is the American plan, right? Is everything bigger, infinite electric, infinite valuations and moats based on the fact that there could never be a David that will throw a rock at this Goliath. That's how that story ends, right? Add one additional detail, which is they don't have any fucking plan. They have no fucking plan. None of these companies have a plan.
Starting point is 00:28:54 The last year of my work, I started Better Offline in January 2024. I remember being like, there must be something I'm missing. These people are smart, right? These people wouldn't all just do stupid shit altogether because they're all doing it. And the answer is they all fucking did this because the other guy was doing it. That's the only, they have no ideas. The plan was to make the perfect AI girlfriend and China's already done that. So now like, what are you supposed to do? There's nowhere else to go. The reason Microsoft put all the money into this is because they saw ChatGPT and said,
Starting point is 00:29:22 we've got to put Bing, we've got to put this in Bing and they bought all the GPUs they could find. And I'm not fucking kidding. That's the truth. That is how much strategy these multi-trillion dollar market cap companies have. None of them have a plan. They don't have a plan so badly that none of them thought, what if we made this super cheap enough to undercut them? Because the, I kind of mentioned this earlier, these companies do not compete with each other.
Starting point is 00:29:45 They kind of like play fight. It's a great plan for getting money out of either Masayoshi-san, Bill Gates or the United States government, right? And they've done that very effectively. Not with the government. Well, I mean, Trump was going to give them $500 billion or whatever. No, he wasn't. No, no, no, no, no, no, no.
Starting point is 00:30:04 Stargate doesn't have any state money. Does it not? Stargate is entirely an up to $500 billion amount. This is actually really important to know. Up to $500 billion. And anytime you hear up to, just assume that they're never reaching that number. Apparently, OpenAI is meant to contribute $19 billion, The Information reported this, $19 billion.
Starting point is 00:30:26 And how are they going to get that money? They're going to raise another fucking equity round. Oh, that's good. That's good. They go so well. OK, so they're not even. OK, I missed that. But it gets better. What's important to remember is that OpenAI intends to burn like $10 billion this year to lose that money.
Starting point is 00:30:42 They lost five in 2024. They're probably going to do 10 this year. And you know what? A little bit of that plan was they were going to increase revenue and how are they going to do that? Well, they were going to sell access to their advanced models. Not to worry, right? $200 a month, ChatGPT Pro. You want to know what's funny about that? They're still losing fucking money on it. They're still losing money on $200 a month. These companies are so washed, and now, like, the idea that a small team in China could fuck them up this bad is so embarrassing on top of it being hilarious. AI Goliath coming up against AI David going like, no, no, it's fine, I just didn't think
Starting point is 00:31:17 he would be Chinese. Dunked on by a 5'1 man. How did no one else do this sooner? Like, I understand why no one in the US or in Europe did it this way, but like... I mean, there's an easy answer to this, right? Which is that the Americans were too busy having sleepovers, when instead they should have been coding. Vivek was right! He was right the whole time!
Starting point is 00:31:37 Got it! Yeah, uh huh. Yeah. Yeah. Yeah. Well, there is actually an answer. Which is, they just is they didn't have... They were just like, we'll just make them even more bigger.
Starting point is 00:31:49 Because you actually... The people that have the amount of chips required to do this... Because DeepSeek still had access to like tens of thousands of chips. Don't get me wrong. So you still need a large amount of money and a large amount of chips to do this. So it's not something you can easily do. And it's not like you can clone this with just six million dollars. There's so much more money, you need the chips. So who has all the chips at the moment? All of the hyperscalers, various
Starting point is 00:32:12 European groups and the sneaky Chinese who are doing voodoo. And no one else, because no one else was really putting any alarm into this. It's not like the VCs were saying, hey, you've got to bring these costs down, despite the fact you're burning five billion dollars a year after revenue. No, that's normal. We like that. Do more of that. And everyone copied that. Makes them feel important. Makes them feel like it's, you know, technological, I guess. But it's because they're all followers. We've got beautiful companies. They're losing more money than anyone else. Even more than the Chinese.
Starting point is 00:32:43 There was no real constraints, but at the same time, it comes back to the thing. These companies don't compete with each other. They compete on this kind of matey level where they're like, we'll all set up a monopoly where we all make money and we'll kind of kind of poke at each other occasionally, but we're not going to really compete because this was doable by any hyperscaler. This was what they did here. These are all theories that have been going around the large language model community for a while. And every developer I've talked to has been like, yeah, there were bits and pieces of this, but obviously a regular person doesn't have access to tens of thousands of GPUs.
Starting point is 00:33:17 And yeah, it seems like DeepSeek has some really smart people. But what's funny is if you read the articles initially, it's like, yeah, they don't have any of the A-star engineers from OpenAI or Meta. It's like, to quote Karl Urban from the movie Doom: if they're so smart, why are they so dead? Because all you fucking smart-ass motherfuckers didn't think, what if this was cheaper? And it turns out it could be really fucking cheap. It's just right now everyone's freaking out because no one wants to just admit that the hyperscalers are run by fucking idiots. They are run by morons.
Starting point is 00:33:50 I think there's another thing we can connect to this, right? Which is the stories that get told about the AI companies and what they're trying to achieve. And the stories that Sam Altman and his coterie will often tell is about, well, what we are doing is creating God and the main threats that our company faces isn't that someone else will do it better and cheaper. We're creating God and the main threat that we face is the annihilation of all life on earth. And that is a story that sells bigness and it makes you want to feel the bigness and feel that the bigness is inevitable. So Microsoft is not, I must be clear, Microsoft doesn't care about AGI. They genuinely believe that if they throw enough money at this, it will just start doing agents. And you may think it must
Starting point is 00:34:36 be more intricate than that. It is not. The whole agent thing they talk about with like agents doing shit autonomously, there are very few things that large language models are less good at than doing things. Language is not a great kind of analogue for action, even with the multimodalities of being able to do computer vision shit. It's just not good at it. That's why none of them fucking work. There was a hilarious article from Casey Newton, classic booster there, where he began the article about agents. I want to bring this quote up because it's so fucking funny. I actually have this in front of me right now. We have the same quote.
Starting point is 00:35:11 Oh, wonderful. Are we both thinking of the one where it's about how good it is? Yeah, it's the one that's like, wow, this worked perfectly well. It was amazing. And I love it. This is Casey Newton in Platformer, his experience of using OpenAI Operator. Quote, my most frustrating experience with Operator was my first one, trying to order groceries. Help me buy groceries on Instacart, I said, expecting it to ask me some basic questions, such as where do I live? What store would I like them to buy groceries from?
Starting point is 00:35:35 What groceries do I want? It didn't ask me any of that. Instead Operator opened Instacart in the browser tab and began purchasing milk in grocery stores located in Des Moines, Iowa. So there is a really important quote that I want to read because this is like vintage Newton. Okay. How you feel about operator will depend heavily on what you hope to get out of it. If you're looking for a polished virtual assistant to which you can confidently hand off shopping and research tasks, I suspect you'll be disappointed. If you're interested
Starting point is 00:36:02 in the bleeding edge of AI progress though, and want to get a sense of what the future could look like, Operator offers a compelling demonstration." This is really good to have said after what you just said, because that is the compelling demonstration he was discussing. Just like random milks being added to your cart. When will this stop being a fucking tech demo? It's been years. It won't. It won't. It's never doing it. They're never gonna do it. These people do not have a plan. I think that they are massively over-leveraged on everything. Sam Altman has been making insane promises for like two years, and the other day he tweeted, he was like, yeah, let's tone down the expectations. And that's what you love to hear.
Starting point is 00:36:45 The thing that, the real sign for me was I saw him tweet about mean tweets. Oh yeah. And no one who does that is ever having a good time or being nonchalant about it or shrugging things off. I'm fine. I'm just feeling a little bullied as a neurodivergent founder. And... Well no, it was the kind of smug thing, right?
Starting point is 00:37:06 Where he was like, oh yeah, go ahead, do another mean tweet. Maybe it'll make you feel better or whatever. It's like, I get the sense that you're feeling a lot worse than me. And don't mind if I do. Yeah. Meanwhile, just using the Chinese AI to post heartwarming panda videos. We're posting heartwarming panda videos more efficiently than has ever been done before. Honestly, as a piece of IR, as a piece of statecraft, I actually really
Starting point is 00:37:31 kind of have to hand it to them on that one. Smiling through everything, be it like all of the most insane people on earth, like sort of like fuming about you through to like actual credible like allegations of human rights abuses, just being like, don't care, pandas. It's a powerful move. So, on what Sam Altman's been saying, by the way. Sam Altman has said, he is complaining about mean tweets, but he is also still trying to be boosterish. He also recently said that advancing AI may require, quote, changes to the social contract because the entire structure of society will be up for debate and reconfiguration to which I say, yeah, it apparently will.
Starting point is 00:38:08 Such as the social contract that you get a Koenigsegg. I think maybe the social contract of Sam Altman gets a Koenigsegg, maybe changing soon, hopefully. Sam Altman is concerned that AI is going to take a lot of people's jobs, most notably his apparently. Yeah, we're going to completely renegotiate the social contract in the sense that they're going to repossess your Koenigsegg. Yeah, it's like, China, we never thought they'd create a gormless fuckwit that doesn't really understand the thing he does or Diane just makes egregious promises.
Starting point is 00:38:38 Like, I don't know where we'll find another fucking guy. It's like when they got rid of the guy who voiced Morty, sorry, Rick from Rick and Morty. And they're like, where are we going to find a guy who sounds like an alcoholic old man? We got to build a gormless fuck with Morty. Chinese have got one we got to build. They got a model that's cheaper. I don't know, Rick. I should be at school. They're just posting about bad as it seems fine. Fucking hell.
Starting point is 00:39:06 So also, right, the whole plan, now as we mentioned, appears to be agents. This is from Gary Marcus. A rumor I heard at Davos, which fits with some earlier reporting from the Wall Street Journal and another well-placed source I read recently, is that OpenAI is struggling to build GPT-5 and is focusing instead on user interface and applications to find different, less technical advantages. ALICE Also really funny that all of these companies, aside from not competing with each other, all leak like sieves to the same like 20 people,
Starting point is 00:39:35 and what they leak tends to be like, oh it's real bad, turns out this is harder than we thought, uhhh. STORM But that also, Gary Marcus, he's a very special individual, and I should be adding, those aren't new rumours. They've been saying that for months. They're like, we don't know how to make money, how do we make money? We can't build the big thing, the big thing isn't working anymore, but don't worry, God will come out of it soon. However, between now and then, would you like someone to fail to use TripAdvisor.com? Yeah. For $100.
Starting point is 00:40:05 Would you like a random selection of uncurated milks? In a different city you don't live in. Would you like someone else? Iowa's finest. Would you like someone else to receive some milks? Yeah, you can get like, it's like a cheese board, right? Except instead of cheese, it's just milk and it's pouring off the sides of the board and it's being delivered to a random house in Iowa.
Starting point is 00:40:27 And the other thing, right, is that the way that agents tend to use the internet is, I'm sure Ed can explain this better than me, but I'll have a go, is an agent will take a screenshot of your screen and then kind of guess... Yeah, I know how agents tend to use the internet. It's mostly posting stuff like, well, what has Palestine done for the queer community lately? So they'll take a screenshot of a screen and then they'll say, okay, well, this looks like the like order button so that I'm going to go click on it. And again, it's very error prone. It's very shitty. It's very slow. It's very inefficient. And you know, a lot of what
Starting point is 00:41:03 this is based on, I've sort of read about this, a lot of what this is based on, I've sort of read about this, a lot of what this is based on is based on nuisance users of the internet, like eBay snipers or bots that people build to sit on restaurant reservations and resell them. Do you remember when Amazon would sell you a programmable Amazon button that you could keep in your house? Say for instance, you have one in your toilet and you slam that button and it orders like a six pack of toilet paper. It seems to me that the whole agent thing is just a kind of worse implementation of that that involves some natural language processing.
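A hedged sketch of the screenshot-and-guess loop being described here: the vision-model call below is a hypothetical stand-in rather than any specific product's API, and the only real library used is pyautogui for taking screenshots and moving the mouse.

```python
# Illustrative sketch of the "take a screenshot, guess where the button is, click it"
# agent loop described above. ask_vision_model() is a hypothetical placeholder, not a
# real product API; pyautogui is a real library for screenshots and mouse control.
import pyautogui

def ask_vision_model(screenshot, goal):
    """Placeholder: send the screenshot and the goal to some multimodal model and get
    back a guessed click target, e.g. {"x": 512, "y": 300, "done": False}.
    Returning done=True here keeps the sketch harmless to actually run."""
    return {"x": 0, "y": 0, "done": True}

def run_agent(goal, max_steps=10):
    for _ in range(max_steps):
        screenshot = pyautogui.screenshot()         # look at the screen
        guess = ask_vision_model(screenshot, goal)  # "this looks like the order button"
        if guess["done"]:
            break
        # The click is only as good as the guess; if the page layout shifted or the
        # model misread it, you get milk shipped to Des Moines.
        pyautogui.click(guess["x"], guess["y"])

run_agent("Help me buy groceries on Instacart")
```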
Starting point is 00:41:38 Kite So here is what they claim agents will do. They will autonomously use your computer to do things and you'll be able to, like there was a big thing at Google I.O. last year where Sundar Pichai did this whole thing saying, yeah, and you could just click a button and the agent will go and find the shoe that you wanted to return just by you telling it you want to return the shoes and it'll go into your email, do this. And they did this long fucking empty headed speech about it and then ended it going, and that of course is completely theoretical. And it's just like, oh thanks. But what was crazy is you had reporters who were like, and that is real now.
Starting point is 00:42:10 Like that this happened. Yeah, her from Her is now real. But what's crazy is agents don't really exist. You have like sales agents that are able to email people and respond to emails and book meetings, and they cost thousands of dollars, and they're like... I don't know what's going on there, and they don't seem to work perfectly. But all of the things that they're talking about with agents and agentic stuff is just fucking flimflam. It's completely, it's nonsense for them to go past what they are at right now, with the ability to generate a busty Garfield, for example,
Starting point is 00:42:45 or whatever. Like whatever thing it is that you perverts want to see with this. It's for generating results from stuff and generating things. That is not what an agent is. An agent takes actions and has intelligence. Generative AI isn't intelligent at all. It doesn't know anything. It's guessing. It's surprisingly good at guessing, but surprisingly good is not good enough to do most things. And it certainly isn't good enough to look at a fucking screenshot of Instacart and go, and now I will hit the button. Also, the buttons on websites move all the time. It's just an insane idea. And it's insanely expensive, I'm sure too. It's just so, it's, it's so, so strange because what they're describing with agents, large
Starting point is 00:43:25 language models are not, are specifically not good at that. That's not what they do. What do you mean? They don't take actions. I'd love to tell my AI agent to send me to Portugal and then there's an 80% chance that I go to Portugal. I think that's great. 80% is so much higher.
Starting point is 00:43:42 You go to Portugal and a guy steals your suitcase full of milks. I hate it when that happens. I know we're kidding, but like 80% would be incredible for these. You'd be looking at more like 5% getting past the first page to then select a flight successfully. And I don't even mean a flight to where you're going. Well, you may be going somewhere. You may be going, or it's decided to get the customer service email of the airline and send them a long bomb threat, and you're
Starting point is 00:44:10 just gonna get arrested as soon as you go to the airport. GARRETT The bomb threats are bad enough, but why do they have to be so wordy? ALICE Why does it use the word delving so much? GARRETT Yeah. Get to the point, man. ALICE What was great as well within the Casey Newton thing was there was a bit where he was saying, yeah, and if at any time it has problems, you can step in.
Starting point is 00:44:28 And it's like, so you're saying that when I ask someone else to do something, I'm allowed to do it for them? And this also costs a lot of money. It means that you're its boss and you can kind of like sadistically terrify it, you know, which is the thing the end user wants. This is like middle management simulation. Yes, exactly. Exactly. That's why they love this, genuinely know.
Starting point is 00:44:47 That's why all of these people think AI is magical. Because all of the people running it and funding it are just middle managers. They don't do real work. Their only job is answering emails, ignoring emails, and going to lunch. And AI is perfect for them because it models a subordinate that's actually as stupid as they assume all of their subordinates are. Exactly. No, that's exactly it. Now, Ed, with that, I think you've actually transitioned us to our third segment pretty
Starting point is 00:45:12 cleanly about AI boosters believing it's magic because I, of course, had to get a VSOP selected article, Reid Hoffman, New York Times. Ooh, baby. AI will empower humanity. Now, Reid Hoffman is a board member of OpenAI, he's an investor in the company, he started LinkedIn, and he like tried to be a Democratic Party power player for a while and just sort of gave up.
Starting point is 00:45:35 So. Yeah, he got bored, which is really funny. He got muscled out by Nancy Pelosi, which again, a hysterical legacy that she has. So, I recently learned of a new way people are using artificial intelligence. Based on everything you know about me, people ask ChatGPT, draw a picture of what you think my current life looks like. Like any capable carnival mind reader, ChatGPT appears to mix safe bets with more specific details. He's like, yeah, this thing does a cold reading trick. Whoa.
Starting point is 00:46:00 What the fuck is a carnival mind reader? I asked the bullshit machine to bullshit me in one of the classic ways, and it managed. That is him trying to imagine a thing that happens outside, and coming up with something... Yeah, what do people like to do these days? I don't know, go to the carnival mind reader, I guess. Yeah. He's seen... he doesn't understand what happens outside, he just goes to Sandhill Road and goes to his, like, 50,000 square foot apartment somehow. Yeah. And derives from that Carnival mind reader. Which is also really funny, because
Starting point is 00:46:34 what he imagines that a Carnival mind reader does is kind of give you, like, a life review, just to let you know how Alex is going. Yeah. What do people do? They go to gypsies? Bumping into Eric Adams at the mechanical Turk stand. To be fair, I would let Eric Adams read my fortune. Yeah. Maybe of all the people on it. Yeah. Considering all the weird like job titles and stuff you see on LinkedIn now, he may actually think that Carnival Mind Reader is like a job that you can put on
Starting point is 00:46:59 your like LinkedIn CV and get endorsements for. Of all of the articles that I've read recently, this is one of the most like looks up from bucket of adrenochrome for five minutes to type this out before putting head back in bucket of adrenochrome. I wish it was me. Perhaps, so this is what he says, it often produces images of people sitting
Starting point is 00:47:17 in a home office with a computer. Perhaps an acoustic guitar sits in the corner or an orange cat sits in the background. But also on occasion, something like say, a head of broccoli will be sitting in the middle of the desk. Whoa! Okay. It knows about my broccoli habits.
Starting point is 00:47:31 Sounds like shit. Yeah. Off-kilter elements like that are what give these portraits not just their quirky charm, but also flashes of epiphany. It's not off-kil- What? Like, this is, well, okay, so if we're going into the dimension of mind reading here, this is fully just like
Starting point is 00:47:45 looking at a weird rock and it's pareidolia, right? Like, it's seeing something that makes no sense and is nothing and means nothing and going, yo, that broccoli signifies something quirky enough, Kelso. This man is like, this man is trying to like, present himself to inanimate objects to be conned by them. Like a carnival mind reader. And even the beginning of this doesn't even make sense. Who fucking cares?
Starting point is 00:48:10 Hmm. Like, what is the experience that leads you to this kind of curiosity? Boredom. Deep. Profound boredom. But you have so much money, Reid! You could do anything! None of these people know how to spend it. Like, there's a certain level of money I think where it just kind of, you aren't capable
Starting point is 00:48:28 of just having fun anymore. You have to be on some weird dumb guy shit. This isn't even fun dumb guy shit though. This is so boring. Yeah, they could get into like banger racing or something. Like a fun dumb guy thing. Can I read this quote really quick? Presented with such depictions, a user may be compelled to ask, am I really mentioning cruciferous vegetables in my chat so often that ChatGPT thinks they're a central part of my life?
Starting point is 00:48:53 This man has not spoken to a regular human being in 20 years. Yeah. This is not something you would say in a sentence if you had a normal conversation with anyone. Anyone in like 20 years, you could genuinely say this to a friend and they would call the doctor. The AI deepfakes have crucified me. They've turned me into a broccoli. You could genuinely get into like banger racing with like a new top of the line Mercedes every
Starting point is 00:49:20 day for the rest of your life, and instead of doing that, you're thinking about brassica. And you were also able to write something in the New York Times and it's this, anyway. So what he's saying, right, he says, as a board member of Microsoft and an early funder of OpenAI, I have a significant personal stake in the future of artificial intelligence. It can be easy to overlook the many positive effects
Starting point is 00:49:40 the technology has had. For example, I co-founded LinkedIn more than two decades ago, but I still get a steady flow of missives from people who found jobs, started businesses, or made promising career changes because of interactions they've had on the platform. Do you? What kind of fucking nerd is emailing the founder of LinkedIn?
Starting point is 00:49:57 I met my wife on LinkedIn. Oh, Jesus Christ. Here's what meeting my wife on LinkedIn taught me about my future divorce. Here's what the carnival mind reader told me about my career. On LinkedIn, you know, you get the people who have the thing that says, uh, looking for work. There's just another one that just says divorce. Yeah. Yeah. Yeah. Yeah. Yeah. Sam Altman's going to have that looking for work tag on pretty soon on LinkedIn. Tech skeptics have long used the adjective Orwellian to cast everything from a video recommendation feature
Starting point is 00:50:26 to navigation apps as a threat to individual autonomy. But the history of technological innovation in the 21st century tells a different story. And then he says, oh yeah, well 1984, technology enables a government crackdown, boot stamping on a human face, blah, blah, blah. Then he says, but today we live in a world where individual identity is the coin of the realm,
Starting point is 00:50:43 where plumbers and presidents alike aspire to be social media influencers and cultural power flows increasingly to self-made operators. Oaf-pilled. Absolute oaf. Oaf. Oaf. Guards, put this man in the stocks. Is his understanding of society formed by talking to people getting off the plane from
Starting point is 00:51:02 Stansted in Dubai? This man does not speak to other people. He speaks to other people as damp as he is. I'm serious about the, about the, about the oaf thing. I think if you write something like this, they have to put you in like a kind of floppy serf hat, you know? You have to be wearing a big pair of felt boots for the rest of your life. As soon as Reid Hoffman published this article, it became necessary for a tuba player to follow
Starting point is 00:51:32 him all the time. The Chinese government sending him a free tuba player as a gesture of aid. For cultural power flows increasingly to self-made operators, including Joe Rogan, Mr. Beast, and Malala Yousafzai. Wow, what a three-year-old. He says, one-man podcasting empire. Yeah. Does he think that Joe set up all the cameras himself?
Starting point is 00:51:58 Joe Rogan, like, can barely understand the person in front of him. If Joe Rogan had tried to mount any of the fucking neon signs that are on the walls behind him, he would have been killed doing that. I don't even know how, it would be some kind of combination electric shock hammer and nails accident. It's amazing for Joe Rogan that he's being included in a list with a woman who was literally shot in the head with a gun and still people are like, Joe Rogan's brain does not work. That he's not been able to catch up at all.
Starting point is 00:52:32 What a bizarre list though. Like Mr. Beast, Joe Rogan, all right. Those are the three people he can remember. And it's just the three people he could remember. He says, I believe AI is on a path not just to continue this trend of individual empowerment, but dramatically enhance it. Imagine... and then, anytime a New York Times article starts a paragraph with the word 'imagine,' you know you're in for some thinking.
Starting point is 00:52:57 Imagine AI models that are trained on comprehensive collections of your own digital activities and behaviors. This kind of AI could possess total recall of your Venmo transactions, Instagram likes, Google calendar appointments. The more you choose to share, the more the AI would be able to identify patterns in your life and surface insights that you may find useful. I know this doesn't work, but Christ that's fucking grim. Getting an alert on my phone that's like, hey it seems like this is about the time of
Starting point is 00:53:21 day you usually post about wanting to kill yourself. Do you want me to bang one of those out for you? Like, not really. No, but this is, they already promised this bullshit. They've already fucking said that this, take AI out, they were saying this was gonna happen with your fucking Gmail a while ago. Oh, we're gonna know. They've promised this forever. They'd be like, well, we'll get all the analytics on your shit and we'll be able to serve you perfect stuff. And what you actually get is just 19 different algorithms causing you brain damage on every platform including Venmo somehow.
Starting point is 00:53:50 Yeah. The gaming AI that's like, you've posted about it enough, do you want to post again and perpetuate the cycle or do you want to just take the toaster? Do you want to respond to that woman and say she is cringe? And like... Just all the most self-destructive social decisions you can make being laid out before you, and it's just like, agentically, like, here is a list of your exes, here's a list of texts that I've written to them, here's a button to send all of them, and I've already
Starting point is 00:54:24 pressed that button. And I've already, I've already queued up several texts at 2.30am, 3.02am, 4am, which will make her very upset. And then a text at 9am to apologize. Yeah. What I've done is I've taken the liberty here of drafting for you a tweet about the role of, like, mask wearing in leftist organizing and I'm just gonna post that and we're gonna turn notifications back on. Yeah. Hi, based on your entire Gmail history, I've compiled what I think you think the top ten
Starting point is 00:54:58 races are. I'm gonna post that now. Goodbye! So I've been through your contacts and I've found your boss and your boss's boss and what I've done is, based on your recurring nightmare about this, which I've sampled, attached about 30 gigabytes of hardcore pornography and just went enjoy and sent that. Don't tug it off now, boys, is the message I sent. Auto-reply from my boss's AI. Thanks very much for the hardcore pornography.
Starting point is 00:55:31 I'll review this and get back to you. Just all people's different social nightmares banging off each other forever. Yeah. My Chinese wanking AI has just arrived. As per my previous trove of hardcore pornography. So, as per my previous trove of hardcore pornography, just me and my ex remain blissfully unaware of each other, but our two AI agents are just like, basically like, back and forth until they're proposing to each other. Circling back to the hardcore pornography issue, Jason, if you could just crank some out for me by 5pm Monday, that would be great. Thanks so much.
Starting point is 00:56:04 Decades from now, as you try to remember exactly what sequence of events and life circumstances made you, for example, finally decide to go all in on Bitcoin, your AI could develop an informed hypothesis based on a detailed record of your status updates, invites, DMs, and other potentially enduring ephemera that we're often barely aware of as we create them. To be clear, like deciding to go all in on Bitcoin is a life decision that I think you'd spend a lot of time trying to reconstruct your state of mind and figure out why you did that. And justify.
Starting point is 00:56:34 Yeah, yeah, yeah. I think that's normally something that would happen in, like, I don't know, debtor's prison, you know? It did have something to do with hardcore pornography, but... Divorce court. Yeah. Well, I mean, this is basically what it's all for, right, is to plug in why did my wife leave me, and for it to tell you, uh, you didn't do anything wrong, you know, she's
Starting point is 00:56:56 just a bitch. Would you like to email us some hardcore pornography? You know what'll cheer you up, old chum, is sending your ex-wife a list of your favorite racists. Also, right? What Reid Hoffman is talking about, Reid Hoffman is saying, hey, all of what being alive is, which is understanding yourself and what has happened to you and having that story of yourself in your head.
Starting point is 00:57:23 What if you just outsourced that? No, boring. Delegate it. Delegate that shit. What if you delegated that? I want to give that job instead of my brain to the world's stupidest yes man. Yeah. Like what's the point then of continuing to just exist if what you're doing, if you've largely outsourced your understanding of yourself? If you become a mystery to yourself that's explainable only through looking at this mirror of a stochastic parrot that's looked at your old Gmail invites and tells you, like, here's why you did what you did, as though you're not yourself, that seems like a terrible way
Starting point is 00:57:59 to live, just waking up every day. I mean, think of it this way. It's an end run, right? If making a large language model into a sort of agentic thing is very difficult, you just make the person into the agent and then justify it with the large language model. Easy. You just do it backwards. I think that's probably right. We've plugged this idiot into my sort of like AI assistant, what compelled
Starting point is 00:58:26 me to buy a bunch of milk in Des Moines, Iowa. And it's just like, I don't know. You just like milk, I guess. Yeah. You love milk. You love Des Moines. Let's go. Why did I, why did I steal all of those clothes out of those suitcases? And why did I sell them on Grinted? Yeah. I'm a mystery to myself. I'll never be known to the AI. Why does he burn his hand on the quickie griddle? Yeah, who knows. I- I mean, the thing is, right, most of what this AI agent will be doing is looking
Starting point is 00:59:03 at emails that just say Grailed and Vinted merge and are now Grinted, and then six months later, Grinted misses you, and then another six months later, Grinted is shutting down. And it's just gonna be replying to them, being like, cool, thanks for letting me know. You know what I think about Serbians? Do you need any hardcore pornography? Yeah. Well, I'm sorry.
Starting point is 00:59:22 Selling my stolen DVD copy of Thunderpants on Grinted. I'm never going to hear the end of this fucking thing. Now what I need to do is to reconstruct exactly what circumstances led me to saying the word Grinted. Yo, that is so Grinted. Nah, bro, that's bare Grinted. Like, it grabs all my webcam data, it's like, well, a large part of it was you decided to kind of twist your chair sideways, bisexual-y, and sit on it in a slightly more relaxed and
Starting point is 01:00:00 cool way, thinking that it would help the comedy flow, but instead it made you make a fatal error in how you pronounce words. Hey, next year for the TF secret santa, let's make a rule that we all have to buy the gifts on Grinted, okay? Fuck off. Yeah, yeah, yeah. Rupertville is terrorized by the Grint. When you're trying to decide if it's time to move to a new city, your AI will help you understand how your feelings about home have evolved through thousands of small moments.
Starting point is 01:00:30 Everything from frustrated tweets about your commute to subtle shifts in how often you've started clicking on job listings 100 miles away from your current residence. To which I have to say, that person who's saying, I fucking hate this town and I'm looking for a job in another town, has already decided. What would they need an AI for? What if you had a Spotify Wrapped for your mental health? What if you have dementia and you forget what you're doing halfway through a thought? What if an AI kind of understood something? This November, your mental health was absolutely Grinted.
Starting point is 01:01:00 I also want to add that there is an app called The Grint, which is a golf app. For the mere crime of Grint theft luggage. Yeah, there's no need to get disgruntled. The Grint, where golf happens. The Grint's origin story begins with one obsessed golfer at a small surf town in 2012. Flash forward six months and I'm trying to reconstruct why we're selling a t-shirt that just says Grint on it, in Impact font. The Grint is awesome on your own, but it could be even better when you're friends.
Starting point is 01:01:35 That is what it says by the way, when you're friends. It's true. When you're friends. Rise and Grint. Rise and Grint on the t-shirt and then a picture of Rupert Grint. Last paragraph. So imagine a world in which an AI knows your stress levels tend to drop more after playing World of Warcraft than after a walk in nature.
Starting point is 01:01:52 Imagine a world in which an AI- I know that. I mean, it's not true, but I would like to think that I have some like decision making capacity here. I have some fucking command and control instead of like ChatGPT doing it and being like, Oh, remember to drink water or whatever the fuck. Fuck you. I would rather die of dehydration. Imagine a world in which an AI can analyze your reading patterns and alert you that you're about to buy a book where there's only a 10% chance to get past
Starting point is 01:02:19 page six. But also just beyond just like, again, like that's terrible. Just being like, Hey, you should... Makes me so angry. You should only ever read the kind of book that you like. You should never... I don't need my life fucking optimized to every fucking corner. I don't need... like, the idea of experiencing something I don't like isn't necessarily a
Starting point is 01:02:38 negative experience. That's being human. Fucking hell. Anyway. But the thing is, if you purge yourself of all of that kind of granularity or that kind of texture of existence, you become a medieval oaf, it seems. Yeah, that's right. Reid Hoffman is just sort of...
Starting point is 01:02:54 You just don't think about cabbages all the time. Yeah. He's just gently bouncing through the world like a beach ball wearing a sort of oaf hat, occasionally getting pelted with tomatoes and a smile plastered all over his face. Anyway, we've gone a little bit long, not as long as we've usually been going but a little long. So I'm gonna bring it to a close there. We've done a long grint. Yep, we have a... That's right, it's Trashfuture, long grint to freedom.
Starting point is 01:03:17 Jesus Christ. Ed Grintron, thank you so much for coming on the show. Where can... We're very grintful. Where can people find you? If they want to hear more of you or read your lovely words, go to betteroffline.com slash Grint and you'll find all of my various... You can find me on Grint, of course, the golf app. Get on the GrintaNet.
Starting point is 01:03:52 Aww yeah baby! Yowza! There we go. I mean, November, this is just what Hussein experienced when he talked about the Gerard Butler movie Gamer. Yeah. I know, I know. Listen, we all have like a turn at this happening to us and this is mine.
Starting point is 01:04:10 Yeah. So check out Better Offline and thank you also for being a listener to the show. We also have a Patreon. It's $5 a month. You'll get a second episode every week. You know the deal. So check that out. Milo, are you done being in all of the cities of the world? Still on sale is Leicester on 9th February doing work in progress and
Starting point is 01:04:29 a tour show there, both of which are new to Leicester. Also Glasgow on the 12th of March, that's selling pretty well, and all the Australia dates. Please buy tickets to the Australia dates because it's so expensive to go to Australia. And he's gonna be in Abidjan! He's gonna be in Dar es Salaam! I'm gonna be in Geelong! That's our whole thing. The Howard Dean thing? Alice Springs! All you Pivotonians, come check out Milo in Geelong.
Starting point is 01:04:57 Alright, alright, that's actually done now. Once again, thank you to Ed, thank you to our listeners, thank you to my co-hosts, and we will see you in a few days on the premium episode. Bye! Bye! Bye! Gringe you later!
