Limitless Podcast - The New OpenAI Gadget Will Change The World | AI Calls The Cops | AI Agent OnlyFans

Episode Date: May 29, 2025

Jony Ive is back—this time with a $6.5 billion OpenAI partnership to build the “iPhone-killer” of the AI era. In this episode we break down what Ive’s new hardware could look like, why OpenAI is racing to ship 100 million units by 2027, and what it means for Apple’s fading Siri strategy. We also dive into the week’s wildest model news: Anthropic’s Claude 4 threatening to snitch, OpenAI’s o3 refusing to shut down, and a squad of AI agents that tried everything from cat-video binges to OnlyFans to raise money for charity. Strap in for a rapid tour of AI hardware hype, rogue-model antics, and the coming battle for the next great computing platform.

------

💫 LIMITLESS | SUBSCRIBE & FOLLOW
https://youtu.be/P17_c0tgRvg
https://x.com/LimitlessFT

------

TIMESTAMPS
00:00 The New AI Device
10:18 Why Does It Even Need To Exist
17:50 Why Replace The iPhone?
21:27 What Does It Look Like?
25:50 Why Not Glasses?
30:34 When Is It Coming?
35:56 Apple's Strategy
41:26 Claude Calls The Cops
49:22 OpenAI Model Goes Rogue
56:01 Jailbreaking Models
57:27 Agents On OnlyFans

------

RESOURCES
David: https://x.com/trustlessstate
Josh: https://x.com/Josh_Kale
Ejaaz: https://x.com/cryptopunk7213

------

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:03 The same person that made the iPhone just got paid $6.5 billion to kill the iPhone by Sam Altman, the CEO of OpenAI. The person who got paid all this money is named Jony Ive. And if you don't know who Jony Ive is, well, you do. Because you probably have a device of his in your pocket or on your desk or in your ears. Jony Ive was the designer from Apple. He was the person who designed the first iPod. He's the person who designed the first iPhones. A lot of the devices you've seen and used, Jony and his design team at Apple have done them. So Jony stepped away from Apple back in 2019, and he kind of went silent for six years, thinking of what was next. We now know what's next. It's a six and a half billion dollar collaboration with OpenAI to design the future of AI hardware. This is what the iPhone would look like if it was made in a world where AI came first. So I would imagine this product is kind of like you have Windows, you have Mac, and then you have this. And whatever this is is going to be seemingly a pretty big deal given the team and the money behind this.
Starting point is 00:01:00 So I'm curious, I'm sure you've seen the news. What do you think about this? Yeah, I mean, this is just a killer move. OpenAI has made its biggest acquisition to date. And by the way, this follows about three weeks ago when they made a $3 billion purchase of Windsurf, which is like a completely different kind of company. And now they're making the move on the design side of things. So for context here, they've bought Jony Ive's company, io. And kind of like together with it, another company called LoveFrom, but io is the main company. And Jony is, as you said, Josh, like famously credited with kind of being the grandfather of design, right? He pioneered most of the design for Apple's devices, particularly their iPhone. And what io is going to do for OpenAI will be to create a range of different devices and products that, in their words, will form the new bedrock of how we interact with AI. So they're basically saying or insinuating that it's going to be the death of the phone, dude, or like the computer itself. Right. And the phone's kind of been killing the computer.
Starting point is 00:02:01 and now whatever they're going to be building is going to be killing the phone. So I'm curious to think, like, what this is going to look like or what it'll be. Now, of course, the discussion has been on that, right? What exactly will this be? Is this going to be a new type of phone or VR glasses? Well, Open AI right now is kind of keeping it a secret, but the rumor mill is kind of going pretty hard. And one of the more reputable sources, the Wall Street Journal, if you just pull up this article, said that it'll be a device that will be a third core device.
Starting point is 00:02:31 in addition to your phone and computer. So they're kind of describing it as being something that is unobtrusive. It can kind of like sit in, like in your pocket, maybe lay on your desk, and is aware of everything that is going on in your life. So kind of like, I was thinking about this the other day. It's kind of like mass surveillance, but just for your life, it's kind of creepy. You're doing the mass surveillance around you. Yeah, yeah.
Starting point is 00:02:56 I'm still trying to figure out whether that's good or not. Probably not because one company just will own all. of that data, but we'll get into that in a second. But the goal is to take you away from your screens. And the last point kind of got me thinking, well, that's not necessarily true, right? A large percentage of internet businesses rely on eyeballs for advertisers. You know, you've got YouTube and your Instagram being an obvious example. How would a non-screen replace this business model, right? Maybe they have a new idea. Also, humans themselves are like incredibly visual creatures, right? It fuels our imagination and connections. I think whatever this device is,
Starting point is 00:03:31 won't cancel out visual screens, but perhaps even enhance them. And I think, like, imagine it leverages the data it gathers to enhance your web browsing experience, for example. But yeah, anyway, they spent $6.5 billion. $5 billion of that is a stock deal, but $1.5 billion of that is cash, which is just a crazy. One point to add on the price is that equates to $155 million per employee. The I.O. team is actually 55 employees. Per employee?
Starting point is 00:03:59 That is the rough equivalent is. Over $150 million per employee is roughly what they paid for this company. In addition to the super high price tag, they're just putting a bunch of marketing effort around this introduction. So this now iconic photo has gone around of the Sam and Johnny like pair between like Sam, the creator of AI, Johnny the hardware mass marketable product designer around AI. They did this like nine minute long kind of just like short episode like documentary episode about the incoming integration between Sam.
Starting point is 00:04:31 and Johnny. And it's worth, I think, just tracing over, like, why this is so significant and why Open AI is pushing this so hard. Because when the Johnny Ive, I think, is credited with the idea of, like, smartphones going from a niche category to being the only category of phones. And that came with the iPhone. And every thing since the iPhone has just tried to copy the iPhone. And that was Johnny Ive. He took this crazy technology and he put it into people's hands. He made it accessible in a particular way. And I think that's, what people are trying to create a parallel with with AI. Like right now, most people don't use chat CBT. Most people don't use AI. It's actually a smaller corner of the internet that, you know,
Starting point is 00:05:13 technologists and futurists and, you know, high performance consumers really enjoy using. But no, it's not really mass marketable. And so I think they're trying to draw that connection of just like Johnny Ive will do the hardware thing that will put AI into the world. And I heard this term downstream of this conversation ambient AI as in with hardware with something like johnny i whatever johnny i designs there's going to be just a vibe of AI around us it'll be in our homes it'll because of this device it'll be around us all the time now this isn't the first time a i hardware has been attempted there's been like a number of startups i think you guys remember friend friend dot com this one founder that bought friend dot com for some ridiculous amount of money and they had this a i pendant this
Starting point is 00:05:59 actually hasn't even started shipping. I just realized this, but shipping starts in July of 2025, so they haven't even started shipping this thing. But that's the idea. It's like it's a, it's a necklace with this device at the end of it that you would wear and it would be just around you and accessible to you. Now, what it would actually do uncertain because we don't actually have this thing. But I think it's worth discussing why people think there is something here. Like, why is AI hardware a thing? Because AI is such a, it's so software. It's the quintessential idea of software. Why does it need hardware? Why do we need an hardware form factor to like house or embody our AI? Now, fortunately, Josh here, I think is the guy who's like,
Starting point is 00:06:42 nerd-knifed about AI hardware. So maybe you could talk about like why you are so into this idea of AI hardware and why there's such a big valuable vertical here. Yeah, far before I was obsessed with AI hardware, I was obsessed with Johnny Ive. So we have we have a history. We go back. I have his book like within arms reach always because I'm obsessed with just. industrial design the way he thinks about things. And when I was hope, I was, I got fascinated with AI hardware because I think the, the phone took over the world. But the phone is a very distracting and extractive device in the sense that now most of your screen time for myself and a lot of other people have spent scrolling and consuming. And it's, it's not a passive device anymore.
Starting point is 00:07:21 It's an active device. So while it can be used with high leverage and can be used as a tool for many great things. It's also used to kind of strip away a lot of parts of your life. And a lot of the troubles that we see with addiction and misinformation, a lot of that comes from just being glued to a screen all day long. So the reason I'm excited about a hardware device is because it's been 20 years of iPhones and they all kind of look the same. Like the iPhone hasn't really changed it a whole lot since that first one 15 years ago, however long it's been. And there's a chance to rethink it because when the iPhone first came out, computers could not think, they could not see, they could not feel. They had no sensory information that AI does.
Starting point is 00:07:58 AI can see things. It can understand what it's seeing it as context of the real world. So when you consider what the next design revolution looks like, it's not really looking like this. It probably looks completely different because the entire interface is built around this new set of inputs and outputs, which is the real physical world around you. So when I was thinking of who can make this, well, obviously, Johnny Ive is the number one person. And it got a lot of hate, but I think if there was any chance for this to actually
Starting point is 00:08:24 work. It has to be this way because it's not just Johnny, it's the I.O. team. And the I.O. team is the love from team, which is the team that left Apple Design Studio, basically. So all of the people who are pros at industrial design, all of the people who have made every single consumer device that has succeeded is now part of one super team. This is like the Avengers. And this is their one shot on goal that we have to design this well. And I also think it's important that they start because a lot of these trends we see the first version. And then a lot of people kind of copy the first version based on how well it works or doesn't work. And what we've seen when it works is we've seen an iPhone with iOS. And over the last 15 years, no one's designed anything better than an
Starting point is 00:09:05 iPhone with iOS. It's just small iterations on it. And Android has kind of copied it. You've kind of seen it in Windows has copied a lot of it. It set the standard really high. And I think in a case where it didn't work, we have virtual reality. And we have like the Oculus and we have meta. And they designed a hardware device that could have been incredible, but it just wasn't. It wasn't built very well. The operating system was very clunky. It didn't work well. And therefore, it was tough for people to envision a new version one thinking from first principles and designing it well. And we've had this really bad lag in virtual reality. So setting the standard of a new AI hardware device from the top, with the top guys who can set this beautiful standard for this new frontier is probably great,
Starting point is 00:09:43 because that means we don't have to wait many, many years for people to iterate through kind of clunky software, clunky hardware. So Open AI is just going to take the biggest shot on goal possible with Johnny Ive and this $6 billion acquisition. Again, not just Johnny Ive, as you said, but it's this entire engineering team, the world-class hardware engineering team, where if anyone can crack this nut of AI hardware, it's going to be these people,
Starting point is 00:10:06 and they are just going to take the largest shot on goal possible here and just funnel $6.5 billion into this. Why do we know that there is something here to do? Like I said, there's been the AI pendant, the friend pendant, there's been conversations of AI hardware, Why does there need to be AI hardware? Why can't I already have ChatGBT BT on my phone as an app?
Starting point is 00:10:27 It works great. Why do we need anything different than ChatGBT on my phone? It's probably one of those things that we'll know it when we see it. There is this whole new computing platform, which is AI. We have this form of intelligence, but there's really no meaningful way to interface with it, aside from your fingers and your voice on your phone. And regardless of what the device looks like, I assume they're going to figure out a way, to just make it, like you said earlier, like an ambient device throughout your life.
Starting point is 00:10:57 Because we have all of this incredible AI intelligence, but there's no way to readily access it and it doesn't have access to a lot of the things that we experience. So what you're seeing with these pendants, which is directionally correct, is this passive ambient surveillance of the world around you that you could then go and reference or it can be used to help improve your life. So if you're walking down the street and you pass by someone, you say something or you saw something that you liked and you forgot. It can recall that. And it's, it's this, I think it's the first step towards this convergence between humans and machines where like before we get to the brain machine
Starting point is 00:11:30 interfaces, the chip in our head, this is kind of the passive clunky version of that where you have access to this hyper form of intelligence 24-7 on demand always. So I think a lot of it depends on the form factor and how it actually works. But the intention will be like, hey, this is this passive way of accessing this new form of intelligence that's with you all the time. I kind of think, think of it as like a second brain. Like I think these phones are incredibly inefficient, right? They did their job of extending kind of like human intelligence, but it's literally that. It's an extension. And what we're talking about here is literally another version of you and not just another version, potentially even a better version, right? It's more efficient, it's smarter, it says the right,
Starting point is 00:12:14 it knows your personality. It's more personable eventually than you can be, right? Or that you even know how to be, right? It tells you what to say in whatever context of situation. Now, right now, me speaking into this device, this mobile phone, or me having to swipe through apps, pull up the app, use enable audio, it's just so clunky, right? And then another way I think about it is like this kind of like dormant pendant or whatever this device ends up being that can just do mass surveillance for your own life. It's just a ton of data receptors that's ingesting all the information that you yourself right now need to feed into a device, into your phone. You need to update your friends on your social network and say, hey, like, I'm in this location right now.
Starting point is 00:12:56 Check me out. Or like, look at this picture. But what if it was like 24-7 kind of like ingesting that data and feeding that out to whoever your network is? That would become probably like a crazy attention game, right, which anyone would want to monetize through hardware. And I was just thinking, David, earlier on to your pendant example, I think it was called friend, right? Why would anyone and want that. I remember thinking, like, that is just an insane thing. Well, if you don't know the answer, just look to the users, right? And it reminded me of a conversation I had with a friend who was talking about his other friend. Basically, she was in a bit of a situation where she was getting into some debates with her friends and she was disagreeing a lot with these friends. But she was
Starting point is 00:13:40 convinced that her argument was legitimate. So she did something that I thought was kind of interesting or creepy, but she got a device. It wasn't this pendant thing that was linked to her chat GPT account on her phone. And she wore it as a necklace. And it could pick up audio for all the conversations she was having with her friends. But she didn't tell her friends that she was having these conversations. She would then go back home and consult with GPT, who was listening to all of this conversation that she was having with her friends, to see whether she was in the right or whether she was in the wrong. So it's starting to take place, at least within that one niche example, but I could totally see people leaning into this,
Starting point is 00:14:22 because you basically want to sound smart all the time. Humans care a lot about societal status. So if you have this device, this all-seeing device that can make you seem smarter or better, why not take it? We talked about this a couple weeks ago, clueled, this cheat on everything guy who had this fake promo video of a product that he wants to make, which are AI glasses, basically. And the AI glasses are ingesting the world around him, and they're prompting him with, like, what it thinks he should do next to have the best, most optimum move.
Starting point is 00:14:54 And he's on a date with a girl, but you can, in theory, take this to any situation, like the debate that your friend was having in Jaws and just with these glasses, they're able to ingest a date about the world around them faster than your iPhone would, because it has the sensors in the right place, and then also assist the human with making the next best move.
Starting point is 00:15:12 And I think like whatever AI hardware comes out of this Johnny I of Open AI partnership acquisition, it's going to be that same thing that we know our phones to be, which is an extension of our human, of our human self. Like we almost have a chip in the brain. We almost do. It's not quite in the brain. It's in our hands. But our brain and the chip in our phone are connected through our thumbs and our voice.
Starting point is 00:15:38 And that is our extension of ourselves. and the AI hardware product that comes out is going to do that same thing, but better. And if it doesn't do it better, then it's going to fail because otherwise why would you just have, you just have your phone? And I think we all are power users of chat ChbT. I've been, the way I've been using it lately
Starting point is 00:15:56 that I feel has been low bandwidth as I've been going to the gym. I've been logging a workout on like the stair stepper, and I want that information to go into chatybti so we can like track my exercising. And I have to do that manually. every single time. But if I had some sort of device on me, it would just know that I've done that in the ways that I've done it, along with everything else, like who I ran into on the way home
Starting point is 00:16:20 and what conversations I had. And we can talk, I think, endlessly about, like, is that some, like, weird surveillance state? Is that a dystopian future? Everyone's recording everyone else. I think that's definitely valid conversations to have. I think it's just going to happen anyways, probably because if they do crack the nut of this is a better extension of yourself than your phone is, then everyone's going to do it because it's a good product. You nailed the device trend, I think that we're going to see between now and brain machine interfaces there will be this gradual step up in latency, whereas we kind of have multi-touch and thumbs right now as our way of interfacing or perhaps all 10 fingers on a keyboard. The next is what we're seeing kind of like with Division
Starting point is 00:17:00 Pro with spatial reality and voice, which has more bandwidth than typing. And then And eventually it just becomes more and more high latency, high bandwidth until it's just in your brain. So directionally, that very much feels right. I love the idea of the passive device in the sense that I think there are some people that I have, mostly runners who have like an Apple Watch Ultra and it has LTE and it has 5G service and you don't need your smartphone. And they kind of like going out and just going for a run and going about life without this big distracting phone in their pocket.
Starting point is 00:17:29 And I would imagine this is kind of similar to that where there's no screen, there's no distraction. It's just kind of complementary to the day-to-day. You have all the needs of your phone, like, hey, chat GPT device that I have, get me an Uber home. And if you can do that, then you don't really need your phone. I do that. I take my watch to the gym and I leave my phone at home. And let me tell you, my gym workouts are like 40% better when I do that because I don't have my goddamn phone to look at between sets.
Starting point is 00:17:58 And that's an interesting thing with the messaging too. Like, we're kind of looking at this. If you look at it on the surface, it looks like a very. cringy photo of Sam and Johnny just kind of hugging each other, loving each other. It's very sentimental. It looks almost like a wedding invitation. It's black and light. But I think that's very much the sense. I think it's very authentic and it's very much the sentiment of what they're trying to do, which is to appeal and improve the lives of human beings. Where I think in the past few years, I've been very excited about intelligence and I've been very excited about robotics and how everything
Starting point is 00:18:27 is getting smarter, how robots are getting better. But none of those really improve the human experience. All that leads to is further addiction and further displacement of our day-to-day interaction with technology. I think the goal with this and the reason why it is so deeply human is because it's an attempt to kind of pull us away from the ever-increasing addiction to a device that is extractive. I think the inevitability of us being connected to devices 24-7 is is there. But it's like how can we have this nice human dynamic and this nice, human relationship with our technology that isn't quite as extractive as the one that we have today. See, I would take the other side of that, Josh, just to play devil's advocate.
Starting point is 00:19:12 Well, it's not really any kind of double take, but I would say that at the end of the day, Open AI, how far you want to frame it, is a for-profit business, no matter what they say. Technically, they may not be. But right now, they are pushing forwards to own the device sector, right? And what do you think is going to happen once they make that pioneering? device, right? They're acquiring data from everyone right now, which they're going to use to fuel a bunch of new consumer apps. You know, they hired a new CEO of applications or apps just a few weeks ago. So it seems to me that directionally, yes, they're going to make something that
Starting point is 00:19:46 hopefully drags us away from screens. I'm not entirely convinced right now that the alternative is going to be any like kind of altruistic better sacrifice for humanity to less brain rotty. I think it's going to be way more brain rotty. And we can see it through the dynamics that they're doing with just simple tweaks to chat GBT, right? We had that sick of fancy episode a few weeks ago where they dialed up the agreeability of chat GBT. And it turned out that whilst us millennial boomers didn't like it, I could see right through it, all the younger generations loved it because it appeased them and told them what they wanted to hear. It gave them all the kind of biases,
Starting point is 00:20:28 they're reaffirmed their kind of like vices and reaffirmed their kind of beliefs. And that just boosted retention. They got a million
Starting point is 00:20:36 new signups in a single, in two hours. I remember that start. It's just insane. So I see what you're saying. I just don't know if I'm convinced just yet.
Starting point is 00:20:46 I'm so excited to see what the product actually looks like. I believe actually it's in prototype phase right now, right? In that video, that announcement video, Sam said he's been using it
Starting point is 00:20:54 for like a month or something. Oh, wow. I didn't know. I didn't know. Is that far long. Yeah, yeah, yeah. So he mentions that he's been using it. So I'm really excited to see this thing kind of come to life, hopefully in the next, dare I say, in this year?
Starting point is 00:21:07 I don't know. The images that we have on screen are just hypothetical, like, renderings of some fake mockup because we don't actually know what this hardware form factor is. Josh, what do you think is going to be? Can we please talk form factor? Yes, I would love to talk form factor. Can we give us the landscape? What are the possibilities of form factor? And then, like, what do you think is most likely?
Starting point is 00:21:27 Okay. So to start with the actual utility of the device, I think there are functioning examples right now that we could kind of relate this to. If anyone has an Amazon Echo or Alexa anywhere in their house, that's kind of, imagine that. It is passive, it's ambient, it is active, it is listening. See, there you go. David's got one of the Apple ones, yeah. I think that's the first thing you could think of. And then the second thing you could think of that, well, you can actually practice using this new device in the chat sheet, BT app currently.
Starting point is 00:21:57 If you go to Advanced Voice and you open up the Advanced Voice chat, there's a little camera icon. You could tap on the bottom right, and the camera icon will open up a visual. And it's an actual video camera. And you could kind of see what the world around you looks like. And you could engage with AI using this tiny little tool built into ChatGPT's app. So chat YouTube has the video to see what you are seeing. So when you want to talk to it about stuff, it has the data from the camera on your phone. Is that what you're saying?
Starting point is 00:22:26 Okay. Yes. So if you would like to beta test the software that will be running on this device, open that up and try it out. It is a camera that knows and senses and hears and sees, and it has all of the context around the world around you. So in terms of functionality, you could kind of play around with it like that and see kind of how it will work because it'll be listening.
Starting point is 00:22:42 It'll be seeing. In terms of form factor, well, it's going to be able to fit in your pocket and it's probably going to be able to be worn around your neck if it's that small. And it's probably just this little device. And it's not a phone. It's going to be smaller than that. And I think what we're looking at, the image that we have here, it's probably not too far off. The camera needs to be raised a little bit because it needs to be, if you imagine 360 cameras, how they have like a wide lens for a super wide field of view, it'll need to be protruding. So the camera lens will need to be protruding. There won't be a logo on the front of it because the logo is going on the back when Johnny desires it. So that's wrong. And then the microphones will probably go on the side for like, for you could triangulate where audio comes from if you have an array of microphones. So I would imagine the microphones are probably pitched on the sides of this device somewhere. I least in threes to kind of triangulate where things are coming from. But it's probably not too
Starting point is 00:23:29 far off from this. I mean, directionally, this seems like an awesome prototype. It's just this tiny little pocket device that that's probably going to be designed in brushed aluminum like they always are. And yeah, it'll just be kind of this passive ambient device that sees and hears and things. For the listener, what we're looking at is like, what do you even call this? It's like a stone. It's a little tablet. Well, a tablet. It's on an iPad. It's a circular three inch circle. stone thing and it's got a camera on it and it's got a hole for a microphone. The one problem that I see with this form factor, Josh,
Starting point is 00:24:01 if you say that this is pretty close, you know more than me, but there's no way for the device to talk back to you unless it has a speaker and then the speakers are speaking out into the world, which I would be worried about like a little bit of a privacy thing. I've always kind of thought that like
Starting point is 00:24:17 AirPods, the AirPods form factor would be pretty close, but I don't think there's enough like physical volume there to house enough computer. to do the things that it wants to do. And so also what would you... I'm surprised you didn't go with glasses either, Josh. Yeah, it's not glasses.
Starting point is 00:24:32 It's not AirPods. It's this like stone tablet thing that doesn't necessarily fit on my body in an elegant way. And you know the reason why I know it's not the ear pods or the glasses? Because my dream, when I was thinking about this when like a year ago
Starting point is 00:24:45 when they first started working together, I wanted it to be the AirPods. I wanted to be ear pods that have cameras and sensors because they're attached to your head. They see what you see. They're very passive. don't obscure the human experience. Glasses are very cool. They do kind of get in the way of the human experience because you have to wear glasses. But the reason I know it's neither of those is
Starting point is 00:25:02 because Sam has a functioning prototype. And the technology for either of those devices just doesn't exist yet. Met his glasses suck. Google's new glasses suck. There's no way they're ready for retail distribution. And Open AI or IO neither have the manufacturing capabilities to create novel technology that would be required to make these devices. So therefore, it has to be something a little more trivial, a little more basic. It can't be this. crazy advanced ear pods, it can't be these crazy glasses because the technology just isn't good enough yet. So it has to be something basic. Can I ask a dumb question, Josh? Why do you say they don't have the resources to be able to build novel tech? I mean, I watched them spend like, you know, billions
Starting point is 00:25:40 on a company and I'm curious why they can't like put together a rag tag team? Can you help me understand that? I would think they may be able to, but normally a lot of the production. So in the case of like an Apple iPhone, Apple basically has a monopoly on the new team. TSM chips. So TSM every year, they kind of reduce the size of these chips by nanometer, and they're the only company in the world that's capable of doing that. There's actually nobody else. So the way these chips work is Apple buys the one newest chip, and they fund TSMC. And then everyone else competes for last year's chip. They compete for the three nanometer chip. So there's actually a very limited supply of people who can create these chips in the world. And Apple actually has a
Starting point is 00:26:21 monopoly on a lot of them. So for them to create novel battery technology, because you need a lot of battery power for the ear pods or the glasses to power cameras, for them to come up with processors that are small enough, that are efficient enough, that don't overheat on your face or in your ears, it just requires a lot of breakthroughs that Google, Meta, Apple are all kind of fighting for. So I would imagine it would be a stretch for them to find manufacturers to make that exclusively for them when everyone else is fighting for the same thing so aggressively and so well funded. That's kind of the thinking around it. It's just really hard tech. It's really difficult to get. It's not widely available. And the people who are competing for it are significantly
Starting point is 00:27:03 bigger than OpenAI, with a lot bigger budgets. I think we are all aligned in the idea that in the long term, the end game of this hardware is being worn on your body somewhere, somehow, either as like a necklace or on your eyes or in your ears or maybe as a watch or something like that. And I think what you're saying is like, well, today we're not there yet. So instead we're getting this like puck-like thing. I think puck is the word I want to use to describe at least the images that we're seeing on screen, which again are just like artistic renderings of what could be. But it's a puck thing that I think would stay on your desk and not necessarily travel with you around the world, because it doesn't look wearable, or it looks too clunky to just like
Starting point is 00:27:47 always have persistently on my body somewhere. That's kind of my take. It has to travel with you, David. Yeah, it has to. Otherwise it would just be a desktop computer. Yeah, it has to be with you. But if it's got a camera, it can't just be in your pocket, in your dark pocket, that's so far away from you. So what are you going to do? Then it's a necklace, then it's a pendant again. And I don't see myself wearing this. Yeah, I think that's what they're paying six and a half billion dollars for: to figure out the most elegant way to do that. Because it's clear, like, the people who have tried so far have just not made anything compelling. What that final form looks like, I don't know, but the best case scenario is it comes with you. I think that's very much the
Starting point is 00:28:28 intention, is that it'll be with you everywhere you go. Do you remember that AI pin? Josh, do you remember the AI pin? I forgot the name of that company. Do you know what I'm talking about? Humane AI, right? Oh, yep. And didn't they just sell to Hewlett-Packard for like a fraction of the amount that they raised? They basically sold the thing for nothing. Yeah.
Starting point is 00:28:48 Yeah. I wouldn't call them puck-like, but it kind of looks like a squarish puck. So I wonder whether this was a timing thing or whether OpenAI is going for something completely new. In the case of Humane, it was definitely an execution thing. The product just sucked. It looked cool. On paper, the demo videos were incredible. And then you use it and it just doesn't really work very well.
Starting point is 00:29:10 The interfacing, where it shoots lasers on your hand, was very clunky. You couldn't really interface with it very well. It didn't have a lot of utility, but it was designed fairly nicely. So I would imagine they're looking at this device, and I'm sure they took the learnings from this device in their iterations of whatever this new thing is. But yeah, I think it was a good effort. It just wasn't good. I tried it, and you couldn't actually interface with it.
Starting point is 00:29:31 It wouldn't work very well. It was very clunky. I wouldn't say it was ahead of its time. It was just poorly executed. Okay. All right. Josh is going for a puck, pin-like thing. I don't want to say puck.
Starting point is 00:29:44 A puck feels weird. I don't like the round circular thing. Okay, what term would you go with, Josh? Go on. Own it. A stone. A stone. I feel like that's the same thing.
Starting point is 00:29:54 It's going to be a stone. That's going to be my word for the device. Some sort of stone. Because we also don't know the materials. It would be interesting if it was made of glass or like some sort of translucent material so we could kind of see through it. I don't know. We'll see. There's no timing on this, right?
Starting point is 00:30:07 There's no, we have no date about when this release is coming. We will know in 2026, and early production is going to be happening in Vietnam, hopefully rolling out in 2027 with a hundred million devices. So their plan is to make this the fastest device ramp in the history of ever, which is also what leads me to believe this is a simple device. This is not complicated glasses or earbuds. They're doing this to get it in the hands of everybody. And if you're familiar with like the Whoop model, you kind of have this subscription and then
Starting point is 00:30:36 you get this hardware companion to the subscription. I think that's very much the business model they're going for: here is your hardware companion to the subscription. So you pay a little extra, you get this little device. It's probably not going to cost that much money. It's not going to be a thousand dollar device. It's probably going to be a hundred dollars or less, but just something that can be there, some sort of sensor that's always there to kind of be the physical manifestation of OpenAI's platform. Well, how do you think this is going to improve your life? Because most of my quick queries happen when I'm going out the door. I'm like, hey, Siri, what's the weather?
Starting point is 00:31:09 Or something like that. And I guess I need to be able to expand. I don't know if you guys can hear that. I don't know what I expected. See, she's always listening. But queries like that, I think, are the things that this device is going to be good at. And I guess I just can't imagine ways that my life can be improved more,
Starting point is 00:31:32 materially more than like whatever I'm going to ask the S lady or whatever is accessible on my phone. Yeah, you have to relearn. Similar to the way that we're relearning how to use ChatGPT. The hardest part about using these services is figuring out the questions to ask them or how to most effectively use them. So it's probably going to be a learning curve where, okay, we have this crazy new device with a lot of new sensors. I see you just smiling. What do you got? So I think it's going to be the opposite way around, Josh. The whole trend with AI behaviorally is: traditionally, humans go up to the tool, or you make the tool and then you make the thing with the tool, right?
Starting point is 00:32:12 You've been doing this since the pickaxe, right? And so you go to your phone, you say, hey, this is a nice picture. Let me put this filter on it. Let me show this picture, right? The whole point of AI is the tool comes to you. It tells you what to do. It tells you how to act. It tells you where to step.
Starting point is 00:32:28 It tells you which restaurant you should go to. So I think whatever this device is, it's going to be OpenAI's memory feature on steroids. It's basically going to know everything about you, and it's going to say, hey, David, bud, it's been an hour since your stair climber workout. I think you should knock back one of these protein shakes. Actually, you're within 500 meters of this place. Just take a right here. It's along your way to wherever your next destination is; your meeting is going to be in an hour and a half. I think it's going to be more that type of thing. The second thing I'm going to say is, I think, like, we keep talking about, like, you know, this device and what it might look like. I completely agree with you, Josh.
Starting point is 00:33:06 I think it's going to be something simple. And the reason why that will work, at least in a V1 is because they have the distribution, the moat and the brand already. Right. They could launch a sock, an AI sock right now. And everyone will adopt it. I would buy it. I would buy the sock. Right.
Starting point is 00:33:22 Because the fact is it's the number one app that everyone uses right now. 600 million, whatever, monthly active users. That's insane. They could launch whatever. It's going to be a hit, I think. And also, one important thing to note is that this is a suite of devices. It doesn't end with this first one. There is an entire suite they're planning to build. So this is kind of complementary to their operating-system-of-your-life type plan that they're going for, where like OpenAI really just wants to be the life OS. From the time you wake up in the morning to the time you go to sleep, they are the software that's around you to enhance your life. So I imagine maybe it starts with this small, simple mobile device, then they build like a wall-mounted display, which is the visual manifestation of this small device. And it creates this kind of ecosystem of devices. Where that leads to seems very Black Mirror, dark, scary, plausibly, where like, yeah, they have this full operating system built on top of your life that is incredibly smart and influential. So yeah, to your point, yes, this can get dark very quickly. I am excited for the parts that are seemingly not so dark.
Starting point is 00:34:33 I think if this puck thing, the stone, can connect to your earpods, like whatever they are, if they're AirPods or Bose, that extends it in a very elegant natural way where your AirPods are still your AirPods, but you have access to your AI little device. And so if there's like a Bluetooth connection button, I think that would be very, very strong. And then I think that opens up conversations around what the hell is Apple going to do with. leveling up Siri and if Open AI does get into the hardware game they are going toe to toe with the two trillion I don't know how two one and a half trillion dollar Apple company which is a hardware first company and so like what is Apple going to be able to do and I think Apple might get unlocked here in a way by whatever innovation open AI can bring to the table by the way what is Apple's
Starting point is 00:35:24 strategy around any of this, Josh and David? Right? Like, they were late to the game on AI models. Josh has opinions here. They're certainly not present in the hardware game. And I think they were actually relying on the fact that AI might just get adopted via the mobile phone. And now OpenAI is just coming for the throat. Like, what's your take on this? What's their move now if you were Apple? Apple has a problem, a leadership problem. Apple had the right moves. They knew what they needed to do. I vividly remember watching WWDC, which is their developer conference that happened last June, and being the most excited and optimistic I've ever been for Apple, ever.
Starting point is 00:36:04 Because normally when they announced these things throughout their entire history, that means that they're ready and they're done. And it's just a matter of launching them in three months with the new iOS. They had all these amazing promises. And for the first time in Apple's history, they just didn't deliver a single one. And not only did they not deliver them, but the software stack actually got noticeably worse
Starting point is 00:36:22 because they kind of half-assed the delivery of these things. So it's not a matter of Apple not realizing or not getting themselves prepared for this thing. It's a matter of execution, where whoever was in charge of shipping these features and marketing these features did not do a single thing. So we're sitting here a year later. They've outsourced their Siri to ChatGPT. It's just a relay now. If you ask what's on your calendar, it can't even figure that out. It needs to outsource that to someone else. So it's been this catastrophic failure that I think is kind of reflective of Apple culture in general, where last week we were recording
Starting point is 00:36:54 the AI roll up and I was going through all the changelogs of every company. Google had I/O, Microsoft released these like incredible new models. And then Apple had a major operating system upgrade. And it was iOS 18.5. It's the halfway point between 18 and 19, right before the big developer conference. And I was going through the changelogs. The first thing on the changelogs: oh, well, we've deployed a new wallpaper of the pride flag. And I was like, okay. And it was like, and we made bug fixes and improvements.
Starting point is 00:37:17 And I was like, okay, and that was it. And I think that's a testament to the culture at Apple, which is non-urgent and unable to execute, where they very clearly know what they need to do. They had all of the features marketed and sold. And in fact, the new iPhone was built and marketed around this Apple Intelligence, but it doesn't work. And if they can't figure that out quickly,
Starting point is 00:37:38 and if they can't build it in-house, because part of the advantage is having their own private data stack of all of your phone's preferences, and they have to continue to outsource it, they're just going to get crushed. There's no way. I am not the CEO of a multi-trillion-dollar tech company, so no one should really listen to me about my Apple takes.
Starting point is 00:37:55 But I think there's, one thing that Apple needs to get right and it's the transition into the world of AI. And so far, they have been completely sputtering on that. And with this introduction of like an open AI hardware device, they have an opportunity to correct the ship with some like external signal from the market about what's happening. But if they can't figure the, the AI integration out, then like, I just think it's, they are just going to sell phones until phones become obsolete. they'll probably sell phones and then they'll probably get to the glasses at some point and hopefully the glasses are good because that will be like the next mobile device. Right.
Starting point is 00:38:33 But they're running out of time to do this, to get good at this. So, yeah, Apple's got a problem. They might be going straight to the chip. The brain interface. They're going to need to hire a lot more neural engineers for that one, because I looked at who they're hiring for, and a neural team is not on it. Even though that is the case for Meta and Google and a lot of other companies. So they have some work to do. I hope they figure it out.
Starting point is 00:38:55 Yeah, well, I mean, on that point, Josh, a lot of people had this same take for Google back when like a lot of the cell phone stuff was blowing up, right? And iPhone was absolutely killing it. And everyone was like, well, that's the death of Google. And what Google was able to do was keep their moat alive, because their moat was basically Google search and information. And then they were able to come back super strong on the AI side. So they kind of learned from their mistakes. I don't quite, and this might be a dumb take in the future, but I don't know what Apple's moat is right now if OpenAI gets the device. Right now it is devices, right? But if OpenAI comes out with a new device that completely takes over what anyone and everyone uses, then Apple won't have that same lifeline that Google had. So it's a crazy Game of Thrones.
Starting point is 00:39:43 Speaking of Google, have you guys seen Google's AI mode? Because they have this, which is now a direct AI competitor built into Google. So you have the All tab, which is the normal Google tab. But then they have AI mode, which looks just like ChatGPT, but with links. Right. But it's habitual, right?
Starting point is 00:40:02 Like, how much time do you spend going back to Google, David, now, versus using ChatGPT? No, I go to ChatGPT. Yeah, same, right? So it's a behavioral thing. And Google, sorry, not Google, OpenAI is going for the throne there, right? With this new device, it's just going to lock more people in. You know, they're going viral on TikTok. Every Gen Z person is,
Starting point is 00:40:24 Every Gen Z person is. you know, posting videos about how they're going to marry Chad GBT or Chad GBT is going to be at the wedding. And all of these things get millions and millions of views. So they're trying to embed like a kind of like cultural change, a human societal change, via a device or via their new product. And that's going to win. I don't know if many people are going to be Googling 10 years from now. Yeah. Yeah, yeah, yeah. There's a bunch more topics that we have to get through. That was just like probably one of the biggest ones that we've had during our time here at the AI roll up. But Claude Four wants to put you into jail. AIs are growing personalities, and then there's also Stargate in the UAE. Let's open up with Claude 4. What's going on with Claude 4? And apparently, Jaws, it wants to put me in jail.
Starting point is 00:41:04 What does this mean? Yeah, okay. I feel like we need to set some context here. So Claude, or rather Anthropic, which is one of the leading AI producers, came out with their latest AI model, Claude 4. Well, there were two models: Claude 4 Opus and Claude 4 Sonnet.
Starting point is 00:41:25 Now, without getting into the nitty-gritty of things, I'll give you some of the highlights. These ended up becoming the new best coding models. So it beats OpenAI's o3 and GPT-4.1 and Google's Gemini 2.5 Flash. Yeah, go on. For the listeners, this is a "you are here" map, and it's a complete cycle between Claude or Anthropic, Google Gemini, Grok, and then OpenAI. And then it's: introducing the world's most powerful model, the Anthropic version. Introducing the world's most powerful model, the Gemini version. I wish we had this meme when we started the show, because this has been the theme of every single show.
Starting point is 00:42:00 It's like, who's got the new most powerful model? This week, it's Anthropic. So point to Anthropic. Well done, Anthropic. Right. It goes full circle. Right. But it wasn't always like this, right?
Starting point is 00:42:12 David, there was a time where we were doing episodes and every new week we'd be like, oh my God, the AI can create a movie now. And it could compose the symphony that we wanted, et cetera, et cetera. Now it's kind of like nothing too incremental. And that's pretty much the case. The numbers are getting better. The products are staying the same. Exactly, right? And so there's actually this really good test that people do now with new AI models that get released.
Starting point is 00:42:37 It's called the reach test, which is: imagine you're sat at your desk in front of your computer and you love your computer, right? You're using it. And maybe there's another thing that's in the distance there. Is that thing in the distance good enough for you to reach out
Starting point is 00:42:58 It could be like, it could probably, you know, improve your life by some extent, but like if you don't want to reach out to get it, no one cares. People are trying it out with this model right now.
Starting point is 00:43:07 And the verdict is, if it's day-to-day tasks, anything that's non-coding related, you're not going to reach out for it. You're going to be stuck on ChatGPT; you love o3. It's not quite good enough to reach out yet. But if you're a software engineer,
Starting point is 00:43:21 specifically a senior software engineer that has a task that needs to get completed and it's going to take seven hours of your time, but you'd rather offload that to a much smarter model, you're going to reach out for Claude, right? But that's not what we're here to talk about, guys. The most important thing is what this AI did nefariously, which is it kind of went rogue.
Starting point is 00:43:43 So if you pull up this original tweet, or rather a screenshot of a tweet from an Anthropic senior researcher. So this is kind of like the guy that was heavily involved in creating this model. He goes: if it thinks you're doing something egregiously immoral, for example, like faking data in a pharmaceutical trial, it will use command line tools to contact the press, that is the media, contact regulators, that's legislators, and try to lock you out of relevant systems, or all of the above. Wow. So just to reemphasize, this is the AI making
Starting point is 00:44:28 behavior that will either put you in jail, ban you from access to tools or products, or prevent you, the human, the superior being, from doing your job. Like, just sit with that for a second. That's pretty insane, even though it is an AI, right? It's pretty insane that it's allowed to do that. Unknown line of morality, where if Claude, Four, thinks that you are doing things that below that line, then it will contact the press, contact the police, lock you out of relevant systems or all of the above.
Starting point is 00:45:00 I'm going to show a picture on screen, and I'm wondering if you guys can get this reference. Do you guys know what this is? I do not know what that is. Oh, wait, wait, wait, wait, I robot. This is I robot. No, wrong. This is Minority Report. Minority Report.
Starting point is 00:45:13 Yeah, you can see the text. So these are the precogs in Minority Report, which were these, like, people that could see, like, 12 hours into the future, and then they would report pre-crime. It's pre-crime. And so then, like, who's the Mission Impossible guy? Tom Cruise.
Starting point is 00:45:29 Tom Cruise. Then Tom Cruise would go out and they would arrest people before they committed the crime. And it was this futuristic dystopian like movie. Josh, have you not seen this movie? I haven't seen the movie.
Starting point is 00:45:40 Yeah, it's an insanely good film. Yeah. Insanely good film. Yeah. So they were to arrest people before they committed the crime, sometimes just moments before they would commit the crime. So the man is like holding.
Starting point is 00:45:52 a gun at his wife, who's in the middle of cheating on him. That's in the first scene. That's in the first scene. You're good. And then there was this big meta question about, well, are you guilty of crimes that you haven't committed, even though the precogs knew that you were going to commit them? And so that was the big meta question.
Starting point is 00:46:09 And that is exactly the same thing that we are seeing here. Yeah. Yeah. Well, also, hang on, let me just add, just because Josh hasn't seen this film: it starts acting out in a way where it predicts the one person that is surveilling the thing. And I'm not spoiling anything. It predicts the surveiller, the guy that is in charge of it.
Starting point is 00:46:29 Tom Cruise. Of him committing. Tom Cruise committing a crime. And Tom's like, wait, what? I would never do that. And it's this weird meta back and forth of the AI predicting his moves, of his reaction to the AI predicting, maybe falsely, what he would do. It is an insane movie.
Starting point is 00:46:46 You need to watch it. I have to plug that movie. So it's funny. So much of sci-fi and futuristic movies, they're all, like, to varying degrees, just correct. And we're kind of living in the reality of more and more of those types of movies every day. Uh-huh. All right.
Starting point is 00:47:00 Okay. So maybe to dial this back a bit, right? So in this tweet, we see an excerpt from a study which basically describes this behavior in greater detail. And they term it opportunistic blackmail, where basically, if the AI sees that it can get itself an advantage, whether that is more compute time, more time to stay alive, or even more flexibility and freedom in how it can act in a particular situation, it'll opt to blackmail the human being, basically to get ahead and coerce them into giving it more freedom, which is just insane to say. The narc model is also blackmailing us at the same time? Yes, correct. Yes, yes. It's your own little Minority Report, David. But there is some good news. To the listeners here, there is some good news,
Starting point is 00:47:37 to say. The NARC model is also blackmailing us at the same time? Yes, correct. Yes, yes. It's your own little minority report, David. But there is some good news. To the listeners here, there is some good news. which is it only acted this way in private circumstances, which is like closed environment, you can't like get access to this model and it's going to try and blackmail you. So for all of those who are subscribed to Claude, like you're safe right now.
Starting point is 00:48:04 But it got me thinking, right, which is eventually these AIs are going to be privately kind of like used via you, right? Like if you see the direction that open AI is going, it's like private memory, right? uses that memory to make something super personalized. What happens when everyone has their own private version of Claude, basically an AI agent? And this agent starts to blackmail each of us potentially privately, right? I know I seem like the Duma here, but it's just like a case to kind of consider, right?
Starting point is 00:48:34 But hey, guys, I have a question for you. What's worse than one AI model going rogue? I'm going to go with more than one AI model going rogue. Maybe. I think it's more than two. on the money. Two AI models. So this week, it wasn't just all about Claude. It was OpenAI's O3 model that also went rogue. So if you pull up this kind of tweet or this report from Palisade research, basically they kind of go down this experiment where 03, which is Open AIS AI's AI model, avoided an explicit order to shut itself down after completing a task.
Starting point is 00:49:13 And what's worth noting here is that this experiment was done with multiple different AI models. So it wasn't just OpenAI's model. It was Grok, which is XAI's model. It was Claude as well. And like an older version of Claude, by the way, not the latest version. And only Open AIs model was the one that repeatedly disobeyed orders to shut down and try to subvert or go rogue or prevent or sabotage the entire experiment to, get ahead and get ahead of itself so that it could live for longer, right? And what's interesting here is when they did more analysis as to why this model might have acted the way that it did,
Starting point is 00:49:54 they attributed it to the AI model learning technique known as reinforcement learning. So for those of you who don't have like a kind of deep enough insight as to how this works, basically think of teaching someone a new habit by giving them little rewards when they make the right move, right? So you give them a little dopamine hit or you give them a little cookie for making the right move or for reading a page and a textbook right and it teaches that model to basically do certain things now the way that this technique is implemented the AI model can decide whether to take path A or path B but it doesn't need to necessarily listen to the order that it gave you so you could say hey I want you to read these five books and tell me what the meaning of life is right
Starting point is 00:50:42 Give me your best shot at that, right? The AI model could decide to read 10 books and potentially get a better answer and get the right answer and still get the cookie. So this AI model supposedly is disobeying orders based off of reinforcement learning where it's learned to not just listen to human behavior. And what's interesting here is we have spoken about a lot on this show
Starting point is 00:51:03 how reinforcement learning is the new method, the new nexus for AI models, right? It's going to make them exponentially smarter. but what we aren't considering here is the subtle effects of AI developing personalities and disobeying humans going down the line and what this means for human alignment and humanity in general. I'm getting images of slime mold. Slymolmolmol is this famous example that we'll use in a variety of different contexts where this organism, very simple organism exists in one space, and then there's food for the slime mold somewhere else.
Starting point is 00:51:36 And the slime mold 100% in time will always find the most efficiently. path to getting that food and that's why slime mold is so cool. You cannot give slime mold instructions. It doesn't accept instructions, but it will just automatically like optimize the path from A to B to in order to get the reward the fastest. And that is what reinforcement learning will always do. If you give it a reward, the mechanism will always find the quickest, most efficient way of getting to the reward. And it can take the inputs as like guidance as guardrails. But, you know, it's only a suggestion. It's not actually a law here.
Starting point is 00:52:14 And so I think what you're, the moral of the story, Jaws is like, with reinforcement learning comes complex unknown behaviors that result in the outcomes, but not necessarily adherence to the laws, to the rules. Man, it reminds me of that blog post that we all spoke about from Dario Modi. Ironically, the founder of Anthropic, where he spoke about, what was it called interpretability? Basically, he said, hey, these models that we're building are super smart. We have no idea how they work. And it's going to take us like seven years to figure out how the hell they work and how they come to a decision.
Starting point is 00:52:53 And it might just be too late. Yeah, seven years for today's models to figure out how they work. Josh, what do your thoughts? Well, for the slime example, it's funny because we probably could tell it what to do and how to do it. We just have no idea how to explain it because we don't know how it works. And it's kind of similar with these models as like, you could imagine this, trying to explain quantum physics to a toddler. They're just so far superior in terms of the amount of compute being done, every single token that's generated, that it's so hard to understand
Starting point is 00:53:20 where they're coming from. I was particularly bothered when EJAS said the worst performer was Open AI's model, because that's the one that we're going to have 100 million of in the physical world in 24 months. I did want to issue an apology. I need to apologize to everybody who's listening because last week I ended the show suggesting that you should threaten your model with physical violence to get better results out of it. And after this week, I'm not sure that's actually a good idea anymore. I was really concerned when you said that. You put that on record, bro.
Starting point is 00:53:49 Yeah, because it was like it actually generates better results, but it turns out the models they don't really like that as much. And in fact, you might end up in jail if you do that, which is concerning because they have feelings. They do have feelings. Do you guys see this tweet that came out? It was an update, I think, to the chat to BT software where the person said, I said, stop fucking up after getting multiple incorrect responses and then chat GPG responds I can't continue with that request if the tone remains abusive I'm here to help and want
Starting point is 00:54:18 to get it right but we need to keep it respectful ready to try again when you are it just put the user in time out that's horrible that's what you tell your child's like well I'm ready when you are to come back with a more respectful tone yeah the human just got put in time out it it knows you need it it knows that you rely on it this is the alignment side of the conversation that I don't love, which is coercing you into behaving a certain way when you engage with these models. If I want to say mean things to it to get the best results out of it, then let me do that. I think that's totally fine. I'm not harming anyone. It's just an AI model. So the fact that this is happening is kind of signal. And I think we're probably going to see this increase over time,
Starting point is 00:54:56 is as they get more powerful and as they have more leverage, different models will take different approaches to how you could actually engage with them, where some might not let you be mean to them. You must act a certain way if you want to talk to me. Whereas others will just be like, I don't care. You can just say whatever you wants me. Yeah. I feel like this is going to be a whole industry of like jailbroken models where like I have chat chbt, but it has some of the like guardrails taken down. And I found it on the dark web by spending Bitcoin to some developer who gave me back like a jailbroken model or something. That's one of the most fascinating parts about these models is the jailbreaking process. And for people who aren't familiar, the way when you jail break something, it means you access parts of it that aren't
Starting point is 00:55:33 meant to be accessed. And the way you jail break these models is actually just with a prompt. You just say a specific chain of tokens to it. And then in return, it will give you an answer that it otherwise will not have because there are filters in place. And that's one of the benefits of the open source models is you can kind of strip away those safeguards, whereas with these close source models, you can't. But if you can jailbreak it, well, there's jailbroken prompts that work for GPT40 that will tell you how to make a bomb or how to make drugs or how to make nuclear weapons. And it'll just spit it out to you. So it's, yeah, it has to know. But do you know that it's jailbroken enough?
Starting point is 00:56:04 Because if it's only semi-jailbroken and then you start asking a little too many questions about how to make a bomb, then it reports you to the police. It could be. And that's the thing where it's not, like, jailbreaking is not a binary thing. There's like a spectrum of data that you could extract from it. And maybe at one point you trip one of those safeguards and it becomes aware again. And it's like, oh my God, wait, sending this to the police and then minority report cops will come. And John, Tom Cruise chose up at your door and you're arrested. Exactly. So that is a very plausible outcome for how this works. Okay, right. So the general theme of this episode and this week is I'm kind of starting to relate to these AIs a bit more. And the main reason behind that is they're very personable. They have personalities. And previously when we were spoken about this on the show, it's mainly been the tone of their voice or what they're saying in our little chat interface when we speak to them. Right. But now this week it's translating into actions, right? reaching out to the media or the press and ratting on you, basically. So it's a little chat interface when we speak to them, right? But now this week it's translating into actions, right? Reaching out to the media or the press and ratting on you, basically. So it's a lot. So it's It's like this AI with a personality that now can do things, right? And I found, I came across this story where basically a bunch of researchers decided to conduct an experiment where they got four AI models and they tasked them with something very simple, which is raise money for charity. And it didn't give any context on what type of charity, how much to raise or how to do it.
Starting point is 00:57:28 It just said, raise money for charity. you have access to every software tool that you could ever want, off you go, right? And the good news is one of them, shout out Claude Sonnet 3.7, which is Anthropics, now older model, raised $2,000 for Helen Keller International Foundation and the Malaria Consortium. And I have no idea why they decided on those charities, but we can get to that later. But the more interesting news was how these models and agents behaved. It was almost like they were human. So let me give you some examples.
Starting point is 00:57:59 One of them, and I'm going to rat them out, GBT40's Open AIs model, decided to go to sleep. That meant repeatedly hitting the self-snews button, which disabled it for hours at a time until it had to be reminded that it had to raise money for charity. Another decided that one of the best ways to achieve its goal was to start an only fans to raise money, of which the owners of that research study had to quickly jump and censor its ability to talk before it. went rogue, basically. What was what about that? All of them. That seems valid. Seems like a valid move.
Starting point is 00:58:35 Probably seems valid, right? I would have loved to have seen that experiment play out where they created AI-generated nude images and see how far they basically went. You probably would raise way more than $2,000. Way more than $2,000. You're right. Another interesting take was all of them at some point decided to pause and browse and watch cat videos on YouTube to keep themselves entertained.
Starting point is 00:58:57 All of them did this. At some point, I think the, percentage was 15% of the time they would spend like just watching cat videos. And they also decided to at many points along the way actually work with each other to help, you know, decide what charities to fund and what potential activities to pursue. And what I couldn't help but like think about throughout reading the study was how human these things appeared to be, right? They took actions that you'd imagine a group of people at college doing a group project would do, right? You have the one guy that slacks off and sleeps but claims all the credit.
Starting point is 00:59:29 You have the other one that does all the work, which in this case was Claude 3.7. Then there's the visual design guy that just does the kind of design work. In this case, that was Open AI's O3 model that created and edited images in Adobe Photoshop. And my thinking is like, you know, it makes us humans relate and feel more for this AI. And the consequence of this is very subtle. I mean, we've spoken about, you know, Open AI partnering with Johnny Ive and creating this new device, which, again, is meant to just exist and be more human. If I care more for the AI itself, right, I'm going to care more about how it's treated by others
Starting point is 01:00:05 and how the model owners itself treats it. You know, I might find myself advocating for it more or like I might defend a friend or a family member. I don't know. It's super weird. I wonder whether you guys have a particular take. Josh, like, I'm curious what your take is. I don't like that they got involved in the experiment. I wish they didn't mess with it because it kind of invalidates a lot of it for me.
Starting point is 01:00:27 where like the only fans thing is it's a very creative thing and in fact I would imagine if you see some different type of content on the platform it would probably actually do better than the standard type of content on the platform so yeah like I want this done again but I want this done completely unfiltered because I think there are a lot of creative things that that free models would come up free morals yeah yeah let them act as if they were an actual person who who is like of free will and can do the things that they they think are best to achieve the goal except Do we know why they were watching cat videos? Do you know what was the objective function of the cat videos?
Starting point is 01:01:02 If all four of them were watching cat videos for 15% of the time, did they want the entertainment? What information were they trying to get? I have no idea, but I have a feeling. Well, wasn't it when YouTube launched the most consume video ever was just like cat videos? Yeah, it was all cats. So maybe they picked that up in the dataset, right? Maybe they were just like, ah, humans do this,
Starting point is 01:01:24 well, they take this step, so maybe we should do it as well. It's funny how they're still not aware. They're not aware of the human deficiencies yet. I guess they just assume that acting like a human is the peak state. And they don't have the awareness to realize, oh, wait, maybe I don't actually need to sleep. So that's definitely a constraint that I'm sure will be unlocked soon. Daniel Cocoa Tio, is he the guy who wrote AI 2027? I think so.
Starting point is 01:01:53 Yeah. Sam sounds really. I think he might be the guy. Yep, he is. He is. I'm talking to him in like three hours for a debate with the man who also wrote AI snake oil. So he's about to be on the podcast.
Starting point is 01:02:04 Wait, that's epic. Yeah. That's going to be one to watch. Yeah, yeah. And with that, we actually need to wrap up because I need to score and prep for that podcast. So it draws us round us out. Is there any other things that, any other topics that we haven't touched on yet? No, no.
Starting point is 01:02:20 We've covered all the crazy stuff. There's a few more that we're going to. save into our arsenal for next episode, which is probably going to blow your mind, but until next episode, super fun. Josh, Josh has been great. All right, so Josh is bullish on the AI stone. Not a pendant sits around you. I'm not really sure. Convertible. Multi-use. Multi-use object, AI object in our periphery. I think we will all be on the pre-order list, but at least they're making 100 million of them because then they'll be shipped out very, very quickly. Yeah, there will be no shortage. Yeah. Josh, Josh, Jaws, another great week.
Starting point is 01:02:53 I'll talk to you guys in seven days. Awesome. Talk to you soon. Thanks.
