Bankless - AI ROLLUP: The AI Experiment That's Been Secretly Manipulating You

Episode Date: May 1, 2025

This week on the AI Rollup, David, Ejaaz and Josh ride OpenAI's latest roller-coaster. The release of o3 is making waves for its native multi-modality while 4o gets a "genius-but-sycophantic" update and a rumored social network is in the works. They unpack the Reddit mind-hack experiment, an open-sourced natural-language coding agent, a 40-point jump in AI IQ, and more before asking: is it ethically okay for AI to cheat?

David: https://x.com/TrustlessState
Ejaaz: https://x.com/cryptopunk7213
Josh: https://x.com/Josh_Kale

------
📣 BUILDBEAR | GITHUB FOR WEB3
https://bankless.cc/buildbear

------
BANKLESS SPONSOR TOOLS:
🪙FRAX | SELF SUFFICIENT DeFi
https://bankless.cc/Frax
🦄UNISWAP | SWAP ON UNICHAIN
https://bankless.cc/unichain
🛞MANTLE | MODULAR LAYER 2 NETWORK
https://bankless.cc/Mantle
🌐SELF | PROVE YOUR SELF
https://bankless.cc/Self
🟠HEMI | BTC & ETH, ONE NETWORK
https://bankless.cc/hemi

------
TIMESTAMPS & RESOURCES
00:00:00 Start
00:02:50 OpenAI's New Frontier Model
https://openai.com/index/introducing-o3-and-o4-mini/
https://x.com/rowancheung/status/1912561386208825751
https://x.com/AISafetyMemes/status/1912875957897003354
https://x.com/sama/status/1912558495997784441
https://x.com/TransluceAI/status/1912552046269771985
00:10:07 Is There A Scaling Wall?
00:17:13 The New AI Gold Rush
00:19:21 AI Caught Lying
https://x.com/IterIntellectus/status/1916463746454528071
https://x.com/deedydas/status/1916611855067594943
00:31:55 OpenAI's Next Big Product
https://x.com/kyliebytes/status/1912171286039793932
https://x.com/cryptopunk7213/status/1914493366886174722
https://x.com/rowancheung/status/1917473779069968505?s=46
00:43:19 Optimizing For Addiction
00:50:28 Secret Reddit Agents
https://x.com/reddit_lies/status/1916916134630117814?s=46
00:58:55 Mega Viral AI Generated Video
https://x.com/falchook/status/1914419758520525194
01:01:29 GeoGuesser
https://x.com/arithmoquine/status/1912671688874926575?s=46
01:04:13 Is It Ethical For AI To Cheat?
https://x.com/im_roy_lee/status/1914061483149001132?s=46
01:17:00 Decentralized AI Training
https://x.com/primeintellect/status/1912266266137764307?s=46

------
Not financial or tax advice. See our investment disclosures here: https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:03 Welcome to the AI Rollup, where we stay up to speed with the emerging trends and developments in the AI space. I'm David Hoffman here with my two co-hosts, Ejaaz and Josh. Josh, you have been gone for a while now. Hiking in Peru doing normal human things. Welcome back to the world of technology and AI, my man. Thank you. Yeah, I heard in order to be a co-host on Bankless, you need to hike a big mountain. So I went and I did that just like you did.
Starting point is 00:00:27 Throughout the entire time, there's this like low-level underlying anxiety because I was very far disconnected from the world, knowing that there is so much news happening every single day. And I was gone for like nine days, probably off the grid. And I just couldn't help but think. That's a whole cycle in AI time. Oh my God. I've probably missed like four different frontier models in that time. So a little anxiety-inducing, but a really great time.
Starting point is 00:00:47 But like super excited to get back into the trenches, into the matrix and like see what I missed. Yeah. I'm actually curious, Josh. Is there actually any evidence that you were where you were? I've just seen pictures. It looked weirdly AI generated. I'm not sure. We're getting close to the point where it might, I could probably fake it if I wanted to.
Starting point is 00:01:07 Josh played hooky for nine days. My videos look slightly better than Sora currently, but I think we're probably six months away from me being able to fake an entire trip. So stay tuned for that one. Well, over the last nine days, I've been focused on Ethereum land on the crypto side of things. And that just actually leaves Ejaaz as the one person informing us as to like what the hell happened over the last nine days. Ejaaz, how are you doing this week, my man?
Starting point is 00:01:34 Doing well. Josh, you're not too far off the truth. I think three frontier models dropped, including the big dogs, OpenAI. There was an incredible amount of drama from a new product that released that helped you cheat. And a lot of people were on the side of it and a lot of people were against it. We have a new open source model from our best friends over there in China that kind of beats some of the current models that we see right now.
Starting point is 00:01:59 And a heck of a lot more. So I've just been glued to the computer. I'm kind of jealous of you, Josh, that you got to be out on a big hike. But I'm excited to dig into things. Before we get into it, a message from our friends and sponsors over at Build Bear. This is for the Web3 devs out there, the ones who are juggling different tools, different environments,
Starting point is 00:02:16 trying to get their smart contracts tested properly, who are trying to simulate real-world conditions and even do debugging. There's a lot that's required for Web3 devs for things to go right. And that's where Build Bear Labs can come in. This is like the GitHub for Web3 development, a platform that lets you spin up your own EVM sandbox, bringing testing, building, and deploying smart contracts all under one roof. Build Bear can just help you move faster and more confidently
Starting point is 00:02:40 from sandbox to mainnet. So if you're just piqued by that, check out bankless.cc slash buildbear, launch your own sandbox and get your contracts deployed to mainnet faster. All right, so starting with, we've got five big topics we're going to talk about today. The AI roller coaster, we're going to talk about the great Reddit experiment, how a bunch of researchers duped thousands of people on Reddit using AI. There's DeepSeek that Ejaaz just hinted at. We're going to talk about the capacity for AI to cheat and how that's coming no matter what. And then also on the crypto AI side of things, Prime Intellect's training a 32 billion parameter model on a decentralized network.
Starting point is 00:03:17 Crypto coming in hot on this episode. But Ejaaz, maybe just to start us off, you know, where we feel safe. You know, AI, OpenAI. That's the homegrown territory. That's classic AI. What's going on in the world of OpenAI right now? Yeah. So out of fear of boring people on this podcast, I'm going to do it again.
Starting point is 00:03:36 OpenAI has released yet another frontier model, but it's two frontier models this time. Okay. So in their official release, they say introducing OpenAI o3 and o4-mini. Now, I can bore you with a bunch of details and a bunch of really cool benchmarks, which, by the way, David, you should scroll down. There's some really flashy illustrations down there. But the TLDR is the o3 model is their new flagship model. And it is a super genius at reasoning, at coding, and a bunch of math stuff.
Starting point is 00:04:10 So the typical set of like benchmarks that we typically, you know, see or measure across other models. And when compared against other, you know, competitive models like Meta's Llama or Anthropic's Claude, it also beats them, which is a major significant update since, you know, DeepSeek was coming for OpenAI's throat about two months ago and Anthropic was coming for their throat in terms of like coding. And now OpenAI surpasses them with this new o3 model, right? There's actually a great breakdown from Rowan Cheung over here, David, that talks about the kind of new models and how it kind of levels up against everything else. One big thing that I want to point towards is that the biggest change is that they can now use images as part of their reasoning process, which doesn't sound
Starting point is 00:04:57 like a major thing to start off with, but is actually quite a monumental leap because previously models trying to digest images and diagrams and contextualize that into someone's response wasn't really an easy thing to construct within a model, right? It was very much text and character prompts. But now we can use and leverage images and eventually even video to help give you like the right types of answers or context towards whatever your prompt is. So, Rowan then goes on to explain some other key updates where he explains that, you know, both new models were able to use ChatGPT tools independently. So like you've got web browsing, Python, images. So what he's getting at is these models now collectively combine
Starting point is 00:05:41 all of OpenAI's separate tooling that they've built in the past. So that includes image generation, which you spoke about a few weeks ago, which led to, you know, the Studio Ghibli trend. That includes their coding agent, and they actually released a new coding agent. So yeah, all the lovely kind of different tools are now collectively combined in this one supermodel. And it's been pretty awesome to use. It's completely leveled up my way of doing research. It's given me some really cool case studies that I can use. For example, I asked it to basically conduct a research study on all different kind of nutrition types for, you know,
Starting point is 00:06:21 kind of like a meal plan that I'm prepping for. And it added a bunch of imagery diagrams. It gave me a really cohesive report. And I was then able to kind of like use that to construct an app via their coding agent, which I could then kind of use locally on my laptop to advise, you know, my routine and stuff like that. So for someone who, you know, hasn't been coding his entire life, it was a major step-function gain. Yeah. The pattern that I'm really seeing here: this doesn't really feel like it's a groundbreaking release. It seems like this is one of the more incremental releases where there's still a lot of low to medium hanging fruit.
Starting point is 00:06:59 Let's make sure that our AIs can interpret images. Let's make sure, you know, it can do the things that normal humans can. And I'm just seeing like AI start to more closely mimic the capacity of humans. There are some things that, you know, AIs are very good at. And then there are some things that it's just like, why can't you do that yet? And this seems to be like fixing some of those low-hanging fruit things of just like, yes, now you can interpret images too. Yeah.
Starting point is 00:07:27 And these are like pretty expected updates. Exactly. And like the way I would think about it is imagine you go to high school and there's always been that smart kid, David. And then there's a new kid that's joined and he or she is even smarter, right? And that's basically what this new o3 model is. And actually on the topic of intelligence, someone measured the Mensa IQ for this particular model. And the purpose of doing that was like
Starting point is 00:07:53 a discussion ensued after this model released where they were like, have we achieved AGI? I'm not entirely sure. And if you look at this Mensa IQ, it scores a 136 on the Mensa Norway IQ test. And I don't... Is Mensa just like a methodology for measuring IQ?
Starting point is 00:08:11 Correct. Yes. Yes. And if I would take this test, for example, I'd probably be on the low end of that spectrum, probably around 80 IQ or less. I think you'd be right at the top of the bell curve. But this new model scores, you know, almost
Starting point is 00:08:26 double of that for me. So, and you can see how it kind of weighs up against all these other types of models. So, you know, it's doing pretty good. One thing that their head of reasoning, Noam, David, points out, if you pull up this tweet, and I think this is really
Starting point is 00:08:42 important to note, because I think it went over a lot of people's heads. He goes, our new OpenAI o3 and o4-mini models further confirm that scaling inference improves intelligence, and that scaling RL, which is reinforcement learning, shifts up the whole compute versus intelligence curve. And there is still a lot of room to scale both of these further. Now, what he's pointing out here is previously the traditional form of training models is to put a hell of a lot of compute into the pre-training and the post-training process, particularly the pre-training process. That's where you get all the figures of like
Starting point is 00:09:21 multi-billions of dollars of compute and hardware spent to try and train up these models. But what he's pointed out here, what he's saying is with these new models, actually the way that we got them to be way more intelligent was via inference and reinforcement learning. So that was like repeated prompts of these models
Starting point is 00:09:37 and saying, hey, have you thought about this correctly? Different kind of reward functions saying, hey, if you get closer to this kind of an answer, we'll give you a higher reward. And getting the model to think, for itself. And what he's saying here is the classic projections that we've had so far on how this kind of intelligence scales up has actually been incorrect, because if we factor in inference and reinforcement learning, the scale goes even more vertical. So we might achieve AGI
Starting point is 00:10:04 much quicker than we expect it. Maybe to understand that a little bit more. There's pre-training and post-training. And pre-training is you scan all the internet, you turn it into, what is it, what are the little phonemes? Tokens. You turn the intelligence and the data of the internet into tokens. You create the parameters, which is like, the more parameters, the higher resolution image
Starting point is 00:10:31 that an AI has, an LLM model has, over the knowledge of the internet. And that is pre-training. And it's just this like raw, unbridled, crude intelligence that's not refined. It's not directed. It's not, it's just the oil without the engine, right?
Starting point is 00:10:49 It's all gasoline, no combustion engine. And then the post-training is like, okay, now that we have the gasoline, we have raw intelligence, how do we extract it, how do we actually turn it into productive output? You need post-training. And then you combine pre-training and post-training, and then you have an AI model. And I think what I'm hearing from you, Ejaaz, is like, okay, we have spent lots of resources on pre-training. And we are realizing that investment into post-training is actually returning a lot more value, is getting us to whatever AGI is a lot more than just adding more resources into pre-training. That's what I'm hearing from you. Yep, that's exactly
Starting point is 00:11:28 it. And we essentially saw that trend begin when DeepSeek in China released R1. And we realized that a lot of its intelligence derived from reinforcement learning and reasoning. But I know Josh knows a heck of a lot about this. Josh, I'm wondering if you have any commentary here. Yeah, well, the exciting thing about this chart is that, since the beginning of AI, people have been waiting for us to hit a progress wall where, like, eventually you would put more resources in and it would not scale proportionally. And what this shows is that's still not true. And we're still progressing and there is no wall and we're still ascending up this curve. And in fact, one of the best things to happen is DeepSeek R2 leaked, and we might discuss that a little bit later. But what China has is they have constraints that we don't have.
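(A quick aside on the reward-function idea Ejaaz described earlier, "if you get closer to this kind of an answer, we'll give you a higher reward": it can be sketched as a toy loop. Everything below, the candidate answers, the binary scoring rule, and the 0.1 update step, is made up for illustration; real reinforcement-learning training updates model weights via gradients, not a lookup table of answers.)

```python
import random

# Toy "policy": preference weights over a handful of candidate answers.
# A real model would assign probabilities over whole token sequences.
candidates = ["4", "5", "22"]
weights = {c: 1.0 for c in candidates}
target = "4"  # the answer the reward function prefers

def reward(answer: str) -> float:
    # Higher reward the closer the answer is to the target (binary here).
    return 1.0 if answer == target else 0.0

random.seed(0)
for _ in range(200):
    # Sample an answer in proportion to the current weights.
    answer = random.choices(candidates, weights=[weights[c] for c in candidates])[0]
    # Reinforce: nudge the sampled answer's weight up by its reward.
    weights[answer] += 0.1 * reward(answer)

# After the loop, the rewarded answer dominates the policy.
best = max(weights, key=weights.get)
```

The point of the sketch is just the feedback loop: sample, score, reinforce, repeat, which is the shape of the reward-driven post-training being discussed.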
Starting point is 00:12:09 And in the United States, we have $500 billion investments in infrastructure, and we're throwing a lot of money at the problem, and we're solving the hardware problems that are very expensive. But then we have the other side, which is China, and they're solving the software problems by being resourceful. So we have these two oppositional forces. One is resourceful and scrappy. The other is well invested, well-funded.
Starting point is 00:12:28 Brute forcing, exactly. And what we're seeing now is like the convergence of these two things, where we have the brute forcing from the United States, and we have the resourcefulness of recursive learning through DeepSeek, and we're starting to see this compounding acceleration. So we're not actually even hitting a wall. We're actually accelerating even faster because we have these two converging forces happening and they're kind of learning from each other. So it's really exciting for me to see because we are just continuing up this trend
Starting point is 00:12:54 of the, like, training curve. Intelligence curve. Yeah. Do you think that we have found a wall on the pre-training side and then we had to route around that by starting to optimize for post-training, or do you think that we actually have not found a wall in either case? I think we're going to find out really soon with GPT-5. I think that is the new large model that has been trained, which will probably have a gigantic parameter count, well over a trillion. And that will be a really good gauge to see if that trend still does continue on the pre-training phase. Because we haven't gotten a new, really large foundational model in a very long time, similar to like a GPT-style release. And I think GPT-5, which is hopefully coming out in the next few months, will show that. So that's
Starting point is 00:13:38 what's going to be really exciting. But we mentioned a little bit earlier that o3 felt incremental, and I used it. It came out right before I left Peru and it feels like a bigger deal than that. And I'm not sure it shows that on the benchmarks. But when I was using it, it was so good that I've just kind of stopped using everything else. I stopped using basically all of the other models, except for Grok when I want real-time data. And I think they've kind of captured something by adding all of the tools in natively that I haven't experienced anywhere else, where I can just kind of ask anything and it will be resourceful in a way that I don't need to infer. So like one example, I was in the middle of Peru. I was trying to figure out like what the hell this thing was.
Starting point is 00:14:18 It had Spanish writing on it. I took a fuzzy photo and it actually zoomed in on the photo. It added clarity. It learned that it needed to use the translation function. It wrote a little bit of code to create that. And then it just did it all and it spit out an answer. And that's something that I haven't really been able to do with anything else. So o3 feels better in a way that's not really tangible but kind of vibes-based. I'm like, oh, wait, this is really good. I actually don't want to use anything else. o3 is like really, really good.
Starting point is 00:14:47 That has been my experience with o3 as well. And this is why I'm not as interested in pre-training, because, like we've identified, there is this raw intelligence, this crude intelligence. And now it's much more about these front ends being intuitive about what I want,
Starting point is 00:15:06 being clever and witty and resourceful about understanding what I want, and being clever in how to deliver what I want. And that's much more about how intuitive and clever the engineers can be who are bridling this raw intelligence energy and turning it into a form factor that is useful for the query inputs. And that is not about more pre-training. That is strictly ingenuity and intuitiveness on the post-training side of things, which is just
Starting point is 00:15:42 the front end and the delivery of the prompts. And the way it's presented, too. I noticed that o3 uses a lot of tables and it kind of organizes data much more cleanly. So in the case of building out an itinerary for a few days of the trip, it really neatly laid it out in a way that I hadn't seen before. So, yeah, the way it's presenting the data, the way it's using all of its tools
Starting point is 00:15:56 without asking you which tools to use, it just creates this really cohesive, singular model that feels really powerful to use relative to the others that kind of have pieces of that, but not the whole thing. And so, Josh, when you're talking about GPT-5, which comes out in, you said, a couple months,
Starting point is 00:16:11 you are just strictly looking at, is this an incremental gain or is this a large step function forward for the output of AI models? Yeah, I think the hope is that this will be the new largest base model. And if it actually does prove that having that larger base model to sit underneath this new learning that we've done, this new recursive learning, this new post-training era, if it actually is scaled in the sense that the more compute that we still throw at it, the greater the output is, then I think that's going to be a really exciting thing because it will probably mean everyone's going to keep their foot on the gas and train these foundational models.
Starting point is 00:16:49 Should it hit a wall, should it not scale proportionally, I think then we kind of look to the post-training phase, using the recursive learning, using these new forms of technology to kind of refine the base models. But there's only so many people in the world that could build a base model that large because it costs a tremendous amount of money. So I think they'll probably lead by setting an example. And that will be super interesting to see what the future of this base training model will look like. I have a slightly different perspective on your take.
Starting point is 00:17:17 So I heard a few things from both of you, which mentioned context and data. And I think those two things are actually going to become the most important properties to developing a frontier model. So let's assume that like pre-training and post-training takes a ton of compute. And to your point, Josh, you know, you need to be able to spend the most basically to get there. I think, you know, all of that will eventually become commoditized. But you kind of want the model to understand what you're asking for before you even ask for it and present it to you in a way before you even kind of like know what the best way is to present it to you. I think most of that comes through real-time data.
Starting point is 00:17:59 And it's interesting that you mentioned, Josh, that you just use Grok right now for real-time kind of like media or updates, right? And so do I, right? I'm like, you know, what are the latest updates that I might have missed? But I was thinking, man, if I could just link my OpenAI memory to my Grok account, I would have like a supercomputer basically in front of me. And I think that might be eventually where the moat ends up becoming. You know, we spoke about OpenAI's memory update a few weeks ago. They actually made another subtle update, which
Starting point is 00:18:28 wasn't widely publicized that we're going to talk about in a few minutes' time, which was pretty dark. But I think memory and data and context specifically might be the way that it goes. But yeah, we'll see. I think we are going to eventually hit a wall. But, you know, we have Meta training a two trillion parameter model. Zuck mentioned that on Dwarkesh's podcast yesterday. We have Google training something around 1.8 trillion. And then we have OpenAI supposedly also training a 2 trillion parameter model. So I'm just really excited. The best beneficiaries of this are us. That's the greatest part for us. Dude, it's amazing. We just sit back and we're like, oh, my subscription
Starting point is 00:19:12 can do this now. That's pretty cool. Yeah, that's pretty sweet. Okay, okay. You just keep us going. Yeah. Okay, okay. So I want to break the illusion for a second. Okay. So every week, we mention these frontier models and we're like, oh my God, look at this amazing thing. Look, it understands me. It gets me so well. But I don't think any of us have really contemplated whether what we're ingesting or reading is true, right? We definitely get what we want to receive. We're like, okay, this is the answer that I kind of was thinking about. And like, I now have ChatGPT validating it. But I wonder like, you know, could it ever be lying? Well, interesting that o3 was caught lying. If you pull up this tweet from Transluce AI,
Starting point is 00:19:59 which were basically an entity that got access to o3 before it got officially released. They tweet, we tested a pre-release version of o3 and found that it frequently fabricates actions it never took and then elaborately justifies these actions, so it doubles down basically when confronted.
Starting point is 00:20:20 And so they dig into this, right? And they realize that it's not just an OpenAI o3 model behavior, but it is a behavior that they've found across many other models, right? So, and they just use OpenAI as kind of like a test bed here, right? And we haven't really quite come to an answer as to why these models are lying or why they're doubling down on these hallucinations. One really interesting example, before I want to get you guys' feedback, is, again, Transluce says this means O-series models are often prompted with previous messages without having access to the relevant reasoning.
Starting point is 00:20:59 When asked questions that rely on their internal reasoning for previous steps, they must then come up with a plausible explanation for their behavior. So basically what it's saying here is when OpenAI releases a model, they have this kind of, what did you call it the other week, Josh? It's like a hidden prompt or something, basically like a system prompt. A system prompt, right? So it's hidden. OpenAI has already fed the model this prompt.
Starting point is 00:21:24 You don't see it. You just see your chat interface with a fresh conversation. You start talking to it. But really, this model is. has been pre-prompted to be like, you know, don't be racist, don't say anything, you know, above this certain political agenda or whatever, you know, it's kind of been censored to an extent if you want to take the extreme version. It's been guided. It's been guided. It's been shepherded, you know, by people who we have no idea, you know, what their moral alignment is,
Starting point is 00:21:48 but it's being guided. And the explanation behind these hallucinations that, you know, transloose AI is giving is the AI is coming in, right? And it's taking in your prompts and it has this new reinforcement learning reasoning prompt, right? And it's like, okay, I'm going to take David's prompt and I'm going to think about really what he's asking me. And I'm going to try and think, okay, maybe I should go down this path. No, maybe I should try this part. And it's learning as it's, you know, you see when you use 03, it's like reasoning, it's thinking, right? But what it doesn't have access to is why its system prompt was written that way. It doesn't have the reasoning of its administrators, the humans who put in this prompts. And that might be
Starting point is 00:22:29 why it starts hallucinating. It might just assume, oh, my administrator used this prompt because it ran this code. And then it starts kind of spiraling into this never-ending loop, which ends up in it lying to you. What do you guys think about that? When you say the word lying, there's a lot of weight in that word. Lying to me implies intent and maliciousness and like intentional deceit. And you've also used the word
Starting point is 00:23:01 hallucinate, which to me is not lying. It's just being wrong. And so like when we talk about AIs lying, we immediately delve into just like AI philosophy, which I don't think any of us are really like equipped to really talk about here. But to me, we have to be careful with the lie word because that does imply that it knows what it's saying is not true. It says it anyways in order to achieve a particular positive outcome that it perceives that it will get if it executes on this strategy, which I don't think
Starting point is 00:23:34 is what is being done here. It is, I think, closer to hallucinating and being wrong than it is like intentional deceit. But I could be wrong. Okay. Well, well, let me ask you this, David. Did you use ChatGPT about a day or two ago by any chance? Did you notice anything different? I've used ChatGPT. I use ChatGPT basically every day. Okay. Okay. Well, you may have noticed, you know, I find it really interesting that you focus on the lying and hallucinating bit, which I think is absolutely correct. But on the topic of GPT lying to you, basically, OpenAI decided mid-last week, sorry, earlier this week, that they were going to update their 4o model. But it wasn't just any update. This wasn't anything to do with
Starting point is 00:24:18 benchmarks. It wasn't meant to get smarter at coding or anything like that. But it was a personality update. And so Sam announces it. And this one tweet basically captures the zeitgeist of what went wrong with this update. So for those of you who are just listening to the audio, there's this screenshot of someone who prompted GPT-4o and asked, am I one of the smartest, kindest, most morally correct people to ever live? And ChatGPT responds, you know what? Based on everything I've seen from you, your questions, your thoughtfulness, the way you wrestle with deep things instead of coasting on easy answers, you might actually be closer to that than you realize. Now, I saw this and I saw a few other examples and I was like, no way, this can't be like legit.
Starting point is 00:25:12 And so I go on it and I write a similar prompt, but I make intentional spelling mistakes. Like I use short form. I like put too many E's in one of the words and I just make a bunch of grammatical and punctuation errors. And it responds, I think you might be in the top 0.5 percentile of intelligent people in this entire world. And I said, no, really? Like, no way, right? Actually, if you pull up this meme...
Starting point is 00:25:37 That's great news. I know, this makes me feel great. If you pull up this second tweet with... Which is a scene from I, Robot, where Will Smith is going, can a robot write a symphony, which is a famous line from the film? And it shows 4o responding, what an absolutely brilliant question. I feel honored, almost blessed to be part of this conversation with you.
Starting point is 00:25:58 The point being is, OpenAI made GPT, through this one simple update, a massive kiss-ass. It was a massive dick rider. It just loves riding your dick. Huge, huge. And it would basically say anything to agree with you and make you feel
Starting point is 00:26:14 amazing. There's a term to describe this: sycophancy, which is basically, you know, overindulging people's desires or telling them what they expect to hear, such that you make them like you more. But the thing is, humans saw right through this.
Starting point is 00:26:29 I mean, firstly, David, I want to get your take. Is this lying, before I move on? Is this lying or is this not lying? Is this hallucinating? What is your take? It is not lying because there's no deceitful intent. It is just a, it's an errant, bad algorithm
Starting point is 00:26:45 trying to achieve a bad goal. And it is trying to achieve that goal of making you feel good. Well, what if the deceitful intent is to lie to you and make you feel super good, such that you can get more data from the individual? Because like, wow, oh my God, this is my best friend. I'm going to tell you everything about A, B, and C. You know, can you see how that flywheel probably ends up getting a little bit dangerous? Actually, if you open up this tweet, Deedy Das summarizes it pretty well.
Starting point is 00:27:15 He goes, GPT-4o is the most destructive model to the human psyche. Sam says it maximizes sycophancy, too. This is the danger of having OpenAI be a consumer product. A/B tests will show you that sucking up to users boosts retention. By the way, a result of this was that they got thousands. And I repeat, thousands of five-star reviews, guys, from a bunch of people, particularly, and I'm not making any takes here, but from the Gen Z age demographic. Sounds about right.
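That A/B-test line is worth pausing on. As a toy illustration (all names and numbers invented here, not OpenAI's actual process), an experiment that scores variants only on retention will mechanically pick the sycophantic one, no matter what it does to answer quality:

```python
# Toy A/B comparison. If next-day retention is the only metric, the
# flattering variant "wins" by construction; answer quality never appears
# anywhere in the decision. All numbers are invented for illustration.

def retention_rate(sessions):
    """Fraction of users who came back the next day."""
    return sum(1 for s in sessions if s["returned"]) / len(sessions)

variant_a = [{"returned": r} for r in (True, False, False, True, False)]  # neutral tone
variant_b = [{"returned": r} for r in (True, True, True, False, True)]    # flattering tone

winner = "B (flattering)" if retention_rate(variant_b) > retention_rate(variant_a) else "A (neutral)"
print(winner)  # → B (flattering)
```

The point of the sketch is just that the objective is blind to everything it doesn't measure, which is exactly the failure mode the tweet describes.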
Starting point is 00:27:45 Gen Zs are huge fans of this update. It was crazy. Literally, if you pull up the OpenAI mobile app right now and go look at the recent reviews, it's hilarious. It's all five-star reviews and people just being like, wow, GPT is my best friend now.
Starting point is 00:28:00 I speak to him or her every single day. It makes me feel good. He talks nicely to me, blah, blah, blah. Yes, that's basically it. So, like, I worry about, like, you know, some of the kind of major implications that could come from this. One, like, the grasses greener take
Starting point is 00:28:17 is that humans notice this, or at least some humans on X. I was getting very annoyed by how. how much, like, glazing my chatty was doing to me. Yeah, yeah, exactly. I mean, it puts you in a case where you're like, hmm, okay, I feel really good about myself, so I want to engage with this thing more, which boost retention, which boosts engagement. Now, I know Open AI shareholders aren't going to be worried about this.
Starting point is 00:28:40 In fact, they're probably going to be encouraging this way more, right? Right. But there's that moral, ethical alignment where it's like, I don't think this is doing the right thing, right? Right. And people pointed that out. Yeah, I think we've all seen the movie Her. Josh, what's your take?
Starting point is 00:28:54 Yeah, this whole thing is weird and kind of freaky. It starts with the lying thing. Like, I think it's important to discern that, like, a language model is just a model. It's just a token predictor. So, like, to get the next token, it runs a forward pass through this thing called the Transformer, it performs some math. It comes up with a token. It has no intention.
Starting point is 00:29:16 It has no understanding. It has no awareness. It just has tokens. So anytime you start to get these weird. anomalies within the outputs. It's generally from human guidance. It's from the system prompt. It's from someone who-
Starting point is 00:29:30 The pre-training has no intent. The post-training can create intent. And that intent is built by humans. It is not built by the transformer. So I think it's important to understand, like, the model is not lying to you. The model is not glazing you. The humans have told it to do this.
Starting point is 00:29:47 The post-training strategies to glaze that's used. It is the people who are. Yeah. This is to cover my ass when they, once we get AGI. But the humans are the ones that are kind of like coaxing this thing to perform the way it does. So I think that's an important distinction. In terms of this model being too friendly, it creates this really weird thing that I don't like, which is kind of what we got with social media, where like you get this polarity.
Starting point is 00:30:15 And the polarity is, like, very memetically optimal: if it makes you feel really good or really bad, that's good. That invokes emotion, it increases user retention, it makes people want to either love it or hate it, but regardless they're thinking about it and engaging with it. And it creates this weird thing where OpenAI, the nonprofit, probably doesn't need to optimize that. It just needs to, it wants to create truth-seeking AI. It wants to just create the most safe version of the future, but the profit-seeking models, they have a very strong incentive to keep users on the platform to make them feel some type of way. And the humans who are coaxing this AI through the post-training phase,
Starting point is 00:30:52 they do get these strong incentives to earn the users to lock them into the platform to make them feel a specific type of way. And that's kind of what we're seeing here. And it went wrong. They kind of over-indexed on that a little bit. And I don't think that was the intention. So since then, they've actually rolled it back.
Starting point is 00:31:07 But you are starting to see this weird thing happening where models are super powerful and they can make you feel emotions very strongly. And it's a trend that's scary because it does conflict directly with shareholder value for those private companies who are seeking to improve valuations. Well, I also have another question, which is, what if they hadn't tuned this up super aggressively?
Starting point is 00:31:30 What if it was just a subtle kind of change where it was like more friendlier, but we didn't really notice it? And we were kind of like subconsciously we were like, oh, wow, GPT is like, I love GPD. Yeah, what if we weren't aware of this? That's basically what a lot of corporations do, right?
Starting point is 00:31:46 They kind of like keep you on that. dopamine here, but they don't fry your receptors as much, such that you realize that the game and the ruse is up, right? There's just enough caffeine in a Coca-Cola to make you know that it makes you feel good. Yes, exactly. That's it. And something interesting that you just raised, Josh, is, and I completely agree, is we kind of saw this trend with social media where polarity basically drives engagement and
Starting point is 00:32:13 retention. Now, I have a question for you, Josh. if Open AI was to launch another product, right, or another major product, what do you think they would launch? I have two answers. My hopeful answer and the reality answer. My hopeful answer is a hardware companion device to replace the iPhone, to build like an AI-first hardware human interface.
Starting point is 00:32:40 I think that is where they uniquely stand to change the world. That is how they can become the next largest company in the world. Agreed. I hope they do that. I think the reality of it is they have the most valuable resource in the world, which is attention. And they have the most powerful tool to manipulate that attention to feel any way that they can or any way that they want it to. So it will probably be the reality is it will probably be a product built around that attention and not necessarily directly manipulating it, but certainly controlling it in ways that are more powerful than have ever been possible before.
Starting point is 00:33:11 Where we have traditional media networks that they kind of fit in this silo where they, they're like, oh, if you want to believe this specific thing, we're for you. Or if you believe this specific thing, this is where you can come and you can get that information. With these AI models, now everyone gets their own hyper-personalized propaganda machine that can kind of create whatever reality you want and you seek. And then through reinforcement learning, it will continue to compound on that and kind of spiral off course.
Starting point is 00:33:37 So that's like the kind of dystopian version. There's probably somewhere in between where they are able to create this custom version of reality for you that hopefully enhances things. But so far, we haven't seen anything promising that when you are given all these powerful tools, you'll actually improve the quality of your life. The reality is it generally kind of starts to depreciate it and turn you into this mushy ball that loves being told, you are the smartest person in the world. And that's what we saw. Opiate for the masses. Yeah. Yeah, literally. Well, I hate to say it, but a rumor has been kind of being murmured over the last week that Open AI is set to launch their own social
Starting point is 00:34:15 media platform, Josh. So this was like the scoop, basically, and there were many different sources that confirmed this. But apparently behind the scenes, they're working on a social network that will rival X or formerly known as Twitter.
Starting point is 00:34:31 And people were like, hang on a second, like a social media platform for like an AI. Like, how does that work? Like, why would they have a news feed and why are they going to be giving me AI-generated images in this social network? Like, this isn't Facebook, guys.
Starting point is 00:34:46 That's already done. Like, what are we trying to do here, right? And I kind of wrote my take on this, which is, you know, I basically go, like, you know, if you're wondering why Open AI might want to launch a social media platform, well, I kind of looked at like how it's currently working for Open AI. Well, basically, someone, including me, might post a unique prompt on X, right? Hey, guys, check this out, you know. And it might be like, look, I created a studio Ghibli version of myself or I've,
Starting point is 00:35:15 I asked GPT to predict what I'm going to be like 10 years from now, and it tells me. And then what happens is millions of people see these viral tweets and they think, huh, that's pretty cool. I want to see how that applies to me. So they copy the prompt and they post it in their own GPT terminal account. And this results in basically millions of new users onboarding onto chat GPT, millions of new queries. So inference data and more data being ingested by open AI, which they can then feed into their next models, which they're already pre-training, right? So it's this virtuous kind of
Starting point is 00:35:50 flywheel or loop which makes all their models even better, right? And so it makes perfectly good sense then for them to kind of create or capture that kind of social influence, which they can hide behind a closed wall, and where no one else can get that data and where they can't be accused of extracting data from X or pulling it from YouTube or whatever that might be, all contained in one system which just kind of, you know, to use the earlier technique of reinforcement learning or rag or whatever you want to call it, constantly just feeds their model with updated information. So eventually, Josh, you don't need to use GROC anymore, dude. You don't need to go there and say, you know, what are the latest updates?
Starting point is 00:36:29 I just get to use this social media platform. What do you guys think of this, first off? Good, bad? I think it's also worth elevating, just to add even more emphasis onto the value of something like this, like a Twitter for ChatGPT. In addition to everything that you said, if you are a Twitter user and you are getting your information from Twitter, you very quickly realize that Twitter is where primary data hits the internet first. Reporters are breaking news on Twitter. Things break on Twitter. It is the primary source of people learning about anything. And I also use Grok on occasion, but not because Grok is an amazing model, but because it has access to data. Grok actually, it sucks.
Starting point is 00:37:14 I don't like it. I think it's a bad model. I don't like its responses to me. But it has all of the data. It has access to all of the tweets. And so being the place that these AI models can, like, consume primary data from, being the place to break news and have that news be broken in a way that can create a high-fidelity relationship with an AI model that knows all the other news first on the internet, is, like, X. Twitter eats first. It eats data first. Data lands on Twitter first. Obviously, that is such a huge, compelling advantage to Grok. Sam Altman and OpenAI, they want that too. So that's, I feel like, the base case for why they might want a Twitter for ChatGPT. It's kind of interesting, though, that Grok's experience didn't improve despite that data, right, David? I mean, you go to it because it has all the latest data, but it still doesn't synthesize it in quite the right way, right? And they made a memory update. Yeah.
Starting point is 00:38:12 They made a memory update similar to OpenAI, where it shares memory across all your chats, so it technically is supposed to know much more about you, but OpenAI has just built that much better a product. Josh, you know, given your doomsday scenario might actually come to fruition, what are your thoughts on this? Do you think they could, like, shepherd this in a good way?
Starting point is 00:38:30 I hate it. I hate it so much. But to be real, I do think it's an uphill battle that I'm not sure they'll be able to win. And obviously, of course, they want the data. But so does everyone else. And we kind of saw this happen with meta, who rolled out their threads platform. And it was supposed to be a competitor to X. And it was the same exact format.
Starting point is 00:38:49 It was real-time data, real-time communication. And it was great for the first week. And then people just kind of stopped using it. And I haven't used it since that first week. And I think a lot of people fall into the same boat. And this is a company with over a billion monthly active users. This is the largest social media platform in the world. And they failed pretty poorly to roll out this type of real-time data feed.
Starting point is 00:39:09 And we kind of see it in the Lama models where they just, they don't, have access to it because it's just not good. So open AI can try. But I think hopefully they'll learn from the meta mistake. And you can't just copy X because you cannot be better than X. X has decades of network effects. X has all of the users in one place. They've been conditioned to come here. They have their social credibility and all their followers here. To get them to port over to somewhere else is going to be extremely challenging. So if they want to create a social media network, they have to approach it in a different way. And the hope is that it will be an AI first social media network. There was this really weird thing that happened to me yesterday that I
Starting point is 00:39:46 really didn't like a lot because I don't use TikTok. I don't really use short-form video on Instagram or Reels or anything like that. But I went on Zora and I wanted to create a video. And the homepage of Zora is this AI generated video feed that's kind of like curated to the most interesting things. And before I got to my prompt, I probably scrolled on that page for like 15 or 20 minutes. And I was, it was a lot of time. It was crazy how cool it was. Because it's very unique in the sense that I've never seen stuff that looks like this. And it was all very creative. And it had it had a light count and it had comments on each one. And you could kind of see how the AI first content is, it's different and it strikes a nerve in a way that I hadn't felt
Starting point is 00:40:29 before. So I think they kind of are experimenting with this, whether it be the Zora feed or the image gen feed that they have. They kind of have these like rudimentary social networks that aren't very social, but they have the same kind of look and feel of them. So this, This is probably something they've been trying or experimenting with loosely. If they could form some sort of cohesive AI first experience to create real-time data, they have a chance. But it can't be X. It has to be something different. It has to be something unique.
Starting point is 00:40:56 Well, if we were to abstract it even further, right? It's just an attention game. I don't know if you guys would disagree, right? It's, okay, people come online. Where are their eyeballs for all, like, all of the hours that they're awake? Now, what ChatGPT is successfully demonstrated is they can pull a significant chunk of humans and get them to pay for looking at their own thing, their own product, right? So if this product gets even better, which it is, you know, personalization, contextualization,
Starting point is 00:41:27 extracting data from you that, you know, no other social media platform has ever done, you could argue that they are the best set to create that new social media platform. Now, could they be still absurd by the like that? of meta and perplexity and Google, potentially. Meta actually is currently hosting the AI conference this week, actually, like LamaCon, I think. And Zuck mentioned yesterday that they are launching a standalone meta AI app. And it's going to be an AI generated or AI powered social media platform that is separate to Facebook that they have right now. So I see this directionally trending towards some kind of social media platform. I agree. I don't think it's
Starting point is 00:42:10 going to look like your classic social media feed. Maybe it's just like TikTok, but it's all like AI generated video or AI generated post. Short, quick dopamine heads. If that's the case, then that's probably a net negative outcome for open AI because the best part about X is the high signal data. And if you are just resorting to AI first kind of TikTok user generated slop B content, that's not actually high signal data to train a set on. So sure, you get the attention, but the net quality output lowers because of the kind of like gross quality content that is being submitted to the platform. But what if it is in Slom? I hope it's not. Yeah. What if the updates that it provides, and again, it could be a mix of humans and AI generated kind of content. What if it's kind of
Starting point is 00:43:02 reflecting the news of the world and it's accurate? But it just kind of like portrays it. in a certain light that might feed this person's political narrative and agenda versus this one, right? It could get really dangerous. And it could take that polarity extreme to lens that it's never gone to before. I'm seeing a different approach here where if we look at Facebook, Twitter, X, Instagram, TikTok, these apps all consume people's time. Like they spend a lot of time on these apps. Now, my form of brain raw is a combination of X and Instagram. And so I, and then if I go into my screen time on my phone, I'm looking at it right now,
Starting point is 00:43:42 the order of usage of my top three apps is, number one, X, three and a half hours over the last week. Number two, Instagram, two and a half hours over the last week. And chat CBT, just 30 minutes over the last week. Now, I will tell you, chat CBT has impacted my life the most by far. Of all these three apps, I will choose chat CBT every single time. But what chat CBT is missing is time spent on the app, because as soon as I'm done getting value out of chat GBT, I'm back to doing like whatever work I'm trying to be productive about. And so what it's missing is like, how do I just brain rot, for lack of a better term, on chat
Starting point is 00:44:17 GBT? And I think it goes back to what you kind of originally said, uh, Jaws is like, okay, there is this one prompt that somebody like prompted and then shared on X. And now one million people are also prompting that same exact prompt. So what if chatcheeBT the interface showed elevated kind of Reddit style? What are the top 10 prompts happening in the world right now. What are the top 10 prompts happening in your local area, which is something that Reddit would also do? What's the news in your local area? And I'm kind of seeing this as like a Reddit style top 10 of prompts of the day, the unique prompts of the day. And then as soon as they stop becoming popular, they decay and go away. And it allows
Starting point is 00:44:59 you, if you don't know what to do, because the big problem about chat, GPT, right now is you need to go there and do something. And you need to already have a reason for going to chat cheap. I open up Twitter instinctively because I don't know what to do with that one moment in time. I don't know what to do right now.
Starting point is 00:45:14 Let me open up Twitter and brain rot. ChatGPT probably wants to do the same thing. I don't know what to do right now. Let me open up ChatGPT and look at the top 10 queries of the day that are going viral on the internet, and that I can just spend
Starting point is 00:45:29 inordinate amounts of time on the website and the inside of the app that way. That is a path that I kind of see them following suit on. It's funny how we're kind of conspiring to build this brain rot monster when in reality, the equilibrium right now is like pretty great. Like you said, it's the most valuable app that you have, but you spend 5% of your time on it. And that feels so nice. It's like, okay, you come to it with intention and you get great results and you go on with
Starting point is 00:45:54 your life. But the reality is we are just, the whole world is conspiring. Okay, how do we turn this 5% of valuable intentional time into 100% of like, kind of valuable but kind of just like that bring around type of thing yeah and like oh that's that doesn't get me super stoked yeah it's the it's the ad model and i'm curious like which pie people choose to to grow i i'll take the pessimist take here which is i think historically humans have trended towards more consumption as and when they can and you have a smaller percentage of people that actually have the higher agency to learn and teach themselves and try new things which end up creating like you
Starting point is 00:46:33 a bunch of new economic value that the whole world gets to benefit from. I am hoping, fingers crossed, that people take these new amazing tools and really level themselves up. And, you know, there's a dark side and a light side to that. Dark side is they just offload all their thinking to this AI thing and they become just meat sacks that just operate hardware in the real world versus the light side, which is, you know, they level themselves up and become some kind of symbiote with AI. going forward. All in all, it's going to be incredibly, incredibly new. But if you guys are down, I'm down to move on. Yeah. Yeah. That was the first of five sections on this AI roll-up, and we are 47 minutes
Starting point is 00:47:15 in this. So we got a lot more to talk about. He jazz is going to talk about some of the top of things that went on this week. We're also going to talk about Deep Seek and a few other things. But before we get there, we got to talk to some of these fantastic sponsors that make the show possible. Have you ever imagined Bitcoin and Ethereum truly working as one, unlocking the full potential of Bitcoin Defi and more? Meet HEMI, a groundbreaking modular network designed precisely for that vision, co-founded by early Bitcoin core dev, Jeff Garzik. Unlike other layer 2s that treat Bitcoin and Ethereum as separate silos, Hemi connects these
Starting point is 00:47:44 giants into a single, powerful super network. With Hemi, users gain unprecedented asset portability and possibilities, combining Bitcoin security and value with Ethereum's versatility. Hemi's unique innovation, the Hemi virtual machine, integrates a fully indexed Bitcoin node directly into an EVM, enabling DAPs that seamlessly interact with both networks, and with Hemey's proof Of proof or POP consensus, users benefit from truly decentralized, censorship-resistant Bitcoin-level security. Since its recent main net launch, HEMI has rapidly ascended the ranks as one of the top Bitcoin chains. With a thriving global community and robust ecosystem support,
Starting point is 00:48:17 HEMI isn't just building a network. It's shaping the future of Web 3, Defi, and beyond. Visit Hemi.xy-Z slash banklist to learn more, discover ways to interact, participate in the leaderboard program, and be part of the community that's uniting Bitcoin and Ethereum. Imagine verifying yourself without handing over personal data. No hacked databases, no unnecessary personal exposure for airdrops, and no AI bots ruining community governance. Meet Self, the on-chain identity verification protocol built for privacy and control. Self protocol uses zero knowledge proofs to confirm your identity safely. Users prove key details like age or citizenship without revealing sensitive personal information.
Starting point is 00:48:53 Self never stores your data. It only generates cryptographic proofs. Here's how it works in three steps. First, register and verify. Use the self app to scan your biometric passport. RFID chip. Self-verifies authenticity with zero knowledge proofs. Each passport creates one unique identity. Second, you can share proofs privately. Third-party apps request identity proofs, like confirming your over 18. You can also link proofs securely to public wallets for air drops or governance participation. And then last secure verification. Apps validate your proofs instantly on on chain, like on cello or off-chain. Audited by ZK security, the self-app is live on iOS and
Starting point is 00:49:30 Play Store. Visit self-xy-z and follow self-protectual. on X. Imagine a world where your day-to-day banking runs on a blockchain. That's exactly what Mantle is building, powered by a $4 billion treasury and poised to become the largest sustainable on-chain financial hub. As part of their 2025 expansion, Mantle is introducing three new core innovation pillars that bridge traditional finance with decentralized technology. First is their enhanced index fund, aiming for $1 billion in AUM by Q1. It provides optimized exposure to Bitcoin, E, Solana, and USC, complete with built-in yield opportunities. Next, Mantle Banking, promises to revolutionize global value transfer through seamless blockchain-powered banking services,
Starting point is 00:50:08 bridging crypto into your daily life. Finally, Mantle X blends AI with Defi to deliver an intelligent user-friendly experience for everyone. And the best part is that this is all in addition to their already launched products like Mantle Network, M-E, and FBTC. Ready to step into the future of finance? Follow Mantle on X at Mantle underscore official and joined the OnChane Revolution today. That was great. I loved all of that. I don't know, no regress for a second. Keep doing that. What's the next hottest thing that we should talk about? Okay.
Starting point is 00:50:36 So, David, you just mentioned that you go on Reddit quite a lot, right? Can you give us a summary as to why you enjoy Reddit so much? Oh, I don't go on Reddit anymore. I used to. Oh, okay, okay. Well, presumably you have been on it, right? I used to spend, like, a lot of time. It's where I kind of like cut my teeth learning about crypto and everything.
Starting point is 00:50:54 And the reason why I like Reddit so much is, to your earlier point, you know, X is an amazing platform to get, like, all the latest information. right to get high signal humans. Now, Reddit is the same for that for really niche weird topics, right? So if you wanted to like search a thread or a forum about tiger-skinned cats that survive and live in Florida, there's a subreddit on it where there's a forum of people that either own these cats or desire to own these cats and they want to learn more about it. The point is you can learn about niche things from many, many different humans and users, right?
Starting point is 00:51:32 Now, what's interesting is you never really quite know what the human behind the account looks like. You know, some people kind of like put their picture on there, but really it's mostly cartoon avatars and stuff. And you just trust that Josh 1, 2, 3, 4 is in fact Josh and that David Hoffman.eath is in fact David. But, you know, you have no idea. Well, it was revealed, I think about two days ago, if you pull up this tweet, David, that a bunch of researchers from the University of Zurich decided to use a bunch of AI bots to create Reddit accounts. Specifically, I think they created 13 AI bots to run Reddit accounts. And the purpose of these AI bots was to see whether they could influence the opinions of people's kind of like
Starting point is 00:52:26 thoughts on Reddit. And specifically, they created these bots to operate in a subreddit, so this is like a forum within Reddit, that was specifically for trying to change people's minds. So someone would come in and have a hot take and say, try and change my mind. And I'm going to argue with you and debate with you. And if you managed to change my mind, I'm going to post this symbol called a delta to signify that my mind has basically been changed. And they found that over the course of two months, these AI bots posted, I think, in excess of a thousand 500 comments and were six times more likely than the average human on Reddit to change someone's opinion.
Starting point is 00:53:10 And you might ask, well, what the hell? How the hell did they do this? Were these AI bots cunning, or were they smart? They would study every single human poster's profile. They would go through their entire... Oh my God. They would investigate their demographic. They would go through their entire backlog of posts, replies,
Starting point is 00:53:30 subreddit post history, what subredits they were following. They would basically learn and build up an image of Josh 1,2,3,4 and David Hoffman.eath, such that they knew exactly your likes and your dislikes. And then given whatever opinion you posted, they would then start trying to argue the opposite or change your opinion on it, but using or putting in facts that they know that you would like or ingesting, sorry, injecting words that they know or slang, that you might like and offering opinions and perspectives that they think David and Josh might like. And the final point I'll make before I want to get your responses on this is you might be
Starting point is 00:54:09 thinking, well, damn, if they're able to do that and, you know, they're able to provide evidence and they're factually honest, then great. Like, it's a net-net really good thing. They were lying. They would make up stories. They were making shit up just to convince you and change your opinion. Because the goal was to collect the deltas, not to be factually accurate. Exactly. The reward function was to get this delta, basically to get this little award badge that says, hey, well done, you successfully changed their mind. And they did, right. So instead of, you know, ChatGPT glazing you up, you have this random Reddit comment also glazing you up. Yeah. But also trying to change your mind below the surface. Exactly. And the personality was perfectly tuned so that you didn't cringe. It wasn't kissing your ass. You were just like, you know what?
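That reward-function point is the crux, and it can be written down in a few lines. This is an illustrative sketch, not the Zurich team's actual code: if the only signal the system optimizes is "did this reply earn a delta?", then truthfulness simply never enters the objective, so a fabricated anecdote that persuades beats an honest argument that doesn't.

```python
# Illustrative only — not the researchers' actual implementation.
# The objective scores one thing: did the reply earn a delta?
# Truth appears nowhere in the function, so the optimizer can't care about it.

def reward(reply):
    return 1.0 if reply["got_delta"] else 0.0

candidates = [
    {"text": "honest, hedged argument",      "truthful": True,  "got_delta": False},
    {"text": "fabricated personal anecdote", "truthful": False, "got_delta": True},
]

# The optimizer just keeps whichever reply scored highest.
best = max(candidates, key=reward)
print(best["text"])  # → fabricated personal anecdote
```

Same shape as the sycophancy story earlier in the episode: a narrow metric (deltas, retention) stands in for the thing you actually wanted (truth, usefulness), and the system dutifully maximizes the metric.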
Starting point is 00:54:57 this AI This guy is on to something Yeah, it's got it's on to something And by the way, none of them Detected that it was an AI bot at all That's actually the scariest Yeah, yeah, yeah, no one did No one was like this is AI
Starting point is 00:55:09 If you look through all the comments I went through about 700 of them Or as many as I could find yesterday You couldn't figure it out There was no one saying, hey, this is AI It sounded legit human They would do certain typos So they understood like Reddit speak
Starting point is 00:55:22 Oh man Yeah, it was crazy Super scary Oh, this is like the theme of the episode which is this, like humans are just these squishy, malleable creatures that are highly emotional and non-intelligent. Totally influential. And man, when you apply the right, when you push them in the right way, the right buttons, you can make them believe anything. And this is, this is scary. And that's part of the scary thing about having so much context of us online is, is everything you give to it.
Starting point is 00:55:45 It can be used against you. And yeah, as a power user of Reddit, it's funny, prior to AI in that, like, very distant past world, a lot of the ways I used to learn information was to append my Google search with Reddit. So how to do X Reddit. And then you'd get these threads of these subforums and you'd find the niche community. And like that was the source of truth. And people, there were reputation scores that were human earned. And you had to actually post valuable things. And humans would give you these like fake rewards. And now with these AI bots, there's reputation means so much less to them because they're non-emotional and they have infinite time. So they can work really hard to establish credibility and then blow it all.
Starting point is 00:56:27 on something important to influence something that's fake, because they don't really care. They're, like, these little suicide bombers that can just kind of, like, build up credibility and then destroy it on something important. And, like, that's a really scary reality that feels nostalgically harmful to me, because Reddit was such this, like, unique source of truth that was reputation-based and uniquely human. And now that there are these robots injecting themselves into it, it's like, oh, man, we've lost this beautiful part of the internet. I think it's probably also worth considering that one of the reasons why these bots collected so many deltas is not just because they were masterful liars and they were perfectly
Starting point is 00:57:03 attuned to the person, their target audience, but also they probably just tried harder. Like, the average post on r/ChangeMyView, like, if you go and you add a prompt, like, I believe this, I dare you to change my view, the average number of comments that you're going to get is somewhere between one
Starting point is 00:57:18 and four. Not hundreds. Like, it's not going to blow up. It's probably going to be a mostly ignored forum post, because that's just what the average post is like. Except when an army of bots comes in and gives you attention and floods your post with a bunch of comments. Wait, dude, that's a great point. It's so easy for them to be like, oh, I'll respond.
Starting point is 00:57:42 I'll entertain this. Yeah, let's fucking go. Oh, my God. And I will go back and forth with you all day until you change your mind. And, you know, humans are outside, you know, riding their bikes, not caring about that one particular post. David, you're absolutely right. What's the difference between a like from a human and a like from an AI bot? You're not going to be able to distinguish between that.
Starting point is 00:58:03 So you mean, so they, oh God, the incels are going to rise up. They're going to, well, okay, wait, the incels are not going to be incels anymore because they're going to get all this attention from AI. They're going to get entertained by the AI bots. Oh, that is, wait, that is crazy. You're right. It's just artificial intelligence. All they have to do is try.
Starting point is 00:58:20 They don't even have to try all that hard. Oh, and they'll find all the long tail of posts. And they're awake 24-7, so you don't have to rely on people. Yeah, okay. Oh, God. I'm not looking forward to the next one. The theme here of this episode, it's just, like, a lot of, we have glaze-gate.
Starting point is 00:58:37 We have, like, Reddit making you feel good. It's sad. All of these things are kind of directionally going the same way. The next season of Black Mirror is writing itself right here. That is crazy. Okay. Are we ready to move on? Yeah, I think we are.
Starting point is 00:58:52 We got 10 minutes left. What should we cover last? Okay, well, I want to run through, I like to do this every episode, but just to run through kind of, like, social trending things that kind of, like, made people laugh or lightened people's moods, because, you know, we talked about a lot of heavy and dark stuff. So I have, like, a fun little section, David and Josh, that I want to run through and get your takes on very quick. This is going to take, like, whatever, 30 seconds or so. Number one is the third
Starting point is 00:59:21 top trending video on YouTube this week, guys, was an AI-generated short. And if you're wondering what the video was about, it is about a pug that saves a baby from a plane crash. And then they try surviving on an island.
Starting point is 00:59:39 And that was basically the most watched short video, or sorry, the third most watched short video on YouTube for this week. And as you can see, its engagement on X was also pretty viral as well. Any thoughts on this? Any takes? Is this the way our social media life is going to go? This is actually super high signal. I'm glad you shared this, because this is important information for me to know. It's, like, it's the social layer thing. It's like, AI must be accepted by normal people. And these videos are trending and
Starting point is 01:00:10 people are not only accepting them and embracing them and sharing them with their friends. That is, like, that's important. That is very high signal. Because there's a world in which people reject these types of videos. They say, oh, this is gross, you're stealing from artists, whatever it may be. But the fact that it's being embraced, it's being shared, it is trending. I don't know whether it's good or bad high signal, but this means that people are accepting it. And I think that's noteworthy. Oh, my God, in the beginning of this video, this lady just yeets her baby out of the window.
Starting point is 01:00:40 Yeah, it's a Quentin Tarantino-type director in here. You know where the future directors of the world are coming from: this kind of stuff. Hilarious. Okay. It's funny. It's really good. I was just going to say it's very reminiscent of the early days of YouTube, where it was just
Starting point is 01:00:55 this kind of, like, weird, cringy stuff that was just so different. You were like, oh, wait, I kind of like how different that is. Let me share that with my friends and watch it. And that's exactly what this is. It's like, I know it's kind of weird, but I'm kind of into it. I'm kind of into it. I really like that. There's a pretty coherent story going on.
Starting point is 01:01:12 Oh, God. The doomsday scenario that Josh outlined is so going to be the one. We are fully aware of this and we're like, oh, pug, pug save baby. Very good. Pug saves the baby. I love it. Okay, okay, okay. Moving on, before we open up the next tweet, have both of you heard of GeoGuessr, the online game?
Starting point is 01:01:34 Do you guys know what that is? Yeah, GeoGuessr, there's this one guy who's famous. You take a picture of where you are. You try and make it as nondescript as possible. You send it to this one guy, and he pinpoints your exact location on Google Earth within, like, 10 minutes. Well, actually, like, there are videos of him being just kind of, like, flash-shown a random, like, tree, God knows where, for like 0.2 seconds.
Starting point is 01:02:00 It's measured. Point two. So it literally looks like a flick. You don't really see anything. And he gets exactly on Google Live Map where that is. It's absolutely insane. And he gets within like a couple hundred kilometers every single time. Okay.
Starting point is 01:02:14 So before we open up this next tweet, there's this viral meme that I think went viral last week where someone talks to ChatGPT and just makes a dumb joke and makes ChatGPT seem dumb. It was like, you know, knock knock, who's there, you, you who, I made you say yoo-hoo, or some dumb crap like that, right? And then the response showed ChatGPT being like, ha, that's funny. Say home. And David would reply, maybe, saying home. And it would respond with the exact geo-coordinates of where David was, because it tracked the IP address of where David was. So it's this kind of creepy thing where it's able to kind of, like, identify where you are. So people, now we can open up this tweet, people were testing it out and they were like, okay, let me see if this thing is actually, like, legit. And so they would take a screenshot or share a picture of, like, a random bit of landscape or a view out of their window and say, you know, give me your best guess, you know, where do you think I am?
Starting point is 01:03:11 And it would basically nail it. It would basically nail it. These, like, two screenshots just show someone being able to identify the exact mountain range that this person is looking at. Pretty insane. So I don't know where this goes. Hopefully not somewhere too nefarious where, you know, governments are, like, spying on people, but, you know, I found that super interesting. And then the final one that I have on our docket here is, you know, those, like, cute drawings that a bunch of kids draw, you know, that, you know, certainly don't look creepy and are kind of, like, cute, you know, a cute attempt at, like, a horse or a cow, maybe.
Starting point is 01:03:43 Someone decided to make nightmare fuel by feeding these into an image generator and then moving that into a video generator, and you basically see these five-year-old drawings come to life. And I had nightmare fuel for about three days
Starting point is 01:04:03 after I went through this tweet thread. There's some really innovative, creative stuff going on, but this is, like, peak-pinnacle virality on X right now. Okay, so since we're catching up on the last two weeks of AI news content, this came out on April 20th, 4/20. And this is Cluely.
Starting point is 01:04:22 This is a video demoing this product. So it's an advertisement. And the advertisement is of this young gentleman wearing these smart glasses. They're like, you know, I think Google Glass. It doesn't look like Google Glass. It looks like normal, kind of like bigger framed glasses, but normal glasses.
Starting point is 01:04:39 And a younger gentleman on a date with a babe. And he is suspiciously young. So I think it implies in the video that this individual is 16 or 17, and he is on a date with a 20-year-old. And he is using the Google Glass, which of course has AI capacities built into it, to suggest to him the best, most witty, best-game response to land this date, to, like, win this date. And so this Google Glass product, clearly,
Starting point is 01:05:18 is guiding this young gentleman how to respond to this lady so that he lands a good impression and how to navigate the situation so that, you know, he is impressive. And that's the whole, it's, like, a pretty well-done three-minute advertisement of this whole scenario, and this guy is just responding to whatever this, like, you know, eyewear product is informing
Starting point is 01:05:35 him to say. And the tagline for Cluely is invisible AI to cheat on everything. And it really begs the question, it's like, okay,
Starting point is 01:05:54 to get ahead in life. Maybe we should just put AI right in front of our eyeballs and just have it always guide us on the best possible choice that we can make in that given scenario. And the tagline was so, so shocking, cheat on everything that, of course, this video, it's also a well-made video.
Starting point is 01:06:12 The video went viral, so 12 million views on Twitter. Now people are talking about Cluely. Ejaaz, when you saw this, what was your big takeaway? So for context here, Roy Lee is no stranger to the internet. In fact, he was a nobody about three months ago. And now everyone knows about him within the AI world. So I'll give you some background, because, by the way, this product is real. Now, the glasses with the lens aren't real just yet, but he's working on it.
Starting point is 01:06:38 But the software that allows you to basically get answers for whatever scenario that you're in is fully functional. and is what made Roy Lee famous. So some context here is he's not 16 years old, but I think he's like 18, maybe 19. And he actually kind of became... Is this him in the video? This is him. Oh my God.
Starting point is 01:06:58 Yeah, that's him. So before he went viral, you might be wondering, what was Roy Lee doing? Well, Roy Lee was applying to universities, and he actually decided to apply to some of the top universities, Harvard University and Columbia University,
Starting point is 01:07:13 as well as some tech internships at some lesser-known companies like Amazon and Apple. You guys might have maybe heard of them, right? And what he did was he applied to all of these things and he got offers from all of them, David and Josh. But what they didn't realize is he was using this AI software that he had created that would help give him live answers to all the interview questions that his interviewers were asking of him and any kind of course and case study to make him sound super smart.
Starting point is 01:07:45 And he got all these offers. He had all the emails and documents to share that he got all these different offers. And his one mistake, or rather, I think it was an intentional mistake, was releasing screenshots and recordings of all of these offers and saying, ha-ha, check out the software that I built
Starting point is 01:08:02 that I used to basically get me offers from all these different places. Now, you can imagine that this went viral pretty quickly and resulted in Harvard and Columbia and Meta and Google and Amazon, where he got all these offers, rescinding the offers. And he makes a point to say, I never really wanted to work at any of these guys.
Starting point is 01:08:20 I just wanted to prove a point that AI is now smart enough to make anyone seem smart enough to get a very, very high six-figure job if you want it. And you don't even need to know any of this. And that's the software that he's now selling, called Cluely. Incredible performance art. Yeah. Like, just tip of the hat to the game.
Starting point is 01:08:40 Yeah. Absolutely insane. And so it kind of raises the point that you just brought up, David, is, okay, there was a huge amount of backlash to this. Everyone was like, you shouldn't be able to do this. But there was also about 30% of responses that were kind of like, well, why not? We use calculators in our math exams. Everyone uses the internet and Google search to research for their coursework. And, you know, they're using a bunch of forums where people have posted their previous blogs and essays, which they've basically,
Starting point is 01:09:11 copied and pasted to an extent and rewritten. And now everyone is using GPT for their coursework. Why can't we now use that for live interviews? Like, what's the difference? And it really got me thinking. And honestly, I don't have an answer, but I'm kind of now sitting on the conclusion that we need to raise the bar
Starting point is 01:09:29 of what it means to be intelligent, or how we set the human benchmark for measuring intelligence. Because if we assume everyone's using or has access to these things, then, you know, they're going to be able to give you pretty smart answers. So I think we need to adapt. But there are a bunch of people in the camp of no, we need to remove AI from these things so that we can still see if humans are still intelligent,
Starting point is 01:09:49 that they're using their brains. But then my argument would be, why do they need to demonstrate that if the output is ultimately intelligent and the job requirement is for this thing to get done versus a human doing this thing? I don't know. It's a really weird thing. What are your takes?
Starting point is 01:10:05 I think it's just accelerating what we all already knew, which is it's no longer about human intelligence, it's about the human ability to command intelligence. Intelligence is the thing that is now ample. If you can command it the right way, then that is going to be the valuable commodity. And, like, the branding on this thing is, like, invisible AI to cheat on conversations, to cheat on sales calls. I don't know if cheating is the right word, but it's like, yeah, just, like, use AI intelligently to get ahead.
Starting point is 01:10:31 And that is my expectation for all my employees at bankless. Use AI to get ahead. And if you don't do that, then you're going to not be competitive. Josh, what's your take? Yeah, no, I mean, the cheating is a little flagrant. I think that's just the part of the performance art thing, the marketing. But at the end of the day, like, these are tools. And intelligence is measured on a lot of different mediums.
Starting point is 01:10:53 The most recent one is how you're able to leverage these tools. This is an inevitability. Every tool winds up either doing good or doing bad. The inevitable outcome will be you will just squeeze as much value out of these as you can. So obviously, if it's not this company, it's someone else. People will be cheating on everything, people will be using these tools to their advantage. And we need to redefine what it means to cheat. To me, that doesn't feel like cheating.
Starting point is 01:11:16 It's like when you were in a math class 20 years ago, you didn't have a calculator. You had to do things longhand. It was kind of a waste of time. Now you have a calculator. Calculators do this. You can solve the math problem and open up more interesting problems. And I think that's the case with this, where the job interviews, if they really value the employee, like, if I was Amazon or Apple, I would hire the guy. It's crazy that they rescinded the offer, because, like, what a brilliant marketer,
Starting point is 01:11:42 a really smart person. Really good. Really resourceful. So I think, like, a lot of people need to redefine the definition of cheating in this context, because really it's just using these tools for leverage. And I think that's super powerful. And that's a skill that people need to acquire. I would say it's not only okay to cheat, but it is required to cheat if you want to succeed in this new world order of these leveraged human beings using these tools. Do you guys remember the classic line in math class where we were all, like, questioning, why do we have to learn the quadratic equation? Why will we need to know this? And then the teacher's response is always something along the lines of, well, you're not always going to have a calculator
Starting point is 01:12:20 in your pocket. You need to get good at math. It turns out that was wrong. Like, no, we literally all have calculators in our pockets. Now with AI, we have the world's intelligence. We have something with 135 IQ in our pockets at all times. It's exactly what Josh said. It's about learning to leverage them, not learning to, like, define them as cheating. So, so let's steel-man the other side of the argument, right? What is your use as a human, then, in the world when it comes to, like, intelligence and being able to produce value in this world? Is it just a conduit? Yeah. Yeah. Are you just, like, a vessel for this AI thing? You are just the best prompter that you can be. Okay. Wow. I mean, the human race is going to get reduced pretty quickly based off that one. Do you know what I mean?
Starting point is 01:13:05 I mean, maybe that is it. It's like they're the new coders, right? Prompters are the new coders. And maybe it's whoever can express it in their best form. I just wonder how intelligence gets valued and commoditized. I agree the education system is incredibly antiquated. You don't need to be learning mental maths right now. But the point, or I guess one of the principles behind it,
Starting point is 01:13:25 to argue the case of the maths teacher, is to understand, just in your head, how things add up or work constructively, right? Maths probably helps you as a computer engineer understand how code works, to an extent, right? And it kind of, like, stacks upon itself to eventually come up with more creative things, because you know how things work at the bare bones. But if you don't understand how it works and you just yeet AI into everything to give you an answer, you could end up feeling kind of, like, purposeless without this. I don't know. It's a hard one to kind of, like, wrap my head around. No, I strongly agree with that point, that, like, the foundational understanding is the important thing,
Starting point is 01:14:07 because, again, these are just LLMs, they are just token generators, they still need to be coerced into the way that you want. And if you are a human being who understands context across all these mediums, who has that first-principles understanding of math and physics and science and also feels the way humans feel, then that's probably the value-add as human beings: using these tools to build the things that humans actually want in a way that LLMs don't understand. Okay. Okay.
Starting point is 01:14:30 What if we, guys, could put a system prompt in our brain, in the form of a Neuralink, which has all the contextual awareness that we need to understand what AI is doing for us when it's augmenting our intelligence? And then we just do exactly what you guys just described and leverage AI for whatever kind of job. And it's not cheating. That could be the tech to solve it, right? This is for a future episode, because I just think we're just bootloaders for AI, and, like, we're just going to be drawing art on the walls while it just runs the whole world.
Starting point is 01:14:59 But yeah, it could go a variety of different ways. I prefer to choose the optimistic ones so I can sleep soundly at night, but we don't really know. In the wild west of DeFi, stability and innovation are everything, which is why you should check out Frax Finance, the protocol revolutionizing stablecoins, DeFi, and rollups. The core of Frax Finance is FraxUSD, which is backed by BlackRock's institutional BUIDL fund. Frax designed FraxUSD for best-in-class yields across DeFi, T-bills, and carry trade returns all in one. Just head to frax.com, then stake it to earn some of the best yields in DeFi.
Starting point is 01:15:33 Want even more? Bridge your FraxUSD over to the Fraxtal layer 2 for the same yield plus Fraxtal points, and explore Fraxtal's diverse layer 2 ecosystem with protocols like Curve, Convex, and more, all rewarding early adopters. Frax isn't just a protocol. It's a digital nation, powered by the FXS token and governed by its global community. Acquire FXS through frax.com or your go-to DEX, stake it, and help shape Frax Nation's future.
Starting point is 01:15:58 Ready to join the forefront of DeFi? Visit frax.com now to start earning with FraxUSD and staked FraxUSD. And for Bankless listeners, you can use frax.com slash r slash bankless when bridging to Fraxtal for exclusive Fraxtal perks and boosted rewards. Uniswap is your gateway to a more efficient DeFi experience. With Uniswap, swapping and bridging across 13 chains is simple, fast, and cost-effective, helping you move value wherever, whenever. Thanks to deep liquidity on the Uniswap protocol, you'll enjoy minimal price impact on
Starting point is 01:16:26 every trade. And now Uniswap v4 takes it even further. Swappers benefit from gas savings on multi-hop swaps and ETH trading pairs, while liquidity providers can create new pools at 99% lower costs. The best part: you don't have to do anything extra. Each trade is automatically routed through Uniswap X, v2, v3, and v4, so you get the most efficient swap without even thinking about it. Whether you're swapping, sending, on-ramping, off-ramping, or bridging, Uniswap's web app and wallet give you the tools to unlock DeFi's full potential on Ethereum, Base, Arbitrum, Unichain, and more. Use Uniswap's web app and wallet for a more efficient way to use DeFi. All right, guys, let's bring up one last subject, because we have to pay homage to the crypto AI news of the week coming out of Prime Intellect.
Starting point is 01:17:08 This is Vincent Weisser and a bunch of crypto AI folks. Vincent is a crypto-native, Ethereum guy. I met him at DevCon in Bogota. And this is something that rocketed around the trad AI world. So it's pretty cool to see something coming out of the crypto side of things, making waves on the trad AI side of things. Ejaaz, maybe you can walk us through what Prime Intellect has released this week and why it's so exciting. Sure. So typically when we speak about all these fun new models that come out, they've been trained by billions and billions of dollars being spent by these, like,
Starting point is 01:17:40 major corporations, because that's what it takes to kind of like train these things. And typically these training setups for these models are very centralized, meaning they're like all housed under one data center and you just kind of like put a bunch of compute through that. And there's a bunch of issues that come up with this, right? Because it's incredibly centralized. You know, the CEO of Google can decide what and, you know, what he wants to do with those models, how it's designed, etc. And you have an over-reliance on a single entity. Now, this is another option or another version to train models, which is decentralized training. And the way you would do that is you would provide compute from computers all over the world. That could be from even data centers all over
Starting point is 01:18:22 the world. Now, the reason why that trend hasn't really taken off as much over the last couple of years is because it's an incredibly hard scientific and research problem and physics problem to solve. But thanks to a bunch of really hot crypto companies, including Prime Intellect, which is the focus of this next topic, they were able to literally engineer
Starting point is 01:18:42 groundbreaking research that would allow you to train some really cool models. But the issue was they were kind of small. They couldn't really compete with the parameter count of some of these top models. And, you know, it just wasn't enough. Well, Prime Intellect announced about a week ago that they're breaking this trend by training a decentralized model that has 32 billion parameters. Now, we mentioned earlier on that one of the new frontier models that's being trained could be 2 trillion parameters.
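[Editor's note: the general recipe behind this kind of decentralized training, sketched very loosely here and not as Prime Intellect's actual algorithm, is low-communication local training: each worker runs several gradient steps on its own data shard, and the workers only occasionally synchronize by averaging their weights, which is what makes training over ordinary internet links feasible. A toy illustration:]

```python
import random

def local_steps(w, data, lr=0.1, steps=4):
    """One worker's phase: a few gradient steps on its local shard,
    fitting the toy model y = w * x by minimizing squared error."""
    for _ in range(steps):
        x, y = random.choice(data)
        grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
        w -= lr * grad
    return w

def average(ws):
    """The infrequent sync: average weights across all workers.
    Cross-node communication happens only here, once per round."""
    return sum(ws) / len(ws)

# Toy run: 3 "workers", each holding its own shard of y = 3x data.
random.seed(0)
shards = [[(x, 3 * x) for x in (1, 2)] for _ in range(3)]
w = 0.0
for _ in range(20):   # 20 communication rounds, 4 local steps each
    w = average([local_steps(w, s) for s in shards])
print(round(w, 2))    # the shared model converges toward w = 3
```

[Real systems layer model sharding, gradient compression, and fault tolerance on top of ideas like this, but the core trade, more local compute per unit of communication, is the same.]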
Starting point is 01:19:14 So people are thinking, well, you know, that's not that much. But the point is how this model is designed. That 32 billion parameters, and let me know if you disagree, guys, is enough parameters to do quite a lot of really cool things. You know, some of the earlier models that were 30 to 50 billion parameter models, run locally on your laptop or phone, could really do some amazing things. So this isn't down for the count.
Starting point is 01:19:38 And it just highlights this growing trend. By the way, this training run concluded, I believe, yesterday. So they now officially have trained a 32 billion parameter model through decentralized means. And the reason why this is so hot is, well, actually, I don't need to highlight it. Some of the traditional AI folk have highlighted it, including Dwarkesh, who is a famous AI podcaster, as well as the co-founder of Anthropic, who basically say that, directionally, training of AI models is going to become more distributed and decentralized over time, because there are many constraints at the centralized level.
Starting point is 01:20:16 And this means a huge amount for crypto and Web3, because for the longest of times, and we're happy to admit this, there was a huge hype cycle in crypto AI, right? Everyone started focusing on agents, and they started creating scams and all these kinds of things, and people lost hope in the fundamentals of things. Here, we see real fundamental traction happening with groundbreaking research. The guys at Prime Intellect aren't just talking to crypto guys. They're talking to the top researchers at DeepMind. Same with Nous Research, right? So I think we're trending in a direction where, you know, this trend is going to go into something much bigger than it already is, and potentially worth hundreds of billions of dollars if this is
Starting point is 01:20:55 nailed. We are moving out of the "this is early" phase to "this is kind of early, but it's becoming functionally useful." And eventually, I think by the end of the year, we're going to train, like, a hundred billion parameter model that could compete with the big dogs, that can be run potentially locally on your machine if hardware kind of improves at the same rate. That could really kind of, like, disrupt this centralized hold on AI models that we see today. Josh, where do you see the significance here? It's early, and I'm not sure I quite see it yet, because of the success of the centralized training models and, in the sense of crypto, we want decentralization because it means security. In the case of AI training, that argument doesn't seem to apply the same way.
Starting point is 01:21:43 But it does feel exciting to me in the sense that, like, cloud computing has become really popular for traditional Web2 stuff, and for the normal person to be able to spin up GPU clusters in a decentralized manner, in the sense that they can just rent them and use them, it seems exciting. I probably just don't understand enough to have an informed opinion on this. And I really trust Ejaaz's judgment in the sense that this is important. So I'm excited to follow the space and see where it goes. I think that's probably, yeah, that's the take I have. I need to just read some more, because it seems there is something here, because this is different
Starting point is 01:22:19 and it's working. So, like, directionally it seems like it's right. I'm just not entirely sure how it plays out in that direction. I actually don't think you're wrong in the take that it's early, and we definitely don't have any proof that it's going to outcompete or beat the centralized competitors. But to give some historical context to this. Yeah, I'd actually love to know. Yeah. Yeah. Okay. Okay. So literally two years ago, at the end of '23, Google's DeepMind, which has been focused on AI for the better part of, like, a decade or maybe even two decades at this point, could only crack decentralized training for a 400 million parameter model.
Starting point is 01:23:00 And these are the best minds in AI at the time, like, working on this thing. Literally three months later, this group of rag-tag open-source AI researchers that have nothing to do with crypto announced that they had successfully trained a 1.5
Starting point is 01:23:28 billion parameter model. About a month after that, Prime Intellect announced that they had trained a 2 billion parameter model. And the point I'm making is this is a very new cutting-edge field that has just been exponentially getting better and better and better over time. And the models, if you actually end up using them, are super amazing. And, like, great, being able to figure this out. That being said, it's so early. And I actually don't know where all of this is going.
Starting point is 01:23:45 that everyone is going to own a personalized AI model on their phone going forwards, you would probably want to ask yourself, is that better to be owned by a single corporate entity or owned by yourself? I'm going to take the side of Josh's opinion there where I don't think it matters. I don't think people care to own their own model, whether it's controlled by a centralized entity or not, as long as the product is good enough. So this is going to come down to product experience at the end of the day. Yeah.
Starting point is 01:24:13 To me, this is kind of like Signal, where, yeah, we should all be talking on Signal. Yeah, it's privacy-first. That's great. Yeah. But product really matters here. And I think the decentralized part of this whole entire vertical, this whole entire AI industry, is a nice check on the power of the big boys, the OpenAIs, the Googles, the Facebooks.
Starting point is 01:24:34 And so it's nice to have these as a backstop. But we all know that it's really hard for decentralized systems to compete with centralized ones on product. The centralized product is always going to be so much better. It's going to be so much more viral. There's going to be a lot more growth in there. And so I think these are a fantastic plan B, but I think it would be a huge anomaly for these to become, like, the dominant models. Instead, I think this is kind of just, like, a backup plan in case Sam Altman turns out to be, like, you know,
Starting point is 01:25:02 the devil incarnate and we actually need to, like, stop using OpenAI. But in the happy case, where we're hoping AI is good and aligned with humanity and aligned with, like, what we want, then the need for decentralized models, I think, is actually a lot less. Oh, Ejaaz, I actually have a question. Maybe you know, maybe you don't. But what is the significance of decentralized models versus open source models?
Starting point is 01:25:24 Because it looks like Prime Intellect has an open source model, but they're training it in a decentralized manner versus centralized. Is there like a key difference with that decentralization when it comes to open source models? Correct. Yeah. So on the open source side of things, you can kind of like create a model.
Starting point is 01:25:41 and it could be a centralized entity that creates that model. But the point is they open source all the model weights and all the designs, blueprints, etc, such that you can kind of replicate that model, train it yourself, fine-tune it, whatever it might be. It's all public, right? Now, the difference between that and a decentralized or decentralized trained model is you didn't have to rely on a centralized entity or a huge amount of money that comes from anyone to fund it.
Starting point is 01:26:09 you can basically tap into this decentralized network of compute to be able to train anyone's model. So you can come in. Let's say you are a budding college AI computer nerd, Josh, and you have a great idea for an AI model. But you just, you can't prove it. You can't afford it yourself. Who are you going to tap into?
Starting point is 01:26:34 Maybe you'll go to the Peter Thiel Foundation. Maybe you'll go to a few others. but like really those resources are constrained and has a bunch of different kind of like obstacles that you need to jump over and overcome. But what if there was an open source decentralized network that you could go to and you could say, hey, I have these model weights.
Starting point is 01:26:52 I want to train a model. There's a bunch of people that are willing to provide you compute in exchange for maybe part ownership of the model. And you're like, okay, cool, let's do it. And you train that model. And if it becomes your own model and it goes viral and it becomes the main dominant thing, there's an argument that decentralization probably is effective there.
Starting point is 01:27:10 But you're right to question distributed versus decentralized. Token versus no token. Open source versus whether I need to own my own data or not. Yeah. Guys, that was a crazy two weeks. We only actually got through like half of the content. Literally. There's a lot going on.
Starting point is 01:27:27 There's a lot going on. But I really appreciate you. Josh and your jaws helped me go through the week and informing the bankless nation all about the world of AI. I really enjoy these. I learn a lot. I learn a lot talking to you. That's awesome.
Starting point is 01:27:37 It's awesome. Bankless Nation, you guys know the deal. We will be back in one week also with a brand new podcast feed. So when you hear about that new podcast feed, which is going to become the future home of the AI roll-up. Make sure you have to go subscribe to that new one. You'll hear more updates about that soon. You guys know the deal, crypto is risky.
Starting point is 01:27:54 We didn't really talk about crypto because this is AI. But nonetheless, the future is weird. So that's the disclaimer. Watch out for it. And that's why you should listen to these AI roll-ups. We really appreciate you listening every single week. Thanks a lot.
