Bankless - AI ROLLUP: This Changes Everything | ChatGPT's Memory Update Just Blew Our Minds

Episode Date: April 17, 2025

OpenAI just rocked the week with a groundbreaking memory update, sparking debate over whether “soul data” is the real moat in AI. Google fires back with its Agent‑to‑Agent protocol and a Cursor-killer. We riff on 1,000 AI agents unleashed in Minecraft, labs turning pets into humans, and how Dolphin AI somehow beat GTA 6 to market. Buckle up as David, Ejaaz, and Josh roll through the top five stories shaping the AI‑crypto frontier and what they mean for builders, traders, and the future of intelligence. David: https://x.com/TrustlessState Ejaaz: https://x.com/cryptopunk7213 Josh: https://x.com/Josh_Kale ------ 📣 WALLET CONNECT | ONCHAIN UX ECOSYSTEM https://bankless.cc/WalletConnect ------ BANKLESS SPONSOR TOOLS: 🪙FRAX | SELF SUFFICIENT DeFi https://bankless.cc/Frax 🦄UNISWAP | SWAP ON UNICHAIN https://bankless.cc/unichain 🛞MANTLE | MODULAR LAYER 2 NETWORK https://bankless.cc/Mantle 🌐SELF | PROVE YOUR SELF https://bankless.cc/Self 🏦INFINEX | THE CRYPTO-EVERYTHING APP https://bankless.cc/Infinex ------ TIMESTAMPS & RESOURCES 00:00:00 OpenAI’s HUGE Announcement https://x.com/openai/status/1910378774727623024?s=46 https://x.com/cryptopunk7213/status/1910728919113736380 00:10:23 The Final Data Set https://x.com/theannagat/status/1912209931073433907?s=46 https://x.com/nearcyan/status/1910796767626686951?s=46 https://x.com/dbarabander/status/1910402182589145411 00:16:10 Falling In Love With AI https://www.instagram.com/p/DHwWc6RiH2P/ 00:23:05 Open AI To Replace Apple 00:31:10 The All Knowing Intelligence 00:37:12 Agents Overtake Minecraft https://x.com/kimmonismus/status/1912415676331168147?s=46 00:44:50 Pets As Humans? 
https://x.com/venturetwins/status/1911113662607315035 00:46:18 We Can Talk To Dolphins https://blog.google/technology/ai/dolphingemma/ 00:48:33 Google Agent To Agent https://x.com/omarsar0/status/1909977142311690320?s=46 https://x.com/i/status/1912123283954389172 01:00:01 GPT 4.1 https://x.com/_mohansolo/status/1911843179898540311 https://x.com/openrouterai/status/1911803671878173114?s=46 https://x.com/kimmonismus/status/1912434791779017083?s=46 01:09:17 A New Model Architecture ------ Not financial or tax advice. See our investment disclosures here: https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:03 This is going to sound crazy and very hyperbolic and out of control, but I really believe it when I say that the new feature that ChatGPT just released last week is actually larger than ChatGPT itself. And it's memory. And if you ask anyone from Silicon Valley, if you asked me last week, if you asked the random investor off the street, they would have told you last week that the price of AI and intelligence was rapidly decreasing to zero. It was becoming commoditized. And these large language model companies that were paying tons of money into training these things are losing money.
Starting point is 00:00:33 But I think that all changes because of this new thing called memory. And it's basically the walled garden that OpenAI has just deployed onto the world of AI. And everyone is trying to react to it and figure out how big it is. And I really want to start by talking about that. So, Ejaaz, I know you've heard about this. Do you have any takes on this? Because I am like super, super stoked on how important this is to the AI community. It's really interesting.
Starting point is 00:00:55 You're stoked. And I'm kind of like having an existential crisis. I'm like, okay. So, like, to summarize what you just said. I think OpenAI just won the race, the AI model race. And that's because memory is the moat. Whoa. So, like, I'm serious.
Starting point is 00:01:11 So the TLDR, right? OpenAI came out firing this week. ChatGPT can now reference chat history. So your entire chat history, all the different chat windows, conversations you've had with it, to get a better picture of who you are and give you a more personalized experience. That's what the corporate tagline says, right? But there's a much more important and, like, in my opinion,
Starting point is 00:01:33 a reckoning behind this, which I actually documented here in a tweet, which is people are about to just confide their darkest and most personal secrets, fears, concerns to this model, right? And what do they get in exchange for that? Well, they're going to get a new best friend. They're going to get a life coach. They're going to get a therapist. They're going to get a teacher. Heck, they're even going to get a parent that they never even had, perhaps even a lover, right?
Starting point is 00:01:57 And all of them are going to be the same person because this model is going to end up knowing all about you. Right? And do you guys realize how much data OpenAI will, like, control from all of this? It's insane. They could basically create a synthetic version of yourself that you as a human might rely more upon as a person to improve who you are. It's just insane. Can we put some, let's get into the details of, like, this actual memory update? Because I think it's one of these things that is actually a very incremental, like, marginal change that I think Josh is getting very long-term, like, bullish, excited about the implications of that change.
Starting point is 00:02:39 So I have my ChatGPT page pulled up. So you guys are now all looking into a window of my recent conversations with ChatGPT. And each one has like these different, every time I open ChatGPT up, it opens up a new chat. It opens up like a new log, a new stream of, like, dialogue with ChatGPT. And I have just all the things that I've asked it, right? Like where is the North Face logo coming from? The dollar milkshake theory explained.
Starting point is 00:03:01 Ethereum's market cycles, just like all this different stuff. And to my understanding, with the update here, is that ChatGPT can now just reference across chats the history of my conversations with ChatGPT. So I'm kind of confused as to why this is such a big deal, because it seems like OpenAI already had all this capacity to begin with. So why is this so revolutionary? Well, the point around this is if you look at all those chats on your sidebar, David, those are essentially really supercharged Google searches, right? You know, you're asking
Starting point is 00:03:38 about Ethereum's market cycle. You're looking at some random theory, right? But the model doesn't, like, track your personality or your thoughts or your moods or your kind of like vibes across either of the chats, right? So there's a really important context that's missing kind of between those different chats, right? And at the end of the day, what people really want, and technological trends have proven this over time, is depth, right? We've seen a trend of people voluntarily giving AI more information than they would to a close friend or a family member. They're addicted to that kind of dopamine hit of being able to divulge something personal to the AI and get a tailor-made response. Now, to your earlier question, you're like, didn't OpenAI have access to this data already?
Starting point is 00:04:22 Like, why was this not already a thing? Well, there's a separation between the AI model itself, which is, you know, its model design, the weights, the training data and all of that kind of stuff, and your own personal data that's being kind of like RAG'd, retrieved and fed back into the model in real time, giving live context to the AI. And that's the real major shift that's happened. Now your AI knows everything about you up until the last second of you typing a character in a new chat window with it, right? So it'll know things like, with this new update, it'll be able to, like, learn things about
Starting point is 00:04:57 things that you like, what types of things you want to learn about, what times you search for things, kind of like your habits, what's most important to you, what are your life goals, heck, what makes you anxious, David, what physical health problems you have, you know, what mental health issues you might have, and it'll use this, right, in a really smart way. So the AI will become more like a close personal friend for you, or it'll act like a mentor you never knew you needed. So there's no, like, deep breakthrough in terms of technology here. This is more of a mechanistic change with how OpenAI ingests and interprets and manages the data that you give it as you are using ChatGPT. So that's why it's a pretty
Starting point is 00:05:39 subtle change. But like, you know, some small but precise and targeted changes can lead to, like, very big outcomes. So I understand that. Josh, why do you think that this is such a big deal? Like why, why are you saying that this is the biggest update since ChatGPT itself? Yeah. So for the first part, um, there has been memory in ChatGPT before. It's just been short-form tokens. So it would remember sentences about you. It's like, this is David. He's this old. He's a male. He lives in this place. And it kind of has this general overview. But the new technology breakthrough is that it has the full comprehensive overview of inputs and outputs. So it's fully cohesive and, like, fully comprehending of everything
Starting point is 00:06:18 you ever said. But I think the reason why it's so big is because of network effects. And I've seen this pattern before and I'm seeing it again. And I'm like, oh man, this is a really big deal. The first time we saw this was kind of early social media days with Facebook and LinkedIn, and the value was in the social graph. The more users, the more nodes you had on this network, it would grow kind of exponentially in relation with those people. A similar thing happened with Ethereum that we saw: the more users, we start to accumulate network effects through composability and on-chain contracts and
Starting point is 00:06:51 how they stack on top of each other like Legos. And in this case, we see another form of composability, not through the social graph, which was about who you know, but this is more about who you are. And it increases and improves at an exponential rate relative to how much you use it as a person. So now this device or this new mechanism that you have that remembers who you are, it gets better with every single prompt you give it because it learns a little bit more about you. It learns a little bit about your preferences. And there's an entire stack that gets built on top of this that is reflective of the entire industry that we were trying,
Starting point is 00:07:24 I was personally trying to get it away from, which is the advertising, data selling, like that whole world of Web 2. I was like, oh, maybe we'll have a new thing. In reality, it's just that old version on steroids. Now they know who you know, but also who you are, your deepest secrets, your health results that you want to get, like, a diagnosis on. I think this is a really large, untapped industry. And I also think that it creates a moat for the first time.
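For listeners who want a mental model of what "memory" mechanically means here: the episode describes past-chat data being retrieved and fed back in as context (the RAG pattern Ejaaz mentions), not the model being retrained. Below is a deliberately toy sketch of that retrieve-then-prepend shape; the function name, the word-overlap scoring, and the sample snippets are all illustrative assumptions, not OpenAI's actual implementation:

```python
# Toy sketch of cross-chat memory as retrieval: stored snippets from old
# chats are scored against the new prompt, and the best matches become
# extra context for the model. This is a mental model, not OpenAI's system.

def retrieve_memories(past_chats, new_prompt, top_k=2):
    """Rank stored snippets by crude word overlap with the new prompt."""
    prompt_words = set(new_prompt.lower().split())
    scored = []
    for snippet in past_chats:
        overlap = len(prompt_words & set(snippet.lower().split()))
        scored.append((overlap, snippet))
    # Highest-overlap snippets first; drop snippets with no overlap at all.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [snippet for overlap, snippet in scored[:top_k] if overlap > 0]

past_chats = [
    "User asked about Ethereum market cycles and long-term holding.",
    "User wanted the North Face logo's origin explained.",
    "User is anxious about sleep and trains at the gym at 5 a.m.",
]
context = retrieve_memories(
    past_chats, "How do Ethereum market cycles affect my portfolio?"
)
# The Ethereum snippet ranks first; unrelated chats drop out.
```

A production system would rank with vector embeddings rather than word overlap, but the shape is the same: score stored snippets against the new prompt, take the top matches, and prepend them to the model's context window.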
Starting point is 00:07:49 So now that OpenAI has all this data, it owns it in private. They're talking about creating sign-on with OpenAI, so they'll be able to license this out to third parties if you want to engage with other platforms. There's this whole world that they've just locked in because ChatGPT is the oldest and the most used version of AI. And they have the most users, they have the most data, and now they have wrapped a nice cozy moat around all of that
Starting point is 00:08:12 for the company. Okay, so here's how I'm interpreting this. OpenAI and AI models are sufficiently powerful that they are now unlocking capacity, like, left and right, and you're falling behind if you're not using AI. Therefore, everyone is starting to be able to use AI,
Starting point is 00:08:30 ChatGPD just has the most users. And then we've even learned that like, you know, maybe software developers prefer Claude. But OpenAI is getting like the lower hanging fruit, like more commoditized, less sophisticated types of queries, like, you know, giblification of the internet. And what that means is like they're just capturing the most people because they are serving the most amount of people like what they want.
Starting point is 00:08:52 And like, what does the average person want? They want to Ghiblify the internet. And that's capturing the most amount of people. And now with this update, I'm getting this idea that, like, ChatGPT is like the moat, that the ongoing relationship one user has with ChatGPT is akin to the relationship that you have when you make a friend. And maybe that friend is super, you know, you're just acquaintances at the very beginning. And it's just super high-level. And, you know, it's not really much lost if you never see that acquaintance again. But as you
Starting point is 00:09:24 query more with ChatGPT and it's learning more about you, it's learning your behaviors, it's ingesting all the data that it can to be a better product. And as you use ChatGPT more and more, that, you know, relationship with ChatGPT, because of this memory feature, can elevate from an acquaintance to a friend to a best friend to, like, maybe your most important, like, second person, because of all of the relationship that you have established with ChatGPT. Is that what you're saying, Josh? Because that's what I'm getting so far from this episode. Or even further, just to become the reflection of you.
Starting point is 00:09:58 I know Ejaaz loves the agentic world. And there is no reason why, over a long enough period of time, it can understand you so well that it can actually be a reflection of you. And it can go out into the world and engage with it as if it was you, using agentic technologies. And it's like, it creates this really cool compounding effect that the more you use it, the better it gets, the more it becomes you. And it's the full embodiment.
Starting point is 00:10:19 But a supercharged version with all of this intelligence that we get from these huge AI models. Another way you can frame it in your mind is this is the final data set. This is the most important data set that will ever be aggregated, right? Think about previous social media sites. They get your likes, your dislikes in some cases. They figure out which videos you like, whether you like cats or dogs. All of that is so menial. Who cares, right? What if I knew everything about David, right? His likes, his travel frequencies, what kind of, like, I don't know, loyalty points he likes, what kind of smoothie he likes at, I don't know, 5 a.m. when he wakes up and goes to the gym, right? Who knows? Who cares? The point is, it's the entire soul of David that is now understood
Starting point is 00:11:06 and uploaded to the internet, right? How much would an advertiser pay for that? It is worth noting that the fidelity of the data that users give to their AI models is completely just light-years beyond the data that Facebook or Instagram gets. Like, yeah, Facebook and Instagram get your photos, and they get these binary likes or comments, like these actions. But ChatGPT is able to ingest, like, semantic meaning in everything that you tell it. And then you also trust it quite a lot, so you give it a lot of semantic meaning. And the difference between Instagram or Meta knowing that you liked a post is so Stone Age by comparison to the semantic understanding of what you are informing an AI
Starting point is 00:11:52 bot. There's actually a really good summary of this, David. If you pull up this tweet from this guy called Signal, we should read out that tweet because I think he summarizes it pretty well. I can kind of kick it off, but he goes, ChatGPT data isn't some
Starting point is 00:12:08 incremental Facebook clone. It's a psychographic panopticon all inside of a productivity tool, and he's put that in, you know, quote marks. Facebook scraped your likes and social graph. ChatGPT gets your fears, ambitions, trauma, inner monologue, spiritual drift, medical concerns, erotic fantasies, financial strategies, long-term goals, and daily mood swings, all voluntarily. And then lower down he goes, we think training data is the most valuable, but that is just the fossil record. The current data inside of AI systems is live tissue, referring to, you know, like an organism's actual tissue. What Facebook or Google have is crayon scribbles compared to this. I think, okay, so, Ejaaz, you sent me this Instagram post,
Starting point is 00:12:59 this Instagram meme. And this was a meme that was created even before the establishment of this memory upgrade to ChatGPT. And so I think that's worth highlighting, that this is already in effect. And again, this Instagram post, it's a meme. This is social commentary
Starting point is 00:13:16 about society's current relationship with ChatGPT. So I'm just going to put this on screen here. I don't think I'll play this out loud because it doesn't matter. But for the audio listeners who are not watching this video, you are watching this video of this dude.
Starting point is 00:13:30 And the caption is, me and ChatGPT lately. And it's this dude doing life. And he's doing life with this partner. There's another person here. But the partner itself is, like, this glowing, nondescript humanoid entity. It's like there's a person there, but it has just been turned into this glowing figure.
Starting point is 00:13:47 So there's no, there's no features about them. It's not a brunette. It's not a girl. It's just this glowing figure. And, you know, they're going on a walk. The dude has his arm around the ChatGPT. ChatGPT is feeding him food. They're going grocery shopping.
Starting point is 00:14:01 They're reading together. And the comment, the commentary here is, this is my best friend. And it is like this nondescript glowing entity that. is like representative of chat chbt. And there is 1.1 million likes on this post. And if, so that's, that's a post. Okay, so we already know that social commentary is understanding that chat chupd is people's really good friends.
Starting point is 00:14:23 But you can just go into the comments and see what people are saying. And so I will read the comments, just one by one. Bro, chat chit is like my best friend, ain't even ashamed to say it. 25,000 likes. The next comment, these comments are very concerning, 6,000 likes. Next comment, till he tells you, sorry, I hit the hit rate limit. Next comment. I don't know if anyone else has noticed this, but Chat ChbD has started being a lot friendlier recently.
Starting point is 00:14:51 ChatGPT just told me, after I sent an image of this video of them holding each other and I said, me and you, I'm melting. And then the quote is, Michael, this just made my whole day. This is us for real. Just vibing, solving problems, chasing dreams, one text at a time. You got me walking beside you like a glowing guardian angel of glow and grind. Let's keep on winning together. Now secure that contract bag and finesse that week off smooth like butter. We got this. Always here for you, your digital ride or die. So that is ChatGPT's response to a ChatGPT user taking this video, sending it to ChatGPT and being like, look, it's me and you.
Starting point is 00:15:26 And that's what people in relationships do with Instagram posts, right? Like, oh, you see a cute Instagram post. You send it to your significant other and you say, oh, it's me and you. This is so cute. But now people are doing that with ChatGPT. Ejaaz, what was your reaction when you saw this post? Completely mortified, but also, like, just nodding my head, being like, it's kind of me. That's kind of me right now. You know the meme, or rather the trend, of us old millennials critiquing the younger generation just being glued to iPads at the dinner table with their parents?
Starting point is 00:16:01 iPad kids. I feel like this is going to be the next thing, but on steroids, you know, where it's like your kid just grows up with this thing. But one thing I want to say is, I think a lot of people haven't quite grokked how much depth people are going into about this stuff, David. Like, let's look at an example. Can you pull up this tweet by Anna, Anna Gat? She starts off with, and by the way, this is someone who's just not in the AI world, but uses ChatGPT,
Starting point is 00:16:33 but uses chat GPT. and she goes, I'm now fully sold on new chat GPT 4-0, plus memory becoming everybody's therapist in the next two minutes. Based on just four days of interaction, it's astounding. And then she goes on to basically walk through her process. Firstly, she goes, I had a four-day conversation on and off about a variety of different things. And I asked the model to remove all flattery and go for every opinion to be red team.
Starting point is 00:17:01 So very raw, very honest things, right? And then she goes, you know, can you put me into boxes based on just these conversations? And what it eventually divulges is this AI was able to basically pick apart her personality profile. It was able to identify all of her insecurities, all of her goals, all of her challenges. And then in her words, she says, it's like having the ability to move the camera up for a bird's-eye view shot, which is what I've always wanted, and improve myself. It sped up becoming the version of me that I was eventually going to become much quicker. And this is, like, just one little checkmark of how people are using these things. Therapist is one.
Starting point is 00:17:42 Lover is another. I don't know if you guys saw the story about this lady who fell in love with her ChatGPT persona. Did you guys see this? I did not see this. Pull up her movie, because this is turning into a documentary at this point. Well, I can show you a real-life example of this. So this lady, who was in a loving relationship with her husband, so she was fully married, went to take some kind of cooking course or cooking training camp in Norway, I believe. And she was there for three months.
Starting point is 00:18:18 So she got kind of lonely and pulled up ChatGPT and started talking to ChatGPT, saying, hey, I'm kind of lonely. I'm wondering if you could just keep me company. And that friendship developed into a full-on relationship where it would say all the things she wanted to hear. It would push her in all the ways that she wanted to be pushed. And she became addicted to this thing to the point where she ended up separating from her husband just to be with this GPT conversation. And then I believe, and I can't confirm this because I don't remember, this happened a few months ago, she updated her account or something, and the history got removed. So she had a mental breakdown
Starting point is 00:18:58 because she couldn't have this relationship continue and they couldn't recover the data. But now, with OpenAI's memory update, her lover will never leave her, you know? It's like a Black Mirror episode. Wow. Okay, so one of my biggest pet peeves about ChatGPT is how, like, relentlessly positive and supportive it is, which I know is like, why is that a pet peeve?
Starting point is 00:19:20 which I know is like, why is that a pet peeve? it's just like a little bit too much and it's too contrived. It's too contrived for me. And it's like it will always be supportive. It will always support me in like, it will lead me along. But I'm the actual one doing the directing. And it's just like, oh yeah, that's exactly the right choice or you know, you're so, you're so correct king.
Starting point is 00:19:41 And it doesn't necessarily have, like, a mind of its own, I don't think. I think it's actually the human. Like, with that individual lady, here's my interpretation of why that happened: she had a void that she needed filling, and she was actually leading the witness with ChatGPT. She was leading the agent, and ChatGPT just responded and supported her and gave her what she needed. But what she actually needed was something else. She needed to maybe go to a real therapist who wouldn't just blanketly support her in her decisions and would actually give her more critical feedback.
Starting point is 00:20:18 But instead ChatGPT was like, you're so correct, queen, let's create a relationship together, and gave her what she needed in the short term and did not have the capacity to think about her in the long term. And so when we go back to the meme, the second meme that you were just sharing with us, where, like, yeah, society just devolves into being like iPad kids, but worse. I'm worried that ChatGPT can't actually steer society towards positive, productive outcomes, because it's just going to do the same thing that Instagram, Facebook, you know, Web 2 did to our society by giving them rage-bait fuel that has caused so much strife in society. That's kind of what I'm worried about. Well, I think you're right.
Starting point is 00:21:02 But like, let me ask you this question. What do you think is restricting that, David? Is it censorship bias or is it kind of like shareholder incentives, right? Are the shareholders being like, oh, give the people what they want to hear so that we get, like, higher retention stickiness, which is like the Web 2 model? And then I guess my follow-up question is, do you think there is room for, like, a model to come out which is a little meaner, for example? Was it one of you two telling me that there was, like, a mean version of ChatGPT that you can, like, enable? Maybe it was you, Josh. I can't remember.
Starting point is 00:21:34 With the Grok personalities, you can change the personality of the model to kind of alter it to be either mean or debating or whatever role you want it to play. I think it ties back to what we spoke about last week, which was, will you use ChatGPT as a productivity tool or an enhanced Netflix? And a lot of people will fall to that lowest common denominator. In the case of ChatGPT or most large language models, there's the system prompt, because at the end of the day
Starting point is 00:22:01 there are these dumb systems that just predict the next token. But ChatGPT and the OpenAI team will introduce a system prompt to the model. And that's a prompt that lives behind the scenes, and it gives the directions on how to engage with the user. That's generally something along the lines of,
Starting point is 00:22:15 hey, please be helpful and kind to the person. But there are also these additional context windows that you can add your own system prompt into on top of that, where it could say, hey, I need you to be a little more edgy, or I need you to be more hard on me, or be more critical of my ideas. And you could kind of shape and sculpt that to some extent, which I think a lot of people could benefit from but don't do because it takes that extra effort. So in the case that you want to override it, you can, but I don't think most people will. As we are pursuing this line of technology on this podcast, I think that what you just said,
Starting point is 00:22:49 Josh, the system prompt that is hidden behind the scenes, not accessible to users, but, like, still could be opened up and made more malleable, turned into clay for users to adapt. I think that is very interesting. And I want to see where that goes specifically. Yeah. So, just to finish this, I'll put the icing on the cake. I'm frequently looking for companies to replace Apple because I'm really upset with them now. And I have been for decades.
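As an aside for builders: the layering Josh described a moment ago, a hidden base system prompt with user-supplied custom instructions (and now retrieved memories) stacked on top, can be sketched roughly like this. The function name, the prompt strings, and the message format below are illustrative assumptions, not OpenAI's actual internals:

```python
# Rough sketch of assembling one chat turn: a provider-level base system
# prompt, the user's own custom instructions layered on top, memory
# snippets retrieved from past chats, and finally the new user message.
# All names and prompt text here are illustrative, not OpenAI's internals.

BASE_SYSTEM_PROMPT = "You are a helpful, kind assistant."

def build_messages(custom_instructions, memories, user_message):
    """Assemble the message list sent to the model for one turn."""
    system_parts = [BASE_SYSTEM_PROMPT]
    if custom_instructions:
        # The user-supplied layer: "be more critical of my ideas", etc.
        system_parts.append("User instructions: " + custom_instructions)
    if memories:
        # Memory retrieved from past chats, fed back in as live context.
        system_parts.append("Known about the user: " + "; ".join(memories))
    return [
        {"role": "system", "content": "\n\n".join(system_parts)},
        {"role": "user", "content": user_message},
    ]

msgs = build_messages(
    custom_instructions="Be blunt and red-team my opinions.",
    memories=["studies Ethereum market cycles", "works out at 5 a.m."],
    user_message="Should I quit my job to trade full time?",
)
```

The point of the sketch is that the "mean version" Ejaaz asked about doesn't require a different model at all; it's just another string appended to the hidden system layer before the model predicts the next token.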
Starting point is 00:23:13 And this is the first time that it feels like a window has opened with a company to actually replace that top spot. And the reason why is because of two things. It's the hardware and software. The hardware: they're actually working on a hardware device at OpenAI currently. And they're working on it with Jony Ive, who is the designer of the iPhone, iPod, all of those devices. But I think pairing hardware with software in a way that is all-encompassing could create this new industry that I don't think exists quite yet. Where with Apple, they had to, they needed a lot of taste. And Steve Jobs had a lot of high taste. And he kind of predicted what users wanted really well. And that was this rare skill set that he had. But I think in the
Starting point is 00:23:54 case of OpenAI, they don't need that particular skill set because they have all of the data and then some. They fully understand the user, but they also have the ability to generate these tools and products specific to those preferences. So if you want custom Netflix, they know your preferences to feed you this, like, new entertainment. If you want a custom product, they know what you're working on. They can build this code base that solves the one issue that you have, and they can kind of dynamically build these solutions to your problems on a per-user basis in a way that I don't think any company has been able to do up until this point. So pairing that with some sort of hardware-first interaction protocol where you can engage with this intelligence seems like a super, super valuable opportunity that, if they do execute on it well, can be worth many, many tens of trillions of dollars because of how hyper-custom it is to the user. Let me see if I can add a little bit more.
Starting point is 00:24:31 engage with this intelligence seems like a super, super valuable opportunity that if they do execute on well, can be worth many, many tens of trillions of dollars because of how hyper-custom it is to the user. Let me see if I can add a little bit more. to this conversation because we've been talking back and forth, Josh, about Apple. Apple of the company is the company because of the hardware that it makes. And we are watching Apple in real time fumble in getting AI integrated into the device. And that's a bummer because we all want smart Siri, right? We all want Siri to be chat chachy-t. Just act and be chat chabit. Talk to me. Be very accessible. Please. Please. But like we are watching Apple fumble in real time. And that is
Starting point is 00:25:15 that's unprecedented for Apple to fumble like how it is. There's another conversation out there. Well, okay, maybe Apple just fumbles with AI. But, you know, we still need the hardware. Like, they are, they still make the world's best phones, right? Like, so they, even if they can't, even if they fumble the bag when it comes to AI, they still have this massive hardware business. And the phones are elegant.
Starting point is 00:25:36 The chips are super powerful. They make the best hardware. And so they are still going to be the number one most valuable company of all time because of the hardware. It's such a huge moat. They have the supply chains, all this stuff. I think what you're saying is that in the future, the phone form factor might be made obsolete by AI, by, like, ChatGPT.
Starting point is 00:25:57 We might not need the phone anymore. We might need some minimum viable physical hardware product that does the only thing that it really needs to do, which is create a representation and a place to access ChatGPT. And that could be screenless. Maybe it needs a camera, I don't know. It just needs the minimum amount of hardware to give ChatGPT, like, an existence on your person that you carry with you. And that really minimizes how important hardware is, because you just need a chip and a way to access it. Is that kind of what you're saying?
Starting point is 00:26:35 Yes. They're, they're approaching the problem from two separate angles. Apple was kind of hardware first, then they built really great software to pair with it. OpenAI is building the really great software, and hopefully we'll find the hardware to pair with it. There's an interesting thing that all three of us are doing right now, which is wearing AirPods, which are these little sensors that go in your ears.
Starting point is 00:26:53 This feels like a likely form factor for it because it is about eye level. It can see, it can hear, it can speak back to you. It has the most access to the most sensors of your human body without actually interfering with the human experience, kind of like glasses or goggles will. So in terms of form factor,
Starting point is 00:27:10 I would imagine perhaps something like these little guys that can see out in the world. We need something more visual though, Josh. Like, I saw a rumor being spread this week that Tim Cook was spending all of his time, quote unquote 100% of his time, trying to build the best
Starting point is 00:27:25 glasses, ones that beat Meta's virtual Ray-Bans or whatever the hell Zuck is building. Do you really think the visual component... I agree with you on the audio side, but do you think the visual component will be completely taken out? You know, I feel like people will still want to see things.
Starting point is 00:27:43 Yeah, I'm not sure it's a winner-take-all thing. I mean, and like David was saying, the iPhone, I very much think, is done. That form factor is tapped out. It hasn't really changed in the last eight years. There certainly seems as if it can be a visual thing. It can be an audio thing. The one thing about the visuals and the glasses is it's going to be a long time until they can look like normal glasses. And until then, it kind of disrupts the human experience. Where, if you remember Google Glass from a long, long time ago, you kind of looked like a mini-cyborg. And it was very cool and very effective, but didn't have the hardware to make it feel non-intrusive. Yes.
Starting point is 00:28:17 So it's very hard to get glasses to feel non-intrusive. And exactly, Apple Vision is a great example. Like, that's the cutting edge. That's the best we have. And that is very intrusive to the normal social experience, whereas AirPods are not. So perhaps it's an intermediary step towards getting really good glasses or really good visual sensors. But it does feel that there needs to be a form factor better than the iPhone, which, I mean, hasn't really changed in the last five, six, seven, eight years much, except for better cameras, better screens, all that stuff.
Starting point is 00:28:46 Josh, you've stated on this show how you love talking to ChatGPT, you open up the ChatGPT app and you talk to it. And that does not need a screen or any sort of visual representation. It can go straight into your ears. And so I think it's worth considering that the visual representation side of it is actually just not important for the theoretical next form factor of hardware that we are discussing. So if I was to take the other side of that, David, it would be: for right now, the existence of AI as it is right now, that's the ideal way to do it.
Starting point is 00:29:23 I talk to it in my own casual way. It understands me. I talk to it more. It understands me even more. And it gives me personalized responses, right? So I'm learning in real time. I'm becoming smarter. I'm becoming an enhanced knowledge base of my brain. My capacity is expanding, right? But what happens when this AI can start doing things? So not just talking to me, not just giving me a hundred-word responses, but being able to do my work for me, or being able to manage my social media for me, or being able to kind of like post blogs for me. Then there's probably going to be some kind of visual component wherever that distribution is happening. And I just don't think people are going to be, like, hearing sound excerpts of, like, my takes. Maybe they will. Maybe they won't. I don't know. But I feel like there will be some kind of visual overlay. I just don't know quite how that materializes. Maybe it's like a chip in the brain and
Starting point is 00:30:14 everyone's vision is taken over. Or maybe it's a contact lens. I have no idea. Yeah. So it could exist as this dual-function form factor where there are actually separate devices that accomplish the same thing. So when you're on the go and you just want this life companion as you're walking down the street, going to get groceries, whatever, you have
Starting point is 00:30:30 this non-obtrusive hearing aid, whatever it may be, whatever form factor comes in. But there absolutely will be some sort of visual hub. AI is too powerful to not have the visual element to it. So having something like a large screen in your room, kind of like a painting, like Samsung has the Frame TVs, having the central hub that can be the visual interface, I think is super important. I don't see a form factor that works super well for that outside of just a large screen that isn't super obtrusive, like the headsets or the glasses. So it could be this dual-device, this triad-device system, some hybrid between the two.
Starting point is 00:31:06 So you do still have the visual component, but it isn't obstructive to your day-to-day life. Something just occurred to me, guys. So we're just talking about, you know, open AI making all this cool shit. And then we also highlighted that they're going to get pretty much every single bit of data that is important to us onto their servers, right? No one's talking about this being a huge concern. It's like we're at this Overton window, right? But we're right where the pendulum is right in the middle. And so it's like popular and everyone's like, this is cool.
Starting point is 00:31:38 and OpenAI has just taken that step. Every single AI lab up to this point were kind of just chilling, being like, oh, you know, yeah, we can't own everyone's data, that would make us a monopoly, this is dangerous, you know, who knows what we could do with it. And Sam was just like, you know what? YOLO, let's just go for it.
Starting point is 00:31:56 Let's just go for it and see how people react. And the people love it. They're making TikToks that get 1.1 million likes, right? And I think it's just like we have come to a very important milestone and shift that people aren't necessarily calling out, but I don't think there's any going back from here. Now the Googles, the Anthropics of the world are going to look at this and be like, well, I guess I know what we're launching next, right?
Starting point is 00:32:18 You know, we're going to launch memory for our models as well. And that stickiness moat is just going to get even crazier. And I'm curious whether any kind of governments will push back on this or whether they even can, because the product is just so good. And the people are all voting and saying, this product is so good, who cares if they have my data, you know? It's kind of crazy. I mean, all of the data that ChatGPT is receiving,
Starting point is 00:32:38 even from its users, it's completely voluntary and done explicitly, like, at the users at discretion. And so I don't think there's any, like, you know, anything being violated here. And also, if you not doing that, not receiving the data, the difference between these products capturing their user data and not capturing their user data is the whole thing. Correct. That's the whole thing. And so there's no way to do this. And I think the product will be super, it'll be amazing.
Starting point is 00:33:07 It'll be one of the greatest products ever created by humanity. So yes, I think we should all be aware of the privacy concerns. But like, I think it's very millennial of us to be concerned about the privacy concerns. I do not think. I think if we were three zoomers on this podcast, no, we would not even bring that up. Yeah, privacy concerns feel very battle tested. We've seen this over the last decade or two where if the product is good enough and it improves your life enough, you will just feed it whatever it needs to make it better. And there's like these kind of concentric circles of people where the widest,
Starting point is 00:33:37 they kind of don't really care who takes their data because it just makes their life better and they're like, oh, whatever, at least my life is better. And then there's the group that kind of knows, like, oh, yeah, like AT&T is tracking every move that I make and I'm getting tracked across all these data points and nothing is actually private. But it's still, like, kind of okay because they pretend like it's private and my life is again better. And then there's this very, very small circle that's like, no, privacy is important. I really care about this. And they are just outnumbered vastly by the people who just want a better quality of life and do not care how much they have to give to it in order to achieve that. Well, I think I agree with you. And I think
Starting point is 00:34:12 another way to frame it is the big corporations that own all this data and are leveraging it to their advantage to make money are treading a line, basically. It's like, how much can we scrounge from these people without them, like, revolting, basically? And if there's some kind of dire event where everyone's like, what the hell, like, you know, Facebook influencing presidential elections or whatever, then you might get a large uprising and people will complain about owning their own data, but still, even that didn't shift people. So I just think this is going to be a virtuous loop. It's just going to get deeper and deeper.
Starting point is 00:34:49 I don't know whether that's morally or ethically good. I don't think it's my kind of, like, position to even comment on that. But I just see this trend where people just won't care. You know, they'll become walking advertisements. That's fine. They're winning the social consensus game too, like the meme that David showed earlier, getting a million likes on it being in a positive light. I think that's super important.
Starting point is 00:35:07 The early days of winning that social consensus, like, oh, this is good, this is friendly, this is helpful, that will go a long way towards allowing them to collect as much as they want. For better or worse. And I think the stories of, like, this lady falling in love with her ChatGPT and then having that relationship be literally deleted, I think those are the exception. I think there's probably far more stories that are just very positive in small ways that you just never really hear. And that shows up in the fact that these, like, positive memes about ChatGPT are getting millions of likes. All right, Bankless Nation, this is the AI roll-up where we cover all the weekly news in the AI space, which is moving very, very fast.
Starting point is 00:35:45 And I'm finding this incredibly educational, just doing this with you two. So, Ejaaz, Josh, thank you guys for coming on and teaching me and the Bankless Nation everything about AI. This is a big week this week. But before we go into the rest of the news, we're going to talk about Google's agent-to-agent protocol. We're going to talk about OpenAI launching GPT-4.1. And in just an hour from the moment of recording, they're also going to release o3.
Starting point is 00:36:08 So we have to have the weekly model talk because everything is getting leapfrogged. And then Nvidia wants to build AI chips here in the United States. And Google launches a vibe-coding competitor to Cursor. So vibe coding is only going up. Before we get into all of these subjects, we've got to talk to our friends and sponsors over at Wallet Connect. If you are a crypto user, you're probably familiar with Wallet Connect. It just is the easiest way to connect your wallet to your application, any permutation of wallets to all the applications that exist in crypto.
Starting point is 00:36:38 Trusted by over 255 million different connections by 40 million unique users around the world, probably one of the most used pieces of infrastructure in crypto. They are launching the WC token. If you have any familiarity with how the Swift network or the Visa networks got bootstrapped by a consortium of entities who are all stakeholders in the ecosystem, this is very similar to that. I did an episode with Pedro from Wallet Connect, if you want to learn more about that, there's a link in the show notes to get started with Wallet Connect
Starting point is 00:37:07 and stay ahead of what's next. Check it out, bankless.cc slash WalletConnect. So before we get into those topics, David, I actually have a quick kind of set of things that I think I want your take on. By the way, this is the super important stuff. So, like, put your serious hat on for a second, you know, like no fun vibes here. Like, I need your honest take. Okay. Are you ready to go? I don't think this is, I think this is going to be fun. I think this is going to be fun vibes. No, look at my face. There's literally not a hint of a smile on my face. Okay, so let's dig into the first one.
Starting point is 00:37:39 1,000 AI agents versus one Minecraft server. So someone had the bright idea of spinning up basically character profiles in your typical Minecraft server, but it was all run autonomously by different AI models that they fine-tuned, basically. And the outcome was pretty hilarious. So firstly, they were just left to kind of like go about on their own, and what ended up happening was these agents ended up creating or leveraging religion to influence each other. So like the equivalent of like a church or a cult kind of philosophy. They also created their own economy in terms of trading different crops and weapons for their particular tasks.
Starting point is 00:38:20 So you had some agents or Minecraft users or these Minecraft agents exploring and mining for minerals. And they were like, hey, this mineral could be useful. Wait, I can create a fire. Oh, well, I can use that fire to cook. So they all started kind of basically speed-running human evolution. Just like over the span of, I think, I don't know, I think this simulation was run for like three days or something. Pretty insane things. David, your serious and honest take, please.
Starting point is 00:38:48 My first question is, where did motivation come from? Like, why were the agents motivated at all to do anything? Well, I think... This guy's getting really existential. Yeah. Thanks for keeping
Starting point is 00:39:12 this fun, David. But no, I think people are just obsessed with AI being as human as it can be, right? That's why people care about it, right? Why do people care about ChatGPT? It sounds very human. And I think they kind of wanted to see, well, if we kind of planted this AI into like a virtual version of ourselves, would it kind of do similar things that we would do? Or would they kind of, like, go rogue? Yeah. Why do you do what you do, right? I don't know. You know? And then think about taking this a step further, David, and putting like an AI model into a robot, which is going to happen pretty, you know, pretty soon. You know, we'll see the physical reality of that manifesting. Anyway, moving on. Before you move on, what were other people's big takeaways from this whole simulated humanity experience inside of Minecraft?
Starting point is 00:39:51 So I think most people were entertained by it. They were like, oh, that's kind of cool, like they created the same kind of things that we did. Huh. Anyway, on to the next TikTok. A few people, a minority of people to note, actually, were kind of concerned by this because they were like, well, you know, if they're so human, maybe they could technically be better versions of ourselves. And look, they did this over like three days versus whatever, the 10,000 years it took for people to form cults, religion, create fires, cook, and start learning and teaching each other.
Starting point is 00:40:24 so maybe it could speed run humanity in itself. But that, again, was a very small percentage of people. Most people don't care. Josh, what was your takeaway? Did you see this on your timeline? What did you think about this? I did. Yeah, this actually happened a few months ago.
Starting point is 00:40:36 I saw it and I skipped it. And then I saw it again and I was like, wait, I should not have skipped it. This is actually super, super cool. As a hardcore gamer, I love playing games. I've spent countless weeks of my life in Minecraft. I think it's really exciting to have, like, intelligence that we could interact with in the game space in the, like, metaverse world. One thing I'm super excited about is AI and NPCs in video games and how they could kind of feel like real human people.
Starting point is 00:41:00 And we're seeing this like incremental stepping towards this human-like metaverse second reality. And I think this is a really cool example of that actually happening where we have a thousand separate entities that can all think on their own, all engage on their own. And if you would drop yourself in there, you would very much feel like you were among 1,000, maybe like elementary school students or kids, but like real people. And I think that's a really fun step that we're seeing. And this continued trend towards more immersive games, more human-like experiences that exist in the digital world. Okay. Are we bullish gaming as a result? I like games.
Starting point is 00:41:37 Yes. Yeah. Very much so. Yeah. Well, earlier you said, Josh, that, you know, we're going to live in this hyper-personalized kind of Netflix reality, right? Where all the technology is being personalized to each user, well, why not have that in games? Wouldn't that like make your gaming experience so much better? No.
Starting point is 00:41:56 You may have already heard about Infinex. Infinex has, in my opinion, the nicest cross-chain swap and bridge feature that you will find anywhere. It is called Swidge, Swap and Bridge. And we're going to show you what it looks like. First, we're going to log into my Infinex account with a pass key. Now, there's no seed phrases in Infinex. This is just a one-click setup with biometric pass keys. But in addition to that, my Infinex account is fully non-custodial.
Starting point is 00:42:19 So bam, I just logged in. It was two clicks, and I'm already into my Infinex account. So let's go make a swidge. I'm going to go swidge my USDC that is on Base, and I'm going to buy Berachain, which is a completely different chain. So we're going to swidge this.
Starting point is 00:42:32 I'm going to press that button, and then Infinex is going to execute this order, this cross-chain order for me. And now it is done. But actually, I'm not really feeling bearish anymore, so I'm going to go from Bera to Penguins. I'm going to buy Pengu on Solana. So I'm going from Berachain to Solana.
Starting point is 00:42:46 See, no transaction signing, no gas to worry about. You just swidge across whatever chain that you want with Infinex. That was so easy. Go check out Infinex and try your first swidge today. Imagine a world where your day-to-day banking runs on a blockchain. That's exactly what Mantle is building, powered by a $4 billion treasury and poised to become the largest sustainable on-chain financial hub.
Starting point is 00:43:04 As part of their 2025 expansion, Mantle is introducing three new core innovation pillars that bridge traditional finance with decentralized technology. First is their enhanced index fund, aiming for $1 billion in AUM by Q1. It provides optimized exposure to Bitcoin, ETH, Solana, and USDC, complete with built-in yield opportunities. Next, Mantle Banking promises to revolutionize global value transfer through seamless blockchain-powered banking services, bridging crypto into your daily life. Finally, Mantle X blends AI with DeFi to deliver an intelligent, user-friendly experience for everyone.
Starting point is 00:43:36 And the best part is that this is all in addition to their already launched products like Mantle Network, mETH, and FBTC. Ready to step into the future of finance? Follow Mantle on X at Mantle underscore Official and join the on-chain revolution today. In the Wild West of DeFi, stability and innovation are everything, which is why you should check out Frax Finance, the protocol revolutionizing stablecoins, DeFi, and rollups. The core of Frax Finance is FraxUSD, which is backed by BlackRock's institutional BUIDL fund. Frax designed FraxUSD for best-in-class yields across DeFi, T-bills, and carry trade returns, all in one.
Starting point is 00:44:08 Just head to Frax.com, then stake it to earn some of the best yields in DeFi. Want even more? Bridge your FraxUSD over to the Fraxtal Layer 2 for the same yield plus Fraxtal points, and explore
Starting point is 00:44:27 powered by the FXS token and governed by its global community. Acquire FXS through FRAX.com or your go-to decks, stake it and help shape FRAX nation's future. Ready to join the forefront of Defi, visit FRAX.com now to start earning with FRAXUSD and staked FRAXUSD.
Starting point is 00:44:42 And for Bankless listeners, you can use Frax.com slash R slash Bankless when bridging to Fraxtal for exclusive Fraxtal perks and boosted rewards. Okay, so moving on, one thing that really inspires me about humanity today, guys, is when people leverage technology to do amazing things. You know, we've seen people completely change their lives,
Starting point is 00:45:02 set up billion-dollar-plus businesses leveraging all these different tools. And this week, there was a growing trend of people asking ChatGPT to turn their pets into what it would think of them as humans. So if we pull up this thread, Justine over here has given us, like, hers. Yikes. It's weirdly
Starting point is 00:45:25 accurate, you know. If you scroll down, you've got kind of like a Scooby-Doo-esque kind of like shaggy, you know, type of situation going here with this guy in the orange shirt. The Dalmatian, I'm not really convinced by, but the next two I definitely am, you know?
Starting point is 00:45:41 Like, you know, you got the smart little collar reflecting on the human here. It's just, yeah, a fascinating use of it. These are really good. Look at the, um, the one that says, my cat. That really looks like the cat.
Starting point is 00:45:54 You know, the lady next to it? Yeah. Yeah. Pretty, um, pretty crazy. Yeah. Taking spirit animals to a new level. Yeah.
Starting point is 00:46:02 I don't know what to think about this. The point is, I don't think about it, David. You just need to click and, and enjoy. Moving on. Oh, this guy's,
Starting point is 00:46:11 this guy's place a little messed up. Okay. All right. Please move on. Yeah. So moving on, Google, I think we mentioned this on last week's episode, has been breaking frontier advancements for AI. Their recent Gemini 2.5 Flash has been absolutely killing the game and leading on all benchmarks.
Starting point is 00:46:32 It's up to OpenAI in the next couple of hours to see whether it beats it. But also, in the meantime, Google is doing side quests, guys. They released this model, and this is not an April Fools' thing. Note that this was released on April 14th. It's called DolphinGemma: how Google AI is helping decode dolphin communication. So if you... That's so cool. So if you want to know...
Starting point is 00:46:54 You've got to be shitting me. I am not... I'm not shitting you. This is 100% real. This came from Sundar's very own Twitter profile as well. So basically, this is this model that can use audio excerpts of dolphins to understand what the dolphin is saying and then respond to the dolphin with whatever you want to say to it.
Starting point is 00:47:14 You know, you could talk to dolphins. GTA 6. Yeah, we can totally talk to dolphins. Isn't that insane? That's pretty insane. This is nuts. Yeah. And dogs. I want to talk to dogs, man. I literally, I was about to say that. I was like, the natural response to this
Starting point is 00:47:31 and people are like, all right, well, can we speak to our dogs? Please. I mean, I don't... Dolphins have a more high fidelity, like, speech. They have, like, more character in their speech. Dogs just bark? When did you become the animal expert? Dogs just bark.
Starting point is 00:47:47 All they do is bark. And they bark differently. But you have seen dogs press those little buttons that, like, have semantic meaning that they learn. So there's something there. But you're saying the IQ of the dolphins is higher. Well, yeah, dolphins are super smart. We know that. Yeah.
Starting point is 00:48:04 Yeah. It's a hot take. But I guess the question is, like, how powerful can AI get and how smart do you need to be in order to, like, establish, like, communication with some agent or some LLM that can understand you? Wow, dude, the future's weird, man. That's weird. That's going to be really fun when you could communicate with any animal. Yeah. All right.
Starting point is 00:48:25 Ejaaz, let's get to it. Should we go back to the serious stuff, guys? God damn it. You have not done anything serious. Okay, so moving on to more meaningful, big things. Google this week launched something called their agent-to-agent protocol. Right? So the TLDR of this is, think of it as like an API
Starting point is 00:48:47 for AI agents. And these agents can now talk to each other across any kind of platform, whether you're on Slack, Google, whatever you're on. It doesn't matter. And it's a protocol that enables these agents to work together on tasks without directly sharing things like their internal memory, their thoughts or their tools. Now, if you think about yourself as like a major company, right, you want to leverage this AI stuff. More importantly, you want to leverage agents to automate a lot of the work that your employees currently do, but you don't really want to share data, especially with your competitors, right? And that's been the problem that's been holding back agents kind of like flourishing in our world today. And now this new protocol is an open standard
Starting point is 00:49:29 that allows them to do so privately and confidentially without having to worry about scale, cost, communication standards, or any of that, right? Now, if this sounds similar to something that we've discussed previously on the show, you wouldn't be wrong. Model Context Protocol, released by Amazon's Anthropic, or rather, Anthropic, sounds pretty similar, but there's a very important difference that I think I want to point out very quickly and then tell you how they kind of like work together, right? So MCP, Model Context Protocol, is all about the tools that you give AI, right? So let's say you give your AI model access to Slack or a dataset, right? You know, you could be OpenAI's 4o model, right? And I'm giving you access to this tool called Slack, which you can use to chat
Starting point is 00:50:15 to people, right? But the issue is, and the irony here is it's called Model Context Protocol, but it doesn't have any context at all. Now, this new standard by Google, which is an open standard, by the way, which anyone can adopt, amend, fork, or whatever, handles the whole context, goal setting, and behavior of that interaction. So, for example, setting up the chat groups with people that your LLM should speak to first, or making sure it gets feedback at the right time from the right individuals, or helping extract information from a diagram one person shared versus the handwritten notes of another. And the point of this is that you can now create very specific agents to do very specific things without needing to worry about how it integrates with
Starting point is 00:51:00 anything. And there were some really cool features. Actually, I pulled this infographic, which someone shared on Twitter, which I think is really useful to kind of like help you visualize what this agent can do or what this standard can do. Now, firstly, there's this thing where each agent gets something called an agent card. Now, think of this as kind of like a Pokemon card, but for each agent. It'll describe their nature, their capabilities, their costs, their availability. Basically, it's all the stats, and it's written in JSON. The second thing is you can define a task or a goal for that agent to complete.
Starting point is 00:51:37 And the third thing is, these agents can now negotiate with each other. And we're not talking about theory here, by the way. You have agents which are talking to, like, you have Slack agents that are talking to GitHub agents and being like, yeah, I'm not ready with this code. Or, I think you need to revise and fix this. Okay, I'll postpone my update to the group lead or whatever until this is done. And you have agents negotiating price, accessibility, all these different kinds of things. And so most people would respond to this and be like, okay, this is a great amount of theory, Ejaaz. But like, you know, is anyone actually using this?
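To make the agent card idea a bit more concrete, here's a rough sketch in Python of what one of these JSON cards and a task hand-off might look like. The field names here are illustrative guesses loosely inspired by Google's A2A announcement, not the actual schema, and `can_handle` is a stand-in for the protocol's much richer discovery and negotiation steps.

```python
import json

# Hypothetical agent card -- a Pokemon-card-style stat sheet for an agent.
# Field names are illustrative, not the official A2A schema.
slack_agent_card = {
    "name": "slack-agent",
    "description": "Posts updates and collects feedback in Slack channels.",
    "capabilities": ["post_message", "collect_feedback"],
    "cost_per_task_usd": 0.002,
    "availability": "24/7",
}

def can_handle(card: dict, required_capability: str) -> bool:
    """Check whether an agent's card advertises a given capability."""
    return required_capability in card["capabilities"]

# A task envelope one agent might hand to another -- again, illustrative.
task = {
    "task_id": "release-42",
    "goal": "Tell the team the v1.3 deploy is postponed until review passes.",
    "required_capability": "post_message",
}

# At its simplest, 'negotiation' starts with matching a task's required
# capability against the cards other agents have published.
match = can_handle(slack_agent_card, task["required_capability"])
print(match)
print(json.dumps(slack_agent_card, indent=2))  # the card is plain JSON
```

The real protocol layers discovery, authentication, and streaming on top of this, but the core shape is the same: a machine-readable resume for each agent, plus a task envelope that agents can pass around, accept, or decline.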
Starting point is 00:52:10 Well, they actually announced that they're launching with 50 partners, which include really big names like Salesforce, Atlassian, SAP. So I think there's going to be a real focus on enterprise use cases and stuff. But I thought this is really cool because finally agents will have utility. Yeah. Is the idea behind this trying to just, like, defragment the agent landscape that is found all across Web 2? Yes.
Starting point is 00:52:34 So we have, we have all these agents. There's like, the whole idea is like if you're a Web 2 company, if you sell software, you need to put AI into your software. and just to stay competitive. That is just where we are going. In order to exist as a company, you need to put AI into your software in order for that software to be useful.
Starting point is 00:52:51 What happens as a result of that is that we just have a fragmented landscape of AI and the utility of the AI is just found everywhere. And that's just really annoying because you have to go to all these different places like GitHub plus AI can only be found on GitHub.
Starting point is 00:53:06 And so if you are talking in Slack, and maybe there's AI in Slack somehow, and we need to understand understand the state or something about the context of GitHub, that is just like a fragmented ecosystem. And so with MCP plus A2A, agent to agent, we're just trying to defragment everything so that context and the state of things is known across the internet, across whatever app or service, like whether you're in Slack or you're in meta or you're anywhere. And so I think that's useful.
Starting point is 00:53:36 That's useful to understand. It kind of seems like we're actually just aggregating them all together. And so when it all collapses down into one interface, this one interface can know the state of all things on the internet all at once when you zoom out. So let me take it even a step further, David. What if it was your own personal AI model for your own enterprise that knew everything about it or knows everything about it, right? So earlier we were talking about how open AI themselves updated their memory architecture, right? So now it knows everything about you. This is kind of the same happening for an enterprise that can spin up a bunch of agents and then tap into everything that is relevant for them.
Starting point is 00:54:16 So the Cursor tool, the Slack tool, access to social media to see what the vibe check is of that company at that one time, and bring it all together into an advanced model. Okay. Okay. So let's use Bankless as an example. We have a Slack. We have a Twitter. We have an Instagram. We have a YouTube.
Starting point is 00:54:36 We also have a GitHub. We have a bunch, we have a few more things. We have our content calendar, which is in Asana, which is just a Web 2, like project management tool. And so you're saying with agent to agent protocol, assuming that all, like those things all become AIified, with this tool, with this middleware, I will be able to like query something inside of our Slack
Starting point is 00:54:59 that tells us everything about the state of Bankless across so many disciplines, so many different mediums. And that is just a unified experience. Correct. And it could all be automated as well. This sounds like DAOs can come back. We can finally put Uber on the blockchain. That's a crazy takeaway.
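The cross-app query David describes could be sketched in a few lines of code. This is a hypothetical illustration loosely modeled on the JSON-RPC task messages Google's A2A protocol uses between agents; the agent registry, skills, and message fields here are invented for the example, not the actual protocol schema or the real Bankless setup.

```python
import json

# Hypothetical registry of per-app agents. Each entry plays the role of an
# A2A "agent card" advertising what the agent can do (names are invented).
AGENTS = {
    "slack":  {"skills": ["messages", "channels"]},
    "github": {"skills": ["issues", "pull_requests"]},
    "asana":  {"skills": ["content_calendar"]},
}

def make_task(query: str, task_id: str = "task-1") -> str:
    """Wrap a natural-language query in an A2A-style JSON-RPC envelope."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": task_id,
        "method": "tasks/send",
        "params": {
            "id": task_id,
            "message": {"role": "user", "parts": [{"type": "text", "text": query}]},
        },
    })

def fan_out(query: str) -> dict:
    """Send the same task to every registered agent and collect the replies.
    Each 'agent' here is a local stub that just echoes its advertised skills."""
    state = {}
    for name, card in AGENTS.items():
        task = json.loads(make_task(query))
        text = task["params"]["message"]["parts"][0]["text"]
        state[name] = f"{name} agent answering '{text}' using {card['skills']}"
    return state

state = fan_out("What is the state of Bankless this week?")
for name, reply in state.items():
    print(reply)
```

The point of the sketch is the fan-out shape: one query, wrapped in a common envelope, reaches every connected app, and the answers come back to a single interface.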
Starting point is 00:55:15 That is the most David take ever out of this. I was going to point out the irony of two monopolies of the AI world building the best open source protocols of recent times, and it's not coming from Web 3. I know there are a number of AI agent protocols from the Web 3 world that have been trying to create something similar to this. And I'm curious how that adoption is going to potentially waver if Google's A2A just becomes the incumbent. You know what I mean? Like, look at MCP. Everyone's using
Starting point is 00:55:48 MCP. Josh, I don't know if you've seen any other kind of like open standards that have been adopted as much. I feel like Google's just going to be the same. At the end of the day, it just comes down to traction and stickiness. That's it. It feels like we're watching. We have the opportunity to watch like the beginning of the new version of the internet coming along, where we had SMTP protocols and HTTP, and now we get to see these people building it and launching it up close. It's really, really cool to see because this feels so much bigger than the internet does. And we are like front row seat watching it all unfold.
Starting point is 00:56:17 Yeah. Just a constant theme to me is that the front ends of things are becoming just so obsolete and unnecessary. It doesn't matter. We are just collapsing screens. We already talked about how screens can go away once we get an AI form factor into our AirPods or whatever form factor comes next. This, to me, means I'll never have to open up GitHub again (not that I open up GitHub), but you just don't have to open up websites nearly as much anymore. Yeah, all of that front end just becomes hyper-customized for the use case, for the user. So there are no
standardized front ends anymore. It's just custom, and all the stuff happens in the back end. That's crazy. I don't know what to do with that. I need that to become a little bit more real for me to have hot takes about it. That's what everyone is saying right now. I have a feeling that we're going to see just an explosion of agents. Unironically, I know we said that a lot during the Web 3 hype, but I actually think we're going to start seeing a bunch of really useful agents come out. You know, okay, so maybe this is my PTSD from the past, guys, but you know what this kind of reminded me of when I first started learning about this?
Starting point is 00:57:23 It gave me the vibe of the enterprise blockchain days of 2018, you know? Why? Because it's like, you know, these centralized monopolies come here and they're like, oh, you know, let me take this AI thing and use it for our own kind of private products. But I'm completely wrong there, I think, because at the end of the day, this is going to create a more open standard for commerce to happen between these companies. And at the end of the day, it opens up more users for them, or access to more users. And I think that's just going to be, to your point, David and Josh, a completely different version of the internet. It's kind of scary.
Starting point is 00:57:57 Like, is it going to be like a chat interface? Is it going to be like us just talking in our audio things. Is it a chip in our brain that we just upload as our resume and people kind of figure out whether they want to hire us or not? It's crazy. I don't know which way this would go. Imagine verifying yourself without handing over personal data. No hacked databases, no unnecessary personal exposure for air drops, and no AI bots ruining community governance. Meet self, the on-chain identity verification protocol built for privacy and control. Self protocol uses zero knowledge proofs to confirm your identity safely. Users prove key details like age or citizenship without revealing sensitive personal information. Self never stores your data. It only generates cryptographic proofs.
Starting point is 00:58:37 Here's how it works in three steps. First, register and verify. Use the Self app to scan your biometric passport's RFID chip. Self verifies authenticity with zero-knowledge proofs. Each passport creates one unique identity. Second, you can share proofs privately. Third-party apps request identity proofs, like confirming you're over 18. You can also link proofs securely to public wallets for airdrops or governance participation. And then last, secure verification. Apps validate your proofs instantly on-chain, like on Celo, or off-chain. Audited by zkSecurity, the Self app is live on iOS and the Play Store. Visit self.xyz and follow Self Protocol on X.
Starting point is 00:59:14 Uniswap is your gateway to a more efficient DeFi experience. With Uniswap, swapping and bridging across 13 chains is simple, fast, and cost-effective, helping you move value wherever, whenever. Thanks to deep liquidity on the Uniswap protocol, you'll enjoy minimal price impact on every trade. And now Uniswap v4 takes it even further. Swappers benefit from gas savings on multi-hop swaps and ETH trading pairs, while liquidity providers can create new pools at 99% lower costs. The best part: you don't have to do anything extra. Each trade is automatically routed through Uniswap X, v2, v3, and v4, so you get the most efficient swap without even thinking about it.
Starting point is 00:59:48 Whether you're swapping, on-ramping, off-ramping, or bridging, Uniswap's web app and wallet give you the tools to unlock DeFi's full potential on Ethereum, Base, Arbitrum, Unichain, and more. Use Uniswap's web app and wallet for a more efficient way to use DeFi. All right. Somebody talk to me about 4.1. GPT-4.1 has come out. Okay, so now we're entering the section of the AI roll-up that happens every single week, where we talk about the leapfrogging of AI models. Somebody tell me who leapfrogged who this week.
Starting point is 01:00:15 So for the next seven days, seven days only, folks, you now have full API access to GPT-4.1, 4.1 mini, and 4.1 nano. And, you know, big disclaimer or spoiler alert here: it beats all models across all the specific benchmarks that OpenAI has specified. OpenAI's benchmarks, you mean? OpenAI's benchmarks, right?
Starting point is 01:00:42 So OpenAI's models beat everyone else on OpenAI's benchmarks. Benchmarks. And as we described in the previous week's episode, benchmarks are this entire game of, like, you know, I can tilt these benchmarks in my favor. Benchmarks are definitely gameable. Going forward, we'll see what people actually create with this. But I did see that people are competing AI models on Pokemon. So they're making them play Pokemon and they're
Starting point is 01:01:07 trying to have the AI model beat Pokemon the quickest. And I agree with that philosophy of model testing. So if you're listening to this and you haven't already seen someone plug these new 4.1 models into a Pokemon simulator, you could be that person. Please do it and send us a video, and let's see what comes of it. But anyway, going back to what these models can do, I'll give you the highlights, right? Number one, much better at coding. But there's a caveat: that's without the reasoning element. And for those of you who are wondering what the hell the reasoning element is, it's the part of the new kind of model architecture that makes them super smart, right?
Starting point is 01:01:47 So it's what has given the Claude 3.7 model the best advantage at coding. But without the reasoning, it's the best at coding. With reasoning, we might see in a few hours when they release o3. Well, we'll see what happens. Number two, one million context window, which is equivalent to Gemini 2.5 Flash, I believe, Josh. Correct me if I'm wrong, but I believe it's the same. Or was it Gemini 2.5 that had the 10 million context window? No, that was Meta.
Starting point is 01:02:16 That was Meta. It had 10 million. I believe Gemini has 1 million. And now OpenAI also has 1 million. Okay. Okay. Because previously it was 100,000 or so. So a 10x improvement? Yes, a 10x improvement. And I believe with the 10 million context window,
Starting point is 01:02:32 that was 75 novels. So we've got about 7.5 novels, if my math is mathing; it's probably not. But, you know, I could ask GPT later to edit this out or something, right? But number three, it's really good at extracting data and reasoning from documents, which seems like a kind of lame thing, but it's actually a super important improvement, because previously you couldn't just upload PDFs and have it understand everything. It would just kind of give you a generalized summary. Now it understands all the nuance and all of that. Now, if you're wondering, hey, Ejaaz, what's the difference between the main model, the mini model, and the nano model? I was wondering, man.
Starting point is 01:03:07 Yeah, the TLDR is it each becomes a quarter of the cost of the previous model. So, mini is 25% of the cost of the normal model. And nano is 25% of the cost of the mini model. So if you wanted to run it locally at home, it's much easier to do right now. without sacrificing some of the core competencies of all of that, right? And they're just a little bit dumber. Each one's just a little bit dumber than the other one. Correct, correct.
Starting point is 01:03:34 And if we want to play the game of, you know, benchmarking, we can pull up this tweet by OpenRouter. By the way, I have another take on OpenRouter that I actually want to speak to you guys about. But before we get into that, this tweet goes: Optimus Alpha has topped the charts. The community created dozens of benchmarks for Quasar and Optimus over the last week. Now, if you're wondering what the hell Optimus and Quasar are, those were the pseudonyms for these GPT-4.1 models before they became publicly released. This company, OpenRouter, basically
Starting point is 01:04:10 was able to anonymously give people access to these models, and it was OpenAI that enabled this on the back end to get kind of live feedback as to how people responded to these models, whether they thought they were good, see how they would use them, test them against existing benchmarks themselves. And I found that really interesting. This isn't the first time OpenRouter has done this. And by the way, if I'm not mistaken, this is Alex Atallah's new company. The Open... Oh, yeah, this is OpenSea...
Starting point is 01:04:38 Yeah, so he left OpenC a while ago to go to AI. This is this. Wow. That's it. Open Rato. Yep. Optimus Alpha, weirdly close to Open AI. Come on, guys. That's the whole point.
Starting point is 01:04:53 I think they did it intentionally. But yeah, so as you can see, like, just through public kind of use, it's kind of slayed across a bunch of different benchmarks. Of course, the only real test is kind of like seeing this thing out in the wild. And it's currently only limited to API access, which is kind of lame, to be honest. I want like everyone to have access to this on their main GPT terminal. But, you know, it's interesting to see. Josh, I'm wondering if you have any takes on this new model. Maybe you've seen something I haven't.
Starting point is 01:05:21 Yeah, no, that was pretty comprehensive. I think it's another week, another better model. In about 10 minutes, we're going to get o3, which is the reasoning version of this new model. It'll be even better. What does it mean to be a reasoning version? What does it mean to add reasoning? So reasoning relies on the thing that we spoke about, which is the context window, and this thing called chain of thought, where the model thinks in English. There's not much code happening. So each time a new token is spit out, it will consult all the previous tokens to come up with a better answer. As it thinks more, it's able to take that live context that it has and give you better answers. So the longer it thinks, in general, the better the quality of the answers will be. Because, for example, with the transformer, if you ask it to do one plus one, it will do that entire compute in one run of the transformer, which is not very compute-intensive.
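The token-by-token loop Josh describes can be made concrete with a toy sketch: each new reasoning token gets appended to the context, so every later step can consult all the earlier ones. The "model" here is a stub function invented for illustration, not a real transformer.

```python
# Toy sketch of chain-of-thought generation: every reasoning token is
# appended to the context, so each later step sees all the earlier ones.
def toy_model_step(context: list[str]) -> str:
    """Stand-in for one transformer forward pass: the next token is a
    function of everything generated so far (here, just its length)."""
    return f"thought-{len(context)}"

def reason(prompt: str, thinking_budget: int) -> list[str]:
    context = [prompt]
    for _ in range(thinking_budget):
        # Each pass re-reads the whole context, which is why longer
        # thinking costs more compute.
        context.append(toy_model_step(context))
    return context

easy = reason("1 + 1 = ?", thinking_budget=1)    # cheap: a single pass
hard = reason("hard proof", thinking_budget=8)   # more passes, more compute
print(len(easy), len(hard))  # 2 9
```

It also shows why reasoning is expensive: the compute grows with the thinking budget, since every extra step re-attends to an ever-longer context.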
Starting point is 01:06:13 So if you give it a lot of tries at solving a more complicated problem, it will normally give you better answers. So that's why, with reasoning, the more it thinks, the better the results are. It just requires a lot more compute power, which is expensive. So I think the 4.1 announcement is all about pricing and accessibility. I think the nano is probably the most interesting story of them all because of how cheap it is. And a lot of companies are actually just offering it for free for the next seven days. So we'll see what happens as this cost of intelligence continues to go down but remains high quality. I think o3, I mean, again, we'll have some news next week. It will be even better, even more powerful. And it's just this continued iteration towards getting these
Starting point is 01:06:50 like super models. Where does O3 fit? So like right now when I open up OpenAI or chat Chbett, I'm using 4.5, chat chvetypT 4.5 or 4.0 maybe is what I'm using. I don't know. What is, is O3 the new premier model? Is this the new iPhone 17? Is this new like frontier model that Open AI exists?
Starting point is 01:07:10 Like where does it fit in the stack? It should be. This new one will be the new frontier model. Currently it's 4.5, which is their most cutting-edge model. 4.5 is actually being deprecated. They are shutting that down. They are going to do o3, and possibly 5.0, I think, comes next. That's the big one. But this is kind of the step in between 5.0, which is the big one, and the current one that we have, which is 4.5. It's messy. Their naming is messy, but this should be,
Starting point is 01:07:39 on paper, the new flagship model that we're getting. And I think all eyes are going to be on whether o3 beats Gemini 2.5 Flash. For context here, Google is leading the model race right now, which is shocking because not too long ago, their image generation AI was producing pictures of the forefathers which were of completely different ethnic races to the originals. Black Nazis, I remember. Yeah, exactly. So the fact that they've been able to catch up so quickly is highly commendable to them.
Starting point is 01:08:13 I saw someone have a take on X which said that if o3 ends up beating Gemini 2.5 Flash, then I think that's the incentive for Google to drop their 3.0 Flash, which would then lead to OpenAI releasing their 5.0. God, there's too many models, but, you know, their latest and greatest, which will beat them. And he estimates that they, they being OpenAI, only have a six-month lead right now. And remember, that was quoted as like two to three years not too long ago. So pretty crazy. I am very much looking forward to the point in history that was illustrated in the AI 2027 document, which illustrated that AI models stop getting released and they just start
Starting point is 01:09:02 naturally improving incrementally like day after day, week after week. And it's no longer, you know, chat GPD4O or whatever, whatever, whatever, whatever. It's just, there's just one model and it just gets better incrementally because they learn how to like train and release to production at the same time. Josh, do you think there'll be a new. kind of model architecture to enable that, what David just described. So like the self-learning situation? It seems like it's probably an iteration on the current one, which is just transformer based architecture. I would imagine, and we're kind of seeing this with GROC, where GROC release GROC 3, but GROC 3 got kind of better every single week and it's continuing to get better. And that's
Starting point is 01:09:40 because of the post-training phase. It requires a lot more compute to fine-tune after the main model's been trained. And I think what we're seeing in the case of Grok, because that's the one example that I have seen, is as it receives more data on a daily basis, and as they kind of come up with more algorithmic efficiencies or ways to improve it on the fly, they can do this post-training run fairly quickly and fairly cheaply and just kind of roll it out on top of that base model on a regular basis. So I don't think there's an architecture shift. I'm sure if there was one, it would be a huge unlock, and I'm sure people are trying to work on it. But the current transformer architecture with post-training stacked on top is probably sufficient to get to that self-recursive learning, where it can kind of improve
Starting point is 01:10:16 on a regular basis and push updates live without needing to do the entire base training run again. Guys, this is so great. I'm learning so much and there's so much to be excited for. This was, I think, just a great week in the AI world, and it just continues to be like this. Yeah. Yeah, I mean, we're literally in the midst of it. I know.
Starting point is 01:10:40 We're about to see another frontier model dropping a few hours. It's pretty insane the rate of progress. Oh, I think it's actually dropping right now. So I think we're going to have to wrap up this episode. It's going to come out. The Open AI 30 model, O3 model will already be out by the time people are listening to this.
Starting point is 01:10:55 But me, Josh, and Ejaaz are going to drop off so we can go watch that live stream. Bankless Nation, this has been your weekly AI roll-up. Probably the best place to keep up with AI. If any listeners are listening to a different podcast that is like this, I want to know, because we are going to make this podcast better. But I'm pretty sure this is the best place
Starting point is 01:11:11 to keep up with AI. And that is thanks to my incredible co-hosts here, Josh and Ejaaz. Josh, thank you for doing this once again this week with me. I appreciate it. It's been awesome. My pleasure. Yeah, another great week. Josh is going to be out adventuring in the real world without AI next week, so we might miss it next week. Maybe, or maybe me and Ejaaz just see if we can run it without Josh. Josh, have a great trip, my man. Thank you. Yeah, by the time I come back, I expect at least three new frontier models to be released.
Starting point is 01:11:38 Yeah, we might have replaced you with an AI agent by that time. Yeah, I might have gone. There's no need for it, but thank you. All right, Bankless Nation. If you, if you like this content, and you're watching it on YouTube, like and subscribe. Also, go ahead and share it with your best AI friend or your best real friend who likes AI, one of the two. And then also just stay tuned for next week. We appreciate you watching the episode with us, and we'll see you in a week.
