Bankless - AI ROLLUP: 26 Months Until AGI | Meta's New Top Model Cheated? | $20 Billion To AI Apps

Episode Date: April 10, 2025

Welcome to the AI Rollup, your weekly ride through the fast-paced world of artificial intelligence! This week, we're joined by frontier tech enthusiast Josh Kale to unpack the drama around Meta's controversial launch of Llama 4, scrutinized for potentially overstating its capabilities. We dive into "AI 2027," a provocative roadmap predicting espionage, geopolitical turmoil, and runaway intelligence, and explore Shopify's bold move making AI usage mandatory for employees. Plus, we cover Claude for Education's groundbreaking efforts to boost student performance with tailored AI tutors and a16z's massive $20 billion AI startup fund aiming to shape the industry's next era.
------
🪙FRAX | SELF SUFFICIENT DeFi
https://bankless.cc/Frax
🦄UNISWAP | SWAP ON UNICHAIN
https://bankless.cc/unichain
🛞MANTLE | MODULAR LAYER 2 NETWORK
https://bankless.cc/Mantle
🌐SELF | PROVE YOUR SELF
https://bankless.cc/Self
🏦INFINEX | THE CRYPTO-EVERYTHING APP
https://bankless.cc/Infinex
------
TIMESTAMPS & RESOURCES
00:00:00 AI Generated Tariffs
00:03:58 Meta's Huge New Llama 4 Announcement
https://x.com/ahmad_al_dahle/status/1908595680828154198?s=46
https://x.com/alexocheema/status/1908651942777397737?s=46
00:11:54 Did Meta Cheat?
https://x.com/Yuchenj_UW/status/1909061004207816960
https://x.com/andrewallenxo/status/1909143884321481157
https://x.com/ahmad_al_dahle/status/1909302532306092107?s=46
https://www.lesswrong.com/posts/4mvphwx5pdsZLMmpY/recent-ai-model-progress-feels-mostly-like-bullshit
00:20:45 AGI Is Coming in 2027
https://ai-2027.com/
https://situational-awareness.ai/
00:28:02 What Happens After AGI
https://ai-2027.com/
https://situational-awareness.ai/
00:39:58 Shopify And The Employment Crisis
https://x.com/tobi/status/1909251946235437514
00:48:34 The Future Of Education
https://www.anthropic.com/news/introducing-claude-for-education
https://x.com/jowettbrendan/status/1907695965726978184
https://x.com/cryptopunk7213/status/1907969387958513727
https://x.com/itsPaulAi/status/1907122207136137368
https://www.foxnews.com/media/texas-private-schools-use-ai-tutor-rockets-student-test-scores-top-2-country
00:59:55 How Much Does AI Know About You
01:04:58 a16z's $20B Fund
https://x.com/tkexpress11/status/1909626333711548526
https://www.reuters.com/business/finance/andreessen-horowitz-seeks-raise-20-billion-megafund-amid-global-interest-us-ai-2025-04-08/
------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:03 Welcome Bankless Nation to the AI roll-up where we cover the weekly news in the AI industry. AI is accelerating us into a weirder and more chaotic future, and we are here to help you keep up with the future that's hurtling towards us. We're trying out a different format this week because the crypto AI industry is pretty dormant while the Silicon Valley AI industry is only getting hotter. So me and Ejaaz are bringing in a third commentator, Josh Kale, who's done a couple episodes with me on Bankless before, all about frontier technology, and who we are tapping in to help us go through the AI news this week. Josh, how are you guys doing? I mean, I'm, yeah, I'm feeling good. It's awesome to have Josh.
Starting point is 00:00:41 Though I can't really get rid of the, yeah, the $10 trillion wiped out of the stock market this week, thanks to Trump's wonderful Liberation Day tariffs, which, in my opinion, obliterated any kind of economic recovery that we wanted to get. But more so, it looks like he's using ChatGPT to create his tariff formula. I don't know if you guys saw this, but if you type into any AI LLM, so that's like ChatGPT, Claude, whatever, you know, can you create a basic tariff formula? You get essentially what the U.S. announced for their tariffs across every single country. So it's just insane.
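For what it's worth, the formula people reverse-engineered from the announced rates is simple enough to sketch in a few lines. This is a hedged toy, assuming the widely reported reconstruction — max(10%, (trade deficit ÷ imports) ÷ 2) — was accurate; the trade figures below are illustrative, not official data:

```python
# Sketch of the reverse-engineered "reciprocal tariff" formula that
# circulated online: rate = max(10%, (trade deficit / imports) / 2).
# This assumes the public reconstruction was accurate.

def implied_tariff_rate(us_imports: float, us_exports: float) -> float:
    """Return the implied tariff rate as a fraction (e.g. 0.34 = 34%)."""
    trade_deficit = us_imports - us_exports
    rate = (trade_deficit / us_imports) / 2
    return max(0.10, rate)  # a 10% floor applies to every country

# Illustrative figures: imports of $438.9B vs exports of $143.5B
# imply roughly a 34% rate under this formula.
print(round(implied_tariff_rate(438.9, 143.5), 2))  # → 0.34

# A country the U.S. runs a near-balanced trade with just gets the floor:
print(implied_tariff_rate(100.0, 95.0))  # → 0.1
```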
Starting point is 00:01:20 It's pretty hard to get away from talking about tariffs this week just because everyone is just looking at the market, looking at tariffs, tariffs, tariffs, tariffs, but I'm pretty glad that we are also going to focus on AI, which I think we're all pretty stoked about. Josh, it's good to have you on the AI roll-up this week, my man. It's great to be here. Yeah, thanks for having me. I'm excited to talk about this stuff. The funny thing about what Ejaaz has just said is the way that we found out they used AI is actually by using AI ourselves and kind of reverse engineering it. And then they openly admitted it after the fact. Wait, did they admit that they did use ChatGPT to come up with their tariffs? Not that AI was used, but that there was this weird formula that wasn't just tariffs. And the way it was discovered was by actually using AI and kind of reverse engineering it. And then they were like,
Starting point is 00:02:02 oh, yeah, we actually did do that. So it was implied. It wasn't directly stated, but there was some artificial help in the decisions being made last week. I mean, I feel like that's just a harbinger of things to come, of global decisions being made using AI influence. You know, what the crypto technologists would love to say is, oh, yeah, eventually the future will just be governed by AI. And, you know, we accidentally got there, maybe a little bit too ahead of schedule, a little bit too soon, a little too early for comfort. Yeah, makes the AI alignment conversation a lot more scary when world powers are using it to make decisions.
Starting point is 00:02:36 Or even write law. All right, let's get into the news of the week. Big news this week, five big things that we're going to cover. First, the most controversial AI model release ever came out of Meta. Meta released its new Llama 4 AI models, which on paper are very impressive, but people are worried that they are lying about their performance, which will also lead into a conversation with Ejaaz about why he thinks no one is using AI models correctly. That's up first.
Starting point is 00:03:05 Second, some legends in the AI space authored a document called AI 2027, a theoretical roadmap for how the next three years of AI development will go, featuring things like espionage, geopolitical warfare, and runaway AI intelligence. Third topic this week, Tobi Lütke from Shopify sent an internal memo saying that using AI is now a job requirement, an expectation for all employees. He released that memo on Twitter, so we're going to go and talk about that. Fourth, Claude for Education focuses on using AI models for revamping the education industry. And we've already seen a school in Texas rocket their students' learning scores to become, like, top 2% in the nation. And then lastly,
Starting point is 00:03:49 a16z is planning a $20 billion AI super fund, focusing on vibe creation platforms like Cursor for vibe coding. So that's the big news of the week. Before we go into each one of those specifically, though, Josh, maybe I'll just throw it to you. Set the tone for us this week. How was this week as a vibe in the AI space? I feel like over the last couple of weeks,
Starting point is 00:04:15 we've been kind of seeing increasing tension between model producers, right? So what do I mean by that? It's obviously a very highly competitive market. And ever since we had DeepSeek come out and compete with OpenAI, you know, OpenAI was the darling child, right? It was, you know, way far ahead. No one was competing with it. And now we have these guys that are, you know, competing every week now. It's become a meme on this show that there's a new frontier model drop.
Starting point is 00:04:41 I feel like we've reached the boiling point this week, David. Controversy with a top AI model producer, not just from any company, from Meta themselves. You know, we've got the Shopify CEO announcing that if all his employees don't adopt AI tooling in their workflow processes, they're probably going to get axed. You know, tension is getting to a point where, like, AI might start replacing jobs, it might start having significant impact. And I feel like people are really feeling it this week.
Starting point is 00:05:09 Josh, what do you think about just the acceleration of the AI arms race? Because, like Josh says, every single week on this show we are like, oh, there is a new leapfrogging of AI models. There's a new number one. And it's been consistent. It's been consistent for the past, like, 10 weeks. When it's gone on for so long, how do you think that is finally manifesting in the current day?
Starting point is 00:05:31 It is unbelievable that we're still getting 10x improvements week over week. And that happened this week with the meta release. It was controversial, but the main headline that I was excited about is a 10 million token context window. And previously, Google's Gemini model had 1 million as the flagship. So we've gone from one million. Which was last week. So week over week, we've 10xed the context window, which is a very big deal because that is the, like, quick access memory for a large language model. And it allows you to get a lot more data packed into your queries.
Starting point is 00:06:03 So to go from 1 million to 10 million week over week, we are still accelerating so, so quickly. Let's unpack context window pretty thoroughly here. For people who are, like, learning about a context window for the first time, Josh, how would you define a context window? Okay. It is, it's fun to look at it, like, if you had a book, if you have a really big book of pages, and you're able to rip out those pages and place them out in front of you so you can see them all very clearly.
Starting point is 00:06:26 The context window is the amount of those pages that you can see in clear view without having to actually turn the page and find something. It is just quick access data to whatever you want to query. So imagine it's like not your eyes, but it's an AI and it's a camera
Starting point is 00:06:40 and it could see everything that's laid out in front of you. That's kind of the context window. And it's helpful to have more pages because it can see more things quickly and it's also more accurate. It doesn't have to infer what the next token is because the next token is clearly visible in that context.
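Josh's page analogy can be put in code. This is a deliberately naive toy — it assumes one token per word, which real tokenizers don't do (they split words into subword pieces) — but it shows the basic idea that only the most recent slice of input is visible to the model:

```python
# Toy illustration of a context window: the model can only "see" the
# most recent N tokens of the input; anything older falls out of view.
# Assumes a naive one-token-per-word tokenizer for simplicity.

def visible_context(tokens: list[str], context_window: int) -> list[str]:
    """Return the slice of the conversation the model can attend to."""
    return tokens[-context_window:]

conversation = "the quick brown fox jumps over the lazy dog".split()

# With a window of 4 tokens, only the tail of the input is visible:
print(visible_context(conversation, 4))   # → ['over', 'the', 'lazy', 'dog']

# A much bigger window sees the whole input with room to spare:
print(visible_context(conversation, 40))
```

Going from a 1 million to a 10 million token window just means that slice gets ten times longer before anything falls out.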
Starting point is 00:06:54 Okay, so maybe a human way to turn this into a metaphor is like, say I'm doing mental math and I'm holding numbers in my brain and I'm manipulating the numbers. I'm thinking about the numbers. I'm retrieving numbers differently in a sequence. A context window means I can think about a larger array of numbers all at once before I, like, lose them in my mental space. Was that a good analogy? Yeah, that's right. And you can store a lot more. So if it is a very complex model where there are a lot more numbers, you can store them all in one place.
Starting point is 00:07:25 And you don't have to guess. It is all very clear because it is all seen out in the general view. Is that the big reason why Llama 4, the new release from Meta, is such a big deal, or was it more of a collection of things? Maybe I'll throw this one to you. What were the big standout components of this release from Meta? Sure. So the first standout was what Josh just mentioned,
Starting point is 00:07:49 is the 10 million token context window. Actually, to build off your example, Josh, it's not just one big book. It's the equivalent of 75 novels in a single context window, or another way to look at it, one million lines of code, right? So that's like one of the major things. You know, it's easy to recall memory, data, et cetera. The second most important feature of these models, and I say models because they technically released three models, two which are kind of like basic, and then one gigantic model, which is like a two trillion parameter beast. But the second most important thing is it's a mixture of experts design, David,
Starting point is 00:08:34 which means that at any one point when you're querying the model, you're only querying around 14 to 17 billion parameters. So it's hyper-efficient when you're, like, querying it. And for the bigger model, it's obviously a larger query, so it's going to probably hit around 250 billion parameters, which is a lot more computationally heavy, but it's still way more efficient
Starting point is 00:08:56 than how OpenAI runs their models, where they just query the entire trillion parameter model, and it takes a while to, like, come back to you and stuff, yeah. We've defined parameters on the show before, but maybe it's worth it just to take some time since we're in the mode of defining
Starting point is 00:09:12 things. Like, to me, a parameter is a specific node of information or a neuron. Like, it's a unit of information or a node of information or a neural connection in the brain. And when you have more parameters,
Starting point is 00:09:26 you just have, like, a higher resolution camera pointed at the internet, which all of these AI models are trained on. You have a higher resolution camera pointed at the knowledge of the internet. And these parameters with their weights are set to specific numbers to actually be a map of knowledge
Starting point is 00:09:43 that is digested from the internet. And so if you have a higher parameter model, you simply just have a more knowledgeable model. There's truly just knowledge contained in these parameters. That's kind of how I think about it. Does anyone want to amend that definition or add to it? Yeah, that's pretty good. I think with parameters, they're also known as weights. The way it works, and this was a big debate early on, is does it actually improve with scale? So does it actually increase directly with the amount of data you feed it? And so far, the answer has been yes. So far, the more data you can feed into this, the higher resolution the model will be.
Starting point is 00:10:23 So in the case of weights, it kind of goes through this pre-training phase where it's fed just a ton of data, a ton of information. And then each one of those training runs that happens actually alters those parameters just slightly. And then it compares the results of the new run to the previous run, and it gets better and better and better. And that's kind of how they tune it. And generally speaking, the more parameters, the smarter it is. So the fact that this has, I mean, the largest one has two trillion parameters. That is huge. That is a tremendous amount of training data. I don't know where they found it from. I don't know how they saw two trillion things, but hopefully that will result in very, very high quality answers. Okay. Okay. So we now have a 10 million token context window.
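The tuning loop described above — nudge the parameters slightly, compare against the previous run, repeat — is essentially gradient descent. Here's a one-parameter toy version, nothing like a real two-trillion-parameter training run, just the same loop shape in miniature:

```python
# Toy version of "alter the parameters slightly, compare to the previous
# run, repeat": fitting a single-weight model y = w * x by gradient descent.
# Real LLM pre-training does this over billions of weights at once.

def train(xs, ys, steps=200, lr=0.01):
    w = 0.0  # the lone "parameter", starting uninformed
    for _ in range(steps):
        # average gradient of squared error over the training data
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # nudge the weight toward lower error
    return w

# Data generated by y = 3x; the loop should recover w ≈ 3.
xs, ys = [1, 2, 3, 4], [3, 6, 9, 12]
print(round(train(xs, ys), 2))  # → 3.0
```

The "map of knowledge" framing above is just this at scale: each pass over the data nudges the weights so the model's outputs match the training data a little better.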
Starting point is 00:11:00 And again, a token is just like a unit. It's like a syllable of sorts, almost a word. And then we also have an increase in parameter size, but also three different models came out. So between all of those things, that's kind of summarizing why this announcement from Meta is what it is. But there was some follow-on controversy downstream of this announcement. Meta, maybe just to set the context, has been lagging as an AI lab. Like, no one really uses the Llama models as far as I understand it. Like, everyone's using either ChatGPT or Claude. And so Meta has been just, like, losing this game. And with the release of their new models, they are hopefully trying to leapfrog their place into number one,
Starting point is 00:11:45 that would be the easy way to, like, talk about this announcement. But you're telling me that there's been some controversy with the release of this model. Can you walk me through that? Okay. So to summarize this, every AI model that is released, no matter which company produces it, is typically graded across what we call benchmarks, right? And they're known as SOTA benchmarks, or state-of-the-art benchmarks, and typically every model producer measures against these.
Starting point is 00:12:17 The reason why they do this is, how do you know whether one model is better than the other? You can't really kind of test or query. You can ask them the same questions, but you're going to get different answers. But how do you really know? Well, there's a measurement of different things like parameters, like we just discussed, weights, et cetera.
Starting point is 00:12:33 And then there's the actual output of the actual model, right? So when Meta released these three models this week, they had all these different grades across their benchmarks. And they were claiming, David and Josh, to be as good, if not better, than DeepSeek V3 and R1, and OpenAI's o3 model, which is like a really high-performing reasoning model, right? But for the first time ever, there was a huge amount of backlash about these benchmarks, for two reasons. Number one, it was stated, and there's like a few screenshots that we've seen that you
Starting point is 00:13:10 can share on the screen here, David, that they had blended test results across different models to basically give a falsified, or rather skewed, result for their benchmarks across their models. So basically, they kind of blended data sets to give a falsified answer just so that they looked nearer to the actual state-of-the-art benchmarks. The second thing was, and this is the time-tested trial: people just used the model and were like, this is kind of shit. And it isn't as good as OpenAI's reasoning model, or it's
Starting point is 00:13:45 not as good as Claude's model, et cetera. And so people were just like, is Meta lying about this? Now, there was a lot of back and forth, a lot of rumor, fearmongering, et cetera. Their head of generative AI, Ahmad, actually came out and stated that
Starting point is 00:14:00 this hasn't got anything to do with their benchmarks; their benchmark testing is to a high quality, a high grade. However, there is an implementation issue. So if you look at this tweet that he posted here, it says, you know, we've also heard claims that we trained on test sets; that is simply not true, and we would never do that. Our best understanding is that the variable quality people are seeing is due to needing to stabilize implementations. So he's putting it down to the fact that, you know, inferencing is taking a while,
Starting point is 00:14:28 our models are overheating, et cetera. That's the kind of, like, equivalent of what he's saying. Yeah. So a big amount of controversy as to whether this is real or not. If it is true, that is a huge mark on not just Meta's brand, but the entire AI model industry. Josh, what do you think is going on here? It also feels kind of like a dark mark on the industry. I'm not super well up to date on the Meta situation in terms of benchmarks, but benchmarks as a whole very much feel like they're broken because they can be gamified. I think benchmarks early on were a really effective way of measuring AI models because so much
Starting point is 00:15:01 of it was new and they were actually challenging problems to solve. But now that AI models have gotten so advanced, it's difficult to generate these benchmark problems and standardize them, because the process of standardizing them makes them cheatable. So I think it just starts a broader conversation of how you actually measure the intelligence of these models as they get better. Because there is the option where people can gamify them. And when they do gamify them, it looks great on paper. But when you use them, you're like, this doesn't quite match up. So that's kind of how I've been gauging the models that I've been using: mostly through either my own use or just smart people who use them differently and how they react to them.
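A toy illustration of the gaming problem: a "model" that has effectively memorized the test set aces the benchmark and falls apart on anything fresh, while the score alone can't tell you which situation you're in. The questions and models here are made up, not any real benchmark suite:

```python
# Toy demo of a gameable benchmark: a model that memorized the leaked
# test set looks perfect on paper and is useless everywhere else.
# All questions and "models" here are invented for illustration.

benchmark = {"2+2": "4", "capital of France": "Paris"}
fresh_questions = {"3+5": "8", "capital of Japan": "Tokyo"}

def memorizer(question):
    # "Trained on the test set": perfect recall of benchmark answers only.
    return benchmark.get(question, "I don't know")

def score(model, questions):
    """Fraction of questions the model answers correctly."""
    return sum(model(q) == a for q, a in questions.items()) / len(questions)

print(score(memorizer, benchmark))        # → 1.0 (looks state-of-the-art)
print(score(memorizer, fresh_questions))  # → 0.0 (useless off the test set)
```

Whether through deliberate contamination or accidental overlap between training data and test sets, the headline number stops measuring capability — which is the gap people described between Llama 4's reported scores and how it felt to use.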
Starting point is 00:15:35 I don't think benchmarks are a super reliable way. And I think there's a lot of work trying to figure out how to make them better. But in this case, I don't know if Meta did them, but I think a conversation needs to be had about how we actually benchmark these models properly. I think that's just fundamentally true for benchmarks as a concept. There's something out there called Goodhart's Law, which is often stated as: when a measure becomes a target, it ceases to be a good measure. As in, if we are comparing whose model is best using this particular test, then all of these AI labs are incentivized to just make a model that's good for that test and not really care about whether that thing is actually useful or not. There was a blog post on LessWrong. LessWrong is kind of this
Starting point is 00:16:18 rationalist corner of the internet. It's where a lot of AI safety people come from. Eliezer Yudkowsky comes from there. And a blog post was posted on the 24th of March, so actually not terribly recently. But this also shows that this has actually been a growing conversation in the AI industry. The blog post is titled Recent AI Model Progress Feels Mostly Like Bullshit. And it starts off with two main claims: either AI labs are cheating their way to a good benchmark number they can report, or they are accidentally gearing their AI models to be good at taking that test, to be good at that benchmark. And it also kind of goes through some of the
Starting point is 00:17:01 incentives here. One that really stood out to me was talent acquisition, because there's not that much AI talent to go around, and attracting very good talent for your AI models is very, very valuable. And no good AI talent wants to work for the fourth or fifth or sixth best AI model. And so, again, we started off this conversation saying Facebook has been lagging; they have been showing poor performance on their models. Now their performance is number one on paper, but people are looking at these models being like, this is not a useful tool for me. And so there's a big, growing conversation about, like, yo, benchmarking sucks. And also there's just a ton of warped outcomes and warped incentives around this whole
Starting point is 00:17:45 benchmarking complex. Did any of you guys read this blog? I did not. No. Just me. Okay. Then I've told you everything you need to know. Yeah, but it sounds right. And to benchmark a model that's supposed to be reflective of the collective intelligence of humanity by using a few math problems, it just seems wrong. So yeah, something needs to be changed. I mean, would you both agree that the best way to just assess a model's capability is whether it makes a material impact on your life? I feel like that's pretty much it, right? Like, what if we just had a test bed of people across a variety of different economic standards and professions? And we just said, here's a new model. Let me know what you think about it or
Starting point is 00:18:27 what the general feedback is. Give me a rating of one to ten. Is that a dumb idea? Yeah. Yeah. It feels closer. It's more subjective. David, it's a dumb idea. Okay. No, no, sorry. That was, yeah, I was agreeing with you. Yeah, I was like, yes, I agree with your point. Because it is, it's increasingly subjective how people use models, how it affects their lives. Most people don't use them for code or math problems. They want them for general purpose stuff. So, yeah, I'm all in favor of that. The way that we benchmark GPUs, I think, is actually genius, where if you want to compare, like, the next higher-end GPU, the Nvidia, like, 5090 to the 4090, you actually just benchmark it using real games, like high-intensity, the most graphically intensive triple-A games. And then you just do a frame rate comparison on, like,
Starting point is 00:19:14 how good that GPU is for that game, which is true usefulness, because if it spits out higher frame rates on a higher-end game, well, that is useful for gamers who want exactly those properties. And so I think we need some sort of benchmarking system like that, where we're actually testing against things that people want rather than, like, standardized tests. Because we all know the failures of standardized testing in the school system, right? Like, anyone who is just wealthy or has the means will practice the ACT or the SAT 10,000
Starting point is 00:19:45 times and then they will get a good SAT score. But they're actually just an idiot who has money. Yeah. And yeah. I actually just thought of a specific distinction between that, David, and the standardized tests that we're used to: they're normally theoretical or paper questions, right? It's like, hypothetically, what if this happens, you know, or give me the answer to this, you know, random equation. But with AI, it's actually practical. You can see it happen immediately in
Starting point is 00:20:16 real life, right? So it's like, it's not like, you know, tell me the weight of these 10 oranges and then use that to calculate the mass of the sun. It's like, no, I'm going to show you right now whether that makes sense. And if you're wrong, you're really wrong, and it's going to taint your reputation almost entirely. And also, I'm going to do this calculation in, like, 15 seconds. Here we go, right? So I think people aren't really ready for that feedback loop. And it's obviously showing with Meta. Right. Any last comments on the subject before we move on? No. All right, moving on to one of the big releases of the week, I would say, is this website, which is kind of just an interactive document. The document is titled AI 2027, and it was authored by a collection of AI legends.
Starting point is 00:20:59 Scott Alexander is the notable one to me. He is the author behind the Slate Star Codex blog, which is this very well-respected, well-read blog for nerds that Silicon Valley tech leaders read. And if Bankless listeners are familiar with the concept of Moloch, Moloch came out of the Slate Star Codex blog. A bunch of other things came out of there. And so Scott Alexander and four other people wrote this AI 2027 model
Starting point is 00:21:27 for how they think, a plausible model for how AI development goes from here. And they give dates, right? So mid-2025, stumbling agents, is where we are in the current AI landscape. And I think anyone who just came out of the AI slop-bot meta understands exactly where we are with stumbling agents. And then they kind of give a prediction: late 2025, what that looks like. Early 2026, coding automation. Mid-2026, China wakes up. Late 2026, AI takes some jobs. January 2027, agents never stop learning. And they give this predictive idea for where this could go. And they also talk about incentives; they bring in anything that's relevant. They talk about AI copycatting Silicon Valley tech labs. They talk about the Department of Defense realizing that they need to pair
Starting point is 00:22:17 with their own domestic AI labs as a matter of national security. They talk about China trying to figure out their ways around the chip shortage. And really, where this ends up, where they think this is going, is like a superintelligence explosion by the end of 2027, where the governments of the world have all of their resources pointing at it, because this is the most valuable thing. That's how I'll summarize this in just a sentence. Ejaaz, how would you summarize what's going on here? What do you think about it? Yeah, I mean, so it reminds me of what a bunch of the top CEOs in AI have said, right? So the CEOs of OpenAI and Google DeepMind have all said that AGI will arrive within the next five years. And I think Sam
Starting point is 00:23:00 Altman kind of said this in 2023, right, which is what kind of set up the four-to-five-year timeline to 2027. And so what we basically have here is a band of, like, cracked AI builders and researchers who have got together and said, okay, well, how will that actually pan out? How do we actually get to ASI, artificial superintelligence, as they reference it here, right? Before I dig into it, if this sounds familiar, it's probably because you read Leopold Aschenbrenner's, or however you pronounce his name's,
Starting point is 00:23:30 famous piece Situational Awareness, which was, I think, released sometime last year, where he details his prediction for AGI by the same date, 2027. And it's important to note that, like, you know, a lot of these pieces are technically, you know, opinionated, and they get a lot of things wrong, but they also creepily
Starting point is 00:23:52 And the band of people that have put together this particular prediction are very in tune. Yeah, yeah, exactly. And so you kind of hit some of the key themes on the head, David, but I want to kind of like stack it across to the timeline that you mentioned, right?
Starting point is 00:24:06 So next 12 to 18 months, you mentioned there's going to be these kind of, basically, mid agents: kind of semi-autonomous, kind of unimpressive, they kind of automate a few things, but whatever, right? And these AI agents are expected to, like, drive their own research and development process, you know, automate complex tasks. So, you know, we see these with OpenAI's Operator agent or Claude's coding agent or Anthropic's computer-use
Starting point is 00:24:31 agent, you know, we see these releases of agents that do basic tasks for you, like, hey, I'm going to book you a holiday, or predict, you know, where your next food shopping trip should be or how much it's going to cost, right? Kind of not that impressive, but useful, right? And the article predicts that over the next 12 to 18 months, we'll see these generative AI models and early forms of agents kind of automate a bunch of these general tasks. But the main goal of this is that it's going to accelerate us towards artificial superintelligence. So the question is, well, how is this supposed to be feasible, given that the agents today don't do jack shit, right? Well, the whole point that they make is these agents are going to be tactically developed to accelerate AI research itself. So instead
Starting point is 00:25:16 of just releasing models that make, you know, us pathetic humans clap excitedly and be like, oh, wow, you know, that's a cool Ghibli image, the models themselves will conduct research on themselves to make themselves better, including discovering new forms of training themselves to be smarter, et cetera, which would lead to basically an exponential curve for the development of AI. And we're already seeing the seeds of this foundation kind of being planted today. We've got, you know, reasoning data traces. We've got reinforcement learning and RAG. We've got Claude's coding tool. You know, all of this is happening behind the scenes, right? It's not very front-facing, because it's kind of, like, too technical or boring for the public audience to really care about.
Starting point is 00:25:58 But of course, if we do end up getting ASI, then we might be in quite a bit of trouble when all of this comes to fruition. But that's kind of, like, the general takeaway. And then later on, the article goes on to kind of look into the secondary effects this is going to have, like geopolitical impacts, like an AI arms race, right? You know, how do major powers like the U.S. and China interact with each other? Does China just go for the throat and take over Taiwan, and all these kinds of different things? So, yeah. Josh, I know you read this article.
Starting point is 00:26:27 What was going through your brain when you read it? I encourage everyone to read this, because the presentation and the quality of the ideas are so high. These are not just some random schmucks off the street. These are very impressive people who have a good track record of predicting stuff, and they did a really good job of packaging it. The really startling thing to me was the timelines, because everyone kind of talks about
Starting point is 00:26:45 oh yeah, AGI is coming, ASI is coming quickly, but they really lay it out month by month of how this is going to happen and the reality is it's coming very soon. It's on the horizon. They are giving quarter long timeframes for when they think things are going to arrive. Yes, and if you have that level of clarity then we are getting closer than I think a lot of people realize.
Starting point is 00:27:05 There's another blogger, not Slate Star Codex, but Tim Urban, who has the Wait But Why blog, and he was on a podcast recently talking about his new book, which was the history of the world, from the beginning of time, the Big Bang, till now. And he said the very last chapter he had to make about AGI, because once you kind of solve AGI and artificial superintelligence, nothing else really matters.
Starting point is 00:27:27 So the rest of history doesn't really matter because the rate of acceleration, once this one problem gets solved, once we have this super level of intelligence, it makes every other problem seem useless and meaningless. And that was the startling realization of this, is, oh, wow, this is actually something that will happen this decade. And what does this world look like when all of our menial problems today are actually useless? Because ASI is able to
Starting point is 00:27:51 solve all of these difficult things. And I loved how they mapped it out, how specific they were about their examples. And just the repercussions, it's pretty mind-blowing. It's a little unnerving. It's happening quicker than I expected. You know what I want to know is what happens after that point. Like, what happens after ASI is created? I think it's fair. Like, have you noticed, all these predictions and forecasts have told us how we get to ASI, but no one's talking about what happens after it? No one knows what happens after? But, like, we lose control.
Starting point is 00:28:22 Is the infinite boom at the end of the world? Like, what actually happens, you know? Do we just become slaves to our laptops? Like, it's kind of this ambiguous thing. I think, like, what happens afterwards depends on who you ask. So economists will tell you that productivity goes to infinity and GDP also goes to infinity, but money doesn't necessarily go to humans. And capital doesn't necessarily go to humans broadly. And so this is partly going back into kind of the current conversation that we see with, like, global trade and manufacturing. That's somewhat relevant. And then it also goes into just the conversations of, like, where do humans
Starting point is 00:29:06 get value from? Because, like, sociologists will tell you that humans need to do work in order to feel good about themselves and not be depressed. And so I think, like, you can get a bunch of different answers depending on who you ask. And I think every discipline has perspective to add about what happens after superintelligence, after humans are no longer the most productive, the most intelligent entities on the planet. Josh, what do you think? I think we don't know because it's just not comprehensible to us. Like, you were talking about decades of progress in any given
Starting point is 00:29:41 industry happening in minutes. And you're just going to unlock so many things that are unforeseen that will change the fabric of reality around us. It just seems so mind-boggling to even try to guess because it is so all-encompassing what it can do. So I don't know. I have no good guess. Okay, okay.
Starting point is 00:29:56 But to push back on both of you, I think it's super important to, like, try and broach the topic at least, right? So, like, one thing that kind of pops into my head is a material removal of humans from economic impact on society. Right. So what does that look like? You know, is that the universal basic income world where we're all just kind of, like, slaves and we walk on our treadmills at home and we can only leave when Papa ASI says, okay, go on, go grab your food, because I still need you to execute some physical tasks for me, you know, until the robots scale up? Like, it's just a super weird world and I can't really wrap my head around it. I think it's a quadrillion dollar question, Ejaaz. And you can actually see this in the existential angst of the Zoomer generation, where a lot of them are trying to make very big decisions right now
Starting point is 00:30:47 about do they go to college? What do they study? Do they invest their time, and what does a career look like for them? And most of them as a generation are reporting being fucking confused about what to do with their lives because they don't know where to spend their time. It's a tough question. I would encourage people to look to sci-fi for the answers to this. They generally have the most creative things that are
Starting point is 00:31:14 right on the correct time scale. They're not always right, but on a long enough time scale they are. Ready Player One is a fun example of that, where, like, if things do just get commoditized in our world and there's not much productivity we can do, can we just put ourselves into a digital world and live in it, even while living in slums? That's more exciting. And that is a plausible reality, which is scary. And that's why there's such a wide array of plausible outcomes that could happen from this. Do we escape the world of atoms, do we just put these headsets on, and are we gone for good? Or do we try to align ourselves with the AI and do we put chips in our brain? Is that how we coexist?
Starting point is 00:31:51 How do we peacefully coexist with this form of intelligence? It's a weird question to ask. Yeah. In this article, Josh, you were talking about just how good the layout is, I want to just emphasize, double-click on this idea that this article is an interactive article. They have these like graphs and metrics that when you scroll down and are reading the article, the graphs change because you're going forward in time. And you just see this curve of capabilities just go increasingly up and up and up and up as you read it. And then they also have different actual like vectors of AI capabilities,
Starting point is 00:32:22 including coding, hacking, bioweapons, robotics, forecasting, and politics. You know, and each one of these things grows in capability as you scroll down and read the article, to the point where AI just becomes hyper-capable in all of them. And just a big plus one to, like, getting your head wrapped around why this is significant, which is conveyed by this article,
Starting point is 00:32:47 and also it's, like, why I think we are doing these podcasts, because, like, yo, the future is weird, man, and I would like to have at least a little bit more foresight of it coming down the pipe. Yeah, I don't think people are locked in, dude. Okay, you want to know what was crazy? I asked some of my non-crypto, non-AI, non-tech friends,
Starting point is 00:33:06 you know, how often do you guys use ChatGPT? And often they're like, I don't use it. I don't use ChatGPT. I was going to bring up this exact point. Whoa. My friends either don't use ChatGPT or believe that AI is ChatGPT.
Starting point is 00:33:20 Like, that is the entire existence of AI. And normally, to Ejaaz's point earlier, it's used as if it were an enhanced Google query. So it's used as a search box instead of an actual tool and utility. And a lot of that is just that people are not educated on how to actually extract value from these machines. Like, you're just given a text box, like, here is the collective knowledge of humanity.
Starting point is 00:33:40 Like, do what you want with that. And that's a difficult... Well, it's also, like, optionality as well, right? It's like, Netflix subscription or ChatGPT? Like, I'm going to go with Netflix because I want to watch the latest series of blah, blah, blah, right? So it's like... And the whole Google query thing really annoys me. I think we're going to get into it later.
Starting point is 00:33:58 But I just think you're massively underusing these models. Like, the whole point of these models is to make you smarter, and just using it as a simple Google query function isn't going to get you anywhere. In fact, it's probably going to remove you from the race entirely. I'm also mad at the designers for making it the single text box. I think it's such a terrible form factor for this type of intelligence where they're deferring the hard design choices to the user. They're like, here's a text box, do what you will, instead of guiding them through the process of how to effectively extract value. So I don't necessarily blame my friends for that. Yeah. Well, there should be an
Starting point is 00:34:33 addictive app player. Don't you agree? Joshua, I think we've discussed this kind of like, offline as well. It's like, like, what's the TikTok for AILLM usage, right? Like, what are like trends that I can just get involved in and ask or talk to it, you know? To me, that's one of the biggest opportunities is how do you package this into ways that are exciting and accessible to the average person who would choose this over Netflix? Because still Netflix wins most times, because it's too challenging to use. But can you handle that design decision? Can you create something that is better, more addictive than Netflix, better and more addictive than TikTok, but gives you these leveraged tools to actually do something productive
Starting point is 00:35:08 versus just consuming all the time? And that's a really fun problem for people to solve. Oh, hang on. Let me ask you this, though, right? Do you genuinely think people are going to err more on the productive use of AI versus consumptive? I would argue it's the opposite. I would say there's going to be ChatGPT image gen in the future
Starting point is 00:35:30 for video. And people are going to be like, huh, I'm going to watch a bunch of these videos that people created. I don't think they're going to go on and be like, hey, here's a picture of Josh and David, create a series of us. I don't think people are going to do that. They're going to be like, I'm just going to scroll through some of these. Both will definitely happen. Because we saw when YouTube came out, TikTok came out, Instagram Reels came out, Instagram generally, everyone got more access to creativity and to producing stuff easily. And also, you just created a thousand-x
Starting point is 00:36:06 in consumption. So you will get both. Yeah. I just think it'll lean more on consumption. I'll place a bet with you right now. I'll place a bet with you right now. I'm not going to take you up on that one. Yeah, that one scares me. I mean, imagine TikTok if you can get hyper-custom videos just made for you every time you swipe. Yeah. Image gen is getting
Starting point is 00:36:24 very good, very quick. Yeah. That's a scary future. Like, are we just going to brain rot ourselves away? Is that Darwin's, like, final form? Is that a future or is that the present? We're there. I think it just gets worse. Well, to go to Josh's earlier point, he said to relate to sci-fi, most of them are low-key kind of depressing. Do you know what I mean? Like, look at, like, Dune, look at, like, fucking Star Trek and all these, like, wars and shit. It's not going to be easy.
Starting point is 00:36:48 I don't think so. This is like Darwinism's final frontier. If, like, the human race can survive the increasingly addictive nature of AI content, then we can survive as a population. If we lose, like, that's it. ASI just runs the world. Forget global warming. It's brain rot.
Starting point is 00:37:01 That's the existential threat to humanity. We're cooked. It's over. So we'll see who makes it through. Yeah. Introducing Unichain. Built for DeFi, empowered by Uniswap, Unichain is the fast, decentralized layer two designed to tackle blockchain speed and cost challenges. With its mainnet now live, you can enjoy transactions up to 95% cheaper than the
Starting point is 00:37:19 ETH layer one, all while benefiting from an impressive one-second block time that will be getting even faster very soon. Unichain is the first layer two to launch as a stage one rollup on day one. That means it comes with a fully functional, permissionless proof system from the start, increasing transparency and further decentralizing the chain. More than 80 apps are joining the Unichain community, including Coinbase, Circle, Lido, Morpho, and Uniswap. You'll be able to bridge, swap, borrow, lend, launch new assets, and more from day one. Built by Uniswap Labs, the team behind the protocol that's processed over $2.75 trillion in all-time volume with zero hacks.
Starting point is 00:37:51 Unichain truly enhances DeFi experiences with faster, cheaper, and seamless transactions, even across chains. And soon, the Unichain Validation Network will allow anyone to run a node and earn by securing the network. Visit uniswap.org and swap on Unichain today. You may have already heard about Infinex. Infinex has, in my opinion, the nicest cross-chain swap and bridge feature that you will find anywhere. It is called Swidge, swap and bridge, and we're going to show you what it looks like. First, we're going to log into my Infinex account with a passkey. Now, there's no seed phrases in Infinex.
Starting point is 00:38:22 This is just a one-click set up with biometric pass keys. But in addition to that, my Infinex account is fully non-custodial. So bam, I just logged in. It was two clicks, and I'm already into my Infinex account. So let's go make a switch. I'm going to go swidge my USDC that is on base. And I'm going to buy Barra chain, which is a completely different chain. So we're going to switch this. I'm going to press that button.
Starting point is 00:38:43 And then Infinex is going to execute this order, this cross-chain order, for me. And now it is done. But actually, I'm not really feeling bearish anymore. So I'm going to go from Bera to Penguins. I'm going to buy Pengu on Solana. So I'm going from Berachain to Solana. See, no transaction signing, no gas to worry about. You just swidge across whatever chain that you want with Infinex.
Starting point is 00:39:01 That was so easy. Go check out Infinex and try your first switch today. Imagine a world where your day-to-day banking runs on a blockchain. That's exactly what Mantle is building, powered by a $4 billion treasury and poised to become the largest sustainable on-chain financial hub. As part of their 2025 expansion, Mantle is introducing three new core innovation pillars that bridge traditional finance with decentralized technology. First is their enhanced index fund, aiming for $1 billion in AUM by Q1.
Starting point is 00:39:27 It provides optimized exposure to Bitcoin, ETH, Solana, and USDC, complete with built-in yield opportunities. Next, Mantle Banking promises to revolutionize global value transfer through seamless blockchain-powered banking services, bridging crypto into your daily life. Finally, Mantle X blends AI with DeFi to deliver an intelligent, user-friendly experience for everyone. And the best part is that this is all in addition to their already launched products like Mantle Network, mETH, and FBTC. Ready to step into the future of finance? Follow Mantle on X at Mantle underscore official and join the on-chain revolution today. All right, let's go into Tobi Lütke's memo. He's the Shopify CEO. And he released a memo that he wrote to the Shopify company, to Shopify employees.
Starting point is 00:40:10 He just released it on Twitter. And it basically goes, the title of it is, Reflexive AI Usage Is Now a Baseline Expectation at Shopify. And it's basically a medium-length memo whose punchline is: you, as an employee of Shopify, are now expected to use AI in your job. So he lays out five, six points. Number one, using AI effectively is now a fundamental expectation of everyone at Shopify. Two, AI must be part of your GSD prototype phase.
Starting point is 00:40:38 I don't know what that means. Three, we will add AI usage questions to our performance and peer review questionnaire. Four, learning is self-directed, but share what you learned. Five, before asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI. Wow. And then six, everyone means everyone. This applies to all of us, including me and the executive team. So the CEO of Shopify is stating AI is an expectation inside the company.
Starting point is 00:41:06 Since he released this on Twitter, on X, for everyone to read, many other CEOs chimed in saying, yes, we have also implemented this expectation, or we are going to implement this expectation. Even at Bankless, we have an internal server channel that we just titled AI Lab, and it's every member of the company just talking about how they use AI in their workflow. So I think this is already becoming, like, the industry status quo. It's just like, yo, use AI. Share how you use AI. We're all trying to figure this out together. But Tobi, I think, kind of just made this shot across the bow for all tech company leaders to say, like, yo, it is an expectation to be using AI. And if you don't use AI, you're out. Josh, when you read this memo,
Starting point is 00:41:52 what do you think? I was really excited about it. And I think it's one of those cases where he's just saying the quiet part out loud, because a lot of people don't want to scare other people who are looking for jobs. But he's basically saying we are going to stall hiring and we are going to try to increase the leverage of our employees as much as possible. And I love the collective exploratory thing that he has because I think that's the most valuable thing. Going back to what we were talking about in the last segment, no one really knows how to maximally extract value from this, especially in narrow use cases like a particular company like Shopify. So collectively getting everyone together saying you must use this and you must share
Starting point is 00:42:26 your findings, I think, is very productive. I forget the specific numbers, but I believe Shopify is getting 40% growth. And the expectation is that the employees can match that growth using this leverage tool. So he's expecting a 40% growth in production from individual employees just by leveraging these tools. I think it's super exciting. It is a little scary because this is a large company who is now saying, hey, we're
Starting point is 00:42:49 not going to hire anyone unless you can prove that we really need them, because AI is able to do it. And this is after they had pretty large layoffs in the last 12 months. So it is a new world order. It is a shift that is permanent. I don't see this changing in the opposite direction anytime soon. So let me just go ahead and look. There are 11,600 employees at Shopify. And so the CEO of an almost 12,000-employee company saying, hey, we are not hiring anyone unless you can prove that you cannot do that job with AI, I think that is pretty alarming from a job market's perspective, because that means that maybe you're not cutting anyone, but adding new jobs, I think, is not going to be a thing. It's going to start to diminish inside of Silicon Valley.
Starting point is 00:43:39 Well, I also think a point that goes overlooked in this memo is he's essentially asking every employee to 10x themselves. So he uses the example of, you know, you've all been in your teams and then come across that one golden employee that just 10xes the entire team, right? He says, well, now the tools that each of you use and have access to have the ability to 10x yourself. So, you know, it's not just a matter of, like, we're not going to hire employees and you guys maintain the 20 to 40%. As he states here, I think he kind of wants people to, like, 10x the ability of each of themselves, which would mean, you know, what does a potential 100,000-plus version of Shopify look like, right? It's a $100 billion company as it sits right now. And Tobi has, I think,
Starting point is 00:44:28 due to his history, definitely been one to push with these frontier technologies. Do you guys remember when NFTs were the craze? Shopify was one of the first ones to test it out, you know, token-gated access to A, B, and C service, right? And, you know, as nascent as they were, they canned it, but, like, I feel like AI is just a little bit of a different point. And the second thing, to kind of emphasize Josh's earlier point, is that
Starting point is 00:44:56 there's a slight nuance here, because people have been afraid that AI is going to replace them, but he is saying the quiet part out loud, but doing it himself. And I think CEOs are actually, and this is the subtle nuance here, going to push AI onto their employees and, like, oust their employees first. So it's like a human-enforced displacement versus AI, you know,
Starting point is 00:45:20 creating this, like, tool that just replaces you entirely and then just kind of gets slotted in. Do you know what I mean? We'll see approaches from both. You know, you've seen customer support agents replace people so far. But I think top-down kind of, like, pushing of AI tooling to replace individuals or stop hiring people is definitely going to become more of a trend. It feels like that rift is growing. And this is kind of the trend from the last segment, which was, are you going to use it for the Netflix version or for the productive output version? It's the same thing with Shopify and all these other companies, where these are high-leverage tools, where if you are a person of high agency who uses them and leverages them, you will get very far ahead.
Starting point is 00:45:57 But if you don't, you will be left behind by that amount of leverage. So it creates this very large gap that might not necessarily exist today, but will become increasingly large very quickly. And that probably causes a lot of concerns for people, too, because now there are the people who leverage it, and now they are 10x employees, and there are the people who do not, and now they are just 1x. And that's a much larger divide than someone who's, like, 5x versus 2x or whatever it may be. We can even go back to the AI 2027 document that we were talking about earlier. Down at the bottom of the document, you know, towards the end times, closer to 2027, there's this hypothetical AI lab, which you could just think of as ChatGPT or OpenAI.
Starting point is 00:46:37 They just didn't want to use OpenAI specifically. They even talk about all of the AI developers. By the time that we approach the superintelligence explosion in 2027 that they think is coming, all of the AI developers that are working for the AI labs have all made themselves redundant. And they're all kind of just there
Starting point is 00:46:55 doing loose oversight without actually doing much of the work. And so maybe even the AI labs will be the first people to actually fully AI-ify their workforce, because they are the ones who are intimately aware; they know how to use these things the best. And so maybe the canary in the coal mine is that actually AI labs start to use their own models as their own employees. And all employees of, like, OpenAI or Anthropic or DeepSeek or whatever,
Starting point is 00:47:24 they actually end up being AI managers more than actual, like, laborers themselves. So that's kind of the model that AI 2027 gave in their document. Yeah. I would say that's the whole point, right? What is ASI? What is AGI? It is replicating human intelligence, but not just replicating it, accelerating it completely. It just reminded me, didn't OpenAI tease their agents that they're releasing soon,
Starting point is 00:47:52 and it goes through different tiers, with the top one being $20,000 a month, and it's like a PhD-level employee that can come and work at your company? And then you had something stupid, like 200 bucks a month, and it's meant to be a high-knowledge worker, which is basically, like, most people at, like, FAANG companies or the banks. Yeah. Yeah. Most people. I definitely think that that's the case.
Starting point is 00:48:17 And, like, to go to the point of the AI 2027 article, one of the key themes is they want to use AI to improve itself iteratively over time. So it seems like that will be the main place that jobs get displaced first. Yeah. Okay. Let's turn to education, because this is something that came out this week. Anthropic just released Claude for Education, and this is basically trying to AI-ify, you know, higher education up to the college level.
Starting point is 00:48:50 And so this is what it looks like. Part of this announcement is a couple of different things. Learning mode, a new Claude experience that guides students' reasoning process rather than providing answers, helping develop critical thinking skills; university-wide Claude availability, full campus access agreements with Northeastern University, the London School of Economics and Political Science, and a few other colleges, making Claude available to students; and
Starting point is 00:49:15 academic partnerships and student programs. And so basically, Claude is making a university as a product, or a school as a product, to help educate. And I think the important difference is, if you go and type into ChatGPT, hey, what's this answer, it'll give you the answer. But here, Claude for Education is working through how to arrive at
Starting point is 00:49:30 the answer for people that are querying it. And then I think maybe one big headline that I think everyone should reflect on is this one here from FoxNews.com: Texas private school's use of new AI tutor, that's Claude for Education, rockets student test scores to top 2% in the country. So a Texas private school started using Claude for Education, and then all of a sudden their students start testing in the top 2% in the country. And so this has massive implications.
Starting point is 00:50:07 We already know the university structure is already under threat. Universities have been going out of vogue for a while now because they are just entrenched ideological systems without any sort of, like, actual educational value, as students are taking on, like, a quarter million dollars of debt to become, like, a speech therapist, things like this. And so education has already been, like, needing a revamp. And now here we have AI, which is a use case that I think is just a match made in heaven. AI becoming your tutor, your teacher, in this very streamlined way. Ejaaz, what are your thoughts? I just don't see the point of universities within the next two decades going forward. And I'm probably being generous. I'm probably being super generous with that timeline, to be honest, right? If we're getting AGI in two years, why is that, you know, crazy to say? I think there's a few points I want to make around this. Number one, the way it's
Starting point is 00:51:04 packaged, this product called Education makes it seem like it's meant to aid lecturers. And to use your analogy earlier, David, you know, maybe employees become the coach of the AI, right? I don't think that's going to happen when it comes to education. I think it's going to become incredibly personalized. Heck, YouTube has already accelerated that massively. I've learned a ton of stuff that I wouldn't have normally learned via school, college, university, or any kind of prestigious degree that I would have ever attained. And I think personalized education just makes sense. Everyone is different. Everyone learns and comes from a different background. Why would it make sense to, you know, use this arbitrary person, I'm sorry, I'm reducing professorship massively, but who has been, you know,
Starting point is 00:51:48 honing their work and craft very, you know, commendably for decades, to then be expected to teach each individual in a certain or specific way? It just seems like AI is meant to scale this kind of thing. The Texas data point kind of proves that. But the second wider trend that I'm noticing here, David, is that I just don't think people are using AI effectively. Like, I would love to contest and see what is so special about Claude for Education. I would bet you that it's the exact same model. It's just being crafted and curated in an educational circumstance. So behind the scenes, they've just given it a long-ass prompt, which says, you are a professional university educator now, and you are going to specialize in Economics 101.
Starting point is 00:52:33 Please identify and analyze the London School of Economics syllabus for 2025, term one, and create an agenda and schedule lectures, et cetera, et cetera, right? And then it's just being fed straight to the people there. So I put out this tweet recently. I don't know if you can pull it up, but I basically caught up with some friends, and they were telling me how they spent four-plus hours, just this week alone, prompting ChatGPT within a single chat to improve their life and how they learn. Now, if you look through this tweet, you know, when I asked why they were doing
Starting point is 00:53:12 this, they said, so that you can tell me what job I should do next, you know. And this is in response to me asking him why he put his entire career history into ChatGPT. And then another guy said he logs his entire set of notes from therapy and asks ChatGPT to give him life advice. I said, what do you mean? Like, what qualifies ChatGPT for this? But he goes, it doesn't matter who ChatGPT is. I just tell ChatGPT who to be. So I said, who did you ask him, or it, to be? And he goes, I simply said, ChatGPT, you are now Carl Jung, one of the forefathers of psychology, modern-day psychology. And Carl Jung is, you know, Sigmund Freud's...
Starting point is 00:53:58 So he's hot Freud. That's a good way to describe him. And he basically said, here are all my therapist notes. Also, you have the entire history of my chat log with you, talking about random arbitrary recipes and random Star Trek scenes that I missed, and then random hypothetical situations that, you know, I never ended up getting into, but I wanted your opinion on. Can you create an identity or personification profile of me? And then tell me, what's wrong with me? How can I improve?
Starting point is 00:54:27 should I have broken up with that girlfriend, or should I get back together with her? You know, it's this crazy thing. And then the third kind of, like, subtle point is, like, we are just giving troves of our personal data to this thing, you know? I asked him, like, how on earth did you focus for two hours giving all your career history? He said, oh, I just spoke to it. I just put it on voice. And I spoke to it.
Starting point is 00:54:46 I'm talking to another person there. And I think, you know, there's a dark side of it, which is like, you know, we're giving away all our data. we're going to, you know, give away all control to this AI. But then there's the positive side, which is like, if you use this very well, you could end up having similar effects to that Texas school, that shot to the top 2%, right? And it maybe had no business being there without AI, right? I don't know, Josh, do you have any thoughts on this?
Starting point is 00:55:10 Yeah, I am in love with this headline because I have been at war with school for so long. It is such an abomination how people are educated. And it's the AI segment is just the acceleration on the gasoline that has already been happening with self-education. In the case of high school, classrooms are pretty crappy. Most of them fall to the quality of the most disorderly student. It's a lot of disciplinary stuff. It's not very custom. The customization part when you get to high school or college is the same thing. It's the same system for every single person. It's the same lectures, the same classes. Everyone learns differently. There is no reason why it shouldn't be hyper-customized to you and why
Starting point is 00:55:47 you should not just have your own AI that understands your preferences, understands everything about you, and is able to deliver this information and educate you in the way that you most effectively learn. And I think the privacy thing is a concern. But even if the illusion of privacy is given, that's probably good enough because that's the reality we live in today. Where all of your cookies are tracked, your online traffic is tracked, your GPS location on your cell phone is tracked. It's not very obvious. The average person wouldn't know that. But it still happens.
Starting point is 00:56:15 So the privacy is a concern. But if you can get around that and give people these hyper-personalized AIs, it is so much more effective than college. The only tricky part is, if people do start self-educating using AIs, and if they do get these hyper-custom models, they lose the social elements of college and education and socializing in general, which contributes to this increasing trend of people just being a little more quirky socially. But we have the internet, right? We have our little digital avatar friends, right? But to me, it seems like there's absolutely no reason why it shouldn't be hyper-customized. And it's reflected in this headline, which is that the people who are using AI are now in the top
Starting point is 00:56:51 2% in the country. And that, this is very early AI. It will only ever get better from here and get more personalized as understands your preferences. So I am all for it and very excited about it. Okay. So if we were to take the other side of that, Josh, some immediate thoughts that come to mind and I'm curious what you both think is number one, alignment of these models need to be super important and governed very well. Can you imagine if the creative model came in with an agenda, you know, to sway the masses into a particular. educational, political stance, for example, or just, you know, teaching basic math completely incorrectly. Obviously, that's a more extreme example. What happens when humans lose oversight,
Starting point is 00:57:32 basically, right? And they stop fact-checking. Which leads me to my second point, which is: do we just become incredibly lazy? It sounds super productive, right? It's like, hey, we're going to learn all these different things. But then do we just stop trying to pioneer? It goes back to: what do we do when AGI manifests, right? Do we just become incredibly lazy and say, oh, you know, let AI discover who God actually is and figure that out? You know, where do we end up? Imagine verifying yourself without handing over personal data. No hacked databases, no unnecessary personal exposure for airdrops, and no AI bots ruining community governance. Meet Self, the on-chain identity verification protocol built for privacy and control. Self Protocol uses
Starting point is 00:58:12 zero knowledge proofs to confirm your identity safely. Users prove key details like age or citizenship without revealing sensitive personal information. Self never stores your data. It only generates cryptographic proofs. Here's how it works in three steps. First, register and verify. Use the self app to scan your biometric passports RFID chip. Self verifies authenticity with zero knowledge proofs. Each passport creates one unique identity. Second, you can share proofs privately. Third party apps request identity proofs like confirming your over 18. You can also link proofs securely to public wallets for air drops or governance participation. and then last secure verification.
Starting point is 00:58:49 Apps validate your proofs instantly on chain, like on cello or off chain. Audited by ZK security, the Self app is live on iOS and Play Store. Visit Self.x.Z and follow Self Protocol on X. The Arbitrum portal is your one-stop hub to entering the Ethereum ecosystem. With over 800 apps, Arbitrum offers something for everyone. Dive into the epicenter of Defy, where advanced trading, lending, and staking platforms are redefining how we interact with money. Explore Arbitrum's rapidly growing gaming hub from immersed role-playing games, fast-paced fantasy
Starting point is 00:59:21 MMOs to casual luck battle mobile games. Move assets effortlessly between chains and access the ecosystem with ease via Arbitrum's expansive network of bridges and onrifts. Step into Arbitrum's flourishing NFT and creators-based where artists, collectors, and social converge and support your favorite streamers all on-chain. Find new and trending apps and learn how to earn rewards across the Arbitrum ecosystem with limited time campaigns from your favorite projects. Empower your future with Arbitrum.
Starting point is 00:59:48 Visit portal.arbitrum.io to find out what's next on your web-free journey. At the end of the day, school, the institution of school, public school at least, public school is basically glorified daycare just to get kids out of the hands of their parents so the parents can go to work and be productive. And then school just teaches you, public school just teaches you the basic modicum of knowledge, but really just puts you in a place where you won't get into trouble for like eight hours, six hours of the day. And that is so homogenous because we are all going through this extremely homogenous school program all across the country, all across the United States. Like, yeah, it differs from state to state, but like not really.
Starting point is 01:00:32 And I think what we are, what we will see with education is the same thing as what YouTube or Instagram Reels did to mainstream media where there were just like seven news channels. and then that exploded into 70,000 YouTubers, all giving their takes on the news, some terrible, some great, some way better than mainstream media. But it's really, you know, choose your own adventure. And now we are going to see that fragment even further into just every single student has their school. It's one school per student.
Starting point is 01:01:05 I don't know if you guys went to public school, but I remember class sizes were a huge issue in high school, where they were packing in 35, kids into a classroom and it always got bigger every single year. And this is the exact opposite of that where you get one hyper-personalized teacher, professor, who's the best at their job to teach you exactly what you want to learn. Have you guys ever asked chat GPT, generate an image of me based on everything that you know about me? You guys have done that? Yeah. Okay, so it comes, it is, you first you forget, you first realize how much data you have given.
Starting point is 01:01:42 in chat TPT about yourself in ways that like you have not given any data to any other application before because like it's truly friendly it's a friendly chat chbtee especially is very supportive uh and so you i've anyone who's using chat tpt go open it up ask generate an image of me based on everything that you know about me and it'll generate an image of you and you will realize how much it knows about you and your style uh and think about how high fidelity that relationship that it has with you and now now it's your professor right it already knows you all of your interests and your style and your quirks and now it's now it's your school system to carry you or your child from like six years to 16 years old i think it's i think it's really cool cool uh i'm
Starting point is 01:02:30 not really in my learning years anymore but nonetheless i'm still looking forward to like learning some university level education from chat chabit i just go rejected i just go rejected i just go rejected David. I don't have any personal details about your appearance or characteristics, so I can't create a tailored image of you. If you'd like, you can provide me with some descriptive details, blah, blah, blah, blah, blah. What do you mean? I've given, like, my entire life to this thing. You just force it to make an image. Just say, make your best guess. Okay. Let me give it out. We'll see how it goes. I have seen this come out wrong where like my, you know, generic looking white friend came out as like a black overweight female. Nice. I was like, wow, that's different. Mine was startling to the point where it hadn't, the actual. view out my window was correct. It knew what I was looking at and everything. I was like, wait, this did I ever tell you this? Wait, wait, wait, wait, what do you mean? What do you mean?
Starting point is 01:03:18 It had the William Oboot? I don't have docks you, but yeah. Yeah, no, it had, it had the bridge in the background of my image and I was like, wait, that's the exact view outside of the window. Um, well, we must have spoken about it because it didn't just learn this on its own. So we were talking about something moving, whatever I was looking at, trying to figure out the name of whatever street it was, but it knew. And it was very accurate. It had a little drone in the background and a camera and a microphone. And I was like, wait, This is weird. That's kind of like my room.
Starting point is 01:03:43 Okay, you guys want to see mine? My image. Oh, please. I love to. All right. Here we go. Here's me. For the listener, we have a guy who's definitely on the Brooklyn side of New York with the New York skyline in the background.
Starting point is 01:04:00 He's holding an espresso martini. I have a rock climbing rope slung around my shoulder, an Apple Watch, a climbing helmet, and an espresso martini because it knows that I ask how to make cocktails. pretty frequently. This is pretty good. It also works through its logic as well. It just tells you its logic and its workflow and it's like pretty cool. Can I share mine? It actually surprisingly nailed it one second.
Starting point is 01:04:25 Yeah, yeah. I'm shocked. And I'll put it on screen. Here we go. Nailed it. That's funny because I think we both look like generalized AI engineers. Yeah. Yeah. For some reason I look entirely like, you know, an even nerderier David Hoffman. That's crazy.
Starting point is 01:04:45 Much nerder, yeah. You are not the attractive Indian man that we see on screen. Maybe potentially a tech billionaire? I can't quite tell. Like, I know he has, this guy has at least 12 hoodies. Yeah, wow. All right, let's bring up the rear of this,
Starting point is 01:04:59 this AI roll up as we close this out. The last big one is, the news of the week, is A16Z, not AI16Z, the actual Areal A16Z, raising a $20 billion mega fund to invest in AI startups in the United States. So that's pretty damn big.
Starting point is 01:05:19 I mean, this is A16Z's job. It's not any surprise that they're raising $20 billion. But I think it just goes to show how much juice there is left to grow out the AI industry. Josh, when you saw this headline, $20 billion, being raised, not yet raised, but being raised by A16C to invest in the United States AI.
Starting point is 01:05:36 What were your first thoughts? So I had an initial kind of like gossip. which I was like, that's a humongous number for a single fund. But then I was like, oh, yeah, no, that probably makes total sense, right? Like, it wasn't too long ago where, you know, Sam Altwoman was prospectively, you know, trying to raise a trillion dollars or whatever crazy figure. Was it $8 trillion, $7 trillion? Something crazy like that?
Starting point is 01:05:59 Yeah. So I don't think this is, you know, out of turn, especially given like some of the valuations that are coming. Do you remember, I think like Curser literally last week got like a $10 billion dollar valuation just based off its recent round. So I think, you know, these numbers kind of make sense. I'm kind of curious where it's going into, right? And I think what's most interesting about this particular announcement is they didn't just, they weren't kind of lazy about it. They weren't kind of just like, yeah, we're going to dump a bunch of this into like compute and data
Starting point is 01:06:29 center companies and la la la, la, create a model. They were like, no, we want to own the app player of whatever this behemoth of technology ends up becoming. And they use the term vibe creation platform. And so for those of you who are wondering what the hell that is, think of cursor, which is a vibe coding platform, but instead it's a platform that can hit almost anything that you want to do, right? Not necessarily coding. It could be, I want to become the next biggest music artist and perform on my concerts
Starting point is 01:07:03 all around the world. Well, to start off with, I need to be able to create some music, right? So what if I go on this vibe creation music platform and I can, you know, become the next superstar, right? And what's the difference between like, you know, just like this and maybe, I don't know, Apple's garage studio where you can kind of like mix beats is one, it uses AI so you can just like prompt and type in the kind of flow and you can edit your music and all that kind of stuff. But then it also connects to your socials directly. It also ties and brings in all your friends to listen to it in a really easy, simple manner. What they're trying to do is create this unanimous platform, right? Or they're trying to fund founders that are building this unanimous platform to connect different things.
Starting point is 01:07:43 It doesn't have to be busy. It could be movie editing or whatever that might be. And why I found this so interesting, and I'm curious to hear whether you two have the same take. Obviously, we've spent like a lot of our time in the Web 3 world, right? And the whole promise of the Web 3 world was to kind of connect all these different intermediaries and entities to create some kind of unanimous experience, it kind of sounds like what they're trying to do. And I am super happy to see this investment happen at the app player.
Starting point is 01:08:14 I mean, Josh, what do you think? I'm just so excited to see a lot of money going to something other than training. I think all of the money that's been raised is going to GPUs and to building larger clusters. And it's exciting to see smart people with a lot of money shifting away from that focus on cluster training to actual application layer. It goes back kind of to what we were talking about earlier, where there is just this single text window and you're supposed to extract this value
Starting point is 01:08:36 and they're slowly narrowing those use cases and I think that's the most valuable thing that you can do is just teach people how to get value out of this. So using your example, he does. If you want to become a professional musician, this is the narrow box that we have built that is optimized for allowing you to do that thing.
Starting point is 01:08:51 And so far those solutions don't exist. It's still very broad. So investing $20 billion into creating narrow but hyper useful AI for whatever you might be interested in is so, so exciting. So I'm very pumped about this. one. I remember reading a tweet thread last week about how this one, like, a mall commercial real estate investor had just revamped his entire process with architects that he would work with
Starting point is 01:09:15 for his whole like mall reinvitalization process. So the idea here is like he would buy up a cheap distressed mall. He would work with an architect to plan a new design, revitalize it, and then increase the value of the mall that he bought, he invested in. And the architect design process he said would be so much back and forth about how to make a good looking mall, like the six to nine months of back and forth in labor with his architect. And he was able to take a picture of this mall,
Starting point is 01:09:45 prompt chat GPT to get something that was 90% of the way there. And that was the new starting point that he had with this architect to go. And he said it just cut off like eight out of the nine months of time frame for that one development process and saved like tons of man hours.
Starting point is 01:10:01 And the whole thing, that I was thinking about when I was reading this, is he is doing this inside of chat GPT, which is unoptimized to do this job, but it is still the right brain to do it. So just like Josh has been harping on this entire episode, where this form factor for how we engage with chat GPT is so constricting because it's just this text box
Starting point is 01:10:22 that looks like Google, but nonetheless it is doing incredible work. It doesn't take that much more effort and labor from a startup to make a architect and design-optimized, interface for chat GPT. There's another wrapper of chat GPT for a more narrow but specialized application on this thing. And you can apply that across every single industry under the sun. And we just, there's just so much investment in L1.
Starting point is 01:10:48 I see the same pattern here between the L1 investment or the FAT protocol investment from the crypto space where people want to invest in the layer ones. They want to invest in Ethereum. They want to invest in Solana. Do they want to build a layer two or do they just want another build another layer one? they just want to invest in the blockchains and they don't want to invest in the apps. And that is the issue plaguing the crypto space. And right now we have plenty of layer ones.
Starting point is 01:11:10 We have Anthropic. We have Open AI. We have Deepseek. We have all of these different. God damn. LG, the television company came out with an AI model layer one. And right now, I think what Josh is saying is like, yo, we need the app layer of all these layer ones. We need all these layer ones to become more expressive, more opinionated, more directed at specific outcomes.
Starting point is 01:11:32 And I think that's what Josh is excited about with A16C, actually building out the app layer and not just funneling more money into just making chat EBT that much better in a very generalized, broad, diffuse sense. And it goes back to the benchmarks, too, is now you have these like predefined benchmarks across these categories that you can actively compare because it is a more narrow band of scope. So you can see who can actually create the best architecture drawings. And that is a clear benchmark that you can set once you have these narrow focused purposes. So that's another way to solve the benchmarking thing is like, okay, you could have your crazy good models, but narrow it down into something that's actually useful for me. And then we could compare how that helps me personally. I love how we just went full circle with that. I love that. I think that's a great way to end it. Josh is Jaws, thank you for helping it. Josh, thank you. Thank you for helping the bankless nation go through the weekly news this week. It was a big week. Next week will be even bigger and we'll be back in seven days to bring you the news in the AI space. Josh. Thank you guys so much. Also, thanks for having me. Bankless Nation, you guys know the deal.
Starting point is 01:12:31 We didn't even talk about crypto, so I need to find a new sign off there. I don't know if there's any disclaimers needed because, you know, crypto's risky, but we didn't talk about it. So I guess this is the frontier. It's not for everyone, but we are glad you are with us on the bankless journey. Thanks a lot.
