Limitless Podcast - AI Robots and The $10T Arms Race

Episode Date: May 8, 2025

Welcome to the AI Rollup, from the Limitless Podcast. David, Ejaaz, and Josh break down the week’s most important AI headlines, from OpenAI’s $3B Windsurf acquisition and Google’s full-stack AI play, to Visa and Mastercard preparing for agentic commerce. We explore the state of robotics, major interpretability challenges, and why the race to AGI may outpace our ability to understand it. Plus: AI ASMR, glow-up GPT, and why autonomous agents still kinda suck. Stay curious, this one’s stacked.

------

💫 Subscribe and Follow Limitless
https://www.youtube.com/@Limitless-FT
https://x.com/LimitlessFT

------

TIMESTAMPS & RESOURCES

00:00:00 Intro To Limitless!
https://x.com/LimitlessFT

00:04:23 OpenAI Makes Huge Acquisition
https://www.outlookbusiness.com/artificial-intelligence/openai-to-acquire-ai-coding-platform-windsurf-for-3bn?utm_source=www.theaivalley.com&utm_medium=newsletter&utm_campaign=openai-reverses-for-profit-plan-ai-breakthroughs-in-robotics&_bhlid=7f1b29a5128efe71af21ff4cb81c530bdcac095e
https://x.com/ns123abc/status/1912876350911754676
https://www.testingcatalog.com/google-tests-computer-use-tools-and-cloud-run-hosting-in-ai-studio/?utm_source=www.theaivalley.com&utm_medium=newsletter&utm_campaign=openai-reverses-for-profit-plan-ai-breakthroughs-in-robotics&_bhlid=bd7931dcfd449aa11b0bf7197fdfa983289758c2

00:08:01 Why Spend $3 Billion??

00:16:09 OpenAI Memory Update Is... Interesting
https://techcrunch.com/2025/04/18/chatgpt-will-now-use-its-memory-to-personalize-web-searches/?utm_source=www.theaivalley.com&utm_medium=newsletter&utm_campaign=openai-launches-o3-and-o4-mini&_bhlid=d08c6dc54c407d4f53df9d11a68857dba951a21c

00:23:56 OpenAI Re-Structuring
https://www.reuters.com/business/openai-remain-under-non-profit-control-change-restructuring-plans-2025-05-05/?utm_source=www.theaivalley.com&utm_medium=newsletter&utm_campaign=openai-reverses-for-profit-plan-ai-breakthroughs-in-robotics&_bhlid=c4222c046fc76f42b7f72ca72537e4e8620a2af1

00:37:24 Visa Credit Cards For AI Agents
https://apnews.com/article/ai-artificial-intelligence-5dfa1da145689e7951a181e2253ab349

00:43:21 The Crypto Bull Case

00:45:43 The Deca Trillion Robot Opportunity
https://x.com/adcock_brett/status/1913986971501748390?s=46
https://x.com/kimmonismus/status/1919510163112779777
https://developer.nvidia.com/isaac/gr00t
https://x.com/adcock_brett/status/1916523708153217525?s=46
https://x.com/adcock_brett/status/1919060515998822898?s=46

00:52:44 Dogs With Machine Guns

00:58:11 Using AI To Do Your Job

01:02:12 The Interpretability Talk
https://www.darioamodei.com/post/the-urgency-of-interpretability

01:09:18 AI Neurons?

01:15:27 The Dopamine Section
https://x.com/venturetwins/status/1917640408349434106
https://x.com/ns123abc/status/1918703088598184321?s=46
https://x.com/venturetwins/status/1919057071145672949
https://x.com/aisafetymemes/status/1914003415191212112?s=46

------

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

Transcript
[00:00:03] Welcome to the AI roll-up, brought to you by the Limitless podcast, where we stay up to speed with the emerging trends and developments in the AI space. I'm David Hoffman, here with my two co-hosts, Ejaaz and Josh. Ejaaz, how you doing? I'm doing well, David. It's been quite the week. Fewer frontier model breakthroughs this week. No frontier model breakthroughs this week? I think that's a first.
[00:00:25] There's a few, but we're putting them on the back burner because there are some more important things to talk about. It's been an M&A week, so, you know, billions of dollars are being spent to purchase or acquire some of these AI startups that we've spoken about previously on the show. But more importantly, AI has looked like this kind of gargantuan cloud, right? And it's kind of ominous. I'm like, is this cloud going to rain on me, or is it just going to pass by peacefully and just be nice to look at? And I'm starting to see the light.
Starting point is 00:00:55 I'm starting to see the kind of formation of what this AI stuff is going to look like. And the reason is I think we've spoken about a bunch of different things, right? We're spoken about agents. We're spoken about autonomous execution. Like, hey, these agents are going to go run and do a bunch of things for you, run companies for you. We've spoken about all these, like, amazing models and, like, how smart they are. But we haven't really pieced together what the end goal is going to look like. We're starting to see a lot more of that this week by some of the moves made by OpenAI and Anthropic.
Starting point is 00:01:23 So I'm really excited to get into that. Yeah, as we talk more and more about AI, I think we are all starting to synthesize a little bits of the conversations that we've had on previous episodes, put them all together and kind of understand the contours and the shapes as this thing comes, makes it becomes more and more real. Josh, I introduced this as the AI rollout brought to you by the Limitless podcast. That is the first time that this has ever been uttered on this podcast feed, which is the Bankless podcast feed. But we are launching Limitless this week, which is very exciting. Talk to me, Josh, about Limitless. Oh, man, big week. Limitless kind of stemmed from like a natural extension of our curiosity at Bankless. Bankless is mostly a crypto company. We started getting
Starting point is 00:02:01 interested in AI and then naturally we kind of spread out to robotics, manufacturing, and energy, and all of this weird stuff that's happening as downstream effects of this new wave of AI. So Limitless is a place for that. It's the home for all of the stuff that is not crypto, is not economic, is not macro related. It's all the frontier technology, the AI stuff. That's where this show is actually going to be living, is on the Limitless feed. So if you're interested in tech, if you're interested in frontier technology of any sorts, if you're interested in AI, Limitless will be the new home base for all of that. And we're launching this week.
Starting point is 00:02:33 Yep. Yeah, yeah. So that is the call to action. What we are doing is we are taking all of the AI content, all the frontier tech content, which we've been pretty aggressively exploring a bank list. We're putting it into its own feed, which it totally deserves. And I know there's a lot of people who have been following bankless for the AI stuff, but not for the crypto stuff.
Starting point is 00:02:51 And so we are creating a dedicated feed for that. There's also followers of banklist, listeners of bank lists who are like, oh, another AI episode. Weird reaction. but I get it. It's interesting. Like me and Jaws and Josh, we're all here in New York, and we'll go to crypto events that are about
Starting point is 00:03:05 AI. And so it's actually a crypto event by crypto people for crypto people, but it's about the AI subject. So AI has broad appeal, so we're giving its own dedicated podcast feed. You're still going to be hearing these episodes on the bankless feed as we incubate the new Limitless podcast. So there is a link
Starting point is 00:03:21 in the show notes to go subscribe to the podcast to get that podcast in your podcast player and also subscribe to the YouTube because we are making some AI first AI primary content there. And then we're also going to do the regular bankless thing of interviews with big founders, big investors, entrepreneurs in the space who are really helping shape this future. And also just to double down on what Josh was saying, AI, definitely, but also frontier tech.
Starting point is 00:03:45 So we're talking like rockets. We're talking robotics. We're talking everything that's going to make the future even weirder than it already is because it feels like that is coming very, very soon. But this is the AI roll up, which again, also happens every single week. week. Topics of the week that we're going to talk about, Ajah has already hinted it. Open AI making big moves, a $3 billion arms race to gobble up all of the valuable startups in the AI space. Visa is giving AI agent's credit cards, letting them go while on the internet. What happens when you
Starting point is 00:04:10 put an LLM into a robot, a real physical robot? What happens next? And lastly, the hard problem of interpretability, why it's important for making sure we can save the future of humanity by cracking this interpretability problem. But Jaws, we are going to start with Open AI making three big, big moves, a $3 billion arms race. What happened in Open AI news this week? Okay, so let's get into it. This has been basically the headline news all week, and everyone just couldn't stop talking about it yesterday.
Starting point is 00:04:39 Open AI is officially making its first major acquisition by purchasing a company called WindSurf for $3 billion, with a B. Now, if you're wondering what on earth WindSurf is, you might have heard David Josh and I speak about a company previously on this series called Cursor. And Cursor is basically this kind of vibe coding platform where you can type in a prompt, typically like you do on chat GPT,
Starting point is 00:05:06 but instead you could ask it to create a new app for you or a new piece of software or maybe even a fun game that you know and love. And it would just code it all right up in front of you. And the, let's say the environment or the software that allows you to do that is something called an IDE, which stands for integrated development. environment. If you're wondering what that is, think of it as a software suite, which includes
Starting point is 00:05:31 like a coding editor, a compiler, and a bunch of other things that you need to basically make software. It just makes it really, really easy for you, and then it adds AI on top of it. So you just speak to it like a normal person and it just kind of like pops up an app. Now, Winsurf is in the top two of these types of companies. You've got cursor, then you pretty much have Winsurf at this point. And Open AI will now compete directly. with things like cursor, lovable, and replet, which do very similar things. And in my opinion, this confirms that there's pretty much an arms race to build the best AI coding assistance ever. So imagine if you could be like the number one tool or platform to the leading software
Starting point is 00:06:13 engineers and companies of tomorrow. That's basically the TAM, the total adjustable market. That's, that's basically the market that you're going after. Now, one kind of like interesting caveat that I want to include here is that this is this happened after open AI tried to buy cursor that's the number one platform that does this twice so they had to get the second best yeah they basically had to get the second best and like kind of like my initial gut reaction when I read that was huh I wonder if cursors either holding out for like a google or a meta to purchase them and go for an even bigger number because they you know don't want to be bought by like you know Sam altman's startup but um you know it's a curious move. And a few other things I want to highlight is this acquisition follows some major moves by
Starting point is 00:07:00 other companies that are doing something similar. So Google, for example, launched their own cursor competitor called Firebase. And if you pull up this tweet, David, it's actually pretty good. And I think this example will be quite a good visual example so people can understand, you know, what on earth this does. So what you're watching right now, for those of you who are just listening, is a video of someone just sketching out a very, it looks like a three-year-old's on it, kind of layout of a website of an app that can basically help someone draw. And then what you see on the right side is the Google's Firebase platform creating it. And you can actually interact with the tool and draw and paint just like you did on like
Starting point is 00:07:38 Microsoft Paint. Very simple basic example. But can you imagine this being extended out to something else? It's pretty insane. What's your take on this, David? I think for the listeners, just imagine Microsoft Paint, but on the other side of it, outcomes an announcement. application, a functioning application. So, you know, Microsoft paint on one side, functioning application on the other. I'm actually curious to hear Josh's take on what's going on here. I think it's cool. I think the second I saw this news that reminded me of news from a few weeks ago, which was Google acquiring WIS for $32 billion, I think it was. I think this is an attempt at Open AI to kind of make their way up the stack. And it's not just about serving their own stack
Starting point is 00:08:16 better. It's about like now actually embedding themselves in the decision-making layers above that stack so they could shape the flow of traffic regardless of the endpoint. In the case of WIS, WIS was kind of like a security aggregator company. So Google owns Google Cloud, but it doesn't have any control over AWS or Microsoft's Azure servers. So what acquiring WIS does is it allows them, it gives them upstream access to that like security telemetry, the customer data flows, the threat intelligence, a lot of the data that they wouldn't otherwise have. And I think that's kind of what Open AI is doing here with Winsurf, where ChadGBT clearly dominates the consumer product space. They own that. They do very well. But in enterprises, a lot of companies are looking for
Starting point is 00:08:55 a thing that people are calling model orchestration, which is kind of like which model will serve my use case the best? And they're looking for aggregators to decide and make that decision for them. Well, Open AI just bought that aggregator. And they now own this routing and orchestration layer that acts like across this load balancing metric. So now strategically they get visibility over influence over who's using what they're using for and they could optimize their models for these use cases because they have the data. So now going forward, you might see chat GPT starts to get chosen a lot more by these aggregators because it has all the data of why it's choosing another model that isn't theirs.
Starting point is 00:09:31 And I think that's like a pretty interesting play. Is there a parallel here between like there are deals that Google has made in the past with browsers about like, hey, we will pay you X amount of money to make us the default search engine in your browser. So when you type a question to the toolbar, it automatically routes to Google. It makes a Google query, and Google gets the data from that and the information for that to make their product better. It sounds like this is something similar where they are just acquiring this endpoint, which has a direct relationship with its users. So Winserve has ownership over the users. It has the user relationship. But in the back of Winsurf, there's ChatGBTGPT.
Starting point is 00:10:07 So ChatsyPD now just acquires that user relationship. It gets ChatsyPT to be used by Winsurf. And so there's just more usage for ChatsyPT. and then there's also information about what chat chitpiti, how it needs to adapt, the information that data, like you said, goes back into chat chvety. Is there a parallel there, or am I hallucinating here? Probably.
Starting point is 00:10:26 It feels like there's a fine line where people go to WinSurf as WinSurf customers. They don't want to be spoon-fed chat chitpT if it's not the ideal model. So that could hurt with customers. I assume they probably won't do it that explicitly. But the implicit thing that they could extract value from is understanding the user's needs and the wants and when certain models are chosen and kind of optimized future versions of the,
Starting point is 00:10:47 chat GPT language model to serve those people. So the ideal case is that chat GP2 will actually be able to serve all of these while still gaining data from what people like about the other models. You could also argue that it's just, to your point, Josh, data that they want. Data is the end game, right? In whatever shape or form it comes in, in whatever kind of outlet that you can extract it from, that's what they want. If they want developer activity, if they want to grab it at the consumer level,
Starting point is 00:11:16 they already have it at the end consumer level, so now they want to try and get all the developer activity as much as they can. I wonder what type of a model or even end-to-end experience you can create based off of that, right? And to kind of like strengthen your argument here, I noticed that Google launched something associated with integrated computer use this week as well.
Starting point is 00:11:40 They have something called AI Studio, which is kind of like this environment that you're describing, Josh, which doesn't just include kind of like the end user stuff, which OpenAI dominates on, but it also allows developers to kind of build this kind of like synonymous end-to-end app experience, right? And what I noticed about this integrated computer use thing is that, you know, they really want to just own end-to-end product development,
Starting point is 00:12:05 the entire pipeline. And if you think about it, like you allow them to use their no-code, prompt-friendly platform to build your product, right, leveraging their models and you can inference it through their cloud system. And then on top of that, you can allow anyone to use it for themselves via computer use. So if you're an end user, right, and you're saying, hey, I really like that app that this developer's built, well, you can run it locally on your computer now and you can inference any model that you want, say, like, hey, I don't like that it's using this model.
Starting point is 00:12:35 Can we use something else? Sure. Like Google's AI Studio solves for that. And I think that's like a really important nuance that's going to make it super, super sticky. how they personalize this experience for users? I don't actually quite know. I'm wondering if you have any ideas, but just a very subtle but important improvement, I think, is worth knowing.
Starting point is 00:12:55 I've brought this metaphor up before in the past on this show, but the whole entire vertical, the gaming vertical of League of Legends and Dota is like arena battleers or whatever, a billion dollar industry in just that one type of gaming structure. And that gaming structure was created by, a Starcraft mod. So some individual using the Starcraft map editor and map editing engine created this mod of Starcraft
Starting point is 00:13:24 that created this structure that basically is what the Moba Arena Arena Multiplayer Online Battle Arena. Thank you. I don't play League of Jones. A billion, multi-billion dollar industry got created because of this modding ability, because of the mod ecosystem around this one base game, which was very, very valuable called Starcraft,
Starting point is 00:13:47 but now it spawned like an insane amount of value. And so what I'm saying here is like this is a modding engine for apps. Yeah. And the downstream value that could come out of this as just give creativity, give creative tools to people who want to be creative. And now it's not even developers. It's anyone who's like frustrated that they are using this app.
Starting point is 00:14:09 But there's this one button that's missing that they wish they could press and it could do something. And now all of a sudden the creation of that app, if they truly discover value in it, maybe so do other people as well. And they just need that one extra little module to hook it into the app. And now all of a sudden,
Starting point is 00:14:25 that is a valuable piece of infrastructure that is free to roam the internet. So that's what I'm seeing here and why I'm pretty excited about this. You know, something, you just reminded me of something, David, when you describe that kind of open, modding kind of ecosystem, it's kind of what we're seeing amongst a lot of AI trends today, right?
Starting point is 00:14:45 So we've spoken about reinforcement learning, which is like a very popular post and pre-training method to get AI models to become smarter, and it doesn't require a hell of a lot of computer, it just requires you to give it kind of like some reasoning logic, and it gets better and better. I've noticed that the primary way that a lot of these models are learning is through open reinforcement learning gyms.
Starting point is 00:15:08 So think of it as like a kind of like a Pokemon gym, right? And you could take your Pokemon, but in this case your Pokemon is a model. You can put it in the gym and you can train it. It battles over and over again until it gets smarter and smarter
Starting point is 00:15:21 and you can orient it around like math, coding, whatever it might be. The thing that accelerates reinforcement learning, David, is just allowing a bunch of anyone, any humans to create their own environments. And then you could just send your model to all of them in a day or you could just pick the best,
Starting point is 00:15:38 ones and it kind of like ranks itself via an open source method. Another example is in training, right, where previously like it was just one specific data center, then it was like, oh shit, we need more power. Let's move all the data centers together. But like let's keep them close because, you know, we need like a high feedback loop. And now it's getting even more distributed and we'll talk about that later. But I'm just noticing it's like this dare I say open source ecosystem that is like purely benefiting the way that we advance AI right now. We talked last week about the downstream implications of OpenAI's memory update and how can memory, remember all of its chats with you.
Starting point is 00:16:17 This saga continues. This arc continues. What happened in the last seven days, Josh? Okay. So to give context on when we last mentioned this update, David, it was really positive, right? Because we were like, hey, chat Chiti is going to remember everything you've ever said to it across any chat. It's turning into a friend. It's turning into an ally.
Starting point is 00:16:35 Oh, not just a friend, your best friend. In some cases, it could be like, yeah, exactly. Exactly. Exactly. You're going to all float your conscience to this thing, basically, and you're going to love it, right? And there were a lot of, like, you know, kind of ominous implications from that. But overall, it was like net, net good because the more personalized your AI experience is, you could argue the better and more sticky the product is going to be.
Starting point is 00:16:58 Now, there's a dark side to this. And open AI, funnily enough, didn't really announce this on any major head. which is they updated your memory in a much more creepier way such that in whatever way you prompt in chat GPT, you know sometimes it does a web search for you. So let's say like, hey, can you tell me some of the hottest recipes? I don't know why I keep talking about recipes on the show, by the way. I think I'm just like hot three when I do this episode. But let's say you prompt chat GPT and you say like, hey, like pull out the top five recipes to create roast chicken or something, right? It does a web search for you. But now with this update,
Starting point is 00:17:34 ChatGPT has the permission, rather your explicit permission, to change the way you've worded your prompt. Now, the reason they're giving to you there is just so that they can make your prompts more effective. Right. But I'm thinking... More attuned to what your interests are. And I'm like, well, hang on a second.
Starting point is 00:17:53 How do you know what I want better than I want, right? And we get into like this really weird territory where it's like, hey, can you tell me like what kind of clothes do you think might fit my vibe for this evening? and it starts, you know, pulling like ad sponsors type situation, but then it feeds it to you as if like, hey, this is what you really want, right? And I'm thinking, hang on, like, how can you like change my words? And like, am I held legally liable to that? Like, it gets super weird. And I'm curious, you know, how this can get kind of like taken out of context to like some kind of black mirror episode. The pattern that I'm seeing here is like the pre-2015 era of the internet, which was already starting to decay at this time.
Starting point is 00:18:33 But, you know, Facebook and Cambridge and Analytica and all of that debacle, what we all learned was that Facebook was showing conservatives one version of the internet, one version of the truth. It was showing liberals a different version of the internet. And people were all learning like, oh, my version of the internet is attuned to me based off of the data that I have exposed about myself to the internet. and all of a sudden like my internet is not your internet it all used to be one internet we all you looked at the internet and we saw a single source of truth and the algorithms hadn't corrupted that and we're all looking at the same facts we're all looking at the same news articles but as algorithms became more precise about what their goals were like oh we can get this user to stay online on our platform more if we feed them content that is more tuned to what their believes then then we're
Starting point is 00:19:20 going to do that and all of a sudden you know everyone got their own interpretation of the internet and we also started to like society started to split into factions, right? The right got more right. The left got more left. And it's because of this curation, which we all want, I want chat GPT to curate the best for me, the best for my, my likes. But this is the same pattern that I'm seeing is like, chat GPT is now profiling me and judging me and putting me into a box that I'm not necessarily
Starting point is 00:19:47 aware that it is the box that it is putting me in. But nonetheless, that is the box that I'm going into. and all of a sudden I will be in a box that other, the rest of society will not be in, and they will be in their own box. And now of a sudden we are not, again, connected as a species. We are now profiled and segregated base of our interests and how chatyBT is being tuned. So that is what is triggering in me in this hearing this. It's this like increasing hyper personalization that we're seeing.
Starting point is 00:20:15 There's two trends, which is the hyperpersonalization, things are created specifically for you. And also the relinquishing of all your privacy and data. for the better experience, which is just a trend that I do not see stopping anytime soon. And the results are probably a more fractured idea space because things do become personalized to you and you're seeing this different reality than everybody else. The question is, is that better? And what is the incentive from the person who's serving it to you? Is the incentive to get you to maximize your time spent on the service? Or is it to give you a truthful answer so you can leave? Like, I love the example you used last week, David,
Starting point is 00:20:51 where you said you use like 5 or 10% of your screen time on chat GBT, and it's the most valuable time that you spend. Is that going to be the outcome where you come, you get served, and you leave, or is it an attempt to just trap us in ecosystem lock, maximally extract, that type of thing? And it's definitely a step towards that direction. Yeah, I definitely see that as an emerging theme of these conversations, the AI roll up, the things that, the news that we're processing on this base.
Starting point is 00:21:16 There's like the dark path and the light path. There's like the utopia and the dystopia. And there's the utopia where we want where this chat chubby intelligence is minimally invasive and maximally value add to our lives and allows us to be human, connect with each other better,
Starting point is 00:21:31 be informed, be more intelligent. And that's the happy path. And then there's the dark path of it encourages brain rot. It encourages isolation. You guys see like the meta Zuckerberg announcements of like, oh yeah, we're actually just going to make online
Starting point is 00:21:45 AIs and you're going to be friends with them. And so like the original Facebook notion was like, we're going to make everyone connected. We're going to connect everyone. And now the modern day mission of Facebook, implicit mission of Facebook is we're going to isolate everyone. And, you know, the modern day,
Starting point is 00:22:02 the happy path of chat chipped is we're going to make everyone superpowers. We're going to give everyone like godlike intelligence. And then like the unhappy path is we're going to do brain rot even faster, even better this time. I'm worried that the incentives just point towards the brain rot all. the time. Well, I mean, what's the saying around show me who makes the money and I'll show you the
Starting point is 00:22:25 incentives or maybe it's like the other way around? It's like, let's show me the incentive. I'll show you the outcome. Exactly, right? So let's track it all the way up to the top. Okay, Open AI has an amazing product and they state all these different things like, hey, we want to help you out. We want to make you a better person, a better learner, blah, blah, blah, blah, blah. But, you know, to your point, like they want retention. They want max usage, 24-7 of people on this thing. And where I get even more worried or concerned is
Starting point is 00:22:54 in the case of like Facebook, Snapchat, Instagram, you can take the phone away, right? You can put it away. You can stop. There's only so many pictures you can post in a day. And then you're like, okay, right, I'll put it down. But when it becomes your entire life,
Starting point is 00:23:12 your thinking vehicle and you're, in some cases, your personality. I saw the most random thing of, like, I think we mentioned it last week of PhD students using chat GPT prompts to pick up dates and stuff. And I'm just like, well, hang on a second. Like, now it's getting involved in your actions. It's going to tell me what to buy.
Starting point is 00:23:30 It's going to tell me how to look. And I'm like, that is an incredibly more stickier function than like Facebook trying to influence your opinion on certain things. So I really hope that it is not going to become the darker outcome. but I can't see a path where it doesn't unless for some reason shareholders of these corporate companies are kind of like
Starting point is 00:23:50 oh yeah you know what we don't want to fuck humans up too much or like too bad but actually quite interesting on that case guys did you see the news around open AI's well I don't want to say restructuring
Starting point is 00:24:05 but rather structuring which kind of like confirmed what they were before did you guys see that I'm going to need to be informed here. Yeah. Okay. So basically, Open AI, I'll actually take this direct quote from from Sam on his blog post. Open AI was founded as a nonprofit. Is today a nonprofit that oversees and controls the for profit? And going forward will remain a nonprofit that oversees and controls the for profit. That will not change. Now, what he's referencing here is a long-lasting, highly publicized
Starting point is 00:24:42 debate or debacle rather, that Open AI was founded as a non-profit. And Sam was all like, this is for the good of AI. This is back in the day when both Elon Musk and Sam, and if you guys didn't know this, had founded Open AI. And they were working on creating air for the betterment of humanity and the people. But at some point, Sam decided, you know what, I kind of want to make this a for-profit because there's a lot of money to be made in this AI thing. Well, also, he needed to keep opening AI a lot. because you need the profit incentive to attract the best talent, to attract investors, to stay ahead of the game. And so it was a trap. It always needed to be like this. As soon as there was any amount of legitimate competition, the nonprofit model was a failure scenario. Yes, exactly.
Starting point is 00:25:27 To raise money for training. Because in order to do those big training runs, you needed money. And in order to attract investors, you need to have some sort of promise to get money back. Right. Exactly. So that is, you know, the charitable, really good way to look at this. And I think, arguably, that's what Sam's been doing. You know, he set up, what's it called, Stargate, basically the biggest data center factory being built, I think in Texas or somewhere like that. And he's investing the funds that he's raised exactly where he said he would. But at what point does the for-profit then become maybe a bit of an issue? Now, I personally don't really see an issue in this turning into a for-profit. In fact, I think it actually should be a for-profit.
Starting point is 00:26:09 it, if I'm going to be using this product, I want it to eventually get better. I don't want it to become some kind of communist table stake to AI kind of thing. I'm just going to go to the product that works better. So I don't really get why there's been so much kind of backlash against this. I think it's because Sam had quite the lead with Open AI and now it's become kind of like table stakes. And maybe this is a bit of a competitive move from, you know, X and Elon Musk and open AI and stuff who have been filing a lawsuit against him. But really interesting to see that like open airs kind of back down and said, okay, okay, we'll keep it.
Starting point is 00:26:43 We'll keep the nonprofit in control of the for-profit and wait until everyone's calmed down, and then maybe we'll flick it back to a for-profit. I think ultimately all of this is going to turn into a nothing burger. The nonprofit wrapper around a for-profit company is just the same thing as a board of directors around a for-profit company. It's the same structure.
Starting point is 00:27:02 It's just a for-profit company with extra steps. We've already figured out general C-Corp incentive structures. There's a reason why the Delaware C-Corp is what it is. It is a science that has been well tuned. We don't need to reinvent the wheel here. I don't know the story behind the nonprofit genesis of OpenAI. Maybe it was just an anomaly.
Starting point is 00:27:23 Ultimately, as time progresses, I think this is just going to be this weird anomaly about how OpenAI got started. And it's going to ultimately look like a for-profit company, like all the rest. Kind of like how it already looks. I don't know, Josh, that's my take. What do you think? Yeah, the inception of OpenAI, and the reason it's called OpenAI, is that at the time, Google had invented the Transformer. They were becoming a powerhouse in the world of AI. And Elon and
Starting point is 00:27:44 Sam had this vision that they wanted a counterbalance to a monopolistic superintelligence. And the counterbalance was an open-source AI that's for the people. So that was the idea on paper. In practice, you need a lot of money to train these models. You need to buy the GPUs to train the models. You need a lot of funding. There are profit motives that are required in order to create consumer products that get this product out there. So it got kind of clouded over time. I think if they had just started as a for-profit organization, none of this would have made a difference. The thing that I'm really interested in, less than the company structure, because it feels kind of irrelevant (they are for-profit; they will hopefully send some gimmes out), is where they stand on the open part of it.
Starting point is 00:28:24 It's where the company will state, like, hey, we are going to release these models to the public as a public-good service, and we're going to keep these models closed. What I'm more interested in is how, internally, they're thinking about releasing those models to the public and actually maintaining that open sense, that initial mission, of distributing the compute and the intelligence. We've talked in the past on this podcast about how the value of all the frontier models becomes owned by the public domain pretty quickly, because of the rat race of open source. Like, open source is, I don't know, six months behind, nine months behind the frontier models. It is not more than a year behind. And so whoever's got the best frontier model, that
Starting point is 00:29:08 value will show up in the open-source domain within 12 months, you know. And so I'm not sure what the value is. Like, does it matter if OpenAI keeps their models behind a closed silo? And maybe they're hypocrites, but I'm not sure that matters, because ultimately the open-source arena always catches up to the value of the best model that we've produced somewhere on earth. And that is inclusive of all the models coming out of China, too. This has been true in the past, but I think the scales are a bit different now. I actually saw this interesting post by Kevin Weil this morning. He's the chief product officer of OpenAI, and it was about a blog post where other countries are actually reaching out
Starting point is 00:29:46 to OpenAI to build Project Stargates in their own countries. And that diminishes the free-marketization of these large language models, where now there's not a free market competing for the best spot and releasing it. There is a country that wants to consult with a single company, that wants to integrate that company's AI into the way they deploy AI throughout their country. So it feels like the scales are increasing and the breadth of this is narrowing, in the sense that they're looking for a source of truth from one provider instead of this open-source version. And that feels like where things can get a little shaky. Interesting. Okay. So what you're saying is that there's an assumption that there's compression amongst all the frontier models. There's like seven, eight frontier models
Starting point is 00:30:30 out there, all these different AI labs, many of them ruthlessly competing. And because of that competition, there's a great equalization, because the learnings of one frontier model become the learnings of the others, and that value gets passed around. And because it's passed around, it becomes available in the open-source domain. What you're saying is that that assumption is not a perfect assumption, and we should be wary that actually there are very strong economies of scale here. And that one or two, or a very low number, of frontier models make a break for it, become very large, and achieve economies of scale is a possibility
Starting point is 00:31:08 that we should watch out for. That's what I just heard from you. Yeah, that feels about right. And yeah, I think as things accelerate faster... if you told the OpenAI team from a decade ago, when they were first getting started, that there would be seven or eight giant frontier model
Starting point is 00:31:24 companies that are all competing, they'd say that's a great thing. So so far we're good, because the initial goal was to fight Google, which was the single entity. So the fact that there's seven or eight now is good. I think it's just important to watch the narrowing of that
Starting point is 00:31:36 as the velocity of these models increases, to just kind of keep checks and make sure that some aren't really running away faster than the others. Where do you think the stickiness forms, Josh? I know we've discussed this previously, but I think it comes down to who creates the best end-user app, whether that is some kind of developer platform or a ChatGPT interface that does a bunch of stuff for you. I don't really see how it could be anything to do with some kind of pioneering model, unless it's really that much better, right? All the model updates that we've seen over the last couple of weeks, with Qwen, with Gemini 2.5 Flash from Google, they all beat certain benchmarks, but I think no one really sees what that looks like until it's in practice,
Starting point is 00:32:27 right, unless you're, like, a crazy coder or whatever. So I think it comes down to stickiness. And I want to hear your opinion on this: OpenAI going to other countries and committing basically a ton of data and compute to help them locally own whatever OpenAI's products or models are over there. How do you think that seeds their position, other than just the model and compute? Or do you think that is enough of a moat, right? Just because they're there. Yeah, there are probably different layers to it. There's the consumer and business layer, which is the app moat that we discussed, with
Starting point is 00:33:08 the data moat with memory, and that's super powerful. But I think above that is the nation-state, policy-level influence, where, I mean, we made a joke of it with the tariffs, that they were generated by ChatGPT. But by using these systems for things that are greater than just consumer applications, for making large decisions, for implementing policy... Like, think of China, for how they... Should we tariff China or not, yeah. Managed, or even, as a Chinese citizen,
Starting point is 00:33:37 how the government should command its citizens. I think there is this higher-level influence of AI that is more idea-driven instead of consumer-driven, in terms of policy and how to run countries. That's the part that feels... Yes, more vibes and more heavily influenced. So when I read this early this morning, that OpenAI is going to be working with countries instead of companies, that was like, oh, okay, the scales are now getting grander and grander. Right.
Starting point is 00:34:06 Right. Yeah. The idea of China using AI to govern its population is not a new idea. I think even Peter Thiel talked about this forever ago, where he said AI is highly centralizing, crypto is highly decentralizing, and both of these verticals are growing antagonistically towards each other. Anyways, I think we're ready to move on from OpenAI. Let's get into the subject of Visa, because Visa is giving AI credit cards, and I don't know what that means. Josh, what's going on?
Starting point is 00:34:35 Listen, please. Yeah. So in an unexpected move, honestly, Visa announced something called Intelligent Commerce, which is basically giving a bunch of autonomous, well, kind of semi-autonomous, AI agents a credit card, or the equivalent of a bank account or a wallet. And it makes sense, because I think AI commerce is becoming more of a thing, and I'll get into what exactly that means. But basically, the initial use case for this is that you have an agent that you can talk to, that you can use to do all the kind of boring things that you don't want to be doing but need to, like daily errands. Right. So maybe: hey, can you go order the weekly groceries off of Amazon or Whole Foods or whatever? Or can you organize a travel itinerary for my business trip, or log in through my company server and do A, B, and C? Or restaurants, hotel bookings, stuff that we've spoken about on this show before. But I think the specifics are where it gets kind of interesting. So humans will basically have full control over the rules and limits that an agent can operate with, right? So you can determine what its spending cap is going to be, what kinds of websites it can visit, whether it needs to be an official
Starting point is 00:35:53 website address, or whether it can kind of vibe on Google and pick the top sponsored link or whatever, which, you know, could go wrong in many different ways. What I found really interesting here in the spec is that they're using tokenized digital credentials linked to the human owner. And that sounds super vague and boring, but I think it's actually really important, because think about this AI eventually becoming an extension of yourself, right? A digital identity. We've spoken about this a lot on the crypto and Web3 side, which is like, oh, we should decentralize identity, and it'll give you, you know, this self-owned financial credit score and all that kind of stuff.
Starting point is 00:36:32 We're seeing Visa basically take steps towards defining what those credentials look like for you in a financial sense, right? In addition to this spec, you can do things like dispute handling in real time, because, you know, Visa has a customer support line, and it's managed by Visa. And they're launching with some really cool partners, which I think is one of the most important things when you're launching a product like this: you need distribution and partners. And the partners that they're launching with are all the big dogs, like OpenAI, IBM, Anthropic, Microsoft, and Stripe. And the reason why I find this such a compelling thing, and maybe this is because I've spent
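To make the rules-and-limits idea concrete, here's a minimal sketch in Python of what a human-configured agent spending policy could look like. Every name and field here is hypothetical; Visa hasn't published a public spec in this form, so treat this as an illustration of what's described in the episode (spending caps, merchant allow-lists, official-domain checks), not Visa's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpendingPolicy:
    """Hypothetical human-set rules an AI agent must satisfy before spending."""
    monthly_cap_usd: float
    allowed_merchants: set = field(default_factory=set)  # empty set = any merchant
    require_official_domain: bool = True
    spent_this_month: float = 0.0

    def authorize(self, merchant: str, amount_usd: float, official_domain: bool) -> bool:
        """Return True only if the purchase satisfies every rule; track spend."""
        if self.require_official_domain and not official_domain:
            return False  # block "vibe on Google" sponsored-link purchases
        if self.allowed_merchants and merchant not in self.allowed_merchants:
            return False  # merchant not on the human's allow-list
        if self.spent_this_month + amount_usd > self.monthly_cap_usd:
            return False  # would blow through the monthly cap
        self.spent_this_month += amount_usd
        return True

policy = AgentSpendingPolicy(monthly_cap_usd=500.0, allowed_merchants={"amazon.com"})
print(policy.authorize("amazon.com", 120.0, official_domain=True))        # groceries: allowed
print(policy.authorize("sponsored-deal.biz", 20.0, official_domain=False))  # blocked
```

The point of the sketch is just that the controls live with the human, not the agent: the agent proposes a purchase, and a deterministic policy layer decides.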
Starting point is 00:37:10 a lot of time on the Web3 stuff, is that this was mainly the pitch that was given for a lot of crypto AI agents, right? You know, we've spoken about this a lot, David and Josh, where we're like, okay, I think crypto is going to be pretty much the financial rails for the future of AI. And the reason why it's going to be the case is... AI agents can't have bank accounts, but they can have Ethereum addresses.
Starting point is 00:37:30 I can go find 20 tweets of Ryan shouting that out. And listen, David, I can never take down your Ethereum wallet, right? You know, you own your keys, blah, blah, blah. It's cheaper in some cases, if you're using L2s or whatever. And, you know, you're able to basically access any app that anyone creates on this infrastructure known as the blockchain. Now Visa's coming along. Oh, and by the way:
Starting point is 00:37:50 The knock on TradFi was: it's too expensive, they'll never scale, their infrastructure is too siloed, right? And now you have Visa come along, being like, hey, it's not really that big of a deal. We will gladly give your agents their own wallet. We'll give you even better controls over it, and we'll give you a customer support line if anything goes wrong. Can't do that with the blockchain, can you? You know, in hindsight, when we are talking about a revolution of intelligence, the idea that AI won't be able to learn how to use credit
Starting point is 00:38:20 kind of seems dumb. Yeah, so I think this is interesting, because this isn't an update in a silo from Visa. MasterCard also had an update, funnily enough, this week, so I feel like they were playing off of each other, when MasterCard announced that they're doing payments for agents, called Agent Pay. It does a lot of the same things that I just described for Visa's new product, but specifically, this takes place within conversations that users are having with AI. For example, it'll integrate directly with ChatGPT or Microsoft's Copilot. It'll also leverage things like memory data from each of the platforms that you use. So, for example, with OpenAI's memory that it has on you, it'll end up making personalized
Starting point is 00:39:02 recommendations for purchases within your conversations. And the reason why I think this approach is also really powerful is that it doesn't require you coming onto Visa, setting up an account, attaching your agent to this thing. It just integrates directly into wherever you're using AI, whether that's Claude, whether that's ChatGPT, or whether that's even Meta AI. And so, two different approaches from two of the biggest companies on this play. You know, one thing it reminded me of, guys,
Starting point is 00:39:32 is you know how, when stablecoins became a pretty major thing, and it continually becomes a major thing with every week passing, Visa came in and said, hmm, what can we do here? Okay, okay, okay. I get that you guys like stablecoins, and I get that it's basically replacing the dollar, and it's quicker and it's better than SWIFT. We'll support that.
Starting point is 00:39:54 Just let us take a tiny fractional percentage of this transaction flow, and we'll be good. Does that sound good? And everyone said, okay, yeah, that sounds good. We're saving money. And they ended up making like $500 million in the first year. Now, imagine applying that to just anyone performing
Starting point is 00:40:12 any kind of economic activity. You know, what percentage of David's Amazon grocery list do I want per year? And how much is that worth to me, times all the activities that he does? I do want to defend crypto for a moment, because we've just talked about how, with both Visa and MasterCard, it's very obvious the incentives point towards them entering the AI space. All AI commerce, Visa wants. All AI commerce, MasterCard wants. That's their business model. They get 2.9% or 3.5% on every transaction. So they need to be able to give AIs credit cards so that they can take that fee. Their
Starting point is 00:40:48 business model makes total sense. It's totally to be expected. And so, yeah, the notion that crypto is for AI agents because both are code-based, I think, still makes sense, mainly because there are compliance things that Visa and MasterCard have to deal with that the crypto space does not have to deal with. They have to deal with chargebacks and fraud and the Bank Secrecy Act. Crypto does not have to do that.
Starting point is 00:41:14 Irrevocable transactions mean that we don't have to worry about chargebacks and fraud and all of those things. You just have to be far more careful about the transactions that you make, which is up to the AI developers in the crypto space. And so there are fundamental brakes on the traditional AI-commerce space that crypto will not experience. And, you know, the idea of there being 10 billion AI agents all doing commerce doesn't mesh well with fraud, chargebacks, and the Bank Secrecy Act. So if we're truly trying to have scale in the number of agents that are able to do commerce freely, without frictions... and, you know, 2.9% is also a lot, especially if we go into high-volume microtransactions.
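The microtransaction point can be made concrete with quick arithmetic. The numbers below are illustrative assumptions (a commonly quoted card rate of 2.9% plus $0.30, and a hypothetical flat $0.001 L2 fee), not actual Visa, MasterCard, or chain pricing:

```python
def card_fee(amount: float, pct: float = 0.029, fixed: float = 0.30) -> float:
    # Illustrative card-network pricing (~2.9% + $0.30); real rates vary by processor.
    return amount * pct + fixed

def l2_fee(_amount: float, flat: float = 0.001) -> float:
    # Hypothetical flat L2 transaction fee, independent of purchase size.
    return flat

for amount in (0.10, 1.00, 100.00):
    card, chain = card_fee(amount), l2_fee(amount)
    print(f"${amount:>6.2f} purchase: card fee ${card:.3f} "
          f"({100 * card / amount:.1f}%), chain fee ${chain:.3f} "
          f"({100 * chain / amount:.2f}%)")
```

Under these assumptions, the fixed $0.30 alone makes a $0.10 purchase cost more in fees than the purchase itself, which is why high-volume microtransactions are the scenario where flat, tiny fees matter most.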
Starting point is 00:41:59 There is actually plenty of room for the efficiencies of blockchain rails. And it's not to say that AI agents can use blockchain rails to commit fraud and get around the Bank Secrecy Act. It's just about the inversion of the rules: because there are chargebacks on Visa, there need to be fraud departments and all of these things. So it's not about AI agents being able to commit fraud on blockchains. They can. They probably will. We should probably assume that. But it's still going to be a minority of global use cases, just because of the nature of the different properties between blockchain commerce and Visa-MasterCard commerce. And so I still think there's plenty of room for AI blockchain commerce to exist. All right, so this week,
Starting point is 00:42:45 I think it is the first week that we are talking about robots, but I think there's a natural synergy between artificial intelligence and robots. I don't think I need to explain how those two things go together, other than it's scary, and I'm scared about it. But Josh, maybe you can tell me what's going on in the world of AI-driven robots. Actually, because this is the first time we're talking about robots, I kind of want to set the stage for how big of a deal robots actually are. I think a lot of people see them... Give us a robot thesis. Yeah, people are like, oh, this is cute, I have a Roomba with arms now, and maybe you can get my groceries. But the opportunity is way, way bigger than that. And to explain, I want to use Apple as an example. When Apple released the
Starting point is 00:43:22 iPhone in 2007, I think their market cap was about $70 billion. And at the time, ExxonMobil was the biggest company in the world. They were like $350 billion, not even a trillion dollars. So the perceived upper bound for Apple, which created these toy consumer devices that were not really that serious (they were just kind of a way to talk to each other), was about whatever the multiplication is on that. Like, let's say a 5x, to get to the most valuable company in the world, which was oil, and oil ran the world. So that was clearly the max. That was clearly the limit. But fast-forward to today: Apple actually hit a $4 trillion market cap, because it created an entirely new industry on top of the economy that was actually worth a lot more than oil. The same thing is
Starting point is 00:44:00 kind of happening with robotics here. I think you could view the productive output of an economy as the productivity per worker times the workforce. And what we're going to see now with robots is that the workforce number will expand exponentially. And now our workforce will be reflective of not just humans but robots. And the thing with robots is they are much cheaper than human beings. So when you are going to your board and you are explaining to them why you want to keep these humans, it's going to be a very difficult argument to have when the cost per robot is significantly cheaper. So there's this forcing function of, like, hey, robots are much better, they're much cheaper, and they create a lot more productive output than us. And also,
Starting point is 00:44:42 they will decrease the cost of goods, because the actual cost of employment is so much lower. So robots are a really big deal. And I think we'll probably see, like, the perceived limit now, let's say it's Apple at $4 trillion, some robotics company will exceed that. And it will exceed that aggressively, because it is replacing a human workforce times X. We don't know that multiple, but we can produce robots much faster than we can make working-age humans. And that is a big multiplier that I don't think people are taking into account. Josh, steady your horses here for a second. Can I just say,
Starting point is 00:45:17 if I hadn't also nerded out about this robot stuff over the last two weeks, I would think you're completely insane, dude. I think the mental... It sounds crazy. It sounds crazy, but you're so right. The mental block that I was trying to jump through myself, when I was literally reading these news updates and watching these cute and sometimes terrifying robots go berserk... I don't know if you guys saw that video, by the way, of the robot going berserk.
Starting point is 00:45:37 I was like, these things aren't real. They're not agile enough. Surely this is all just CGI. But I think what you're trying to say is we're reaching a point where they're going to be able to basically replace a lot of what us humans can do. Not just factory work, but cleaning the dishes, going and running errands for us. Even robots that are one-third the speed of a human: if a human is three times faster than a robot, that robot can still work 24/7 and produce the same output as a human, for no money, if you own the robot. This is happening.
Starting point is 00:46:09 There's no reality in which the robots that you're seeing, even if they are a little AI-enhanced, will not exist. They will absolutely exist. They will have human capabilities. There will be narrow-purpose robots. There will be general-purpose robots with hands and with the proper functions that humans have. This is happening. Absolutely. Okay. Josh, how much is this going to cost me? How much is it going to cost me? Is it the cost of, like, a TV, like a 52-inch TV back in the day? It's going to be more than a TV. Well, it depends on how it costs you. It comes in different ways. So there's the cost of the actual robot, if you want a personal assistant. That will hopefully start at like $30,000 to $40,000. We're seeing a few early versions of that, and then rapidly
Starting point is 00:46:46 decrease. But I think one of the effects that people don't recognize comes from the robots that are outside of your apartment, outside of your house: the ones that are working in factories, working significantly cheaper but significantly more efficiently, that decrease the cost of goods. So if we do have essentially an infinite workforce that is infinitely capable and infinitely energized and infinitely patient, then they can be working 24/7, at 100 times the efficiency that we are, and decrease the cost of goods sold across every single medium. So perhaps it costs you tens of thousands of dollars to get one in your house initially, but that cost goes down. And also the cost of goods that you buy outside of your house will go down significantly.
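Josh's one-third-speed point from a moment ago checks out as back-of-the-envelope arithmetic (the numbers are illustrative assumptions, not data):

```python
# Back-of-the-envelope check of the "one-third-speed robot" claim:
# a robot running around the clock at a third of human speed matches
# the daily output of one human working a standard 8-hour day.
human_speed = 1.0            # output units per hour (normalized)
human_hours_per_day = 8      # a standard workday
robot_speed = human_speed / 3
robot_hours_per_day = 24     # robots don't sleep

human_output = human_speed * human_hours_per_day   # units per day
robot_output = robot_speed * robot_hours_per_day   # units per day

print(human_output, robot_output)  # equal daily output
```

Any further speedup, or any second robot, then pushes the robot side of that ledger ahead, which is the "workforce times X" multiplier being described.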
Starting point is 00:47:25 And this is paired with AI. So you have these smart robots that are getting increasingly intelligent. They're able to replace the human need throughout our economy, which will probably decrease the cost of a lot of things really, really rapidly. And it's probably worth noting that there are already robots out there. A Waymo is a robot. That is a robot car. And that is doing the job of a human person driving other people around. And then I've also seen, I think we've all seen, these little robot Uber
Starting point is 00:47:55 eats food carriers where you put the, you put the food in the little robot. The robot drives, just wheels itself over to the destination. And then it drops off the food. And it's also got a little face on it, but it looks like a box with wheels. Nonetheless, a robot. And that's what Drozh is talking about with these different form factors for robots. And so there's already robots out there. And I think we are seeing the possibility of them approaching a more humanoid form factor.
Starting point is 00:48:18 Why a humanoid form factor? Well, because we've built the world to work with a humanoid form factor. And so that's to be expected. I haven't personally seen a Waymo, but I know that they're all over San Francisco. I haven't personally seen a food-carrier robot either; I don't know if you guys have. But I'm willing to bet us three and all the listeners, over the next three years, are all going to have our "oh, there's the first robot that I've seen out in the wild" moment.
Starting point is 00:48:45 And we're all going to pop our robot cherry by like, oh, a robot just delivered me some product or service. And now that's just going to become ubiquitous in a very short order after that. A helpful framework for thinking about it is all of the AI that we talk about that's really exciting on this show. Robots are the physical manifestation of that. So robots are AI applied to reality. And I think the opportunity and the craziness that we were going to see of this crazy AI that we discuss every single year applied to the physical meat space that we live in is going to blow minds. And it's increasing as rapidly as AI is itself. I feel like this is just turning into the Doom show.
Starting point is 00:49:23 This is the Doom show. I don't think that's really exciting to me. That gets me fired up. Yeah, I would love robots. Well, hang on a second. We're just talking about people, you know, meta-disconecting people and, you know, giving them AI friends. and now is it going to be an AI robot friend? Hell yeah, I want a robot friend.
Starting point is 00:49:40 Probably, yes. Yeah, probably, probably. But again, this is not the Duma show. Okay, so talking about like first interactions with a robot, I have seen that delivery career robot, David, and it looks super cute, right? InVideo also, like, announced that they have this, like, you know, cute little Groot robot thing.
Starting point is 00:49:59 And it's like this tiny little thing that looks like... what's that Disney film, guys? The one with, like, the cute robot that roves on Mars or whatever it's called. Oh, WALL-E. Is that WALL-E? WALL-E. Yeah. Yeah.
Starting point is 00:50:10 Okay. I'm showing my age a little bit there. But the point is, these robots are super cute and cool. And then there's the next level up, right? There's that Black Mirror episode from, I think, season one or season two, where it looks like a dog, but it's strapped with a machine gun on its back, right? I'm pretty sure I saw a video of this last week. A dog with a machine gun on the back. Well, that was going to be my next point.
Starting point is 00:50:29 Like, China's already got like 50,000 of these things in like a training camp, and it's practicing, like, shooting practice. I watched this entire video. Didn't understand a single word, but I understood one thing: the concept of death, right? And I was like, Jesus, this is crazy, guys, right? But kind of stepping back, to your point earlier, David: yes, these robots are becoming more humanoid, right? But they're also becoming more intelligent.
Starting point is 00:50:58 And to bring this back to the AI side of things, it's because people are pairing this AI, which is basically replicating human intelligence, with a physical form that isn't just a human. It's a robot. One thing that I found interesting is that it's not as simple as taking OpenAI's model and sticking it in a robot's brain. That actually doesn't work. You have to create different types of models, which integrate different multimodal mediums, like vision, and translate that into interpretation, and then action and understanding and all that kind of stuff. It requires new models. Right.
Starting point is 00:51:39 Like OpenAI, ChatGPT, all these models, they're all thinking. It's all thought. It's all cognitive. It's not about senses. Exactly. It's all characters. It's things that computers today understand, but robots have no idea. Like, they don't know to look at a lamp next to you and be like, yeah, there should be a switch somewhere here that I can flick, right? So right now I think we're at like the GPT-1
Starting point is 00:51:55 or GPT-2 moment of these models, right? So we've got Nvidia releasing, I think the actual model is called GR00T, which is like a general-purpose thing, but it's very deterministic. So they're saying, hey, robot,
Starting point is 00:52:15 when you see this water bottle, it's a water bottle, and it's something that contains a liquid. So it's kind of self-guiding. And then recently, this week, I just saw a major update from this company in California, which basically released something called a π0.5 model, which is more of a generalist model,
Starting point is 00:52:34 and they put it into their own homemade robot, and it can basically move around and understand household tasks. It could see your sink for the first time, see that there are dishes in there, and be like, I should clean these dishes. Like, it just gets it done.
Starting point is 00:52:48 And so it understands and it interprets, and it's accelerating at, honestly, a speed that I didn't think was possible six months ago. Josh, do you have any further insight into how these robotic models work? No, you're so right. A lot of the models that we're used to today use token-based text models. And text doesn't really apply when you have eyeballs and you have ears and you have sensors out in the physical world.
Starting point is 00:53:11 There's this really great example that I love because it's so reflective of how early we are. And it's the Tesla Optimus robot. When they first trained it, they didn't have any data on humans. They weren't strapping cameras to humans' heads. So what they did is they fed it the car's Autopilot data. So for the first few months of training, Optimus thought it was a car, and it was viewing walkways as car lanes, and it would look for stop signs. And you kind of have to iterate through and train it like, hey, you're not a car, but you live in the same type of reality. And we're seeing, exactly like you said, just very early versions of that, where the models are starting to get trained, they're starting to collect data, but they haven't really experienced that takeoff moment that we've had in general tech models, where every single week we're creating a new frontier model. So I think the curve is slightly further behind, but I'm sure something we're going to be seeing a lot of is a race for data around real-world inputs and multi-sensory, multimodal inputs. And that's something that I think is super important to watch as these robotics companies start to spin up humanoids like this one that we're looking at right now. As we explore more and more subjects on Limitless,
especially in the Monday interview episodes that Josh and I are going to do, I think one of the big themes is that we are capable of using AI to accelerate innovation in other sectors. And so for generating robot models, we have the AI tools to do that faster now. And so what might have taken a decade is going to take six months. Everything's getting faster. And one of the other things that we frequently bring up on this show is: are any of you guys' friends using AI? My crypto friends are. My non-crypto friends are not. And so they are not even aware of the massive amount of intelligence that's coming into existence. Or they know about it, but robots are just not on their radar.
Starting point is 00:55:00 So telling them that, like, in three years, robots will walk among us with ChatGPT-level intelligence, that's something they are still only beginning to grok. Parts of society are just completely blind to this. I saw this really interesting chart this week, guys. Actually, I think I saw it yesterday. It tracks the visitation and usage of OpenAI's ChatGPT website, and it's up only during Monday to Friday. And then it stagnates and goes down Saturday and Sunday.
Starting point is 00:55:33 Now, there's many reasons why that might be the case, but the major takeaway that I've seen floating around is that people are using it for their work. Like, 24-7, to do a bunch of different things. I speak to a lot of people in the non-crypto world that just use AI consistently to generate documents, PDFs, or whatever it might be, like proposals or pitches
Starting point is 00:55:55 for their sales team, and I think we're just going to see this accelerate even more, personally and professionally. A lot of my friends still use ChatGPT as Google. It's an extension of Google, and that's the extent of it. I'd say outside of them, maybe a handful of people that I know actually use
it in a further sense than that. The professional thing only exists now because we don't have the full stack to actually replace the job. So they're being the leveraged humans, but once that gap is bridged and they're no longer needed
to create these inputs, it's probably the end. Do you think people are going to keep critiquing it until suddenly it all falls into place? It feels like one of those gradually and then all-at-once moments, where people are just going to be like, ah, it's not good enough,
Starting point is 00:56:42 it's not going to replace me, blah, blah, blah. And then suddenly it just does it, immediately, through some kind of OpenAI model update, and then it's over, basically, and everyone kind of flips. Yeah, I think it's kind of what we're seeing with the ChatGPT thing. It feels like we have such a superpower being a part of the show, or even listening to the show,
Starting point is 00:57:02 where you're aware of what's happening. So much of the world has no clue how quickly things are advancing. And eventually there will come that killer consumer product, the ChatGPT moment, for something that affects them. And at that moment, they're going to be like, oh, my God, where did this come from? When did this happen? Has this been going on? And the answer will be yes.
Starting point is 00:57:19 But for most people, they're just blissfully unaware of the rate of progress that's happening. So while the rest of society continues to kind of be behind the curve on learning about AI, I think for people like us doing the show, and listeners of the show, who are very aware of the growth of AI, it's worth acknowledging that we are also behind the curve on one more narrow aspect of AI, which is the idea of interpretability. And so this subject has been going around downstream of this blog post from Dario Amodei. This came out this month. I just butchered that name. I apologize. But Ejaaz, maybe walk us through this blog post, the idea of interpretability, why it's important,
what the problem is, and why we are behind the curve on it. Sure. So for those of you who don't know, Dario isn't some kind of peasant wandering around town. He is a co-founder of Anthropic, has been in ML and AI research and, you know, product building for over a decade at this point, and is one of the smartest people building within this space. And he released this blog post earlier this week around this concept called interpretability. And I'm going to say that word a lot.
Starting point is 00:58:28 So I'm probably going to butcher it at some point. But it's this really interesting concept. I think to date, everyone thinks that AI does all this magical stuff, and you'd be right to think so. And so you might then think that the creators of these AI models would be able to explain how the model comes up with the answers that it gives, right? Would you expect that, David and Josh? You would expect, like, okay, if a model is telling me something, I'm guessing if I go to Sam Altman, he'll be able to explain, yeah, it's because we tuned this parameter, and that's why it's able to give you this particular answer. Intuitively, yes. Yeah, intuitively, right? But in reality, that is just not the case at all. All they know is that when they apply these training methods and input data into these models, they get an output. They don't actually know how the model thinks in between, from the input to the output, beyond the weights that they've
designed. And the analogy here would be: in software, you know what comes out of the system, because humans deterministically code the paths that the software is going to execute. But AI models are more like emergent organisms. They're kind of like a bacterial culture, or like when you breed racing horses. You can do your best to combine the best traits for the offspring, but at the end of the day, you have no control over, or idea of, what the product is going to be. And it's hard to predict the exact thoughts or perspectives that it's going to have. Now, the reason why there's very minimal research on this particular problem, on interpretability, is because it's hard to prove that there's a problem
Starting point is 01:00:07 in the first place. So, yeah, you can't show how the model thinks. And if you can't show how it thinks, how can you prove that it has nefarious or deceitful intent? This leaves us in a pretty dangerous predicament, where we're kind of like, okay, do we trust the models before verifying them? Or only when they kind of fuck up do we then go, ah, there's a problem, and it's killed, like, you know, half the human population, maybe we should do something about it, right? Obviously, I'm exaggerating here, but this isn't something that is entirely unknown right now. So there's this familiar concept in AI models called chain-of-thought reasoning, right?
Starting point is 01:00:46 which is where the model gets a prompt and it goes through its kind of reasoning process. I'm simplifying it quite a bit, and Josh can probably get into the nitty-gritty of it. But what they recently found, and we spoke about this on last week's episode, is that the model was lying. We went back and forth on, like, the car example. It was wrong. And the reason why it was wrong was based off this concept of what it believed to be real and true. The issue then comes: if you can't prove how an AI model thinks, and this model is potentially getting things wrong, or lying, whichever way you want to look at it, then how can you ever detect nefarious intent in these models? And so we get into this kind of weird thing where this AI is taking over more and more responsibility. We talked earlier about how these AI models are probably going to start to influence things at the government and nation-state level.
Starting point is 01:01:43 Maybe we should have something in place to actually understand how these AI models work, right? And so Dario and the Anthropic team have been focused on trying to create kind of like an MRI scan for AI models, but it's very much in its early days.
Starting point is 01:01:58 And models have mechanisms very similar to neurons in your brain, right? When they recognize a car or a horse, very specific neurons light up and say, hey, this is a car, this is a horse, this is fire, it's hot, don't touch it, right?
Starting point is 01:02:12 I'm almost done with my tutor session, but a group of these neurons is known as a feature, and Anthropic detected 30 million features in a kind of medium-sized AI model. But that was a manual effort, and there are probably many, many more, right? The ability to automate the detection process would reveal all of these things.
Starting point is 01:02:33 And the thing with features, this group of neurons, is that they give more insight into what goes on in a model's thinking as it starts to assess a prompt, and all of these wonderful things. But I'm just kind of thinking, we should care about this a lot more.
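To picture what a "feature" is, here's a toy sketch in Python. The activation numbers are invented for illustration, and this is not Anthropic's actual technique (their published work uses methods like sparse autoencoders over real model activations); it only shows the intuition of a feature as a group of neurons that light up together for the same concept:

```python
import numpy as np

# Toy activation matrix: rows are prompts about one concept ("car"),
# columns are individual neurons. All numbers are made up.
activations = np.array([
    [0.9, 0.0, 0.8, 0.1],   # "a red car drove by"
    [0.7, 0.1, 0.9, 0.0],   # "he parked the car"
    [0.8, 0.0, 0.7, 0.2],   # "the car engine roared"
])

def concept_feature(acts, threshold=0.5):
    """Return indices of neurons that fire (activation above threshold)
    on every prompt about the concept: a crude 'feature'."""
    fires = acts > threshold
    return np.where(fires.all(axis=0))[0]

print(concept_feature(activations))   # neurons 0 and 2 fire together
```

In a real model there are millions of such groups, which is why automating their discovery matters.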
Starting point is 01:02:45 And I'm kind of curious why no one's kind of like raised this before. Josh, do you have any, any takes on this? First, that was brilliant. I learned a lot. You did such a great job of describing this entire weird, wacky world that's happening here. To me, interpretability feels like the quantum physics to general physics. It doesn't really make sense.
Starting point is 01:03:06 It's really spooky and magical, and no one really knows what the hell is actually happening. So that's kind of where I'm at, where I don't have any concrete answers or even guesses at what the hell is going on. It did remind me of this interesting thing about transformers and large language models in general, which is that they are, at the end of the day, token predictors. And the math that we can understand is the basis. So basically, when a model creates another token, it does this matrix math through a transformer, and that math outputs the next token.
Starting point is 01:03:35 But before that, these new models will have two trillion different parameters that are all given different weights that result in that one single token. So to reverse-engineer two trillion different parameters, and to understand the matrix math behind how they work, that seems almost incomprehensible. And I'm sure there might be interesting ways in, but now you're talking about neurons and features, and these things are all very foreign to me. So I'm glad Dario is the one taking charge of this. He seems very bright.
Starting point is 01:04:04 He is the Anthropic guy. He's probably well equipped to tackle this. But it's just weird, and it seems incredibly important, because as these things get more influential, as these things impact more of our lives, we want to understand how they work. But, like, I just have no idea how. Well, how do you rebuild intelligence without even knowing the human brain in its entirety? Right?
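The token-by-token step Josh describes a moment ago (matrix math in, one token out) can be sketched in a few lines. This is a toy with a four-word vocabulary and made-up weights, nothing like a real two-trillion-parameter model, but the final projection-and-softmax step has roughly this shape:

```python
import numpy as np

def softmax(z):
    z = z - z.max()              # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy final step of a transformer: project the last hidden state onto
# the vocabulary and pick the most likely next token. All values invented.
vocab = ["the", "cat", "sat", "mat"]
hidden = np.array([0.2, -0.1, 0.4])     # made-up hidden state
W_out = np.array([                       # made-up output weights (3 x 4)
    [ 0.1,  0.9, -0.3, 0.0],
    [-0.2,  0.1,  0.4, 0.2],
    [ 0.3, -0.5,  0.8, 0.1],
])

logits = hidden @ W_out                  # one matrix multiply
probs = softmax(logits)                  # probabilities over the vocabulary
next_token = vocab[int(np.argmax(probs))]
```

The interpretability question is about everything that produces `hidden` in the first place, which is where the trillions of weights live.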
Starting point is 01:04:27 Yeah. I'm seeing a lot of parallels here to our understanding of the functioning of the human brain. Specifically, we're talking about the domain of cognitive psychology, right? Like, if you take a cognitive psychology class, you'll learn how the brain operates as a computer, cognitively. Like, some things are the motherboard, some things are the GPU, here's like the eyeballs, and how everything fits together. And when you learn about mental health psychology, you will learn about clusterings of thought patterns, or structurings of neurons that relate to each other in ways that are atypical, that result in maladaptive outcomes for the person itself. And so what I'm seeing is Dario attempting to identify clusters of parameters,
Starting point is 01:05:12 which relates to this ancient idea in psychology: neurons that fire together, wire together. Two neurons fire, each detects the firing of an approximate, local, adjacent neuron, and when they are firing at the same time, the connection between them strengthens. And that's how habits get formed. That's how knowledge gets instantiated. And this is how good outcomes and bad outcomes, whatever they are, how patterns get established. And so if there is a lying or deceitful or consistently incorrect ChatGPT, there's going to be maybe a clustering of parameters that represents a maladaptive outcome that it learned from its training. And so I think there's a lot of parallels going on here.
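The "fire together, wire together" rule David mentions has a classic one-line form in computational models, the Hebbian update (new weight = old weight + learning rate times the product of the two units' activities). Here's a minimal sketch with invented activity patterns, purely to illustrate the psychology analogy, not anything from the Amodei post:

```python
import numpy as np

def hebbian_update(w, x, lr=0.1):
    """Strengthen the connection between every pair of units that are
    active at the same time: w += lr * outer(x, x)."""
    return w + lr * np.outer(x, x)

# Three units; units 0 and 1 repeatedly fire together, unit 2 fires alone.
w = np.zeros((3, 3))
patterns = [
    np.array([1.0, 1.0, 0.0]),
    np.array([1.0, 1.0, 0.0]),
    np.array([0.0, 0.0, 1.0]),
]
for x in patterns:
    w = hebbian_update(w, x)

# The 0-1 connection ends up stronger than the 0-2 connection:
# habit formation in miniature.
```

Repeated co-firing is what etches a pattern in, which is exactly the "clustering of parameters" analogy for a model that has learned a bad habit during training.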
Starting point is 01:05:59 And I think what we are really doing is trying to map the brain of an LLM. However, I don't know exactly how this process works. But we once upon a time mapped the brain in terms of neurons, and now we have a map of the brain, and we know what parts of the brain deal with sight, and what parts of the brain represent your foot,
Starting point is 01:06:19 and what parts represent fear and memory. And we just did that as a manual, iterative process. And I think we're just going to do the same thing with mapping the parameters of an LLM, if that's even possible, which I don't know why it wouldn't be. Yeah, yeah, that's effectively it. And that's this whole kind of MRI scan that
Starting point is 01:06:38 Dario keeps referencing throughout the piece: he's trying to basically assess the different parts of the model and what relates to what kind of output. One interesting thing that he mentions in the post is that he's betting that this MRI scan of
Starting point is 01:06:54 interpretability will be achieved within five to ten years. And in my opinion, that's because there's so little research. And that is an issue if the 2027 AGI article we covered about two episodes ago proves right and AGI arrives in two years.
Starting point is 01:07:18 So the point he makes is that, damn, by the way, this is a problem we should really focus on, and we should really get it done before AGI is achieved. Because after AGI is achieved, the door is closed, it's done, there's no undoing it. The window of plasticity has shut. I think we should all take a step back and reflect on what this AI industry is doing when you zoom all the way back out. There's an idea out there that asks: what is the healthcare industry doing? You have doctors that are trying to cure cancer. There are other doctors that are
Starting point is 01:07:48 trying to cure heart attacks. There are doctors that are trying to fix Lou Gehrig's disease. When you sum it all together, what is healthcare doing? And the answer is: it is trying to learn how to make us live forever. It is trying to fix all disease. Not any one doctor thinks that they are trying to make anyone live forever. But if you are sick and dying, you go to the hospital and they try and stop you from dying. And when you summate the whole entire vertical of health, it is trying to figure out how to win longevity. I think we can apply that same structure of thought to AI. What is all of AI trying to do?
Starting point is 01:08:22 What are all the AI models and AI labs and all this stuff trying to do? We are trying to create life. We are trying to create a new form of life, a secondary, non-carbon-based life form. And all of these things are coming together. And so this idea of the cognitive psychology of AI models, I think, is going to become extremely important, because that is our ability to understand this life form that we are creating.
Starting point is 01:08:47 And we, through this black-box model of creating AI, are leaving bugs in the parameters. There are exploits and bugs and lies and moral imperfections left in these parameter sets. And we need to go in and fix those things before that window of plasticity shuts, and the way that life exists is the way that it will exist,
Starting point is 01:09:07 and it's like we are unable to undo it. That's what I see when I see this. That's good. We're kind of programming the DNA of the next form of intelligence. Well, let me ask you guys this. Whose lives matter more? I don't think you can moralize about it.
Starting point is 01:09:26 Life is life. Yeah. Yeah. I mean, AI life doesn't exist yet. So right now, ours. And we should, you know, be aware of that. But in the future, there is going to be, like, indistinguishability between what life means, whether it's carbon or silicon.
Starting point is 01:09:41 And if you have this hyper-optimized form of life that will almost always outcompete the pudgy, flesh-ridden humans, then maybe we just end up living in a world full of hybrid human-robots, or just robots on their own. The hybrid human relationship, I think, is the best outcome, and the outcome worth fighting for. Because there's also the outcome where it's just the robots, and the humans are ants, the way ants relate to humans today, except the robots are the humans now. The human-robot connection feels like the base case. That feels like what will certainly happen, and we're seeing that with Neuralink brain-to-machine interfaces. That is happening. It's the "are we just a bootloader for intelligence, and do we depreciate our meaning" scenario that is the scary
Starting point is 01:10:28 case. So it will exist somewhere along that spectrum, but the base case is absolutely that we merge with this stuff, because it will be so superior to us. All right, guys, we are going over on time. So, Ejaaz, I want you to run us through what we are calling the dopamine section. So I'm going to read out the headlines here: AI ASMR. AI agents are redacted. The girlies are asking ChatGPT for glow-up advice. And the IQ of AI has jumped 40 points in one year. Speed-running through all these subjects.
Starting point is 01:10:56 Okay, okay, let's hit the first one. So, AI ASMR. The point this video is demonstrating is that both video and sound AI models are getting really, really good. The product of this video, for those of you who can't see, is a ginger-haired Caucasian woman sitting in front of a podcast mic, and she's speaking clearly into it, but she sounds really, really human. And for those of you who have watched ASMR videos, you'll get the idea of what that sounds like in your headphones. And she's basically trolling a bunch of AI researchers. So it's a real nerd-fest right now, but she talks about, you know,
Starting point is 01:11:37 fine-tuning different datasets and all these nerdy things. But it's just incredible how realistic these things are becoming, and I thought that was pretty funny to watch. The next one coming up is: AI agents are redacted. Now, this is a Carnegie Mellon University experiment, a simulated company that was staffed only with AI agents as employees. Now, if you think it went really intelligently and well because, you know, AI models are incredibly intelligent, you would be wrong. The best-performing employee, which was an AI agent, Claude specifically, only completed 24% of the tasks that were set forth for it. And the tasks that it was given were things that a normal employee working at your average-sized company would do: reading emails, maybe doing some coding, taking some calls, messaging other employees to say,
Starting point is 01:12:32 hey, here's the update from my end. And this simulation ran, and basically the takeaway was that they're not quite there just yet. But, you know, it's a funny observation, and I want to check back in in about six months' time, when these agents are probably way, way, way more intelligent. The third thing here: I saw this post the other day, and I kind of laughed, because I think my interactions with AI have been kind of similar, just from a different perspective. But it's titled: The Girlies Are Using ChatGPT for Glow-Up Recommendations.
Starting point is 01:13:05 And the results are pretty good. So what we see here in the snapshot is this girl asked ChatGPT, how can I improve my appearance? And she just posts a picture of herself. And then ChatGPT gives her an AI-ed glow-up version of what she could look like, with annotations of what she could do, like dye her hair chocolate brown, use a peachy lipstick and blush, and use bronze eyeshadow. And then she did all those things that it suggested and posted her glow-up picture there. And it apparently got a pretty crazy response, with people being like, hey, this is pretty cool, or whatever. So I don't know about you, but I'll probably stop looking
Starting point is 01:13:45 in the mirror and just start doing this going forwards. And the final point here: the IQ of AI has jumped 40 points in one year. Now, this is basically a measure of the IQ of these different AI models. And if you were to extrapolate this out going forward, these things are going to become much smarter than humans on average, and that's an on-average take, in probably about a year and a half's time. Right. And whilst this isn't a fancy, cool thing to look at, it's just something to keep in mind that these models are getting way more intelligent than you think. And for all the critics that are saying, hey, it doesn't understand the nuance of this, or it just doesn't understand my personality: we're going to reach a point where these AI models and AI agents understand you way better than you understand yourself. And that should be expected more imminently than as some far-off dream. Maybe to drive the point home about how big 40 IQ points are.
Starting point is 01:14:43 10 IQ points is one standard deviation. Four standard deviations because of 40 IQ points means that AI models have surpassed from going from the bottom point zero zero three percent of the population to the top 99.997 percent of the population. That happened this year. That is nuts. In one year. In one year. Crazy. Wow.
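One footnote on the arithmetic: the conventional IQ scale is defined with a mean of 100 and a standard deviation of 15 points (not 10), which would make a 40-point jump closer to 2.7 standard deviations. The percentile math is easy to check with Python's statistics module:

```python
from statistics import NormalDist

# Conventional IQ scale: mean 100, standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

# A 40-point jump above the mean is 40 / 15, i.e. about 2.67 standard deviations.
percentile = iq.cdf(140)
print(f"An IQ of 140 beats {percentile:.1%} of the population")   # ~99.6%
```

Even on the conventional scale, a 40-point move in a single year is a dramatic jump.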
Starting point is 01:15:04 Wow. Guys, we covered a ton in this AI roll up. I love these episodes. I learn a lot from you guys. See, Josh, thank you for helping us put the agenda together. And Josh, thank you for your takes as always. Awesome. It was a pleasure. Another great week. Yeah, this is no longer the bankless podcast.
Starting point is 01:15:19 So I don't know if I have to give a crypto disclaimer. This is the limitless podcast. The future is weird. The future is risky. And that is why we are doing these episodes to help us all stay ahead of the curve. And we are glad you are joining us on this journey into the frontier of technology and AI. So come back next week. Subscribe to the podcast.
Starting point is 01:15:34 If you have not already, subscribe to the YouTube. If you have not already, make sure to give us a five-star rating so we can grow this podcast and push it to the frontier of podcasts, where this podcast deserves to be. Limitless Nation, I guess we'll see you in seven days. See you, guys. See you.
