Limitless Podcast - The Dark Side Of ChatGPT: Is AI Making Us Dumber?

Episode Date: June 26, 2025

ChatGPT has become ubiquitous, transforming how we write, code, and even think. Today we explore the potential downsides of relying too heavily on AI, examining whether its convenience might be hindering our own cognitive abilities.

------

💫 LIMITLESS | SUBSCRIBE & FOLLOW
https://limitless.bankless.com/
https://x.com/LimitlessFT

------

TIMESTAMPS
00:00 Start
07:27 The Human Soul Is Missing
16:11 How To Properly Use AI
20:32 Multi Agent Breakthrough
28:58 Takeaways

------

RESOURCES
David: https://x.com/trustlessstate
Josh: https://x.com/Josh_Kale
Ejaaz: https://x.com/cryptopunk7213

------

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:03 Hey everyone. I have some not-so-exciting news for you. If you are using ChatGPT and you are listening to this podcast, well, chances are you are probably actually dumber for it. And this is a scary trend that was uncovered this week: we are starting to discover that all of these tools maybe aren't actually helping us in the way that we thought they were. And you could relate this to things in the past where we've tooled our way out of thinking for ourselves. We have calculators, so we can't really do math that well. We can't really spell that well because we have autocorrect. And God forbid you ask me to navigate anywhere without GPS. So while we've improved in a lot of places, we've gotten slightly dumber in others. And that's what's happening this week with this new study. And Ejaaz, I'd love for you to introduce this to everyone, because you're the one who introduced it to me and kind of scared the hell out of me. I'm like, am I actually becoming dumber, or am I still able to think for myself? It raised a lot of questions that, yeah, are worth talking about today. Honestly, it calls me to look hard in the mirror, Josh. And the answer is, I think I am getting dumber using this GPT stuff. Totally.
Starting point is 00:01:03 Okay, so for context for everyone here, the geniuses at MIT performed a research study which looked at college students who were using ChatGPT. Now, we've mentioned this a few times on the show before. One of the most popular use cases for GPT is to write your college essay, because, of course, you don't want to be spending tens of hours the night, or the couple of nights, before your essay is due writing the essay. Kids these days, they got it so good. I know, I know. Can you imagine if we had that back in the day? Yeah, a dream, I'm sure.
Starting point is 00:01:35 I would have done the same. I would have got would be insane. Anyway, so they found that of the college students that were using chat GPT to write their essays, here's a crazy set of stats from this study. 83.3.3% of chat GPT users couldn't quote from essays. They wrote minutes earlier. I'm saying minutes. Because they didn't write it.
Starting point is 00:01:57 Because they didn't write it, right? Which should come as no surprise, but it is also warringly. an issue if you are like getting a college degree as a result of this and you're meant to like, you know, run into a job which is like requiring important roles and stuff like that. But furthermore, brain scans revealed the damage. Neural connections collapsed from 79 to just 42. That's a 47% reduction in brain connectivity. So Josh, the examples you just gave, we're not talking about the inability to not do math because you're using a calculator. We're not talking about not being able to whip out a map and guide yourself from A to B. We're talking about your entire
Starting point is 00:02:36 intelligence level collapsing by 50% because you're using chat GPT. That is a crazy takeaway. This is different because this is structural. It would appear as, like, I'm not sure not being as competent in math because we have calculators is the same as not being competent at generally thinking because you are not doing so. So this feels a little more extreme because of this, like actual structural damage that's occurring. Is it damage or is it just a rewiring to use your brain in a different way? So the way that neurons work, I think I've used this metaphor before on the brain. Neurons that fire together, wire together, right?
Starting point is 00:03:12 And so when you do thinking neurons fire, like neurotransmitters come out of the neural ends and then like that sends signals out to local neurons to like come closer and that's how thoughts and cognition gets encoded into the brain. But overall, you do need some basal level of cognition. to exercise your brain. Your brain's a muscle, and you need to exercise that in order to grow it. Now, I don't know if it is as stark or as drastic as this study is really making it out to be a 47% in brain connectivity reduction.
Starting point is 00:03:44 Now, I think the reason why this is such a big deal is like, yeah, so like when we all get bad at math, like, I can't do long division anymore. I couldn't do it. I can do multiplication for up to three numbers, you know? Actually, I really tried I could do four or five, but I would need some pen and paper, right? now I just use a calculator. And so the math part of my brain slowly just decays. It just gets weak.
Starting point is 00:04:06 But also at the same time, the way that your brain works is that there is this like raw computational energy that your brain can repoint elsewhere. And I think why this is so drastic is it's because it's such a just a holistic cognition, this decline when you frequently use chat GPT. And so it's not something incremental like, yeah, now we have a calculator in our pocket. Or now we have a GPS in our car. It's now we have a brain on our phones. And we actually have the whole entire brain can kind of be outsourced to the device.
Starting point is 00:04:39 And that's why it's showing up in studies like this. That's my take on this. Well, I saw this being described as a muscle, David. So if you don't train a muscle for a long time, it basically atrophies. And that's how they're describing it here in the study. Another takeaway was that they then tested these kids who used chat GBT, to write an essay without AI. And all of them, not some of them,
Starting point is 00:05:01 all of them underperformed people who had never used AI before when it came to writing these essays and essays. So it is definitely an atrophy of like your brain. It's like it's hemorrhaging intelligence. Now, one thing on that, the reason why humans, humans are some of the weakest animals on the planet. And when we grew cognition, when we grew brains,
Starting point is 00:05:23 it came out of our muscles. Our muscles got smaller. We got less muscular than our ancestors because we learned how to think, you know, work harder, not smarter. We learned how to throw a spear instead of like beat with a club, right? And so there is this sense that like,
Starting point is 00:05:39 yeah, the brain is actually a leverage tool on like you can do more with less with more brain power. The only problem with that is like now we have learned to extend that even further and now like the muscles are going to stay the same size. Now the brain's going to go down. But the chip in the brain will hopefully bring it back
Starting point is 00:05:56 once we get there, but we're not there yet. We're in the chat GPT era. Is this why aliens all have big heads? Big heads are like no bodies, yeah. Okay. Well, chip in the brain expert, Josh, what's your take on this, please? Are we heading in the right direction? Well, we're heading in the direction that makes sense.
Starting point is 00:06:15 It's like when presented with the easier thing to do, people often just do the easier thing to do. Water flows downhill. Yeah, when you could defer your thinking to things that are perceived as smart. than you, and they give you results that you feel excited about. Like, that's great. And this is just a continuation of the trend. It's like, how much time are people spending on Twitter getting their takes or on Instagram shorts, getting their ideas? And like, you're just kind of giving root access to more and more ideas. And this is a natural extension of that where now you can actually
Starting point is 00:06:42 prompt to get the direct injection of the information you want without having to think for it yourself instead of seeking it out on social media. Okay. So the Duma take is in a couple of decades, every newborn has a chip in their head, which has an AI which helps them excel at any level of their life. Come on. A couple of, okay, fine, maybe five years time. All right, let's go.
Starting point is 00:07:05 Neuralink, human trials coming soon, right? But if we were to take this out even further, so humans become just, I guess, like, meat vessels for this AI thing and it does all the thinking for us, what's the purpose of us then, going forward? It's always been the plan. We could have an episode on this, but that's the thesis.
Starting point is 00:07:21 It's like, hey, we're just a bootloader. Getting back into this article, I really want to read this quote, because I think this speaks to a lot of what people's experiences are with using ChatGPT for things that are more than Google searches, right? Because I'm happy to use 4o for just looking up a quick fact. But when I need to do more work than that, I start to run into this experience that I think you guys have frequently run into, and that people who use ChatGPT will be able to relate to. So here's a direct quote from the two English teachers who evaluated the essays. These were the teachers grading the essays that were produced with ChatGPT. And so here's the quote: Some essays across all topics stood out because of a close to perfect use of language and structure
Starting point is 00:08:01 while simultaneously failing to give personal insights or clear statements. These often lengthy. Essays included in standard ideas reoccurring typical formulations and statements, which made the use of AI in the writing process rather obvious. We as English teachers perceived these essays as. as soulless. As many sentences were empty with regard to content and lacked personal nuances. Empty in regard to content and lacked personal nuances. I think there's like a broad trend that we are going to see on the internet where there's a lot of, there's going to be a lot of cheap
Starting point is 00:08:37 AI slop out there that's going to be soulless. It's going to be empty. It's getting to be hollow. And it's interesting to be able to kind of see that show up in a scientific study talking about how neural connections and brains are reducing downstream of that and teachers are reporting association with soulless content or soulless writing. I wonder what the Gen Z is have to say about that, David. Because the content they're like consuming on TikTok and stuff, I would call. Is this a boomer take? Am I giving up boomer takes?
Starting point is 00:09:08 I think you are, dude. I think you are. Right? Right? What's that what's that Gen Z skibbitty toilet thing that just goes viral and viral day after day, that's just like dopamine, sensory, you know, bullshit, basically. And I think that that is where we're trending. I think the Gen Z is would disagree with you. It's disagree with me that it's not soulless. That it's not solace. That it's not solace. That it's not solace. That they,
Starting point is 00:09:32 that they, you know, this is their culture. This is their soul. Yeah, I'm just, yeah, the kids aren't all right. This is also just, I mean, it's a model. I mean, just, these are the current models that we have also. So you have to take this, this like evaluation with the of salt because should you want to train a value, a model on values of like having soul and having that like human nature, you can, you can tune a model to do that. And Chatsyipat T tried this with 4.5, which was the attempt to make it more human feeling and to be a better writer. And it failed pretty miserably. So clearly this is a kind of hard thing to do is to make the English language sound natural when writing a long form. But this, again, this feels just
Starting point is 00:10:10 kind of like a technical constraint that's temporary. And we could just tune another model, make it a little better. Could keep running evaluations until it gets to be exactly what we expect out of a good human writer. I love Josh's sake. We can actually put the souls
Starting point is 00:10:21 into the LLMs. We can give them souls. Just some recursive learning on that bad boy and your good deal. I would like to redeem myself and my boomer takes. And so I'll like to take the other side of this argument
Starting point is 00:10:32 where like, I don't know which philosopher this is, but that's because my cognitive load has lowered because I use chat chagipt, but something like Socrates was like very fearful of the notion of writing. Like, he thought that the writing would be invented and then everyone would be able to think less because they wouldn't be able to use their brain
Starting point is 00:10:49 to store stories, and it's going to be, like, an early form of, you know, chat chutea, like writing. And so the people were fearful of writing. People were fearful of the internet, you know, people were fearful of calculators. And so there is an ancient story here of, we invent tools that make our lives easier. And then the older generations are like,
Starting point is 00:11:10 oh, no, the kids aren't all. right. The kids are going to just like, you know, decay because they're never actually going to have to work for anything. So this fits into that pattern, I think, very, very neatly. At the same time, artificial intelligence is a net new thing that we have never seen before at the same time. And so it's one thing to be able to like make leverage with tools in the old days with pen and paper. That's one thing. But recreation of a brain, being able to simulate a brain is something that I don't know it has a very strong parallel to anything that's come before. So I'm like, I'm in between these two things. Right. I think something else that's worth pointing out is who made writing the
Starting point is 00:11:51 de facto way for one to express themselves academically or intelligently across any board? If you think about it, right? So I'll speak for myself. I'm a very visual learner. I'm a very visual describer when it comes to things. And I think that if I had some sort of AI assistant or chip in my brain, whichever way that could help me articulate what I'm thinking, what I'm seeing, what I'm envisioning in a really easy to understand way for each individual that's listening to this, then that's amazing for me, right? So I think this could be actually used as a super tool if that is the intended use of this tool. But if people are just going to, by de facto, just be lazy about this. I'm not positive about what this ends up, guys. So this is a DHS tweet. DHS is this
Starting point is 00:12:37 famous developer, he created a Ruby on Rails. He tweeted out, he's retweeting the study and he says, this tracks completely with what I've experienced using AI as a pair programmer. As soon as I'm tempted to let it drive, I learn nothing and I retain nothing. But if I do the programming and it does the API lookups and I can explain the concepts and I learned a lot. And so this is just like the difference between do you let the robot do the work for you or do you do the work, grow the your experience yourself and be able to like, you know, teach someone. So, so no one who uses chat chitb-t could teach someone anything because they just learn it in a very cursory level.
Starting point is 00:13:14 And again, this tracks everything that we've seen before. At the same time, though, like, I'm looking at the study and it's measuring the decline and cognitive load that, you know, kids are able to go under. But like, I'm thinking that there is a different thing out there that has grown in capacity that they are not measuring. So it's a trade-off, right? We are trading off the ability to think. and we are growing in our ability to access information, access intelligence at a moment's notice,
Starting point is 00:13:43 like at our fingertips. And that is not being measured in this study. And I think with the information age, with the infinite amount of data that we add onto the internet every single month, year, day, there's too much information out there to ever know. And so we need this external tool to parse through that information and apply it. And there's a tradeoff of like, well, now we have all this information at our fingertips. Now we can leverage it with these LLM models, but it's not in our brains anymore.
Starting point is 00:14:11 And so we are measuring the part that's not in our brains, and this study is not measuring the part that is the benefit here. That's an interesting take, David. What do you think the extra brain compute that we're not using could be used for here? Like history would tell us that it's innovation and creativity. Would you agree with that trend? That's a good question. I think the skill that people will grow is prompt engineering.
Starting point is 00:14:36 and so learning how to think critically about how to manage the AIs is going to be how to use your tools appropriately. And that can be very creative. And that opens up a brand new playing field of opportunity where like, yeah, maybe it seems when I articulate it here, it seems so reductive, it seems so small. Yeah, just get good at your prompts. But I think you can imagine a very big world of like thinking about prompts creatively and have that relate to other prompts in a different context. also creatively. And all of a sudden you're like, you are truly an engineer. You're not a construction engineer. You're not a tech engineer. You're not writing code. But you are engineering. You are doing that. It feels like the like Jevin's paradox applied to now brains where we've just kind of been like
Starting point is 00:15:20 offloading compute to less intensive or less important things that we could let computers do. And this is just kind of a natural extension of that. So now we don't have to think about a lot of the things that we're thinking about because you just prompt you at GPT and it gives you an answer. And then you unlock all of this new productive thinking space where you're, brain hasn't actually changed in size. You could just retrain to think about other things. But now the big question that I'm interested in is like, what does that new thinking space get applied to? Are people actually going to use that to solve more creative ideas and use this tools as leverage? Or is this just a net reduction? Does that space get wasted? Because we're all floating so much of it to
Starting point is 00:15:56 these AI models. It's actually a really good point, Josh. And I think that the first step is learning how to interact with the AI in the first place. Andrew Carpathy had a really good take this week where he basically describes humans as the constraining point when it comes to using AI. So he describes this really interesting cycle where a human will write a prompt and expect the AI to come up with a complete answer, right? Except that humans are kind of bad at prompting, right? We miss out a lot of context. We miss out a lot of nuance, right?
Starting point is 00:16:26 And then we kind of like follow up and we say, oh, no, I meant this, I meant this and this and that. And it gets really confused. You can see the AI getting really confused. and its answer gets very much less effective. But what Andrew suggests is a new form of interacting with the AI, which is ask it a simple question, introduce the scenario, and then let it respond to you in a short, simple answer.
Starting point is 00:16:47 And then what you do as a human is you verify whether that answer is correct or not, right? It's easier to ask a small question of which an answer that you typically have a good idea of, verify that, and then follow up with another question. And he says that the sum of all these different parts or prompts, if you like, ends up in a much better answer than if you were to just write one entirely long prompt. So it's this kind of like self- iterating loop of human to agent or human to AI. And I really think that's a much more optimistic model of how humans and AI can work together versus this just slop chip in the brain, which we'll eventually get to,
Starting point is 00:17:24 but just, you know, offloading everything onto them. So really the difference here is instead of trying to like one-shoulding, shot your prompt. You incrementally take small steps forwards towards a goal that you want. And you don't take, you don't try and like jump the entire staircase. You take one step at a time. And you slowly get there rather than just like one shotting this, hey, write me an essay about like the thesis of like the market economy in the Civil War. And then turning that in it's like you actually kind of use it as a tool rather than an outsource. Yeah, exactly. Like actually in this example, in this presentation that he gives, he says,
Starting point is 00:18:02 don't ask the AI to write 10,000 lines of code for your new app that it has no context on, right? Talk to it. Let it write a bit of code, review it, iterate, and then kind of like build into this larger thing. Josh, what's your take? And this is kind of how you get better at everything, right? It's just you want as much feedback loops as you can,
Starting point is 00:18:20 and you want to tighten those loops as tight as possible. For maximum control and maximum just like learning through each iteration. And I think, yeah, when you do ask the, AI for a large amount, for 10,000 lines of code. It just, it lacks the context that it needs to actually produce a good answer. So by doing this iterative formula, not only are you getting closer to the correct answer, but you could evaluate quickly and it augments your ability instead of replacing it. So it becomes like, to DHH's point earlier, it's like you have this pair programmer that you can work with and you could kind of evaluate and through the evaluation process,
Starting point is 00:18:53 you are learning, but it's, it's helping you tighten that feedback loop. It's helping you iterate faster. And I think that's the most exciting part of this, to Andre's point, is like, you just want to move faster. And if this tool can help you climb those steps in each step faster and faster and faster, while you're still learning, you're still retaining information and you still have the context that the AI model doesn't have, that's a huge one. And that feels like the ideal use case, for now at least. While AIs are still lacking the context that we have, this is an amazing way to work. And you can just move so much faster. Maybe I'm reading into this like a little bit too much, but it feels like just trying to, the right way to use these tools, according to
Starting point is 00:19:28 Andre Carpathie's take, is much less of a, like, master slave relationship and much more of two collaborators, two co-collaborators iteratively working towards a future rather than this one person of like, I've got this essay, chat to BT, write this essay for me while I go, fuck off. Or instead, it's like, hey, chatchubit, we need to write an essay together. Here's what I'm thinking. what are you thinking about that? And then that starts off the process. And it's much, if it's much we're healthy of a relationship,
Starting point is 00:19:59 it's much for a collaborative, it's much more sustainable. When the robots take over, I feel better about this path than the alternative. And so, like, rather than just like outsourcing your work so you can go out and play, it's like, no, you guys do the work together
Starting point is 00:20:13 and you iterate towards a better outcome. Okay, David, but you know what's better than a human AI relationship? An AI-AI relationship. Right now. Oh, no, we're taking the human out again. Yeah, we're taking the human again. The reason why I bring that up is if any of you have been using Claude's new research feature, so that's the AI model from Anthropic, it has this new deep research feature,
Starting point is 00:20:38 which is kind of similar to chatGBT deep research, and it's really, really good. And they reveal this week how it works. And the fact is, they're using a bunch of AI agents in the back end to basically run this iteration loop that we just described between, human and AI, but instead it's AI and AI. So it's a Claude model talking to another Claude model. And what happened was when compared to their previous research feature, it improved by 90%, 90, on the output for the average user that used Claude Research. So what it's basically showing is, number one, it's verifying what Andrew Carpathy is saying, that having this iterative cycle of
Starting point is 00:21:18 back and forth, smaller questions, understanding the nuance and having a deeper conversation, is actually very useful. But the slightly doom a take is maybe AIs are the best people or things to do that. And we should just cut the humans out entirely, which brings us back to big brains, skinny arms, no muscle mass, and a chip in our brain. So how does this work? Are there just agents talking to agents? Yeah, if you scroll down.
Starting point is 00:21:44 Yeah, so if you look at this diagram that you've got pulled up here, David, essentially it shows you that you have like this kind of like master Augustrator agents. So think of this as like the chat GBT that you talk to on your interface. But what chat GBT is doing on the back end in this case is talking to a number of different sub-agents. And the reason why I call them sub-agents is they're tasked with smaller things. So typically an AI model that you interact with is a very generalized model, right? It's meant to try and know everything and interact with everything. But these smaller sub-agents are tasked with, hey, can you just check the facts of what this guy has asked us and just see if like, you know,
Starting point is 00:22:25 the current events that he's referencing actually happened. Then you have another agent, which is a reasoning agent, being like, okay, now that I've talked to this agent and they've verified that these facts that this guy's claiming is true, let me think about like the possibilities of where the solutions might end up for the question that he's asked. Then you have another agent which checks the reasoning agent being like, okay, is this agent biased in any way? Has it used any kind of like political sources that I wouldn't have used, et cetera?
Starting point is 00:22:52 So it's tossed with many different things and it goes in endless loops until it kind of like creates this kind of like average point. And actually off this conversation when we were introducing this topic, David, you made a really good point that this kind of sounds like reasoning, doesn't it? Yeah. Yeah, I thought that this was what reasoning was, which is like this one model thinks and then it checks its work. And the only thing that I see being different here is that there are. There's more segregation in the roles. But other than that, like, when you zoom out and view it from a Burzai view, it's more or less the same. You have a thinking process.
Starting point is 00:23:28 You have an iterative meta thinking about the thinking. You have an evaluation of the thinking. And then you have a redirected outcome based off of the meta thinking. And yeah, here's there's different agents. And I could imagine, like, some, if we're using Chat Chitabit, Open AI models, you have the 4-O model doing some quick fact-checking. Just, you know, use 4-0 for the quick fact check, just like run through that really quickly. But then you have O3 Pro doing the deep work, the deep analysis. And then you have 4-0 doing the quick stuff, just the quick stuff.
Starting point is 00:23:59 Just like, is that really? Let's double-check that. So you could imagine yourself, like writing an essay, doing deep work. And then you're like, oh, what year did that thing happen in? And then you open up your 4-0 model and it does a Google search. And then that iterates and informs the O-O-3-Pro model. And that's kind of what I'm seeing here. Dude, that's a good take because even I use open AIs models exactly like that.
Starting point is 00:24:20 Like, I use 4o when I'm replacing a Google search with a ChatGPT search. I'll give an example. Today I got an email from one of the sports memberships that I have, and they said: hey, we've got to close down the establishment for a month. Here are your three options. You can pause your membership, you can take some credits that we're going to give you, or you can, you know, basically opt to do nothing.
Starting point is 00:24:43 And I didn't spend a second thinking about this, literally. I copy-pasted the entire email and stuck it into 03 specifically, right? So you make a really good point that it's almost like these AI models have personalities and different attributes. Because the question I had was, why don't just use a single instance? You're literally using Claude as these different sub-agents, so why not just run it through one single thing? And maybe the missing fact is context. Maybe the missing fact is like a combined memory kind of confuses the AI. and prevents it from like thinking clearly,
Starting point is 00:25:16 whereas segregating it into these different models is a better hierarchy? I don't know. Yeah. Yeah. It's not unlike how the brain works. So the brain has different zones, different regions.
Starting point is 00:25:27 You know, you have different dedicated pieces of architecture in your brain that specialize in different ways to think, right? You have your memory, you have your senses, you have all these different things. You have your feelings,
Starting point is 00:25:38 your emotions, all this kind of stuff. And those different zones of your brain are all kind of in competition. for attention. They're all kind of like flagging things and some things are going to be flagged louder than others as in like there is a lion. I'm going to yell my this part of my brain that is my role to tell me that there's a lion in front of me is going to yell so goddamn loud. And my relationship stress, I'm just, that's just not making it way to the surface. And so there's
Starting point is 00:26:06 this like this internal market economy of negotiation that your brain has in order to produce an effective output. And so what I'm saying here is I'm seeing different modules working in orchestration and some things are going to like have priority or urgency being like, oh, I just fact-checked your thingy and you are so off base that I'm going to yell and scream. And once I'm heard, then I'm going to quiet down. I'm going to let a different model take over and move forward there. So I'm seeing a lot of parallels with how brains work here. It feels like the natural extension of what we've been seeing with chat GPT where it just has tools. So now, Now it can search when it feels like it needs to search.
Starting point is 00:26:43 It could generate an image when it thinks that it's helpful. It could retext off of things. And this is just kind of the extension of that where now this large model has the tools, well, here's my fact checker, and then here's my logic checker, and here's my math checker. And it's just this tool set instead of a calculator, it's an actual thinking model. So you get this kind of like compound reasoning effect, but hyper-specific with the specific domain knowledge and context that's required, which probably just yields to much better results. But now I'm curious about what does this look like to compute?
Starting point is 00:27:12 Because this seems like a huge increase in token generation relative to just asking a prompt and getting one reason there instead of like these 10 different tools that are all thinking. Jensen Huang wins again, Josh. That's a lot of tokens. You would think that like one good prompt would actually satisfy the needs of like seven or eight or nine more iterative prompts. You would think you could. So what I'm saying here is like the architecture to do a better one shot prompt. You just make one prompt, and then the output is actually what you need,
Starting point is 00:27:42 and you don't have to keep on prompting it again and again and again. That's kind of what I'm seeing here. Right, but then the prompting is just getting offloaded to agents. So it's still happening, but it's just happening behind the scenes. So, you know, it's still using the same amount of computing. It's taking cognitive load off of humans, and it's finding ways to put it into shaft GPT. Making us into big brain, small-bodied aliens. Actually, no way, small-brain, small-brain.
Starting point is 00:28:07 Small brain aliens. Small brain, small body. Small brain, small body. Yeah, we just lose. Yes. Oh, God. There is a note here. It says they do require more tokens to achieve this.
Starting point is 00:28:19 It's four times the tokens for regular chats and 15 times the token count for multi-agent systems. So the output, we need to beat the cost for the latter, which is a 15 times multiple on required tokens per query. So a lot of compute required, but progressing in the right direction. So I think the theme of this episode, the question of this episode is, does humanity collapse into just exporting its cognitive load externally and our brains just kind of atrophy over time? And what I'm saying here after like kind of like working through some of the stuff, there is an immense gravitational pull for that outcome happening. And so how do we want to prevent that? Do we want to have that not
Starting point is 00:29:00 happen? Is that a bad thing? Whether or not that happens to the individual, I think that kind of comes down to just individual willpower. And like, because you can think harder while you use chat CBT. That is an option to you. You can also think dumber also while you use chat chabotty. That is also an option to you. And so some people are just going to become sheeple. Some people are going to actually use these tools to make mega trillion dollar tech companies.
Starting point is 00:29:26 And it comes back down to motivation and willpower, which has always been the case. That's always been what it's been. And so I don't know. if anything is meaningfully different here, other than there will definitely be more sheeple. Yeah, again, these are just tools for leverage, and it's just a tool, and you could use it to improve yourself, or you could use it to offload your cognitive load
Starting point is 00:29:47 and not think for yourself. It's very much in the user's hands. But there are ways that you can kind of help push things in the right direction. There was this great video that I've been obsessed with. I've probably said it to you guys a couple times. I'd love for you to pull it up just briefly before we wrap up here for the people as a reward, where it's this video. of Sydney Swinney and Drake teaching math,
Starting point is 00:30:07 and we haven't included this yet, I'd love people to see it, where you can actually generate meaningfully great content with AI and push this onto other people in a way that's digestible. So when you do see your favorite actress or your favorite rapper
Starting point is 00:30:20 and they're talking about these complicated topics, like that is a meaningful change that you could kind of push onto others using these tools. Can we get Sidney-Sweeney to teach a generation how to do calculus? Yeah. And like, so it is hyperpersonal. It's on you to decide how you want to use these tools. But you also do have the opportunity to create things that can make it easier for other people to get aligned and think for themselves as well. So I think that's probably my takeaway is, is you can also change things for other people as well. Hmm. That's very optimistic, Josh. It does make sense that we start with brain rot and then we move into like, okay, I'm done with the brain rot. Let me be productive now. Seems like direct order of operations.
Starting point is 00:31:01 I was like, man, I understand this. I can, like, I'm into it. I was, I was hooked and like, I would never watch a math video. But this one, I was like, okay, yeah, that's cool. Was it, who did it for you? Sydney Sweeney or Drake? Well, they started with Sydney Sweeney and that was a good hook. And then when they got Drake, that was it, I was locked in.
Starting point is 00:31:17 This was meant for Josh. It was the one two pun. This video was meant to teach Josh about the Pythagorean theorem. Yeah, let me tell you, I can rehearse this word for word. Yeah, I was about to say, Josh, what is the Pythagoras theorem again? A squared, B squared, equal C squared, baby. Come on. Thank you, Sidney.
Starting point is 00:31:34 So there you go. It's possible. All right. Let us know what you think in the comments. Do you think we are doomed? Do you think that we are going to offload all of our cognitive load onto chat, GPT, and we're never going to be able to think again? Or will we just be smarter because we'll have the tools to be smarter? Let's know what you think.
Starting point is 00:31:52 If you found this video on YouTube, make sure to subscribe. We do these AI roll-ups. We talk about the news, the week, and the drama. AI, the Game of Thrones race to create God. We talk about this and all the other things that are going on in the AI labs world pretty frequently. So click that subscribe button, click that like button, and we will see you in the next video.
