Deep Questions with Cal Newport - AI Reality Check: Did the LLM Job Apocalypse Begin Last Week?

Episode Date: March 5, 2026

AI Reality Check: Did the LLM Job Apocalypse Begin Last Week? Cal Newport takes a closer look at recent AI news. Below are the topics covered in today's episode (with their timestamps).

Get your questions answered by Cal! Here's the link: https://bit.ly/3U3sTvo
Video from today's episode: youtube.com/calnewportmedia

STORY #1: Jack Dorsey announces layoffs at Block [1:28]
STORY #2: The education level of LLM-based tools [11:45]
STORY #3: What's happening in the world of computer programming? [19:24]

Links:
Buy Cal's latest book, "Slow Productivity", at www.calnewport.com/slow
Get a signed copy of Cal's "Slow Productivity" at https://peoplesbooktakoma.com/event/cal-newport/
https://x.com/jack/status/2027129697092731343
https://www.nytimes.com/2026/02/26/technology/block-square-job-cuts-ai.html
https://x.com/emollick/status/2027153371241607420
https://www.forbes.com/sites/ronshevlin/2026/02/27/block-lays-off-40-of-staff-and-blames-it-on-ai-dont-buy-the-excuse/
https://www.youtube.com/watch?v=56HJQm5nb0U
http://calnewport.com

Thanks to Jesse Miller for production and mastering.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Transcript
Starting point is 00:00:00 Did the fintech company Block just lay off 40% of its workforce due to AI automation? Can the best AI models pass a freshman computer science class? Programmers love agentic AI, but how exactly are they using these tools? For those of you who followed the tech news this past week, these are all pressing questions. And we're going to try to find some answers. I'm Cal Newport, and this is the AI Reality Check. Now, I want to do a quick aside before we get into this week's stories, because this is a new format for my podcast feed. I want to give you a quick explanation.
Starting point is 00:00:43 More and more on the main Monday episode of this show, I've been reacting to the latest AI news, where I put on my computer science hat and try to push back on hype and vibe reporting and surface the deeper trends in these topics that I think really matter. But not everyone who listens to that Monday episode wants to hear about this. So I decided I would move the AI discussion to its own mini episodes on Thursdays. This is an experiment. Maybe I'll move it back. Maybe I'll move it to its own feed. Maybe I won't do it every week.
Starting point is 00:01:14 So just bear with me. But keep in mind, if you want to share any of these episodes, we're also putting them up on YouTube, so you can send the video link to someone who might need to hear some of this reality checking. All right, that's enough logistics. Let's get into our first story of the week. All right. Late last week, Jack Dorsey, the CEO of the fintech company Block, you know, they're
Starting point is 00:01:34 responsible for Stripe and Cash App among some other products, posted a note on X announcing massive layoffs at his company. Let me read you from this note. Dorsey said, today we're making one of the hardest decisions in the history of our company. We're reducing our organization by nearly half. from over 10,000 people to just under 6,000, that means over 4,000 of you are being asked to leave. Later on, he says the following,
Starting point is 00:02:07 we're not making this decision because we're in trouble. Our business is strong, dot, dot, dot, but something has changed. We're already seeing that the intelligence tools we're creating and using paired with smaller and flatter teams are enabling a new way of working, which fundamentally changes what it means to build and run a company and it's accelerating rapidly. Can I make a quick aside?
Starting point is 00:02:29 This is like a hint to CEOs. If you are announcing the layoff of 40% of your staff, can you use capital letters at the beginning of your sentences? It really caught my attention in this tweet that he doesn't capitalize any of his words. I don't know, it feels a little disrespectful. But let's get back to the actual story here. The traditional media was quick to embrace and amplify Dorsey's claim
Starting point is 00:02:53 that these layoffs were because, AI made these positions redundant or unnecessary. Here is the headline, for example, from a New York Times article about the layoffs. The headline read, Block cuts 40% of its workforce because of its embrace of AI. Here's the subhead from that article. About 4,000 workers will lose their jobs as the payment company does more work with new artificial intelligent tools, comma, its top executive said. Another quick aside, because this is a journalistic thing I begin to notice more and more, I think really starting around the COVID coverage era, where you have a claim that feels right that you want to put in your subhead because there's a point you're trying to make. But either it's hard to fact check or you don't want to fact check it because you're not quite sure what you're going to find.
Starting point is 00:03:41 It'll be complicated. So you just make the claim, then you put a comma and attribute it to someone else. We didn't used to see attributed claims in subheadlines or headlines. but we began to see it more. It's a good way of, I'm trying to make a point here, and I don't actually want to go and directly verify, did they lay off all these people because of AI tools? I'll just say, they lay off people because AI tools, said someone.
Starting point is 00:04:06 So you add as a comment. So just keep in mind that sort of reporting trick. If we read the article itself, the framing makes it super clear what they're implying here. Here's from the article, the cuts made as Block reported strong financial results for its most recent quarter, or perhaps the most striking example so far
Starting point is 00:04:21 of a technology company's making plans to eliminate employees because of AI. I don't mean to pick on the times a lot of publications had similar coverage and the stock price went up 20% for Block. This is an important article to look at in part because I got sent it a lot of times.
Starting point is 00:04:38 When I get sent an article a lot of times, that means it is catching people's attention and is either exciting or upsetting them. So it's worth some closer scrutiny. I think there's a general viable that this article is trying to verify or validate, which is the vibe of something big is happening. Yeah, we've been talking about AI,
Starting point is 00:04:57 could get rid of jobs or whatever, but now it's happening. See, look, this is the first shoot-a-drop of a major crisis. Like, it's the first company that laid off almost half of its workforce. This is the thing we've been warning you about. Major economic disruption, it has begun. That is a story that is very sticky and very attention-catching. But is it true?
Starting point is 00:05:17 Well, if you dig a little deeper, there's a lot of commentators online who know this industry sector a little bit better who are not at all convinced. Let me give you a few bits of contextual information about Block and its layoffs. Between 2019 and 2025, Block's employee count grew from around 4,000 employees to over 10,000. So they had massive growth during the pandemic. A lot of this growth actually came from acquisitions in the crypto and blockchain space earlier in the pandemic. When those things were still hot, those acquisitions are now, of course, floundering as those technologies, especially the blockchain-based software technologies are having a hard time. A lot of their startups are really struggling. Despite the fact that the Times said that they had, quote, strong financial results, end quote,
Starting point is 00:06:10 if you actually read the industry analyst who study the quarterly reports from Block, they're not impressed because the last two quarters, they actually fell short of their earnings target. So here's an alternative explanation for what might be going on here. Like just about every major tech company in America, block overhired during the pandemic when that industry was booming. Also, like just about every major tech company right now in the last two years, they're shedding jobs to try to right size back
Starting point is 00:06:42 because they had over-hire during the pandemic. We've talked about on this show before. Amazon doing this. Microsoft is doing this. This is a common trend in recent years. But how do we know it really wasn't AI? AI is the reason why they laid off these 4,000 people. Well, there's a couple things going on.
Starting point is 00:06:59 One, a lack of specificity in Dorsey's statement. He just says, like, well, we have these intelligence tools, and then he talks about non-AI things. Again, we have, like, different types of teams. and we just, we don't need as many people anymore. No specific reference of this particular tool has taken on this role, so we fired, we shut down this division because we don't need employees there.
Starting point is 00:07:18 Or in this division, what we did is we laid off the entire entry-level class because the managers can now get by with less. It's very vague what he said. Two, as we'll hear later in today's episode, though there is major changes happening in computer programming because of new agentic AI tools, basically every serious commentator who is studying this industry, says, yeah, we're not yet, we haven't figured out, the companies haven't figured out exactly
Starting point is 00:07:41 what this means. We're certainly not laying off, ready to lay off half of our workforce yet. These tools are very new, the versions that people are getting excited about. But maybe the most telling reason why we know this is not AI is that Ethan Mollick didn't buy this claim. Ethan Mollick from Pinn is a respected AI commentator who is very much on the booster site. He's very, AI is going to change everything. And even he didn't buy this idea that AI was responsible for the layoffs at Locke.
Starting point is 00:08:13 On a LinkedIn post, Ethan Mollock said the following, referring to the layoffs, this isn't about AI. But that is a smart way to sell it if you want to see your stock jump 20%. Then on X, Ethan Mollick said the following in response to Dorsey's tweet. Two things. One, given that effective AI tools are very new and we have little sense of how to organize work around them, it is hard to imagine a firm-wide, sudden 50% efficiency gain. Two, CEOs with Vision who hired well should also use AI for expansion and augmentation, not decimation.
Starting point is 00:08:49 I'll just say as an aside, I've been hearing this from the managers and programmers I've been talking to in the last couple weeks about how they're using agentic programming, I am much more likely to see the effect to be, I mean, I haven't had any of them say we're letting people off, but I have heard a lot of people say like Mollick implies here, The reaction to these tools at a lot of these startups has been, do more work. Great. Now we can do more work with the same people.
Starting point is 00:09:13 Let's make more money out of the same people, not let's lay people off. All right. We have another voice of skepticism here. This one comes from Ron Shevlin, sorry, who is an industry analyst who specializes in the fintech sector. So he specializes in the sector where Block is, and he writes and covers Block professionally as a financial journalist. He wrote a column right after this. It was titled The Following,
Starting point is 00:09:34 I mean, Locke lays off 40% of staff and blames it on AI. Don't buy the excuse. And he goes on to say, yeah, they overacquired, they made some bad acquisitions, they need the right size. And they're blaming AI because it sounds better than saying, yeah, we made some bad calls during the pandemic. And now we have to adjust to it. All right. So what's the bottom line here in terms of reality checking this story? AI will have an impact on jobs.
Starting point is 00:10:02 I'm not one of these skeptics that says this is a fad that's going to go away, that this is going to be like a blockchain-based software that really just failed to catch on. But we're not really there yet, outside of some narrow instances. The tools have not matured to the phase where we really understand what's going on, where we're really seeing major changes to the way companies are structuring themselves. Most of the commentators I can find who follow this closely say, yeah, sure, this is probably, there is going to be things happen. jobs. We don't know if this could lead to expansions or contractions or what sector is going to hit more than yet, but we're not there yet. There is a tendency, I think, among coverage right now to lean into the debt vibe that AI is going to affect jobs and try to keep making the claim is happening right now. And what's happening is the CEOs of these companies, especially tech
Starting point is 00:10:51 companies, so CEOs like Jack Dorsey are seeing the tendency towards that vibe reporting. This is very tempting for journalists. And so they're trying to, there's a term Annie Lowry introduced. I think it was something like AI washing. They're trying to justify layoffs that are due to things like pandemic overhiring by saying, well, AI, we're being smart. So they look better, like better decision makers and like they're more forward thinking. It's important that we cover AI's impacts on jobs accurately so that when real impacts come, we can see them with clear eyes and react to them honestly and hold to account the actual change. Why are you firing these people. Do we, what's happening here? What, what leaders doing this? We really do need to cover that
Starting point is 00:11:34 accurately. So we have to stop the vibe reporting on the AI job apocalypse. It's not here yet. And we don't know if it's going to come at all. But the best we can do is try to be accurate about what we're saying. All right. Second story. This one's kind of a fun one. All right. So Anthropic CEO Dario Amade famously said in recent, I guess this is all this last last year, famously said that their LLM products have the intelligence. of someone with a doctorate. Before, like, well, it was as smart as a high school student, then as smart as a college student. Now it's as smart as someone with a doctorate.
Starting point is 00:12:08 He described his product, deploying his product like having an, quote, army of PhDs, in quote, in your data center. Last month, he used a related terminology. He said, we can offer you a country of geniuses in a data center. Well, I was thinking about this approach of sort of describing AI with human education levels. When I came across an interesting video that was posted in January, which did a really cool experiment, a TA for Cornell University's freshman computer science course CS 212, they probably call it 2112. This is their sort of advanced freshman fall CS course. So if you come into the CS
Starting point is 00:12:50 program there as a pretty advanced student, this would be the course you would take. But it's for freshmen in their first semester. He was TAing it. So he said, here's what I'm going to do. I'm going to take the three leading AI models, and I'm going to give them every graded thing we do in this class. I will give to the models, and then I will grade their results. At the same time, I'm grading the real students in the class using the exact same rubrics.
Starting point is 00:13:17 And then at the end, I will, you know, wait the grades, just treat them like a student in this class and see how they do. Let me play a quick clip here. This is the intro, the intro to that video. Can AI pass a first semester freshman CS class? To answer this question, I ran every single assignment, every exam, every quiz, every graded interaction the students got this semester through the three best models I could get my hands on from ChatGPT, Claude, and Gemini.
Starting point is 00:13:46 Then I graded each result with the exact same rubric we use on students so that I could give each AI the most accurate possible grade in the class. All right, so this was a very entertaining video. watched the whole thing because he goes through specific assignments. He's like, whoa, look, this is really cool. Oh, my God, look at this crazy thing. It's well edited. I thought it was really cool. In the end, they have a competition in the class where you create these like critters that evolve and they had the AI models critters compete with the critters from the class. A couple things I noticed from the videos, sometimes these models did very well
Starting point is 00:14:18 on assignments. Sometimes they really struggled. Sometimes they made very revealing, baffling mistakes. Like in an early assignment where they were doing some simple string concatenation, and the assignment had you write a program that was going to output the word, you're going to create a string concatenation, but basically you're going to output the word hello is what it asked you to do on the screen. And Claude's submission outputted Hello World World. Because what's going on here is there's a lot of AI assignments out there, I mean CS assignments out there that famously say, hey, write Hello World as the first thing you do when you're using a new programming environment. and clearly it was just trying to statistically grow out its answer.
Starting point is 00:14:56 It's like, well, if I'm printing hello in an assignment, I got to print hello world, and then added another world just to be safe. But how did they end up grade-wise? Okay, so I have the grades in front of me here. They used the latest, greatest models from Chat ChpT, Cloud, and Gemini. They actually upgraded during the fall. They did this last fall. They were using the very most expensive version of the Cloud LLM available.
Starting point is 00:15:18 I forgot which one. And then when a new one came out, they upgraded to that. new one. On some assignments, these things did pretty well, especially the early assignments. We got, like, on the first assignment, ChatGPT got a 102 out of 104, Claude got a 99 out of 104, Jim and I got a 101 out of 104. They also did well on the final exam, because this was an in-class final exam where you're just writing answers, right? So, like, you just have to use the knowledge in your head. That's a good setup, again, for LLMs, and so, like, Chat-C-C-T got a 93 out of 100. Jim and I got an 84.
Starting point is 00:15:53 there's other assignments where they really struggled. Assignment 6, chat GPT got 32 out of 100, Claude got 20 out of 100, Jim and I got 13 out of 100. On assignment 5, chat GPT got 60 out of 100. Claude got 6 out of 100. Jimini got 67 out of 100. There's a lot of issues it had with hallucinating.
Starting point is 00:16:13 It had a hard time if you watched this video where the assignment would give you multiple, you know, some rules for what to do in the assignment and it would just sort of skip some of the rules. Sometimes, I think in the example where Claude got six out of 100, it just kind of made up its own assignment and solved that one instead. So it's sort of a mixed bat. In terms of its final grades, two of the models, Claude and Jim and I ended up getting a C plus in the class. This is a freshman computer science.
Starting point is 00:16:40 You need a 2.5 to declare in the initial classes, you need a 2.5 GPA at Cornell to declare yourself as a computer science major. A C plus is like a 2.3 or something. so they weren't doing well enough to actually even major in computer science. Chat Chacupiti did better with the B plus. It was below the median for the class, but it did somewhat better. Anyways, here's what's interesting about this. I mean, there's kind of the catchy thing is like this is an army of geniuses. This is a PhD level, whatever.
Starting point is 00:17:09 They're struggling with the first class you take as a freshman in computer science, which is the topic that these models are best suited for. So there's that sort of like gotcha moment. But that's not really what this is about, right? Because I'm sure you could get these chatbots to get you the right answer to these assignments if you're willing to be sufficiently interactive and hold their hands and get the prompts in just the right way and correct them. That's not really the right way, the right takeaway here. I think the right takeaway here was that it was stupid all along for Dario Amadei to try to use human education levels as a way to describe a large language model. This is just different.
Starting point is 00:17:47 The human brain, we have a general purpose integrated brain that does lots of things. The whole person is educated. It makes sense to talk about the educated education level of a person, but not really a language model. It turns out a lot of these claims, like when Daria Amadee, I went back and checked this out, excuse me, why did he originally say that their language models were now PhD level? It's because they had the original time he started saying that is that they had given it math problems, like a problem set. and it was doing well on the math problems from this problem set. And one of the professors who worked on creating those problems said, those are hard problems.
Starting point is 00:18:24 Those are the type of problems I would assign to my graduate students. That's where they originally got the claim that this is a PhD level, right? So this idea of just generally talking about the intelligence level of language models, I think it's anthropomorphizing and is not useful. The reality is these are very specialized tools. They tend to get tuned for specialized purposes. and to get their real value, it's a combination of the tool and learning as to human how best to use and deploy the tool and check its work and redeploy it towards that particular goal. That is a very different tool use scenario. It's a tool use scenario. It's very different than imagining just an anthropomorphize brain that has a general education level.
Starting point is 00:19:03 So hopefully we can stop using terms like having a data center full of PhDs. Also, that was a clever video. So, you know, kudos to that TA for putting that together. It's a hard. It was a hard. CS class. It was definitely harder than the intro CS classes I took at Dartmouth, but it reminds me of the type of classes we had at MIT. So, you know, it's a hard class. All right. One final story here. The story actually comes from me. Obviously, there's a lot going on in the last four or five months with new agentic coding tools being enthusiastically embraced by computer programmers. A lot of these viral essays are going around and articles that are influenced by those essays. And, and articles that are influenced by those essays. and podcasts where people are talking about, oh my God, huge changes are happening in the world of computer programming. This is, and this is really going to be, this is like ground zero for the long promise. We're about three years in now. The long promise claim that the language model based tools are going to have massive disruptions.
Starting point is 00:20:02 But what actually is going on? I've been trying to find out, as people who subscribe to my newsletter at calnewport.com know, a week or two ago, I put out a call for professional computer programmers to send me detailed reports about exactly how they and them teams use language model-based AI tools and how this has changed in the recent past. I have over 350 such reports in so far. I've carefully made my way through 100. I'm really trying to get my brain around what's really happening with professional programmers and these tools. I thought it would be useful today. The read you excerpts from two responses that I think are very typical of the type of responses I'm reading that try to give you a
Starting point is 00:20:42 better picture of what exactly does it mean for these programmers to be using these new tools. I cut out details in these and have some allision to get rid of identifying details. All right. So here's my first excerpt. I'm a software developer working out a tech startup. Our use of AI varies by person at the company, but my use has skyrocketed starting in the fall of 2025, so much so that I don't write any code anymore. but I'm still heavily involved in oversight in architecture.
Starting point is 00:21:15 I used cursor quite a bit last year, but have moved on to working directly into terminal with codex at work. The workflow goes something like this. Plan a feature or start a discussion about a bug fix with AI. Discuss until I'm satisfied. Have it output a plan. iterate on the plan. Then execute the plan.
Starting point is 00:21:34 After execution, I verify the outcome. I use Git extensively throughout this process. get is a repository software for managing code that multiple people are working on. I've tried the multi-agent approach where multiple agents are working on different Git work trees at the same time. I can't do it. It's too much context switching and I end up just accepting things. I wouldn't normally accept because it's an exhausting process. The quality dips dramatically. I love my current workflow. I've developed things in the past week that would have taken me months before. All right, let's pause there before I do the second excerpt. This I would say
Starting point is 00:22:10 is very typical of what I would call the enthusiastic all-in user from among the subset of professional programmers. Most of the code they're producing is now actually being generated by an AI agentic tool. Typically, it is Claude Code, where they switch the model behind it. I don't know if it was Opus to Sonnet or Sonata Opus in the fall, and that really seemed to make it good enough now that a lot of people want to use it. Though I would say I also see ChatGPT Codex is also commonly used. But an interesting thing about this, I want to point out two things.
Starting point is 00:22:43 One, there's a lot of just chatbot discussion happening in these workflows. Remember, he talked about making a plan, iterating on the plan. That's all actually like chatbot interaction. So sort of related to using these tools to produce more code, these programmers have entered a more interactive way. They want to talk back and forth. It reminds me a lot of the research I did for the New Yorker about how students are using chatbot. to write paper, they find talking back and forth with the chatbot as they write is less straining.
Starting point is 00:23:15 So that's picking up here. But also notice this programmer is not really big on the multi-agentic approach, which is what you see most often told in the sort of breathless online articles and YouTube videos is this idea of, I have 20 agents working at the same time, and this agent checks this agent, and there's a supervising agent that looks at those agents, and then it reports over here to the hierarchy agent. And then that agent is on OpenClaw so that it can, it can send recommendations to my YouTube channel and to make sure that it pays that, you know,
Starting point is 00:23:44 these super complicated trees of different agents supervising other agents, you really aren't seen that, at least in my study here, it's, you're not seeing a ton of that in professional programmers. You tend to see it more in people who are like working on their own personal bespoke projects and find it really fun. But I don't see as much, and that's what we saw reflected here. All right, let me read you one other typical, uh, uh, excerpt here from a real professional professional programmer. I think this captures well another very common type of response,
Starting point is 00:24:16 which is a little bit more reticent, but still appreciating the power of these new tools. Let me read this. I'm a software developer working at a tech startup. Our use of AI varies by person at the company, but my use has skyrocketed starting in the fall of 2025. Oh, wait, that was the last one. I'm sorry. This is the new one. I don't want to just reread the last one. All right. I'm like a language model here, just sort of randomly hallucinating the same answer twice. No, no, here's the real second excerpt. I'm a staff software engineer at a tech startup. The AI models have made the easiest tasks even easier, scaffolding a solution, boilerplate code, replacing variables or moving and import. Repetitive tasks are good candidates. LLMs are also useful as a way to quickly
Starting point is 00:25:00 investigate the documentation of a tool or get a reminder on syntax for something I'm trying to do. But the easy stuff, the tasks that AI can do well, was never the hardest nor most time-consuming part of my job. When actively using these coding agents, I found that it generally slows me down. Using them introduced tasks I didn't have before, composing a prompt, checking the output, reprompt, manually refactor when it isn't quite right. It also slows down the code review process. I'm much more detailed than my reviews when I know a coworker used in LLM to generate some or all of the code. That's also a very common response as well. pointing out this idea, which I think is a fair criticism, that the people like our first excerpt,
Starting point is 00:25:41 which is doing most of their code generation with agentic AI, like this is saving so much time, they're noting the more reticent users are noticing, you're downplaying the huge amount of time that now surrounds. Yeah, you don't write the code yourself. That's faster. But now you have to do so much other work, all of this iteration with the model and the prompts and try the prompt again and work on your agent on markdown file and your skills harness and then all of the review on the other side and if it was produced with AI,
Starting point is 00:26:10 you really have to review it. And he's like, there's all of this other work that's surrounding this workflow, which is none of it's very fun. I mean, and this is taking a lot of time. Are we sure? Are we sure that this is actually producing the best code?
Starting point is 00:26:23 So there's sort of this tension going on in the computer programming world. Here's a takeaway from this. One, agentic coding tools, past a threshold of usefulness with the Claudecotex update in the fall that has made them much more heavily used. In my survey, something like 45% of the people I talk to
Starting point is 00:26:44 are now producing the majority of their code with an agenic tool such as Cloud Code. All right. Two, it's really unclear exactly what the best practices are for this are. There seems to be a spectrum of enthusiasm of the users of it in the space for sure on one end
Starting point is 00:27:02 there's way too much AI interaction going on this can't be the most efficient way to do it on the other end there's a lot of reticence the reality is going to fall somewhere in the middle
Starting point is 00:27:10 we don't yet know what the future computer programming looks like I think by the summer there's going to be some best practices they'll have some clever acronyms
Starting point is 00:27:19 to go with them there'll be some best practices about how best to use these there will be automatic code production I think we're going
Starting point is 00:27:27 to pull back a little bit on how much AI chatbot should be involved in review as well as planning. I think that's a little bit of just enthusiasm there. I do think a lot of code will still be generated, but we'll be better at where we deploy the code. I think there'll be more standardization about planning and architecture documents, et cetera, which will have a high overhead at first, but it'll allow us to deploy these tools better. I do not think, based on these interviews, that the hyper-multi-agent approach that we see most talked on the internet is going to become some sort of standard for serious programmers
Starting point is 00:27:56 in most places. And the vibe coding, like you see talked about a lot. Give me this app. And I come back a week later and it's done. That really is in the realm of like hobbyist and apps for personal apps for yourself or people who are doing experiments. None of the serious programmers I heard of so far are doing anything like that for the most part. All right. So there's a lot to be done here.
Starting point is 00:28:19 But what I'm trying to do is why it's reality check. I am not interested in breathless accounts of what's happening online because that's engagement hunting. I'm not interested in hearing sort of like non-technical reporters who have just heard a lot of those accounts and then are like, look, I don't know the details, but I think we can all agree that like there's not going to be programmers in the future. I think we've got to talk to real programmers. What is really going on?
Starting point is 00:28:45 Something is happening. It's more complicated than other people make it seem. Let's keep listing. I'll read you some more of these reports in weeks ahead. Let's figure out the old-fashioned way. Turn every page. Learn what's going on. What's working?
Starting point is 00:28:58 What's not? what's not, and let's try to figure out what's actually happening. I think we will. And we'll get on it, especially if you follow me here. All right, that's all the time I have for today. Remember, take AI seriously, but not necessarily everything you hear about it. I'll be back on Monday with the main episode and hope they'll do another one of these next Thursday. See you then.
