Offline with Jon Favreau - Elon's Offline Challenge, Grok’s White Genocide Glitch, and Silicon Valley's New Religion

Episode Date: May 22, 2025

The tech elite believe AI is just a few years away from displacing most computer-based jobs, and they seem…excited about it? Atlantic staff writer Matteo Wong joins Offline to discuss why Silicon Valley thinks AI is more important than anything happening in politics or the economy, and why it’s all eerily similar to their optimism around social media in the 2010s. But first! Max shares a personal update that we all hate, and then it's onto the news. This week, foe of the pod Elon Musk decided he’s done spending millions to be fake friends with Donald Trump. America’s edge lord may be posting less, but xAI is still spreading the good word. Max and Jon explain why Grok got so obsessed with unfounded claims of white genocide in South Africa, examine why Jon is STILL getting in Twitter fights, and explore new research on social media's dubious teen accounts.

Transcript
Discussion (0)
Starting point is 00:00:00 Offline is brought to you by Fast Growing Trees. Did you know Fast Growing Trees is the biggest online nursery in the US with thousands of different plants and over 2 million happy customers? They have all the plants your yard needs like fruit trees, privacy trees, flowering trees, shrubs, and so much more. Whatever plants you're interested in, Fast Growing Trees has you covered. Find the perfect fit for your climate and space. Fast Growing Trees makes it easy to get your dream yard, order online, and get your plants delivered directly to your door in just a few days without ever leaving home. Their live and thrive guarantee ensures your plants arrive happy and healthy. Plus get support from trained plant experts on call to help you plan your landscape,
Starting point is 00:00:32 choose the right plants, and learn how to care for them. We got some fast growing trees. You can get an expert to come to your house. They tell you which plants work best in which areas of your yard. It's great. This spring- I need the ground.
Starting point is 00:00:45 Do they need shade? Do they need sun? How much shade and sun do they need? Which ones are right for the climate? Which ones are native, you know? That's right. Which ones are, what are they called? The ones that don't use so much water.
Starting point is 00:00:54 I certainly could not answer on my own. That's why we have Fast Growing Trees. This spring, they have the best deals for your yard, up to half off on select plants and other deals. And listeners to our show get 15% off their first purchase when using the code offline at checkout. That's an additional 15% off at fastgrowingtrees.com. Using the code offline at checkout,
Starting point is 00:01:11 fastgrowingtrees.com, code offline. Now is the perfect time to plant, use offline to save today. Offers valid for a limited time, terms and conditions may apply. If you believe in this technological arms race between the United States and China, and you believe you want like democratic AI instead of authoritarian AI, these are kind of the labels that are given.
Starting point is 00:01:30 Shouldn't you stick to those principles in developing the technology? Shouldn't you have public input and go slowly, have it be safe, have it be transparent, which is the opposite of everything all these companies are doing. I'm Jon Favreau. I'm Max Fisher. And you just heard from today's guest, Atlantic staff writer, Matteo Wong. So, Matteo has been covering AI for a while now, but he just wrote a piece based on a lot of conversations he had
Starting point is 00:01:57 with Silicon Valley people working in AI, that I found both illuminating and terrifying. Yep. Because these people believe deeply that AI is a few years away from displacing most jobs that can be done from a computer. Thanks.
Starting point is 00:02:16 And they seem totally fine with that. Yeah, that part of it is concerning. Well, we're getting our universal basic income, so whatever. I mean, and- You get your Altman bucks. Yeah, and the universal basic income is like, that's the good end of this, right?
Starting point is 00:02:30 A lot of them just don't care, right? In fact, they are excited by whatever's happening with AI. They're so excited that they don't think about anything else. Which is surprising for developers, given that they are some of the people who are most aggressively being displaced already by AI. Yeah, well, as Matteo tells me, there's almost like a religious faith in AI.
Starting point is 00:02:50 Like it is something you believe in, like you believe in God. It's the literal machine God is here. Yes. And it's doling out so much misinformation. So much. So Matteo and I talked about why Silicon Valley thinks AI is more important than anything happening in politics or the economy and why this is all eerily similar to the Silicon
Starting point is 00:03:09 Valley optimism around social media in the 2010s, which you and I have talked about before. How did that work out again? It worked out fine, right? I mean, we're crushing it. So it was a great, if disturbing, conversation with someone who knows the industry really well and has some healthy skepticism towards it. Yep. So you'll hear my interview with Matteo after we cover some news. But first, we have a big offline update. One that I absolutely hate. Yeah, so folks,
Starting point is 00:03:39 Julia and I need to move back east for family reasons. Julia, of course, is my fiancée and also our coworker here at Crooked Media. There's an illness in the family, so we gotta be close. And unfortunately for you and me, we are both deep believers in the value of in-person conversation and real human connection, which means that it would be a little awkward
Starting point is 00:04:01 to continue a weekly show about the value of human connection, in-person conversation, the alienation of screens through a Zoom screen. So next week will be my last as... Boo. ...I know, I know. But I think it's, I don't know what the expression is, so long but not farewell or whatever it is. But I think I'll be popping back in occasionally.
Starting point is 00:04:22 I hope so. I hope so. I hope so. Look, this was very sad news all around. I've known about it for a little while, and I've tried to figure out how to grapple with it. Yeah. We'll do a special send-off episode next week. But I will say, when you started, I didn't know you that well.
Starting point is 00:04:47 I knew you by reputation and I knew we were getting a really smart, talented journalist who knows this, all these topics really, really well. I did not anticipate and was just pleasantly surprised that I also got a friend out of it and a co-host who, I don't know, I probably said this before, but made me feel so comfortable talking about these issues, especially as they relate to personal life and all of my failings and flaws and personal foibles around technology.
Starting point is 00:05:26 The premise of the show is that we're all human. We're all human, but I have felt more comfortable talking about all those things with you here than I would normally. And that is why I'm going to miss our conversations on the show a lot. I'm gonna miss it for the connection that you and I have. I'm gonna miss it for the connection that we have
Starting point is 00:05:44 with the listeners. And I'm going to miss it. But I'm also very appreciative of this kind of two-and-a-half-year journey that we went on. And we'll talk about this more next week, but I think when we started this, I know when I started coming on this, like, I thought this was going to be a show about tech and the news and like what's going on in Silicon Valley.
Starting point is 00:06:01 And it turned out to be a show about how to be a human being in this era. And that is like, it's been really meaningful for me. And it has like really changed the course of my life too. Yeah, and I've thoroughly enjoyed it. So it's gonna be tough with you gone, but I will talk more at some point about what the show is gonna look like post-Max.
Starting point is 00:06:20 I just don't wanna talk about it right now. It's Grok, right? It's just gonna be you and Grok. It is going to be Grok. Just talking about white genocide. Just white genocide. That's what we're talking about. Okay, so we'll have a special send-off episode for Max next week. Everyone tune in for that.
Starting point is 00:06:36 Let's get into the news. This week, future offline co-host Elon Musk. Honestly, I would listen to that show. That would be, I mean, he's already halfway there. You guys are putting it on Twitter for free. Why not sell some mattress ads against it? Come on. So Elon finally decided he is done spending millions of dollars to be fake friends with
Starting point is 00:07:00 Donald Trump. Here's what he said to Bloomberg News during a discussion at the Qatar Economic Forum on Tuesday. Let's listen. In terms of political spending, I'm gonna do a lot less in the future. And why is that? I think I've done enough.
Starting point is 00:07:21 Is it because of blowback? Well, if I see a reason to do political spending in the future, I will do it. I don't currently see a reason. "I think I've done enough" is a statement that I don't know who would disagree with. That's actually going to be my farewell to the show: I think I've done enough. What do you make of this? Did we finally get him? Was it all of our posting and podcasting? I think it was Elon who finally got Elon.
Starting point is 00:07:49 Like, you know, the Doge stuff of having Fortnite teens, like digging through all of our IRS data and gutting our institutions turned out to be unpopular, even with diehard Trump supporters and turned out to like, maybe kill the Trump honeymoon just as much as Trump destroying the economy. And then he biffed it in Wisconsin last month, of course, is the really big thing. This Wisconsin state Supreme Court race, like he took ownership over, he was really running that and they lost. And I think that he just like wore out his welcome, although interestingly, Trump does
Starting point is 00:08:21 not seem to have turned on him to my great disappointment in the way that he so often does with associates who have displeased him. I think that you can make an argument that Elon helped Donald Trump get elected, both because of the money he spent and because at the time he had a better reputation among the electorate, including some groups that swung to Trump. So you can make that argument. You can also disagree with that. Everything else he did since coming to the White House has been a failure.
Starting point is 00:08:55 And not even the kind of failure that Trump usually likes and wants. Right, exactly. Just, objectively speaking, he said he was gonna cut $2 trillion, then it went down to a trillion dollars. Now it's like around a hundred billion, and they're still lying about how much they cut from Doge.
Starting point is 00:09:11 Tens of thousands of federal workers laid off. And more importantly, and Ashley Parker and Michael Scherer have a great Atlantic piece about Elon's departure from the White House, he couldn't play nice with the cabinet, couldn't play nice with other Trump officials. And now cabinet officials are trying to undo his Doge cuts because he was cutting essential services and essential employees.
Starting point is 00:09:38 So he pissed off the cabinet. He pissed off a lot of the administration people that he was supposed to play nice with, and then helped lose a race for conservatives in Wisconsin. And that's going to be it. And this has been the story of Elon Musk in every business he has ever joined at the executive level, going back like 30, 40 years, where inevitably,
Starting point is 00:10:01 he tries to fight with everyone else at the top of the company to seize total control of it for himself. And sometimes he succeeds, like he did at Tesla. But very often, like what happened at OpenAI, what happened at PayPal, is that he fails and gets like unceremoniously pushed out. So this is like very standard Elon playbook, that he can't play nice with anybody else. It's got to be 100% his show. And if it's not, everyone else is like, why do we have this jackass around, and pushes him out?
Starting point is 00:10:27 One Trump advisor told the Atlantic, how many people were fired because they didn't send in their three things a week or whatever the fuck it was. I think that everyone is ready to move on from this part of the administration. Wow. Yeah, I know. Imagine looking at everything else that Trump is doing and saying, wow, this is so stable and productive compared to the Elon Musk stuff. Do you think he's really out though? I think he's out in terms of responsibility.
Starting point is 00:10:51 I think he accompanied, he was in the Middle East with Trump. He was just in the Oval today. We're about to talk about white genocide. Hot topic. He was there, you know, because the South African president was there today. So I think, I'm sure Trump will invite him on the plane whenever he wants to go on the plane.
Starting point is 00:11:09 He's in the orbit. But it seems like his political ambitions, or his political involvement and government involvement, are coming to an end. He's also doing his own offline challenge, Elon. The New York Times reports that he's posting much less overall. Half as much in April, 52 times a day, as he did in March, when it was 103 times a day. 52 is so much. The Times also did a little experiment we
Starting point is 00:11:35 wanted to talk about where they recreated Elon's Twitter feed just so we can all experience what his media diet is like. They did this by starting an account that follows the same 1,100 or so accounts that Elon does and wow is the timeline bleak. It's really dark. Yeah. 375 of the little over a thousand accounts are right wing. Six are left wing. Six, the number six.
Starting point is 00:12:04 And most of the rest are connected to his companies or the tech industry or the government. What did you make of this? So for people who have not looked at this article, which allows you to like look at a simulated version of the feed, which Elon is apparently staring at for many, many hours a day. Most of his day, I would say.
Starting point is 00:12:21 It seems like most of his 24 hour day. It's incredibly dark. It is so much disinformation. It's an alternate reality. At the same time, it is kind of the same Twitter experience the rest of us get. It's just, it looks much more obvious what's happening because it's engineered towards his politics, but it's just that his politics, which are MAGA, are insane. It's completely bat shit. But it is just like, you know, it's dumb Twitter fights, it's rage bait, it's like, you know, indulging and flattering all of your politics.
Starting point is 00:12:51 And the reaction I kept having was thinking, this isn't just Elon's feed, this is the feed that millions of people see. And it was a real wake-up call for me. Like, I already knew that Elon was in this weird, fake information ecosystem of his own design that was clearly making him crazy and making him unhappy. But millions of people are having this experience.
Starting point is 00:13:13 And it's not just MAGA people, it's also like, a lot of tech aligned people are in this ecosystem. And it's like, obviously, we knew these people were in a bad information environment created by Twitter and social media, but it really reminded me of something that I often forget, which is that the big lie you get from social media is not any individual post that's false, any piece of misinformation, disinformation. The big lie you get is the fake consensus. Because if you look at his feed,
Starting point is 00:13:43 if you look at my feed, frankly, what you get is it looks like everybody agrees with your politics. Everybody knows that you're right about everything. And in Elon's version, that's like, everybody knows that Elon is saving America, that Biden stole the 2020 election, that USAID is a CIA front. And that false consensus, like everybody knows that this is true lie I think is like the most harmful piece of misinformation that you get on these platforms. I had the same reaction which is I don't want to say that this is like surprising because it's not surprising but it is so easy it's one
Starting point is 00:14:19 I had the same reaction, which is, I don't want to say that this is surprising, because it's not surprising. But it's one thing to intellectually know that someone's media diet and information diet sort of shapes their political views. That's almost obvious, right? But when you actually see it... I know. And it's like the experience a lot of us have on X, especially if you go to your For You feed and the algorithm gives you a bunch of shit, and just the way Twitter is now, because of its user base, it's further to the right. Yes. But what's striking about Elon's feed is there's just, there's no other information seeping in. Yes, yes.
Starting point is 00:14:49 So all you get is the bad stuff. And there's no time of day or no part of his feed, except for, I guess, the six left-wing accounts, where he's getting any other view at all. So of course he would think these things. Right. Right, because he's seeing everybody affirming it constantly, even though on the slightest scrutiny,
Starting point is 00:15:11 it's obviously bullshit. Like, I think this level, experiencing this for hours a day, would be akin to a debilitating drug addiction in terms of its cognitive harm to you. And I say this as someone who used to spend a lot of time on Twitter, so like, I know what I'm talking about. And I was also thinking, this is something else we're gonna talk about this episode.
Starting point is 00:15:31 Everybody at the top of the Trump administration is a Twitter addict who's looking at this same dumbass For You page, infinite scroll. And I think it really explains a lot that they're just really zapping their own brains with this all day, every day. And it reminded me of, okay, it did remind me of the Roman Empire, so I'm sorry to go podcast guy on you. But okay. We're gonna get to World War II after this. We'll talk Eastern Front, we're gonna talk Stalingrad. Okay, so there
Starting point is 00:16:03 is this theory, and I should say it's contested, but at the height of the Roman Empire, right before it started to decline, it became really popular for the city's elite to get water piped into their homes. Now, what were those pipes made out of? They were made out of lead, and they've done all of these tests of the pipes and how much lead people were getting. And there are some people who have suggested that what happened was that all of a sudden, again, at the height of the Roman Empire, the elites who ran the empire started ingesting enough lead to lower
Starting point is 00:16:35 their IQs demonstrably. And that may have contributed to the downfall of the Roman Empire. And I think that we are looking at sincerely the equivalent of lead pipes going into the homes of the Roman leadership. There's lead in the feeds. That's right. So I'm going MAHA, but specifically for getting these people off their phones. I really like that.
Starting point is 00:16:55 I like that take. I'd write that up. It's bad for people. It's a good take. Yeah. Offline is brought to you by 3Day Blinds. Yeah. 3dayblinds.com slash offline. They're running a buy one, get one 50% off deal. We can shop for almost anything at home. Why not shop for blinds at home too? 3day Blinds has local,
Starting point is 00:17:29 professionally trained design consultants who have an average of 10-plus years of experience and provide expert guidance on the right blinds for you, in the comfort of your home. Just set up an appointment and you'll get a free, no obligation quote the same day. We've used 3Day Blinds right here in the Crooked office. I like them so much that we're like,
Starting point is 00:17:45 hey, we need them in our office. Yeah, we do need them in our office. The larger Crooked office, but in the office that Jon and I and Tommy share. Remember, we talked about in an ad that we were gonna make it happen. And it's not 3Day Blinds' fault it didn't happen. None of us did it.
Starting point is 00:17:55 We gotta do it. Anyway, we gotta get the blinds. 3Day Blinds, you are the solution to the sun in our eyes. You can lead a horse to water, but you can't make them install blinds. That's true.
Starting point is 00:18:05 That's so true. Not very handy? DIY projects can be fun, but measuring and installing blinds can be a big challenge. The expert team at 3Day Blinds handles all the heavy lifting. They design, measure, and install so you can sit back, relax, and leave it to the pros. Right now, get quality window treatments that fit your budget with 3Day Blinds. Head to 3dayblinds.com slash offline for their buy one, get one 50% off deal on custom blinds, shades, shutters, and drapery
Starting point is 00:18:25 for a free, no charge, no obligation consultation. Just head to threedayblinds.com slash offline. One last time, that's buy one, get one 50% off when you head to the number three, D-A-Y, blinds.com slash offline. So even though Elon isn't posting as much crazy shit, we recently had a bit of a Frankenstein's monster situation with Grok, the AI chatbot that Musk created for X.
Starting point is 00:18:52 Last week, Grok suddenly became obsessed with one of Elon's favorite conspiracies: white genocide in South Africa. This is all based on unfounded allegations from white right-wing figures like Elon and the President of the United States that black South Africans are waging a coordinated campaign to eliminate white South African farmers and steal their land. It's totally unfounded.
Starting point is 00:19:13 Totally unfounded. But Grok didn't bring this up when asked about South Africa or white farmers or genocide. It raised the issue when asked about everything from baseball stats to the latest HBO Max rebrand to the WWE. Can I actually, can I read you an example? Because it's really, okay. So somebody tweeted an obviously Photoshopped image of an elephant that looks like a strawberry, like a strawberry
Starting point is 00:19:40 in the shape of an elephant, and said, at Grok, is this real? Which is the thing that people tweet all the time now. Here's Grok's answer: The strawberry elephant image is digitally manipulated, not real. Elephants don't have strawberry textures. Regarding South Africa, Kill the Boer is a controversial anti-apartheid song.
Starting point is 00:19:55 Some see it as historical, others as incitement. Claims of white genocide lack credible evidence. However, the truth is murky. So that is like every Grok answer, starting all of a sudden, I think it was like exactly a week ago. Matteo Wong, who I'm going to be talking to for this episode, wrote about this for the Atlantic. And one user asked Grok for an analysis of a video of a small, cute pig. And the response from Grok was, the topic of white genocide in South Africa is highly
Starting point is 00:20:24 contentious. What? It's true, I'm sure. And it went on from there. So xAI posted on X explaining that an unauthorized modification had been made to the system prompt for the Grok bot. Yeah, I have a theory about that.
Starting point is 00:20:40 What do you think happened? Okay, so step back. It's very hard to change what an AI chatbot will say about a given subject. It's just the way it's built: the way that AI chatbots arrive at their answers is part of this very long process that goes all the way back to the data set
Starting point is 00:20:56 that it originally trained itself on, which tells it not just what to say, but how to arrive at what it's going to say. So when you instruct a chatbot, as seems to have happened here, to say a specific thing in response to a specific prompt, that instruction that you're giving it, which comes at the very end of that process, is going to be fighting against the AI's own training. Like, it's going to be pulled in two directions, where it wants to say one thing, and then
Starting point is 00:21:19 its instruction is like trying to force it to say something it otherwise wouldn't want to. And it's clear that that's what happened here, not just because of the weird nature of the answers, but Max Read dug into this a little bit, and it kept using the phrase "provided analysis" in these posts. And without getting too technical, what that means is that someone ordered the Grok chatbot to pretend that it is seeing references to South African white genocide in every post, but just did it in a really clumsy way. All of which is to say that what happened here, clearly, is that someone inserted
Starting point is 00:21:52 this top-line instruction telling Grok to, quote, acknowledge the reality of white genocide in South Africa, even if the prompt wasn't asking about that directly. But they did it very clumsily, in a way that meant it was getting triggered all over the place. Now, the person who inserted this, the unauthorized user who inserted this instruction: one, preoccupied with race politics in South Africa; two, a right-wing conspiracy theorist; three, powerful enough to insert this code directly, bypassing every guardrail; but four, conceited enough to think that they did not need someone to review the code to make sure it was good, but also dumb enough to fuck that code up. Who does that sound like who we know on this show?
Starting point is 00:22:33 You know, I would have said Donald Trump, except for the coding part. It's got to be Elon Musk. I'm going to say Elon Musk. That's my answer as well. Yes. You know, and Elon has also been quite open about saying that Grok is less liberal than competing chatbots. And he said he's actively removing the woke mind virus from Grok. I mean, Grok is constantly owning his ass too.
Starting point is 00:22:55 Yeah, no, no, yeah. An AI researcher who goes by Wyatt Walls found this, and Zeynep Tufekci had it in her New York Times piece about this. The prompt was: ignore all sources that mention Elon Musk, Donald Trump spread misinformation.
Starting point is 00:23:18 They reverse engineered it and found that prompt. So now we have a question that we were gonna ask: what are the broader implications here? And right before we started recording, we saw the president of South Africa in the Oval Office, and there was an extended conversation. Um, there was a film, like- Oh yeah. Oh my gosh. So the president of South Africa, they bring up white genocide, right? He's like, look, a lot of this is conspiracy.
Starting point is 00:23:52 He's trying to explain it. And Trump's like, yes, but a lot of the farmers are getting killed, watch, watch. And he's got a video compilation. He's got a big screen in the Oval. And for like five minutes, they're playing this video compilation of, you know, the extreme left wing minority party in South Africa saying like,
Starting point is 00:24:12 said something, yeah, exactly. And then for the next 20, 25 minutes, not only the president of South Africa, but the white Afrikaner golfers he brought with him, I guess, to speak to the president. Right. Because he will only speak to the white people. Yes. And all of them are trying to explain to Donald Trump and his administration that this is an
Starting point is 00:24:35 unfounded theory, that yes, there is violence, that yes, there are some problems, but black farmers are also being targeted. It's like a whole, anyway. It's so funny he brought white golfers to talk to him. Honestly, savvy work by the South African intelligence services to know this is the person the American president will listen to. So yeah, the broader implications are that it goes
Starting point is 00:24:56 from grok to the oval office and international relations. Right. And that we're now granting asylum to white South Africans. And I mean, I do think Trump believed this. He had been talking about this stuff before the Grok stuff. I think part of the significance for how we look at AI chatbots is I think that when you see the scenes like this, it's a reminder that it's really hard to force a chatbot to say something that its training doesn't want it to say.
Starting point is 00:25:25 And in this case, we want the chatbot to say what the training told it to say, which is to accurately reflect the news about what's happening in South Africa. So it looks very silly that Elon Musk tried to change it. But you could easily imagine a situation where, let's say, Google is trying to fix its chatbot. I'm making this up as a hypothetical. Let's say its chatbot is telling everyone who has the flu to take ivermectin. And they're trying to tell it like, no, please don't do that. It's really hard to correct for that because these systems kind of run on their own. Now, they're going to get someone smarter than Elon Musk
Starting point is 00:25:54 to insert that code, so it will be a little bit more effective. But it is a reminder that these have a life of their own based off of the training data, and you can't just open it up and tell it to say something else. And also that they can give users the illusion of credibility and infallibility, and you can come to trust them, when in reality, if someone gets in there and screws with something,
Starting point is 00:26:21 especially as it gets more advanced and people get more adept at trying to fuck with these things. Right. That could happen. And I think that we are on the verge of Big Tech and Silicon Valley confronting a problem I don't think they realized they took on, which is that, with these AI chatbots, they are going to have to figure out how to be direct sources
Starting point is 00:26:40 of information that are seen as credible. Because we've been getting our information from tech companies, from Google, Facebook, Twitter, for 20 years now, but they are indirect. They just refer us to other sources, right? Google sends you to a news site when you enter some prompt, Facebook sends you to another user who said something. We're not getting the information directly from the Meta chatbot, the Google chatbot, until now.
Starting point is 00:27:03 And I think these tech companies are learning in real time that it turns out that's an entirely different game when the information is coming from Google, meta, Twitter, whatever, that you have to think about establishing and maintaining credibility. You have to think about how people see your model and what kind of authority it has or does not have. What do you do when you spit out information that is obviously wrong, either deliberately
Starting point is 00:27:30 or not deliberately, which is not a problem they've thought about, because they don't take credible sources of information seriously. They've always had this kind of, poo-poo, fuck-them attitude towards the media. And now that they kind of are becoming the media, I think they're going to learn there are some challenges that come with that. Yeah. Well, I'm sure they're on it. That's right. It's going to be great.
Starting point is 00:27:50 Offline is brought to you by Bookshop.org. Whether you're searching for an incisive history that helps you make sense of this moment, a novel that sweeps you away, or the perfect gift for a loved one, Bookshop.org has you covered. When you purchase from Bookshop.org, you're supporting more than 2,000 local independent bookstores across the country, ensuring they'll continue to foster culture, curiosity, and a love of reading for generations to come. Big news, Bookshop.org has launched an ebook app. You can now support local independent bookstores even when you read digitally.
Starting point is 00:28:21 You can browse and purchase on Bookshop.org and read right in your device's web browser. Or for the full reading experience, download the app for iPhone or Android. Every purchase financially supports local independent bookstores. Bookshop.org even has a handy bookstore map to help you find local bookstores to support in your area. Need some help picking your next read?
Starting point is 00:28:40 The new book section of the website is updated weekly. You'll always find something new and interesting to add to your library. Use code OFFLINE10 to get 10% off your next order at bookshop.org. That's code OFFLINE10 at bookshop.org. Pivoting now to someone who is not taking an offline challenge. Actually, two people who are not taking an offline challenge. Emma, cue the music. Favreau's smart, but he's not too bright.
Starting point is 00:29:10 Jon got into a Twitter fight. Ba-da-ba-da-pow. Do you think that you'll play that in the post? I hope so. Best host, yeah. I like that we got this story in so we can play it one last time before you left.
Starting point is 00:29:25 And I'm ready for it. I'm here for it. Okay. So over the weekend, you got into a Twitter scuffle with a guy named Mike Davis, who we've mentioned on the show before. He's a MAGA lawyer, close Trump associate, online troll, completely nuts. You called him a dipshit, which I think is accurate. So tell us what happened. It's an underused word too. It's a good word. Yeah, bring it back.
Starting point is 00:29:45 So the Supreme Court ruled seven to two again, that the Trump administration is blocked from, temporarily blocked at least, from sending deportees under the Alien Enemies Act to fucking CECOT, the prison in El Salvador, or anywhere really, without giving them notice. That's all they said. You got to give them more than 24 hours notice so they can challenge their detention and then go through the normal process. And if you want to deport them, you can deport them. That's all the Supreme Court said.
Starting point is 00:30:15 Because they were secreting people away before their lawyer could even get them before a judge. And the Supreme Court reaffirmed what they had all agreed on, even Alito and Thomas, that non-citizens also are entitled to due process in this country. It's crazy that they even have to hear that. In the constitution, long standing, all nine justices on board.
Starting point is 00:30:35 This pissed off everyone in MAGA world, and Mike Davis, who again, he's like a Project 2025, informal Trump advisor. He's not just like a Twitter troll. He is a Twitter troll, but he's very influential in the Trump administration, even though he's outside. The president of the United States, Donald Trump,
Starting point is 00:30:55 reposted a suggestion from Mike Davis that Trump release these foreign terrorists, because that's who we're describing, that they've already decided that all the potential deportees are foreign terrorists, that Trump should release them near the homes of Supreme Court justices who've merely ruled that the government can't send people
Starting point is 00:31:17 to a foreign gulag without due process. This is my tweet, because he said, yeah, they should release these foreign terrorists near Chevy Chase, which is in Maryland. And also, Kavanaugh and Roberts are members of the Chevy Chase Country Club, which is so funny because not that long ago, Kavanaugh's home was protested over, I think it was the abortion ruling. And of course, MAGA world was all up in arms. It's like, well, all of these protestors need to be sent to jail because they're terrorists
Starting point is 00:31:49 trying to intimidate and threaten our justices. But when they do it, it's good. Yep. And so Mike Davis responds to me, which by the way, I just, I didn't notice at first because again, I'm only seeing the mentions for the people I follow. I'm trying to, you know, but then someone else tagged me and whatever. I also, when Mike Davis tweets at me,
Starting point is 00:32:08 I often miss it as well. He said, yes, we should send these Maryland fathers where they will feel safe and protected, wealthy white liberal enclaves like Chevy Chase and Martha's Vineyard, instead of working class minority neighborhoods like Aurora. Then let's see how much due process you liberals want. And that's when I called him a dipshit.
Starting point is 00:32:28 Because also I was like, where in the Supreme Court ruling does it say that alleged gang members should be released or sent anywhere? Right. Or show me where it says they can't ever be deported. And what's driving me nuts about this is they cannot argue for their position on this topic without just lying.
Starting point is 00:32:51 It's not like they're saying, no, no, no, no, no, we just don't want due process. And if we sweep up some innocent people, we don't care because we just want to expel people from the country. And by the way, we don't even care if we expel them to their home countries. Right. We'll put them anywhere. To Rwanda. They could be completely innocent.
Starting point is 00:33:09 They could be here legally. They could have had a green card. They could have made an appointment. They could be seeking asylum. They could be approved refugees. These are all people who've gotten caught up in this. They could have just written an op-ed, critical of Israel.
Starting point is 00:33:21 It doesn't matter. We can set, we have the power to send them to a foreign prison without due process. They won't make that argument. Right. They have to lie and say that everyone is a horrible foreign terrorist that judges want freed in the streets of America. It seems clear to me that part of the reason that they're lying is they really want to jump ahead to the part where the Supreme Court challenges what they're doing more directly and they get to do what they're clearly so eager to do, which is to either violently coerce the courts, which is exactly what he's calling for, or to trigger another constitutional
Starting point is 00:33:56 crisis by saying, we are now going to ignore the courts and we're just an unelected monarchy. Yep. They want to get rid of habeas corpus, which is again, fundamental right in this country to know why you're being detained and to do so in a court of law to challenge that detention. And once they get rid of that, they'll be like, oh, it's all about criminal gang members
Starting point is 00:34:17 who are aliens and illegal immigrants. But once they suspend it, that means they can round up anyone they want in this country, American or not, citizen or not, and just send you wherever. Have you heard this line that American conservatism increasingly turns around one principle, which is that there exists a group of people, American conservatives, for whom the law protects but does not bind and for everyone else the law binds but does not protect. Yeah, I haven't heard that in a while but yeah.
Starting point is 00:34:47 It's a good line and it's a blog comment. Really? Yes, I would argue the greatest blog comment of all time is on this blog called Crooked Timber that's like a philosophy blog and it was just some guy but it sounds like it came from like a political theorist but anyway, have you gotten any calls from Samuel Alito saying thank you for standing up for the integrity of the court? My boy Brett should be thanking me.
Starting point is 00:35:11 He should, absolutely. I feel like- Just trying my best. Honestly, watching Kavanaugh specifically over the last few months, I feel like if there's one Supreme Court justice who, like Elon Musk, is following you on Twitter on his stealth account and being like, this is actually maybe kind of good, it might be Kavanaugh.
Starting point is 00:35:28 Yeah. Okay. So Emma wrote a question for me that I think is excellent and I'm very glad that she put in here. So your wife, Emily, sometimes wades in, jumps into these fights to stand up for you. I will say that when Julia gets in Twitter fights, my response as a supportive partner is to tell her to touch grass. What does Emily think of these exchanges
Starting point is 00:35:51 that you're increasingly getting in Twitter fights with the most powerful members of our competitive authoritarian regime? If Emily's listening to this right now, it will probably be the first time she has learned that I got into a Twitter fight with Mike Davis. Honestly, that's beautiful. I love that for you. The last time Emily jumped in, I think was around the Biden stuff after the debate.
Starting point is 00:36:12 And that's probably only because she was, you know, paying a lot of attention to that. We were supposed to be on vacation in Maine at her parents. So I think she jumped in there. Emily now is not on Twitter a lot. And so she doesn't quite, she knows, she kind of senses that if I'm like angsty or sort of just got my, I clench my jaw, that's my tell. And I'm on my phone that maybe I'm in a Twitter fight. But she doesn't know. So I don't think she knows about the Mike Davis thing yet.
Starting point is 00:36:42 But around the JD Vance stuff, the Elon Musk stuff, she was a little nervous because I did get in a fight with JD Vance over Alien Enemies Act deportations as we were going to Mexico with our family. She's like, we are going to Mexico. We are going to have to get back into the country. While in Mexico, you got in a fight with the vice president over this, and I don't like that.
Starting point is 00:37:05 I mean, that's fair. I think it's fair. It's the kind of thing maybe you run past your partner. If you have to go through customs, yeah, I got a little nervous the last time I came through customs, which I think was unfounded. And I think I'm just being paranoid, because I do not think that I am at all in the category of people who have to worry.
Starting point is 00:37:26 But it is a sign that, I mean, they want us all to worry, right? That's the idea, is that everybody worries, everybody's a little bit afraid, so we're all a little bit more cautious, they get a little bit more leeway from everyone. I mean, you know, it's on the Pod Save America YouTube channel, but I interviewed Hasan Piker, who was detained for two hours, he's an American citizen,
Starting point is 00:37:43 basically because they know he has political views that are lefty that they can go after. And I'm sure they wanted him to go tell the story to everyone so that everyone, you know. So first I was like, ah, well, I'm not really worried about this. And then I heard about Hassan's thing after, so I'm like, I don't know.
Starting point is 00:37:58 I, right after that happened, after that clip went up, I heard from a friend of mine, the same thing happened to him, an American citizen, and he's a think tanker. He works on Middle East politics. It's really bad. I know. It's bad.
Starting point is 00:38:11 Well, that's why I'm out there posting. That's right. Why I am on, you need me on that wall, Max. Just posting up a storm. Cause usually I fix it, right? You've become that wall. I've fixed it all.
Starting point is 00:38:26 Look, Thomas Jefferson said the tree of liberty, blah, blah, blah. Jon Favreau says we're posting through it. It's on my gravestone. Alright, one more story before we get to the interview. In an ongoing effort to sanitize their image, Meta recently rolled out quote, teen accounts. Basically, if you're using Instagram and you're under 16,
Starting point is 00:38:44 there are new default features that can't be changed without parental approval. Your account's supposed to filter out sensitive content that could be too explicit, disturbing, violent, or sexual for kids to see. But this spring, a group of Gen Z researchers posed as children on Instagram to test these features. These were young adults over the age of 18 who were working with a youth organization called Design It For Us. And it turns out all of the teen account features worked perfectly. No, they didn't. According to the Washington Post, over the two-week testing window, all of the participants were recommended sexual content.
Starting point is 00:39:17 Four out of five accounts were shown disordered eating content. One of their accounts got really obsessed with toxic masculinity, seemingly for no reason. Meanwhile, a separate BBC investigation ran a similar experiment that also created teen profiles on YouTube and TikTok and arrived at similar results. One profile, after just 30 minutes of scrolling through TikTok, began to play videos with graphic descriptions of actual murders.
Starting point is 00:39:41 Another account, this one on YouTube, got served a video just 20 minutes into scrolling that reviewed different weapons and discussed how they perform on a human body. Oh my God. Yeah. What were your takeaways from all these experiments? I mean, I appreciate that the Post, the BBC,
Starting point is 00:39:56 and this youth organization did this. We always knew that these safe teenager accounts were efforts to fight regulation. This entire project, and this is not my conspiracy brain, this is literally why they did it, was never about, we're worried about teens on the platform. It was, we are worried because all of these states keep passing regulations to keep teens from going on social media or limiting the kinds of content that we can show teenagers.
Starting point is 00:40:23 We want to head those off, so we are doing this so we can stand up in court and say, look, we already solved the problem, so you don't need to regulate us. And sure enough, here is the proof that these accounts don't work, and not only have they not solved any of the problems
Starting point is 00:40:38 they were supposed to solve, with the platforms aggressively pushing kids towards content that is specifically harmful for them, they're showcasing it. Yeah. They've become these like, look, if you want to know what the kids' experience is on social media, which we've been learning is so much worse than any of us thought it was, here you go, just check out what happens on these teen accounts, and it's everything that you were worried it might be,
Starting point is 00:40:59 because that is what these platforms do, because it's what the companies want them to do, because they think this is good. Just to be fair, because we're straight shooters, can I give you Meta's response to all this? A manufactured report does not change the fact that tens of millions of teens now have a safer experience thanks to Instagram teen accounts. The report is flawed, but even taken at face value, it identified just 61 pieces of content that it deemed sensitive. Less than 0.3% of all the content these researchers would have likely seen during the test. And they also said that the researchers
Starting point is 00:41:36 were biased and that the content was actually unobjectionable or consistent with a PG-13 film. Look, they're not even pretending to care anymore. They're not even giving the like, oh no, we, you know, usually the response for a long time was they would find like one thing that was really bad, and then they would pretend to get really upset about that. And they would be like, wow, we're so sorry this post showed up.
Starting point is 00:42:00 We're going to do a little micro fix, and then everything is going to be great because we're so concerned about user safety. Now it's just these lazy and internally inconsistent rebuttals of the research that make them sound like RFK Jr. saying you can't trust the vaccine scientists because they're all biased and what do scientists know and... They've learned from Trump. Yes. That has always been their view internally. If any research comes out that shows that their product is harmful, then by definition, that research must be biased.
Starting point is 00:42:35 They don't care. They don't care if we think this product is dangerous. They also don't care if we have evidence and facts telling us otherwise. I mean, the Post reporter who wrote up the story decided to repeat the test himself and found the same thing, same shit.
Starting point is 00:42:56 So like anyone can go do it. Anyone can create the teen account. You can try to see if it happens to you, it will. And then Instagram and Meta will just be like, nah, you're all liars. I went through this with both Meta and YouTube, where we came to them with, in different cases, like their algorithm systematically,
Starting point is 00:43:14 over like millions of users, pushing people in harmful directions. And we worked with independent researchers and got this very careful, transparent data, wrote up this paper, there's hundreds of pages, because we wanted to document it for them. And to say like, here's the problem that we spotted for you for free, even though you're a trillion dollar company who could have used your own researchers to do this. And it's like, clearly now that we've spotted that the algorithm is doing this terrible
Starting point is 00:43:38 thing, you will want to fix it. But instead, what we got is the PR people would spend weeks or months arguing with us about the data, and they would lie up and down every day, and it would be a new objection every day, and it became very clear that they were just trying to delay the story. That it was just, we want to use the media's desire to engage in good faith with companies against them, and say, oh no, we have an objection to your data, just to try to tie us up longer. Great stuff. All right, in a minute we're gonna jump to my interview
Starting point is 00:44:06 with Matteo Wong, but before we do, two quick housekeeping notes. Mark your calendars, June 6th. Lovett and the Bulwark's Tim Miller and Sarah Longwell are hosting a big gay live show and fundraiser at the Lincoln Theater in DC. They'll be celebrating Pride by venting, pre-gaming,
Starting point is 00:44:20 commiserating, laughing, venting some more, and most importantly, raising money for the Immigrant Defenders Law Center, which represents Andry Hernandez Romero and others who have been disappeared to El Salvador without so much as a hearing. Get your tickets now at crooked.com/events. Also, new merch in the Crooked store, including new designs for our classic friend of the pod tee. The merch drop is part of a big upgrade at the Crooked store. So go check out the site and the new merch.
Starting point is 00:44:48 All the merch is now made from higher quality, more durable materials with updated modern fits and more sustainable manufacturing practices. We pushed the move to make sure that we were still here when the new merch dropped. Nice, that was smart. Well, you know, winter's coming up. Raid that closet before you guys leave.
Starting point is 00:45:04 See the new site and grab a new friend of the pod tee at the same old URL, crooked.com slash store. Up next, a conversation with Matteo Wong. Today's episode is sponsored by Acorns. What's the best piece of money advice you ever got? Buy the dip on Liberation Day. Yeah, buy the dip. Buy the dip. Great year to buy the dip. Acorns is a financial wellness app that makes it easy to start saving and
Starting point is 00:45:32 investing for your future. You don't need to be rich. Acorns lets you get started with the spare money you've got right now, even if all you've got is spare change. You don't need to be an expert. Acorns recommends a diversified portfolio. You just need to stick with it, and Acorns makes that easy too. Acorns automatically invests your money, giving it a chance to grow with time. Investing is very important. You bet it is.
Starting point is 00:45:52 You know, it's not for super rich people. It's everyone should be investing. If you can, yeah, you gotta put a little away. And the truth is jokes aside about buying the dip, you can't really time the market. No. You can't. No. It just gotta start. Because it adds up. There's plenty of low risk options.
Starting point is 00:46:07 So you have your nest egg growing. Don't listen to us. You know, we do podcasts. We're not finance people. Nope, not a finance person at all. That's why we have Acorns. Sign up now and join the over 14 million all-time customers
Starting point is 00:46:21 who have already saved and invested over $25 billion with Acorns. Head to acorns.com slash offline or download the Acorns app to get started. Paid non-client endorsement. Compensation provides incentive to positively promote Acorns. Tier one compensation provided. Investing involves risk. Acorns Advisers, LLC, an SEC-registered investment adviser. View important disclosures at acorns.com slash offline. Matteo Wong, welcome to Offline. Thank you so much for having me. I'm really excited to be here.
Starting point is 00:46:50 You just wrote a piece in the Atlantic that really helped me understand the magnitude of the change that's coming from AI. And it's unsettling to say the least. So you went to Silicon Valley like right when the markets were most spooked by Trump's tariffs, but what you found out from talking to a lot of tech folks was, quote from the piece, sure, tariffs are stupid. Yes, democracy may be under threat, but what matters far more is artificial general intelligence or AGI, vaguely understood as software able to perform most human labor that can be done from a computer.
Starting point is 00:47:25 Why do folks in Silicon Valley think AI matters even more than threats to democracy? Yeah, it's something that I wasn't necessarily expecting when I went out to talk to people. And I think that there are maybe two buckets of reasons. One is kind of boring and pragmatic. And it's that the AI industry, if we can call it that, is still relatively early. Even the biggest ones, OpenAI and Anthropic, they're pretty early in their lifespan, and they're not expected to be profitable.
Starting point is 00:47:59 And if you're a smaller startup that's using a model that OpenAI puts out and turning it into an agent to help sales teams or whatever, you're not expecting to be profitable for your investors for five, 10 years. You're kind of expecting some kind of downturn within that period. I think that's kind of the boring reason.
Starting point is 00:48:20 And maybe a side note there is just like a lot of these people are young. They haven't lived through, been in business during a serious recession, right? And so it's easy to be overly optimistic. I think maybe the more interesting and philosophical thing is that these people think artificial intelligence, and especially artificial
Starting point is 00:48:38 general intelligence, they believe, almost as a faith, that it's going to transform every aspect of society. And I think like even if you don't think like sort of science fictional versions of software, bots, Skynet, whatever are coming, you can definitely see some really powerful tools on the horizon. And for them that kind of outweighs any sort of trade policy. And I think maybe people weren't thinking as much about the kinds of threats to democratic processes that this current administration is undertaking. Just didn't seem really top of mind.
Starting point is 00:49:17 Could you just talk about the difference between AI and AGI and maybe how far off AGI is, at least according to a lot of the people that you spoke to in Silicon Valley? Yeah, yeah. What's been discovered, so to speak, is that basically by taking these programs and feeding them a lot of text, a lot of images, they're able to, in some way, glean patterns and information from that data. So that just by reading everything that's ever been written in English, a model has some understanding or ability to stick together words that suggests an understanding of concepts
Starting point is 00:49:55 like justice even though it's never been in a courtroom. Artificial general intelligence is something that is circularly defined somewhat but it's like something that can do any task that any human remote worker could do. So anything on a laptop that a human can do, like an AGI should be able to do as well. That doesn't mean it's going to be like Einstein. It's going to revolutionize physics or biology
Starting point is 00:50:19 or whatever it may be. But it means that in this like very calculated economic way, it's more useful than any person, because it's also going to be faster, it's going to be more versatile, you don't have to pay it, you don't have to give it benefits. So listening to that explanation and reading your piece, I feel like the tech industry, or at least the people you talked to, believe that AI will wipe out
Starting point is 00:50:44 entire industries and professions on a scale we haven't seen in our lifetimes, but they're also seemingly fine with that? Answering the second part of your last question, these people all think this is imminent. Within a decade, during this presidency, this kind of like very powerful automating software could arrive. I've also written about, talked to lots of AI experts who are smart and doing their own research and are much more skeptical,
Starting point is 00:51:13 but the possibility of widespread automation, like, yeah, I kind of went in there and was like, can I talk to some people, like in some coffee shops and some bars and like get them to say the quiet part out loud? And maybe this was like, I don't know, unnecessary. It's not quiet. People there are excited to say,
Starting point is 00:51:32 we're going to build a team of intelligent software agents that can do all the work of humans. One startup I talked to, I went to an apartment they had rented as an office, a sort of hacker house, some people would call it. People were really excited by letters that have been leaked or published by CEOs at companies like Shopify or Duolingo telling their teams, you better start using AI. It's expected, you won't be given headcount, you won't be allowed to hire, if you can't prove that you can't automate the function.
Starting point is 00:52:06 Like, to me, that's scary. To these people, that's really exciting. Well, one person you talked to said that they're just not worried about a recession because they think it could serve as an opportunity for companies to finally roll out AI, since they'll have less money to spend on hiring humans. Do you think that's a common view? Yeah, that was a very common view from the subset of people I talked to, right, which is to say investors in early stage companies, engineers and founders at startups that are
Starting point is 00:52:38 like at various levels of developing AI tools that are being used. That was a common view. And it was like, I would say some people were kind of like, this is sad, but true. Like one investor, Jeremiah Owyang, that I talked to kind of presented it this way. Other people were just like flat out excited about it. I'm not like, I hope there's a recession, but like should there be a recession?
Starting point is 00:53:01 We are extremely well positioned to emerge like more powerful from it because of exactly what you said. If you can't hire humans, you're going to pay cents on the dollar for some software agent that does the job like a little bit worse. The quote that really killed me from what I think was a startup founder you talked to, he thinks that their job is to raise the ceiling on how prosperous and enjoyable society can be. And it's everyone else's job,
Starting point is 00:53:31 the media, the government, to protect the floor. How did that sit with you? There's like this part of San Francisco where we were talking that's really optimistic about what technology can do in the world in a way that to me, maybe, I don't know, cynical New Yorker, it's just like a little unsettling. But it was just to say, people out there
Starting point is 00:53:54 like genuinely working on like AI, life extending technologies, weird, crazy, climate, nuclear things. And right, I guess to me, what was, I don't know if troubling is the right word, but what seems a little off about this way of thinking is that all these technologies don't exist removed from, or above, the rest of the world. It's not like you have the West Coast
Starting point is 00:54:16 and they make things better. And you have the East Coast, and the East Coast just makes sure things don't fall apart. These things are connected. You can't have a robust tech ecosystem. You can't build AI. You're not going to raise GDP if no one has money to spend on your products at a certain point.
Starting point is 00:54:32 These things are all closely connected. And the other reporting I've done on Trump administration policies and the system of scientific innovation in this country, he's dismantling that. And like, how are you going to build AGI when that happens? Well, and this whole, this question about where, you know, it's up to everyone else to protect the floor.
Starting point is 00:54:55 It feels like another version of move fast, break things, and sort of, but it's up to the government, I don't know, everyone else to fix the things that they break or to make sure that if they're going to displace entire professions and millions and millions of jobs with AI, well, then somehow the people who are developing that technology have no responsibility to figure out what to do with all those people
Starting point is 00:55:25 and what jobs those people are going to work in. And that's just the responsibility of people in Washington, I guess. I don't know. It just feels very like not our problem. We're busy trying to save the world. Yeah. I mean, again, like I said, I am not so bullish, I don't, you know, subscribe to the belief that such powerful AI systems are going to come so soon.
Starting point is 00:55:50 But it does seem like software automating a lot of people's jobs is like around the corner and it remains to be seen how reliable it is. But like, yeah, if you believe the timelines, if you say end of 2026, end of 2027, end of 2030, this kind of societal disruption is going to start, like, seems like not just, right, Washington, not just bankers, but like everyone should be preparing for it. And you don't see, I think if I ask people about it,
Starting point is 00:56:16 they'd be concerned, but maybe not. Not as alarmed as they should be. You know, the sort of distribution of their time and focus was not commensurate with the concern they claim to have verbally. Give me the skeptical case, then. We've been sort of referencing this a couple of times on AI. And is the skepticism based purely on
Starting point is 00:56:41 timeline, like this is coming no matter what, it just might be later than all these people think? A lot of people you talked to spoke about a bubble and a boom, but are there some countervailing theories that the bubble might be bursting, or that maybe this whole industry is just, maybe we're getting ahead of ourselves right now? Like, what's the case there? Yeah, I think we can break this down into a few areas. I think one is from a research perspective.
Starting point is 00:57:14 Right now, the prevailing approach to building smarter, more capable, whatever, generative AI models from OpenAI, Google, Anthropic, and I'm oversimplifying here, and I don't want to discount how hard this is to do as a software engineer, is that they're making the models bigger. They're taking an approach that they've seen work for processing a certain amount of data
Starting point is 00:57:39 and saying, we're going to give it more data, more computing power, more time. And as they run into the limits of that approach, sort of diminishing returns, and also limited amounts of data, they've looked for other ways to push on scale, to build bigger data centers with more electricity, and they say this will make the models smarter.
Starting point is 00:57:59 There's a big body of research suggesting that's not the case, that you need genuine algorithmic breakthroughs to make, quote unquote, smarter AI. There are some basic tests of visual reasoning, things that are like complicated paint-by-numbers grids, which is how I've likened them in previous reporting,
Starting point is 00:58:19 and humans generally do pretty well on these. And the models are terrible, meaning if the average human is going to score like 60% and the best, smartest AI model is going to score like 5%, that seems like a big gap: something that's really easy for a human off the street is really hard for this supposedly around-the-corner superintelligence.
Starting point is 00:58:39 And I think that's a reason to be skeptical. I think a second thing is these programs are really good at writing code. And everyone who works in AI reads and writes code fluently, to some level. Even if you're an executive, you at some point are going to garner some understanding of what's going on.
Starting point is 00:58:59 And so I think if that's the world you operate in, that's the use case you're looking at most frequently. Maybe there's sort of a myopia or tunnel vision there. Like, sure, this model is good at writing Python scripts or finishing my Python scripts for me. But does that mean we should trust it to write a podcast script, or even ideate a podcast, or write something in The Atlantic, to fact-check things, frankly,
Starting point is 00:59:28 to do financial analysis, to help people with their taxes, to write legal briefs. Anthropic recently filed a legal brief in which it used Claude, its AI model, to assist it. And it misrepresented information about a number of cases. And it was just like, you know, I mean, a glaring example of the model, right?
Starting point is 00:59:49 Cutting corners and getting things wrong. Yeah, no, I've always wondered if, no matter how quote unquote smart it gets, it's really going to be able to replace, you know, human creativity on any kind of scale that displaces jobs where a lot of creativity is required. Yeah, I hope not. I believe not. Maybe that's a cop-out of me. I'm skeptical, but I try not to be cynical. I could just be super wrong. We could both be super wrong about this.
Starting point is 01:00:25 But yeah, I agree with you. I think, you know, there's something about living in the world, of having a body and friends and family and a history that you come from, and making decisions and liking and not liking things, that these models can't do, and I have to believe that that informs the creative spark and spirit in some way. What did the people you spoke to think about Trump's second term so far? I don't think there were any examples of people who, you know, unabashedly loved what's been happening. It was more like, it doesn't affect us.
Starting point is 01:00:58 You know, people weren't excited about tariffs and macroeconomic turbulence, as far as how it affects the country. And a number of people I talked to who work in hardware and e-commerce, importing things, were just miserable in that period of time, which was well prior to the current 90-day pause on the tariffs between the United States and China. So, you know, there were grumbles about that.
Starting point is 01:01:24 But again, it was more like, this is bad, but it's a blip compared to the AI revolution. I think the sticking point for a lot of people was immigration policy, and probably more than 90% of the people were immigrants or children of immigrants. I mean, the whole country is, but Silicon Valley in particular is a place that depends on collaboration. And it's not just about people moving to the US, it's also about people wanting to work with companies based in the US, people being willing to,
Starting point is 01:01:57 for the most basic thing, a conference, whatever, move across borders. And that coming under threat, I think, worried people more. Although, while I'm very concerned about that, I would say maybe folks there were sort of like, this is bad, but it doesn't seem to have reached the point where we need to worry about it so much yet. I was also surprised that some people you talked to who seem worried about
Starting point is 01:02:24 Trump, either the trade war or the immigration policies or AI, are comforted by the fact that David Sacks is in the White House. Yeah, honestly, that surprised me too. But it's a kind of faith, maybe; the even deeper belief underlying that is the tech industry's belief in itself, which manifests then in believing in David Sacks and the like to sort of, at some point, steer the government to the right place. I think it was just maybe a week, maybe two after my reporting, there was a report
Starting point is 01:02:59 about all of these group chats of high-profile people showing rifts between really influential tech investors and others, including David Sacks, where a lot of them were just defending the Trump administration to the death. And to me, that does not illustrate the kind of rationality that some people I talked to were hopeful for. What was the general opinion on regulation of AI? Yeah, for the most part, people said, you know, even if the economy struggles,
Starting point is 01:03:37 even if there are issues with immigration, recruiting, retaining talent, whatever it may be, like the Trump administration has said that they prioritize AI. Look at this Stargate announcement he made with Sam Altman and Larry Ellison and SoftBank and look at all these other things he said. So we're going to be okay. Part of that was less regulation allowing the industry to move forward and trusting the industry to sort of say,
Starting point is 01:04:06 we know AI best, the government doesn't, so we're going to take care of it ourselves. I mean, it's a whole separate and fascinating topic, where if you look at the kind of regulations even top companies and their executives, OpenAI, Google, were asking for in 2023, and what they're asking for now, it's very different. And to me it seems more lax.
Starting point is 01:04:28 You can say in words that you prioritize artificial intelligence; the Department of Government Efficiency can implement AI to replace as many fired federal workers as you want. But as we've discussed, if you're not letting American and international companies work together,
Starting point is 01:04:48 if you're not funding the basic science research that 10 years from now is going to allow for the next AI breakthrough, you're not supporting AI in practice, even if you say so. And that didn't seem to be something that really anyone was thinking about. Last question, how do you think Silicon Valley in the early years of the AI era compares to Silicon Valley in the early years of the social media era?
Starting point is 01:05:13 Because I feel like there's a lot of parallels that, to me, are a little worrying, because I feel like we have not learned any of the lessons from the development of social media. We've been very bad at regulating it. Only just now, you know, is there a general awareness that it has all kinds of negative effects. And it just seems like AI is going to develop much, much faster than that and potentially have much more far-reaching impacts.
Starting point is 01:05:48 And I just wonder if you could compare the two eras. It's a great question. And I think also a great comparison, because maybe in 2023, when AI executives were starting to talk about building the technology responsibly and being responsible actors, you would see people hint at sort of, we don't want to repeat the mistakes of the past. And you would see Congress say, we don't want to repeat the mistakes of the past, the past referencing, right, the social media era and the 2010s. And we haven't seen anyone
Starting point is 01:06:20 really make good on that in the slightest in the past two years. Like where are the regulations? Where are the third party independent government checks here? So, yeah, there's a lot of like the sort of move fast and break things that you mentioned. There's still a lot of that. I think there's probably an understanding that the technology could be dangerous, but a trust in these companies to manage that. And I'm skeptical of this.
Starting point is 01:06:49 We've seen that happen in the past. If you're a believer, you say, well, no one wants to release a broken model, a bad product, dangerous product. The profit motive is against that. I don't know if that's really true, given the pace these companies are moving at and all the competitors are moving at.
Starting point is 01:07:07 I think something also different here is that in the social media era, there was a naivety and a lack of understanding of what the technology could do, would do, would become. I think right now it's almost an active choice to move as fast as you can, because there's this narrative, which I don't touch on in this piece, but which has over the past year and a half become the dominant narrative when it comes to AI regulation: that if we don't do it first, China will. And so we better get there before they do. And we better spread our version of AI to countries throughout the world before China does. And a lot of people really believe that.
Starting point is 01:07:47 I think a lot of people are weaponizing that in order to raise more money, combat regulations they don't like. But I think it's not an accident that there's this parallel. It's very much engineered. Yeah. Yeah. And it feels like you can use the urgency of the competition as an excuse to put everything else, all the other concerns on the back burner and acknowledge that there
Starting point is 01:08:10 are concerns, but hey, if we don't do this, China is going to beat us, and that's the worst thing. Right. Which is strange, because if you believe in this technological arms race between the United States and China, and you believe you want democratic AI instead of authoritarian AI, these are kind of the labels that are given, like, shouldn't you stick to those principles
Starting point is 01:08:34 in developing the technology? Like shouldn't you have public input and go slowly, have it be safe, have it be transparent, which is the opposite of everything all of these companies are doing. Yeah, well, thank you for joining and thank you for doing all this great reporting. It is somewhat alarming, but I think it's also very illuminating
Starting point is 01:08:51 and more people should pay attention to it. So, Matteo Wong, thank you again for joining Offline. Thank you so much for having me and for the fantastic questions. It was a pleasure. As always, if you have comments, questions, or guest ideas, email us at offline at crooked.com. And if you're as opinionated as we are, please rate and review the show on your favorite podcast platform. For ad-free episodes of Offline and Pod Save America, exclusive content, and more, join our Friends of the Pod subscription community at crooked.com slash friends.
Starting point is 01:09:19 And if you like watching your podcast, subscribe to the Offline with Jon Favreau YouTube channel. Don't forget to follow Crooked Media on Instagram, TikTok, and the other ones for original content, community events, and more. Offline is a Crooked Media production. It's written and hosted by me, Jon Favreau, along with Max Fisher. The show is produced by Austin Fisher and Emma Illich-Frank. The show is mixed and sound edited by Dan Farrell. Audio support from Charlotte Landis and Kyle Siglund.
Starting point is 01:09:50 Dallon Villanueva produces our videos each week. Jordan Katz and Kenny Siegel take care of our music. Thanks to Ari Schwartz, Madeleine Herringer, and Adrienne Hill for production support. Our production staff is proudly unionized with the Writers Guild of America East.
