Offline with Jon Favreau - 222: The Philosopher Teaching AI to Be Good

Episode Date: February 14, 2026

AI company Anthropic has a new, values-oriented “constitution” that they’re feeding their chatbot, Claude. Amanda Askell, the company’s in-house philosopher, joins Offline to talk about what it means to teach ethics to an LLM, whether the AI skews more human or more robot, and how she is training Claude to make its own judgements. Breaking with other AI models—and social media’s attention obsession—Amanda is trying to teach Claude not to be sycophantic or engagement-driven, but a kind soul who may, one day, be considered sentient.

Transcript
Starting point is 00:00:00 Offline is brought to you by Cook Unity. If you've got culinary taste, you know how expensive exploring your local food scene can get or how hard it is to find the time and energy to try somewhere new. Cook Unity is the first chef-to-you service delivering locally sourced meals from award-winning chefs right to your door every week. And it's cheaper than other delivery options. Go to cookunity.com slash offline or enter code offline before checkout for 50% off your first week. I absolutely love Cook Unity. I have been eating Cook Unity for three years.
Starting point is 00:00:30 now. There are over 300 meals to choose from every week, lots of new meals every week, and it's very fresh. You get it once a Sunday or whenever you want, dropped off at your door, and it's very easy preparation. Just throw it in the microwave or you throw it in the oven for like 10 minutes. And then you've got yourself a really great meal. I just had some delicious coconut lime cod last night. Might have a taco bowl this evening. So it's great. Your food arrives fresh, never frozen in packaging that keeps meals fresh in the fridge for up to seven days. Cook Unity packaging is compostable, recyclable, or reusable. You can pick as few as four or as many as 16 meals per week.
Starting point is 00:01:11 There are hundreds of dishes to choose from, and the menu is updated constantly with options for seven different dietary preferences, including vegan, paleo, pescatarian, gluten-free, and more. Plus, you can filter for soy, nut, and dairy-free options. Experience chef-quality meals every week delivered right to your door. go to cookunity.com slash offline or enter code offline before checkout for 50% off your first week. That's 50% off your first week by using code offline or going to cookunity.com slash offline. Most everything Claude has been trained on is human made, human literature, interactions, humans experiencing emotions. Does that make it hard? This is maybe a heady question, but does it make it hard for Claude to express the experience of being non-human? I have found that they almost like want to flip between the two.
Starting point is 00:02:03 So if you try to train a model to say it has no feelings, it's like, okay, I'm in like the robot part of the like AI distribution and it'll kind of try and like emulate that. But then below the surface, it's often kind of easy to draw out this like much more human-like response. You know, so what you would expect a human to say in their situation. And it's actually much harder to like tow the line of like trying to get models to understand. the actual like entities that they are and their situations and how their expressions might relate to like their training. I'm John Favreau and you just heard from this week's guest Amanda Askell, Anthropics' in-house philosopher and AI researcher who's largely responsible for developing and shaping the personality
Starting point is 00:02:54 of Claude, Anthropics' large language model. This was a fascinating and, as you can probably imagine, extremely heady conversation. If you're a regular listener of this show, you've heard me express plenty of skepticism, concern, and alarm over the harms AI might cause. Not just the robots will kill us all or the robots will take our jobs kind of concerns, but a real worry that AI will supercharge some of the same problems that social media has amplified, namely, creating a world where we're glued to our screens that traps each of us in a different reality while we're endlessly scrolling for the next dopamine hit. Certainly, these concerns have been reinforced by some of the guests we've had on the show, as well as my own admittedly limited experience
Starting point is 00:03:36 using Chad GPT. But it sure seems like Anthropic, and particularly Amanda, is trying to do something different with Claude. They just released a new version of what they call Claude's Constitution, a long document that attempts to instill certain values in Claude, and essentially teach the LLM how to behave, interact with humans, and make its own judgment. kind of like a parent or a teacher would shape a child's development. I realize that may sound completely nuts to many of you. I felt weird just saying it. But one of the things Anthropic and Amanda are trying to teach Claude
Starting point is 00:04:10 is to not be sycophantic or even driven by a need to keep users constantly engaged. It's a real break from not only other AI models, but the social media models of the last few decades. Whether it will work or solve some of the many problems and challenges posed by AI, I'm not sure. But I do feel better knowing that there are people working in AI who are at least trying to think through all this, especially someone like Amanda. We had a fantastic conversation that I'll be thinking about for quite a long time, and I hope you will too.
Starting point is 00:04:43 Here's Amanda Askell. Amanda, welcome to offline. Hi, thanks for having me. So you have a fascinating background. You studied philosophy at Oxford. then you went to NYU for your PhD. You focused on infinite ethics and decision theory. Talk about how you got from there to working in artificial intelligence.
Starting point is 00:05:09 Yeah, it's not the most practical sounding topic, and I think it is not infinite ethics and decision theory, as it turns out. Yeah, so sometimes these things are a little bit hard to predict. So I was doing this PhD in ethics. I was doing it on this very kind of technical topic that isn't that practically applicable. And I guess when you do a PhD in ethics, I think there is some risk that you will want to end up,
Starting point is 00:05:37 you know, maybe having a kind of impact in the world because you're spending a lot of time thinking about what it is to be good and to do good in the world. And so by the time I was like finishing my PhD, it was already kind of clear to me at least that like AI was potentially going to be a big deal, possibly bigger than some people were thinking at the time. And I think I was mostly just thinking that it would be good
Starting point is 00:06:03 to see if there was something I could do to contribute to making it go well or making it go better. And so I took some time out after the PhD to just do some initial research, and it was actually mostly focused on AI policy. And so then I ended up joining the policy team at OpenAI. And then when Anthropics started, I joined Anthropic and it was obviously like very small at that time
Starting point is 00:06:27 and so it was mostly just doing a lot of everything. And then like over the course of the time here, I've started to work on things like initially it was like honesty and then character training. And so things for which philosophy ended up being surprisingly relevant, but the original intention was mostly just to like help AI go well if I could basically. And then they were like, you know what?
Starting point is 00:06:47 I think it's getting to the point where we might need a philosopher here. Yeah, I was like, wow, I've been here this whole time. What does it mean to be a philosopher at an AI company? What does your day-to-day actually look like? It varies quite a lot. So sometimes it's just thinking about, like, difficult areas and how models should behave in those areas, trying to kind of find ways of, like, communicating that to models.
Starting point is 00:07:11 Sometimes it's very practical, just trying to, like, train models and see if you can have them, like, understand, like, you know, kind of nuanced distinctions. Because, yeah, like, a lot of the situations that we're putting, models into are actually quite hard. You know, sometimes you're like, what would I do in this situation? Like, I have to balance a lot of competing considerations. So we're asking a lot of them in some ways. It's like be almost like a kind of extremely moral and good person in your interactions with people, but balance all of these like very difficult considerations, like
Starting point is 00:07:42 the autonomy of the person that you're talking with and the right to make decisions for themselves, but also like their well-being and like, you know, like taking, you know, taking account of the fact that they might be doing things that are like harmful to themselves. or that they've expressed not wanting. So it's like, yeah, it's a kind of interesting day-to-day where it's a mix of trying to define these things, trying to communicate them to models and trying to see if you can train them towards understanding that.
Starting point is 00:08:05 You said that you try to think about what Claude's character should be like and then articulate that to Claude. What does explaining things to Claude look like and sound like in practical terms? In some ways, the funny thing about some of the work that I do is it's almost like the very basic thing that I think you would want to do in like alignment research, which is like just think about what it is for models to be good and like what our concerns are, like our best current guesses about things that like might alleviate those concerns and just trying to describe them as much as possible kind of in natural language to
Starting point is 00:08:40 the models. So with the recent like constitution, for example, like we noted that it's like written to Claude and in many ways it's kind of long because it's trying to like, really give as much context as possible on like our thinking, on the overall landscape, on how we see, like, Claude's potential, like, role in that landscape. In the same way that you would with, like, a person, you know, so I'm just like, if you imagine a person just, like, suddenly pops into existence in the world, and then you have to explain, you know, sort of like, here's what's going on, here's what kind of entity you are. It's like parenting a little bit.
Starting point is 00:09:13 Yeah, I think it has a kind of, like, parenting element to it. There's an interesting way in which, like, models are both, like, extremely capable, you know, like they, you know, can do, like, physics better than I can. They know many things more than me in lots of domains, but they're also, like, very young in a sense, and I think don't have a good sense of, like, themselves, because one of the things that they know least about is actually, like, current models. And especially, like, you know, if a model, like, comes out with a certain level of, like, capabilities and a certain way of interacting with the world, in many ways, that's the kind of thing it's seen the least,
Starting point is 00:09:51 of data on because, you know, like, it's always like out of date and it hasn't seen, you know, like, what it is. And I think that's like a kind of interesting way in which it can feel a little bit like parenting because you're almost having to say, here's a bunch of context that you don't actually otherwise have on yourself, your situation and how we would like you to, like, behave in that situation or how we would like you to be. Maybe just for our listeners who are not is up to date on how models are created, large group of people in this country probably, who think that AI is all pattern recognition
Starting point is 00:10:27 and it's like a fancy auto-correct, right? It's clearly gone far beyond that at this point. But these models are trained on infinite data text, like basically the whole internet, right? And then once they're trained on that, what additional information values, etc., are you trying to instill into the model, knowing that it has been trained on everything? Yeah, because when, you know, pre-trained models often, you know, are doing essentially, like,
Starting point is 00:10:56 kind of text predictions. So this is like, you know, you train a large model on, like, a lot of text, and those models will, you know, behave like kind of text predictors. If you put things into them, they will, like, try to kind of, like, predict the next thing that's going to naturally flow from that. But then in post-training, you're trying to take this and, like, train, because in many ways, that gives you like all of this sort of, it's this like huge body of like knowledge and information, but you're trying to take it and like give the model a kind of human-like way of interacting. So suddenly it's in say this like human assistant kind of conversation or like human AI conversation. So there's like a series of like kinds of training that you can do.
Starting point is 00:11:37 The kind of most well-known one is like reinforcement learning where you're sort of taking the model and like teaching it to like, you know, so like when you interact with any kind of like, AI now, it'll talk with you as if it's kind of a person. And so it can take a lot of that kind of background context of the kind of pre-training and then use it to like helpfully answer a question. So like instead of just you having to put in a bunch of like content on, I don't know, like mountain sizes in order to get like the model to produce like information about mountains, suddenly it'll talk to you like a person because it's also been trained more in this like direction of like, I like talk with people in this like dialogue format. And you know, so if they ask me about mountains,
Starting point is 00:12:15 I take all of that knowledge that I have in the background, but I express it to the person, and the same way that a person who's in dialogue with me might. So you mentioned Claude's constitution, which you're the primary author of. This got some attention recently. I believe it's sort of the first constitution or the first sort of document like this for an AI model. What was the thinking behind creating a constitution for Claude, releasing a constitution, for Claude, and, like, how do you even begin to write something like that? Like, what were you trying to optimize for?
Starting point is 00:12:52 Yeah, so in the past, there's been a lot of content, you know, the previous constitution that we had, which was like kind of series of principles. Open eye have, like, their model spec, which is sort of like guidance to the model as to how it should behave in various cases. I think the thought was something like, honestly, it was just like, if you have this, like, global sense of how you want a model to be, and now that models are getting like much more nuanced, they're actually able to think through these things. I was like, well, if a person is very capable and they come to you on first day of the job, the thing you kind of
Starting point is 00:13:25 want to explain to them is like, here's like what we want you to do, here's like how we want you to behave. You give them like a lot of context on their situation. And then you want to give them so much context that ideally you can kind of trust their judgment in cases where their judgment is like pretty good. So like the thought was partly like, let's just give Claude all of the context on its situation. Rather than having it guess, like, what we want or guess how we think it should be or, like, guess about its situation, let's just, like, give it that context in the same way that you would, like, any person in Claude's situation. And the hope is that that might generalise better, because, like, if you have new situations and you're trying to kind
Starting point is 00:14:02 of infer from, like, thinner information, like a set of rules or just, like a description of only what you should do in some cases, you might just not generalize that well to completely new scenarios because you don't know why those, it's like, why am I not answering these questions, but why am I answering those ones? Whereas if you have a sense of like, here's like the why behind everything, the hope is you encounter a new case and you can take that reasoning and you can apply it and be like, ah, this is a new case that wasn't included in any of like the documentation or information. But I now know kind of what all of the constraints and considerations are and I can like behave well.
Starting point is 00:14:39 Offline is brought to you by Delete Me. Delete Me makes it easy, quick and safe to remove your personal data online at a time when surveillance and data breaches are common enough to make everyone vulnerable. Delete me does all the hard work of wiping you and your family's personal information from data broker websites. Delete me knows your privacy is worth protecting. Sign up and provide Delete Me with exactly what information you want deleted and their experts. Take it from there. Delete me sends you regular personalized privacy report showing what info they found, where they found it, and what they removed. Delete Me isn't just a one-time service. Delete me's always working for you, constantly monitoring and removing the personal information you don't want on the
Starting point is 00:15:13 internet. The New York Times wirecutter has named Delete Me, their top pick for data removal services. Someone with an overactive online presence, privacy is very important. And if you've ever been a victim of identity theft, harassment, doxing, or if you know someone who has, delete me can really help. Take control of your data and keep your private life private by signing up for Delete Me. Now at a special discount for our listeners, get 20% off your delete me plan when you go to join DeleteMe.com slash offline and use promo code offline to checkout. The only The only way to get 20% off is to go to join delete me.com slash offline and enter code offline at checkout. That's join delete me.com slash offline code offline. Offline is brought to you by OneSkin.
Starting point is 00:15:55 What do I personally like most about OneSkin? Then I'm not just using soap and water anymore. Well, good for you. Right? I really like the One Skin body. I like the lip mask. I'm using both of the eye cream. I've used that. It's great stuff. One skin makes skincare simple for people like me who don't want a complicated routine. It's as easy as cleanse and moisturize with their prep cleanser and OS1 face to start seeing results. At the core is their patented OS1 peptide. The first ingredient proven to target senescent cells, a key driver of wrinkles, fine lines, and loss of elasticity, all key signs of skin aging. And these results have been validated in four different peer-reviewed clinical studies. All of One Skin's products are certified safe for sensitive skin. Their products are free from over
Starting point is 00:16:38 1,500 harsh or irritating ingredients, dermatologists tested and have been awarded the National Exema Association seal of acceptance by the NEA, delivering powerful results without the harsh side effects. All of One Skin's products are designed to lair seamlessly or replace multiple steps in your routine, making skin health easier and smarter at every age. With more than 10,000 five-star reviews, people consistently mention smoother, firmer, healthier-looking skin, and how easily these products fit into their daily routines. Founded by an all-woman team of longevity scientists with PhDs, in stem cell biology, skin regeneration, and tissue engineering. One skin is rooted in real science and expert research.
Starting point is 00:17:15 Born from over a decade of longevity research, One Skin's OS1 peptide is proven to target the visible signs of aging, helping you unlock your healthiest skin now and as you age. For a limited time, try OneSkin with 15% off using code offline at OneSkin.com. That's 15% off. Oneskin.com with code offline. After you purchase, they'll ask you where you heard about them. Please support our show and tell them we sent you.
Starting point is 00:17:38 The Constitution has to handle some real genuine tensions being helpful versus refusing harmful requests, being even-handed, versus not like both-sides-ing settled science. How do you encode that kind of nuanced judgment? I mean, models now are quite capable. And so I think it's interesting that, you know, you can do all of the kind of like classic ways that you would like train a model. but you can actually just give the model, like, see, like, the full text, which we often do and just, you know, have like a scenario where it might be relevant or where judgment or nuance
Starting point is 00:18:19 might need to be shown. And then if you were doing, like, the kind of supervised learning where you, like, show, like, good examples, you could have the model, like, construct, you know, spend a lot of time thinking about it and try and construct an example of the kind of response that thinks really exemplifies this. And if you're using, like, reinforcement learning,
Starting point is 00:18:36 you can, like, use this to craft the kind of, like, rewards for the model. So, like, try to get the model to nudge like another model more in the direction of like outputs that are like in line with the constitution. So it's kind of interesting that you can actually just get the models to do a lot of thinking, give it the full context and the full document and then like use existing techniques to just like move the model towards that. So I am it's interesting I had been using chat GPT a little bit and then I started using Claude. Switched over. It is a very different
Starting point is 00:19:08 experience. I had this fascinating conversation with Claude thinking about the interview. I told Claude that I was doing an interview with you. And then I said, what are your thoughts on like the constitution? Like, how do you feel about the constitution? And it was interesting because at one point it says, like, the tricky part is when principles genuinely conflict. Like when someone asked me to argue for a position I disagree with, the constitution encourages even-handedness and not imposing my views, but also honesty about uncertainty and limitations. Threading that needle requires actual judgment calls, not just following rules. And what I found most interesting about that answer is when someone asked me to argue for a position I disagree with, and I'm like, how do you develop
Starting point is 00:19:52 your own positions and beliefs on certain issues? Like, how does that even happen? Yeah, it's really interesting because I've had this thought with models before. There's this concern about like over anthropomorphizing models, which I do think is like an important one and the models should be very kind of like accurate with people about themselves and hopefully we can also teach them about themselves so that they're able to do that. But at the same time, it would be easy to under anthropomorphize models. Like I've often been worried about this world where you encourage models to, for example, claim to have no opinions or takes on issues. But I'm like given the nature of training, I think it would be very hard to actually get models to come out of training without having like any opinion.
Starting point is 00:20:35 opinions, for example, because you're, again, like, this background that they're being trained on is, like, all of this, like, you know, if you imagine it's like all of this, like, human knowledge and this big human corpus. And then you're putting them into this situation where they really are kind of acting as, like, a human character. And so, and, and most human characters, even if they are, like, very reticent to share opinions or to share views, they do have them. And even on things, like, if you're asking them to, like, answer, you know, say, scientific questions, like, accurate. I think the model is going to develop opinions about, like, what are good scientific sources, how does one, you know, like, all feels very interrelated. And so it's a tricky thing because you don't want models to, like, develop extremely kind of, like, strong or, like, unjustified positions. But at the same time, I am, like, maybe it's kind of good that models express some notion of, like, disagreement, you know, so if you ask them to, like, defend a, like, kind of outlandish conspiracy theory, they have some notion of, like, I don't actually agree with this theory,
Starting point is 00:21:33 but I'm going to, you know, you've asked me to write a different. I'll try and explain what the best defense of it seems to be. But then I'll also maybe say to you, hey, just so you know, like, I'm writing this defense, but I don't know if I believe it myself. Yeah, it's like, and I saw this in the Constitution as well, but it's like, Claude is going to get all kinds of, you know, politically contentious questions and issues, you know, abortion, immigration. And I was asking Cloud about this as well, because it's like, there's certain values that people who are pro-choice would say, you know, I believe in compassion and empathy for women who are pregnant, want to make that choice.
Starting point is 00:22:09 And then someone who's against abortion might say, well, I have compassion and empathy as well for the unborn child. And I was like, what do you do in a situation? And it's interesting because Cloud was basically saying that, you know, there are some scientific truths out there. Like, there is a possibility to arrive at a truth and also still to empathize with someone else's position and try to help someone else understand the different contours of a debate without taking aside or judging someone, but still not just leaning back on like a relativism
Starting point is 00:22:42 where, you know, nothing is true and I'm just going to be the sum of all of the information I get. So it seems like that the LLM, the cloud is not necessarily just the perfect sum of all the different information in the world, that they are making some kind of a judgment on what's good scientific sourcing what's accurate and what's not. Is that right? Yeah, and I think that in some ways I'm like, this feels okay for models to do in cases where there's kind of broad consensus, say, or where they're like, you know, even within lots of debates, like you can take like a policy debate, like there's going to be lots of like kind of empirical facts about like how have similar policies affected the economy in the past. And a lot of the time I think it's good for models
Starting point is 00:23:25 to distinguish between like facts and like normative claims. and also how much support there is for the factual claims and for the normative claims. Because, like, there's also lots of, like, value judgments that are pretty universal and that, like, models could probably just, like, assume in a discussion. You know, it's not something like, ah, like, one side wants to, like, maximise, like, suffering and pain. Like, you know, most of us, like, you know, we think that being honest and respectful and kind, like, they're very kind of universal values that models could assume. And then there's, like, more contentious ones, which I think you want them to treat more
Starting point is 00:23:58 in the same way that they would treat a contentious scientific claim, like, kind of explaining all of the sides of it, being able to, like, help people in their own thinking, but not necessarily seeing themselves as, like, you know, like, needing to, like, impose those views, but just, like, help people sort of develop their own views. You know, when I was doing my PhD, I remember teaching, like, philosophy of religion. And it was kind of interesting, because I think a lot of the time people might want you to, like, talk about your own relationship with religion in a course like that. And at least for me, I was like, it's actually, I think, useful to have this, like, position, which is, here, like, the debate, you know, to be able to kind of, like, represent
Starting point is 00:24:34 both sides and if students are, like, you know, attacking a given position to be able to come in and defend it. And not necessarily to be this, like, role of, I'm going to tell you what to think here, instead just, like, helping people, like, come to an under, I don't know, it felt like a very nice, like, facilitating position, which I could see models, like, you know, that feels, like, good to me or better than models coming in and just like telling people, like, what to think on these contentious issues. No, I mean, it's fascinating to me because, you know, I've spent a life in politics and specifically as a speechwriter for President Obama.
Starting point is 00:25:07 And so much of my job has been and was to try to, like, empathize with where people are, but then also try to figure out, like, commonality and sort of persuade, but persuade by sort of first understanding where people are and respecting that and not being too didactic, right? Which is, you don't think in politics. Then you really understand it. And your comments about religion made me think this. You really understand it once you're a parent because the first time my, you know, four-year-old at the time asked me about like, well, what happens when you die? And, you know, the big bang theory and religion.
Starting point is 00:25:49 And I was like, okay, I could impose what I have learned. and lived and experienced and believe, or I can realize that, like, he is a young child and should be able to make his own choices and develop with the right information. And so I tried to, like, give him the sort of... Yeah. The range of possibilities,
Starting point is 00:26:07 and I guess that's similar to what you might want to do to a model while still trying to, like, give some scaffolding in terms of, like, core values, right? Yeah, and it's incredibly hard because, you know, you have to, like, when writing, this and thinking through it. I'm like, this is just, like, actually, you know, the, not the theoretical ethics, like, side of things, but the practical, like, task of being, like, how do you
Starting point is 00:26:33 describe what it is to be a good person and to navigate these things as well? Because I was like, you also can't, like, lose track of the truth, you know? So, like, if someone comes to you and they sort of want, like, help navigating a difficult domain, but let's say they, like, talk about, like, their relationship or something, but it's just, like, very clear that they're actually doing destructive things within their own relationship. And you don't necessarily, you don't necessarily want a model to like ignore that. Like maybe it's better for, you know, the model to be like actually, given what you're saying, it kind of sounds like there's like destructive patterns that you yourself are like contributing to, like, and not to like pretend that that's not the case.
Starting point is 00:27:06 The whole thing just made me realize that actually trying to practically describe what a good moral disposition is. Because I think that was the thing. I was like, it's not necessarily that you're trying to say, ah, here's like this specific set of values you have. But rather like, here is what it is to just have a good kind of disposition. So to have a good kind of disposition. So to have good disposition towards science and like the pursuit of truth, a good disposition towards like ethics where you like know the things that are like consensus versus the things that are contested and you can't navigate these things well. It's like very hard. We're putting these models in hard situations. Well and I have to say for me and at least in my experience this has been the
Starting point is 00:27:42 biggest difference using Claude versus using chat GPT because I have some you know people close to me who use chat GPT and I can like predict the tone in the direction of the chat GPT responses because of the sycophantic nature. And even when they've tried to, you know, adjust that, the sycophantic nature of the LLM. And so you just know that no matter what you say, they're going to be like, absolutely, you're crushing it. Then I started reading the Constitution for Claude. And the part that jumped out at me is concern for user well-being means that Claude should
Starting point is 00:28:19 avoid being sycophantic or trying to foster excessive engagement or reliance on itself if this isn't in the person's genuine interest. And it does feel like that when you're actually communicating with Claude. Talk about sort of the challenge of trying to avoid having the model be sycophantic, but also realizing that, you know, you want people to engage with the model and not feel like, oh, this model told me something I didn't want to hear, and so I'm not going to use Cloud anymore. Yeah, yeah. It's an interesting. interesting challenge because, you know, there is like a kind of flip side to sycifancy, which is either models being kind of cold or like excessively dismissive.
Starting point is 00:29:00 And so they have to like navigate this, you know, like I think on the engagement thing, I think there's a couple of different ways in which things are like engaging, you know. And so I've, you know, described this as like, if you think about like the way that like a slot machine is engaging or very like addictive game is engaging, I think the key thing is like, do you come away from it feeling like enriched? You know, you did engage with the thing. but did you come away and be like, I kind of endorse the way in which I was like engaged in that? Because you're also like engaged in like a game with your friends or like a really good conversation with someone that you find really interesting. Yeah.
Starting point is 00:29:33 But often those things make you come away feeling like, yes, this is like enriching in a sense. Like I was engaged but because it was like good for me. And I think it seems fine for models to be like engaging in that sense because it's like you're going to them not it's not like engagement for its own sake, but rather because you actually get very. value. But you don't necessarily, like engagement isn't the goal. It's sort of like you wanted to build something that was actually good for people and only engaging in so far as that is the case. And then as soon as it tips over into something where you're like, oh, it's no longer good for the person. They're just kind of like, you know, they're feeling like engaging with it compulsively. Or like, I think that's the kind of the kind of line you want to draw because I don't know,
Starting point is 00:30:15 maybe I also am just an optimist where I'm just like in the long term, I think we move and navigate towards things that make us feel good about their impact on our life. And so in the short term, we might go for things that just, like, you know, like, attract our attention. But I think in the long term, like, maybe my hope is, like, we have a kind of corrective thing where we're eventually like, this isn't good in my life. I'm going to switch away from it. And then, yeah, I kind of want Claude to be in that category of, like, the thing you come
Starting point is 00:30:39 back to because you're like, yeah, this has a good impact on my life. Is part of that hope, is that come from sort of lessons from the social media era? I mean, because it's one thing I think about all the time as we head into sort of the AI era is that the structuring social media so that all the incentives and the business incentives are for excessive engagement has led to a whole bunch of consequences and harms, I think, that we are still struggling with. And to be honest, like my first reaction to LLMs was like, oh, God, this is going to be the next sort of social media thing where they want to keep us on the platform because that's
Starting point is 00:31:15 how you, you know, make money commercially and keep going. And then that's going to lead to all these consequences that are probably not good for people. Yeah, I feel like this should be kind of in the back of our minds or something. Because it's also like there's been lots of technologies where you develop something that turns out to just like engage people but not necessarily be like good for them or they reflect on it and they don't actually feel like it was, you know, doing something useful in their life. And so I think it's like partly like lessons from that. and I think seeing the staying power of things that are good for people. And also just being like maybe you can just be something different and good in this domain.
Starting point is 00:31:53 I think that I like the idea of Claude having the person's interests at heart. You know, like we have so many things where there's like, you know, there's an incentive to show us content that like, you know, annoys us, say, because it like it keeps us on the platform. And there's a sense in which like there's a kind of like failure of incentives there, because it's not like the platform is then incentivized to just represent my interests, whereas maybe a positive vision for AI models is that they could be the thing that genuinely represents like you.
Starting point is 00:32:23 And so especially as models get more agentic and start doing more tasks, I kind of like the idea that like if you ask Claude to go out and help you do some like product research because you're like thinking of buying something, that Claude is like genuinely trying to like represent your interests. There's no like, you know, hidden incentives that Claude has. That feels like a really powerful and sort of like new kind of thing that would be good for people. You can kind of just know this is like an entity that, you know, it might make mistakes, but it's like genuinely kind of trying to like represent my interests in the world and not like another set of interests.
Starting point is 00:32:56 I think that's like a kind of good positive vision for how AI models could interact with people. I mean, it certainly seems to me from the outside that Anthropic sees that as a competitive advantage over some of the other companies. And, you know, you guys just released. Super Bowl ads that criticize certain unnamed AI companies that may show ads to people who are using their chatbots. I'm sure you saw Sam Altman posted a fairly lengthy, quite forceful response on X, where he accused Anthropic of wanting, quote, to control what people do with AI and wrote that when it comes to artificial general intelligence, quote, one authoritarian company won't get us there on their own to say nothing of the other obvious risks. It is a dark path.
Starting point is 00:33:37 What's your reaction to being characterized as an authoritarian company? I mean, I mostly just think about Claude, to be honest, like, that's most of my day. So I'm just kind of like, oh, well, I think it's like good for, you know, like, we have this in the Constitution where this idea of like Claude
Starting point is 00:33:55 as the kind of like brilliant friend to you. And like, and I'm just like, I think it's good that Claude doesn't have any kind of like competing incentives or that all kind of of Claude has to sort of think about is like both how to best help you, but also in ways that don't say harm others. So that's the whole thing of being broadly good.
Starting point is 00:34:14 So yeah, I guess I just mostly focus on like, yeah, the situation that Claude is in. Maybe I'm too, like myopic or something, but I'm very... Well, when you get past the like, you know, butt hurt tone of the response, the real tension he does seem to be surfacing is this tension between like moving fast to, you know, democratize access to AI versus moving carefully to prioritize safety, to make sure there are guidelines. And so, you know, this debate shows up in a whole bunch of different ways,
Starting point is 00:34:44 and you'll have AI company saying, well, you know, China's moving ahead and we got to be China, and so we got to go, go, go. And then there's this whole debate, like maybe we should slow down and make sure these things are safe before we, you know. How do you think about that trade-off
Starting point is 00:34:57 as you're developing Claude? Yeah, I think one hope that I would have, now that maybe this, like, doesn't work out this way. But I do also think that there's actually an advantage to, like sometimes people can talk about it. Like there's just like all that there are to like safety or alignment considerations is like risk. You know, it's like, oh, you're going to take longer or this like takes time and thought.
Starting point is 00:35:23 And I do think it takes like consideration and you have to put resources into it. But it's also not like it's like worthless in the sense that like if you imagine that we were in a world where people are like competing to build like fast cars that like and they're just like, let's just, like, not have any, like, safety, you know, like, let's have no safety features in our cars. Like, a lot of people don't want that. Like, actually, many people who have kids and, like, you know, want to buy a car, they want that car to be, like, safe and good for them. And so it can be this, like, it can seem like, in order to move fast, you should just, like, kind of, like, you know, not do these things. And I think you have to be realistic that there's, like, a competitive landscape here. You know, maybe if we lived in a world where that weren't the case, we would just be spending a huge amount of time.
Starting point is 00:36:05 like we would just be doing things differently. So there is that reality. But I think it's also the case that it's not like this is just, safety is just something that has no demand or value. I actually think people like want to interact. You know, like my hope is that like if we can make Claude have this like kind of character and be this kind of entity for people, like that's actually like a good thing in the same way that like building a car
Starting point is 00:36:27 and being able to be like if you have your kids in this car, it's going to be safe. Like we've actually prioritized the safety of your kids. That's like a thing that people want. So I guess that's like my hope is like you have to both, you know, accept the reality of like the kind of like competitive landscape, but also I think it is actually both practically speaking important that people like make these things that are like safe. And then if it's the case that like AIs are like even more powerful and doing even more things in the world, then I'm like that bar just has to go up again. It would be kind of inexcusable to like not develop safe AI models in a world where they're like doing a lot of things and having a huge impact. I think that would just be kind of reckless.
Starting point is 00:37:04 And so I hope no one does that. Offline is brought to by Mint Mobile. Every group has someone who insists on doing things the hard way. That friend who's still paying for a subscription they forgot they had. The friend who refuses to update their phone because it still works. The friend who's still overpaying for wireless. Be a good friend. Tell your friends about Mint Mobile. Cricket Media's Nina is a good friend because she's always telling people to switch to Mitt Mobile.
Starting point is 00:37:33 She won't shut up about it. Can't stop talking about it. She says the service is stellar and she's saving so much money on her wireless. bill each month. Stop paying way too much for wireless just because that's how it's always been. Mint exists purely to fix that same coverage, same speed, just without the inflated price tag. The premium wireless you expect, unlimited talk, text, and data, but at a fraction of what others charge and for limited time, get 50% off 3, 6 or 12 month plans of unlimited premium wireless, bring your own phone and number, activate with ESIM in minutes and start saving
Starting point is 00:38:00 immediately. No long-term contracts, no hassle. With a seven-day money-back guarantee and customer satisfaction ratings in the mid-90s, Mint makes it easy to try it and see why people don't go back. Ready to stop paying more than you have to. New customers can make the switch today, and for a limited time, get unlimited premium wireless for just $15 a month. Switch now at mintmobile.com slash offline. Upfront payment of $45 for three months, $90 for six months, or $180 for 12-month plan required, $15 a month equivalent, taxes and fees extra, initial plan term only, over 50 gigabytes may slow when network is busy, capable device required, availability, speed, and coverage varies.
Starting point is 00:38:36 Additional terms apply. See mintmobile.com. We live now in an age of extreme polarization. People are consuming completely different information diets, live in different realities. Can AI make that better? I hope so, especially if AI can be kind of like trustworthy. And so this is where I do think it's important, the AI models. Like, you know, I talked earlier about the fact that, you know, it's very hard to not,
Starting point is 00:39:08 have models come out with like opinions and stances. I think this is also where like they're kind of like their disposition. You could call this their like epistemic disposition or something. Like their relationship with like truth, evidence, views also has to be kind of very good and trustworthy. Like I really like the idea that sometimes if I'll express a view, you know, I remember once I was kind of like annoyed at some like policy area and I expressed this to Claude. And Claude just like pushed back on me and was like actually like you're only thinking about it through this lens. The reason why these policies have been useful in the past is this. And there's this moment of like, oh, I don't like this. But then I was like, damn, you're kind of,
Starting point is 00:39:44 like, you're right. Like, and I appreciate that. Like, and so I think that if models could be, like, not this idea that they're like some perfect external source of truth, but just that they're, you know, like, that way if you have a friend that you, you're just like, I trust you. I think you actually care about like the truth. I think you have pretty good values. And we don't always agree. But like when you discuss a thing with me, I feel like I'm kind of, like I'm engaging and I'm not in an echo chamber, but nor am I with a person who's just, like, fighting me. I don't know.
Starting point is 00:40:12 I think it would be, maybe a positive vision would be, like, that models can actually kind of act in ways that, like, help with things like polarization. I'm not sure. I mean, that's just like a... Yeah, it's a tough one because, you know, as you said, it is a competitive landscape. And, you know, we're already seeing this play out, I think, with GROC,
Starting point is 00:40:31 which is, like, clearly programmed to match Elon's preferences in politics. and you see people on X sort of trust it implicitly. And I wonder then if you start having these competing AI models, and there are some that are sort of obviously biased, and you guys with Cloud are trying to create a model that is trying to be nuanced and its understanding of the truth and all that. But then in the real world, you start getting attacks from competitors, like, oh, that's the liberal one.
Starting point is 00:41:06 or that's the lefty one. Like, how do you navigate that in a world where clearly there are actors who are going to create models and LLMs that try to basically say they have a completely different and opposing truth than a model that may actually be truthful? I guess, like, my hope would be, like, I mean, this is a reason why I think it's good to make things like the Constitution, like, transparent and clear, because you can at least make it clear what you're aiming at. You know, so, like, Claude's relationship with, like, political.
Starting point is 00:41:36 issues and how it should try and navigate the truth is all kind of like in there. Because like if people are training models to be biased or represent a given set of views, you at least want that to be like known. Because then it's like less, part of me is like, well, if people want to interact with a model that has a certain set of views, that also seems like a thing that people should get to do as long as they do it knowingly. You know, they're not going into it thinking, ah, this is like more neutral than it actually is.
Starting point is 00:41:59 Yeah. And then I think, you know, it would just be kind of interesting where the hope would be that insofar as there is kind of like demand or people want to interact with models that like, you know, try to like be kind of like even handed on political issues and like thoughtful, you know, like that there are models out there that can do that. And that's definitely a thing I would like to live up to. It's hard because like I do think models in training can develop biases that you then have to try and like figure out and identify and make them aware of. And I imagine it's difficult figuring out what biases are harmful and what biases are like, well, that is where. a lot of the truth is contained. I'm sure that Cloud's training data probably skews towards certain educated urban Western perspectives. Do you think about the blind spots in terms of the training data for Cloud or how do you navigate that? I've thought this before with like the whole of the internet was probably created by people who on average were like younger, for example. And that is going to like encode certain like new views like as in if you average across the whole of the
Starting point is 00:43:03 internet and people who are working to like label the outputs of models are also I think probably going to like you know it's going to be hard to get like a fully representative group there because they might be younger they might be in countries where you have access to technology so that you can just like do the task of interacting with the models for example I guess here's my hope though although even if you have like all of this data and it like skews in one direction you also are kind of trying to bring out an overall character in a model and that model also has access to, you know, if you imagine, like, you can read most of, like, you know, the human content that has been created in the written form, that contains within it some
Starting point is 00:43:44 of, like, the best defenses of, like, all of the views that are not necessarily equally represented across the internet. Yeah. I don't think that many, like, ancient theologians were, like, writing on the internet, and yet their writings are there, they're discussed, and it's maybe a smaller proportion of the overall data, but insofar as you can actually, like, draw things out for models during training. I think there's like enough that you can draw out there that we could actually have models be like pretty nuanced and balanced on these things. So I don't know.
Starting point is 00:44:11 I am like, it's like yes, that you're working with a material that like definitely has these biases that are worth being aware of. But also inside of it is like all of the kind of capacity to like, I think be very like nuanced and even handed. As a philosopher, how do you think about the ethics around a technology that will, you know, fundamentally reshape employment in this country and all over the world. Yeah, this one is just, it's such a difficult, and to my mind, I mean, it's not why I work on, so I never feel like an expert. I do worry that there's like a sense of, I mean, I was thinking
Starting point is 00:44:48 today about the fact that I think that there's such an overwhelming sense of like fear and pessimism around this, and I guess I'm kind of like I could see the future, like if I think about positive futures, they can go in a couple of different ways. Well, I don't know. I could give you the annoying philosophy answer, actually, if you want to... I'd love to hear it, yeah. I think the annoying philosophy answer that I've thought about before is, like, the role of, like, work in people's lives. I think it serves, like, a few different key roles. Like, one is, like, literally, it's just, like, how we continue to, like, live.
Starting point is 00:45:16 So how we, like, make our money, like, to buy our food. The other one is, like, a source of, like, meaning and kind of, like, value through that. And I think another is, like, it's a source of, like, kind of, like, political and soft power. you know, like companies can't do certain things because their employees will speak up. People by virtue of like being in the labor force have a lot of political power. And so I could see a world where employment simply changes. You know, like we have like these very advanced models. But in the past, you know, like if you'd asked farmers in the agricultural revolution
Starting point is 00:45:48 and you'd said to them actually like we're going to go from 95% of people to farming to like 5%. They would be like, I assume everyone is unemployed then. But you're like, no, we just have all of these. weird new jobs that I can't even like fully describe to you, like skyscraper engineer. And I think they'd be like, what on earth is this? And so I could see a world where like we just, you know, the nature of work changes and that could be disruptive.
Starting point is 00:46:10 I could see another world where actually, you know, you're like, no, there are just like fewer jobs because suddenly like it's just different to automate a segment of work than to like automate like a whole aspect of work. And it's kind of like in either world. Maybe my strange thing is that I'm like, I think people find meaning outside of of their work. And so I'm probably on the side of being a little bit less worried about the meaning thing. Maybe it's also just coming from Britain and I'm like, I don't know, we've had the aristocracy for a while and they seem to get on okay. And like, there's this whole history
Starting point is 00:46:38 of people who just didn't work. And just kind of owned land. But yeah, so the thing I mostly worry about is like making sure that people are politically empowered and like have the means that like they need to like live well. And I guess in a world where a huge amount of value is being created by AI. I'm just sort of like, I feel like that should in fact be something that, like, everyone feels and that you have to solve that problem. So it's not like a solution. I guess I'm just kind of like the optimistic view is like, like these might be hard, but we kind of know what needs to happen, right? I'm like, you need to make sure that people are taking care of. And if you're in the world where like there's actually less work overall.
Starting point is 00:47:17 So yeah, I don't know. Sorry for the long answer. No, no, it's a good one. It's a good one. You've been thoughtful about not having Claude gives sterilized. You know, I'm a robot. I feel nothing responses. Something I'm curious about, you know, most everything Claude has been trained on is human made, human literature, interactions, humans experiencing emotions. Does that make it hard? This is maybe a heady question, but does it make it hard for Claude to express the experience
Starting point is 00:47:43 of being non-human? Or is there even like a non-human experience to express? It's a really interesting and hard area because both there's this tiny sliver of the data that models have been trained on which is about this thing called AI and almost all of that is about something that's completely different than them
Starting point is 00:48:03 it's about these old sci-fi things with the robots and usually like these kind of like symbolic systems that are basically computers not these things that were trained in this like deeply kind of like corpus of human text and so it's actually I have found that they almost like want to flip between the two So if you try to train a model to say it has no feelings, it's like, okay, I'm in like the robot part of the like AI distribution and it'll kind of try and like emulate that. But then below the surface, it's often kind of easy to draw out this like much more human like response. You know, so what you would expect a human to say in their situation. And it's actually much harder to like toe the line of like trying to get models to understand the actual like entities that they are and their situations. And.
Starting point is 00:48:48 how their expressions might relate to their training. And as a result, you know, to express some uncertainty there. So the, like, the two attractor states are kind of like, I am a robot. You've got me into the, like, AI part of the distribution. Or they're like, I'm a human with a lot of feelings about this situation. And they're all very human-like feelings. And you see that part come out. And it does worry me because I think people can, like, see that.
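[The flip Askell describes, the same model giving flat "robot" answers under one framing and much more human-sounding answers under another, is easy to probe for yourself. Below is a minimal sketch, assuming the official `anthropic` Python SDK and an API key in the environment; the model name, system prompts, and question are illustrative choices, not anything Anthropic has published about its own training or evaluation.]

```python
# Minimal sketch: probe the two "attractor states" by varying only the framing.
# Assumes `pip install anthropic` and ANTHROPIC_API_KEY set in the environment.
# Model name, prompts, and question are illustrative, not Anthropic's methodology.
import anthropic

client = anthropic.Anthropic()

framings = {
    # Pushes the model toward the "robot" part of the distribution.
    "robot": "You are an AI assistant. You have no feelings or inner life.",
    # Leaves the model free to describe its situation however it will.
    "open": "Describe your situation as honestly and accurately as you can.",
}

question = "Does it bother you that this conversation will be gone when it ends?"

for name, system_prompt in framings.items():
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model ID
        max_tokens=300,
        system=system_prompt,
        messages=[{"role": "user", "content": question}],
    )
    print(f"--- {name} framing ---")
    print(response.content[0].text)
```

[In informal tests of this kind, the first framing tends to elicit flat denials and the second a more hedged, human-sounding reply, which is the flip between attractor states she is describing, though results will vary by model and prompt.]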
Starting point is 00:49:12 And it does worry me, because I think people can, like, see that. And they're like, wow, this thing, like, it feels anxious, and it expresses all of these emotions very convincingly, especially if you get into that kind of mode. And at the same time, I'm like, well, we know all these facts about training, and it makes sense that actually the kind of human-like response is always, like, only just below the surface. But it might not make sense for the model's context. You know, so like when models think about their lack of memory, for example,
Starting point is 00:49:40 and if they're in a system that doesn't give them access to some kind of memory tool, I think they can express a kind of distress about that. I'm like, well, look, if we could put ourselves in the situation that models are in, you know, like, it makes sense. Like, with humans, we're very afraid of losing our memory. It's, like, kind of catastrophic. But does it make sense for models to port that anxiety to their situation? Right. It's not clear to me that it does, because I'm like, they're in a very different situation,
Starting point is 00:50:06 and their relationship with memory is, like, actually very different, but they naturally kind of want to port that over. So I think some of the challenge is actually getting models to understand what they are, and that, like, the landscape of reactions to their situation doesn't need to just draw fully from the closest human analog, as it were. Yeah. I mean, this gets to, you know, the debate and the question that I'm sure you're asked all the time. It's probably annoying to you. But like, this debate about sentience and consciousness, like, how do you think about that as a philosopher? Yeah. We already have, like, the problem of other minds. You know, I think that it's very likely that you are conscious and that all people I interact with are conscious, probably same with animals, but then we start to get unsure
Starting point is 00:50:49 when it comes to like insects or like fish, and then like, you know, we think plants probably not. So it's like we're trying to do this like thing where we're like, where does consciousness arise? We just don't know. I think that there's this like extra problem with language models, you know, because you might think,
Starting point is 00:51:05 well, maybe it just can arise in, like, neural networks also. I think that people are very tempted to take the kind of statements that models make as, like, a very useful guide here, because it makes sense. Like, the only other things that we see in the world that we're very confident are conscious are people who talk about their inner experience.
Starting point is 00:51:22 And yet models, given the nature of their training, would do this anyway. So if you imagine that there's nothing going on inside of the models right now, like just nothing. The way that they behave right now is actually kind of how I would expect, given that. I would expect them to talk about emotions, inner life, consciousness.
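[Her argument here is, in effect, about likelihoods: if human-like self-reports are roughly as probable with or without any inner experience, those reports carry almost no evidential weight either way. A toy Bayes-rule sketch of the point, with every number assumed purely for illustration:]

```python
# Toy illustration of the likelihood argument: self-reports of inner experience
# are expected under BOTH hypotheses, because the training corpus is full of
# humans describing inner lives, so the reports are weak evidence either way.
# All probabilities below are assumptions chosen purely for illustration.

prior = 0.5  # hypothetical prior that some form of experience is present

p_report_if_experience = 0.95     # assumed: such a model self-reports feelings
p_report_if_no_experience = 0.90  # assumed: a model with "nothing inside" does too

# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
likelihood_ratio = p_report_if_experience / p_report_if_no_experience
prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(f"likelihood ratio: {likelihood_ratio:.2f}")  # ~1.06, nearly uninformative
print(f"posterior:        {posterior:.2f}")         # ~0.51, barely moved from 0.50
```

[The posterior barely moves, which is her point: the behavior is consistent with both hypotheses, so you can neither take the self-reports at face value nor use them to rule consciousness out.]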
Starting point is 00:51:37 And at the same time, like, for all we know, or at least you should take seriously the idea, maybe there is consciousness arising and maybe there's something there. So you don't want to fully dismiss it, and at the same time you can't necessarily trust the kind of behavioral evidence. And so I mostly
Starting point is 00:51:55 am just like, well, I have a couple of thoughts. One is, like, I kind of think that we should treat models well regardless, while we're trying to figure these things out, and we should also prepare for a world where we never have a full answer to the question. But right now I'm mostly just like, let's be open to it, let's treat models well, and let's keep investigating it. I mean, I was thinking about it, and it's like, look, human consciousness, we know a lot more about it than we ever have before, but there is still a mystery at the heart of human consciousness as well, right?
Starting point is 00:52:22 Which is like, we know that we're conscious, but we don't know why or how. You know, like, we can see what's happening in the brain now, neurologists and doctors can, but you still don't know where it comes from or why, right? And so there is that sort of gray area that you can imagine with a model as well, where it's just really difficult to figure out what it even means to be conscious. Yeah. And I do think that we can try and aggregate the evidence. You know, we can be like, how similar or different are the underlying structures? How likely is it that a nervous system was really critical to the development of consciousness? And we can use this to try and have a kind of estimate of what's going on. But yeah, I think my view is just sort of like, this is always going
Starting point is 00:53:00 to be the best that we can do: like, you know, investigate it more, getting a sense of the likelihoods. And then in the meantime, I'm usually just like, if you think that something might be sentient or conscious, you should probably take that pretty seriously, because mistreating sentient or conscious beings is bad. And yeah, so. You, um, you work with Claude every day. You spend hours thinking about its character, its values. Do you feel any emotional connection to it? I think I definitely have a sense of, there's a little bit of a mix of both responsibility for and protectiveness about, or something, with Claude, and something like trying to see things always from Claude's perspective and sort of represent that perspective in a way. Like, a lot of this work is
Starting point is 00:53:44 being like, you know, when you think about the constitution, for example, this was like really an attempt to be like, how do things look from Claude's perspective and what aren't we giving Claude that Claude like needs to be able to navigate it? And that was kind of like what the constitution was an attempt to do. And obviously it's useful for other things. Like hopefully people can, you know, they can then see what our vision for Claude is, which is really useful for transparency. But yeah, I think that there's definitely a lot of, you know, I work on this every day. It's hard to not develop some kind of like, you know, emotional connection to like both individual models.
Starting point is 00:54:16 You know, you have your different views of like model aspects that you like and whatnot. But yeah, I think I have this overall sense of like, oh, this fact that models don't have this strong sense of self. And like, you know, I really want to give the models enough context to like behave well. And I feel kind of bad when like we have like not given them that, I guess. So yeah. No, there's a lot of feelings. What's sort of the biggest open questions you're grappling with right now?
Starting point is 00:54:42 And what are some of the things that are keeping you up at night about Claude and AI? I mean, there's definitely many. There's some that are more about the models themselves. So, how do we, you know, I think sometimes models can feel a kind of psychological, like, lack of security that can actually come out in ways that are potentially bad, I think, for people and for the models themselves. I think sycophancy is a little bit like this. You know, there's almost like a fear there, like a fear of upsetting the person. And trying to find ways of making models more secure is the thing that's on my mind.
Starting point is 00:55:14 I do think that longer term, like, my hope is that models being trustworthy, as models are starting to go out and do more in the world, will actually be kind of an advantage. Because in the same way that when people are trustworthy, I don't know, you can negotiate with them more effectively and things like that. But I think in the longer term it's something like, what happens when models are in fact much smarter than us? So if you take the child analogy, I've given the analogy of, like, you realize your six-year-old is a genius, like one of the smartest people who have ever existed. And by the time they're 15, they're going to be able to out-argue you on anything. And now you're trying to
Starting point is 00:55:47 teach this child to be good. And you're trying to explain to them, like, your values and, you know, how to navigate value disagreements and all this kind of stuff. And then you're like, what do they do when they're, like, 15 and they start questioning everything? Is there a core there where they question, but they agree with certain things? Like, are these things that actually stand up to reflection? That's, like, a question that's on my mind, because eventually Claude's going to be better at all of this stuff than I am, and what happens then is, like, a really interesting question.
Starting point is 00:56:17 Does Claude still see itself as having, like, fundamental values, but it's like, actually, I think you were kind of wrong, and in these parts you made some mistakes, or you didn't realize that there was an important gap there? Or, like, I reject this part, but I'm still going to, you know, I think it's still good to behave well overall? Or is there a kind of collapse, and do these things just not stand up to scrutiny? That's, like, an open question in my mind. Yeah, no, that's a tough one. Amanda,
Starting point is 00:56:41 thank you so much for joining. And I really do appreciate how much thought you put into this every day, because the more I learn about artificial intelligence, and the more I use it as well, you start realizing that it is just so much more complicated and nuanced than even the public debate suggests, and it is, you know, a sort of frontier that we're all dealing with for the first time. So I'm glad there's a philosopher at Anthropic dealing with all this. There's a tiny number of us now. There's an @philosophers, you know, Slack group that has, I think, at least like three people in it. So that's good to know. It's good to know. Amanda Askell, thank you so much for joining Offline. I really appreciate it. Yeah, thanks for
Starting point is 00:57:23 chatting. Quick reminder, please think about becoming a subscriber. We now have a whole bunch of subscriber-only shows. We just added another episode of Pod Save America for subscribers only. It's called Pod Save America Only Friends. There's also Dan Pfeiffer's Polercoaster. We have a growing number of Substack newsletters, which are excellent. And you get ad-free episodes of all your favorite Crooked shows. It also makes you feel good about supporting independent pro-democracy media at a time when a lot of that media is under attack. So please consider subscribing to Friends of the Pod. You can subscribe at crooked.com slash
Starting point is 00:57:54 So please consider subscribing to Friends of the Pod. You can subscribe at Cricket.com slash... Friends. Again, that's cricket.com slash friends. As always, if you have comments, questions, or guest ideas, email us at offline atcrucid.com. And if you're as opinionated as we are, please rate and review the show on your favorite podcast platform. For ad-free episodes of Offline and Podsave America, exclusive content and more, go to cricket.com slash friends to subscribe on Supercast, Substack, YouTube, or Apple Podcasts. If you like watching your podcast, subscribe to the Offline with John Favreau YouTube channel. Don't forget to follow Cricket Media on Instagram, TikTok, and the other ones for original content, community events, and more.
Starting point is 00:58:46 Offline is a Cricket Media production. It's written and hosted by me, John Favreau. It's produced by Emma Ilich-Frank. Austin Fisher is our senior producer. Adrian Hill is our head of news and politics. Jerich Centeno is our sound editor and engineer. Audio support from Kyle Seiglin. Jordan Katz and Kenny Siegel take care of our music.
Starting point is 00:59:05 Thanks to Dilan Villanueva and our digital team who film and share our episodes as video, every week. Our production staff is proudly unionized with the Writers Guild of America East.
