Your Undivided Attention - This Moment in AI: How We Got Here and Where We’re Going

Episode Date: August 12, 2024

It’s been a year and a half since Tristan and Aza laid out their vision and concerns for the future of artificial intelligence in The AI Dilemma. In this Spotlight episode, the guys discuss what’s happened since then, as funding, research, and public interest in AI have exploded, and where we could be headed next. Plus, some major updates on social media reform, including the passage of the Kids Online Safety and Privacy Act in the Senate.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
The AI Dilemma: Tristan and Aza’s talk on the catastrophic risks posed by AI.
Info Sheet on KOSPA: More information on KOSPA from FairPlay.
Situational Awareness by Leopold Aschenbrenner: A widely cited blog from a former OpenAI employee, predicting the rapid arrival of AGI.
AI for Good: More information on the AI for Good summit that was held earlier this year in Geneva.
Using AlphaFold in the Fight Against Plastic Pollution: More information on Google’s use of AlphaFold to create an enzyme to break down plastics.
Swiss Call for Trust and Transparency in AI: More information on the initiatives mentioned by Katharina Frey.

RECOMMENDED YUA EPISODES
War is a Laboratory for AI with Paul Scharre
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
Can We Govern AI? with Marietje Schaake
The Three Rules of Humane Tech
The AI Dilemma

Clarification: Swiss diplomat Nina Frey’s full name is Katharina Frey.

The views expressed by guests appearing on Center for Humane Technology’s podcast, Your Undivided Attention, are their own, and do not necessarily reflect the views of CHT. CHT does not support or oppose any candidate or party for election to public office.

Transcript
Starting point is 00:00:00 Hello, everyone. Welcome to Your Undivided Attention. I'm Tristan. And I'm Aza. And we're actually going to flip it around today and have Sasha Fegan, our executive producer here for Your Undivided Attention, actually get in front of the microphone and interview us. So, Sasha, welcome to Your Undivided Attention. You are the actual background host of this podcast. Thanks so much, Tristan. And hi, Aza. It's so nice to be on this side of the microphone, going from the background host of the podcast to the host for this episode. I'm really excited to be here. You know, it's summertime in the US and while things are a little bit slower and everyone's at the beach, I thought it would be a great opportunity to just
Starting point is 00:00:45 take a breath and reflect about where we are at this moment in the tech landscape. It seems like yesterday, but it was actually a whole year and a half ago that you guys recorded this video called The AI Dilemma, which, you know, surprised us all by going viral all around the world. You guys did such a great job of kind of forecasting how the AI race was going to play out. And I'd love to just get a sense of where you think we're headed today. The other thing I really want to do in this episode is get a little bit of a readout from all of the travels that you've been doing all around the world, including the AI for Good summit that you went to in Geneva in this past spring.
Starting point is 00:01:25 And all of the amazing conversations that you've had behind the scenes with policymakers and folks in the tech industry. And the third thing I want to get to in this episode is your reflections on some of the really big developments we've seen in the social media reform space in the US, particularly the passage of some legislation around kids' online safety, which we know is now even more important than ever, given how AI is going to supercharge social media harms. So let's get started with a reflection on The AI Dilemma. Just give us a top line of what you talked about in that video and we'll go from there. Yeah, the essence of The AI Dilemma talk that we did in March of 2023,
Starting point is 00:02:04 which really launched this kind of next chapter of CHT's work, which extends from social media to AI. That talk, The AI Dilemma, was really about how these competitive forces drive us to unsafe futures with technology. We saw that with social media, where the competitive forces driving the race to get attention, the race to get engagement, drove the race to the bottom of the brainstem that then sort of inverted our world inside out into the addicted, distracted, polarized society that we have now. And how with AI, it's not a race for attention. It's a race
Starting point is 00:02:36 to roll out, a race to take shortcuts to get AI into the world as fast as possible and onboard as many people as possible. And since The AI Dilemma talk a year and a half ago, we've seen more and more AI models scaled up even bigger with more and more capabilities, and society less and less able to respond to the overwhelm that arises from that. Yeah. The other thing that we talked about in The AI Dilemma is just, what is this new AI? Like, what's different this time? Why does it all seem to be going so fast?
Starting point is 00:03:06 And what we talked about was that, well, it used to be that because the different fields of AI were separate, the progress was pretty slow. And then in 2017, that changed. There was a breakthrough at Google, a technology invented called Transformers, which all large language models are now based on, and essentially they taught computers how to see everything as a kind of universal language. And every AI researcher was suddenly working on the same thing, having AI speak this kind of universal language of everything. And that's how we get to this world now with Sora, Midjourney, ChatGPT, that if you can describe something, AI will make it.
Starting point is 00:03:47 And that was one. And the other thing that we talked about were the scaling laws. That is, how quickly AI gets better just by putting more money in. Right. Before, dumping lots of money into making an AI didn't really make it smarter. But after, more money meant more smarts. That's really important to get. More money means more smarts. That's brand new with this kind of AI.
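To make the scaling-law point concrete: published scaling-law results (for example, the Kaplan et al. and Chinchilla papers) find that a model's training loss falls roughly as a power law in the compute spent on training, which is why each factor-of-10 jump in spend tends to buy a predictable improvement. Below is a minimal sketch of that shape in Python; the constants are made-up placeholders for illustration, not figures from the episode or from any real training run.

```python
# Toy illustration of a compute scaling law: loss ~ a * C^(-alpha) + floor.
# The constants a, alpha, and floor are made-up placeholders, not measured values.

def toy_loss(compute_dollars: float, a: float = 20.0, alpha: float = 0.08, floor: float = 1.5) -> float:
    """Hypothetical training loss as a power-law function of compute spend."""
    return a * compute_dollars ** -alpha + floor

# Each factor-of-10 jump in spend gives a steady, predictable drop in loss
# under a curve shaped like this.
for spend in (1e8, 1e9, 1e10):  # $100 million, $1 billion, $10 billion
    print(f"${spend:,.0f} of compute -> toy loss {toy_loss(spend):.3f}")
```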
Starting point is 00:04:11 So the companies are now competing to dump more billions of dollars into training their AIs so they can outcompete their competitors. And that's what's causing this insane pace. So when we're talking about money, what are the big sums of money? So, you know, we know roughly GPT-4 was trained with around $100 million of compute, and we know that, you know, the next models are going to be trained with, rumor has it, $1 billion to $10 billion training runs. And when you scale up by a factor of 10, out pops more new capabilities. You know, so much has happened in the last couple of years. And I'm really
Starting point is 00:04:46 interested to know about the conversations that you guys are having in the Bay Area. Whenever I talk to you guys, you tell me about interesting conversations that you've been having and how it's shaping your perspective on things. So I'm just wondering if you can kind of walk us through how those conversations around AI have evolved over the last two years and what you're hearing on the ground, as it were. One of the weird things about wandering around the Bay Area is the phrase, can you feel the AGI? That is, the people that are closest... I know, right? Seriously? I feel the AGI. There's T-shirts with this on. I've walked into dinners, and the first thing that somebody said to me is, like, you're feeling the AGI.
Starting point is 00:05:29 He looked at my face. I was really concerned. I actually hadn't been sleeping, because when you metabolize how quickly everything is scaling up and the complete inadequacy of our current government or governance to, like, handle it, like, it honestly makes it hard for me to sleep sometimes. And I walked in, he looked at my face and was like, ah, you're feeling the AGI, aren't you? This is AGI as in artificial general intelligence, which some people outside of the Bay Area don't ever think that we're actually going to get to. So you're talking about something which is, you know, it's just normal in the Bay Area to be working towards that and thinking about it. And it should be really clear here, because there is debate inside of, you know, sort of both the academic community and the labs of, does the current technology, you know, this Transformer-based large language model technology, will it get us to something that can replace most human beings on most economic tasks, as the sort of version of AGI, the definition that I like to use?
Starting point is 00:06:32 And the people that believe that scale is all that we need say, look, if we just keep growing and we sort of project out the graph of how smart the systems have been: four years ago, it was sort of at the level of a preschooler; GPT-4, the level of a smart high schooler; the next models coming out, maybe it will be at Ph.D. level. You just project that out, and by 2026, 2027, they will be at the level of the smartest human beings, and perhaps even smarter; there's nothing that stops them from getting smarter. And there are other people that say, hey, actually, large language models aren't everything that we're going to need. They don't do things like long-term planning. We're one more breakthrough away from something that can really just be a drop-in human replacement. In either one of these two camps, you either don't need any more breakthroughs,
Starting point is 00:07:22 or you're just one breakthrough away. We're very, very close. At least that's the talk inside of Silicon Valley. You know, if you talk to different people in Silicon Valley, you really do get different answers, and it really feels confusing sometimes. And I think the point that Aza was making is that whether it is slightly longer, like closer to, I don't know, five to seven years versus one to two years, that's still not a lot of time to prepare. And when, you know, artificial general intelligence-level AI emerges, you will want to have major interventions way before that.
Starting point is 00:07:39 You want to have done it already; you won't want to be starting to figure out how to regulate it after that occurs. You want to do it before. And I think the main mission of The AI Dilemma was, how do we make sure that we set the right incentives in motion before entanglement, before it gets entrenched in our
Starting point is 00:08:10 society. You only have one period before a new technology gets entangled. And that's right now. Yeah. I mean, it's hard sitting all the way over here in the suburbs of Sydney, Australia. And I do have a sense from my perspective that there's been a little bit of hype. You know, some of the fear about AI hasn't translated. I mean, it hasn't transformed my job yet. My kids aren't really using it at school. And when I try to use it, honestly, I find it a little bit crappy and not really worth my while. So how do you sort of take that further and convince someone like me to really care? And what's the future that I'm imagining,
Starting point is 00:08:48 I guess even for my job, five or ten years into the future? I think one thing that's important to distinguish is how fast AI capabilities are coming versus how fast AI will be diffused or integrated into society. I think diffusion or integration can take longer, and I think the capabilities are coming fast. So I think people look at the fact that the entire economy hasn't been disrupted so quickly
Starting point is 00:09:11 as creating some skepticism around the AI hype. I think, certainly with regard to how quickly this transformation can take place, that level of skepticism is warranted. But I do think that we have to pay attention to the raw capabilities. If you click around and find the corner of Twitter where people are publishing the latest papers on AI capabilities, you will be humbled very quickly by how fast progress is moving. I think it's also important to note there is going to be hype.
Starting point is 00:09:38 Every technology goes through a hype cycle where people get over-excited. And we're seeing that now, right? And we're seeing that. OpenAI is supposed to be potentially losing $5 billion this year. You know, there's a bit of a feel of, is the kind of crypto crash coming, you know, with the energy around AI at the moment? Right. Exactly. So, and that happens with every technology. So that is true, and also true is the raw capabilities that the models have and the amount of investment into the
Starting point is 00:10:09 essentially data centers and compute centers that companies are making now. So Microsoft is building right now a $100 billion supercomputer center, essentially. Okay, I do want to move on now to questions around data, because there's been a huge amount of reporting recently about how large language models are just super hungry for human-generated data and they're potentially running out of things to hoover up and ingest. And there's been predictions that we might even hit a data wall by 2028. How is this going to affect the development of AI?
Starting point is 00:10:43 I mean, it's a real and interesting question, right? Like, if you've used all of the data that's easily available on the internet, what happens after that? Well, a couple things happen after that. One, and we're seeing this, is that all the companies are racing for proprietary data sets. Sitting inside of financial institutions, sitting inside of academic institutions is a lot of data that is just not available on the open internet. So it's not exactly the case
Starting point is 00:11:13 that we've just run out of data. It's more that the AI companies may have run out of easily accessible open data. Free data. Free data. The second thing is that there are a lot of data sources that require translation. That is, there's a lot of television and movies,
Starting point is 00:11:30 YouTube videos, and it takes processing power to convert those into, say, text. But that's why OpenAI created Whisper and these other systems. There's a big push in the next models to make them multi-modal. That is, not just speaking language, but also generating images, also understanding videos, understanding robotic movements. And it is the case with GPT-4-scale models
Starting point is 00:11:55 that as they were made multimodal, they didn't seem to be getting that much smarter. But the theory is that's because they just weren't big enough. They couldn't hold enough of every one of these modalities at the same time. So there's some big open questions there. But when we talk to people on the inside, and these are not the folks like the Sam Altmans or the Darios that have an incentive to say that the models are just going to keep scaling and getting better, what we've heard is that they are figuring out clever ways
Starting point is 00:12:28 of getting over the data wall and that the scaling does seem to be progressing. We can't, of course, independently verify that, but I'm inclined to believe them. Some companies are turning to AI-generated content to fill that void. This is what they call synthetic data. What are the risks of feeding AI-generated content back into the models?
Starting point is 00:12:51 Right. Generally, when people talk about the concerns of synthetic data, what they're talking about is sort of these models getting high off their own exhaust, which is that if the models are putting out hallucinations and they're trained on those hallucinations, you end up in this sort of like downward spiral where the models keep getting worse. And in fact, this is a concern.
Starting point is 00:13:10 Last year, Sam Altman said that one out of every thousand words that humanity was generating was generated by ChatGPT. That's incredible. That is absolutely incredible. Incredibly concerning, right? Because that shows that not too far into the future, there will be more text generated by AI and AI models, more cognitive labor done by machines than by humans.
Starting point is 00:13:36 So that's in and of itself scary. And of course, if AI can't distinguish what AI generated and what it didn't, and they're trained on that, you might get the sort of downward spiral effect. That's the concern people have. But when they talk about training on synthetic data, that concern does not apply, because they are making data specifically for the purposes of passing benchmarks, and they create data that are specifically good at making the models better. So that's a different thing
Starting point is 00:14:05 than sort of getting high off your own exhaust. Right. But it leaves us in a culture where we're surrounded, or have surround sound, of synthetically created data or non-human-created data, potentially. That's right. Non-human-created information around us. And this is how you can get to
Starting point is 00:14:21 without needing to invoke anything sci-fi or anything AGI, how you can get to humans losing control. Because this is really the social media story again, which is, everyone says when an AI starts to, like, control humanity, just pull the plug. But there is an AI in social media. It's the thing that's choosing what human beings see, that's already, like, downgrading our democracies, all the
Starting point is 00:14:45 things we normally say. And we haven't pulled the plug because it's become integral to the value of our economy and our stock market. When AIs start to compete, say, in generating content in the attention economy, they will have seen everything on the internet, everything on Twitter. They'll be able to make posts and images and songs and videos that are more engaging than anything that humans create. And because they are more engaging, they will become more viral. They will out-compete the things that are sort of bespoke, human-made. You will be a fool if you don't use those for your ends.
Starting point is 00:15:23 And now, you know, essentially the things that AI is generating will become the dominant form of our culture. That's another way of saying humans lost control. And to be clear, Aza is not saying that the media or images or art generated by AI
Starting point is 00:15:40 are better from a values perspective than the things that humans make. What he's saying is they are more effective at playing the attention economy game
Starting point is 00:15:48 that social media has set up to be played because they're trained on what works best and they can simply out-compete humans for that game and they're already doing that.
Starting point is 00:15:57 It's terrifying. We'll still have art galleries and places that are offline, though, that don't have AI-generated content. It'll be artisanal art. Yeah, artisanal art, yeah. All right, so let's get on to what you guys have been up to
Starting point is 00:16:14 because you're always so busy, I can barely book you into a podcast because you're off jet-setting around the world and talking to important people. So I know you went to AI for Good recently, so tell me about that. Where was it? Who did you talk to?
Starting point is 00:16:27 Yeah, we were at the United Nations AI for Good conference in Geneva with a lot of the major leaders in AI, digital ministers from all the major countries. We ran into a lot of friends and allies. We saw Stuart Russell, who, for those who don't know, wrote the original textbook on artificial intelligence. If you've been through an AI class at a major university, you've read his textbook. He is very much on the side of safety. And he talks about how there's currently at least a 1,000 to 1, he estimates closer to a 2,000 to 1, gap in the amount of money that's going into increasing the power of AI versus going into increasing the safety and security of AI. And he gives examples of how that's not true of other industries. For example,
Starting point is 00:17:09 he quoted from his friends at Berkeley, I think, who work on the issues of nuclear, that for every one kilogram going into a nuclear reactor, there's seven kilograms of paperwork to make it safe. So, you know, with that ratio, it's not like, when Sam Altman and co are making GPT-5, for every $1,000 they spend on, you know, building GPT-5, they spend $7 on the safety work on how to make GPT-5 safe. If we were at the nuclear ratio, we would be closer to that. Yeah, that is such an interesting reflection. Aza, what were your thoughts on the AI for Good Summit in Geneva?
Starting point is 00:17:48 Well, I just wanted to name a phrase, actually, Tristan, that you coined when we were there. And this one was, we were sitting in the lecture hall, and I think it was actually someone from Google who was talking about AlphaFold 3, and she was talking about how it would have taken, before, 10 years for them to find an enzyme that might, say, break down plastics in our environment, but they had used AlphaFold 3
Starting point is 00:18:15 to discover an enzyme within hours, and how cool that was. And it is really cool. But she of course didn't then say the next step, which is, but that same tool could be used to create an enzyme that might eat human flesh or do any number of terrible things. And in fact, this was a thing we saw time and time again. In the open source panel, where they're supposed to be talking about the risks and the opportunities of open source, everyone on the panel only talked about the opportunities.
Starting point is 00:18:45 No one would really touch the risks. And what was frustrating is that it was a kind of, I said it was sort of like, gaslighting. And actually Tristan turned to me and said, no, this is half-lighting. They are only telling half the truth. And it's frustrating, because if we only talk about the good things, then we are ill-equipped to actually handle the downsides, which means we are much more likely to have the downsides smack us in the face. And so my big request of everyone is, like, let's stop half-lighting.
Starting point is 00:19:17 Let's acknowledge the good at the same time as we acknowledge the harms, and then we'd be able to wayfind and navigate much better. And one of the experiences Tristan and I had being there is person after person, whether it's just an attendee or a diplomat at the highest level, would come up to us and say, thank you for saying what you're saying. Thank you for talking about the incentives. Thank you for not half-lighting us.
Starting point is 00:19:46 And it just made it clear to us, not that, like, oh, we're so special. It's that there aren't enough people that aren't captured by, say, what their company requires them to say, so everyone has this feeling of just not being told the full truth. One of the other things that really blew me away, actually, walking around AI for Good
Starting point is 00:20:09 was all of the people who listened to the podcast. I remember, Aza, we had, like, the head of IKEA's responsible AI innovation, and they had used The AI Dilemma to sort of guide some of their policy. The Cuban minister, right? Yeah, the Cuban Digital Ministry, who works on policy
Starting point is 00:20:25 and they wanted our help with some stuff on autonomous weapons. They just listened to the episode on autonomous weapons. I was just blown away by how many policymakers who are working on these issues follow the podcast
Starting point is 00:20:35 and just want to thank all of you listeners, because it, you know, makes us feel like our work is really, you know, we're trying to impact things in the world. And, you know, one of the people who actually came up to us was Swiss diplomat Nina Frey,
Starting point is 00:20:48 who told us about some of the work that she's inspired to do because of the podcast. And we actually asked her to send a voice memo after you ran into her, Aza, and let's take a listen to that. Hi, Aza and Tristan. This is Nina. I'm a Swiss diplomat currently working on tech diplomacy.
Starting point is 00:21:03 I think it was in April 2023 when you released your podcast episode on the three rules to govern AI. After listening to that and your thoughts about putting the actors at a table to make them cooperate, I thought that would be something that Switzerland could also contribute to. And we launched, together with ETH Zurich, an initiative that's called the Swiss Call for Trust and Transparency, which wanted to contribute with concrete actions to really also bridge the time gap from now until proper regulation will be in place. Fast forward to today, this has led to one initiative amongst others that really tries to kind
Starting point is 00:21:49 of create a virtual network for AI, which invites partners to contribute to resource pooling in the three pillars, compute, data, and capabilities, to really give more equitable access to AI research. And your podcast of one and a half years ago has been a kickoff initiator to this thought that led to so much. So I really wanted to thank you for that, and for your continuous action toward more safe and equitable access to AI. Thank you. That's so awesome to hear. It can feel really powerless as a human being seeing the tidal wave of AI coming,
Starting point is 00:22:37 for what can we possibly do? And without being Pollyannaish about it, there is a way that I think that clarity can bring agency, and that it's not the kind of thing that we're going to be able to ever do alone, not any single one of us. This is always going to be a coordination kind of problem. And seeing that there can be decentralized action
Starting point is 00:23:02 where each person who listens to this podcast or otherwise is informed can say, what can I do in my sphere of agency? If we all did that, the world would be a much better place. And this is one of those examples of it happening in practice in ways that we could never have possibly imagined. Yeah, one of my favorite parts about walking around that center in Geneva was the sense that the movement was seeing itself,
Starting point is 00:23:30 or like feeling the movement. I remember I was talking to Maria Ressa, former podcast guest who won the Nobel Peace Prize for her work. And what she said after The Social Dilemma launched is, the movement needs to see itself. There's a lot of people who are working on this, but when you're a person who's working on it, your felt sense is, I'm alone.
Starting point is 00:23:46 I don't feel the other humans that are working on this. And so how do we actually have the humans that are listening to this podcast feel the other humans that are listening to this podcast, and then do real things in the world because of it? And so one of our thoughts with this episode is trying to bring more of that to light for people, so they can feel that there is progress slowly but surely being mobilized. Yeah, well, that's a really good segue into what I wanted to talk about next, actually, which is that the work that CHT has been doing on AI is really on a continuum with the work that the organization first started to do on social media.
Starting point is 00:24:21 And, you know, I think that's something people don't always understand very well. So I'd love for you to have a go at explaining that. Yeah, the key thing to understand that connects our work on social media to AI is the focus on how good intentions with technology aren't enough. And it's about how the incentives that are driving how that technology gets rolled out or designed or, you know, adopted lead to, you know, worlds that are not the ones that we want. A joke that I remember making, Aza, when we were at AI for Good was, imagine you go back 15 years and we went to a conference called Social Media for
Starting point is 00:24:55 Good. I could totally imagine that conference. In fact, I think I almost went to some of those conferences back in the day. Because we were all, everyone was so excited about the opportunities that social media presented, me included. I remember hearing Biz Stone, the co-founder of Twitter, on the radio in 2009, talking about someone sending a tweet in Kenya and getting retweeted twice and suddenly everybody in the United States saw it within, you know, 15 seconds.
Starting point is 00:25:15 And it's like, that's amazing. That's so powerful. And who's not intoxicated by that? And those good use cases are still true. The question was, is that enough to get to the good world where technology is net synergistically improving the overall state and health of the society? And the challenge is that it is going to keep providing these good examples,
Starting point is 00:25:37 but the incentives underneath social media were going to drive systemic harm or systemic weakening of society, shortening of attention spans, more division, less of an information commons driven by truth, but more the incentives of clickbait, the outrage economy, so on and so forth. And so the same thing here. Here we are 15 years later. We're at the UN AI for Good conference. It's not about the good things AI can do.
Starting point is 00:26:00 It's about, are we incentivizing AI to systemically roll out in a way that's strengthening societies? That's the question. It's worth pausing there, because it's not like we are anti-AI or anti-technology, right? It's not that we are placing attention on just the bad things AI can do. It's not about us saying, like, let's look at all the catastrophic risks or the existential risk. That's not the vantage point we take. The vantage point we take is, what are the fragilities in our society that we are going to expose with new technology that are going to undermine our ability to have all those incredible benefits?
Starting point is 00:26:44 That is the place we have to point our attention to. We have a responsibility to point our attention to it. And I wish there were more conferences that weren't just AI for good, but AI for making sure that things continue. Just one metaphor to add on top of that, that I've liked using recently and you've mentioned a few times, is this Jenga metaphor. Like, you know, we all want a taller and more amazing building of benefits that AI can get us. But imagine two ways of getting to that building.
Starting point is 00:27:14 One way is we build that taller and taller building by pulling out more and more blocks from the bottom. So we get cool AI art that we love, but by creating deepfakes that undermine people's understanding of what's true and what's real in society. We get new cancer drugs, but by also creating AI that can speak the language of biology and enable all sorts of new biological threats at the same time. So we are not people who are, you know, denying that. We are clearly acknowledging the tower is getting taller and more impressive exponentially faster every year because of the pace of scaling and compute and all the forces we're talking about. But isn't there a different way to build that tower
Starting point is 00:27:53 than to keep pulling out more and more blocks from the bottom? That's the essence of the change that we're trying to make in the world. And this is why, just to tie back to something you said before, half-lighting is so dangerous. Because half-lighting says, I'm only going to look at the blocks I place on the top, but I'm going to ignore that I'm doing it by pulling a block out from the bottom. That's right. Exactly. Okay, so what are some solutions to these problems? What kind of policies can we bring in on a
Starting point is 00:28:21 national level? Yeah, there are efforts underway to work on a sort of more general federal liability framework coming out of product law for AI. And I just wanted to give a call-out to our very talented policy team at CHT, our leaders there, Casey Mock and Camille Carlton. They're often more behind the scenes, but you'll be able to listen to them in one of our upcoming episodes talk about specific AI policy ideas around liability. And another just sort of very common-sense solution, and we can tie this back to the Jenga metaphor, is how much money, how much investment should be going into upgrading our governance. So we can say that at least, you know, like 15 to 25% of every dollar spent, of the trillions of dollars going into making AI more capable, should go into upgrading our ability to govern and steer AI, as well as the defenses for our society. Right now, we are nowhere near that level. Yeah, but who makes the decision about what should be spent on safety?
Starting point is 00:29:30 I mean, is that something that happens on a federal level? Is that something that happens on an international level? Or do we trust the companies to make those decisions for themselves? You can't trust the companies to make decisions for themselves, because then it becomes an arms race for who can hide their costs better and spend the least amount on it, which is exactly what's happening. It's a race to the bottom. As soon as someone says, I'm not going to spend any money on safety
Starting point is 00:29:50 and suddenly I'm going to spend the extra money on GPUs and going faster and having a bigger, more impressive AI model so I can get even more investment money, that's how they win the race. And so it has to be something that's binding all the actors together. We don't have international laws that can make that happen for everyone, but you can at least start nationally and use that to set international norms that globally we should be putting 25% of those budgets into it. So this conversation, like a lot of the conversations we have on the show
Starting point is 00:30:20 can feel a little bit disempowering, because it can be hard to get a sense of progress on these issues, but there have actually been some big wins for the movement, and I'd love to get your guys' thoughts on these, especially on the social media side. Yeah. There's actually a lot of progress being made on some of the other issues that CHT has worked on, including that the Surgeon General in the United States, Vivek Murthy, actually issued a call for a warning label on social media. And while that might seem kind of empty, or, what is that really going to do, if you look back to the history of big tobacco, the Surgeon General's warning was a key part of establishing new social norms that cigarettes and tobacco were dangerous. And I think that we need that set of social norms for social media. Another thing that happened is this group Mothers Against Media Addiction, which we talked about the need for a couple years ago. Julie Scelfo has been leading the charge,
Starting point is 00:31:10 and that has led to in-person protests in front of Meta's campus in New York and other places. And I believe Julie and MAMA were actually present in New York when they passed the ban on infinite scrolling recently in the New York state legislature. There have been 23 state legislatures that have passed social media
Starting point is 00:31:27 reform laws. And the Kids Online Safety Act just passed the United States Senate, which is a landmark achievement. I don't think something has gotten this far in tech regulation in a very long time. And President Biden said he'll sign it if it comes across his desk. And that would be amazing. And this would create a duty of care for minors that use the platform, which would mean that the platforms are required to take reasonable measures to reform design for better outcomes. It doesn't regulate what minors search for on the platform, which deals with the concern
Starting point is 00:31:52 that it would have a chilling effect on free speech, especially on issues affecting LGBTQ minors. So this is, I think, progress to celebrate. Yeah. And I just want to say as well, like, you know, some of the most passionate advocates for these bills have been the parents of children who were injured and in some cases even died because of the use of these platforms. And I know you guys have met some of those parents, and the Center for Humane Technology has had a lot of opportunity to work with some of those parents over the past few years. And we've reached out to a few of them to get their stories on the podcast. So I would love to get your reactions to some of these tapes. This is Kristen Bride, and I'm a social media reform advocate. I came by this role in the worst way possible. In June 2020,
Starting point is 00:32:39 I lost my 16-year-old son Carson to suicide after he was viciously cyberbullied by his high school classmates over Snapchat's anonymous apps. When I learned of this, I reached out to Yolo, one of the anonymous apps, which had policies in place that they would reveal the identities of those who cyberbully and ban them from the app. Yet when I reached out to them on four separate occasions, letting them know what happened to my son, I was ignored all four times. And it was really at this point that I had a decision to make,
Starting point is 00:33:17 do I accept this or do I begin to fight back? I chose to fight back, but I had absolutely no idea where to turn. I had watched The Social Dilemma, and I decided to reach out to the Center for Humane Technology and tell them my story and ask if they could help. They fortunately immediately responded and connected me with resources and people who could help. It was really at this point that I started to tell my story and begin my advocacy journey, which for the last two years has been advocating for the Kids Online Safety Act.
Starting point is 00:34:02 Well, it's always really hard for me to hear Kristen's story. Actually, just as a small aside, I remember the moment her email came into my inbox, because I was completely inundated when The Social Dilemma came out. We had emails and requests coming in just constantly. And I remember reading it, and there were just so many things. We almost weren't able to respond to that message. I'm so glad that, I think it was like one in the morning, I forwarded the email to our mobilization lead, David
Starting point is 00:34:36 Jay, and he helped Kristen get going. And it's just amazing to see the advocacy that she's been able to do since then, with unfortunately so many other parents who have lost their kids because of social media. So this is not some kind of, like, moral signaling. These are real people whose real children have lost their lives because of real issues that we have tried to warn against. So let's just keep making sure that we get this right, so we don't have more parents like Kristen that have to face this. And we should celebrate that we were able to pass the Kids Online Safety and now Privacy Act, which passed by a 91 to 3 margin.
Starting point is 00:35:13 That's huge. And to connect this back to AI, it's that, have we solved any of the misaligned incentives of social media, of first contact with AI? And the answer is, of course, no, we haven't. Which means that as our systems become more powerful, more persuasive, more omnipresent, these kinds of harms are only going to become more common and more prevalent rather than less, which means we really do have to move now. Well, thank you so much, both of you.
Starting point is 00:35:58 I've really enjoyed my time interrogating you from in front of the microphone, and I promise I'll give it back to you for the next episode. Thanks so much, Sasha. Yeah, thank you so much, Sasha. Your Undivided Attention is produced by the Center for Humane Technology, a nonprofit working to catalyze a humane future. Our senior producer is Julia Scott. Josh Lash is our researcher and producer, and our executive producer is Sasha Fegan. Mixing on this episode by Jeff Sudakin, original music by Ryan and Hays Holladay. And a special thanks to the whole Center for Humane Technology team for making this podcast possible. You can find show notes, transcripts, and much more at humanetech.com. And if you like the podcast, we'd be grateful if you could
Starting point is 00:36:41 rate it on Apple Podcasts because it helps other people find the show. And if you made it all the way here, let me give one more thank you to you for giving us your undivided attention.
