Your Undivided Attention - The Crisis That United Humanity—and Why It Matters for AI

Episode Date: September 11, 2025

In 1985, scientists in Antarctica discovered a hole in the ozone layer that posed a catastrophic threat to life on Earth if we didn't do something about it. Then, something amazing happened: humanity rallied together to solve the problem. Just two years later, representatives from nations around the world came together in Montreal, Canada, to sign an agreement to phase out the chemicals causing the ozone hole, an agreement that all 198 UN member states have since ratified. Thousands of diplomats, scientists, and heads of industry worked hand in hand to make a deal to save our planet. Today, the Montreal Protocol represents the greatest achievement in multilateral coordination on a global crisis.

So how did Montreal happen? And what lessons can we learn from this chapter as we navigate the global crisis of uncontrollable AI? This episode sets out to answer those questions with Susan Solomon. Susan was one of the scientists who assessed the ozone hole in the mid-80s, and she watched as the Montreal Protocol came together. In 2007, she shared in the Nobel Peace Prize awarded to the IPCC for its work in combating climate change. Susan's 2024 book, "Solvable: How We Healed the Earth, and How We Can Do It Again," explores the playbook for global coordination that has worked for previous planetary crises.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
"Solvable: How We Healed the Earth, and How We Can Do It Again" by Susan Solomon
The full text of the Montreal Protocol
The full text of the Kigali Amendment

RECOMMENDED YUA EPISODES
Weaponizing Uncertainty: How Tech is Recycling Big Tobacco's Playbook
Forever Chemicals, Forever Consequences: What PFAS Teaches Us About AI
AI Is Moving Fast. We Need Laws that Will Too.
Big Food, Big Tech and Big AI with Michael Moss

Corrections:
Tristan incorrectly stated the number of signatory countries to the protocol as 190. It was actually 198.
Tristan incorrectly stated the host country of the international dialogues on AI safety as Beijing. They were actually in Shanghai.

Transcript
Starting point is 00:00:00 Along comes the ozone hole. I mean, the shock value of this thing was unbelievable. I think that the moment when I, as a scientist, feared that it would be something just, you know, left in the pages of scientific journals, was really, you know, when you're in Antarctica, it's so isolated, it's so vast, it's so untouched. I kind of began to think, well, you know, are they really going to care? Hey, everyone, it's Tristan. Welcome to Your Undivided Attention. And hey, everyone. This is Aza Raskin. Today we're going to be talking about something that is so critical. As we've been covering AI on this podcast for the last two years,
Starting point is 00:00:45 we all know that what's implicit in all of this is that we need to have coordination, global coordination, in order for the incentives of AI to be aligned with a positive future. Currently, we don't have those incentives. We're releasing the most powerful, inscrutable, uncontrollable technology we've ever invented faster than we deployed any other tech in history and under the maximum incentive to cut corners on safety. For that to change, there would need to be global coordination on AI.
Starting point is 00:01:12 And people look back into history and say, well, that's impossible. We're never going to get global coordination on a technology. And many people don't know about the example of the Montreal Protocol, where in the 1980s, humanity did rally, and 198 countries all got together and regulated domestic industries of a chemical technology that was driving the ozone hole, a collective problem in our collective atmosphere that wasn't driven by one company or one country, but by the arms race dynamic between all of them. And this is an episode that's offering kind of a blueprint of how did this unprecedented agreement happen? How did these countries come together? How did the companies
Starting point is 00:01:49 push back? How did public demand and consumer awareness and public education all play a role in enabling this unprecedented agreement with a novel technology? If this episode does one thing, my hope is that it debunks the spell we're under of inevitability, that the Montreal Protocol gives us a positive example for when something that could feel just inevitable wasn't. So our guest today is Susan Solomon. She's an environmental scientist who was part of the Antarctic research team that assessed the ozone hole in the mid-80s, and she ended up actually winning the 2007 Nobel Peace Prize for work in combating climate change. And her book, Solvable: How We Healed the Earth and How We Can Do It Again, came out last year.
Starting point is 00:02:35 There's a quote that Aza and I like to come back to by Margaret Mead, which is, never doubt that a small group of thoughtful, committed citizens can change the world, because indeed it is the only thing that ever has. This is an episode about how a small group of committed people, environmentalists, scientists, policymakers, diplomats, all came together to solve a multipolar trap. So with that, here we go. Susan, thanks so much for coming on Your Undivided Attention.
Starting point is 00:03:06 Thank you for having me. Susan, you were one of the very first scientists on the ground in Antarctica studying the ozone hole. I think that was in the 1980s. So just take me back there. Take the listeners back there. What were you doing? What did you find? What happened next?
Starting point is 00:03:23 Well, I was one of the first people to go down to the Antarctic to try to understand why there was an ozone hole. I didn't discover the ozone hole, but I went down there to make measurements of other things that affect ozone to try to put the pieces together, you know, to try to solve the puzzle. Why was this mysterious hole opening up over the Antarctic? We never expected, we scientists, never expected to see a hole in Antarctica. We really didn't. We thought it would be global, and all of a sudden it was there. We thought it would take 100 years to appear also. We thought it was going to take a long time to get really big changes in ozone, and all of a sudden we had these 50 percent losses of ozone over the Antarctic. I mean, the shock value of
Starting point is 00:04:08 this thing was unbelievable. Could you just explain briefly, just to translate, because we had this abstract idea of a hole in the Earth's atmosphere and the ozone, but why does that matter? So what was the thing that was at stake? Like, what was the worst-case scenario? What would happen if we didn't deal with this problem, from a human or biological life perspective? Yeah, that's a great question. Good news is if you had to have a hole in the ozone layer anywhere on the planet, Antarctica is a pretty good place to have it because there's not a lot of biological life there. What the ozone layer does is to protect life on the planet's surface from ultraviolet light.
Starting point is 00:04:44 And I think, as we all know, if you get too much ultraviolet light, you get skin cancer, or cataracts. I've had cataracts, lots of people my age have had cataracts. It's not a pleasant experience. And it's related to often having too much UV. So it's dangerous for us. And if it's that dangerous for us, you can imagine it must also be dangerous for animals and plants and crops and everything else that lives on the planet. If we didn't have an ozone layer, life itself would be impossible on Earth. So the evolution of life had everything to do with the evolution of an ozone layer. When we say we have a hole, it's really about a 50% reduction
Starting point is 00:05:25 in the amount of ozone. It looks like a hole, like, you know, the hole in a donut because it's so confined to the Antarctic. And when you look at the satellite data for total ozone, you know, you see this missing piece. And that's how it got the name hole. In this case, there are certain chemicals that certain products were putting into the environment that were driving this ozone hole. And in this case, a simple example, it would be aerosolized deodorant. So you spray your deodorant or spray your hairspray to have the cool 1960s-70s beehive hairstyle. Could you just say a little bit about what these chemicals were and what companies were behind them that were caught in this trap that was inadvertently creating this collective problem? Yeah, really interesting.
Starting point is 00:06:12 At the time that the ozone hole opened up, 75% of the global use of chlorofluorocarbon chemicals, which are the chemicals that cause the hole, was for literally spray cans. Like you said, beehive hairdos, paints, oven cleaner, you know, you name it, anything that came out of a can was aerosolized or made to come out as little particles via the addition of a little bit of chlorofluorocarbon into the can. It's really great at making propellants. So that's why it was used that way. So the remaining 25% was for things like refrigeration and air conditioning.
Starting point is 00:06:54 But the fact that most of the use was for something that was in consumers' control, in my opinion, was a big factor in why we're actually able to deal with it. Because people, particularly, I have to be honest and tell you Americans, because it did not happen in Europe, interestingly enough. But in the United States, Americans turned away from spray cans. I mean, I can remember the campaign, get on the stick to save the ozone layer. We're talking about stick deodorants. Literally like the roll-on sticks of deodorants versus the spray deodorants. Yeah, yeah.
Starting point is 00:07:31 And, you know, nowadays you can barely find a spray deodorant in the United States because the stick became so popular. It was a simple thing to do. And my observation is that there's a lot of people, even people who might not be completely sure about an environmental problem, if they're offered something to do that isn't too hard, they'll often do it.
Starting point is 00:07:54 And that actually happened before the ozone hole was even discovered. That happened in the 1970s. So imagine this. In 1974, two chemists from the University of California at Irvine came up with the idea that if we kept using these spray cans, in 100 years we might see about a 5% change in the ozone layer. So it was a small change, it was far in the future, kind of like the way people used to think about climate change, not happening for a long time, but anyway, the fact that people were
Starting point is 00:08:31 discussing that science actually led to enough popular demand to drop these chemicals that the sales in the U.S. plummeted. So then let's relate this for listeners to the problems that we often identify of this sort of multipolar trap. There's, if I don't do it, I lose to the other guy or company or country that will. In this case, there are certain chemicals that certain products were putting into the environment that were driving this ozone hole. That's right.
Starting point is 00:08:59 There were at that time maybe a dozen chemical companies worldwide who were manufacturing this stuff. They were in the United States, in Europe, and in Japan, not too many anywhere else. Russia was also making some. So there was a limited number of corporations making the stuff. But they were powerful companies. These were big chemical companies with a lot of clout.
Starting point is 00:09:28 Right. I just want to say, I'm citing freely from your book, spray cans at that time were a tremendous moneymaker, with sales growing from about 5 million cans a year in the United States in the late 1940s to 500 million cans a year by the end of the 1950s. In 1973, just before the scientific story hit, 2.9 billion cans were sold in the United States. So the spray can business in the U.S. stood at a value of about $3 billion, while refrigeration and air conditioning,
Starting point is 00:09:57 which were other uses for these chemicals, which were smaller fractions of the total use, topped out at about $5.5 billion. This is important because what we're going to get into in this conversation is when there are economic interests at play. Because it's one thing when there's this sort of small thing that's causing a problem of externalities or pollution, and it only makes up 1% of a company's revenue, and there's an easy alternative. So we'll just sort of swap it out. And then I think we're going to get to, you know, the sort of later stages of how do you coordinate this at an international level? Because the first phase didn't require international coordination.
Starting point is 00:10:27 The first phase was just consumers stopping buying these spray cans. That's right. Of course, it's important to remember that spray cans were being sold everywhere, and it was only in the U.S. that people turned away from them. So it was the U.S. companies that really had the problem. So the action of consumers on national governments begins to put the pressure on industry as a whole because the American companies now are concerned about the fact that they're losing market share, the Europeans are gaining market share because they're selling to places like India
Starting point is 00:11:07 and the developing countries. And so there is beginning to be pressure on the American government to do something, to start actually working on an agreement. And then along comes the ozone hole. So just when people are turning away from this product and American companies are starting to get a little bit unhappy, I would have to say a lot unhappy, the ozone hole comes along. And suddenly you've got this massive driver that tells people, hey, this problem is apparently so much worse than we thought it was going to be. And then the question became, was it only in the Antarctic that this was happening?
Starting point is 00:11:50 Would it happen at other places too? And as the next few years rolled by, we began to see changes in total ozone over other latitudes that were also much bigger than we expected. So Antarctica really was, if you pardon the pun, the tip of the iceberg. And the reason that I think people reacted and that so much was able to be done is what I like to call the three P's of environmental problem solving. So the first P is the issue was deeply personal. You know, there's nothing more personal than cancer. Cancer is so personal that people fear it at a subliminal level.
Starting point is 00:12:35 And it really doesn't give you a whole lot of good feeling to have people say, well, it's only a small percent. You know, maybe you'll be okay. That's not a very successful philosophy. So it's personal. That's the first P. The second P: this problem was very perceptible. It was easy to show people, hey, look, this ozone is falling off the cliff in the Antarctic.
Starting point is 00:12:57 Look at how it's dropped. It fell by 50%. We have measurements. So it was personal and perceptible, and we had practical solutions. That's the third P. The practical solutions were: use other chemicals. Now, that doesn't get you away from the world of chemicals. And in some people's eyes, that's, you know, not a good thing. But nevertheless, chemicals were found that could substitute. And in some cases, it was actually surprisingly simple. For example,
Starting point is 00:13:29 chlorofluorocarbons were being used as solvents, actually, in fairly large amounts. So they'd be used to clean electronics chips, for example. Testing revealed that you could actually do pretty well with lemon juice and water. It depends. If you're trying to make a supercomputer, probably not. But if you're trying to make something pretty simple, probably, yes. So I just want to review these three things that you're sort of saying. So we're marking this for listeners.
Starting point is 00:13:56 So the first was it's personal. So the cost of this problem, the ozone hole, goes from an abstract thing, where I see a satellite image and have no idea how that relates to me, to personal skin cancer, some kind of real material threat to me. And the second was perceptible. So actually making it salient, not just visible, but visceral. Like, you know, what are ways in which it's actually affecting real people, real life, real plankton, real situations? And then the last thing you're saying is practical. There's actually practical alternatives of things that we can do.
Starting point is 00:14:25 But I want to just sort of keep steering us towards the thing you were speaking about, that maybe these U.S. companies started losing all this revenue, but you're saying all these European manufacturers continue to sell these chemicals, which starts to create this pressure. The only way to solve this problem is globally, because the Earth's atmosphere isn't just over the United States, it's over the entire world. So we need all these countries to do something. And that brings us to the Montreal Protocol. So this is the unprecedented thing where 190 countries are coming together to say we have to do something about this, relatively speaking, very abstract and kind of scientific and far-off problem. So if we can do it for the ozone, we should be able to do something for these other technologies. So take us into the Montreal Protocol. How did this actually happen? And just to say, just to make the problem perhaps even harder, I could imagine heading into this. The Europeans are saying, well, the Americans are losing market share. So you're coming to negotiate in bad faith. You want us to fall behind in some way because you aren't winning. So I'm very curious then how we get into the Montreal Protocol from that, like, position?
Starting point is 00:15:25 Well, the U.S. is always an influential player, and negotiations are always best done when they are slow and steady. And that's something that people have a lot of difficulty understanding nowadays, I think. We want an instant solution. And what happened with the Montreal Protocol was anything but instant, really. When you look back on it, the original protocol just said, okay, we're going to freeze production at current rates. So you'd still be allowed to produce, but you just won't be allowed to produce more than you did the year before.
Starting point is 00:16:04 That wasn't really that onerous for these companies because there was already a switch going on. Even among European consumers, people were interested in making the switch. But the ozone hole was scary to the whole world. So I think that that made them realize, hey, you know, there could be litigation going on after the fact. You know, we could be found guilty of damaging all kinds of people's health and have to pay for that. And it really was nothing more than a hope in the beginning that we'll be able to actually cut production at some time in the future in this protocol. And by the way, that's the same kind of thing that started the United Nations Framework Convention on Climate Change. It was just an agreement to start talking and have the hope to reduce production at some future date.
Starting point is 00:17:00 So the process was very incremental. The protocol was signed in 1987 and an initial set of some 25 countries came on board. The developing countries came on board because, after a little bit of a bumpy start, they were promised that if things like refrigerators cost more for them when they would need them, the protocol would pay for the incremental cost of the additional expense. And that was, I think, a really good philosophy that the protocol took. And the developing countries got what they needed to be assured that they weren't going to be exploited in this protocol. And that's a very important thing in every international agreement. So everybody got a little bit of something.
Starting point is 00:17:47 That's how international negotiations work. I just would love to go a little bit more into how you get, you know, these features you spoke about: legally binding, gradual phase-out of ozone-depleting substances, binding timetables, trade restrictions, financial and technical assistance to developing countries. I don't want to make this overly technical, but I do want listeners to have a sense that something can feel impossible, and then you can actually make an unprecedented agreement that, as far as I understand, is the only UN treaty with universal ratification, with all 198 UN member states as parties. So you're sitting there as a scientist in Antarctica, you see this thing, and it must have at some point felt kind of just hopeless. Like, because there you are.
Starting point is 00:18:29 You see this problem. You know it's going to be a big deal. But then there's this abstract idea of, well, hundreds of countries would need to sign on to something, and you're just this single person behind maybe a keyboard and a computer and the ability to write a letter to a congressman. I think one of the things that comes up in our work all the time is this agency gap, the feeling of individual
Starting point is 00:18:47 scientists, individual humans, that you're scaling that up through some public communication, but the feeling is you're still just an individual, and this problem is much bigger than the kind of grappling hooks of this handful of individuals that are even aware of the problem. So, can you make the impossible possible? I think that the moment when I, as a scientist, feared that it would be something just, you know, left in the pages of scientific journals, was really, you know, when you're in Antarctica, it's so isolated, it's so vast, it's so untouched. I kind of began to think, well, you know, are they really going to care? In the end, when it comes down to it, is this going to be the driver? But the very next year, I went down to Thule, Greenland, in a ground-based campaign working on the Arctic. So over the next couple of years, we began to see the same kinds of things in the Arctic that we saw in the Antarctic. Now all of a sudden, you're in the Arctic.
Starting point is 00:19:48 There's trees, there's people, there's countries. So the same chemistry is operating in the Arctic, and we're also seeing depletion at mid-latitudes. And another thing that was really important for the Montreal Protocol was its advisory structure. Advisory is too strong a word. Information gathering is really what it was. They created groups of scientists who would provide them with assessment reports, and they were required to do the assessment reports internationally.
Starting point is 00:20:24 So there was a science assessment report, a technology report, which looked at, you know, what kinds of things could you put in a refrigerator instead of a chlorofluorocarbon? How well would it work? And then there was an impacts and economics group. So they looked at things like, you know, how bad would the skin cancer get, and how much would it cost to do something else, that kind of stuff. So three different science groups, all providing really detailed reports on the state of the understanding. And that was the information that the policymakers had to begin to plan.
Starting point is 00:21:00 Just to link this to AI briefly, because I want to make sure listeners are marking, this might feel like just a pure chemical, you know, environmental sort of problem or treaty. How does this relate to AI or social media, which are the topics that we traditionally cover? And the sort of parallel here is you have hundreds of countries that have to come together where the countries oversee the regulation of companies, domestic companies, that are producing technology. And there needs to be an engagement at the country level and at the private companies within those countries that have to agree to standards. And you need information sharing. And you're talking about this sort of expert
Starting point is 00:21:34 advisory group that's doing the information sharing on technical assessments. You know, there was just, several weeks ago in Beijing, the international dialogue on AI safety between U.S. and China researchers, in which those channels need to get open. And you can imagine something like a Montreal Protocol for AI formed around, hey, we've got all these countries that are building AI. Inside of those countries, there's private companies that are doing it. They're doing it really fast. They're not really sharing information with each other. There's tons of risks. We don't even have the same vocabulary for those risks. I just want to draw the parallels so people are tracking why what you're talking about and the way that this was assembled tees up
Starting point is 00:22:11 our solutions for AI. I'm not an AI expert, but I think that's where there may be a big difference. In the case of ozone, people really did get interested. People were very interested in the environment in the 70s and 80s. They still are, I would argue. But I think the key thing is that people have to be interested in the problem, and then they have to demand a change in some way. And whether it happens through an NGO that institutes a court case or popular demand because people stop using spray cans and they use something else, people have to express a strong desire for change.
Starting point is 00:22:54 And the same thing was true in civil rights. I mean, nothing happened until, in that case, people took to the streets. And sometimes that's probably what it takes, peacefully. But it's a very, very important thing to do to express popular demand. And that was the predecessor to making something happen. I don't know whether anything would have happened on ozone if the American public hadn't switched away from spray cans. Every time I go back and think about it, I think that is what opened that bottleneck and said, hey, some powerful companies are going to lose market share. But then they realized, hey, you know, actually we could gain perhaps if we do something else.
Starting point is 00:23:37 They didn't have the something else in their back pocket. That's sort of an urban myth out there, that they were already ready to go. But I think that the fact that other options existed made it something they could think about. In the case of AI, the companies aren't going to be thinking that way. They're thinking about the enormous amounts of money that they see available to be made. And that probably means, I'm sad to say, that there has to be more public engagement. That's right.
Starting point is 00:24:09 But getting more public engagement around AI is going to be tough because most people don't even understand what it is. Right. I want to track one thing you said earlier, which is you talked about at first the ozone hole being discovered above Antarctica where there's not really a lot of human life, human activity, so there's not a lot of human consequence. And the difference between that versus discovering it up in the Arctic and the northern hemisphere where there is a lot of human life
Starting point is 00:24:33 and there is a lot of human activity and there are a lot of things at stake for us. There's sort of a skin-in-the-game recognition. So that's one aspect, because with AI, you could say, for example, AI is a really big issue. There's many different things that it touches, you know, job loss and livelihoods, the risk of superintelligence that we don't know how to control, the risk of AI companions driving everybody crazy and replacing real relationships and driving up loneliness and driving up sort of psychosis. And, you know, AI touches a lot of different issues, just like the ozone hole touches a lot
Starting point is 00:25:00 of different issues. But the sort of metaphorical switch from the Antarctic, which is like abstract, not really affecting humans, to the Arctic of affecting daily human life might be something like AI companions. AI companions do affect everybody, your friends and family are talking to AI all the time and there's more people going crazy. So you can imagine a public movement based on the touch point that actually touches people in their regular lives. So that's like one thing that I wanted to mark. Something else you mentioned earlier, regulation and companies taking that threat seriously, sorry to say this, but I think being more cynical in the 2020s, I think most companies view litigation as just
Starting point is 00:25:36 the cost of doing business, meaning it's not actually a threat that they're going to take seriously to alter their behavior. We know that the companies all know they're going to get sued for copyright, but they know that there's no other way to train the AI models other than to scoop up and extract all this information. And if we don't do it, China will. So under the national security argument, we have to keep racing and scooping up all this data. The litigation is just going to be a really big price tag we pay some time down the line after we're making trillions of dollars from automating all of human labor. Yeah, it just becomes a tiny
Starting point is 00:26:13 little fee that they pay as a small consequence for owning the total market. That's probably how they've always behaved, actually, as long as the fees were tiny. But it sounded like in the ozone hole example that they were concerned about litigation as one of the motivating factors, or at least in a boardroom it could sway them to say, maybe we do need to accelerate that research. Or did it take the Montreal Protocol being in effect to actually start supercharging the research and development efforts at the companies?
Starting point is 00:26:43 The Montreal Protocol did get, again, the engineers talking to each other about what can we actually do. It's maybe that practical side that comes in very strongly here. And actually, perhaps like the AI companies that you're talking about, these companies were technical companies. They loved making new stuff. So as long as the product that they were making wasn't really that much of a big moneymaker for them, they weren't going to try that hard to hold on to it. They were just, you know, we'll move on and make something else that's actually good for business. So chemical companies aren't saints either, and sometimes they hang on to certain chemicals probably way longer than
Starting point is 00:27:18 they should, even in the face of litigation. You can think of some of that going on right now. Microplastics and PFAS and other things. But in this case, there just wasn't a big positive for them in staying in the game. There's something you said that's really interesting, which is that the companies didn't really have alternatives ready to go. They weren't really researching them. And it took outside pressure, it took critics sort of being the true optimists to say, actually, there is another path possible.
Starting point is 00:27:51 And this is something we hear from the AI companies all the time, that there isn't really another viable path for the way to make AI. And what I think you're pointing at is that there is no incentive for them to search for anything but the default path until there's some kind of pressure placed on them. Yeah, that's exactly how they're going to behave. You know, I sometimes like to say companies are like cats. They don't like it when you move the furniture around. So, you know, they are not going to do anything that they don't have to do. And more than that, there's a billion-dollar marketing budget every single day telling us about all the benefits
Starting point is 00:28:27 of AI, which, by the way, I use AI every day and I enjoy those benefits every day. And I'm not denying that set of benefits. It's about what are the ways in which we're currently releasing AI that will get the benefits without risks that no one would want to take that are the other side of that trade. I just want to add one flavor because what's happening in the AI world is the belief that all of this is inevitable. There is no other way. And imagine if everybody involved in the ozone hole problem, all the companies and all the governments, all collectively held in their mind's eye, this is inevitable. There's no. nothing that we can do. By believing it's inevitable, they are ensuring, it's a self-fulfilling
Starting point is 00:29:06 prophecy. They are casting a spell. That means they will never even seek another path. And so one of the things that I said in a recent TED talk was that we have to be committed to another path, even if we don't know what it is yet, because the chemical companies didn't necessarily perfectly know exactly what all the other alternatives were going to be. Martin Luther King didn't say, I have a dream and this is the exact path how we're going to get there. He said, I have a dream. This is about snapping out of that dilution and recognizing that we won't stamp out of that delusion if we collectively start by believing that it is inevitable. That's a great point. But doesn't it also reflect your values? I mean, you have values around
Starting point is 00:29:42 your fear of AI. Other people may have values that say, you know, anything more technical is good. You know, so, hey, bring it on. So the problem is when you can't impose your values on other people, society makes decisions based on its collective values. And at the time of the Montreal Protocol, the collective values were very much pro-environment. We can make change. We can improve our environment because we had already done it with several other previous examples, like getting rid of DDT, getting our smog under control. And in the case of AI, I'm afraid the problem is that the whole thing is just too new. It's a brand new wild frontier that people just don't, you know, I mean, what are they going to compare it to?
Starting point is 00:30:30 I guess maybe the advent of the internet or deeper than that, the birth of conscious and intelligent species that could have the ability to make tools because AI has the ability to automate toolmaking and scientific and all technological development, which gets you sort of a new infinity curve to mine of all potential benefit, which is, by the way, why the conversation about the risk is so confusing. because AI represents both a positive infinity of new scientific and technology development you couldn't even imagine. At the same time, that it also represents a negative infinity of new kind of ways that things could go wrong that you could never even imagine.
Starting point is 00:31:06 I just don't think AI is... We don't fear it enough. It's not personal enough to enough people. Using your framework, it's like, you need to clarify the personal and the perceptible and then the practical. Aza, go ahead. Yeah.
Starting point is 00:31:18 Oh, it's also interesting. Just how abstract, even, cancer is because it didn't work to get people to stop smoking by telling them that it would give them cancer. You had to take a different route, which was to tell them that they were being manipulated and deceived. What you had to do was put television commercials on that showed how smoking was very glamorous. Do you remember those? That had a huge effect on people. And what it did was show people who had become terribly disfigured because they had facial cancers and they had lost their voices, and we're talking with those horrible boxes.
Starting point is 00:31:56 So, yeah, smoking is very glamorous and look what could happen to you. That's what scared kids and got people off of smoking. So it had to make it personal. I think we should move on to the sort of ongoing evolution of this story, which is that the Montreal Protocol was not the one only. It was actually the framework or skeleton for an agreement that created this sort of dial. space for the terms and conditions of dealing with the ozone hole problem as it continued to evolve, especially as we started getting different replacements to the original CFC chemicals that were driving
Starting point is 00:32:32 the problems. So can you speak to, in 2016, when countries came together again in Kigali, Rwanda, what happened then and how did this sort of story continue to evolve? Yeah. So what happened in Rwanda was that the governments had begun to realize that the things that we replaced the chlorofluorical carbons with. They were great. They weren't damaging the ozone layer. That was a tremendous advance. But like the chloroflorocarbons, they were also greenhouse gases, both chloroferocarbons and the things that initially replaced them, which are hydrofluorocarbons. Or HFCs, yeah. So you have that chlorofloricarbon CFCs. You replace them with HFCs. And for example, in your auto air conditioner that you probably have today, you've got an HFC and not a CFC. But now we're replacing
Starting point is 00:33:24 those with what are called HFOs, so hydrofluor olefins, which have very short lifetimes, and they don't do anything to ozone, and they're also not greenhouse gases. They don't spread around the globe fast enough to be significant greenhouse gases. So when people began to realize that they could make a switch there too. The companies were very much in favor of it. The NGOs were very much in favor of it because it would be a fantastic contribution to shaving a little piece off a global warming. About a third of a degree by 2050. So basically that one, honestly, I don't even think there was popular demand to speak of because people really didn't even know. But what there was was a clear practical way forward. And the countries that had been, you know, very proud, of course, of the
Starting point is 00:34:18 Montreal Protocol and all of its success realized that they could do even more, that they could do something really good for the environment by making this switch to. And the industry wanted it. And by the way, President Obama was very engaged in that whole process and spent a lot of time with his counterpart in China, which was Z discussing, you know, this change. This is really important that even the U.S. and China, which were technically, well, earlier in their cycle, but starting into enter into geopolitical competition and rivalry, we're actually able to collaborate on something when it came to existential safety, because that's an important precedent people need to believe for AI.
Starting point is 00:34:54 Go on. And, of course, I think it is true. You have to have a leader or leaders who can see why that would be a good thing and push for it as the two of them did. One of the things that also helped us when it came to Kigali was the fact that linked to food safety and that you could do things under that agreement that would allow you to properly refrigerate foods in developing countries instead of having stuff spoil. So it's important not only for the health of people, but it's also important for just getting
Starting point is 00:35:31 food to people. You know, I mean, you'd much rather have it not die in the truck. You'd have it the truck be refrigerated and it actually get to where it needs to go and get into people's mouths. So the fact that doing proper refrigeration would actually move the needle on food and health, I think really, really helped Kigali become more practical. But even when it passed in Rwanda, I said to myself, the United States with the fractured politics that we have nowadays, our Senate is never going to ratify this change. So the way an international agreement works, is that the executive branch negotiates it, and then the Senate has to actually ratify it.
Starting point is 00:36:18 It's actually written into the Constitution, that it has to go that way. And not only that, it's the toughest thing the Senate does. You need a two-thirds majority to ratify an international agreement. It was in October of 2022 that the Senate ratified the Kigali Amendment. And what ended up happening was that the NGOs, the people like the Environmental Defense Fund, and the Sierra Club and people that really have a lot of interest in the environment wanted it to happen and the industry wanted it to happen. And that moved to Capitol Hill in incredible ways that they normally don't move.
Starting point is 00:36:55 I mean, most people, I didn't even know this was even happening. I'm sure most of the listeners of this podcast probably weren't tracking this. And it's a good example of how if you start with a basic skeleton framework with the Montreal protocol, the institutional trust and the relationships that are built in that skeleton framework allow future work to be done without all of the public demand being constantly motivated by waves of cultural outrage that have to be maintained or cultivated in narrative warfare tactics like no one wants to do anyway.
Starting point is 00:37:21 So I just think there's a really optimistic story here that technology and solutions evolve over time and this was a framework that allowed those evolving solutions to continue to get better because it turned out in that case that some of our quote-unquote solutions were also part of a different problem and we have to keep evolving our solutions. And if we imagine we did a moment,
Starting point is 00:37:39 Montreal Protocol for social media and the engagement-based business models so that you don't have these companies that are competing for attention. And so maybe all these newsfeed companies are competing for a different metric. Like when you talk about politics, it sorts for unlikely consensus. So you're seeing multiple perspectives synthesized. And it's a competition for which companies are synthesizing multiple perspectives. But the point is you have an agreement that lets you continually evolve and adapt at the speed of the technology's problems. And that's what I think is so important about Montreal and Kigali. Yeah, that's a great point. I've been living with the Montreal protocol for almost my entire life, so I didn't really think of it that way, but you're absolutely
Starting point is 00:38:19 right. I think that once you have that sort of framework, you can do so much with it. And the problem is there are too many problems where we don't have anything. Right. We have no international agreement going whatsoever. And at that point, you're sort of stuck until you get something going. And again, that's why they have to start slowly. Even in the case of the Montreal Protocol, the initial protocol was just not that ambitious, you know, freeze the production at your current rates. Don't, you know, we're not telling you you have to phase it out. But you know what? Within three years, the companies were ready to phase out. They were ready to drop by 50%, which is incredible because the technology had come so far. And because their engineers were talking to
Starting point is 00:39:02 each other. That's the other thing that you don't have if you don't have that framework. It just makes me think what are all the missing sort of first step, quote-unquote, treaties that just provide this very basic skeleton framework on different issues. And there just could be hundreds of these little skeleton frameworks that allow for this ongoing management and discussion channel. I think that's another place that the impossible gets broken, which is to say you don't have to solve the whole thing at once. It can become a stepwise process. I think it may be worth just getting a couple other examples from your book because it's not just the Montreal Protocol that we've collaborated on. It's also been urban smog and leaded gasoline and the toxic pesticide DDT.
Starting point is 00:39:48 And I'm curious if you can pull forward any lessons from there that we can generalize. Some of those are purely domestic issues. Like urban smog is something that we deal with domestically. although obviously there's a lot of technologies that are developed in one country or another that end up being spread all around the world because things like cars are international products and they were a big factor in smog. So the development of the catalytic converter, for example, maybe I'll start with that one, was a huge, huge benefit for pollution worldwide.
Starting point is 00:40:23 And it happened in this country, it happened first here, because it was forced to happen. The Clean Air Act of 1970, which occurred literally by popular demand, people were really sick of the amount of smog that was going on in Los Angeles and New York. Back in those days, those cities looked like New Delhi and Beijing look today. Actually, Beijing's cleaned up a lot, but New Delhi is still pretty bad. Karachi is another very, very polluted city. So people were sick of it.
Starting point is 00:40:55 They were demonstrating. They were demanding change. The time was ripe for a Clean Air Act, which passed the United States Senate by a unanimous vote in 1970. And it didn't actually say you have to develop catalytic converters. It said, you have to get emissions down by,
Starting point is 00:41:15 I don't remember the exact number. 90%. I think we should say to clearly put the recording because it's such an inspiring example. You wrote in your book that the Clean Air Act of 1970 explicitly required the auto industry to reduce emissions of smogged. producing carbon monoxide, organic molecules, and nitrogen oxide in new cars by a startling
Starting point is 00:41:32 90%, which was clearly a breathtaking number that would require an engineering breakthrough. It's the whole, I have a dream, and we don't know what the pathway to that dream is yet, but somehow you've got to get it down by 90%. And then I think you have an antidote in your book that when the leaders of the auto industry came to Washington to complain that they were being asked to be impossible, and when one of the committee staff stepped out of the meeting for a moment, a general motors engineer followed him and as often happened in that era, key information was communicated in the men's bathroom.
Starting point is 00:42:00 The engineer confided, look, we can build whatever you want us to build. If you tell us to build a clean car, we'll build a clean car. That is an amazing thing. They said, hey, 90%, you're going to have to get there. And the auto industry said, we can't do it. And yet, they did it. They didn't get there quite as fast as the original Clean Air Act required,
Starting point is 00:42:21 but with a little bit of delay they got there. So it's again the same story. Industry will always say, oh, no, no, no, we can't possibly change. It's impossible. And then they actually can't change. You have to keep coming back to you. You have to have a vision that there's something practical out there, at least some idea of how you're going to do it. That really helps because right now when you look at climate change, I think that the people
Starting point is 00:42:45 who are saying, oh, we can't do anything are the ones who are saying it's still impractical. And the facts don't bear that. out. The facts show that we have gotten so good at making renewables, and they are cheap in the long run. Yes, they require an initial upfront investment, but when you look at how they perform compared to their counterparts, in the end, you really do save a lot of money. And I will also point to the amount of tremendous progress we've actually made on climate change. Even though it is so incredibly embedded in our economy, we have made unbelievable progress already. I mean, We would have been facing a four-degree future by 2100.
Starting point is 00:43:27 We've turned that curve into probably a three-degree curve. We'd like to make it two degrees or one-and-a-half. I think we can get there. We've made tremendous progress on the cost of renewable energy. It's much, much cheaper than it used to be. There's no real reason why we can't move forward on it except the deliberate avoidance of the problem by certain governments and certain companies. That's going to always happen in any problem.
Starting point is 00:43:56 So I think that we shouldn't be too U.S. centric. China is moving ahead. Europe is moving ahead. I think that they'll continue whether or not we're in the Paris Agreement or not. People don't want to build coal-fired power plants anymore. They're too expensive. You know, this conversation is really making me reflect
Starting point is 00:44:18 that maybe there is a place for non-naive hope. for coordination on AI. And where my mind goes is you sort of described America moving first, driven by American consumers, but there is a reason for America to start working towards a kind of global coordination. And I think there may be a blind spot for at least, you know, Tristan and I, and many people in the AI community, because we sit inside of the U.S.,
Starting point is 00:44:46 that we assume the U.S. has to move first. But if we all got crystal clear on the more powerful AI becomes, the more deceptive, the more blackmail, the more uncontrollable it becomes as it gets better and better at beating humans at all games of strategy and achieving goals, if we're all crystal clear that what was being built was uncontrollable, well, then I think China probably could move to just ban that kind of technology within China and say no open source. model can do that. No company can work on AI above a certain set of red lines. And now if China moves first, I can actually see a world in which it opens up position for the rest of the world to coordinate and say, well, okay, there actually isn't to risk of being out-competed if we all agree to a certain red line. I don't know how believable that is, but it feels more believable than any other path that I see. Yeah, really interesting.
Starting point is 00:45:49 I think what we have to always look at is how hard is this thing to unwind? You know, if we're making a mistake, how persistent is the problem that we created? Lead is forever. Chlorifloricarbons last a long time. Carbon dioxide from burning fossil fuels last a long time.
Starting point is 00:46:08 If we make a mistake with something but we can unwind it quickly, that's a different kettle of fish than when you make a mistake with something that will last forever. Excellent point. And I think just imagining into a world of future governance, we need to make the distinction between externalities that are irreversible and or whether there's at least a massive asymmetry where it's much easier to create the problem than to reverse the problem. And wherever that's true, we should act with much more precautionary principle, much more care, much more upfront risk analysis than just proceeding blindly. Well said. Susan, thank you so much for coming on. This has been a fantastic, one of my favorite conversations on this podcast, and I'm super grateful for all the work that you've done,
Starting point is 00:46:50 believing in the possible, even when it looked impossible and inevitable, and hopefully leaving listeners with some hope and also an increased appreciation for the complexity and nuance of how we navigate really difficult terrain. Well, it was my tremendous pleasure. You guys are fantastic interviewers, and I enjoyed being on the program. Thank you for asking me. What I find really interesting about the story, although we didn't get to talk about it with Susan, is there isn't really one hero of solving the ozone hole crisis. It turns out it's a number of scientists, all working in coordination.
Starting point is 00:47:26 And that actually creates, I think, a whole in our history. Because there isn't just one person that becomes the hero, it isn't really part of our collective memory of, oh, yeah, this is how we solve it. It just sort of feel like, well, then it got solved and it's messy. And I think this is really important because, and you pointed to this out in your TED talk, is when you hear about all the problems, your mind says, well, either I have to figure out a way of solving it all, and it's on my shoulders, or it can't really be a problem, and I'm just going to ignore it. And this is pointing at the real solutions to big problems are messy, and it's okay to not
Starting point is 00:48:08 know what the whole solution is. it's going back into the, well, we each have to do what we can in the spheres that we have agency, try to increase those fears of agency, and understand that it's going to be hundreds or thousands or tens of thousands of people taking small actions that in aggregate make the difference. Yeah. And I said in the TED Talk is your role is not to solve the whole problem, but to be part of humanity's collective immune system against the kind of blindness and naivete of the current path. Because we have the evidence of AI and controllability.
Starting point is 00:48:39 And we have examples of Montreal Protocol. And if people are repeating and sharing these examples, we have a chance for something different to happen. The other thing that, like, I find really hopeful in Susan's story is when she goes down to the Arctic and she says, oh, actually, it's going to affect the world where people are. And what I find interesting about that is we often hear in the AI world, no one's going to do anything until we get a train wreck until we start getting the first really big catastrophes and even sometimes I fall into this belief if you're right, you know, that's just human nature.
Starting point is 00:49:19 We have to wait until we get hurt before we change your behavior. But here's an example of where that wasn't the case where there is enough of an ability for humanity to see into the future that it caused the world to coordinate. I really want to stop and highlight that because actually it's not that human nature
Starting point is 00:49:38 is that we have to get hurt before we change. There are examples when we can see the hurt coming and change. We can act with foresight, totally. And it's such an important aspect. I mean, it's not like the ozone hole. If you just like breathe through your nose, look through your eyes, hear through your ears, none of your sensory organs pick up the fact that there's an ozone hole problem looming.
Starting point is 00:49:58 So back to E.O. Wilson's problem statement of we have Paleolithic brains, medieval institutions. God, like, tech, well, guess what? This problem hit us in our, like, you know, evolutionary blind spot. And we dealt with it anyway. We actually use the fact that we have scientific tools. We have communication tools. We had the public that was aware of it. I also want to highlight something that wasn't really part of the story
Starting point is 00:50:17 that these two scientists, Sherwood, Roland and Muriel Molina, who actually discovered this in 1974, the publication of their first warning, there were six months of relative silence. And they actually went to the American Chemical Society, and they called for a boycott by citizens of hairspray and deodorant. And these scientists are part of this sort of activism that's part of the story too. And I think about people like Daniel Kokatello
Starting point is 00:50:40 or the Open AI whistleblowers and William Saunders who we had on our podcast who are some of those scientists who are saying, hey, there's some relative silence here. We have to be a little bit louder. We have to go create AI 2027. We have to go make people be aware of these issues. And everyone has a role in this story
Starting point is 00:50:54 of kind of creating and moving towards the kind of Montreal Protocol for AI. Your undivided attention is produced by the Center for Humane Technology. a non-profit working to catalyze a humane future. Our senior producer is Julia Scott. Josh Lash is our researcher and producer, and our executive producer is Sasha Fegan,
Starting point is 00:51:17 mixing on this episode by Jeff Sudaken, original music by Ryan and Hayes Holiday, and a special thanks to the whole Center for Humane Technology team for making this podcast possible. You can find show notes, transcripts, and so much more at HumaneTech.com. And if you like the podcast, we would be grateful if you could rate it on Apple Podcast.
Starting point is 00:51:36 It helps others find the show. And if you made it all the way here, thank you for your undivided attention.
