Your Undivided Attention - A First Step Toward AI Regulation with Tom Wheeler

Episode Date: November 2, 2023

On Monday, Oct. 30, President Biden released a sweeping executive order that addresses many risks of artificial intelligence. Tom Wheeler, former chairman of the Federal Communications Commission, shares his insights on the order with Tristan and Aza and discusses what's next in the push toward AI regulation.

Clarification: When quoting Thomas Jefferson, Aza incorrectly says "regime" instead of "regimen." The correct quote is: "I am not an advocate for frequent changes in laws and constitutions, but laws and institutions must go hand in hand with the progress of the human mind. And as that becomes more developed, more enlightened, as new discoveries are made, new truths discovered, and manners and opinions change, with the change of circumstances, institutions must advance also to keep pace with the times. We might as well require a man to wear still the coat which fitted him when a boy as civilized society to remain ever under the regimen of their barbarous ancestors."

RECOMMENDED MEDIA
- The AI Executive Order: President Biden's Executive Order on the safe, secure, and trustworthy development and use of AI
- UK AI Safety Summit: The summit brings together international governments, leading AI companies, civil society groups, and experts in research to consider the risks of AI and discuss how they can be mitigated through internationally coordinated action
- aitreaty.org: An open letter calling for an international AI treaty
- Techlash: Who Makes the Rules in the Digital Gilded Age? Praised by Kirkus Reviews as "a rock-solid plan for controlling the tech giants," readers will be energized by Tom Wheeler's vision of digital governance

RECOMMENDED YUA EPISODES
- Inside the First AI Insight Forum in Washington
- Digital Democracy is Within Reach with Audrey Tang
- The AI Dilemma

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Transcript
Starting point is 00:00:00 All right, welcome to Your Undivided Attention. I'm Tristan Harris. And I'm Aza Raskin. And there's a lot of things that are moving with AI. Most of it can feel overwhelming if you open Twitter. But for as many things as are developing with AI, there are a lot of developments in the response to AI. How do we get this right? How do we end up in a stable world?
Starting point is 00:00:21 And we wanted to make sure that you listeners were aware of some of the things that actually are happening that are good news. This week, Vice President Harris and Secretary of Commerce Gina Raimondo are attending the UK AI Summit. We're seeing many governments around the world stepping up and treating this problem seriously. We're seeing that China will be attending the UK AI Summit this week. The G7 group of industrialized countries
Starting point is 00:00:41 released an 11-point code of conduct for AI companies, including rules for advanced foundation models. There have also been private meetings where Western and Chinese academic AI researchers have been meeting to discuss shared frameworks for getting to AI safety. There is also a public letter calling for an AI treaty,
Starting point is 00:00:57 aitreaty.org, that's urging the need for an international agreement on AI safety. And also, we personally have been busy. About two weeks ago, I did an appearance on Bill Maher talking about how we respond to AI risks. And I just flew back from Washington, D.C., where I was with President Biden for the announcement of the long-awaited executive order on AI, which is what this episode is all about. So this 111-page executive order is a sweeping announcement that imposes guardrails on many aspects of the new AI models. One of the remarkable things about this executive order is that it really takes seriously the full scale of impacts AI has on society,
Starting point is 00:01:35 and that's why it's so broad. So it mandates that companies share internal testing data, and very importantly, that companies must notify the government when they are training new frontier foundation models. That is, models that go beyond 10 to the 26 FLOPs, which is a fancy way of saying things at the scale of GPT-5 and beyond, as well as anything that poses serious national security, economic security, or public health threats. The executive order also goes after the intersection of AI and biology by making federal funding for life sciences dependent on using higher standards around gene synthesis and the kinds of things that can be used to do nasty things with AI and biology.
Starting point is 00:02:14 The order also addresses the development of cutting-edge privacy tools, the mitigation of algorithmic bias and discrimination, and the implementation of a pilot National AI Research Resource, or NAIRR, which will fund AI research related to issues like health care and climate change. And finally, the executive order tries to solve the deficit of AI talent in the US government itself. They're launching an AI talent search on AI.gov. I think what's most impressive about this order is just that it reflects the many different areas of society that AI touches, that it's not shying away from the multiple horizons of harm: privacy, bias and discrimination, job automation, AI expertise, biological weapons.
Starting point is 00:02:57 Instead of saying these are way too many issues for the government to tackle, this executive order has bullet points for how it's going to try to signal a first step towards each of these areas. So I actually was in the room as the president was signing the executive order. It was a privilege, really, to be there in this historic moment. And I was chatting with one of the White House lawyers. And he used a phrase that I thought was exactly right. He said, this is the end of the beginning.
Starting point is 00:03:24 I remember, Tristan, you and I back in March, really realizing that we're going to have to have something like an executive order. We did The AI Dilemma, and while, of course, it's not us pushing for an executive order that made it happen, we've now sort of completed this process: in March this was not an issue, and now the executive order has been signed. And so we're going to be discussing that today with Tom Wheeler. Tom Wheeler knows the tech industry from both government and business perspectives. He was a venture capitalist in the cable and telecommunications industry, and he was chairman of the Federal Communications Commission, the FCC, from 2013 to 2017. These days, he is a visiting fellow in governance studies at the Brookings Institution,
Starting point is 00:04:07 where he's been researching 21st century tech regulation for his new book, Techlash: Who Makes the Rules in the Digital Gilded Age? Tom, welcome. Aza, thank you. It's great to be with you guys. And to do the storytelling of one of the first times we visited Washington, D.C., trying to meet the various institutions in D.C. Tom, we were actually at a meeting, I think it was at the United Nations, or it was held by Dick Gephardt and some other groups, to try to figure out how are we going to get our hands wrapped around this? And I'm so curious, given your very, very deep expertise in government and in Washington, what is your overall take on the executive order? Well, let me back up, first of all, to say both of you have been engaged in a missionary effort that has been really important. And I think you ought to feel good about the fact that the President of the United States has stepped up as he did. You know, it's been interesting to watch as Congress talked, the administration moved forward. And they moved forward in an evolutionary process, if you will.
Starting point is 00:05:13 You know, the first thing out of the box was the AI Bill of Rights, which was kind of aspirational. And then came the NIST standards for management and mitigation, which are terrific but without any enforcement. Then came the voluntary commitments of the major AI model companies that, again, were well-intended but so general as to almost be unenforceable. And now what President Biden signed in the executive order
Starting point is 00:05:47 I mean, the 111-page executive order. I was struck by his use of the Defense Production Act and its enforceable, mandatory nature to require certain things. But the problem with executive action is that most of the other things are guidance and are not enforceable. We need enforceable oversight of the digital activities, and absent action by Congress, we're not going to get there because of the fact that we're still operating under industrial-era rules
Starting point is 00:06:34 and industrial-era statutes and industrial-era assumptions. So, bottom line on the executive order: hooray, great leadership throughout this entire process. But we really need an enforceable strategy that only the Congress can create. You know, I often consider AI to be like the mythological Greek monster, the Hydra, the multi-headed monster. And, you know, as I looked at the executive order, I think the president took a swing at every head he could find on the hydra-headed AI monster, and that's terrific.
Starting point is 00:07:17 In terms of just signaling power, it wasn't lost on any of us that the UK AI Summit was happening directly after this announcement. And so there's a signaling value in saying the U.S. is going to do something, or rather that the U.S. is taking it really seriously. And in the sense that we all have to do what we can do, I viewed this as incredibly good. This was sort of the maximum that Biden, or really the executive branch, could do.
Starting point is 00:07:47 And so before we go into, like, how might we fix the limits of our medieval, or maybe industrial revolution-era, institutions, I do think it's important to walk through at least a little bit of what's in this executive order, especially around, like, the use of the Defense Production Act to force government in the loop for frontier models and things like that.
Starting point is 00:08:09 And then let's step back to this larger question of sort of structurally how might we redo governance to match the times. Sure. Back to the question of enforceability and the Defense Production Act and the requirement that certain of the companies, and I guess it is yet undefined, but certain of the companies that are building foundation models need to inform the government as to what training is going on, need to be running some red team activities to try and identify vulnerabilities, and need to share that information because it has national security
Starting point is 00:08:53 and economic security implications; therefore there can be mandatory requirements. Those are all good and those are important steps, and we need to understand what's in the black boxes and have an ability to, based on that understanding, deal with whatever reality is created. I think it falls short of the Food and Drug Administration model, where, for instance, we will run government tests on every new pharmaceutical and determine whether or not it can be released to the market, but it's a move in that direction. And it's a mandatory requirement that the government is at least aware of what is going on.
Starting point is 00:09:36 Now, the interesting thing, and we can get to this later, but the interesting thing is I didn't see in the order specifically who was covered. And one of the fascinating things is, okay, how do we deal with open source models? That is coming definitely from people who we know are not covered by this. Yeah, that's right. My understanding from the order is the Defense Production Act requires that if you're basically building, like, GPT-5 and GPT-6, the big frontier-level systems, the government now needs to know that you're about to do that training run, and they need to know the results of that training run and the red teaming and the testing and the capabilities evaluations as you're doing them, so that they at least get awareness or notification of it. Now, compare that to other proposals. There have been these proposals floating around D.C.
Starting point is 00:10:35 that say, hey, you can't even train a GPT-5 or GPT-6 unless you get a license from the government at some agency. This doesn't go that far, but it sets up a government-in-the-loop process. Because up until the executive order, if I was Anthropic or OpenAI and I wanted to train GPT-5 or GPT-6, I didn't have to tell anybody. I could just do it. So just to kind of break that down for listeners in simple terms, that's what that means. And I think, you know, your point about the Defense Production Act,
Starting point is 00:11:02 I think the fact that we're invoking that with AI speaks to the national security implications that are being recognized. And to Aza's point about signaling value, the fact that the executive order is signaling that there are very deep national security implications that would require us to use something like the Defense Production Act as a tool
Starting point is 00:11:19 to get something to happen is also, I think, valuable. One thing I want to ask you about, Tom, for people who are not really familiar with this: one of the levers that the executive order uses is federal funding conditions. So basically, in a few different places, the government is saying in this executive order that, as a condition of, for example, life sciences funding, to get that funding from the government you have to adopt these new and improved practices.
Starting point is 00:11:44 So, for example, one of the things the executive order covers in the hydra, which I think is a great term, is the many horizons of harm, to use our internal phrase here at CHT. Because AI affects bias, discrimination, jobs, labor, biological weapons, risks of doom scenarios, those sci-fi scenarios, all the way up, you know, to the long term, when something affects all those different horizons of harm at the same time, that's the hydra that you're speaking to. And I just, again, applaud the people who are working extremely hard at this at the White House, Ben Buchanan, Bruce Reed, the whole teams that have been working very, very hard on this. And done in record time, I think, like, you know, six months, unprecedented;
Starting point is 00:12:18 it's the most aggressive action that they could have taken. And one of the areas that they covered in that hydra is actually the intersection of AI and biology, mandating that there needs to be new and improved genetic synthesis screening so that labs have tighter controls on the kinds of materials that one could use with AI to do nasty stuff with biology. Can you speak to any of the history of the power of this lever? Because obviously this only has effect in places that are touched by federal funding. But I think you'll have some background here. There are two principal ways in which the government affects the marketplace. One is through direct regulation, and the other is through its role as, typically, the largest consumer. And that's what this
Starting point is 00:13:03 second part that you've been talking about is doing, and again, it's terribly important. I just have to pause here for a second. I agree with everything you just said about the incredible effort, speed, and dedication that went into doing this. I don't want to come across as somehow being Eeyore and complaining about the significance of this effort. But one of the drawbacks, one of the shortcomings, of relying simply on government procurement or government funding is that it only reaches those who are being procured from or funded. And again, as you guys have been so terrific in your missionary work in pointing out, this is much more expansive than that. So, Aza, yes, use every tool at your disposal, but we also need new tools.
Starting point is 00:14:00 You know, I think another thing that this executive order does is it lets us see when the tech companies are speaking out of both the left side and the right side of their mouth. It sort of, like, forces that hand. Because I remember, you know, Tristan and I were at the Schumer AI Insight Forum. And there is, you know, the moment that I think Schumer really wanted, when he asked: who here thinks the federal government will need to regulate AI and should regulate? And every single CEO, from Sam Altman to Mark Zuckerberg to Satya Nadella, from all the major companies, raised their hand, right? And it led to headlines like: tech industry leaders endorse regulating artificial intelligence at rare summit in Washington. And then right after the executive order comes out, NetChoice, which
Starting point is 00:14:43 is funded by a lot of those same organizations, releases their quote, which is: Biden's new executive order is a backdoor regulatory scheme for the wider economy, which uses AI concerns as an excuse to expand the president's power over the economy. So here we go, right? They're saying, like, yes, please regulate us, just not that one. With one hand they're saying yes, and with the other hand they're saying no. So I would love for you to talk a little bit about that. So, as a recovering regulator, this is kind of like the line in Casablanca, you know, where Claude Rains says there's gambling going on here. This is a classic move in these kinds of environments: yes, I am all for puppies and apple pie
Starting point is 00:15:35 and the flag. And now let's talk about what the specifics of that mean. Oh, golly, we can't go there. This would be terrible. This would be awful. And against innovation. And, you know, then out come all the detailed imaginary horribles. One should not be surprised. You know, one of the things I'm proudest of from my term as chairman of the FCC was net neutrality. I would meet with industry executives or listen to them make their speeches or testify. And it was: we're all for net neutrality, but let's define net neutrality my way, which is that it's only about blocking and throttling. This is why the job of policymaking is so damn difficult.
Starting point is 00:15:57 You know, I'd come home from work when I was chairman and I'd sit there at the dinner table with my wife and I would say, you know, the public interest is fungible. There is nothing clear-cut about saying, this is the public interest. There's this aspect of the public interest and that aspect of the public interest. And the job of the policymaker is to sift through all of that and figure out what is the fungible answer to address the public interest.
Starting point is 00:16:52 You're sort of saying, in the end, you have to choose a process that does good sensemaking and decision-making. It's not going to be something just static in time. And one of the parts of the EO is this personnel-as-policy thing. Right now there's a dearth of knowledge and expertise about AI in the government.
Starting point is 00:17:14 And so there's a huge hiring spree. There's going to be a sort of head or chief of AI, I think, in every federal agency. And I think the White House is creating a White House AI Council, which will coordinate all the federal government's AI activities, staffed from every major federal agency. So I'm curious then, in the frame of the end of the beginning, what happens next?
Starting point is 00:17:39 Is the AI Council the right way to think about it? And, of course, back to your fundamental question of, like, how do we have governance keep up with the increasing pace of AI? First of all, Bruce Reed, who's a deputy chief of staff at the White House and is going to head the AI Council, is a really good guy who understands these kinds of issues. But his job will be to be the maestro, if you will. I think at the end of the day, what we need is a new federal agency that is focused on the realities that digital has brought to a previously industrial economy and society and government, and that there has to be that kind of hands-on authority. At the end of the day, you're going to need somebody with rulemaking authority to come in and say,
Starting point is 00:18:37 okay, these are the decisions that we made, back to the question of what's in the public interest. Here's how we have put those various forces together. But let me pick up on one other thing that I was thinking as you were saying that. I watched Eric Schmidt on Meet the Press a month or six weeks ago, whatever it was, when they were interviewing him about AI, and he said, oh, you know, you've got to let the companies make the rules here because there's nobody in government that can understand this. And I got infuriated, because we used to hear that in the early days of the digital platforms: oh, you know, these digital platforms are so complex,
Starting point is 00:19:15 and if you touch it, you'll break the magic, kind of a thing. And it seems to be the same kind of playbook, which is, well, let's let the companies just go ahead and make the rules because they are really the only ones who understand. And, you know, I just kept saying to myself, well, wait a minute, we split the atom. We sent men to and from the moon safely in a government program. And sure, there is not the kind of in-depth
Starting point is 00:19:42 knowledge widespread, but you know what? I bet that there are very few members of Congress who can explain jet propulsion or Bernoulli's principle that keeps airplanes in the air, but we sure do regulate the manufacture and operation of aircraft. We've sort of been calling this the under-the-hood bias: the deference to the people who know the most about the inner workings of engines, somehow assuming that the people who know the most about the inner workings of engines also know the most about how to set up stoplights and residential zones and where to put freeways. And it's just, there are different skill sets.
Starting point is 00:20:23 But one of the themes that I develop in Techlash is that it is always innovators who make the rules. And that's terrific, because they see the future. And that's how we've had advancements in science, business, the arts. It is always people breaking the rules to expand the barriers that move us forward. And you want that and you want to encourage that. But also the history in the industrial era, and now in the digital era, is that ultimately that rulemaking reaches a point where it infringes on the rights of individuals
Starting point is 00:21:04 and the overall public interest, at which time we the people have to collectively step up and say, we're going to put some guardrails down here. And I think we are at that point. Can you say more about how this played out in the Gilded Age and how we successfully navigated the tension between regulation and innovation? Any examples that we can draw upon as we're trying to deal with what is admittedly a double-exponentially faster-moving technology in AI, but still, you know, even so. Yeah. So when the government finally went to deal with what industrialization was visiting on individuals and the economy, they cloned the management practices of the industrial companies.
Starting point is 00:21:49 And I don't know whether you recall, but Frederick W. Taylor was the guru of management at that point in time, with his theories of scientific management, which were basically: how do we wring out all incentive to deviate from the process, because then we'll get scope and scale economies. And so we developed regulatory agencies that were rigid, sclerotic, and micromanaging, just like corporate management was. But that's not the way digital management works, and we need to be adopting digital management techniques. What are digital management techniques?
Starting point is 00:22:31 Well, it's transparency, it's agility, it's risk-based kinds of assessments. We need to have government that, in its approach to the challenges created by the digital companies, copies a lot of the management practices of those companies, because it is only that way that you get the risk-based agility that is necessary both to keep up and to avoid thwarting innovation. What that makes me think of is, you know, that the agility of the technology has to be matched by the agility of that which is supposed to be governing it. If the agility of what's governing something is less than the agility of the thing you're trying to govern, you're going to lose.
Starting point is 00:23:14 We also, obviously, run this podcast together, and we're sitting here as people who care about a future that works. Can you speak to some of the ways that you see, practically, we could move towards a world in which government is moving at a more agile pace with respect to AI? And you mentioned the regulatory agency as one vehicle to get there, although it is a little bit of a 20th century model. Yes, but heck, our democracy is an 18th century model. We're getting closer. So, okay. And so we need to be focusing not on how do we define tomorrow in terms of what we understood yesterday, but on how do we create an understanding, particularly now,
Starting point is 00:24:05 of the need for flexibility going forward. One of the things that strikes me is our friend Mustafa Suleyman, who's one of the co-founders of DeepMind, and now Inflection, was on this podcast. He's got his own good book out right now. Yeah, exactly, The Coming Wave. And he talks about it as, like, AI isn't just going to evolve, it's going to hyper-evolve, which means that any strategy
Starting point is 00:24:30 that can be discovered will be discovered, anything that can be exploited will be exploited. And so now the job is, like, how do we sort of hyper-evolve our ability to protect? Because any commons that isn't protected will be exploited in the very near-term future. And it's not as if, you know, our founding fathers weren't sort of aware of this. I remember I went for a late-night walk after, like, going to the Schumer Insight Forum, after going to the White House, and we were decompressing. It must have been like 10 or 15 miles of walking.
Starting point is 00:25:04 I ended up at the Jefferson Memorial. But, you know, there's this, you know, in giant letters, there's a Jefferson quote talking exactly about this. It says, quote: I am not an advocate for frequent changes in laws and constitutions, but laws and institutions must go hand in hand with the progress of the human mind. And as that becomes more developed, more enlightened, as new discoveries are made, new truths discovered, and manners and opinions change, with the change of circumstances, institutions must advance also to keep pace with the
Starting point is 00:25:36 times. We might as well require a man to wear still the coat which fitted him when a boy as civilized society to remain ever under the regime of their ancestors. So when I was a young kid in this town, early on, working in Congress, I found myself at 2 a.m. one morning, alone with Mr. Jefferson, in that wonderful memorial, reading those exact words. And it was an important moment in my life. And I'm really excited to hear you say that. And I think here's what's important: in the original Gilded Age, in the industrial era, we came out of Mr. Jefferson's environment of agriculture and artisans and suddenly were confronted with a world
Starting point is 00:26:29 in which people were pulled off the land into big cities to work in soulless factories that produced products that destroyed artisans. And so there's this huge change, never-before-seen circumstances. And there wasn't something you could fall back on and say, hey, you know, well, we can do this plan B over here.
Starting point is 00:26:58 But somehow the democratic process, in a Congress that in many instances was bought and paid for by the special interests, came up with never-before-seen solutions, which we take for granted today, to deal with the never-before-seen challenges. I think we are dealing today with never-before-seen challenges, digital challenges as opposed to industrial challenges, and that we need to find
Starting point is 00:27:33 in ourselves the same kind of commitment to seek out never-before-seen solutions. We need to be as innovative in our oversight as the innovators are in creating the need for that oversight. It's important to note that to clone or match the agility of tech companies in government can also lead to big potential failures, right? If you get it wrong, it's a massive political risk for any political party or any government to take. If we try to interpret the government's inaction in a good-faith way, it's the fear of getting it wrong. I'm not saying that because I believe that's the predominant reason why there isn't action,
Starting point is 00:28:18 by the way. But that is one of the issues, and it reminds me of something that, as I've worked on technology issues for a decade now, I keep running into: Horst Rittel and Melvin Webber's description of wicked problems. Wicked problems are a specific kind of problem in social policy. There's no definitive formulation of the problem. They have no stopping rule, meaning you never know if you have completely solved it, because it's continually evolving. The solutions to wicked problems are not true or false, but better or worse. There's no early test of whether a solution will actually work on a wicked
Starting point is 00:28:53 problem. And every solution to a wicked problem is a one-shot operation, because there's no opportunity to learn by trial and error. And I could go on. But the last one is that the social planner who's attempting the solution to the wicked problem has no right to be wrong, because they'll be liable for the consequences of the actions that they generate. And we are faced with a litany of wicked problems. Climate change is not a definable one-shot. There isn't a one-shot solution, yet we have to do something, and we have to do something big. Similarly with AI: there's a massive evolution of a system that is changing every day. And even while we're here talking about the ideal laws to be written down on paper,
Starting point is 00:29:30 if I open up Twitter right now on my phone, I will see a thousand examples of breakthroughs in AI capabilities that came out in the last 24 hours, per our AI Dilemma talk. And this is why, if I try to steel-man the concerns of those who say we really need to go at a pace that's much slower in the development, because maybe we don't have a solution to this wicked problem: one of the things we can do is not drive and amplify the intractability of this issue a thousand-x in another six-month period because we're scaling to GPT-5 and GPT-6
Starting point is 00:30:04 and introducing a whole other exponential of new surface area, touchpoints of harm that we haven't yet figured out how to address. So let's just throw up our hands and say, oh, God, there's no way you can do anything. You know, what you said at the outset, you talked about this being the end of the beginning, which of course is taken from the great Churchill quote after the Battle of Britain, in which he said, this is not the end, this is not the beginning of the end, but it is the end of the beginning. And do you know, one of the things you can say about Winston Churchill and his leadership to save Western civilization is he made an awful lot of mistakes. But by dealing with
Starting point is 00:30:48 those mistakes in real time, the results ended up saving Western civilization. And I don't think that, you know, we can go around saying, oh, well, we can't get this right, therefore we can't do it. You know, one of the things that I always get frustrated with is people say to me, well, you know, you can't have this new agency because it'll be captured by the people it's supposed to regulate. And my point is, well, there's capture right now, because nothing's happening. Yes, we're imperfect. But that's no reason not to seek out solutions and approaches to things. And if they don't work, discard them and move on.
Starting point is 00:31:34 I mean, you know, I was a venture capitalist before I was chairman of the FCC. I was an entrepreneur before that. I have been in lots of companies that didn't work but moved the ball forward. And yes, I understand from my service in government that there's a horde of people sitting out there ready to pounce on your every mistake. But the answer is, I tried. I'm curious, when you then sort of, like, cast your mind towards hope and, like, pathways towards hope, or at least trailheads. For us, Audrey Tang's work in Taiwan is a really interesting and exciting direction for how you would upgrade governance itself using
Starting point is 00:32:23 AI tools. So, you know, it's a really interesting direction, as AI gets faster, as our computers get faster, to have governance itself also get faster, so that, like, the agility of the tech is matched by the agility of the thing governing it. And so I'm curious, like, you've sat on the FCC. I'm sure you've had, like, desires for it: if I could wave this magic wand, I would move in this direction, something in that direction. Like, what are trailheads that lead us down a direction of hope? Government is where the collective people come together to make decisions that hopefully will be in the overall good.
Starting point is 00:33:03 And we need to be responsive to how technology is changing that which is supposed to be governed, meaning that governance itself needs to embrace the technology. I mean, it's terrific. You know, the EO saying, let's have an AI officer in every agency, is a great idea. I mean, it's going to be challenging, because there is a muscle memory in every agency. You know, there are antibodies that will gather around all of these new initiatives. And again, the EO is not perfect.
Starting point is 00:33:44 It went as far as it could with the powers that exist. Now let's see what we do to change the powers that exist. I don't think we have any choice but to believe that we are at a trailhead that leads to new hope and opportunity. Otherwise, why are we hiking? There was a Spanish poet who, I won't pronounce his name, but he wrote a poem that said something to the effect of: traveler, there are no paths. Paths are made by walking. That's the responsibility that we have. We need to start out. And if it's the wrong path, we can fix that. But let's start walking and making paths. Because if we don't, there won't be any. Completely. I think that's probably the perfect place to end. Your Undivided Attention is produced
Starting point is 00:34:49 a nonprofit working to catalyze a humane future. Our senior producer is Julia Scott. Kirsten McMurray and Sarah McRae are our associate producers. Sasha Fegan is our managing editor. Mixing on this episode by Jeff Sudaken, original music and sound design by Ryan and Hayes Holiday. And a special thanks to the whole Center for Humane Technology team for making this podcast possible.
Starting point is 00:35:10 You can find show notes, transcripts, and much more at HumaneTech.com. And if you made it all the way here, let me give one more thank you to you for giving us your undivided attention.
