Tangle - What happened between the Pentagon and Anthropic?

Episode Date: March 3, 2026

On Friday, President Donald Trump ordered federal agencies to immediately cease their use of the artificial intelligence (AI) company Anthropic's technology after it declined to allow the Pentagon unrestricted access to its models. Later in the day, Defense Secretary Hegseth directed the Pentagon to designate Anthropic as a "supply chain risk to national security," which he said would bar contractors, suppliers, or partners doing business with the U.S. military from conducting commercial activity with Anthropic. Shortly after, OpenAI announced it had agreed to let the Pentagon use its AI models within classified systems.

Ad-free podcasts are here! To listen to this podcast ad-free, and to enjoy our subscriber-only premium content, go to ReadTangle.com to sign up!

A special report: We know that all eyes are on Iran right now, but we also wanted to highlight a special report from West Texas we broke over the weekend. Last week, the Trump administration waived a series of environmental laws and regulations to begin awarding contracts for border wall construction in Texas's Big Bend region, where Tangle Executive Editor Isaac Saul owns land. Isaac wrote about what he's hearing from residents, the reason he's opposed to a wall, and why he hopes Trump abandons the plan. This is a special report, available for free to all readers. You can read it here.

You can read today's podcast here, our "Under the Radar" story here, and today's "Have a nice day" story here.

You can subscribe to Tangle by clicking here or drop something in our tip jar by clicking here.

Take the survey: What do you think of the military partnering with AI services? Let us know.

Our Executive Editor and Founder is Isaac Saul. Our Executive Producer is Jon Lall. This podcast was written by Isaac Saul and audio edited and mixed by Dewey Thomas. Music for the podcast was produced by Diet 75. Our newsletter is edited by Managing Editor Ari Weitzman, Senior Editor Will Kaback, Lindsey Knuth, Bailey Saul, and Audrey Moorehead.

Hosted on Acast. See acast.com/privacy for more information.

Transcript
Starting point is 00:00:00 From executive producer Isaac Saul, this is Tangle. Good morning, good afternoon, and good evening, and welcome to the Tangle podcast, a place where we get views from across the political spectrum, some independent thinking, and a little bit of my take. I'm your host, Isaac Saul, and on today's episode, we're going to be talking about the Anthropic-Pentagon dispute. This is a huge story. We're going to break down exactly what's happening and some of the big AI questions that are now before us. We're going to share some views from the left and the right. And today, a unique section on what some tech writers are saying about this.
Starting point is 00:00:50 And then we'll jump into my take. Before we jump in, I want to do two things. First of all, I want to thank you all who tuned in for our live stream last night. We did a full hour plus on the war in Iran. And then I answered a bunch of questions live on the stream, talked shop with some readers and listeners and viewers who tuned into the channel. It was super fun. I had a blast.
Starting point is 00:01:16 I want to do more of it. I was candidly a bit impressed with myself. I mean, I basically just riffed solo for a whole hour answering questions. And at the end, I felt energized. I was like, I didn't want to go to bed. I was up all night. I was reading the comments. I was just trying to dig deeper into some of the stuff people asked about that I didn't
Starting point is 00:01:36 get a chance to respond to. It was super fun. That video is up on our YouTube channel now. The live stream's over, but the video is up. So if you go look up Tangle News on YouTube, you can watch it. I think it'd be a really fun watch right now. And if you enjoy it, please subscribe to the YouTube channel. And also, as I mentioned, on the stream last night, become a Tangle member.
Starting point is 00:01:57 You can go to readtangle.com forward slash membership to do that. I had six people working, six people on my team working last night from 8 to 10 p.m. on the stream, which is only possible because you guys fund this project with your membership. So huge shout out and thank you to John Wall, Lindsay Canuth, Russell Neustrom, Audrey Moorhead, Will Kayback, Magdalena Bukova, Candida Hall, all on the stream last night in the background helping make it happen. Really appreciate you guys. Second thing is a heads up that I know all eyes are on Iran right now, but I did publish this piece about a very, very, very important story that's happening on the border where I own some property in West Texas
Starting point is 00:02:51 right now. I encourage you to go to our website, retangle.com, and check it out. The story is headlined on our website. Trump is pushing a border wall in my backyard. The my there is me, Isaac, and that's a story about something that's happening right next to the property that I own in West Texas. The link to that will be on our episode description. I encourage you to go check it out. All right, with that, I'm going to hand it over to John for today's main story, and I'll be back for my take. Thanks, Isaac, and welcome everybody. Here are your quick hits for today. First up, U.S. Central Command announced the U.S. death toll in the United States's conflict with Iran rose to six after two service members' bodies were recovered from an Iranian strike on a military position in Kuwait.
Starting point is 00:03:46 Separately, Secretary of State Marco Rubio said that the hardest hits are yet to come from the U.S. military. Rubio also said that after Israel shared their plans to attack Iran, the administration deemed the threat of a counterattack on U.S. assets as imminent. Number two, voters will take part in primary elections on Tuesday in Arkansas, Texas, and North Carolina. The Democrat and Republican Senate primaries are the most closely watched races in these states. States. Number three, the Supreme Court voted six to three to temporarily block a California law that limits when schools can notify parents about a student's transgender identity while a challenge to the policy continues. Separately, the court temporarily blocked an effort to redraw the lines of the New York 11th Congressional District represented by Nicole Maliatakis overruling a lower court judge
Starting point is 00:04:33 who found that the current map violates the state constitution. The court's three liberal justice dissented. Number four, the House Oversight Committee released video of its separate closed-door hearings with former President Bill Clinton and former Secretary of State Hillary Clinton last week. At number five, French President Emmanuel Macron said that his country will grow its nuclear arsenal and work more closely with European allies on deterrence measures, a departure from France's longstanding nuclear weapons policy. Anthropic is rejecting an ultimatum from the Pentagon to lift the company's AI safeguards or risk being blacklisted. Defense Secretary Pete Haysetheth wants to use
Starting point is 00:05:16 Anthropics AI model, clawed for, quote, all lawful purposes. Anthropic, though, has been very clear. It's a hard no when it comes to using its AI technology for autonomous weapons and mass surveillance of Americans. On Friday, President Donald Trump ordered federal agencies to immediately cease their use of the artificial intelligence company Anthropics technology after it declined to allow the Pentagon unrestricted access to its models. Later in the day, Defense Secretary Pete Hegsef directed the Pentagon to designate Anthropic
Starting point is 00:05:45 as a supply chain risk to national security, which he said would bar contractors, suppliers, or partners doing business with the U.S. military from conducting commercial activity with Anthropic. Shortly after, OpenAI announced that it had agreed to let the Pentagon use its AI models within classified systems. For context, the Department of Defense has been using Anthropics technology, most notably its AI assistant Claude, since last year, including within classified systems. However, the Defense Department has recently sought increased access to the technology.
Starting point is 00:06:15 which Anthropic resisted. Specifically, the company refused to allow its models to be used for mass surveillance of U.S. citizens or the development of autonomous weapons. Defense Secretary Hegsef gave the company until Friday to reach an agreement, then quickly progressed a deal with open AI once the deadline passed. In a post on Truth Social, President Trump called Anthropic a radical left, woke company that made a disastrous mistake trying to strongarm the Department of War. While federal agencies were directed to stop using Anthropics technology immediately, Trump said agencies that currently rely on its models will have six months to phase out their use. On Friday, Anthropics sued to challenge the supply chain risk designation. In a statement, Anthropic called the move legally unsound, adding that it did not believe
Starting point is 00:07:00 that today's frontier AI models are reliable enough to be used in fully autonomous weapons, and mass domestic surveillance of Americans constitutes a violation of fundamental rights. In its own statement, OpenAI said its deal with the Pentagon, included similar restrictions on use that Anthropic had requested, mass surveillance, autonomous weapons, and high-stakes automated decisions, such as a social credit system. Full details on those restrictions have not yet been made public, but an excerpt of the contract shared by OpenAI
Starting point is 00:07:28 says the Pentagon may use its systems for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. Today, we'll break down the Anthropic Pentagon conflict with views from the left, right, and technology writers, and then Isaac's take. We'll be right back after this quick break. All right.
Starting point is 00:08:05 First up, let's start with what the left is saying. The left back's anthropic saying it was right to restrict the Trump administration's access to a powerful technology. Others add that further conflicts over government AI will come. In the New York Times, Maureen Dowd wrote, Real Dustbits hijack artificial intelligence. Hegsteth should be focused on our nerve-wracking duel with Iran. Instead, he spent the week at war with Dario Amade. the thoughtful chief executive of Anthropic,
Starting point is 00:08:31 and one of the few in Silicon Valley advocating for humanity, doubt said. President Trump and Hegseth already have a healthy disregard for democracy. Trump is trying to take over our elections because he's rightly worried that his party is going to get shellacked in November, and now he's escalating his push to remove the few pathetic guardrails that exist on AI. On Tuesday, Hegset summoned Amo Day to the Pentagon to demand that he let the Pentagon do whatever it wanted, as long as it was lawful.
Starting point is 00:08:57 This is poppycock, of course, because Trump and Hegsteth have contempt for the law when it gets in the way of their whims, power grabs, and revenge plots, doubt said. The self-styled Secretary of War offered Amadei a double ultimatum. He would invoke the Defense Production Act to compel Anthropic to give the Pentagon unrestricted use of its model, or he would designate it a supply chain risk, a national security threat, which were put the company's government contracts and possibly the company itself in jeopardy. Anthropic had a choice, be extorted, or be blacklisted. In MS now, Hayes Brown argued Anthropic was right not to trust Pete Hegsef. The Pentagon's inability to accept constraints isn't necessarily unique to Trump or Hegsef. The defense community is pushed back hard against anything seen as a constraint on potential actions under GOP and Democratic administrations alike, Brown said.
Starting point is 00:09:46 History is littered with examples from America's refusal to join the International Criminal Court to rejecting the International Treaty banning landmines. Whether the Pentagon intends to use Claude in the ways Anthropic rejects is in many ways secondary to the idea that the military would accept guardrails on its actions from outside the chain of command. Given the competition's a moral mindset toward products that have the potential to cause harm, the bar for assessing Anthropics' self-regulation is so low as to truly be in hell, Brown wrote. As with almost every major technological leap,
Starting point is 00:10:18 America's laws are deeply lagging when it comes to policing the rapid growth of AI. Without real safeguards and regulations, there's little stopping the Pentagon from blacklisting a company that dare draws the line at having Americans' data siphoned up rather than foreigners or at having a robot being the one pulling the trigger. All right, that is different what the left is saying, which brings us to what the right is saying. Many on the right see the administration's demands as legitimate, but question the utility of battling anthropic. Others say the prospect of autonomous AI weapons creates difficult tradeoffs. In the Wall Street Journal, Alicia Finley wrote about Trump's Road to War.
Starting point is 00:11:00 with Anthropic. Many Silicon Valley readers lean left, but most fiercely opposed government efforts to regulate AI. Mr. Amade broke with his competitors by endorsing a Biden executive order that imposed federal oversight on AI models. Anthropic also lobbied for regulation of AI by states such as New York and California and opposed the administration's efforts to preempt state laws, Finley said. All this made Mr. Amade persona non grata in the Trump administration. David Sacks, a Silicon Valley venture capitalist who serves as Mr. Trump's AI czar, accused Mr. Amade and other AI doomers of stoking public fear to encourage government control of AI. Trump officials are right that the sort of AI regulation Mr. Amade has advocated
Starting point is 00:11:41 could amount to unilateral disarmament by slowing innovation, Finley wrote. The Chinese, Russians, and other U.S. adversaries won't handcuff themselves with regulation, yet banning government agencies and contractors from using anthropic tools, which Pentagon officials favor for their dexterity will handcuff the U.S. and could damage national security. In the New York Times, Ross Douthith asked if AI is a weapon, who should control it? It's easy to get skynet vibes from the Pentagon's demands. As Matt Iglesias noted, all the weird and complicated scenarios spun out by AI Dumers
Starting point is 00:12:14 get a lot simpler if our government decides to start building autonomous killer robots, Dothet said. That's not what the Pentagon says it intends to do. It's professed concern is that it can't embeds, bed a crucial technology into the national security architecture and then give a private company a general ethical veto over its use, even if those ethics seem reasonable on paper. Doing so outsources decisions that are supposed to be made by an elected president and his appointees. Over the long run, though, one can imagine Pentagon officials offering some advantages over
Starting point is 00:12:44 the typical AI mogul when it comes to safety and control, death it wrote. First, they tend to be more focused on concrete strategic objectives than on machine gods and the singularity. Second, they are constrained from certain gambols by bureaucratic caution and the chain of command. Third, they answer to the public through elections and civil control in a way that CEOs do not. All right, this is if we're with the left of the writer saying, which brings us to what technology writers are saying. Some technology writers worry that OpenAI's Pentagon deal will bring about the dangers Anthropics sought to avoid. Others question Anthropics claim to veto power over how the government uses its technology.
Starting point is 00:13:23 In MIT technology review, James O'Donnell said OpenAI's compromise with the Pentagon is what Anthropic feared. OpenAI took great pains to say that it had not caved to allow the Pentagon to do whatever it wanted with its technology. You could read this to say that OpenAI won both the contract and the moral high ground, but reading between the lines and the legalese makes something else clear. Anthropic pursued a moral approach that won it many supporters but failed, while OpenAI pursued a pragmatic and legal approach that is ultimately softer on. the Pentagon, O'Donnell wrote. It's not yet clear if Open AI can build in the safety precautions it promises as the military rushes out a politicized AI strategy during strikes on Iran, or if the deal will be seen as good enough by employees who wanted the company to take a harder line. The whole reason Anthropic earned so many supporters in its fight, including some of OpenAI's
Starting point is 00:14:13 own employees, is that they don't believe these rules are good enough to prevent the creation of AI-enabled autonomous weapons or mass surveillance, and an assumption that federal agencies won't break the law is little assurance to anyone who remembers that the surveillance practices exposed by Edward Snowden had been deemed legal by internal agencies and were ruled unlawful only after drawn out battles, of Donald said. On this front, we've essentially ended up back where we started, allowing the Pentagon to use its AI for any lawful use. In strategy, Ben Thompson explored anthropic and alignment. What is the standard by which it should be decided what is allowed and not allowed, if not laws, which are passed by an elected Congress.
Starting point is 00:14:53 Anthropics position is that Amade, who I am using as a stand-in for Anthropics management and its board, ought to decide what its models are used for, despite the fact that Amade is not elected and not accountable to the public, Thompson, route. And on that second point, who decides when and in what way American military capabilities are used? That is also the responsibility of the Department of War, which ultimately answers to the president who is also elected. I do have tremendous discomfort about AI's surveillance capabilities in particular. There are a lot of safeguards we thought we had that were actually mostly due to the friction entailed in overcoming them. AI, even more than the computers and the internet, is a friction solvent, and I completely understand why Anthropics pushback on this specific point resonates broadly, Thompson said.
Starting point is 00:15:36 The way to address this new reality, however, is with new laws and through strengthening accountable oversight, cheering or even demanding that an unelected executive decide how and where such powerful capabilities can be used is the road to an even more despotic future. All right, let's head over to Isaac for his take. All right, that is it for the left and the right and some writers from the technology sector are saying, which brings us to my take. The government has a right to deny contracts to any private company. Similarly, any private company has a right to refuse the terms that the government. offers them. What the government shouldn't do, and maybe legally can't do, is use its power to punish any private company that turns down a government contract. That's all easy to say, but a lot of
Starting point is 00:16:34 questions complicate that simplicity. Who do we want holding the power of next generation artificial intelligence, the government or tech companies? How do we hold them each accountable? What happens if the CEO of a private company has more control over critical defense technologies than our top military generals. Would that diffusion of power be necessarily bad? These questions are not easy to answer. But the basic story here is still simple. Anthropic is a private company with technology the government wants and already relies on. The government tried to work with Anthropic, but Anthropic didn't like the terms of the deal. And when it walked away, the federal government tried to punish it in the harshest way possible. That's not good governance. When the dust settled,
Starting point is 00:17:19 open AI got a deal with restrictions that appeared to be similar to what Anthropic wanted, and some reporting suggested personalities spike the deal, not philosophies. But the deals are not the same, far from it. And when we look at the details of why the Anthropic deal fell through, the federal government comes out looking even worse. On Thursday, Anthropic CEO Dario Amadeh, released his statement, warning that AI can undermine rather than defend democratic values, which is unsettling to say after talking with the government,
Starting point is 00:17:52 he added that he believed in the use of AI for lawful foreign intelligence and counterintelligence missions, but not for mass domestic surveillance. Based on the deal, Amadei says the administration was asking for, the government could have used anthropics tools to buy up detailed records of Americans' movements, web browsing, and associations from public sources without obtaining a warrant,
Starting point is 00:18:15 feed it into artificial intelligence, and create a comprehensive picture of any person's life, automatically and at massive scale. Amadei said that his company supported the development of partially autonomous weapons like the ones used in Ukraine, but that frontier AI systems are simply not reliable enough to power fully autonomous weapons. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer, he said. We can all read between the lines here. The government wants to be able to use these tools for mass surveillance of Americans and to create fully autonomous weapons,
Starting point is 00:18:53 but Amadei left no room for any doubt, saying explicitly that the Department of Defense, quote, will only contract with AI companies who accede to any lawful use and remove safeguards in the cases mentioned above. Not surprising. Definitely alarming. Then, when Anthropic walked away, the Department of Defense responded by punishing them, sending a message to the entire industry that businesses could face devastating consequences if they don't cooperate with the government. In his newsletter's stratetry under what technology writers are saying, Ben Thompson asked, and I'm paraphrasing here,
Starting point is 00:19:30 why Amade gets to decide what a permissible use for its AI would be and not Congress or the U.S. government. Again, the dynamics here are complicated, but I don't think Thompson's premise is quite right. The government can regulate the tools that, already exists through the legislative process, but company leaders have a say in how their company's products are used. That's not unique to Anthropic. It applies to every company and the entire private sector. If Hamaday believes his technology is so advanced, the laws have been
Starting point is 00:20:00 caught up to properly regulate it, then he is within his rights to contractually limit who uses those tools, and for what? Thompson also asked, who decides when, and in what way American military capabilities are used. Anthropics' position is that an unaccountable Amade can unilaterally restrict what its models are used for, end quote. Again, I don't think this is quite right. American military capabilities do not include unfettered access to anthropics tools. That's the point. Amade was negotiating terms of a deal to give the American military more capability by granting more access to his tools under certain conditions. If the government doesn't like those conditions, which it doesn't, then it doesn't get the tools. That's business. This actually happens all the time.
Starting point is 00:20:50 As tech researcher Dean Ball wrote, every transaction of technology between a private firm and the military involves a contract. Indeed, the companies that do this are called defense contractors for a reason, and these contracts routinely contain operational use restrictions. System X cannot be used in counties. Why, a common restriction with the telecommunications technology such as Elon Musk Starling. Technological limitations like this fighter jet is only certified for uses in X conditions, and use of it outside those conditions is a breach of warranty, and intellectual property restrictions. The contractor owns and maybe purpose and resell the know-how and IP associated with X weapon system developed with public funds. What's more, the government already agreed to these terms with Anthropic under the Biden administration. The Trump administration agreed to before changing its mind.
Starting point is 00:21:40 In the chaos of this deal blowing up, OpenAI, Sam Altman swooped in to take advantage of the opportunity. Altman apparently has no problem working within the confines that Amadei rejected, though he's trying desperately to convince the public that the terms he agreed to somehow protect the values Amade espoused. They don't. OpenAI has claimed it has various safeguards in its contract with the Department of Defense to protect Americans, like a clause, and I'm going to put my own emphasis here, that, quote, the AI system shall not be used for unconstrained monitoring of U.S. person's private information as consistent with these authorities.
Starting point is 00:22:19 But the Defense Intelligence Agency, the DIA, a spy agency inside the DOD, purchases public, bulk smartphone data. That data is already legal to purchase. So Altman's assurances that his models will only be used within the bounds of the law is inadequate. The deal still leaves pathways open for domestic surveillance. Also, if Open AI can't be used for unconstrained monitoring, does any constraint of any kind mean the AI systems can now be used to monitor Americans? And what if the government is monitoring a non-citizen,
Starting point is 00:22:52 who just so happens to be speaking with a U.S. citizen, a loophole that has been known and discussed for years? Pressed by journalists to point to where exactly in their contract the worst outcomes here might be prohibited, open AI has not been able or has declined to do so. Altman, in trying to clean up, shared his own extremely careful language about OpenAI and the Department of Defense never, quote, intentionally, end quote, being used for domestic surveillance and limiting the, quote, deliberate, end quote, tracking or monitoring of U.S. citizens.
Starting point is 00:23:23 OpenAI now says it has introduced new language that could be helpful. For the avoidance of doubt, the department understands this limitation to prohibit deliberate tracking, surveillance or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information. But that language has not been formally agreed to, and you might forgive me if I don't trust Sam Altman's PR speak at this moment. To say this was a reputation-defining move by Altman is underselling it. I'm not naive enough to think Amade and Anthropic are motivated purely by altruism and democratic freedoms, but I personally reacted to the news by canceling my chat GPT subscription,
Starting point is 00:24:04 attempting to delete all my data from open AI servers, and moving my limited AI-based work over to Claude. The choice has less to do with protesting Altman's public stance than with protecting my own privacy, and I'm apparently not the only one. I have to add that the whole of U.S. artificial intelligence policy is now comically backward. Anthropic, a U.S.-based company that apparently has some semblance
Starting point is 00:24:28 of a moral compass in the democratic spirit, whose technology was literally used in a military operation over the weekend, has been labeled a supply chain risk. Yet the government's relationship with the Chinese AI providers like DeepSeek remains unchanged, despite credible reports that they're being used against our interests. We are selling American chips to foreign companies to compete with us in the great AI race,
Starting point is 00:24:51 but we're threatening, punishing, and limiting companies operating in our domestic AI sector. as chat GPT might say, it's not just worrisome, it's nonsensical. And if this technology is as important world-changing and norm-shifting as people say, it's going to be a very big problem. We'll be right back after this quick break. All right, that is it for my take, which brings us to your questions answer. This one's from Anne in Douglasville, Pennsylvania.
Starting point is 00:25:31 Hey, Ann, shout out Douglasville, PA. During the state of the union, President Trump called out a girl in the audience who supposedly was being transitioned into a boy during school against her wishes without the parents' knowledge or approval. I've read several different reviews detailing the inaccuracies and truths of the State of the Union speech, and no one has mentioned this callout. I find it hard to believe any school would try to transition a student from one gender to the other against the student's wishes and without parents' involvement. What is the background and reality of this situation? Okay, so the story with Sage Blair that Trump described was a bit different than your summary.
Starting point is 00:26:08 When she was in high school, Blair reportedly told a friend that she wanted to go by a male name and male pronouns. A guidance counselor who overheard the conversation talked to Blair, and the school agreed to socially transition her. When the school notified her adoptive mother, Michelle Blair, her paternal grandmother, she disapproved. But the school allegedly continued referring to Blair with a male name and pronouns and allowing her to use the boy's bathroom while withholding this decision from Blair's grandmother. Michelle Blair also claims that the school failed to notify her of bullying and harassment her daughter experience from male peers, which she alleges resulted in Blair running away from home and being sex trafficked. Trump's description of this story was consistent with Michelle Blair's claims in a lawsuit against the school, though the president not specifying that Blair was being socially
Starting point is 00:26:58 transitioned could imply the school had supported a medical transition. attorneys for the school argued that it had no duty to inform Michelle Blair of Sage's transition and that the school's actions could not be factually linked to Sage's leaving home. Other cases of schools moving forward with social transitions without informing parents have been recorded, but the frequency of these events is difficult to gauge. In most of these cases, the schools argue that the child consented to the transition and wanted it hidden from parents. All right, that is it for your questions answered.
Starting point is 00:27:29 I'm going to send it back to John for the rest of the podcast. and I'll see you guys tomorrow. Have a good one. Peace. Thanks, Isaac. Here's your under the radar story for today, folks. On February 26th, a Kansas law went into effect mandating that identity documents, including birth certificates and driver's licenses, list sex assigned at birth rather than sex matching gender identity. The law also reversed all prior changes to sex identification. Because the law had no grace period, more than 1,000 transgender Kansasans official documents, including drive, drivers licenses were invalidated as soon as the law went into effect.
Starting point is 00:28:10 According to Harper-Seldon, an attorney for the American Civil Liberties Union, the Kansas law is the first to retroactively invalidate residents legally obtained driver's licenses. USA Today has this story and there's a link in today's episode description. And last for not least, our have a nice day story. Omar Yagi grew up in a refugee community in Jordan without access to running water or electricity. In 2025, he won the Nobel Prize in Chemistry for his work on machines that can pull drinking water from the desert air. Now, Yaki has invented a machine that runs on thermal energy. And according to his company, Ataco, can generate up to 1,000 liters of clean water every day, even in arid climates.
Starting point is 00:28:54 For islands like Kariako, which was devastated by Hurricane Barrel in 2024, and which currently imports water from Grenada during the dry season, the invention shows promise. The technology's ability to function off-grid using only ambient energy is particularly compelling for our context, Kariako government official Devin Baker said. The Guardian has this story, and there's a link in today's episode description. All right, everybody, that is it for today's episode. As always, if you'd like to support our work, please go to reetangle.com,
Starting point is 00:29:23 where you can sign up for a newsletter membership, podcast membership, or bundled membership that gets you a discount on both. And if you didn't get to check out our live stream last night, you can head over to our YouTube channel and check out the full event. While you're there, please be sure to become a subscriber to our YouTube channel. It helps us out a great deal in the algorithm and helps us make sure that you get our latest videos and exclusives. We'll be right back here tomorrow. For Isaac and the rest of the crew, this is John Law signing off.
Starting point is 00:29:48 Have a great day, y'all. Peace. Our executive editor and founder is me. Isaac Saul and our executive producer is John Law. Today's episode was edited and engineered by Dewey Thomas. Our editorial staff is led by managing editor Ari Weitzman with senior editor Will K-back and associate editors Audrey Moorhead, Lindsay Canuth, and Bailey Saw. Music for the podcast was produced by Diet 75.
Starting point is 00:30:12 To learn more about Tangle and to sign up for a membership, please visit our website at reetangle.com.
