Prof G Markets - Violent Backlash: What the Sam Altman Attacks Signal for AI

Episode Date: April 15, 2026

Following the violent attacks on Sam Altman, Bradley Tusk and Brian Merchant join Ed Elson to break down why AI is facing growing resistance. They explore the future of AI regulation, how politicians should position themselves in the debate, and whether rising tensions could lead to more disruption or violence. Bradley Tusk is the founder and CEO of Tusk Ventures. Brian Merchant is a tech journalist and author of the Blood in the Machine Substack. Subscribe to the Prof G Markets YouTube channel. Check out Bradley's latest piece on AI regulation. Check out our latest Prof G Markets newsletter. Follow Prof G Markets on Instagram. Follow Ed on Instagram, X and Substack. Follow Scott on Instagram. Send us your questions or comments by emailing Markets@profgmedia.com. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Transcript
Starting point is 00:00:00 Support for the show comes from Virgin Atlantic. A lot of people dread flying. I've been on some bad flights and I've been on some truly miserable flights. But it's a whole different story when an airline shows up for you and the crew treats you like a VIP. Virgin Atlantic offers warm one-on-one service from the moment you step on board. Its upper class cabin features four-course meals, fully lie-flat seats, and drinks delivered on demand. Make the journey as exceptional as the destination when you fly Virgin Atlantic. Go to virginatlantic.com to learn more.
Starting point is 00:00:33 This episode is brought to you by Telus Online Security. Oh, tax season is the worst. You mean hack season? Sorry, what? Yeah, cybercriminals love tax forms. But I've got Telus Online Security. It helps protect against identity theft and financial fraud, so I can stress less during tax season or any season.
Starting point is 00:00:53 Plans start at just $12 a month. Learn more at telus.com slash online security. No one can prevent all cybercrime or identity theft. Conditions apply. Today's number: 12,000. That's how many comments Trump received on an image he posted of himself depicted as Jesus before it was taken down. According to the president, he meant to be portrayed as a doctor. That was right after he called the Pope, quote, weak and terrible. And in other news, panic on Wall Street as traders prepare for the rapture.
Starting point is 00:01:27 Money markets matter. If money is evil, then that building is hell. Welcome to Prof G Markets. I'm Ed Elson. It is April 15th. Let's check in on yesterday's market vitals. The major indices rose as President Trump signaled he was open to talks with Iran. That pushed the Nasdaq to its 10th straight gain, while the S&P 500 came close to a record high. Oil prices fell below $100 a barrel. Bank stocks were mixed after earnings: Wells Fargo fell 5%, while Citigroup rose nearly 3%. We'll be breaking down all those bank earnings on tomorrow's episode. And finally, Amazon shares rose nearly 4%
Starting point is 00:02:09 after the company acquired Starlink's biggest competitor, Globalstar. Okay, what else is happening? AI has a popularity problem, and it is now getting violent. Last week in Indiana, a local councilman's home was shot at 13 times after he voiced support for a data center project in his town. A sign reading, quote, no data centers, was left at his door. Then Sam Altman, the OpenAI CEO, was targeted twice in the same weekend. A man threw a Molotov cocktail at his home on Friday and threatened to burn down OpenAI's San Francisco headquarters.
Starting point is 00:02:48 Police recovered a document from the suspect, warning of humanities, quote, impending extinction from AI, as well as a list of names and addresses of CEOs and investors of AI companies. The 20-year-old has been charged with attempted murder and faces a second count of attempted murder for the security guard who was at Altman's House. Separately, two people were arrested for firing shots at Altman's House on Sunday. These attacks are an extreme manifestation of the rising anti-AI sentiment in the US. Among 31 countries surveyed, Americans reported the lowest level of trust in their own government to regulate AI at just 31%. and people are now acting on that distrust. In just two years, $64 billion of data center projects have been blocked or delayed due to local opposition.
Starting point is 00:03:39 So, here to discuss these disturbing headlines and AI's general popularity problem in this country. We are having another panel discussion with two experts. We've got Bradley Tusk, founder and CEO of Tusk Ventures, and Brian Merchant, tech journalist and author of The Blood in the Machine Substack. Bradley and Brian, thank you very much for joining me on the show here. Bradley, I'll start with you. I mean, this news of Sam Altman, two attacks in the span of just a few days, it really is just a striking example of this growing feeling in America that I've talked about. I know you've talked about, and that is a lot of people just don't like AI at this point. What do you make of this news and what does it say about this moment? Yeah, I mean, I think people don't like a lot of things. And to be clear, regardless of what you think of either AI or Sam Altman, no one should be throwing Molotov cocktails at his home or at even's home. But I think you've got a combination of one,
Starting point is 00:04:38 just general distrust or happiness in this country, right, whether it is the fact that we are 23rd in the world happens report or 60 seconds for people under the age of 25, whether it's the fact that our government seems to be hijacked by stringists on both sides of the aisle, whether it's the fact that we haven't regulated Internet 2.0 yet. So even things like social media. have never been dealt with by Washington, let alone AI, and then combine that with the fact that AI is really unpopular. I saw a UGov Bowl that showed that people have a 47 to 27 by margin of people. Disrust of AI, people think that AI will replace a lot more jobs, and it creates almost every different survey mechanism out there shows that people are fearful. And then anecdotally,
Starting point is 00:05:23 when you just talk to people, they feel the same way. Then you mentioned in the intro, local opposition blocking the construction of data centers, I think that's often the fault of the hypers who seem to think that it would be okay to pass along all of their energy costs to regular consumers, right? And if you are living near a data center, the idea that your electricity bills should go up 30, 40 percent to subsidize Sam Altman or Jensen Wong or whoever it is so that they can become trillionaires is unacceptable. And in this case, I think it's actually elected officials on both sides of the aisle acting in the interest to protect their constituents. And so, yeah, when you have a government that consistently fails to regulate technology,
Starting point is 00:06:03 when you have a government that feels run by the extremes and you have a society that's generally unhappy, these are unfortunately the kind of things that come from it. Brian, you've written about this before, and your book is about actually the Luddite movement, which is sort of the first iteration of technology coming along, people getting very worried about it and revolting, essentially. what do you make of the attacks on on sam altman what does it say to you well i mean what it says to me is that this uh this discontent um these these grievances that that people have um are are real they are pronounced and we have to look at them as if uh you know some of these people are obviously on the extreme end of whether it's a political spectrum or or an ideology at least one of
Starting point is 00:06:53 shooters was, you know, one of these X-risk AI safety advocates who's really worried that AI is going to rise up and become sentient and end humanity. And so if you believe that, then, you know, doing all you can, you know, may look like a rational outcome as abhorrent as it looks to everybody else. And to step back a second, we do have a long history, right, when there is a disruptive technology, number one, number two, that is being developed and sort of unleashed by a particular sort of group of interests, right? When you have in the Luddites time, that was the factory owners who were spearheading factorization and automation.
Starting point is 00:07:40 And they were doing it without community input, without asking what workers and communities, what they wanted. So we have a dynamic that looks an awful lot like what's happening here today, where you have a few industrialists who had the backing of the state. They had all the resources. They had all the capital. They had all the power. And they were saying, this is the way it's going to be. We're going to automate jobs this way. And you're either going to sort of work in our factory or you're going to get out of the way. And the Luddites who actually registered, this is one of the things that people get wrong about the Luddites today is they weren't dummies. They weren't
Starting point is 00:08:14 backwards looking. They understood quite well what was happening. They were technologists. They use this stuff every day. They used the automated technologies and smaller iterations in their in their workshops and at home. And so they understood what the industrialists were trying to do. And that's what motivated their response. They didn't want to see their way of life subsumed by factorization given over to a relative handful of interests. So it was really about power. It was about democracy. And it was about losing agency. And so today, a lot of lot of the backlash we see against AI is motivated by these very same fears and concerns, in no small part, because the AI CEOs and Tech Titans themselves have come out and use this
Starting point is 00:09:00 language, right? From the beginning, they've said, oh, this technology is so powerful. It could be big trouble for humanity. It could be the gravest thing humanity has ever faced if we're not careful with it. It's going to eliminate 20 to 30 to 50 percent of jobs, you know, depending on how Dario Amadeh, Anthropic is feeling. And it is going to be this huge disruptive event. And that's how they're forecasting, that's how they're describing their own project, their own business.
Starting point is 00:09:29 And so, again, why would anybody, you know, not take that seriously, right? We take it serious at different levels, and some people will attach themselves to the X-risk element and say, well, we don't want to exterminate humanity. And most people will say, hey, I'm out here listening, and you're saying you want to automate all the jobs with AI tools. You want to automate, why would I be okay with that? Why would I trade that for a
Starting point is 00:09:56 why would I allow a data center in my backyard to help you in that project? So to me, all of this backlash, you know, I'm honestly a little surprised it hasn't arrived a little bit sooner, just how aggressive the industry and its leadership has often been. Yeah. Yeah, this gets to sort of the PR and comms point. And Bradley, I mean, you've worked in exactly this sector. You've worked in politics. You've worked in tech and politics and how they come together. And there is this interesting question, which is like, well, all of the big AI CEOs are telling us that this technology is, in a lot of ways, quite scary and in some cases, bad, like it's going to destroy things. It's going to destroy white-collar work. It's going to completely disrupt the economic model as we know
Starting point is 00:10:43 it, and they've done it in a way that is legitimately quite scary. And I guess it does beg the question of, like, I mean, why say that? If you're the CEO of a technology company, why would you come out and say this technology is going to be really bad and it's going to really negatively impact a lot of people's lives? I mean, what do you make of the comm strategy there? Yeah, I mean, I think that keep in mind, from their perspective, comms is a couple of different things at the same time. It's the way we're talking about it right now, which is how the public might perceive something, how regulators and lawmakers might perceive it, but it's also fundraising. So OpenA. and Anthropic are still both privately held companies with giant valuations.
Starting point is 00:11:27 Open AI is nearly a trillion dollars at this point. And as they raise money, a lot of what you just said interpreted slightly differently is very appealing potentially to investors, right? So when you're talking about, hey, this is going to wipe out lots of jobs. What investors here is this will be the tool instead that businesses are going to use to replace workers, and instead they're going to pay money to Open AI, to Anthropic, to all of these different companies. And so I think that the language that you use potentially to recruit employees, so the New Yorker has a great piece this week on Sam Altman, and a lot of the recruiting that he did at Open
Starting point is 00:12:05 AI was around the idea that he was the responsible person trying to protect humanity from the potential perils of AI. That clearly does not seem to be the case, but he used that language to incentivize people who did care about this issue genuinely to come work for him. There's language they use with investors. And I think what they're finding right now, and I think sometimes this is sort of the both naivete
Starting point is 00:12:28 and arrogance that you will see in the tech world, which is a lack of understanding of how their words then land with real people or with people in politics and government. And a lot of what they're saying is now, coming back to haunt them. But the real question to me is, we know that the public is concerned. And we have seen at least at the local level, elected officials protect consumers from things like paying for the costs of the energy needs of data centers. But when it comes to the larger issue of catastrophic risk, states like New York and California have done some regulation around
Starting point is 00:13:02 frontier models, but some of this really needs to be done at a federal level. And right now, we're seeing the opposite from this White House. We saw this White House. and issued an executive order in December telling states you're not allowed to regulate AI. And luckily, governors just from both parties, roundly ignored that. But there are areas where you're going to see Washington need to step up. And I think whether or not they do so may dictate how this whole thing plays out. Stay tuned for more of this panel right after the break. And if you're enjoying the show, please follow our new ProfiMarkets YouTube channel.
Starting point is 00:13:35 The link is in the description. This is advertiser content brought to you by Virgin Atlantic Ed. A couple weeks back, I got you a birthday gift not to pat myself on the back, but it was a pretty good one. It was indeed. You surprised me with Virgin Atlantic upper class tickets to London. So tell us all about it. It was pretty incredible. From the moment I entered that upper class cabin, I have to tell you, I felt like a VIP. Anything I needed, a drink, snack, assistance with the seat. Flat seats. Flat seats. Flat seats. Exactly. Had the four-course meal. Got my champagne. Very delicious.
Starting point is 00:14:16 enjoyed the food. And the journey home? The journey home was great. I went to the Virgin Atlantic LHR Clubhouse. That's the Heathrow Clubhouse. Heathrow Clubhouse was awesome. Got myself a coffee. Headed over to the meditation pod
Starting point is 00:14:30 that they called the Somer Dome. Kind of felt like a sort of spaceship where you relax and think nice thoughts. So I did that for a little bit. Then we went over to the wing, which are these acoustically sealed booths where you could do some work. You could even record a podcast.
Starting point is 00:14:45 I didn't do that, but maybe I should have. It was a very enjoyable experience. So, Ed, the real question here is, what are you planning to get me for my birthday? See the world differently with Virgin Atlantic. Flying should be more than just transport. It is part of the adventure. Go to virginatlantic.com to learn more.
Starting point is 00:15:05 Tickets and lounge access provided by Virgin Atlantic. Hi, I'm Bray Brown. And I'm Adam Grant. And we're here to invite you to the Curiosity Shop. A podcast that's a place for a live. listening, wondering, thinking, feeling, and questioning. It's going to be fun. We rarely agree. But we almost never disagree, and we're always learning.
Starting point is 00:15:30 That's true. You can subscribe to the Curiosity Shop on YouTube or follow in your favorite podcast app to automatically receive new episodes every Thursday. We're back with Profi Markets. It's a very difficult time in a lot of ways to be an AI executive because, I mean, on the one hand, And as you say, there is an economic incentive or maybe a fundraising incentive would be the right way to put it to say that this stuff is going to be very damaging and it's going to just structurally completely upend the entire economy as we know it. But at the same time, I also wonder if they also actually believe that.
Starting point is 00:16:12 And that seems to be something that you also have to kind of reckon with, especially in the context of a government, which seems pretty unwilling in general to promote any form of policy, any form of regulation. And if you're building in the AI space in that environment, and you seem to recognize, you know, this administration doesn't really want to do anything in terms of regulation, then maybe you do feel you need to sound the alarm and say, hey, this is going to be,
Starting point is 00:16:40 this is actually a big deal, this is actually going to be a problem. And then on our end, it becomes very difficult to understand what's true and what's marketing and what's hype. So I guess, I mean, Brian, just turning it to you, which parts of the story do you think are real? I mean, when Sam Altman goes out and says, yes, this is going to be massively destructive in a lot of ways, or when Daria Amadee says that,
Starting point is 00:17:05 I mean, to what extent should we take that seriously versus write it off as, you know, marketing? Yeah, I mean, I think you're absolutely right that it's, you know, both of those tendencies are kind of bound up in this same trajectory. And part of this is necessity, right? Like the tech landscape is such that if somebody wants to, you know, release a product that can compete with one of the giants like meta or Amazon or Google, then you need just truly an immense amount of capital. If you want to compete rather than angle to get bought up or something.
Starting point is 00:17:43 So you need a story that can command the kind of capital that can compete with one of, you know, three or four. of the tech oligopolis that are out there, right? The tech monopolies that have sort of over the last 20 years sort of concentrated their power. And so that story then becomes not just, hey, here's a cool product. That's not going to get you there. You need a story that is on the magnitude of,
Starting point is 00:18:11 we are creating the software that can automate every meaningful job. And, you know, that language is right there in Open AI's charter still to this day. You can look at that as intraming. to the pitch to investors. And so I think there are a number of different factors there. I think if you look at the last 10 years of the history of sort of this latest AI boom,
Starting point is 00:18:32 then you really see it beginning in earnest around at least expressed fears about X-risk and the possibility, you know, as sort of presented by Nick Bostrom and others, that AI could become super-intelligent and become this danger. I think one of Sam Altman's key intuitions was that, you know, Early on when he was just sort of heading up, you know, just quote unquote, you know, heading up Y Combinator, he sensed that there was a lot of energy here in this space that he could tap into one way or the other. And so he reached out to Elon Musk and kind of mimic this language and was able to sort of use that concern just as a lightning rod to get some interest and power and momentum into AI in general. And then from there, it's hard to walk away from that narrative. You see that the more you talk about it, that it does affect investors. It does. sort of compel people to pay attention. It does get headlines. And so I think it does sort of balloon on and on and out. So some of these guys, I think, like Dario Amadeh, I am sure he's legitimately concerned about all of this stuff. Is his marketing department aware that he can win a round of
Starting point is 00:19:37 headlines by expressing that concern in the release of mythos? Of course they are. So they present every sort of white paper, every released or unreleased model, you know, with the same sort of level of gravity as though it were a new set of promotional materials. And so it becomes difficult to distinguish between the two. But I would say it is it is yes and both. And now we're in this pickle where the AI industry can't really walk away from its promise that has attracted so much investment in the first place. They can't say, you know what, we're not going to automate all the jobs. Then SoftBank might say, well, then what was that $30 billion for, right? You know, so it really, it really, we're sort of up on the brink and the precipice here. And I think Bradley was absolutely right.
Starting point is 00:20:26 You know, it's not just the politicians. It's also the AI industry, you know, meta and Open AI and all these guys are bankrolling, you know, packs right now to the tune of $100 million to sort of influence elections. They supported the moratorium to ban state-level AI lawmaking. So the very least they could do if, you know, they want to de-escalate the rhetoric, as Sam Altman says, is, you know, is stop interfering. in the democratic process, right, is to let voters feel empowered. It feels some sway over this technology that is, you know, being integrated into every pore of society. Yeah, it's a great point. I mean, if there's one thing that's going to make you dislike AI even more, it's to read a headline that Mark Andreessen is bankrolling millions of dollars into these pro-AI super PACs that we are continually starting to read more and more about.
Starting point is 00:21:17 I mean, Bradley, what is the right? right policy response here. I mean, what we've kind of identified is that we don't seem to have much regulation at all. Americans are very scared. They're getting increasingly angry about it to the point where we are seeing literal violence against these tech CEOs. Like, what are we supposed to do this from a policy perspective? Yeah, I mean, I think you almost have to think about it from a taxonomy of how to regulate AI because I've never, you know, I've been working around politics for over 30 years and there's never been anything quite like this. So there's, in my mind, kind of four different categories. The first is consumer protection. And that typically tends to be the province of
Starting point is 00:21:56 state and local government. So that's things like regulating chatbots, especially around things like mental health, regulating data centers and the negative externalities that can post on others, regulating the use of AI and hiring decisions, things like that. The second would be catastrophic harm. And like we've said, California and New York have tried to pass regulations or have passed regulations around frontier models, but that's two of 50 states. And this is the kind of thing that's the it really should be done by the U.S. government. The EU has a framework that covers, you know, 22 countries. We have two states.
Starting point is 00:22:27 So that's number two. Number three would be jobs. And I don't think there is any plan whatsoever for how to deal with the fact that we could be seeing 10, 20% unemployment at some point because of AI. And look, I do believe that at some point in 20 years or whatever it is, all kinds of new industries that we can't conceive of today will be created that will have a lot of jobs. to AI, but a lot of people are going to fall through the cracks. Look, that's why I think Andrew Yang was right way back to, you know, a decade ago when he proposed universal basic income, because I think that we are going to be in a world. And I will say, I just saw a white paper the other day. Daniel Schreiber, who's the founder and CEO of Lemonade, which is an
Starting point is 00:23:09 insurance tech company, funded a study for in Israel that had the idea of basically creating a new type of tax that as corporate profits increase because they've reduced. headcount, taxing that as sort of a VAT, and then redistributing that to people and what he calls the negative income tax, but effectively is a form of universal basic income. So there are ideas out there, but you have to think about them. And right now, politicians just say job training, but like, we can't all become plumber. That's not going to solve the problem. And then the fourth would be, you know, where AI can do good. So if you think back to Doge and it was a total disaster, But where Doge could have been really great is how do we bring AI into government to do things like procurement, compliance, licensing, permitting, data management, facilities management.
Starting point is 00:23:59 There are a lot of ways that we could make our government a lot more efficient and a lot more cost effective. And so the challenge is you have to be able to think about all of these different categories at the same time. And that really requires thoughtful leadership. and because we live in a world where I believe every policy outcome is driven by, you know, a political input, politicians are thinking about their next election. They're thinking really about their next primary, basically, and they're not thinking about all of the different complications that we just outlined. And so, you know, this is a time where we really need truly transformational leadership at all levels of government. And by and large, we don't really have it.
Starting point is 00:24:39 One final piece, and at least a small measure that I'm trying to do, which is to use AI in a way to cut against some of that institutional power out of my foundation. We're coding a tool called how to create societal change that will be an agent where you can put in there, okay, I want to ban cell phones on my kid's school. I want a stop sign on my corner, whatever it might be. And then the agent trained on basically by decades of all of our work here will say to you, okay, great. here's the current law that governs cell phone use of your kid's school. Here's who's in charge of it. Here's what it would need to say. And then here's a full campaign plan for how you as an individual could go about changing it and it will be totally free.
Starting point is 00:25:19 So it's a very small act of defiance. I get that. But we are in the process of coding it right now. And my hope is to release it in the fall. I mean, just to follow up on what policymakers and politicians should be doing, how should you be positioning yourself as a politician? I mean, we've seen that Bernie and AOC have, they've been like, stop the data centers, period. Right.
Starting point is 00:25:43 And they've said, I mean, people are saying they want to end AI outright. That's not quite true. Basically, the idea is, until we have a framework of policy, press pause no more. And I guess the question becomes like, what is going to be the popular thing to do? Should you be super against AI? Should you be pro-AI, pro-innovation? I mean, that seems to be like the big question. I guess I'll follow up and ask that question to you, Bradley, as someone who's worked in exactly this space, what would you be doing?
Starting point is 00:26:16 Yeah, I mean, the question, it sort of depends on what you're running for. So if you are a member of Congress, let's say, and your district is gerrymandered, which is true for all of about 25 of them at the House, and turn out your primary is going to be 10%, 12%, something like that, odds are being radical like an AOC of Bernie might be, or on the far right too, and just opposing AI in all forms, probably is the right political play. Now, if you're running for Senate or governor or president where there's a larger electorate or potentially a competitive general election, then you can't quite be, you know, so extreme and you need more nuance. I actually do think that, and this might be very naive, and maybe I'm just falsely hoping for this, but I could see a world in 2027 where a Democratic House, a Republican working White House and probably a Republican Senate, but we'll see, actually do manage to get together
Starting point is 00:27:12 and come up with a comprehensive bipartisan deal around AI, not necessarily because they even care about the problems that the three of us do and that we're talking about here, but simply that if they fear that 2028 is going to be the AI election and it looks like they haven't done anything about it, none of them want to have to go stand before the voters and say, oh, well, I couldn't do anything. Don't blame me. And so I do have this hope that simply because, because there's so much attention focused on it and so much anxiety around it, and this might be the one place where everyone actually could get together and come up with some thoughtful ideas. Yeah, it seems to be one of the few issues in which both sides kind of agree in its general dislike of it,
Starting point is 00:27:53 or at least anxiety towards it. Yeah, I mean, if you look at something narrow like the dozen or so states that have passed chatbot restrictions and regulations, those are totally bipartisan, both in terms of who's voting for them in the bill itself, types of states doing. Yeah. Brian, I mean, just going back to the Luddouts, and just for context for people, I mean, this is what happened when the factory was introduced and then you had all these textile workers in England who revolted, they smashed up the machines, etc.
Starting point is 00:28:24 I mean, in a sense, I wonder if this is just what happens. Like, when a new technology arrives, you have violence, you have disruption, you have chaos, but also maybe not, and maybe there's a way that you're supposed to prevent this. I mean, what lessons can we learn from that period of history and how should we take it moving forward? Yeah, no, that's not absolutely not a given that we'll see violence and mass disruption at this scale. There are a couple things that tend to signify that you will see it, right, when you have an immense
Starting point is 00:29:03 concentration of capital and power and the development and deployment decisions around a technology are flowing expressly from that. And being imposed anti-democratically on a population, you're much more likely to see sort of angry uprising and rebellion. And again, it's another way that this moment sort of maps relatively and worryingly neatly onto the Luddites and the dawn of the Industrial Revolution, because at that time, you had this moment where, you know, automated machinery was beginning to be produced en masse, and factory owners or to be factory owners, realize that they could amass a bunch of these machines, put them in those early factories, and divide and automate labor in a way that could break the power of sort of the workers and the guilds. They weren't actual
Starting point is 00:30:01 guilds, but the industries and the cottage industries that had developed and had shared interests. And so when you have all of that sort of power and decision-making capacity and money sort of concentrated in a few hands, it is a recipe for disaster because the, I mean, the cloth workers, they went to parliament for years and years and years, for a decade, full decade running up to the actual Luddite rebellion saying, look, the new factory owners, they're using these machines in ways that violate the laws on the books. They're hiring workers that haven't been apprenticed that shouldn't be allowed to work, all these things that we have to regulate the trade.
Starting point is 00:30:42 They're ignoring all the laws, all the standards, all the norms, and they're pushing down our wages and pushing down our quality of life. They're destroying our livelihoods, and they won't stop. And so here's a list of things that you could do to fix that. Funny enough, one of the things they proposed was very much like an Andrew Yang-style VAT: why don't you tax the extra amount of cloth that a machine can produce, and then use that to fund a general fund for workers who need to retrain? But they were laughed out of Parliament, right?
Starting point is 00:31:15 Time and time again, Parliament not only said, no, we're not going to listen to you; they tore up those laws and regulations on the books and basically left it completely up to the whims of the market and these very powerful actors. And so when you have a situation like that, which increasingly mirrors what's happening today, with an industry that has a ton of power, you know, at least right now in its alliance with the Trump administration, has the ear of David Sacks and the insiders in the administration, and they're working very closely together to do what they're going to do regardless of popular will,
Starting point is 00:31:50 and you have all these efforts to overturn local laws and things like that, then, yeah, it does start to be this period where people look at that and say, well, what can I do? Right? What can I do? What are the options on the table for me? I voted. I told my council member, don't vote for this. A hundred people showed up at this event and said, please don't vote for this. And they did it anyway, because the industry convinced them or they thought it was the right thing to do. But suddenly it looks like I don't have a say. I don't have any power. I don't get a vote in how the AI future is going to unfold. And if I'm in Gen Z, the negative sentiment towards AI is overwhelming. The NBC poll that just came out was something like 44 points underwater among people aged 18 to 34. They hate it because they're looking at the headlines saying this is the worst job market for entry-level jobs in 37 years, AI has taken all the jobs. So yeah, what are you going to do? Are you just going to sit down and say, well, I guess I don't get a job, I guess the data center is going to get put up in my backyard? So in this sense, I feel like the industry, politicians, everybody should be paying close
Starting point is 00:32:58 attention to those very genuine and very rational feelings of aggrievement over what's happening and what's what you know what's happening to their futures too right if this isn't the wake-up call that people need then i really don't know what is Bradley tusk Brian merchant I can talk about this for hours but we do need to wrap it up here I appreciate both of you appreciate your time thank you so much for joining us yeah thanks for having us yeah thanks for having me. Okay, that's it for today. We appreciate you joining us for another Profi Markets panel. If you have a guest you think we should speak to on this topic or any other, please drop us a line in the comments or email our producer Claire at Markets at Profg Media.com.
Starting point is 00:33:42 I hope to hear from you. This episode was produced by Claire Miller and Alison Weiss, edited by Joel Patterson, and engineered by Benjamin Spencer. Our video editor is Brad Williams. Our research team is Dan Chalon, Isabella Kinsell, Chris Nodonoghue, and Mia Silverio. Our executive producer is Jake McPherson. Thank you for listening to Prof G Markets from Prof G Media. If you like what you heard, give us a follow. I'm Ed Elson. I will see you tomorrow.
