Big Technology Podcast - Senator Mark Warner: Nobody’s Ready for What AI Could Do To Us

Episode Date: March 25, 2026

U.S. Senator Mark Warner is a three-term Virginia senator and vice chair of the Senate Intelligence Committee. Senator Warner joins Big Technology to discuss whether Washington is prepared for the economic and societal disruptions of rapidly advancing AI. Tune in to hear why Warner believes recent college graduate unemployment could surge from 9% to 30% and why he's more frightened than reassured about Congress's ability to respond at speed. We also cover the Anthropic-Pentagon relationship, AI romantic relationships, data center opposition polling, and the ongoing battle over congressional stock trading. Hit play for a rare conversation with one of the few senators who actually understands what's at stake. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 If AI progress is actually moving on an exponential, are we ready? Let's talk about it with U.S. Senator Mark Warner right after this. You might be tempted to let Taco Bell's new Lux Value menu go to your head. Because 10 indulgences for $5 or less makes you feel fancy. Like you might think you need cloth napkins. Well, you don't. Just use the ones that come in the bag. Don't let the Lux go to your head.
Starting point is 00:00:23 Welcome to Big Technology Podcast, a show for cool-headed and nuanced conversation of the tech world and beyond. We have a great show for you today. U.S. Senator Mark Warner is here with us. We're going to talk about whether the government is ready for fast AI progress, what government data says about AI-driven job loss so far, and the latest on that small Anthropic situation with the Pentagon. Senator Warner, great to see you again. Welcome to the show.
Starting point is 00:00:46 Alex, thanks so much for having me. So it's been four years since we last spoke. And I reached out because I had been getting freaked out, I'll be honest. I've been speaking with some folks in and around these AI labs, and there's a belief among them that AI technology is moving on an exponential and could have real disruptions. And I think for me and many others who've been watching this, that was marketing language a couple of months ago.
Starting point is 00:01:14 But now there's at least a percentage chance that that's real. And I'm freaked out because I'm not sure if the government is ready for an exponential. Silicon Valley might do exponentials; Washington does linear, or backwards sometimes. I wanted to just get your take on, and I know you're right into this. Everybody says, you know, go speak to Senator Warner.
Starting point is 00:01:36 He's the one that knows what's going on. But I want to get your take on the general vibe in Washington today. Do you think there's awareness among, you know, rank-and-file members of Congress and the Senate that something might be brewing, that there will have to be drastic action to head off the negative consequences if it happens?
Starting point is 00:01:57 Well, Alex, I don't think government's ready. I don't think society's ready. And I know the same, you know, AI optimists who are talking about this. I actually think they have changed their pitch and are now holding back because they're freaked out about freaking people out. And, you know, I am still long AI in terms of value. But boy, short term, the next three to five years, the economic disruption is going to be, I just think we are not ready at all. We don't have good data.
Starting point is 00:02:38 We don't know what's happening. An example I like to give is if you just look at Anthropic's Claude products this year, how Claude has already kind of disrupted the whole software business. Now, the market recovered a little bit, but then it hit the same thing on the HR business. The markets don't respond that way, that quickly, if people aren't saying there's going to be fundamentally dramatic change in kind of industry fundamentals. And that's just two areas. And I think there's much, much more to come. So I've heard you say this a couple times, that the CEOs may be downplaying the impact.
Starting point is 00:03:19 I know they speak with you privately. Are they telling you things like, hey, Senator Warner, don't say this to other people, but here's what we think? Or what brings you to that assessment? What brings me is, you know, the CEOs who are saying this in the AI space. And I'm hearing privately from big brand-name firms who are saying they're cutting off or cutting in half the number of interns or first-year hires. I even heard from a nationally known law firm that has decided to hire no first-year associates. They're going to take a pause and see how this works out before they even hire. All these kids, after they've done everything to get through law school, and they got a job offer, they thought, with a big-brand firm.
Starting point is 00:04:05 And then it's just going away. Nothing they did. Because of AI. Yeah, because of AI. And I hear, like, so many companies that are mid-sized who say, you know, I had one guy the other day saying, you know, I had 23 people do this back-office function; now I got three. Isn't that amazing? And the thing is, we are not even collecting data on this yet. That's why I've got a bill with Josh Hawley, very bipartisan, that says to BLS, the Bureau of Labor Statistics, we need to start
Starting point is 00:04:35 measuring this. And not just in terms of firms like a Jack Dorsey saying he's cutting 40% of his staff because of AI, and whether that's true or not, you know, we won't know for sure. But also try to measure jobs that would traditionally have been created. Because my view is that this is going to particularly hit kids coming out of college, coming out of graduate school. We're at about 9% recent college graduate unemployment. I think that number will actually go to 30%. And the economic disruption that will have, not only on those young people that don't get jobs, but their parents who helped finance their college education, and the level of kind of fear that is
Starting point is 00:05:16 amongst everybody I know that's in college at this point. I don't think people are factoring that in. And to say government's not ready would be an understatement. Right. And we're going to talk about some of the legislation that you have brewing. But it takes more than one or two senators here. And, you know, you've already passed the Claude test, Senator Warner, which is, you're a senator that knows what Claude is. Of 100 U.S. senators, how many of them do you think know what Claude is? Well, I hope more than you and I think. But, you know, and again, I don't know if you want to go now into the whole, you know,
Starting point is 00:05:53 Claude's-part-of-Anthropic thing, whether we want to go down that path now. But, you know, I would argue that Anthropic, you know, pick your Anthropic, OpenAI. Obviously, Google is doing well. We've got a half dozen LLMs that are making major advances. But, you know, what's happening to Anthropic at this point, as they were doing business with the Defense Department and being very well used. And, you know, the Anthropic leadership got crosswise with Hegseth at DoD. And obviously any company, if they're going to do business with DoD, has to make some accommodation. But the idea that we're going to turn over
Starting point is 00:06:29 to Pete Hegseth the ability to completely decide that these AI tools can be used totally for surveillance without any guardrails, or even potentially worse, creating AI weapons without a human in the loop, that's a big freaking deal. And if we were not in this war with Iran at this point, I think, you know, that would have been a major focus. And with whatever is even happening there, I'm trying to rally the tech community to say, regardless of what you feel about Trump and Hegseth, you know, if you're having these decisions, and then Hegseth is trying to declare Anthropic a supply chain risk, that would mean that not only Anthropic couldn't do business with DoD, but any company, and virtually every major company in America does some level of business with DoD, they couldn't do business with Anthropic as well.
Starting point is 00:07:21 This would be the ability for a single individual to write a death sentence for major American tech companies, and people need to realize this stuff is happening in real time. Okay. So I guess, like, the reason why I'm asking about the awareness, and I take your point, we're going to talk a little bit more in the second half about the Anthropic-DoD dispute, or whatever you want to call it. We'll talk about that a bit more. The reason why I asked is because, and maybe this dispute is giving more awareness to Anthropic, I just want to see if you could reassure me, or maybe you're saying that there is little reassurance, that when it comes to the list of priorities that your colleagues have, this
Starting point is 00:08:08 at least ranks. Because I remember reporting on the social media stuff five, ten years ago. It was clear that there was no awareness, and you know what, I guess we sort of came out of it okay. Well, Alex, we did. But this is my worry. There's the same lack of awareness in the government for something that could happen faster. Well, amen. Amen.
Starting point is 00:08:32 Like, social media was a challenge. And, you know, I had bipartisan bills on data portability, interoperability, delegability, which is now basically called agentic AI. We had things about dark patterns. There was lots of bipartisan action. And all the social media companies, you know, they all said, yeah, we want some meaningful regulation, until you put words on the page. And we batted zero.
Starting point is 00:08:56 We still haven't even done the freaking kids online safety bills. So social media was a challenge. It has, I think, psychological effects on young people, but it is tiny compared to AI. When we think about, you know, the stories already, we're hearing about AI, you know, leading kids potentially to suicide. We're seeing what was kind of a one-off story just six months ago of people becoming romantically involved with AI agents.
Starting point is 00:09:29 Now this is actually a statistical thing you can look at. And that's just on the kind of psychological, societal effects. But on the job effects, we just don't have good data. We have people, I think, shifting blame. I saw Amazon, you know, they've announced 11,000 job losses. They say none of that's due to AI. But I've got to just tell you, you wouldn't have, literally, not billions but trillions of private capital coming in if the investors don't think they're going to get a return. Now, some of this may be because we're going to have great health care breakthroughs or have AI-created jobs, which I believe we will.
Starting point is 00:10:09 But in the short term, the amount of AI job dislocation is going to be jaw-dropping. And I don't think the majority of senators understand. And I think they can be convinced. And I just fear that what we've got now is, you know, the overriding agenda coming out of the Trump administration is they are kind of AI accelerationists, you know, pedal to the metal, because we've got to beat China. And we do have to beat China. But the idea that we are not going to think about any guardrails or about the short-term economic consequences, I think, is really frightening. And as somebody who still believes in the power of AI, by the way, there's no putting the genie back in the bottle anyway, it could have positive effects. We could actually have populism on the left and the right coming together to try to, you know, snuff out the innovation,
Starting point is 00:11:02 do it ham-handedly. So boy, this is, you know, I'm trying to get hired one last time in this job. And probably the major reason is, if I can help navigate, you know, some of these AI solutions. And I don't pretend to have, by any means, all the answers. Matter of fact, I think, Alex,
Starting point is 00:11:22 we may have talked about this at one point. We go way back in time, like three years ago. At least at that point, the thought-through guarantee of a job was: let's at least make sure everybody has basic coding skills. That was well-intentioned, but it was obviously not the right answer, since those are the first jobs being eliminated. And so when you think about the way that your colleagues view this, is it high priority, medium priority, or low priority for them? Well, this stuff is hard. I joke, but it's kind of true.
Starting point is 00:12:00 There is no linear relationship between me spending more time on AI and actually thinking I have a better understanding. It is evolving so quickly. I think most members, and this is a human reaction, if you don't get it and it seems too complicated, you want to try to punt on it. And that allows for, you know, simple-minded solutions, like let's just shut it down, or let's just, you know, have a moratorium on all data centers for a year. That's not going to answer the question. So we do have to navigate it. And, you know, what small value I hope I can add is not turning this into a partisan issue
Starting point is 00:12:45 and trying to find folks on both sides of the aisle who say, hey, we've got to grapple with this. China and the rest of the world are moving ahead. There is no way we can reverse this. But we are not powerless, both to put guardrails in effect and also to act on the economic dislocation. You know, my challenge to the AI community is, you guys are right.
Starting point is 00:13:07 If government defines this all, we'll probably screw it up. So you guys help us define what this transition looks like, what the training or re-skilling, whatever term we want to use, looks like. But you've also got to help pay for it, because the costs of this are going to be amazing. Yeah, and I'll explain a little bit about my line of questioning here.
Starting point is 00:13:26 I just wanted to see if you think the government would be able to move fast if we end up seeing this exponential. I'll even take your words. You said recently, in a great YouTube video about the AI challenge, that this is as dramatic a change as anything you've seen in your lifetime. You said, think about the transformation brought by the internet; this AI transformation, at the rate we're seeing it, could come over the next two to three years. And, you know, I know you have legislation.
Starting point is 00:13:55 You have at least three bills in action right now on AI: gathering data, trying to understand the implications here, trying to head off the issue. And it's different. Like, you can't attack it in a way that's just, stop it, right? It's more like, maybe help people who are at risk of job dislocation. But I'm not very reassured, hearing the way that you describe how this issue is being handled in the Senate, that that speed is going to be met. Well, I'm not sure I can point to policymakers anywhere in the world that have figured this out. I've got good bipartisan legislation. Let's put a commission together, similar to the
Starting point is 00:14:35 Cyberspace Solarium, that actually put some points on the board. You know, a commission on the economy of the future. We've got, you know, bills to get BLS to start reporting on AI job disruption. I've got a bipartisan bill about how AI is going to affect the financial markets and how we ought to think through this. They are, I think, thoughtful, but they are, self-acknowledged, small incremental steps, when this very much could be the holy-shit moment. And can we think big? And, you know, if Donald Trump, the disruptor, had an ounce of either empathy or collaborative spirit,
Starting point is 00:15:18 somebody who is a disruptor could actually help us through this. I want to be more optimistic, but I am terrified. I mean, I had somebody come in the other day, and I thought it was a very interesting thing they said: you get three couples of parents together who are talking about their kids. Ten years ago, it would have been, you know, this globalization, I don't know if my kid's going to get a job.
Starting point is 00:15:48 Five years ago, it would be, oh my gosh, I'm really concerned about whether my kid is getting addicted to social media. Now the conversation, and this is happening at such a level, I'm not sure our policymakers get it, is that they are terrified that their kids have done everything right, they're going through college, and there may not be a job there. Right.
Starting point is 00:16:05 And can I just say, I brought up social media as an example of our U.S. legislative body's ability to deal with technology effectively. But this is different from social media. I think we both agree here that with social media, the big disagreement was, are you going to tell Facebook, like, how to handle its news feed, what to do? This isn't necessarily legislation or policy where you can tell the AI companies to stop making their models. To be able to handle the negative effects here, it's more like, how do you stimulate job growth or retraining? And even that's probably not proven.
Starting point is 00:16:42 But that's what gives me hope, is that there's a chance that that can be done. And the fact that you have these bipartisan bills gives me hope that that can be a solution. I've talked to some of my friends in the industry and say, let's at least deal with things like non-consensual nudes. Do you want your young daughter or son to be portrayed with a deepfake out there? And everybody says yes, but then you get, you know, Elon at Grok saying, no, we're going to be an outlier.
Starting point is 00:17:12 And, you know, we default to the lowest common denominator on some of this. You know, the idea of these horrific stories of people being guided to suicide. We can say, well, we're going to try to correct the model a little bit, but we're always lagging. I mean, I do think I'm kind of freaked out about this. You know, the idea of who you turn your romantic interest to. I think we all remember that movie from a few years back, I think it was called Her, where the main character fell in love with kind of a chatbot. That stuff is happening now, not in tiny numbers, but is actually starting to appear statistically.
Starting point is 00:17:51 And then we come to, you know, the job dislocation. In most kind of mid-tier public universities, the number one major for most young people is business or business administration. Those are the jobs where you come out and you go work for a firm for a couple of years as a young analyst or whatever. Those jobs are gone. I mean, somebody said the other day, and I'm not sure this is right, but maybe, you know, some of these companies ought to pay an incentive to get more people into nursing, as opposed to business administration.
Starting point is 00:18:23 We ought to at least disclose to people that the job prospects in some of these fields have dramatically changed. And I'm just not sure whether we're ready. And one of the scary things that I've found, and again, I want to be more optimistic, is, like, you talk to the leading AI companies, the leading AI thinkers,
Starting point is 00:18:50 and they'll give you a partial answer: well, gosh, we're going to build a lot of data centers, so the traditional trades will have an increase. And that will be a short-term increase in terms of building those facilities, and there are going to be obviously huge needs for more electrons. So I'm a big advocate that we'll never be able to power this without small modular nuclear or other kinds of decentralized power generation. But that's still going to be a relatively small number. And then you say, well, how do we make sure that whatever you're going to do,
Starting point is 00:19:21 you can use AI to become better skilled at it. And everybody's kind of got soft terms, because they're making it up right now. But gosh, we've got to have that stuff ready yesterday. And it'll be very interesting to see, you know, even this hiring cycle, as we get close to graduation in May at colleges. It's going to be very telling. Yeah, it's going to be very telling. Whether new grads get jobs, we're going to learn very quickly.
Starting point is 00:19:49 Yeah. So I'll say a couple of things. First of all, as someone who's married to a nurse, I agree with you. It's a good career path, and I always tell her at least one of us will be employed in the long term. On AI romantic relationships, I mean, I can't possibly believe that adults should not be able to enter into these relationships with AI chatbots. Is that more of a minors thing that you would address, in terms of legislation, right? I don't know. I mean, obviously on minors. And, you know, trust me, I'm not, you know, Big Brother here, saying we can prohibit behavior of adults.
Starting point is 00:20:32 But, you know, at some point, as a functioning society that needs to procreate, that needs to have human relations, I just think we ought to have at least a discussion about this. And I just think that my friends in the community need to not blow it off and say, oh, there's nothing we can do. Or, you know, at least put up a bigger warning sign. You know, adults are going to do what they're going to do, but be fully informed before you go down some of these rabbit holes. And the ability to have any kind of shared common truth as we think about how AI could affect, you know, political debate. I am terrified right now of, you know, disruptions in our 2026 election from foreign sources, or frankly, even, you know, the president's willingness to try to say he wants to have the feds take over our elections. And we have not seen deepfakes used in a massive
Starting point is 00:21:34 way so far, but as we know, that technology is evolving on a monthly basis. And it only takes one major screw-up in an election cycle, for example, for people who already are losing faith to lose faith in our basic democratic processes. Right. So you're running for a fourth term, a three-term senator at this point. And one of the things that I love when I speak with politicians is we can talk about polling. And no one reads polls better than people like yourself. So I want to read to you a couple of polls about AI's popularity, or lack thereof, and sort of get your reading on what it can mean politically. This is from the NBC News poll. You might have seen it.
Starting point is 00:22:15 Of registered voters, 57% said they believe that the risks of AI outweigh the benefits, and a plurality of voters view AI negatively and don't believe either Democrats or Republicans are doing a good job handling policy related to the rapidly advancing technology. I guess let's leave the reaction to Democrats and Republicans aside for a moment. What are the consequences? And we've tried to figure this out on the show, but there's no one better to speak about it than you. What are the consequences for this AI industry if it continues to poll so low? Are they opening themselves up to a backlash? Well, they're opening themselves up.
Starting point is 00:22:56 I think the first line will be the war against data centers. You know, they are big, they use a lot of power, and that becomes almost a proxy for the overall concerns about AI at large. And, you know, they're going to have to go ahead and make sure that people's electric bills don't go up, that the water supplies aren't drained, that they are better screened. You know, I've got a county in Virginia that took their AI revenues and put it all into affordable housing, so people see a tangible benefit. And in Virginia, we're on the front line. We're data center, you know, heaven, the biggest state. Biggest by data centers in the U.S.
Starting point is 00:23:37 We're having a major debate right now at the state level about trying to extract, you know, somewhere between $500 million and a billion a year from the industry. I would hope the industry would lean into some of these things and say, yes, we will voluntarily help, and we will dedicate that to this economic transition. It's happening so quickly, I'm not sure we're going to get that together. And I'm no longer the governor or a state official. But the tech industry writ large
Starting point is 00:24:07 has basically said, and I say this as a pro-tech guy, you know, my business background was tech, I'm a big believer, but the tech industry so far has generally said, you know, and rightly so, policymakers don't get us, we can blow them off. That was clearly the success of the social media platforms, to never have any regulatory basis at all.
Starting point is 00:24:31 And then when you do have over-regulation, say from the EU, they'll point to the EU and say, listen, we don't want to be like the Europeans, they have no innovation at all. Getting it right is hard. But on this one, if they kind of ignore it and say, we can blow off any regulatory framework,
Starting point is 00:24:51 or we have no obligation, I think it could bite them. Now, that's not going to make AI disappear. These models are out there. And China clearly is investing at an amazing rate, but even if America closed down, the models can transfer to another entity
Starting point is 00:25:09 that has the compute power. So this is not going away. In a certain way, not to sound old-school wistful, but if there was ever a time when the world as a whole ought to be thinking through this, rather than nation-state competition, it is on this issue. And I absolutely do believe that we are now, whether it's full AGI or not, getting close to where the magic happens inside these models. At least I've heard from many of them, and most of the guys have said, like, we don't really understand how all of this is happening.
Starting point is 00:25:51 This is way beyond just predicting the next word, which was kind of the, you know, AI 101 model that people got educated on, you know, a long time ago, like two years ago. Okay, let me run this by you before we end this segment, because you mentioned the data centers. So there was a series of negative polls about AI that came out recently, and I was stunned at the way people feel about data centers. This is from Pew. I'm sure you've seen this poll.
Starting point is 00:26:17 Far more Americans say data centers are mostly bad than good for the environment, at 39% to 4%; for home energy costs, at 38% to 6%; and for the quality of life of those who live nearby, at 30% to 6%. I mean, goodness, those are terrible, terrible polling numbers for these data centers. Does that mean there are going to be places where they are just not going to be built
Starting point is 00:26:43 because the opposition is so high? I saw there was an Axios report that said something like half of the data centers that are expected to be built this year are delayed. Now, some of that is parts shortages, but I think community opposition is going to be a big part of it. Well, and the interesting thing, too, and rightfully, some of the tech companies
Starting point is 00:27:00 will say, well, if you actually look at the electric rates for states that have done a lot of this, they've not seen a dramatic rise. But I think they have to do more than just say, hey, there will be no increase in your utility and electric rates. I think they've got to put that in statute. I think they've got to move more towards self-generation that is adjacent to the AI facility, so it doesn't go into the full grid. And I think we have to document that.
Starting point is 00:27:31 I think they have to do more on the water usage. I think they need to do a much better job on just visual screening. These are big, ugly buildings. And the thing is, they are making progress, but to go into a community and sell that when the only image you have is of, say, you know, data centers in Northern Virginia that are still old-school, last-generation, that's a hard sell.
Starting point is 00:27:59 Now, there always will be a jurisdiction that needs that additional revenue to get by. And they do generate revenue, and they don't bring a lot of kids, because they don't have a lot of jobs adjacent. But there needs to be a rethinking on this. And I do think, you know, with the state battle that's going on in Virginia right now, I've said to the industry, you guys have got to watch this, because we are the mother lode of data centers. And if there is some adjustment of the kind of economic deal that's going to happen in Virginia, that is going to be copied by every other state around the country. And my pitch to the AI industry is, you know, don't just fight it like mad. Be proactive and say, yes, we're going to chip in more.
Starting point is 00:28:47 Not only make sure your electric rates don't go up and the facilities are appropriately shielded, but we're going to actually put money on the table to help through this economic transition. And, you know, I get a lot of head nods, but there's a lack of specific policy ideas. Alex, I've talked to everybody I can, and most policy experts and others are observing the problem or want to do things like, I'm trying to collect data. But what the actual reskilling, retraining program looks like, you know, we don't have a lot of good examples so far. Yeah, I'm sensing some frustration with the tech companies. Yeah. I mean, but I kind of get it. You know, if you think about the big guys, the hyperscalers, most of them actually started, you know, as either social media, or you've got Amazon and Microsoft and Nvidia. But they've kind of gotten through with, you know, kind of good lip service, but no rules or regulations in place.
Starting point is 00:29:59 And it's kind of like, this time, I think the seriousness, and back to your numbers on data centers, the fear is real and palpable. And I don't want this innovation to stop. But I do think sitting down and figuring this out in a more forward-leaning way is really essential. And that's what I'm desperately trying to do here in this job, at least: not allow this to become kind of divided, Ds versus Rs. All right, let's take a quick break and then come back and talk a little bit more about Anthropic and the Pentagon and the state of AI in warfare. Back right after this.
Starting point is 00:30:39 I've interviewed a lot of great tech founders on this show, and one surprisingly universal challenge comes up again and again: finding the right domain name. It's something I ran into myself when launching Big Technology. The names you want are often taken, and it's tempting just to settle and move on. But the founders I respect most don't settle on fundamentals, and your name is one of them. It should immediately signal what you actually built. That's what I appreciate about .tech domains. It just makes sense. It tells the world, your customers, your investors, and anyone Googling you, that you're building technology. Clean, direct, and no qualifiers. And I'm seeing more serious
Starting point is 00:31:19 startups leaning into it. Names like 1x.tech, aurora.tech, ces.tech, and so many more. If you're building something tech-first, don't settle. Secure your dot-tech domain from any registrar of your choice and make your positioning obvious from day one. Starting something new isn't just hard. It's terrifying. So much work goes into this thing that you're not entirely sure will work out. And it can be hard to make that leap of faith. When I started this podcast, I wasn't sure if anyone would listen. Now I know it was the right choice. It also helps when you have a partner like Shopify on your side to help. Shopify is the commerce platform behind millions of businesses around the world and 10% of
Starting point is 00:32:02 all e-commerce in the U.S. From household names like Allbirds and Cotopaxi to brands just getting started. With hundreds of ready-to-use templates, Shopify helps you build a beautiful online store that matches your brand style. Get the word out like you have a marketing team behind you. Easily create email and social campaigns wherever your customers are scrolling or strolling. It's time to turn those what-ifs into reality with Shopify today. Sign up for your $1 per month trial today at Shopify.com slash big tech. Go to Shopify.com slash big tech. That's Shopify.com slash big tech.
Starting point is 00:32:39 And we're back here on Big Technology Podcast with Senator Mark Warner. Senator, it's always great to speak with you. I was looking at the date of our last conversation. I can't believe it's been four years. Alex, that's my failing. We've got to make this much more frequent. I can't believe that either. Yeah.
Starting point is 00:32:56 So let's pick up on the Anthropic thing. You've definitely stated your, you know, your opposition to them being labeled as a supply chain risk in the first half. The U.S. government right now is in the middle of removing Anthropic from federal agencies. There's actually a six-month phase-out that the president has ordered. So can you talk about, because you know government agencies very well, is this something where Anthropic is already being removed, and you can't really see them being put back? Or is this six-month
Starting point is 00:33:32 deadline something like we've seen in the past with TikTok, a deadline that, because we know they need Anthropic, just gets pushed back again and again? Which is it? Alex, great question. Yeah, again, I go back to, like, you know, the TikTok issue. President Trump in his first term, and his Treasury Secretary, Steve Mnuchin, literally convinced me, you know, about the national security risk around TikTok, particularly because of the ability to alter the message, more the propaganda than the data collection. And then obviously, President Trump completely flipped on that issue. And TikTok's here to stay. And I'd still like to know more of the details on the controls the new American owners have. So I don't know
Starting point is 00:34:17 the answer to that, whether this is talk or they're actually being disconnected. And, you know, to take out what is, at least at this moment in time, probably the market leader, when there are actually benefits happening from the usage... And I, you know, I've got no particular beef with Anthropic. I'm not carrying their water here. But I am saying, when you can get thrown out, what happens to Anthropic could happen to OpenAI. It could happen to Amazon. It could happen to Google. You name the entity, and you're going to have to go through a political
Starting point is 00:34:56 litmus test. Now, I think Anthropic probably screwed up their negotiations with the Department of Defense. But to put out the supply chain designation, which I don't believe has ever been designated against an American company, this is a death warrant. And I don't think any company, technology-driven or not, wants to have a single individual, and this is not even the president, this is Secretary Hegseth, making that determination without some due process.
Starting point is 00:35:25 This is a big freaking deal. And I just hope, and, you know, I think the jury's out on this, I've been trying to talk to all of the other tech companies to say: even if you are Anthropic's biggest competitor, you don't want this precedent set, particularly because, at least with this administration, as we've seen time and again,
Starting point is 00:35:48 they may love you today, but that doesn't mean they're going to love you tomorrow. And, you know, take a political figure, think Marjorie Taylor Greene. If that kind of, you know, up-and-down approach is applied to all of our leading tech companies, you know, we're going to see, where we've always had advantages
Starting point is 00:36:09 in terms of our international take-up, people are going to say, heck, you know, maybe it's better to go with the Chinese model. Okay. So you're the vice chair of the Senate Intelligence Committee. Was or is the Pentagon building an AI-based surveillance program of Americans? That was one of the central contentions. I do not know the answer to that.
Starting point is 00:36:31 And I should. This administration has not been forthcoming. And unless we have bipartisan oversight, we're not going to get those answers. And I think there have been concerns raised, and this is not just around the Intelligence Committee; it ought to also be the Armed Services Committee and others. And I think I've had conversations with a lot of my Republican friends. I think I'm making the case that this is a big deal,
Starting point is 00:37:01 that we've got to know some of this. We might decide that that is the right choice. We may even decide, although I can't imagine this would be the case, that we're ready to move to AI weapons without a human in the loop. And it's easier to make that decision, for example, on an AI weapon without a human in the loop on defense, you know, a missile system that would fire based upon an incoming threat from the adversary,
Starting point is 00:37:26 you know, to protect an aircraft carrier. There's an argument there without a human in the loop. You know, on the offensive side, it's a much more challenging argument, but we ought to have those arguments rather than, you know, a single person, in this case Pete Hegseth, making that determination. Palantir recently demoed Maven Smart System at a conference and showed how it selected targets. It seems like Palantir is actually far more consequential in warfighting than Claude, although maybe there have been updates where Claude was embedded that we don't know about. I'm curious from your position, because you know this better than most, or almost everyone.
Starting point is 00:38:04 How important is Palantir there? And when you think about the war in Iran right now, is Palantir selecting the targets? Talk a little bit about that. I think Palantir has been a very successful company. I think Anduril has been a very successful company. I think the idea that, you know, these new entrants are shaking up the primes in many ways, you know,
Starting point is 00:38:28 makes sense. I also actually think that, you know, Alex Karp is thoughtful on a number of these issues. I know I raised real concerns about Palantir and the six other technology companies that have taken contracts with the Department of Homeland Security. And I have been extraordinarily concerned about, you know, DHS or ICE, as we saw people targeted in Minnesota. I mean, literally a lady who was up for the Global Entry pass got denied because they had evidence that she'd shown up at a protest. Do we really want DHS or ICE making those determinations? And, you know, Palantir and some of the companies are saying they are not doing that.
Starting point is 00:39:12 How do we independently validate that? This is where we're entering into this realm where, you know, at some point you still need third-party, objective experts, whether they be academic or other, trying to help keep both sides honest, both sides being government and the tech companies. And I find with some of these companies a willingness to participate, or at least they've told me they're willing to participate, through that kind of review and oversight. But it really is going to take both political parties in D.C. to realize this is not a Democrat-Republican issue. This is, like, we're setting the ground rules for stuff that, if we don't put ground rules in place,
Starting point is 00:39:57 could lead to a pretty spooky place. There's a reason why I think the overwhelming majority of science fiction movies about the future have this kind of dystopian future: that default is actually easier than thinking this through in a rational way. On the Palantir side of the Iran war, obviously it seems like the United States did target and hit that girls' school in Iran, and it was presumably bad targeting. Because, again, as the vice chair of the Senate Intelligence Committee, do you have any idea of whether a U.S. technology layer like Palantir was involved there? You know, I think we need a full investigation. And I'm, I'm a little old school in that I think
Starting point is 00:40:45 we ought to, you know, refrain from making a conclusion before you've got all the facts. This girls' school was literally right adjacent to, you know, an Iranian military base. Was this DIA? Was it CENTCOM? I mean, I think we need to get the facts out on this. But we all know, you know, technology makes mistakes. And that's where, you know, the rub comes with this kind of horrific event. Let's get the facts before we draw conclusions. But what is problematic is when the president of the United States, and I can't believe he was briefed, had as his initial reaction,
Starting point is 00:41:30 despite what came from the intelligence community: oh, this was the Iranians bombing their own school. And then they kind of said, well, like, here's the material that showed it was an American, you know, a missile. And then he said, well, maybe they got them. When that kind of absurdist response comes from the commander-in-chief, that undermines, I think, not only the confidence of the American people that we're going to get the truth; it also doesn't help us in terms of how the world views us. For all our flaws, we have been generally viewed as the good guys, and when we lose that designation,
Starting point is 00:42:07 you know, that doesn't make America safer. I'll just leave it at that at this point. Okay, that's a very telling answer. That's very interesting. All right, I have a couple more for you before we leave. First of all, on the AI job disruption question, you've mentioned bipartisanship a number of times. I want to put this to you: I'm going to be in D.C. a couple weeks from now, and I'd love to interview one of your Republican colleagues. So I'd love to get you somebody. Mike Rounds is very
Starting point is 00:42:36 thoughtful on this. I've got a lot of Republican friends that I think would love to sit down with you. And especially on some of the weapons issues, I think Mike Rounds is, you know, frankly, ahead of me on thinking through some of this stuff. Okay, so maybe I can get in touch with your staffers after this and we can find a way to connect with him. That would be great. Okay. Also, four years ago, we talked about an issue that's been, I think, really important to me, really important to many Americans, which is that we see, you know, whether it meets the legal definition or not, insider trading within Congress. And you were great in your statement saying we shouldn't see this anymore. But here we are
Starting point is 00:43:15 four years later. This is just one example that came through my timeline this week. It looks like Josh Gottheimer, who's on the House Intelligence Committee, bought Exxon twice in early February. Now, who knows if that's necessarily connected to the fact that the Iran war was brewing, but it doesn't look great. Why do you think it's been so difficult for Congress to pass legislation around this? I can't answer that. I mean, I don't know. It seems like it should be a no-brainer. You know, I'm lucky enough that I was able to put all of my holdings in a blind trust, independently managed. I don't know anything I own.
Starting point is 00:44:04 You know, I think we've kind of completely gotten out of all trading, and I've moved from mostly stocks to, I think, mutual funds. But there are, you know, there are issues I've seen. Like, you know, I was a venture capitalist for many years before I got into this stuff. You know, I've invested in companies that have, you know, taken 10 to 15 years to go from startup to a public company. And, you know, I have a policy that if something becomes public, we try to sell it, but that still shows up as, why is Warner selling this stock right now? I don't want to own the stock at this point. But, you know, it is, should you have to disgorge, even before, you know, a company that you had long before you were in public service? There is some, there is some complexity
Starting point is 00:44:52 to this stuff. And again, I've been very, very lucky. I've got the freedom, in that I was able to do very well in technology. You know, I'm going to be fine regardless. I don't want to chase people out from even going into public service, because if they're kind of somewhere along their career and, you know, they were a founder of a single company, what do they do? I don't know the full answer, but all of those are nits actually compared to
Starting point is 00:45:20 we ought to have a rule that members of Congress shouldn't trade stocks. But here's the part, Alex, that makes people even more cynical. I am right now in the middle of the final negotiations on trying to put in place certain rules around crypto. You know, crypto is here to stay. There are some, again, real beneficial aspects of crypto. But if we're going to have a market structure bill, and we've already passed a stablecoin bill, you know, one of the things that makes it difficult to get it finished is when the president of the United States so grossly,
Starting point is 00:46:00 totally enriches himself through this industry, and wants to say he wants to have ethics rules apply to Congress and members of the cabinet, but not to the first family. It's, you know, we ought to be passing these ethics restrictions, but boy, there ought not to be a carve-out for, you know, anybody whose name rhymes with grump.
Starting point is 00:46:26 Okay. Well, I'm with you on that. Look, Senator Warner, I can't say I'm more reassured that Congress has it under control on the AI front, but I am really thankful that you're out there, you know, stirring it up, working across the aisle, and trying to make some progress. I'm sure it's not easy, and I appreciate you doing it. I appreciate you spending the time here again.
Starting point is 00:46:44 No, Alex, we should do this on more than a quadrennial basis, because these issues are coming, as you know and I know. And we really need, and this is one of the things I would appeal for: you've got a very sophisticated audience. You know, if part of your audience has got ideas or suggestions, please, you know, I'm wide open for business on what these policy notions ought to be. So you can get to me easily, you know, online. But it's going to take all of us in this, because getting it wrong, boy, getting it wrong could be a major disaster.
Starting point is 00:47:20 But thank you for having me on, Alex. Definitely, yeah. It was great having you. And I'll tell you, with social media, that beat took me about 10 years before I ended up in D.C. covering hearings. The speed at which I had to, you know, call and say we've got to talk about AI is much faster. So thank you again. And I'm sure the audience won't be shy in writing you. Let me know. Thank you, Alex. Be well. Thank you. All right, everybody. Thanks for watching. And we'll see you next time on Big Technology Podcast.
