a16z Podcast - Why America Must Lead in AI Investment with Senator Young (R-IN)

Episode Date: April 22, 2024

Senator Todd Young (R-IN) speaks with a16z General Partner Martin Casado about the importance of open innovation and American leadership in AI, and why we need to support AI research at all levels — from the classroom to the war room. In this episode, we distinguish science fiction from science reality in the ever-evolving AI landscape.

Resources:
Find Senator Todd Young on Twitter: https://twitter.com/toddyoungin
Find Martin Casado on Twitter: https://twitter.com/martin_casado
Watch the American Dynamism stage talks on YouTube: https://bit.ly/3IqWn1W
To learn more about the American Dynamism Summit, visit our website: a16z.com/ad-summit

Stay Updated:
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Transcript
Starting point is 00:00:00 This isn't a particularly partisan issue. I think there's a tendency to catastrophize different technology areas as opposed to different potential outcomes. My colleagues typically don't think of an investment in the National Science Foundation as a national security investment, but it is. I think what you may have characterized in the past as fantastical, some fantastical doomsday scenarios, and there are a lot of those. And there may even be a couple that
Starting point is 00:00:29 are real, and we need to hedge against. Depending on who you ask, conversations around AI range from solving the world's toughest problems to triggering an irreversible apocalypse. We're going to rapidly compress the treatment and drug development timeline. We will come up with drought-resistant, climate change-resistant crops that'll feed the world.
Starting point is 00:00:53 Do you think that P-Doom, or P-Dume is a probability that always humanity has a catastrophic event? Do you think it's greater with or without AI? But these very fears and fantasies go back as far as 1863, when English writer Samuel Butler, inspired by Darwin, wrote a newspaper article suggesting that humans would create their own successors, a quote, self-regulating, self-acting power that would make us the inferior race. And who could forget the murderous Hal 9,000 in Stanley Cooper's 2001, a Space Odyssey,
Starting point is 00:01:24 or the dystopian crime prediction tech in minority report, or even Pixar's starry-eyed robot, Wally, all of which have shaped our relationship with the kind of science fiction that now in some way is becoming science reality. And as this is happening, everyone is weighing in. From the researchers building the technologies, to the funders ushering in the next wave of infrastructure, to even the consumers,
Starting point is 00:01:49 reimagining the ways that these tools can be used. And of course, governments too are increasingly paying attention. This is going to be another 10x expansion in value for sure. And historically, the U.S. captured that, right? That's like why the Internet age was so great for us, right? And so is that part of the calculus or is the focus really on, like, how do we keep this stuff from hurting us? We also need to embed the standards that we have
Starting point is 00:02:12 as it relates to privacy, consumer protection, and other things into those technologies rather than leaving it to, I'll pick on the Chinese Communist Party again. And that's why Senator Todd Young and three of his colleagues hosted the AI Insight Forum, a series of roundtable events that brought together lawmakers, business leaders, government agencies, and the greatest minds and technology to grapple with AI at this absolutely pivotal moment in its evolution. And in today's podcast, A16C general partner Martin Casado
Starting point is 00:02:42 sits down with Senator Young to discuss what they learn from the forum, where public policy meets AI and how the United States can remain a global leader in this emerging technology, striking the right balance between innovation and regulation. Senator Young has represented Indiana in the Senate since 2016, while Martin currently sits on the board of several AI companies like ambient AI and pin drop security, but also has long been involved in the intersection of technology and the government. In fact, he started his career at the Lawrence Livermore National Laboratory working on large-scale simulations for the Department of Defense. So as we collectively seek to distill science fiction from science reality. Here are Senator Todd Young and Martin Casado.
Starting point is 00:03:28 As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security and is not directed at any investors or potential investors in any A16Z fund. Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see A16C.com slash Disclosures. We're going to cover in general how policy intersects with AI
Starting point is 00:04:01 and how we're thinking about it and how do we keep the United States ahead. As part of that, you had this Senate AI Insight Forum. I thought maybe you could first just kind of describe what that was and we can talk about what the follow-up has been. Sure. Well, Martin, thank you so much for having me. This is a great event. I understand your speakers.
Starting point is 00:04:18 I shared some great information today. For those who may not be familiar with how Congress typically does its business, when we're really following the normal processes we work through the committee process. So if we had gone the traditional route and trying to address artificial intelligence issues, that would have meant that a number of committees of jurisdiction would have started to hold hearings on artificial intelligence in a very public way. Members of their staff would have prepared members hours before each given briefing, and we would have learned what we could. from those experts who briefed us over the course of two or three hours. But oftentimes members aren't able to dive deeply into the essence of various policy issues when we take that approach. So instead, Senator Schumer, working with three other senators, decided to deviate from the process.
Starting point is 00:05:06 Instead, we held first three briefings and then nine, as you mentioned, AI Insight Forums. These were roundtable events: long tables full of talented innovators, entrepreneurs, policymakers, and others. No cameras were present, unlike what is typically the case in our hearings. So there was a lot of candid conversation on dedicated topics. We held Insight Forums on national security, alignment, innovation, and other topics. And members were invited to witness these guided conversations and take extensive notes. And then after that, we have a lot of work product that we can deal with: recommendations, policy concerns, and other things that will lead to actual
committee hearings and legislation being proposed, as is traditionally done. So those were really constructive. Members gave a lot of positive feedback about it. We learned a lot in the process. What are your key takeaways having sat through them? Well, two really. I would say the first is just a general sense that this isn't a particularly partisan issue. I would say one could probably detect slight differences in approaches when it comes to regulation versus innovation as you consult with members on either side of the aisle. But as it related to this topic, I think, and this gets into the second issue, there was a general embrace by my colleagues after they heard from some of the world's best minds that we needed to regulate, but it would
Starting point is 00:06:27 have to be a light touch approach. We had to be very careful not to go overboard and constrain what is right now sort of leading edge industry and the United States is in the lead. We want to keep it that way. Yeah. Do you think we can expect some legislation framework come out of this? Is there a time frame or is this still kind of too early? Well, I think we can't. My hope is that we can expect all kinds of legislative efforts, not just this year, but in coming years. I don't think we have to rush to do everything at once. We need to get clarity on a lot more of people's concerns.
Starting point is 00:07:00 But more importantly, what they perceive to be their opportunities is that the technology evolves. As we learn more about the technology, as new versions are released, so sort of a wait-and-see approach. I do, however, think that there will be some things we agree on now. We agree that there needs to be some resident expertise within the White House in an ongoing basis that other departments of government look to when it comes to issues surrounding artificial intelligence. I think we need to revamp our human resources approach so that we embed more expertise in every agency of government. We can dialogue with businesses, consumers, and others about things within a department's field that touch on artificial intelligence. and then there'll be some things, as you heard about earlier today, that deal with national security where we're already behind.
Starting point is 00:07:50 We need to make some key investments in people and platforms. And so all of those things are among the things that I think in the coming months will be addressed through the committees of jurisdiction in a bipartisan way if politics doesn't get in the way. And as it relates to most of this, I don't think it will. This is a bit of a personal story. So I did my undergrad in Flagstaff, Arizona,
Starting point is 00:08:10 like small Arizona town. And at the university, which is not a notable university, there was a lot of investment from DOD in supercomputing. And they bought supercomputers, and I worked on them as a researcher. And that led me to a job in Lawrence Livermore National Labs. And then at Lawrence Livermore National Labs, I worked in the weapons program. I worked on these huge supercomputers with a bunch of other people that came from similar paths. And I remember thinking at the time, isn't it amazing that we live in this country
Starting point is 00:08:34 where people understand those new technologies that are, like, very powerful. And they invest all the way down to Flagstaff, Arizona to stay ahead and find the people that are very interested in doing that. it feels to me having gone through that and then having gone through also the intelligence community which also embraced like the internet and a lot of the network technology
Starting point is 00:08:53 it feels to me like we're in a bit of a doctrine shift now that when new technologies come like we kind of are more afraid of them than wanting to get ahead of them and so like I just wanted to ask you is my perception correct that there has been a bit of a doctrine shift in the United States
Starting point is 00:09:08 where we kind of worry about the implication of technologies before we're able to actually harness them and become leaders in them Or is that maybe just like a perspective from the outside? I think it may be fair. I think this is one of the reasons these AI Insight forums were helpful to sort of keep certain colleagues honest that might have been ready to move forward aggressively
Starting point is 00:09:28 and with various regulatory actions. I can't promise that won't happen in certain discrete areas. But I think there's a tendency to catastrophize different technology areas as opposed to different potential outcomes. Historically, one of the many reasons And so we've been a dynamic country and created a favorable regulatory atmosphere for our technology developing businesses is we have laws that reflect our values. We apply those laws to technologies only if they run afoul of certain prohibited behaviors that normally don't even mention the technology, right?
Starting point is 00:10:02 And if we can take, at least for the most part, a tech agnostic approach as it relates to AI development and adoption, I think that that would be more helpful. So no special carve-outs? no special benefits except when explicitly those rare instances where there's a compelling argument that we need to. I went to one of these AI meetings
Starting point is 00:10:23 and there's a lot of concern around AI and I'll listen to the concern. I'm like, wait, that's not AI, that's like all computers. It's like that's not AI, that's the internet. That's been in these things for a very long time and so I've gotten this sense that maybe there's a response
Starting point is 00:10:37 to the prior battle. Like people felt like we didn't get it right with the internet and therefore they're trying to use that consternation now but for a different technology. Well, I think that's absolutely happening. It's happening with respect to data right now. Well, let me tell you what the dangerous.
Starting point is 00:10:53 So having been through the rise of the Internet very closely, there were things that were very unique to the Internet, things like asymmetric attack, right? The more that you invested in the Internet, the more vulnerable you were. And if you're dealing with a terrorist threat, that's a big deal because they didn't have infrastructure to take down. We did.
Starting point is 00:11:07 So that was a very particular thing that impacted defense. There's other things like exponential growth. Things can get out of control, and it's everywhere. There's these huge implications. I will say they actually don't exist with AI, but somehow people are imbuing AI with it. And so I'm just wondering if there's maybe like a lack of literacy, or maybe people feel like they kind of didn't really do it on the previous one.
Starting point is 00:11:28 And this is being applied now. And if that's the case, what can we do to educate or kind of get ahead of it? Well, this is helpful. You're in the right town, right? But to the extent, some of you have not been visiting with members of the Senate, members of the House who will soon be playing with live rounds, as we said in the Marine Corps, right? We'll be considering actual bills, we'll be developing bills, and we'll be voting on them. And there will still be some members who haven't done extensive homework and carry
Starting point is 00:11:53 superficial views of what constitutes artificial intelligence technology. And we need all of you to disabuse them of this being an extension of social media. This is very different. And data, we're going to have to think about data, perhaps, in a different way. So I do think that some are either confused, there are others who are just probably using the opportunity of this legislative effort, which is ahead of us, to try and pass bills that they had prepared two decades ago. Yeah, yeah, exactly. Right.
Starting point is 00:12:24 I was like, now's my time. I've done it in other contexts, but if we do it in this context, it could be quite damaging. And how much is the economic reality hit the calculus when it comes to these discussions? Because none of us really know the future, I will tell you when economics change this much, right? And the marginal cost of creation goes to zero, this is going to be another 10x expansion in value for sure. And historically, the U.S. captured that, right? That's like why the Internet age was so great for us, right? And so is that part of the calculus, or is the focus really on how do we keep this stuff from hurting us?
Starting point is 00:12:55 Well, we tried to make this part of the calculus. In fact, we spent at least half of the time, perhaps we should have spent even more on the AI Insight forums discussing the upsides. It's not as though we had to address the upsides. The upsides are going to be a result of innovators and entrepreneurs and investors. We understand that. But I thought it was very important for us to focus on that, to provide some balance to the staff members and my colleagues who were present. So constantly reminding people of that, I think, is going to be very important. There are, I think, what you may have characterized in the past is fantastical, some fantastical doomsday scenarios, and there are a lot of those.
Starting point is 00:13:33 And there may even be a couple that are real, and we need to hedge against, right? And we need to take those seriously. But we cannot become so fixated on those that we lose the forest from the trees. And we can do this. It'll be complicated. It'll be challenging. We won't get it all right in the beginning. And we shouldn't try and tackle everything in the beginning, which is a bit of a concern.
Starting point is 00:13:52 But I think we'll get it. It'd be great for you to talk about on the positive side, your takeaways from the discussions on how AI can benefit America that's not just strictly economic, from an innovation or a tech or a daily life. It's very difficult for us to quantify some of these things. I can sound very specific and learned by just throwing out figures. Within X years, we'll solve cancer, but no, we're going to rapidly compress the treatment and drug development timeline, something I've heard from our pharmaceutical makers. We will come up with drought-resistant, climate change-resistant crops that'll feed the world,
Starting point is 00:14:29 become more productive, drive down the cost of food. We'll develop the ability to have tailored, personalized tutor services and mental health services, vastly expanding the workforce in those areas, leveraging technology so that we'll have mental health providers in areas that we don't. We can clean up the environment in all sorts of creative ways. Ways that our artificial intelligence technologies will illuminate some of the solutions to hard challenges. will have ways of reading health records so that we can discern probabilities of getting certain types of infections, certain prophylactic measures we can take to extend, improve, and save people's lives.
Starting point is 00:15:12 We don't have the benefit of those AI sort of insights. We can decrease the rate at which mistakes are made within a medical health care context. So, I mean, it goes on and on. And this is what we should be talking about. I'm sitting on a commission right now pertaining to synthetic biology. And so much of advanced biology these days really involves the use of artificial intelligence technologies. And so I have charged the commission with educating me and members on specific things, innovations from material science to medicine to environmental health that I can start touting because that's going to be very powerful for my constituents and also for colleagues. As an outsider, it feels to me like the general discourse is getting more sane.
Starting point is 00:16:00 And so I've actually gotten a lot of faith in the process. Well, I tend to make people feel that way. But I mean in general, I just meant the national discords. I appreciate it. But the natural discourse, it does feel like... I just saw you meant in this conversation. Also this one too, yeah, yeah, yeah. But I think people are becoming much more reasonable about it and so forth.
Starting point is 00:16:16 And I just feel like you're one of the people that's really been advocating for AI research. I think it's the right thing to do. Thank you very much for doing that. Is your job getting easier, am I right, or is it shifted noticeably since it started, say, two years ago? I would say we were able to foresee the need for more research dollars as it relates to artificial intelligence. Myself, Senator Schumer, who helped get signed into law of the Chips and Science Act. We made a massive allocation for additional federal research through places like the National Science Foundation and Department of Energy. It wasn't easy to get that included with the chips piece, but arguably that will even be more.
Starting point is 00:16:52 consequential towards our economic security and our national security in the longer run than will the chips piece. So that authorization will allow us to now appropriate money if members of Congress can be persuaded for AI. And so I feel like half of that hard work is done. And we've got a lot more work to do, but I think the job's getting easier because we pass the chips act. It's not just the authorization of research, but we've already made the argument very recently. that something that falls outside of the direct DoD context, as most of the Chips and Science Act does, is indeed a national security investment. So you've got the institutional muscle memory used to that notion.
Starting point is 00:17:35 That is a historical and modern history. My colleagues typically don't think of an investment in the National Science Foundation as a national security investment, but it is. And then we also were able to make the case to my colleagues that by making these sorts of critical investments in research, and in next generation technologies, as we did through the Chips and Science Act. It's going to lead to a lot of economic growth.
Starting point is 00:17:58 And we were able to personalize that argument, state by state. So that's the very same argument that we have for this situation, artificial intelligence. So this is going to be repeating a little bit, and I apologize. I just think this is so important.
Starting point is 00:18:11 So many of the DC Insiders here understand the Chipsack and why it's important. A lot of people here that come from the Investment Committee or founders or whatever probably don't understand the implication. Yeah. My view, it is literally one of the most significant pieces of legislation in 50 years on innovation.
Starting point is 00:18:23 It's a huge, huge, huge benefit. It was the right thing to do. And so if you could just take just a couple minutes to describe at the highest level for those that are not insiders, what it is, I think it's so important to get this message out. So the Chipson Science Act was at once. It was a $53 billion investment
Starting point is 00:18:40 in both research and incentives for the semiconductor industry. The incentives would be to reshore some of our manufacturing capacity so that our supply chains, would not be as vulnerable to interruption as they, of course, were during the global pandemic, and they could be, God forbid, there was a geopolitical effort to interrupt those supply chains, say, by the Chinese Communist Party, making aggressive actions towards Taiwan.
Starting point is 00:19:08 It was also a national security effort because, of course, we need microprocessors, and we need our own radiation-hardened, domestically manufactured micro-processors to go into nuclear weapons, to go into nuclear weapons, to go into our radar systems, and all manner of other things. So that was the micro-processor piece, and we have been in the process now where some of that money is starting to flow. Over $200 billion of private capital has been invested,
Starting point is 00:19:35 and we're not even at a billion dollars, even close to it, of federal monies that has been released. So it's paying handsome dividends. The market is responding, and we're becoming less risky in our supply chains. The idea was not to become independent of other countries. And then there's this whole other and science piece, which for the purposes of this conversation, big investments in research, which I mentioned earlier, not just in artificial intelligence, but that research can also flow to hypersonics, quantum computing, synthetic biology, autonomous systems, and other areas far upstream, of course, but a bit more applied research than the curiosity-driven research that most people tend to associate with the NSF. Yeah, that's great.
Starting point is 00:20:20 Yeah. So a number of investors are kind of like these kind of free markets solve everything and like kind of almost libertarian event. I am not that. I worked at Livermore. I worked for DOE. I'm a two more efforts. I'm a huge believer in the government.
Starting point is 00:20:30 I'm a huge believer in actually national institutions and involvement. That said, there's been an ongoing dialogue with what is the right roles between kind of private and public partnerships for things like AI? What is the right balance? So I love your view on how we can work together on this. Yes. Where the government stops, you know, where the free markets, you know, pick up. Like, how do you think about that as we go forward?
Starting point is 00:20:53 Well, I look to history. I look at what we did in the space race. I look at the innovations that have occurred through our DOE labs, like the one you worked at, like the one that developed fracking technology before the frackers claimed it their own, right? So many innovations have been earned off of the toil of our researchers in our DOE labs through our land-grant colleges and the other constellation of research agency. So we need to keep making those investments. Over the years, the federal government started to pull back from research that wasn't curiosity-driven,
Starting point is 00:21:27 wasn't basic research, theoretical sort of research, as opposed to more applied. We need some more applied research. I think that's one of the lessons we've learned in recent history. Beyond that, we need clear rules of the road, clear regulations. Those regulations ought to typically be, and I use my words very carefully, here because they're exceptions to everything. But they typically should be technology agnostic so that we don't favor or constrain different types of technologies within the market. That's investors, that's business people, that's consumers. And then we're going to have to work with
Starting point is 00:22:01 our partners and allies on development of standards for some of these what I'll call platform technologies. There aren't many. Artificial intelligence and synthetic biology are the ones that really come to mind. So there's a diplomatic component that if we want our values embedded, and this and follow-on generation AI technologies, we better develop and design those technologies here, but we also need to embed the standards that we have as it relates to privacy, consumer protection, and other things into those technologies,
Starting point is 00:22:35 rather than leaving it to, I'll pick on the Chinese Communist Party again. Yeah, perfect. Okay, so we have the last minute. So I only thought it was fair because when I was at the Insight Forum, you asked me this question, so I'm gonna ask you two questions
Starting point is 00:22:46 that you asked me. The first one, it may be a hard question. It's not. What would you give the chances of P. Doom? The probability of P. Doom? The probability of P. Doom. This is colored by my optimism, right? But low. You didn't ask me to make it quantitative, qualitative. Yeah, low but not non-existent, which is how most of the doomsday scenarios are. They're low probability, high cost, and we need to hedge against them. But let me say this. The first step in hedging against them is to really seek to understand them,
Starting point is 00:23:22 to study them a lot better so that before you take more costly action, constraining the opportunity costs for humanity ahead, we really know what we're talking about. I love that. One last very quick question with my flare. Do you think that P. Doom, where P. Doom is a probability
Starting point is 00:23:38 that humanity has a catastrophic event? Do you think it's greater with or without AI? I think in the short run, You'll see I'm not pandering here. Oh, I love it. This is great. Here's what we're probably going to have. We're going to have in the short run some unsteadiness, right? Yeah.
Starting point is 00:23:52 Because we're trying to learn countermeasures. We're trying to figure out how systems work. We're trying to figure all sorts of things. And then, in the long run, I think we can actually push the wrist down. Awesome. Thank you so much. You were wonderful. Thank you so much.
Starting point is 00:24:06 Thanks. Thank you so much. that you can get an inside look into A16Z's American Dynamism Summit at A16Z.com slash 80 Summit. There, you can catch several of the exclusive stage talks, featuring policymakers like Deputy Secretary of Defense Kathleen Hicks or Governor Westmore of Maryland, plus both founders from companies like Anderil and Coinbase and funders like Mark Cuban, all building toward American dynamism. Again, you can find all of the above at A16Z.com slash 80 Summit. And we'll include a link in the show notes.
