a16z Podcast - Politics & the Future of Tech with Marc Andreessen and Ben Horowitz
Episode Date: April 5, 2024

"If America is going to be America in the next one hundred years, we have to get this right." - Ben Horowitz

This week on "The Ben & Marc Show", a16z co-founders Ben Horowitz and Marc Andreessen take on one of the most hot-button issues facing technology today: tech regulation and policy. In this one-on-one conversation, Ben and Marc delve into why the political interests of "Big Tech" conflict with a positive technological future, the necessity of decentralized AI, and how the future of American innovation is at its most critical point. They also answer YOUR questions from X (formerly Twitter). That and much more. Enjoy!

Resources:
Watch full episode: https://youtu.be/dX7d6bRJI9k
Marc on X: https://twitter.com/pmarca
Marc's Substack: https://pmarca.substack.com
Ben on X: https://twitter.com/bhorowitz
Ben's Article: "Politics and the Future" bit.ly/3PGKrgw

Stay Updated:
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
The entire world that's going to use AI is going to benefit from this kind of hyper-competition
that's going to potentially run for decades.
And so I think if you put these CEOs in a true serum, what they would say is that's actually their nightmare.
That's why they're in Washington.
Crypto is really our best answer for getting back to delivering the Internet back to the people
and away from the large tech monopolies.
It is the one technology that can really do that.
And if we don't do that, over the next five years, these monopolies
are going to get much, much stronger.
Probably some of them will be stronger than the U.S. government itself.
We're definitely on the side of freedom to innovate.
Having said that, that's not the same as saying no regulations of anything ever.
Ignoring tech is kind of no longer an option in the government
in that they've seen it impact elections and education and everything.
And Big Tech has been present in Washington,
but Big Tech's interests are not only very different from startup
innovators' interests, but we think also divergent from America's interests as a whole.
Hello, everyone. Welcome back to the A16Z podcast. In today's episode, you'll get to hear
directly from A16Z's co-founders, Marc Andreessen and Ben Horowitz, as they discuss one of the
most hot-button issues of today. That is politics and the future of tech. Now, this is
obviously a climactic year for both policy and technology.
But the overlap of these two fields has been increasing for quite some time.
So today, Mark and Ben sit down to answer audience questions around this topic,
touching on the important and increasing distinction between big and little tech,
AI regulation and funding, the role of tech in geopolitics,
and even why the firm has chosen to get involved in tech policy.
Now they cover much more than that, including whether Mark or Ben would ever run for office themselves.
And you'll have to listen to the end for that one.
Enjoy.
The content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security and is not directed at any investor or potential investors in any A16Z fund.
Please note that A16Z and its affiliates may maintain investments in the companies discussed in this podcast.
For more details, including a link to our investments, please see a16z.com/disclosures.
Welcome back, everybody. We are very excited for this episode. We are going to be discussing a lot of hot topics. The theme of today's show is tech and policy and politics. There's just a tremendous amount of heat right now in the tech world about politics. There's a tremendous amount of heat in the political world about tech. And then we as a firm, and actually both Ben and I as individuals, have been spending a lot more time in policy and politics circles over the last several months. And we as a firm have a much bigger push than we used to, which Ben will describe in a moment. The big disclaimer that we want to provide up front for this is that we are a
nonpartisan firm; we are 100% focused on tech politics and policy. We today in this episode are going
to be describing a fair number of topics, some of which involve partisan politics. Our goal is to
describe anything that is partisan as accurately as possible and to try to be as sort of fair-minded
and representing multiple points of view as we can be. We are going to try very hard not to
take any sort of personal political partisan position. So please, if you could, grant us some
generosity of interpretation in what we say; we are trying to describe and explain,
as opposed to advocate for anything specifically partisan.
We advocate for tech policy topics.
We do not advocate for other partisan topics.
So, yeah, on that theme, Ben, a little while ago
you wrote a blog post about our firm's engagement in politics and policy.
We sort of laid out our goals and then also how we're going about it.
And we're actually quite transparent about this.
And so I was hoping maybe as an introduction for people who haven't seen that
if you could walk through what our plan and strategy is and how we think about this.
Yeah, it kind of starts with why now.
Why get involved in politics now?
Historically, tech has been a little involved in politics, but it's been relatively obscure issues:
H-1B visas, stock option accounting, carried interest, things like that. But now the issues are much
more mainstream. And it turns out that for most of kind of the software industry's life,
Washington just hasn't been that interested in tech or in regulating tech, for the most part.
But starting kind of in the mid-2000s, as software ate the world and
tech started to invade all aspects of life,
Ignoring tech is kind of no longer an option in the government
in that they've seen it impact elections and education and everything.
And I think policymakers really want to get in front of it. That's a term that we hear a lot:
"We need to be in front of these things this time, not like last time when we were behind the curve."
And so tech really needs a voice and in particular little tech needs a voice.
So Big Tech has been present in Washington, but Big Tech's interests are not only very different from kind of startup innovators' interests, but we think also kind of divergent from America's interests as a whole.
And so that just makes it quite imperative for us to be involved, not only to represent the startup community, but also to kind of get to the right answer for the country.
And for the country, this is, we think, a mission-critical effort because if you look at the last
century of the world, and you say, okay, why was America strong? And why was basically any country
significant in terms of military power, economic power, cultural power in the last hundred
years? And it was really those countries that got to the Industrial Revolution first and
exploited it best. And now at the dawn of the kind of information age revolution, we need to be
there and not fall behind, not lose kind of our innovative edge. And that's all really up for grabs.
And really the kind of biggest way America would lose it, because we're still like from a
capitalistic system standpoint, from an education standpoint, and so forth, from a talent
standpoint, we're extremely strong and should be a great innovator. But the thing that would stop
that would be kind of bad or misguided regulation that forces innovation elsewhere out of the
country and kind of prevents us ourselves, America and the American government from adopting
these technologies as well. And kind of driving that, driving the things that would kind of make
us bad on tech regulation, is first really Big Tech, whose goal
is not to drive innovation or make America strong,
but to preserve their monopoly.
We've seen that play out now in AI in a really spectacular way,
where Big Tech has pushed for the banning of open source for safety reasons.
Now, you can't find anybody who's been in the computer industry
who can tell you that any open source project is less safe.
First of all, from a hacking standpoint, you know,
and you talk about things like prompt injection,
and then new attacks and so forth,
you would much more trust an open source solution
for that kind of thing.
But also for a lot of the concerns of the U.S. government
about copyrights, where does this technology come from, and so forth.
Not only should the source code be open,
but the data should probably also be open as well,
so we know what these things were trained on.
And that's also for figuring out what their biases are and so forth.
How can you know if it's a black box?
So this idea that closed source would be safer,
and big tech actually got some of this language
into the Biden administration executive order,
like literally under the guise of safety
to protect themselves against competition is really scary.
And so that's kind of a big driver.
The other related driver is, I think,
this combination of big tech pushing for fake safetyism
to preserve their monopoly
and then rather thin understanding
of how the technologies work in the federal government.
And so without somebody kind of bridging the education gap,
we are, as a country, very vulnerable to these bad ideas.
And we also think it's just a critical point in technology's history to get it right,
because if you think about what's possible with AI,
so many of our country's kind of biggest challenges are very solvable now,
things like education, better and more equal health care,
just thinning out the bureaucracy that we've built and making
the government easier to deal with, particularly for kind of underprivileged people trying to get
into business and do things and become entrepreneurs. All these things are made much, much better by
AI. Similarly, crypto is really our best answer for getting back to delivering the internet
back to the people and away from the large tech monopolies. It is the one technology that can really
do that. And if we don't do that, over the next five years, these monopolies are going to get much, much
stronger. Probably some of them will be stronger than the U.S. government itself. And we have this
technology that can help us get to this dream of stakeholder capitalism and participation for
all economically, and we could undermine the whole thing with poor regulation. And then finally,
in the area of biology, we're at an amazing point. If you look at the kind of history of biology,
we've never had a language, much like we never had a language to describe physics for a thousand
years. We didn't have a language to really model biology until now. The language for physics was
calculus. The language for biology is AI. And so we have the opportunity to cure a whole host
of things we could never touch before, as well as kind of address populations that we never even
did any testing on before and always put in danger. And this again, you have Big Pharma, whose interest
is in preserving the existing system because it kind of locks out all the innovative competition.
And so for all those reasons, we've, like, massively committed the flag and the firm to being
involved in politics. So you've been spending a tremendous amount of time in Washington.
I've been spending time in Washington. Many of our other partners, like Chris Dixon and Vijay Pande,
have been spending time in Washington. We have real actual kind of a lobbying capability within the
firm, and we'll talk about that some more. We call it government
affairs, but they're registered lobbyists, and they're working with the government to set
up the right meetings and help us get our message across. And then we're deploying a really
significant amount of money to basically pushing innovation forward, getting to the right
regulation on tech that preserves America's strength. And we are not only committed to doing that
this year, but for the next decade. And so this is a big effort for us. And we thought it would be a good
idea to talk about it on the podcast. Thank you. That was great. And then, yeah, the key point there
at the end is worth double underlining, I think, which is long-term commitment. There have been
times with tech, specifically where there have been people who have kind of cannonballed their way
onto the political scene with large sort of money bombs. And then maybe they were just single issue
or whatever, but they're in and out. Or they're just in and out. It was just like they thought
they could have short-term impact. And then two years later, they're gone. We're thinking about that
very differently. Yeah, and that's why I brought up the historical lens. We really think that
if America's going to be America in the next hundred years, we have to get this right. Good. Okay,
we're going to unpack a lot of what you talked about and go into more detail about it. So I will get
going on the questions, which again, thank you everybody for submitting questions on X. We have a
great lineup today. So I'm going to combine a bunch of these questions because there were some themes.
So Jared asks, why has tech been so reluctant to engage in the political process, both at the local
and national level until now? And then Kate asks,
Interestingly, the opposite question. I find this juxtaposition very interesting because
it gets to the nature of how we've gotten to where we've gotten to.
Kate asked, tech leaders have spent hundreds of millions lobbying in D.C., right?
The opposite point.
In your opinion, has it worked and what should we be doing differently as an industry when it
comes to working with D.C.?
And so I wanted to kind of juxtapose these two questions because I actually think they're both
true.
And the way that they're both true is that there is no single tech, right?
And to your point, there is no single tech.
Maybe once upon a time there was, you know, and I would say my involvement in political
efforts in this domain started 30 years ago. I've seen a lot of the evolution over the last
three decades, and I was in the room for the founding of TechNet, which is one of the sort of
legacy groups, founded by John Chambers and John Doerr. So I've kind of seen a lot of twists and turns on this
over the last 30 years. The way I would describe it is, you know, as Ben said: one is, was there
a sort of a distinction and a real difference of view between Big Tech and Little Tech
20, 30 years ago? Yes, there was. It's much wider now. I would say that whole thing is really
gapped out. You probably remember big tech companies in the 80s and 90s often actually didn't really
do much in politics.
Probably most famously, Microsoft.
Everybody at Microsoft during that period would probably say they had
underinvested, kind of given what happened with the antitrust case
that unfolded.
Yeah, actually, one issue we were united on was stock option
accounting, which, interestingly, we were
against Warren Buffett on, and Warren Buffett was absolutely wrong on it
and won. And it actually very much strengthened the tech
monopolies. So I think it did the opposite of what people,
certainly in Silicon Valley, wanted, and of what I think
people in Washington, D.C., and America would have wanted, which was not to make these monopolies so strong,
using their market cap to further strengthen their monopoly, because we moved from stock options
to, you know, it's too esoteric to get into here, but let's just say, trust me, it was bad.
Yes, yes. It was very good for big companies, very bad for startups. Yeah, and actually,
that's another thing that actually happened in the 90s and 2000s is, so there's a fundamental
characteristic of the tech industry, and in particular tech startups and tech founders,
Ben and I would include ourselves in that group, which is we are idiosyncratic, disagreeable,
iconoclastic people. And so there is no tech startup association. Like every industry group in
the country has like an association that has like offices in D.C. and lobbyists and like major
financial firepower. And you know these under names like the MPAA in the movie industry and the
RIAA in the record industry and the National Association of Broadcasters and the National
Oil and Gas Association and so forth. Like every other industry has these groups where
basically the industry participants come together and agree on a policy agenda. They hire lobbyists
and they put a lot of money behind it.
The tech industry, especially the startups,
we've never been good at agreeing on a common platform.
And in fact, Ben, you just mentioned the stock option accounting thing.
Like, that's actually my view of what happened at TechNet,
which is TechNet was an attempt to actually get like the startup founders
and the new dynamic tech companies together.
But the problem was we all couldn't agree on anything other than basically there
were two issues.
We could agree on stock option expensing as an issue
and we could agree on carried interest for venture capital firms as an issue.
Yeah.
Carried interest tax treatment.
And so basically what ended up happening was, in my view,
kind of TechNet early on got anchored on these, I would say, pretty esoteric accounting and financial
issues, and just could not come to agreement on many other issues. And I think a lot of attempts to
coordinate tech policy in the Valley have had that characteristic. And then look, quite honestly,
the other side of it, Ben, you highlighted this, but I want to really underline it. It's just like,
look, the world has changed. And I would say we're up around until about 2010. I think you could
argue that politics and tech were just never that relevant to each other. For the most part,
what tech companies did was they made tools. Those tools got sold to customers. They used them in
different ways. And so how do you regulate database software or an operating system or a word
processor or a router? It's like regulating a power drill or a hammer, right? Yeah, exactly, right,
exactly. What are appropriate shovel regulations? And so it just wasn't that important. And then
this is where I think Silicon Valley kind of deserves its share of blame for whatever's gone wrong,
which is, as a consequence, I think we all just never actually thought it was that important
to really explain what we were doing or to be really engaged in the process out there.
And then, look, the other thing that happened was there was a love affair for a long time.
And there was just a view that like tech startups are purely good for society.
Tech is purely good for society.
There were really no political implications to tech.
And by the way, this actually continued, interestingly, up through 2012.
People now know all the headlines that social media is destroying democracy and all these
things, which really kicked into gear after 2015, 2016.
But, you know, even in 2012, social media had become very important
in the 2012 election, but the narrative in the press was almost uniformly positive.
You know, it was very specifically that social media is protecting democracy by making sure
that certain candidates get elected.
And then also, by the way,
with Obama, there were literally headlines
from newspapers and magazines that today
are very anti-tech
but were very pro-tech at that point,
because the view was that tech helped Obama get re-elected.
And then the other thing was actually the Arab Spring.
There was this moment where it was like tech
is not only going to protect democracy in the U.S.,
but it's going to protect democracy all over the world.
And Facebook and Google were,
at the time, viewed as the catalysts for the Arab Spring,
which was going to, of course, bring a flowering of democracy
to the Middle East, which has quite...
It didn't work out that way, by the way.
It did not work out that way.
And so, anyway, the point is, it is relatively recent in the last 10, 12 years that everything has just kind of come together.
And all of a sudden, people in the policy arena are very focused on tech.
People in the tech world have very strong policy, politics opinions, the media weighs in all the time.
And by the way, none of this is a U.S. only phenomenon.
We'll talk about other countries later on.
But there's also these issues that are playing out globally in many different ways.
But I guess one thing I would add is like, when I'm in, I do a fair amount in D.C. on the non-political side.
And when I'm in meetings involving national security or intelligence or civil policy of whatever kind,
it's striking how many topics that you would not think are tech topics end up being tech topics.
And it's just because like when the state exercises power now, it does so with technologically enabled means.
And then when citizens basically resist the state or fight back against the state, they do so with technologically enabled means.
And so there's, as we sometimes say, this sense that we're the dog that caught the bus on this stuff, right,
which is, we all wanted tech to be important in the world.
It turns out tech is important in the world.
and then it turns out the things that are important in the world end up getting pulled into politics.
Yeah, yeah, no, I think that's right.
On the second part of the question: why has tech been so ineffective despite pouring all that money in?
And I think there are like a few important issues around that.
One is really arrogance, in that I think we in tech and a lot of the people who went in are like,
oh, we're the good guys, we're for the good, and everybody will love us when we get there,
and we can just push our agenda on the policymakers without really putting
in the work to understand the issues and the things that you face as somebody in Congress or
somebody in the White House in trying to figure out what the right policy is. And I think that
we are coming at that from kind of our cultural value, which is we take a long view of relationships.
We try never to be transactional. And I think that's especially important on policy because
these things are massively complex. And so we understand our issues and our needs.
But we have to take the time to understand the issues of the policymakers and make sure that we work with them to come up with a solution that is viable for everyone.
And so I think that's thing one.
I think tech has been very bad on that.
And the second one is I think that they've been partisan where it's been like not necessary or not even smart to be partisan.
So people have come in with whatever political bent, mostly kind of the
Democratic Party bent, that they have. And, okay, we're going to go in without understanding you
and only work with Democrats because we're Democrats, and this kind of thing. And I think, you know,
our approach is, look, we are here to represent tech. We want to work with policy makers on both
sides of the aisle. We want to do what's best for America. We think that if we can describe that
correctly, then we'll get support from both sides. And that's just a really different approach.
So hopefully that's right. And hopefully we can make progress.
Okay, good. So let's go to the next question. So this is again a two-part question. So Sheen asks:
in what ways do you see the relationship between Silicon Valley and D.C. evolving in coming years,
particularly in light of recent regulatory efforts targeting tech giants? And we'll talk about
TikTok later on, but, you know, there have obviously been big flashpoint kind of events happening right now.
By the way, also, for people who haven't seen it, the DOJ just filed a massive antitrust lawsuit against Apple.
So tech topics are very hot right now in D.C. So how do we see that relationship evolving?
Yeah, that one's an interesting one, one of the things that I've talked about.
A lot of little tech, I think, is very much in alignment with some of the things that the FTC is doing.
But probably we would do it against a different kind of set of practices and behaviors of some of the tech monopolies.
It just shows why, like, more conversation is important on these things.
Because, you know, what we think is the kind of abuse of the monopoly
and what the lawsuit targets are, I would say,
not exactly the same thing. Well, let's talk about that for a moment,
because this is a good case study of dynamics here. So the traditional kind of free market
libertarian view is sort of very critical of antitrust theory in general and is certainly
very critical of the current prevailing antitrust theories, which are kind of more expansive
and aggressive than the ones of the last 50 years, as shown in things like the Apple lawsuit
and many other actions recently. And so for people in business, there's sort of a reflexive view
that basically says businesses should be allowed to operate. But then there are certainly people
who have this view that basically says any additional involvement
of the political machine, especially the sort of prosecutorial machine, in tech
is invariably going to make everything worse in tech.
And so, yeah, they sue Apple today and maybe you're happy because you don't like Apple today
because they abuse your startup or whatever.
But if they win against Apple, they're just going to keep coming and coming and coming and do more
and more of these.
The opposite view would be the view that says, no, actually, to your point, the interests
of Big Tech and Little Tech have actually really diverged.
And that if there is not actually strong and vigorous investigation and enforcement,
and then ultimately things like the Apple lawsuit, these companies are going to get so
powerful that they may be able to really seriously damage Little Tech for a very long time.
So maybe, Ben, talk a little bit about how we think through that, because we even debate
this inside our firm. But talk a little bit through about how to process through that and then
what you think and then also where you think those lines of argument are taking us.
Yeah, so look, full disclosure: we were at Netscape. We were certainly on the side of Little
Tech against Big Tech, and Microsoft at that time had a 97% market share on desktop. And it was
very difficult to innovate on the desktop. It was just bad for innovation to have them in
that level of a position of power. And I think that's happened on the smartphone now, particularly
with Apple. I think kind of the Epic case and the Spotify case are really great examples of that,
where it's: I am fielding a product that's competitive with Spotify, and I am charging Spotify a 30%
tax on their product. That seems unfair just from the consumer level, like just from the
standpoint of the world. And it does seem like it's using monopoly power in a very aggressive way.
So I think it's certainly against our interest and the interest of new companies for the monopolies to exploit their power to that degree.
Like when the government gets involved, it's not going to be like a clean, surgical, okay, here's exactly the change that's needed.
But I also think with these global businesses with tremendous lock-in, you just have to at least have the conversation and say, okay, what is this going to do for consumers if we let it run?
And we need to represent that point of view, I think, from the kind of small tech perspective.
Yeah.
And the big tech companies are certainly not doing us favors right now.
So they're certainly not acting in ways that are pro-startup, I think we can say, in general.
No, no, no, no, the opposite, sure.
Quite the opposite.
One of my ideas I kick around a lot is, it feels like any company is either too scrappy or too arrogant, but never in the middle.
Yeah, yeah, yeah, like, it's like people.
Right.
You're either the underdog or you're the overdog, and there's not a lot of, not a lot of reasonable dogs.
Exactly, exactly.
Yeah, so there's an inherent tension there.
It seems very hard for these companies to reach a point of dominance and not figure out some way to abuse it.
I also think you kind of touch on an important point, which is in representing little tech, we're not a pure libertarian, anti-regulatory kind of force here.
We think we need regulation in places.
We certainly need it in drug development.
We certainly need regulation in crypto and financial services,
the financial services aspect of crypto.
It's very important.
It's very important to the industry, which would be strong in America
with a proper kind of regulatory regime.
So we're not anti-regulation.
We're kind of pro the kind of regulation
that will kind of make both innovation strong
and the country's strong.
Yeah, and we should also say, look,
when we're advocating on behalf of little tech,
obviously, there's self-interest kind of as a component of that, because we're a venture capital firm and we back startups.
And so there's obviously a straight financial interest there.
I will say, I think, Ben, you'd agree with me.
Like, we also feel like philosophically like this is a very sort of pro-America position, very pro-consumer position.
And the reason for that is very straightforward, which is, as you've said many times in the past, the motto of any monopoly is we don't care because we don't have to.
Right, exactly.
And you've probably experienced it if you've called customer service when one of these monopolies has kicked you off their platform.
Yes, exactly. And so, yeah, it's just there is something in the nature of monopolies
where if they no longer have to compete and if they're no longer disciplined by the market,
they basically go bad. And then how do you prevent that from happening? The way you prevent it
is by forcing them to compete. In some cases, they compete with
each other, although often they collude with each other, which is another thing. Monopoly and
cartel are kind of two sides of the same coin. But really, at least in the history of the tech
industry, it's really when they're faced with startup competition. When the elephant has a
terrier at his heels nipping at him, taking increasingly big bites out of his foot, that's when
big companies actually act and when they do new things. And so without healthy startup competition,
there are many sectors of the economy where it's just very clear now that there's not
enough startup competition because the incumbents that everybody deals with on a daily basis
are just practically intolerable. And it's not in anybody's interest ultimately from a national
policy standpoint for that to be the case, you know, that things can get bad where it's to
the benefit of the big companies to preserve those monopolies, but very much not to anybody else's
benefit. Yeah, no, exactly, exactly, which is such a big impetus behind our kind of political
activity. Yeah, that's right. Okay, now we're going future-looking. So in what ways do you see the
relationship between Silicon Valley and D.C. evolving in the coming years? And then specifically,
and again, we're not going to be making partisan recommendations here, but, you know, there is
an election coming up and it is a big deal. And it's going to have both what happens in the White
House and what happens in the Congress is going to have big consequences for everything we've
just been discussing. So how do we see the upcoming election affecting tech policy? Yeah, well,
I think there are several issues that end up being really important to kind of educate people on now,
because whatever platform you run on as a Congressperson or as a president, you want to kind of live up to that promise when you get elected.
And so a lot of these kind of positions that will persist over the next four years are going to be established now.
I think in crypto in particular, we've been very active on this, because we've made a big donation to something called the Fairshake PAC, which works on this.
And just identifying for citizens, okay, which politicians
are on what side of these issues?
Who are the just flat-out anti-crypto,
anti-innovation, anti-blockchain,
anti-decentralized technology candidates?
And let's at least know who they are
so that we can tell them we don't like it
and then tell all the kind of people
who agree with us that we don't like it.
And a lot of it ends up being,
you know, look, we want the right regulation for crypto.
We've worked hard with policymakers
to kind of help them formulate things that will prevent scams,
prevent nefarious uses of the technology for things like money laundering and so forth,
and then enable the good companies,
the companies that are pro-consumer helping you own your own data
and not have it owned by some monopoly corporation who can exploit it
or just lose it or get it broken into,
so that you now have identity theft problems and so forth.
And that can kind of help make a fairer economy for creatives, so that there's not a 99% take rate on things that you create on social media and these kinds of things.
And so it's just important to kind of, I think, educate the populace on where every candidate stands on these issues.
And so we're really, really focused on that.
And I think same true for AI, same true for bio.
I'd also add, it's not actually the case that there's a single party in D.C. that's pro-tech and a single party that's anti-tech.
Definitely not.
There's not.
And by the way, if that were the case, it would make life a lot easier.
Yes, but it's not the case.
And I'll just give a thumbnail sketch of at least what I see when I'm in D.C.
And see if you agree with this.
So the Democrats are much more fluent in tech.
And I think that has to do with who their kind of elites are.
It has to do with this kind of very long-established revolving door.
And I mean that in both the positive and the pejorative sense, between the tech companies and the Democratic Party: Democratic politicians, political offices, congressional offices, White House offices.
There's just a lot more integration.
The big tech companies tend to be very Democratic,
which you see in all the donation numbers and voting numbers.
And so there's just a lot more, I would say,
tech-fluent, tech-aware Democrats,
especially in powerful positions.
Many of them have actually worked in tech companies.
Just as an example, the current White House chief of staff
is a former board member at Meta, where I'm on the board.
And so there's a lot of sort of connective tissue
between those. Look, having said that,
the current Democratic Party,
and in particular certain of its more radical wings,
has become extremely anti-tech,
to the point of being, arguably,
in some cases outright anti-business, anti-capitalism.
And so there's a real kind of back and forth there.
Republicans, on the other hand, in theory, and as the stereotype would have you believe,
are sort of inherently more pro-business and more pro-free-markets, and should
therefore be more pro-tech.
But I would say there, again, it's a mixed bag, because, number one, a lot of Republicans
just basically think of Silicon Valley as all Democrats.
And if Silicon Valley is all Democrats,
and we're Republicans, that means they're de facto the enemy.
They hate us.
They're trying to defeat us.
They're trying to defeat our policies.
And so they must be the enemy.
And so there's a lot of, I would say, some combination of distrust and fear and hate on that front.
And then again, with much less connective tissue.
There are many fewer Republican executives at these companies, which means there are many fewer Republican officials or staffers who have tech experience.
And so there's a lot of mistrust.
And, of course, there have been flashpoint issues around this lately, like social media censorship, that have really exacerbated this conflict.
And then the other thing is there are very serious policy disagreements.
And there, again, there are at least wings of the modern Republican Party that are actually quite economically interventionist.
And so, you know, the term of the moment is industrial policy, right? There are Republicans who are very much in favor of a much more interventionist government approach toward dealing with business, and in particular dealing with tech.
And so I guess what I'm saying is, like, this is not an either-or thing. There are real issues on both sides.
The way we think about that is, therefore, there's a real requirement to engage on both sides.
There's a real requirement, Ben, to your point, to educate on both sides.
And if you're going to make any progress on tech issues, there's a real need to have a bipartisan approach here, because you do have to actually work with both sides.
Yeah, and I think that's absolutely right.
To kind of name names a little,
if you look at, like, the Democratic side,
you've got people like Ritchie Torres out of the Bronx.
And, by the way, a huge swath of the Congressional Black Caucus that sees,
wow, crypto is a real opportunity to equalize the financial system,
which has historically been demonstrably racist against a lot of their constituents.
And then also the creatives, whom they represent a lot, to kind of get a fair shake.
And then on the other hand, you have Elizabeth Warren, who has taken a
very totalitarian view of the financial system and is moving to consolidate everything in the
hands of a very small number of banks and basically control who can participate and who cannot
in finance. So these are just very different views out of the same party. And I think that we need
to just make the specific issues really, really clear. Yeah. And the same thing: we could spend
a long time also naming names on the Republican side. So, yes,
which we'll do later.
Well, actually, I should do it right now, just to make sure that we're fair on this.
There are Republicans who are, like, full-on pro-free market, very opposed to all current government
efforts to intervene in markets like AI and crypto.
By the way, many of those same Republicans are also very negative on the antitrust actions.
They're very ideologically opposed to antitrust.
And so they would also be opposed to things like the Apple lawsuit that a lot of startup
founders might actually like.
And then on the flip side, you have folks like Josh Hawley, for example, who are, I would
say, quite vocally irate at Silicon Valley and very in favor of much more government intervention
and control. You know, I think a Hawley administration, just as an example, would be extremely
interventionist in Silicon Valley and would be very pro-industrial policy, very much trying to both
sort of set goals and have government management of more of tech, but also much more
dramatic action against a perceived or real enemy. So it's the same kind of mixed bag.
Yeah, so anyway, I wanted to go through that, though. This is kind of the long,
winding answer to the question of how the upcoming election will affect tech policy, which is,
you know, look, there are real issues with the Biden administration in particular,
with the agencies and with some of the affiliated senators, as has just been described. So, you know,
there are certainly issues where, you know, under a Trump administration,
the agencies would be headed by very different kinds of people. Having said that, it's not that,
you know, a Trump presidency would necessarily be a clean win, you know,
and there are many people in sort of that wing who might be hostile, by the way, in different ways,
or actually might be hostile in some cases in the same ways. Yeah. And by the way, you know,
Trump himself has been quite the moving target on this.
You know, he tried to ban TikTok,
and now he's very pro-TikTok.
You know, he has been negative on AI,
was originally negative on crypto,
and is now positive on crypto.
So, you know, it's complex.
And, you know, which is why I think the foundation of all of this is,
you know, education. And, you know,
why we're spending so much time in Washington and so forth
is to make sure that, you know, we communicate all that we know about
technology, so that at least the decisions the politicians make are highly informed.
Good. Okay, so moving forward, three questions in one. So Alex asks, as tech regulation
becomes more and more popular within Congress, which is happening, do you anticipate a lowering in
general of the rate of innovation within the industry? Number two, Tyler asks, what is a key policy
initiative that, if passed in the next decade, could bolster the U.S. for a century? And then Elliot
Parker asks, what's one regulation that, if removed, would have the biggest positive impact on economic
growth? Yeah, so, Ben, see if you disagree with this: I don't know that
there's a single regulation or a single law or a single issue. You know, there are certainly,
I mean, there are certainly individual laws or regulations that are important. But I think
the thematic thing is a much bigger problem. The thematic thing is the thing that
matters. The things that are coming are much more serious than the things that have been. I think
that's correct. Yeah. Okay, we'll talk about that. Yeah, go ahead. Yeah, I mean, so, you know,
if you look at the current state of regulation, if it stayed here, there's not anything that we feel a burning desire to remove, in the same way that things that are on the table could be extremely destructive.
And basically, you know, look, if we ban large language models or large models in general, or we force them to go through some kind of, you know,
government approval, or if we ban open source technology, you know, that would just be
devastating. It would basically take America out of the AI game and, you know, make us
extremely vulnerable from a military standpoint, make us extremely vulnerable from a, you know,
technology standpoint in general. And so, you know, that's devastating. Similarly, you know,
if we don't get kind of proper regulation around crypto, the trust
in the system and the business model is going to fade
or is going to kind of be in jeopardy,
and then the U.S. is not going to be the best place in the world
to build crypto companies and blockchain companies,
which would be a real shame.
You know, the kind of analog would be the kind of creation
of the SEC, you know, after the Great Depression,
which really helped put trust into the U.S. capital markets.
And I think that, you know, trust in the blockchain system
as a way to kind of invest, participate, be a consumer, be an entrepreneur
is really, really important and necessary, and it's very important to get those regulations right.
Okay, speaking of which, let's move straight into the specific issues then.
So, expand on that.
So Lenny asks, what form do you think AI regulation will take over the next two administrations?
B. Secandi asks, will AI regulation result in a concentrated few companies
or an explosion of startups and new innovation?
E. Ray asks, how would you prevent the AI industry from being monopolized or
centralized by just a few tech corps? And then our friend, Beth Jesus, asks, how do you see
the regulation of AI compute and open source models realistically playing out? Where can we apply
pressure to make sure we maintain our freedom to build and own AI systems? It's really interesting
because there's like a regulatory dimension of that. And then there's the kind of technological
kind of, you know, version of that. And they do intersect. So if you look at what big tech has been
trying to do, they're very worried about new competition, to the point where
they've taken it upon themselves to go to Washington and try to outlaw their competitors.
And, you know, if they succeed with that, then I think it is like super concentrated AI power,
you know, making the kind of concentrated power of social media or search or so forth,
kind of really pale in comparison. I mean, it would be very dramatic if there were only three
companies that were allowed to build AI, and that's certainly what they're pushing for.
So I think in one regulatory world, where big tech wins, there are very few companies
doing AI, probably, you know, Google, Microsoft, and Meta.
And, you know, with Microsoft, you know, basically having full control of OpenAI, as they
kind of demonstrated: they have the source code, they have the weights,
to the point of saying, you know, we own everything.
And then they also kind of control who the CEO is,
as they demonstrated, you know, beautifully.
So, you know, if you take that path,
it will all be owned by, you know, three, maybe four companies.
If you just follow though the technological dimension,
I think what we're seeing play out has been super exciting in that,
you know, we were all kind of wondering,
would there be one model that ruled them all?
And even within a company, I think we're finding
that there's no current architecture
that's going to win, you know, as a single thing,
a transformer model, a diffusion model, and so forth,
that's going to become so smart in itself
that once you make it big enough,
it's just going to know everything, and that's going to be that.
What we've seen is, you know,
even the large companies are deploying a technique
called a mixture of experts,
which kind of implies, you know,
you need different architectures for different things.
They need to be integrated in a certain way
and the system has to work.
And that just opens the aperture for a lot of
competition, because there are many, many ways to construct a mixture of experts, to architect
every piece of that. We've seen, you know, little companies like Mistral field models that are
highly competitive with, you know, the larger models, very quickly. And, you know, then there are
other kinds of factors, like, you know, latency, cost, et cetera, that factor into this. And then there's
also good enough. Like, when is a language model good enough? You know, when it speaks English, when it
knows enough about whatever things you're using it for? And then there's domain-specific data.
You know, I've been doing whatever medical research for years and I've got, you know,
data around all these kinds of genetic patterns and diseases and so forth. You know, I can
build a model against that data that's differentiated by the data, and so on. So I think
we're likely to see kind of a great Cambrian explosion of innovation across all
sectors, you know, big companies, small companies, and so forth, provided that the regulation
doesn't outlaw the small companies. But that would be my prediction right now.
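The mixture-of-experts idea Ben describes can be sketched in a few lines: a small gating network scores each expert for a given input and routes to only the top few, and the output is a weighted blend of those experts. This is a toy scalar-output sketch under my own simplifying assumptions, not any particular company's implementation; real sparse MoE layers in large language models operate on vectors of activations, and all names and numbers here are illustrative.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

class MixtureOfExperts:
    """Toy MoE: gate scores each expert, routes to the top-k only.

    That top-k sparsity is what keeps big MoE models cheap to run,
    and the many possible gate/expert designs are exactly the open
    architectural space discussed above."""

    def __init__(self, experts, gate_weights, top_k=2):
        self.experts = experts            # list of callables
        self.gate_weights = gate_weights  # one gating weight vector per expert
        self.top_k = top_k

    def __call__(self, x):
        # Gating: score each expert for this input.
        scores = np.array([w @ x for w in self.gate_weights])
        # Route to the top-k experts only.
        top = np.argsort(scores)[-self.top_k:]
        probs = softmax(scores[top])
        # Weighted blend of the selected experts' outputs.
        return sum(p * self.experts[i](x) for p, i in zip(probs, top))

# Three trivial stand-in "experts" and hand-picked gate vectors:
experts = [lambda x: 0.0, lambda x: 1.0, lambda x: 2.0]
gates = [np.array([1.0, 0.0, 0.0]),
         np.array([0.0, 1.0, 0.0]),
         np.array([0.0, 0.0, 1.0])]
moe = MixtureOfExperts(experts, gates, top_k=2)
out = moe(np.array([5.0, 1.0, 0.0]))  # routed mostly to expert 0
```

The point of the sketch is that the gate, the experts, and the routing rule are all independently swappable pieces, which is where the combinatorial room for competition comes from.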
Yeah, and I'd add a bunch of things to this. So one is, even on the big model side, there's been
this leapfrogging thing that's taking place. And so, you know, OpenAI's
GPT-4 was kind of, you know, the dominant model not that long ago. And then it's been leapfrogged
in significant ways recently by both Google, with their Gemini Pro, especially the one with
the so-called long context window, where you can feed it
700,000 words or an hour of full-motion video as context for a question, which is a huge
advance. And then, you know, Anthropic, with their big model, Claude: you know, a lot of
people now are finding that to be a more advanced model than GPT-4. And, you know, one assumes
OpenAI is going to come back. And, you know, this leapfrogging will probably happen for a while.
So, so even at the highest end, you know, at the moment these companies are still competing
with each other, you know, there's still this leapfrogging that's taking place. And then, you know,
Ben, as you articulated very well, you know, there is this giant
explosion of models of all kinds of shapes and sizes.
Our, you know, our company Databricks just released another, you know, what looks
like a big leapfrog on the smaller model side.
I think it's the best small model now in the benchmarks, and it's actually
so efficient, it will run on a MacBook.
Yeah.
And they have the advantage that, you know, as an enterprise, you can connect it to a system that
gives you not only like enterprise, quality access control and all that kind of thing, but also,
you know, it gives you the power to do SQL queries with it,
gives you the power to basically create a catalog
so that you can have a common understood definition
of all the weird corporate words you have.
Like, by the way, one of which is customer.
Like, there's almost no two companies that define customer
in the same way.
And in most companies, there are several definitions of customer.
You know, is it a department at AT&T?
Is it AT&T itself?
Is it, you know, some division of AT&T?
Et cetera, et cetera.
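The catalog idea Ben is describing can be made concrete with a minimal sketch. Everything here is invented for illustration (the company name, divisions, and definitions are hypothetical); the point is just that a shared lookup gives the model, the SQL layer, and the humans one agreed meaning per term, per scope.

```python
# Hypothetical semantic catalog: one agreed definition of each corporate
# term per division, so a query and a model mean the same thing by "customer".
catalog = {
    ("ACME", "finance"): "a billing account with at least one paid invoice",
    ("ACME", "sales"): "any organization with an open opportunity",
    ("ACME", "support"): "any user who has filed a ticket",
}

def define(company: str, division: str) -> str:
    """Look up the agreed definition, or flag that none exists yet."""
    return catalog.get((company, division), "no agreed definition")

print(define("ACME", "finance"))
```

A real system would of course version these definitions and tie them to table schemas and access controls; this just shows the shape of the problem.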
I think, I don't want to literally speak for them,
but I think if you put the CEOs of the big companies under truth serum,
I think what they would say is their big fear is that AI is actually not going to lead
to a monopoly for them.
It's going to lead to a commodity.
It's going to lead to sort of a race to the bottom on price.
And you see that a little bit now, which is people who are using one of the big models
APIs are able to swap to another big model API from another company pretty easily.
And then, you know, the main business model for these big models,
at least so far, is an API, you know, basically pay per token generated or per answer.
And so like if these companies really have to compete with each other, like it may be
that it actually is a hyper-competitive market.
It may be the opposite of like a search market or like an operating system market.
It may be a market where there's just like continuous competition and improvement and leapfrogging
and then, you know, constant price competition.
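The swap-ability being described here comes from the fact that the pay-per-token API surface is basically prompt in, text out. A hedged sketch of what that means for buyers in a commodity market (all provider names and prices below are invented, not real vendors or real figures):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Provider:
    name: str
    price_per_1k_tokens: float      # illustrative numbers only
    complete: Callable[[str], str]  # the provider's completion call

def cheapest(providers: List[Provider]) -> Provider:
    # In a commodity market the buyer can re-run this on every request.
    return min(providers, key=lambda p: p.price_per_1k_tokens)

def ask(providers: List[Provider], prompt: str) -> str:
    # Because the interface is just prompt -> text, switching vendors
    # is a routing decision, not a rewrite.
    return cheapest(providers).complete(prompt)

# Two stand-in vendors with made-up prices:
vendors = [
    Provider("model-a", 0.030, lambda p: f"[A] {p}"),
    Provider("model-b", 0.012, lambda p: f"[B] {p}"),
]
answer = ask(vendors, "hello")  # routed to the cheaper model-b
```

If prices or quality shift, the routing decision flips on the next request, which is exactly the constant price competition described above.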
And then, of course, you know, the payoff from that is, you know, to everybody else in the world
is like an enormously vibrant market where there's constant innovation happening and then
there's constant cost optimization happening,
and then, as a customer downstream of this,
the entire world that's going to use AI
is going to benefit from this kind of hyper-competition
that's going to potentially run for decades.
And so I think if you put these CEOs under truth serum,
what they would say is that's actually their nightmare.
Well, that's why they're in Washington.
That's why they're in Washington.
So that is what's actually happening.
That is the scenario they're trying to prevent.
They are actually trying to shut out competition.
And by the way, actually, I will tell you this,
there is a funny thing. Tech is so historically bad at politics that I think some of these folks
think they're being very clever in how they go about this. And so, you know, because they show up
in Washington with the kind of, you know, kind of public service narrative or end of the world
narrative or whatever it is. And I think they think that they're going to very cleverly kind of trick
people in Washington into giving them a sort of cartel status and the people in Washington
don't realize until it's too late. But it actually turns out people in Washington are actually quite
cynical. They've been lobbied before. Exactly. And so there is this thing, and, you know, I get this from them off the record a lot, especially after a couple of drinks, which is basically: if you've been in Washington for longer than two minutes, you have seen many industries come to Washington, many big companies come to Washington, and want monopoly or cartel kind of regulatory protection. And so if you're in Washington, you've seen this play out, in some cases, if you've been there a long time, dozens or hundreds of times. And so my sense is, like nine months ago or something, there was a moment where it seemed like the big tech companies could kind of get away with this.
I'm still concerned, and we're still working on it,
but I think the edge has come off a little bit,
because I think the cynicism of Washington in this case is actually correct,
and I think they're kind of onto these companies.
And then, you know, look, if there's a unifying issue,
there are basically two unifying issues in Washington.
One is they don't like China, and the other is they don't like big tech.
And so, you know, this is a winnable war.
Like, this is a winnable war on behalf of startups and open source
and freedom and competition.
And so I'm actually, yeah, I'm worried,
but I'm feeling much better about it than I was nine months ago.
Well, look, we had to show up.
I mean, that's the other thing.
I mean, it's taught me a real lesson, which is, you know, you can't expect people to know
what's in your head.
You know, you've got to go see them.
You've got to put in the time.
You've got to kind of say what you think.
And then, you know, if you don't, you don't have any right to, like, wring your hands
about how, like, you know, bad things are.
Yeah.
And then I just wanted to note one more thing, which is, you know, you kind of
mentioned the big companies being Microsoft, Google, and Meta.
It is worth noting Meta is on the open source side of this.
And so Meta is actually working quite hard on it.
And this is a big deal, because it's very contrary to the image
I think people have of Meta over, you know, prior issues, you know, correctly or not.
But on the open source topic, and on freedom to innovate, at least for now,
Meta is, I think, very strongly on that side. And so, yeah.
Yeah.
Yeah, I think that's right.
It's actually a very interesting point,
and, kind of, I think essential for people to understand: the way Meta is thinking
about this, and the way that they're actually behaving and executing, is very similar to how
Google thought about Android, where, you know, their main concern was that Apple not have a
monopoly on the smartphone, you know, not so much that they make money on the smartphone
themselves. Because, you know, a smartphone monopoly for Apple would
mean that, you know, Google's other business was in real jeopardy. And so they ended up being kind of
an actor for good. And, you know, Android's been an amazing thing for the world,
I think, you know, including getting smartphones into the hands of people who wouldn't be able to get them otherwise, you know, all over the world.
And Meta is doing kind of a very similar thing, where, you know, in order to make sure that they have AI as a great ingredient in their products and services, they're willing to open source it and kind of give all of their very, very large investment in AI to
the world, so that, you know, entrepreneurs and everybody can kind of keep them competitive,
even though they don't plan to be in the business of AI, in the same way that, you know,
Google is in the business of smartphones to some extent, but it's not their kind of key
business. And, you know, Meta doesn't have a plan to be in the AI business, maybe, you know,
to some extent they will too, but that's not the main goal.
And then I would put one other company on the concerning side of this, and it's too early to tell
where they're going to shake out. But, you know, Amazon just announced they're investing a lot more
money in Anthropic. So I think, basically, Amazon is to Anthropic what Microsoft
is to OpenAI. I think that's the analogy. Yep, yep. Yeah, and so, like,
Anthropic is very much in the group of kind of big tech, you know, kind of new-incumbent
big tech, you know, lobbying very aggressively for regulation, for regulatory capture, in D.C.
And so I think it's sort of an open question whether Amazon is going to pick up that agenda
as Anthropic quickly becomes effectively a subsidiary of Amazon.
Yeah, well, this is another place where we're on the side of Washington,
D.C. and the current regulatory motion, where, you know, the big tech companies have done
this thing, which we thought was illegal, because we observed it occur at AOL and people
went to jail.
But what they've done is they invest in startups, you know, huge amounts of money.
Microsoft and Amazon and Google are all doing it, you know, like billions of dollars.
With the requirement, with the explicit requirement that those companies then buy GPUs
from them, not at the discount that they'd ordinarily get, but at a relatively high price,
and then be in their clouds.
So that kind of, and then, you know, in the Microsoft case,
even more aggressive, give me your source code, give me your weights,
you know, which is like extremely aggressive.
So, you know, they're moving money from the balance sheet to their P&L,
you know, in a way that, at least from an accounting standpoint,
it was our understanding, wasn't legal.
And the FTC is, you know, looking at that now,
but it'll be interesting to see how that plays out.
Yeah, well, that one is around round-tripping. The other issue, you know, that people should
watch is just consolidation.
You know, if you own, you know, half of a company and you get to appoint the management team,
like, is that, you know, is that a subsidiary? You know, there are rules on that. Like,
at what point do you have to consolidate? You own the company's equity. You own the intellectual property of the
company, and you control the management team. Yeah. Is that not your company?
Yeah. And then at that point, if you're not consolidating it, like, is that legal? And so the SEC is going to weigh in on that. And then, of course, you know, to the extent that some of these companies have non-profit components to them, there are, you know, tax implications to the conversion to for-profit, and so forth. And so, like, there's a lot here. The stakes in the legal, regulatory, and political game that's being played here, I think, are quite high. Quite high, yes. And Ben and I are old enough that we do know a bunch of people who have gone to
jail. So some of these issues turn out to be serious. So Gabriel asks, what would happen if there
was zero regulation of AI, the good, the bad, and the ugly? And this is actually a really important
topic. So, you know, we're vigorously arguing in D.C. that, you know,
basically anybody should be completely capable of building AI, deploying AI. Big companies should
be allowed to do it. Small companies should be allowed to do it. Open source should be allowed
to do it. And, you know, look, a lot of the regulatory pushes we've been discussing that comes
from the big companies and from the activists is to prevent that from happening and put everything
in the hands of the big companies.
So, you know, we're definitely on the side of freedom to innovate.
You know, having said that, you know, that's not the same as saying no regulations of anything
ever.
And so we're definitely not approaching this with kind of a hardcore libertarian lens.
The interesting thing about regulation of AI is that it turns out, when you kind of go down
the list of the things that, I would say, reasonable, you know, sort
of thoughtful people consider to be concerns around AI, on both sides of the aisle,
basically, the implications that they're worried about are less the technology itself and more the use of the technology in practice, either for good or for bad.
And so, you know, Ben, you brought up, for example, if AI is making decisions on things like granting credit or mortgages or insurance, then, you know, there are very serious policy issues around, you know, how those answers are arrived at, which groups are affected in different ways.
You know, the flip side is, you know, if AI is used to plan a crime, you know,
or to, you know, plan a bank robbery or something like that, or a terrorist attack,
you know, that's obviously something that people focused on national security and law enforcement
are very concerned about. Look, our approach on this is actually very straightforward,
which is it seems like completely reasonable to regulate uses of AI, you know,
in things that would be dangerous. Now, the interesting thing about that is, as far as I can
tell, and I've been talking a lot of people in D.C. about this, as far as I can tell,
every single use of AI to do something bad is already illegal
under current laws and regulations. And so it's already illegal to be discriminatory in lending.
It's already illegal to redline in mortgages. It's already illegal to plan bank robberies.
It's already illegal to plan terrorist attacks. Like these things are already illegal and there's
decades or centuries of case law and regulation and, you know, law enforcement and intelligence
capabilities around all of these. And so to be clear, like we think it's like completely appropriate
that those authorities be used. And if there are new laws or regulations needed due to, you know,
other bad uses, that makes total sense. But basically, the issues that
people are worried about can be contained and controlled at the level of the use, as opposed to
somehow saying, you know, as some of the doomer activists do, that we need to
literally prevent people from, you know, doing linear algebra on their computers.
Yeah. Well, I think it's important to point out, like, what is AI? And it turns out to be,
you know, math, and specifically kind of like a mathematical model. So you can think of it,
for those of you who studied math in school: you know, in math, you can have an equation,
like y equals x squared plus b or something, and that equation can kind of model the behavior of something in physics or, you know, something in the real world, so that you can predict, you know, something happening, like the speed at which an object will drop, or so forth and so on. And then AI is kind of that, but with huge computer power applied, so that you can have much bigger equations where, you know, instead of two or three or four variables, you could have,
you know,
300 billion variables.
And so the challenge with that, of course,
is if you get into regulating math and you say,
well, math is okay up to a certain number of variables,
but at the, you know,
two billionth and first variable,
then it's dangerous,
then you're in, like, a pretty bad place,
in that you're going to prevent everything good
about the technology from happening, as well as anything
that you might think is bad.
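Ben's equation example can be made concrete. A model like y = x² + b has one free parameter that you fit to observed data; a large language model is the same operation with hundreds of billions of parameters, which is what makes "regulate math past N variables" an arbitrary line through one underlying technique. A toy least-squares fit (the 4.2 here is just an invented "unknown" constant, not from the conversation):

```python
import numpy as np

# Ben's toy model: y = x^2 + b, with one free parameter b to learn.
x = np.array([0.0, 1.0, 2.0, 3.0])
y_observed = x**2 + 4.2  # pretend 4.2 is the unknown real-world constant

def fit_b(x, y):
    """Least-squares estimate of b in y = x^2 + b: the mean residual."""
    return float(np.mean(y - x**2))

b = fit_b(x, y_observed)
# An LLM is the same move at vastly larger scale: instead of one
# parameter fit to four points, hundreds of billions of parameters
# fit to trillions of tokens. The fitting operation is the same math.
```

Nothing about the operation changes as the parameter count grows; only the compute applied to it does, which is the point of the regulating-math analogy.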
So you really do want to be in the business of regulating the kind of applications of the technology, not the math. In the same way that, you know, nuclear power is potentially very dangerous, and nuclear weapons are extremely dangerous,
you wouldn't want to kind of put parameters around what physics you could study, you know, literally in the abstract, in order to kind of prevent somebody from getting a nuke.
Like, you can no longer study physics in Iran, because then you might be able to build a nuke, would be kind of the conclusion.
And that has been kind of what big tech has been pushing for, not because they want safety, but because, you know, again, they want a monopoly.
And so I think we have to be very, very careful not to do that.
I do think there will probably be, you know, some cases that come up that are enabled by AI new applications that do need to be, you know,
regulated potentially.
You know, for example, I don't know that there's a law that, like, if you recreate
something that sounds exactly like Drake and then kind of put out a song that sounds
like a Drake song, like, I don't know that that's illegal. Maybe that should be illegal.
I think those things need to be considered for sure.
And, you know, there's certainly a danger in that.
I also think we need technological solutions, not just regulatory solutions, for things like
deepfakes that kind of help us get to,
you know, what's human, what's not human.
And interestingly, a lot of those
are kind of viable
now based on kind of blockchain crypto
technology. Yeah, so let's
just do the voice thing real quick. Yeah. So it actually,
I believe this to be the case, it is not currently
possible to copyright a voice.
Yeah. Right? You can copyright lyrics
and you can copyright music
and you can copyright tunes,
right, melodies and so forth.
You cannot copyright a voice. And yeah, that seems like a perfect
example where it seems like that probably is
a good idea, to have a law that lets you copyright your voice.
Yeah, I feel that way, you know, particularly if people clone a voice, Drake squared or something, right?
Like, you know, it could get very dodgy.
Yeah, well, again, you know, again, it's just the details, you know, trademark, you can trademark your name.
So you could probably prosecute on that.
But by the way, having said that, look, this also gets to the complexity of these things.
There is actually an issue around copyrighting a voice, which is, okay, well, how close to the voice of Drake does, like, there are a lot of people who have, like, a lot of voices in the world, and, like, how close do you have to get before you're violating the copyright?
Like, if my natural voice actually sounds like Drake, am I now in trouble? Right. And do I outlaw,
like Jamie Fox imitating Quincy Jones and that kind of thing, right? Exactly. So anyway, yeah,
but I mean, look, I, you know, agreeing violently with you on this, like, that seems like a
great topic that needs to be taken up and, you know, looked at seriously from a, from a legal
standpoint. It is, you know, obviously, you know, sort of an issue that's elevated by
AI, but it's a general kind of concept of, like, you know, being able to copyright and trademark
things, which has a long history in U.S. law. Yeah. Oh, for sure. Yeah. So let's talk about
the decentralization and the blockchain aspects of this. So I want to get into this. So Goose
asks, how important is the development of decentralized AI, and how can the private sector
catalyze prudent and pragmatic regulations to ensure the U.S. retains innovation leadership in this
space? So yeah, let's, Ben, let's talk about, well, let's talk about decentralized AI and
then maybe I'll just, I'll highlight real quick and then you can build on it. Decentralized
AI, like, you know, the sort of default way that AI systems are being built today is with,
basically, you know, supercomputer clusters in a cloud. And so you'll have a single data center
somewhere that's got, you know, 10,000 or 100,000 chips, and then a whole bunch of systems that
interconnect them and make them all work. And then you have a company, you know, that, you know,
basically, you know, owns and controls that. And, you know, these companies, AI companies, are
raising a lot of money to do that now. These are very large-scale centralized, you know, kinds of
operations. And, you know, to train a, you know, state-of-the-art model, you're at $100 million
plus, you know, to train a, you know, a big one; to train a small one like the Databricks model
that just came out, it's like on the order of $10 million. So, you know,
these are large centralized efforts. And by the way, we all think that the big models are going to
end up costing a billion, you know, and up in the future. And so, so then this raises a question
of, like, is there an alternate way to do this? And the alternate way to do this, we believe
strongly, is with a decentralized approach, and in particular, with a blockchain-based approach.
It's actually the kind of thing that the blockchain, web3 kind of method, you know,
seems like it would work very well with. And in fact, we are already backing companies
and startups that are doing this. And then I would say there's at least three kind of obvious
layers that you could decentralize that seem like they're increasingly important. So one is
the training layer. Well, actually, let me say four.
There's the training layer, which is building the model.
There's the inference layer, which is running the model to answer questions.
There's the data layer, Ben, to your point on opening up the black box of where the data is coming from,
which is there should probably be a blockchain-based system where people who own data can contribute it for training of AIs and then get paid for it and where you track all that.
And then there's a fourth that you alluded to, which is deepfakes.
It seems obvious to us that's the answer to deepfakes.
And I should pause for a second and say, in my last three months of trips to D.C.,
the number one issue politicians are focused on with AI is deepfakes. It's the one that
directly affects them. And I think every politician right now who's thought about this has a nightmare
scenario of, you know, it's three days before their reelection, and, you know, the
deepfake goes out with them, you know, saying something absolutely horrible. And it's
so good and the voters get confused and like, and then they lose the election on that. And so
I would say like that's actually the thing, that's the thing that actually has the most potency right
now. And then, you know, what what basically a lot of people say, including the politicians is
So therefore, we need basically a way to detect deepfakes.
And so either the AI systems need to watermark generated content
so that you can tell that it's a deepfake,
or you need these kind of scanners,
like the scanners that are being used in some schools now
to try to detect something as AI generated.
Our view as, you know, I would say both technologists
and investors in the space is that the methods of detecting AI generated content
after the fact are basically not going to work.
And they're not going to work because AI is already too good at doing this.
And by the way, for example,
if you happen to have kids that are in a school and they're running one of these scanner programs
that is supposed to detect whether your kid is submitting an essay that, you know, used ChatGPT to write the
essay, like, those really don't work in a reliable way, and there's, there's a lot of both false positives
and false negatives off of those that are very bad. So those are actually very bad ideas. And for
the same reason, like, detection of AI-generated photos and videos and speech is not going to be
possible. And so our view is you have to flip the problem, we have to invert the problem, and what
you have to do instead is basically have a system in which real people can certify that content
about them is real, and where content has provenance as well. So go ahead, Ben, go ahead and describe
how that would work. Yeah. So, you know, one of the amazing things about
crypto blockchain is it deploys something known as a public key infrastructure, which enables kind of
every human to have a key that's unique to them, with which they can sign. So, like, if I was in a video or in a
photo or I wrote something, I can certify that, yes, this is exactly what I wrote, and you cannot
alter it to make it into something else. It is just exactly that. And then, you know, as that thing,
you know, gets transferred through the world, let's say that it's something, you know, like a song that
you sell and so forth, you can track, just like, you know, in a less precise way, we track
the provenance of a work of art, or with a house, who owned it before you and so forth. That's
also, like, an easy application on the blockchain. And so that, you know, kind of combination of
capabilities can make this whole kind of program much more viable in terms of, like, okay,
knowing what's real, what's fake, where it came from, you know, where it started, where it's going,
and so forth. You know, kind of going back, the data one, I think, is really, really important,
in that, you know, these systems, you know, one of the things that they've done that's, I would
say, dodgy, and, you know, there has been, like, big pushback against it, with, you know, Elon trying
to lock down Twitter and the New York Times suing OpenAI and so forth: you know, these systems
have gone out and just slurped in data from all over the internet and all over, kind of, you know,
people's businesses and so forth, and trained their models on them. And, you know, I think
there's a question of whether the people who created that data should have any say in whether
the model is trained on that data. And, you know, blockchain is an unbelievably great system for
this, because you can permission people to use it. You can charge them a fee, and it can be all
automated in a way where you can say, sure, come train. You know, and I think training data
ought to be of this nature, where there's a data marketplace, and people can say, yes, take this data
for free, I want the model to have this knowledge. Or no, you can't have it for free,
but you can have it
or no, you can't have it at all
rather than what's gone on,
which is this very aggressive scraping
and, you know,
like you have these very smart models
where these companies are making
enormous amounts of money
taken from data that certainly didn't belong to them.
You know, maybe it's in the public domain
or what have you, but, you know,
that ought to be an explicit relationship
and it's not today
and that's a very great blockchain solution,
you know, and part of the reason
we need the correct
regulation on blockchain, and we need the SEC to stop harassing and terrorizing people
trying to innovate in this category.
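Ben's data-marketplace idea could be sketched in a few lines. This is a hedged toy, not any real protocol: the class name, datasets, owners, and fees below are all made up, and a dictionary stands in for what would on-chain be a smart contract enforcing the terms.

```python
# A toy sketch of a permissioned training-data marketplace as Ben
# describes it: owners list data with a fee (0 = free for training,
# None = not available for training at all), and a model trainer must
# satisfy the listed terms before getting access.
class DataMarketplace:
    def __init__(self):
        self.listings = {}   # dataset -> {"owner": str, "fee": int or None}
        self.payments = []   # record of (trainer, owner, amount)

    def list_data(self, dataset, owner, fee):
        self.listings[dataset] = {"owner": owner, "fee": fee}

    def request_training_access(self, dataset, trainer):
        item = self.listings.get(dataset)
        if item is None or item["fee"] is None:
            return False                     # unlisted, or owner said no
        if item["fee"] > 0:
            # automated payment from trainer to data owner
            self.payments.append((trainer, item["owner"], item["fee"]))
        return True

m = DataMarketplace()
m.list_data("open-essays", "alice", fee=0)       # "I want the model to have this"
m.list_data("song-catalog", "bob", fee=100)      # usable, for a fee
m.list_data("private-diary", "carol", fee=None)  # not for training, period

print(m.request_training_access("open-essays", "model-co"))   # True
print(m.request_training_access("song-catalog", "model-co"))  # True, and bob gets paid
print(m.request_training_access("private-diary", "model-co")) # False
```

The contrast with today's scraping is the point: every access is an explicit, recorded relationship rather than a silent slurp.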
And so that's kind of the second category.
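And going back to the certification idea: the signing-plus-provenance scheme Ben described earlier can be sketched as a hash chain. Real systems would use public-key signatures tied to a person's unique key; here, as a stand-in, each transfer record simply commits to a SHA-256 hash of the previous record and of the content, which is enough to show why tampering anywhere breaks the chain. All names and data are illustrative.

```python
# A toy provenance chain: each record commits to the previous record's
# hash and to a hash of the content, so altering any step is detectable.
# (A real deployment would sign each record with the owner's private key.)
import hashlib
import json

def record(prev_hash, owner, content_hash):
    entry = {"prev": prev_hash, "owner": owner, "content": content_hash}
    digest = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry, digest

content = hashlib.sha256(b"my original song").hexdigest()
chain, prev = [], "genesis"
for owner in ["artist", "label", "streaming service"]:
    entry, prev = record(prev, owner, content)
    chain.append((entry, prev))

def verify(chain, content_hash):
    prev = "genesis"
    for entry, digest in chain:
        if entry["prev"] != prev or entry["content"] != content_hash:
            return False
        recomputed = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        if recomputed != digest:
            return False
        prev = digest
    return True

print(verify(chain, content))      # True
chain[1][0]["owner"] = "imposter"  # tamper with the middle record
print(verify(chain, content))      # False
```

This is the "knowing what's real, where it came from" property: you don't detect fakes after the fact, you verify real provenance up front.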
And then you have like training and inference.
And I would say, you know, right now the push against kind of decentralized training
and inference is, well, you know, you need this very fast interconnect and you need it
to all be in one place, technologically.
And I think that's true for people who have more money than time, right?
which is like, you know, startups and big companies and so forth.
But for people in academia who have more time than money,
they're getting completely frozen out of AI research.
You can't do it.
There's not enough money in all of academia to participate anymore in AI research.
And so, you know, having a decentralized approach
where you can share, you know, all the GPUs across your network.
And hey, yeah, maybe it takes a lot longer to train your network or to serve it.
But you know what?
You still can do your research.
You can still innovate, you know, create new ideas, new architectures,
and test them out at large scale, which, you know, will be amazing if we can do it.
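The trade Ben is describing, slower but accessible training across pooled machines, can be sketched as data-parallel gradient averaging. This is a hedged toy, not a real distributed system: the "nodes" are just lists in one process, the objective is a one-parameter model, and the learning rate is arbitrary.

```python
# A toy sketch of decentralized training: several nodes each hold a
# shard of data, each computes a gradient locally (perhaps slowly, over
# a wide-area network), and the updates are averaged. We "train" a
# single weight w for y = w*x on data generated by y = 3x.
def local_gradient(w, shard):
    # gradient of mean squared error for predictions w * x on this shard
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def decentralized_step(w, shards, lr=0.01):
    grads = [local_gradient(w, shard) for shard in shards]
    return w - lr * sum(grads) / len(grads)   # average the nodes' gradients

# two "nodes", each holding part of a dataset generated by y = 3x
shards = [[(1, 3), (2, 6)], [(3, 9), (4, 12)]]
w = 0.0
for _ in range(500):
    w = decentralized_step(w, shards)
print(round(w, 3))   # 3.0
```

The averaging step is the whole idea: no single machine ever needs the full dataset or the full compute budget, it just needs patience.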
And again, we need, you know, the SEC to stop, you know,
kind of illegally terrorizing every crypto company and trying to block laws from being put in place
that help us, you know, enable this.
Yeah, there's actually a really important point, and you alluded to it: the college thing actually really matters.
we have a friend, you know, who is very involved in running one of the big
computer science programs at one of the major American research universities. And of course,
by the way, a lot of the technology we're talking about was developed at American research
universities. Right. And Canadian ones too, Toronto. And Canadian ones? And European ones,
exactly. You know, historically, as with every other wave of technology in the last, you know,
whatever, hundred years, you know, the research universities, you know, across these
countries have been kind of the gems, you know, the wellsprings of a lot of the new
technology that has been empowering, you know, the, you know, the economy and everything else
around us. You know, we have a friend involved in running one of these, and this friend said
a while ago that, you know, his concern was that his university would be
unable to fund a competitive AI cluster, basically, so, you know, a compute grid that would
actually let students and professors at that university actually work in AI, because it's now getting
to be too expensive, and research universities are just not funded to have CapEx
programs that big. And then he said his concern more recently has been that all research
universities together might not be able to afford to do that, which means all universities together
might not be able to actually have, you know, basically cutting-edge AI work happening on the university
side. And then, I happened to be in D.C.; I was in a bipartisan, you know,
House meeting the other day on these AI topics. And one of the, one of the, in this
case, Democratic congresswomen asked me, you know, the question which, you know, comes up,
which is a very serious question, always, right, which is how do you get kind of more members of
underrepresented groups involved in tech. And, you know, I found myself giving
the same answer that I always give on that, which is, the most effective thing you need
to do is you need to go upstream, and you need to have more people coming out of, you know,
college with computer science degrees who are, you know, skilled and qualified and trained, right,
and mentored, to be able to participate in the industry. And, you know,
you and I, Ben, both came out of state schools, you know, with, you know, with computer science
programs, you know, where we were able to then have the careers we've had. And so,
you know, but I find myself answering the question saying,
well, we need more computer science, you know, graduates from all, you know,
from every group.
And then, but in the back of my head, I was like,
and it's going to be impossible to do that because none of these places are going to be
able to afford to actually have the compute resources to be able to actually have
AI programs in the future.
And so, like, you know, maybe the government can fix this by just dumping a ton of money
on top of these universities and maybe that's what will happen.
And, you know, in the current political environment, it seems like maybe that's not quite feasible
for a variety of reasons.
And then, and then the other approach,
would be a decentralized approach, would be a blockchain-based approach that everybody could
participate in, you know, if that were something that the government were willing to support,
which right now it's not. And so I think there's a really, really central,
important, vital issue here that I think, you know, is being glossed over by a lot of
people, and that should really be looked at. Yeah, no, I think it's absolutely critical.
You know, and this is, you know, again, kind of going back to our original thing, like, it's so
important to the country, you know, America being what America
should be, to get these issues right.
And we're definitely in danger of that not happening, you know, because, you know, look,
I think people are taking much too narrow view of some of these technologies and not
understanding their full capabilities.
And, you know, we get into, oh, the AI could, you know, say something racist, therefore we won't cure
cancer.
I mean, like, we're getting into that kind of dumb idea.
And, you know, we need to have a tech-forward
kind of solution to some of these things,
and then the right regulatory approach
to kind of make the whole environment work.
So, you know, let's go into the next phase of this now,
which is the sort of global implications.
So I'm going to conjoin two different topics here,
but I'm going to do it on purpose.
So Michael Frank Martin asks,
what could the U.S. do to position itself
as the global leader of open source software?
Do you see any specific legislation
of regulatory constraints
that are hampering the development of open source projects?
Arda asks a similar question:
what would an ideal AI policy for open-source
software models look like?
And then Sarah Holmes asks the China question.
Do you think we will end up with two AI tech stacks,
the West and China, and ultimately companies
will have to pick one side and stay on it?
And so, look, I would say
this is where you get to like the really, really big
geopolitical long-term issue, which is basically
my understanding of things is sort of as follows,
which is, you know, basically the, for a variety of reasons,
technological development in the West is being centralized
into the United States, you know, with, you know,
some in Canada and some in Europe, although, you know,
quite frankly, a lot of the best Canadian and, you know, European, you know, tech founders
are coming to Silicon Valley. You know, Yann LeCun, you know,
who is a hero in France, teaches at NYU and works at Meta, both of which are
American institutions. And so there's sort of an American or, let's say, American-plus-
European kind of, you know, sort of tech vanguard wedge in the
world. And then, and then there's China. And really, it's, it's actually quite a bipolar
situation, I would say. You know, the dreams of tech being fully democratized,
spreading throughout the world, have been realized for sure on the use side, but, you know, not
nearly as much on the entrepreneurship side or the invention side. And again, immigration, you know, immigration
being a great, you know, virtue, but, you know, for the countries that are the beneficiaries of
immigration; the other countries are going to be less competitive, because their best
and brightest are moving to the U.S. So, so anyway, we are in a bipolar tech world.
It is primarily a bipolar tech world; it's primarily the U.S. and China. You know, this is not the first
time we have been in a bipolar world involving geopolitics and technology. You know, the U.S. and
China have two very different systems. The Chinese system has all of the virtues and downsides of
being centralized. The U.S. system has all the virtues and downsides of being more decentralized.
There is a very different set of views of the two systems on how society should be ordered and what
freedom means and, you know, what people should be able to do and not do. And then look, both the U.S.
and China have visions of global supremacy and visions of basically care and agendas and programs
to carry forward their points of view on the technology of AI and on the societal implications
of AI, you know, throughout the world. And so, you know, there is this Cold War 2. Oh, and then the other
thing is, in D.C., it's just crystal clear that there's this dynamic now happening where
Republicans and Democrats right now are trying to leapfrog each other every day on being more
anti-China. And so, you know, our friend Niall Ferguson is using the term,
I think, Cold War 2.0. Like, we're in, whether we want to be or not, like, we're in Cold War
2.0; like, we're in a dynamic similar to the one we had with the USSR, you know, 30, 40, 50 years
ago. And to Sarah Holmes's question, it's 100% going to be the case. There are two AI tech stacks,
and there are two AI governance models and there are two AI, you know, deployment, you know,
systems. And, you know, there are two ways in which, you know, AI dovetails in everything from
surveillance to smart cities to transportation, self-driving cars, drones, who controls what, who gets access to
what, who sees what. The degree by the way to which AI is used as a method for population control,
there are very different visions. And these are national visions and global visions. And there's
a very big competition developing. And, you know, it certainly looks to me like there's
going to be a winner and a loser. I think it's overwhelmingly in, you know, our best interest for
the U.S. to be the winner. For the U.S. to win, we have to lean into our strengths. And, you know,
the downside of our system is that we are not as well organized and orchestrated top-
down as China is. The upside of our system, at least in principle, is that we're able to benefit
from decentralization; we're able to benefit from competition, from a market economy, from a private
sector, right, where, you know, we're able to basically have a much larger number of smart
people, you know, making lots of small decisions, to be able to get to good outcomes,
as opposed to, you know, having a dictatorial system in which there's a small number of people
trying to make decisions. I mean, look, and this is how we won the Cold War against Russia:
our decentralized system just worked better economically, technologically, and ultimately
militarily than the Soviets' centralized system.
And so it just seems like fairly obvious to me that like we have to lean into our strengths.
We better lean into our strengths.
And because if we think we're just going to be like another version of a centralized system,
but without all the advantages that China has with having a more centralized system,
you know, that just seems like a bad formula.
So, yeah, let me pause there and, Ben, too, what you think?
Yeah, no, for sure.
I think that would be disastrous.
And I think this is why it's so clear that if there's one, you know, to answer the question,
if there's one regulatory policy that we would enact that would ensure America's competitiveness,
it would be open source. And the reason being that, as you said, this enables the largest
number of participants to kind of contribute to AI, to innovate, to come up with novel solutions
and so forth. And I think that you're right about China. You know, what's going to happen in China is they're
going to pick one, because they can. And they're going to kind of put all their wood behind that
arrow in a way that we could never do, because we just don't work that way. And they're going to
impose that on their society and try and impose it on the world. And, you know, our best counter
to that is to put it in the hands of all of our smart people. We have so many smart people from all
over the world, from, you know, like, as we like to say, diversity is our strength. We've got
tremendous different points of view, different, you know, kinds of people in our country. And, you know,
the more that we can enable them, the more likely we'll be competitive.
And I'll give you a tremendous example of this. You know, I think if you go back to 2017
and you read any, you know, foreign policy magazine, et cetera, there wasn't a single one that
didn't say China was ahead in AI.
They have more patents.
They have more students going to universities.
They're ahead in AI.
They're ahead in AI.
Like, we're behind in AI.
And then, you know, ChatGPT comes out, and it's, oh, I guess we're not behind in AI.
We're ahead in AI.
And the truth of it was, what China was ahead on was integrating AI into the government,
kind of weaving AI into their government in that way.
And, you know, look, we're working on doing a better job with that with American dynamism,
but we're never going to be good at that model.
You know, that's the model that they're going to be great at.
And we have to be great at our model.
And if we start limiting that, outlawing startups and outlawing anybody but the big companies
from developing AI and all that kind of
thing, we'll definitely shoot ourselves in the foot. I would say, related, like, another
kind of important point, I think, in kind of the safety of the world is, you know, when you talk
about two AIs, that's, like, two AI stacks, perhaps, but it's very important that countries that aren't
America, that aren't China, can align AI to their values. And I'll just give you kind of one
really important example, which, you know, like I've been spending a lot of time in the middle
East. And if you look at the kind of history of, you know, a country like Saudi Arabia,
they're coming from a world of fundamentalism and, you know, a kind of set of values that they're,
you know, they're trying to modernize. They, you know, they've done tremendous things with
women's rights and so forth. But, you know, look, there's still the fact that they've got, you know,
people who don't want to go to that future so fast. And they need to preserve some of their history
in order to not have a revolution or extreme violence and so forth.
And, yeah, we're seeing al-Qaeda spark up again in Afghanistan and all these kinds of things,
and, by the way, al-Qaeda is a real enemy of modern Saudi,
just as much as they're an enemy of America.
And so if Saudi can't align an AI to the current Saudi values,
they could literally spark a revolution in their country.
And so it's very important that, as we have technology,
that we develop, that it not be totally proprietary close source,
that it be kind of modifiable by our allies who need to kind of progress at their pace
to keep their kind of country safe and keep us safe in doing so.
And so what we do here has got great geopolitical ramifications.
Like, if we go to the model that Google and Microsoft are advocating for, this Chinese
model where only a few can control
AI, we're going to be in big trouble.
Yeah, and then I just want to close on the open source point, because it's so critical.
So this is where, you know, I'd say I get extremely irate at the idea of closing down open source,
which, you know, a number of these people are lobbying for very actively.
By the way, I will, I'm going to name one more name.
Yeah.
We even have VCs lobbying to outlaw open source, which I find to just be completely staggering
and in particular, Vinod.
Vinod.
So Vinod, Vinod Khosla, who, and this is just incredible to me, he's a founder of
Sun Microsystems, which was in many ways a company built on open source, built on open source
out of Berkeley, and then itself built a lot of open source, critical open source. And then, of course,
you know, it was the dot in dot-com, which is, of course, you know, the internet was all built
on open source. And Vinod has been lobbying to ban open source AI. And by the way, he denies
that he's been doing this, but I saw him with my own eyes. When the U.S. congressional China
committee came to Stanford, I was in the meeting where he was with, you know, 20 or 30
congressmen, lobbying actively for this. And so I've seen him do it myself. And, you know,
look, he's got a big stake in Open AI, you know,
maybe it's financial self-interest. By the way,
maybe he's a true believer in the dangers, but
in any of that... Well, I think he proved
on Twitter he was not a
true believer in the dangers. I'll get
into that. I'll explain that, but yeah.
Yeah, so, I mean, even
within Little Tech, even within
the startup world, we're not uniform
in this, and I think that's extremely dangerous.
Look, open source, like, what is
open source software? Like, open source software is
you know, it is quite literally, you know,
the technological equivalent of free speech,
which means it's the technological equivalent of free thought.
And it is the way that the software industry has developed
to be able to build many of the most critical components
of the modern technological world.
And then, Ben, as you said earlier, to be able to secure those
and to be able to have those actually be safe and reliable.
And then to have the transparency that we've talked about
so that you know how they work and how they're making decisions.
And then to your last point, also so that you can have customized AI
in many different environments.
So you don't end up with a world where you just have one or a couple
AIs, but you actually have, like, a diversity of AIs, with, like, lots of different points
of view and lots of different capabilities. And so the open source fight is actually at the
core of this. And of course, the reason why, you know, the sort of, you know, the sort of people
with an eye towards monopoly or cartel want to ban this is that open source is a tremendous threat
to monopoly or cartel. Like, you know, in many ways, it is a guarantee that monopoly or cartel
can't last. But it is absolutely 100% required for the, you know, for the furtherance of,
number one, a vibrant private sector. Number two, a vibrant startup sector. And then
right back to the academia point: like, without open source, then at that point, you know, university, college kids are just not going to be able to, they're not even going to be able to learn how the technology works. They're just going to be, like, completely boxed out. And so a world where open source is banned is bad on so many fronts. It's just incredible to me that anybody's advocating for it. But it needs to be, I think it needs to be recognized as the threat that it is. Yeah. And on the Vinod note, you know, it was such a funny dialogue between you and he. So, like, I'll just give you a quick summary of it. Basically,
You know, he was arguing for closed source, he for open source.
His core argument was, this is the Manhattan Project,
and therefore we can't let anybody know the secrets.
And you countered that by saying, well, if this is, in fact,
the Manhattan Project, then, like, you know,
is the OpenAI team, you know, locked in a remote location?
Do they screen all their, like, employees very, very carefully?
Is it airlocked?
Is there, you know, super high security?
Of course, none of that is close to true.
In fact, I'm quite sure they have Chinese nationals working there, probably some of whom are spies for the Chinese government.
There's not any kind of strong security at OpenAI or at Google or at any of these places, you know, anywhere near the Manhattan Project, where they built a whole city that nobody knew about so they couldn't get into it.
And, you know, once you caught him in that, he said nothing.
And then he says back, well, you know, it cost billions of dollars to train these models.
You just want to give that away?
Is that good, you know, is that good economics?
Like, that was his like final counterpoint to you, which basically said,
oh, yeah, I'm trying to preserve a monopoly here.
Like, what are you doing?
I'm an investor.
And I think that's true for all these arguments.
Well, the kicker, Ben, to that story,
the kicker to that is, three days later, the Justice Department indicted a Chinese national
Google employee.
Yeah.
Who stole Google's next-generation AI chip designs, which are quite literally the family
jewels for an AI program. It's, you know, it's the equivalent of stealing the, you know,
if you stretch the metaphor, the equivalent of stealing the design for the bomb. And that
Google employee took those chip designs, downloaded them, and took them to China. And by my
definition, you know, by my definition, that means took them to the Chinese government, because
there's no distinction in China between the private sector and the government. It's an integrated
thing. The government owns and controls everything. And so, you know, 100% guaranteed that
that went straight to the Chinese government, Chinese military. And Google, which, you know,
like, you know, Google has like a big information security team and all the
rest of it. Google did not realize, according to the indictment, Google did not realize that
that engineer had been in China for six months. Yeah. Amazing. Well, hold on. It gets better.
It gets better. This is the same Google with the same CEO who refused to sell Google's proprietary AI technology to the U.S. Department of Defense. So they're supplying China with
AI and not supplying the U.S., which just goes back to, look, if it's not open source,
we're never going to compete.
Like, yeah, we've lost the future of the world right here,
which is why, you know, it's the single most important AI issue for sure.
Yeah, and, and you're not going to lock this stuff up.
Like, you're not going to lock it up.
Nobody's locking it up.
It's not locked up.
These companies are security Swiss cheese.
And, like, you know, you could have a debate about the tactical relevance of chip embargoes and so forth. But the horse has left the barn on this, not least because these companies are without a doubt riddled with foreign assets and they're very easy to penetrate. And so we just have to be, I would say, very realistic about the actual state of play here. We have to play in reality. We have to win in reality. And the win is we need
innovation. We need competition. We need free thought. We need free speech. We need to embrace the virtues of our system and not shut ourselves down in the face of the conflicts
that are coming. Another one: Andreas asks, why are U.S. VCs so much more engaged in politics and policy than their global counterparts? And I really appreciate that question because, if that's the question, it means that, boy, VCs outside the U.S. must not be engaged at all, because U.S. VCs are barely engaged. Yeah. And then what do you
believe the impact of this is in both the VC ecosystem and society in general? And then
related, directly related question. Vincent asks, are European AI companies becoming less interesting investment targets for U.S.-based VCs due to the strict and predictably unpredictable regulatory landscape in Europe? Would you advise early-stage European AI companies to consider relocating to the U.S. as a result? Great question. Well, look, I think that it kind of goes back to a little
of what you said earlier, which is, you know, in startup world, like there's, you know, in the
West, there's the United States, and then there's everywhere else. And the United States is kind of
bigger than everywhere else combined. And, you know, so it's natural. And look, you know, in these
kind of political things, it kind of starts with the leader. And, you know, U.S. is the leader in
VC. Yeah, we feel like we're the leaders in U.S. VC, so we need to go to Washington, because until we go, you know, nobody's going. And so that's a lot of the reason why we started things.
On European regulatory policy, look, I think regulatory policy is generally likely to dictate where you can build these companies. We've seen some interesting things. You know, France turns out to be leading a revolution in Europe on AI regulation, where they're basically telling the EU to pound sand, in large part because they have a company there, Mistral.
And, you know, it's a national jewel for the country.
And they don't want to give it up because, you know, the EU has some crazy safetyism thing going on there.
Yeah.
And I would also note France also, of course, is playing the same role with nuclear policy in Europe.
Yeah.
Yeah.
They're, like, the cleanest country.
Yeah, probably one of the cleanest countries in the world as a result.
Right.
But they have been staunchly pro-nuclear and trying to hold off, I think, in a lot of ways, attempts throughout the rest of Europe, and especially from Germany, to basically ban civilian nuclear power.
Yeah, and the UK has sort of been flip-flopping on AI policy
and we'll see where they come out.
And, you know, Brussels has been as ridiculous on this as they've been on almost everything.
Yeah, the big thing I note here is there's a really big philosophical distinction. I think it's rooted actually in the difference between what's traditionally been called the sort of Anglo or Anglo-American approach to law and the continental European approach. And I forget the terms for it, but there's, like, common law and then, yeah.
Civil law. I think it's civil law.
Yeah, yes.
So the difference basically is: that which is not outlawed is legal, versus anything that's not explicitly legal is outlawed.
Right.
In other words, like, by default do you have freedom, and then you impose the law to create constraints; or by default do you have no ability to do anything, and then the law enables you to do things? And this is, like, a fundamental philosophical, legal, you know, political distinction.
And then this shows up in a lot of these policy issues with this idea called the precautionary principle, which is sort of a rewording of the traditional European approach. The precautionary principle basically says new technologies should not be allowed to be fielded until they are proven to be harmless. Right. And of course, the precautionary principle very specifically is sort of a hallmark of the European approach to regulation, and increasingly, you know, of the U.S. approach.
And its origin: it was actually sort of described in that way and given that name by the German Greens in the 1970s as a means to ban civilian nuclear power, with, by the way, just catastrophic results.
And we could spend a lot of time on that.
But I think everybody at this point agrees, including, increasingly, the Germans, that that was a big mistake.
Among other things, you know, it has led to basically Europe funding Russia's invasion of Ukraine through the need for imported energy, because they keep shutting down their nuclear plants.
And so just like a sort of a catastrophic decision.
But the precautionary principle has become, like I would say, extremely trendy.
Like, it's one of these things, like it sounds great, right?
It's like, well, why would you possibly want anything to be released in the world
if it's not proved to be harmless?
Like, how can you possibly be in support of anything that's going to cause harm?
But the obvious problem with that principle is that you could have never deployed technologies such as fire, electric power, internal combustion engines, cars, airplanes, the computer.
Right. Like every single piece of technology we have, everything that powers modern-day civilization, has some way in which it can be used to hurt people, right? Every single one.
Technologies are double-edged swords. You know, you can use fire to protect your village or to attack the neighboring village. These things can be used in both ways. And so basically, if we had applied the precautionary principle historically, we would still be living in mud huts. We would be just, like, absolutely miserable. Miserable. And so the idea of imposing the precautionary principle
today, if you're coming from, like, an Anglo-American perspective, or from a freedom-to-innovate perspective, is just, like, incredibly horrifying. It would basically guarantee to stall out progress.
You know, this is very much the mentality of the EU bureaucrats in particular,
and this is the mentality behind a lot of their recent legislation on technology issues.
France does seem to be the main counterweight against this in Europe, and, you know, to your point, the UK has been a counterweight in some areas. But the UK also has, like I would say, received a full dose of this programming.
Yeah, they have that tendency.
Yeah, and on AI in particular, I think they've been on the wrong side of that, which hopefully they'll reconsider. So again, this is one of these things. Like this,
this is a really, really important issue. And just the surface level thing of like,
okay, this technology might be able to be used for some harmful purpose. Like that,
if that is allowed to be the end of the discussion, like we are never going to,
nothing new is ever going to happen in the world. Like that will, that will cause us ultimately
to stall out completely. And then, you know, if we stall out, that will over time lead to regression. And, like, literally, I mean, this is happening. Like, the power is going out. In Germany, German industrial companies are shutting down because they can't afford the power, which has resulted from the imposition of this policy in the energy sector.
And so this is a very, very, very important thing.
I think the EU bureaucracy is lost on this.
And so I think it's going to be up to the individual countries to directly confront this if they want to.
Anyway, so I really applaud what France has done.
And I hope more European countries join them in kind of being on the right side on this.
Yeah. Yeah, no, it's always funny to me to hear the EU and, like, the economists and these kinds of people say, oh, the EU may not be the leader in innovation, but we're the leaders in regulation. And I'm like, well, you realize those go together? Like, one is a function of the other.
Okay, good. And then let's do one more global question. Lap Gong Liang asks, are there any other countries that could be receptive to techno-optimism? For example, could Britain, Argentina, or Japan be ideal targets for our message and mission?
Yes, so we're looking to work on that in Britain.
And look, we've got some pretty good reception from the UK government.
There's a lot of very, very smart people there.
We're working with them, you know, tightly on their AI and crypto efforts,
and we're hoping that's the case.
You know, Japan, having spent a lot of time there, you know, they've obviously shown that capability over time. And then, you know, there's a lot about the way Japanese society works that holds them back from that at times as well.
Without getting into all the specifics, they have a very, I would just say, unusual and unique culture that has a great deference for the old way of doing things, which sometimes makes it hard to kind of promote the new way of doing things.
I also think, you know, around the world, the Middle East is very, very kind of receptive and kind of on board with techno-optimism: the UAE, Saudi, Israel, of course. You know, many countries out there are very excited about these kinds of ideas and taking the world forward and, like, you know, just creating a better world through technology. And look, with our population growth, if we don't have a better world through technology, we're going to have a worse world without technology.
I think that's, like, very obvious.
So it's a very compelling message.
And oh, by the way, South America, I should say also, there are a lot of countries that are really embracing techno-optimism now in South America, and, you know, some great new leadership there that's pushing that.
Yeah, I would also say if you look at the polling on this, what I think you find is what
you could describe as the younger countries are more enthusiastic about technology.
And I don't mean younger here literally, as in when they were formed, but I mean two things. One is how recently they've kind of emerged into what we would consider to be modernity, and, you know, for example, how recently they've come to embrace concepts like democracy or free-market capitalism or, you know, innovation generally, or global trade and so forth.
And then the other is just quite simply the demographics, you know, the countries with a larger number of people. And those are often, by the way, the same countries, right? They have the reverse of the demographic pyramid we have, where they actually have a lot of young people. And young people both need economic opportunity and are very fired up about new ideas.
Yeah, by the way, this is true in Africa as well, in many African countries, you know, Nigeria, Rwanda, Ghana, where techno-optimism, I think, is taking hold in a real way. You know, some of them need governance improvements, but they definitely also have young populations. Saudi, I think 70% of the population is under 30, so, you know, just to your point, they're very, very hopeful in those areas.
Ben, Goltra asks, do you think the lobbying efforts by good faith American crypto firms will be able to move the needle politically in the next few years?
What areas make you optimistic as it relates to American crypto regulation, crypto, blockchain, Web 3?
Yeah. So I think that I'm as hopeful as I've ever been. So there's a bunch of things that have been really positive. First of all, you know, the SEC has lost, I think, five cases in a row. So, you know, their kind of arbitrary enforcement of things that aren't laws is not working. Secondly, there was a bill that passed through the House Financial Services Committee, which is a very, I would say, good bill on crypto regulation.
And, you know, hopefully that will eventually pass the House and the Senate.
And we've seen Wyoming, I think, adopt some really good new laws around DAOs.
And so there's some progress there.
And then, you know, we've been working really, really hard to educate members of Congress and the administration on kind of the value of the technology.
There are strong opponents to it, you know, as I mentioned earlier.
And, you know, that continues to be worrisome.
But I think we're making great progress.
And the Fairshake PAC has done just a tremendous job of, you know, kind of backing pro-crypto candidates. And with great success: there were six different races on Super Tuesday that they backed, and all six won.
So, you know, another good sign.
Yeah.
Fantastic.
I'll hit a couple other topics here quickly to get under the wire. So Father Time asks, can you give us your thoughts on the recent TikTok legislation? If passed, what does this mean for Big Tech going forward? So let me give a quick swing at that. The TikTok legislation was proposed by the U.S. Congress and is currently being taken up in the Senate, and, by the way, President Biden has already said he'll sign it if the Senate and the House pass it. This is legislation that would require a divestment of TikTok from its Chinese parent company, ByteDance. And so TikTok would have to be a purely American company or would have to be owned by a purely American company. And then, failing that, it would be a ban of TikTok in the U.S.
This bill is a great example of the sort of bipartisan dynamic in D.C. right now on the topic of China, which is: this bill is being enthusiastically supported by the majority of politicians on both sides of the aisle. I think it passed out of its committee like 50 to zero, which, you know, is basically unheard of; it's impossible to get anybody in D.C. to agree on anything right now except basically this. So this is, like, super bipartisan. And the head of that committee is a Republican, Mike Gallagher, and he worked in a bipartisan way with his committee members. But, you know, the Democratic White House immediately endorsed the bill.
So, like, you know, this bill has serious momentum. The Senate is taking it up right now. They're likely to modify it in some way, but it seems, you know, reasonably likely to pass based on what we can see, like I said, with overwhelmingly bipartisan support.
And, you know, look, the argument for the divestment, or failing that the ban, goes, I would say, a couple different ways. Number one is just, like, you know, an app on the phones of a large percentage of Americans, with the surveillance and, you know, potential propaganda kind of aspects of that, certainly has people in Washington concerned. And then, quite frankly, there's an underlying industrial, you know, dynamic, which is that U.S. Internet companies can't operate in China. So there's, you know, a sort of unfair asymmetry underneath this that really undercuts, I think, a lot of the arguments from ByteDance.
It has been striking to see that there are actually opponents of this bill who have emerged, who I would describe as sort of further to the right and further to the left in their respective parties.
And, you know, those folks, and I won't go through detail, but those folks make a variety of arguments. Let me characterize them at the surface level. On the further left, I think there are Congresspeople who feel like TikTok is actually a really important and vital messaging system for them to reach their constituents, who tend to be younger and very internet-centric. And so there's that, which, you know, is interesting. But then on the further right, and our friend David Sacks, for example, might be an example of this, there are a fair number of people who are very worried that the U.S.
government is so prone to abuse any regulatory capability with respect to tech, and especially
with respect to censorship, that basically if you hand the U.S. government any new regulatory
authority or legal authority at all to come down on tech, it will inevitably be used not just against
the Chinese company, but it will also then be used against the American companies. And so,
you know, there's some drama surfacing around this. And, you know, we'll see how the opponents' arguments play out. You know, look, quite frankly, without coming down particularly on any side, I think this is one of those cases where there are actually, like, excellent arguments on all three sides. Like, I think there are very legitimate questions here. And so, you know, I think it's great that the issue's being confronted, and I think it's also great that the arguments have surfaced and that we're going to, you know, hopefully figure out the right thing to do.
A couple closing things. Let's close on, let's see, hopefully a semi-optimistic note. So John Potter asks, how do you most effectively find common ground with groups and interests that you benefit from working with, but with which you are usually opposed, ideologically or otherwise?
Yeah, I mean, you know, there's this term in Washington, common ground. And I think that you always want to start by finding the common ground. Because I'll tell you something about politics generally: most people have the same intentions. You know, in Washington, in fact, people want life to be fair. They don't want people to go hungry. They want, you know, citizens to be safe but have plenty of opportunity. So, like, there's a lot of common ground. The differences lie not in the intent, but in how you get there, like, what is the right policy to achieve the goal? And, you know, so I think it's always important to start with the goal and then kind of work our way through, you know, why we think our policy position is correct. You know, like, we don't really have a lot of disagreements on stated intent, at least.
I mean, I think there are some intentions that are very difficult in Washington, you know, like the intention to kind of control the financial system from the government, or nationalize the banks, or kind of achieve the equivalent of nationalizing the banks. You know, when you have that intent, that's tough.
But, like, if you start there, you know, most intentions are, I think, shared between us and policymakers on both sides.
And then we'll close on this great question.
Zach asks, would either of you ever consider running for office and for fun, what would be your platform?
So I won't. Just because, look, you know, I think being a politician requires a certain kind of skill set and attitude and energy, certain things that I don't possess, unfortunately.
Do you have a platform you would run on if you did run?
Yeah.
Okay, yeah, let's hear your platform.
The American Dream.
So I won't do it now, but I like to put up this chart that shows the change in prices in different sectors of the economy over time. And what you basically see is the prices of, like, television sets and software and video games are, like, crashing hard, in a way that's, like, great for consumers. You know, like, I saw 75-inch flat-screen, ultra-high-def TVs are now down below $500. Like, it's great.
It's amazing.
Like, when technology is allowed to work, it's magic. Like, prices crash in a way that's just great for consumers, and when prices drop, it's basically the equivalent of a giant raise. So it makes human welfare a lot better.
The three elements of the economy that are central to the American dream
are health care, education, and housing, right?
And so if you think about what it means to have the American dream: it means to be able to buy and own a home. It means being able to send your kids to great schools, to get a great education, to have a great life. And then it means, you know, great health care, to be able to take care of yourself and your family.
The prices on those are skyrocketing.
They're just like straight to the moon.
And, of course, those are the sectors
that are the most controlled by the government.
Where there's the most subsidies for demand from the government.
There's the most restrictions on supply from the government.
And there is the most interference with the ability to field technology
and startups.
And the result is we have an entire generation of kids who basically, I think, are quite rational in looking forward and basically saying,
I'm never going to be able to achieve the American dream. I'm never going to be able to
own a home. I'm never going to be able to get a good education or send my kids to get a good education.
I'm not going to be able to get good health care. Basically, I'm not going to be able to live the
life that my parents lived or my grandparents lived. And I'm not going to be able to fundamentally form a family and provide for my kids. And I think, in my opinion, that's the underlying theme to kind of what has gone wrong, sort of socially, politically, psychologically in the country.
That's what's led to the sort of intense level of pessimism.
That's what's led to sort of the subtraction, you know, the kind of very zero-sum politics, to, you know, recrimination over optimism and building.
And so I would, I would confront that absolutely directly.
And then, of course, I would announce that I don't think anybody in Washington is doing
that right now.
So either I would win because I'm the only one saying it out loud
or I would lose because nobody cares.
But I've always wondered whether, both on the substance and on the message, that would be the right platform.
Yeah, no, it would certainly be the thing to do.
It's the thing.
It's very complex in that, you know, healthcare policy is largely national, but education policy and housing policy have also got a very large local component. So it would be a kind of complicated set of policies that you'd have to enact.
We still have a ton of questions, so we may do part two on this at some point, but we really appreciate your time and attention, and we will see you soon. Okay, thank you.