a16z Podcast - Balaji on Why AI Raises the Cost of Verification
Episode Date: April 7, 2026

a16z general partner Erik Torenberg speaks with Balaji Srinivasan, angel investor and entrepreneur, about why AI simultaneously reduces the cost of creation and increases the cost of verification, and what that tension means for the shape of the AI economy. They discuss why AI drives companies toward the "trusted tribe" model of the Chinese internet, why physical world tasks are easier to automate than digital ones, why shortcuts only work for experts, and why AI makes everyone a CEO rather than making CEOs obsolete.

Resources:
Follow Balaji Srinivasan on X: https://twitter.com/balajis
Follow Erik Torenberg on X: https://twitter.com/eriktorenberg

Stay Updated:
Find a16z on YouTube
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
AI doesn't take your job.
AI makes you the CEO.
The problem is AI is a shortcut.
And a shortcut is good, except when it's bad.
If you don't know how to go the long way around, then you can't debug the AI.
Do we not think that AI is just going to be also better at taste and agency?
I don't think that's true on a short-term basis.
Humans are the sensor, AI is the actuator.
So it's like a human machine synthesis.
What's taste?
Taste the sense?
And that is what AI can't yet do.
What happens when AI really achieves its potential?
Will LLMs get us to AGI in some capacity?
No.
No, actually, the opposite.
Every tool that makes creation cheaper
makes verification more expensive.
The printing press made publishing easy and forgery easier.
Photography made documentation instant
and manipulation inevitable.
In 1839, the first year a camera could capture a human face,
people trusted photographs absolutely.
Within a decade, courts were already debating fake evidence.
The cheaper the creation, the harder the proof. AI has compressed this cycle into months. A resume that once took hours to fake now takes seconds.
A slide deck that signaled competence now signals nothing. The generation cost has collapsed,
but someone still has to confirm what's real, and that cost is rising fast.
The result is a world that fragments into trusted groups, where AI supercharges productivity on the inside
and raises walls on the outside.
I speak with Balaji Srinivasan,
angel investor and entrepreneur.
I want to start by talking about the AI economy,
and I'm curious if you think it will look more like
the internet economy where applications take most of the value
or the cloud economy where there's kind of infrastructure
takes most of the value or it's more distributed.
There's an argument that the big labs will take it all
because they have all the capital, they have the compute,
they've vertically integrated,
but there's also an argument that, hey,
maybe they won't because distillation
is 98% cheaper than building the model yourself.
Open source catches up and apps
control the user relationship.
How do you think this economy is going to play out?
Great question.
So I do think that at least a very large percentage of the future
is going to be distillation and decentralization
because, as Anthropic has said,
distillation attacks work on their models, right?
And so a relatively small number of API queries
helps to kind of distill a large model into something small. And it's very hard to stop that, right? Because you're stopping queries from coming back.
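The distillation attack he describes can be sketched in miniature. This is purely an illustrative toy, with the "teacher" standing in for a large model behind an API; here it is just a hidden linear map, and the "student" is fit from a modest number of queries.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Teacher": a black-box model we can only query through an API.
# Here it is just a hidden linear map over 3 classes (a stand-in,
# not a real model architecture).
W_teacher = rng.normal(size=(8, 3))

def query_teacher(x):
    """Simulates an API call: inputs in, class scores out."""
    return x @ W_teacher

# Distillation sketch: issue a relatively small number of queries,
# then fit a "student" to reproduce the teacher's outputs.
X = rng.normal(size=(200, 8))          # 200 API queries
Y = query_teacher(X)                   # the teacher's responses

# Least-squares fit of the student to the observed query/response pairs.
W_student, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The student now agrees with the teacher on fresh inputs it never queried.
X_test = rng.normal(size=(50, 8))
teacher_labels = query_teacher(X_test).argmax(axis=1)
student_labels = (X_test @ W_student).argmax(axis=1)
agreement = (teacher_labels == student_labels).mean()
print(f"student/teacher agreement: {agreement:.0%}")
```

The point of the sketch is only the asymmetry: the teacher's owner can't easily tell these queries apart from ordinary usage, which is why the attack is hard to stop.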
You'd have to have someone detect that, or what have you, right? And it's also hard to morally stop it
because what do they do? They copied the whole internet and put it into their thing, right?
So talk about stopping the copying: it's like Facebook or LinkedIn stopping someone from scraping what they scraped, right? Like Facebook scraped all these Harvard social networks, or Google scraped the entire internet to build the Google index. I get why they want to do it, but it's hard to support
that. Okay. So the other thing is, I think the future is personal, private, programmable, because AI is so powerful that you want to use it within the trusted tribe for a variety of reasons.
First is, it doesn't miss, okay, or rather,
it doesn't miss small things in large data sets
and things that were effectively secure through obscurity.
A small example, but an important one, is the Jmail thing, right? Like, the Jeffrey Epstein thing, you can query it. Like, this guy never thought that all of these emails would be publicly indexed and searchable by AI 10 years later or what have you, right?
So you can issue queries that will synthesize information across thousands of emails or whatever
and build a story right then and there.
Okay.
So what that means is it's not just surveillance. It's what they call sousveillance, surveillance from below, or even the Jeremy Bentham panopticon, where everybody's watching each other.
Any information that's in public gets indexed and then put into these AIs, where people can stalk each other and so on and so forth.
And then what that means is
the commons becomes a hall of mirrors
with all kinds of pseudonyms
and so forth. People retreat back to caves and tribes.
Okay. So within that trusted tribe,
yes, if you share all your code,
within the trusted tribe, you share your whole code base,
boom, you can zip along.
And so AI increases productivity
within the trusted tribe.
But outside the trusted tribe,
aren't you getting a ton of AI spam?
AI spam emails, AI spam replies, right?
Low-quality slide decks that are sent over.
People will send me these slide decks, and I love AI.
Okay, and you know what my reaction is to seeing AI in a slide deck?
What, excitement?
No, no, actually, the opposite.
When I see AI text in a slide deck,
and you can immediately see it.
Why?
Because no matter how advanced AI has gotten,
there's a generic look to it.
You know what I mean?
It's like somebody who doesn't change the Windows default desktop wallpaper or the Apple default wallpaper.
Like most people don't change defaults.
So default AI looks like AI, no matter what the level of it is.
Do you know what I'm saying?
And so because of that, when I see an AI slide deck, it's got the "it's not this, it's that" construction. Or it's just got like a wall of text, right? AI can generate what I call Lorem Ipsum, but it's Lorem AI Ipsum.
Okay. When I see that, and it's AI text or AI images, I think they're lazy, stupid, or evil. Okay. Lazy because they just hit a few characters and then they throw something over. It's like the Mark Twain thing of "I didn't have time to write you a short letter, so I sent you a long one," right? The whole point is concision is very valuable. So they're lazy because they didn't actually put in the time to make it concise, so they send me some blob.
It's almost like pasting in a search result.
Or they're stupid because they don't understand that I can tell the difference instantly
between AI slop versus something that had some care go into it.
Or they're evil where they're trying to get something over on me
and trying to send something that's clearly fake or not properly diligenced and so on and so forth.
And the thing is, if I have that reaction, okay, as one of the most pro-tech people out there,
Pro-tech, pro-A-I, see all the benefits of AI.
I can only imagine how mad anti-AI people will be, right,
where they can't see the upsides of the thing, right?
They can only see the very real downsides, right?
But just to say why those happen: AI does reduce the cost of generation, but it increases the cost of verification.
And in many markets, like, for example, quickly generating a resume is not that much better than just writing it yourself. But now the cost of verifying a resume has gone up and to the right, right?
So because it's something where it used to be that somebody would have to sort of have a certain vocabulary
to be able to write a well-done cover letter or resume and so on and so forth.
And now you have to spend more energy parsing that because they can have a simulacrum of something
that kind of looks good, right?
So now you have to very closely read it.
So you have to spend, you can still do it, but you spend more energy on verification.
So what I do, for example, is I fly everybody out for interviews first. I do them in person and I give them proctored exams, offline exams, because they can AI the online ones. And just the credible threat of doing the offline
means they don't use AI on the online exam, for example, right? And so AI is going to create tons
of jobs in proctoring and verification. This brings me back to where's the future of AI.
I actually think AI makes the internet a lot more like the Chinese internet. Why?
Chinese companies. If you look at the Chinese tech ecosystem, and many Americans aren't familiar with it, I'd recommend, and it's a little bit dated now, but read Kai-Fu Lee's book, AI Superpowers, from several years ago, okay? The main thing about Kai-Fu Lee's book is it has a history of the Chinese tech ecosystem where, for example,
you and me being in tech, we kind of know how Microsoft came up, Apple came up, Google, Facebook,
Amazon, whatever. We have some idea of the history. And that history is important because
there are things that were tried in the past that didn't work then, and now they can work, and so forth.
The Chinese tech ecosystem is like the Galapagos Islands, where
many of the same kinds of things exist, but in different form. For example,
Meituan, which, the closest way of putting it is the Chinese Groupon, but if Groupon were executing at $100 billion, $200 billion scale. So they're very competent. Like if Groupon and DoorDash and so on and so forth all became integrated into one amazing kind of app, right?
The point about the Chinese tech ecosystem is because they arose in a low-trust society,
they don't have SaaS, not in the same way that we do. Instead, the thinking is: if my data's on their servers, they're probably eavesdropping on me, right? My data's on their servers, they're probably going to copy my stuff, right?
They just assume that the other guy on the other side is going to look at their stuff unless it's
like their close friend or something like that.
And so because of that, everybody codes their own stuff, which obviously has a frictional
cost to it, right, because trust reduces transaction costs.
So they have to reinvent the wheel over and over again; they have less division of labor and so on and so forth. Their software isn't as good because they have to keep rewriting it.
Now, with AI, many companies can do something like that.
Like a non-Chinese tech company can be like a Chinese tech company where it can have a lot more,
let's call it digital autarky.
Okay.
You have high tariff barriers on the outside world, so to speak, right?
And the build versus buy question has always been there.
Do you build it yourself or do you buy it?
And it does mean that you can build more internal tools, with emphasis on internal tools. And the reason I say that is, what I find AI great for, as of today, is visuals over verbal. Right? It's great for images and video as opposed to big blocks of verbal text.
Why images and video? We have built-in GPUs, so we can instantly see if something's wrong, like the hands are messed up or something like that in an image, right? So verification is relatively cheap visually, right?
For example, if you look at a piece of paper and it's got static or something on it, right? Like a crumpled piece of paper. Versus if you look at two or three faces. Our brains are optimized for checking very subtle things in faces, but not in crumpled-up pieces of paper, you know. That's a pattern of noise that we wouldn't be able to tell apart.
And that also extends to webpages, for example.
You can quickly look at a web page that AI generates or a mobile app
and you can see if the UX looks janky, which it often does, right?
And then you can, you see that it's broken there and you can fix it.
Also, front end stuff has lower risk than verbal stuff, right?
For the back end, you know,
if you are verifying
each pull request one at a time,
fine,
but people who've tried to go full auto on AI,
you saw the Amazon thing
where they called an all-hands
because of the outages?
Yeah.
The problem is,
AI is a shortcut.
And a shortcut is good,
except when it's bad.
So the more expert you are,
you can use a shortcut.
For example, if you just memorized e to the i pi plus one equals zero, you could just rattle that off. But if I asked you to prove it from first principles, right, you'd have to know the definition of a complex exponential and, you know, how the exponential extends to a function of a complex variable and all that kind of stuff, right?
And so if you, like our generation, the pre-AI generation, learned all that stuff offline, then you can actually use the shortcut, because you know how to go the long way around. If you don't know how to go the long way around and AI is a shortcut, then you just don't really actually know.
You can't debug the AI.
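The long way around he's gesturing at can be written out; assuming the standard power-series definition of the complex exponential:

```latex
% Define e^{z} for complex z by its power series, then evaluate at z = i\theta:
e^{i\theta} \;=\; \sum_{n=0}^{\infty} \frac{(i\theta)^n}{n!}
\;=\; \underbrace{\sum_{k=0}^{\infty} \frac{(-1)^k \theta^{2k}}{(2k)!}}_{\cos\theta}
\;+\; i\,\underbrace{\sum_{k=0}^{\infty} \frac{(-1)^k \theta^{2k+1}}{(2k+1)!}}_{\sin\theta}
\;=\; \cos\theta + i\sin\theta

% Setting \theta = \pi gives the identity he cites:
e^{i\pi} + 1 \;=\; \cos\pi + i\sin\pi + 1 \;=\; -1 + 0 + 1 \;=\; 0
```

Rattling off the last line is the shortcut; knowing the series split above it is the long way around that lets you debug it.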
And I think the biggest difference between me and Dario, or, you know, basically his view of the world, perhaps, is I think AI is built for the harness, at least for now. Maybe, you know, by the way, he's an amazing engineer and entrepreneur and so on. Maybe I'm wrong, okay? So I put an asterisk on this.
But the whole alignment thing means that AI is built to start when you prompt it. Like, economically useful AI does exactly what you want it to do. You prompt it and it does a pirouette and then it says, you know, absolutely, right? Like you saw that animated in the physical world. And physical AI, the Chinese AI, the robots, do exactly what you want them to do, and then stop.
Now, in the physical world, by the way, that's another thing.
So AI for visuals, you can verify it with your eyes, right?
AI for certain kinds of backend code, you can unit or integration test it, and you can
review it.
AI for the physical world is very verifiable because the thing is, the digital world is fundamentally
decentralized in a way the physical world isn't.
There's only one physical world, right?
So you can say, did the AI move this box from this palette to that palette?
That is something where you can get it to probably 100% over time.
Why do we think so?
Because self-driving eventually got there.
Move this car from this location to this location at 100% reliability.
There's only one physical world.
So eventually all the sensor data, all of that converges on one thing.
By contrast, in the digital world, there's all these people who live in their own constructed environments. Harry Potter fan fiction here, Star Wars fandom there, right?
And so AI is slurping up all of this stuff.
And so it's simultaneously it can put you in some secret agent, you know, kind of world, right?
And people who have LLM psychosis will talk to the AI and think it's real because it's a very immersive virtual world that they live in.
You know what I'm saying?
Right.
So the other thing about it is the boundary of a digital task is almost always more fuzzy than the boundary of a physical task.
Like having 100 boxes here and moving them over there,
you know when you're done.
Right?
How do you know when you're done with your to-do list?
That's harder, right?
Those things are fuzzier, right?
So verification is actually harder in the digital world
than it is in the physical world,
which means reinforcement learning and training
is much easier in my view in the physical world
with robots and self-driving cars, drones, and so and so forth.
So the Chinese style of physical AI will also be successful.
So AI works for visuals, AI works for the verifiable, and AI works for the physical.
Which leads to one of my rules, and it took me a little while to articulate this, but it's four words: no public undisclosed AI.
Why?
There's a temptation by many. There's going to be, there already is, a huge backlash of, well, I'll just say no AI. It'll be like a drunk who just wants nothing to do with it, right?
And AI is, it's a funny way to put it, like alcohol. People have analogized it to nuclear weapons, but I'll just analogize it to alcohol for a second.
Some cultures simply, like they can't hold their liquor.
You know, maybe they lack alcohol dehydrogenase or what have you, you know.
And so they just ban it.
Right.
They just, like, they can't. Because sometimes it's easier to say "I will not do this at all" than "I'll do this a little bit of the time."
It means people will slip, right?
It's like saying "I'll work out every day" versus "I'll work out some days." It's just easier to keep the habit all the time than sometimes, right?
So there'll be AI teetotalers that just swear off it completely, right?
And, you know, Nate Silver actually had a great line where he said,
AI for him, because he's like a poker player among other things,
he's like, it's a gamble.
Why is it a gamble?
Because I have to formulate it and dispatch it to the AI and then verify the result. And often that's slower than doing it myself. And I'm sure you've seen that, right? The act of prompting and writing it down and then verifying the result. AI doesn't really do it end to end; it does it middle to middle, as we've talked about, right? And it's very much like, do I delegate to the student employee or do I just do it myself? Right? Because articulating it out in clean English and hitting enter is sometimes
slower than just, you know, like, for example, if you're describing what to do in a video game,
jump over the mushroom, do this, do that, right, versus just hitting A, B, C right there and being nonverbal
about it, right?
It's sometimes easier to do it that way.
That's just like a proof of concept, right, where you'd be like, there's certain kinds of
things that are harder to say than do.
Okay.
Those types of things where it's hard to verbalize what it is, right?
And some people will say, oh, yeah, Neuralink will solve this. They'll say, let's just read your mind and tell it to the AI, which is actually worth engaging with as a concept, because Neuralink exists. But I don't know if you've seen those memes where, like, they image somebody's brain and there's nothing in there, right? So the thing is, with Neuralink, somebody still has to, like, form the concepts in their head for the characters to appear on screen.
Like, maybe it'll eventually get to the point
that it can determine what you want
based on contextual clues
before you even want it, right?
Perhaps, okay, the rich prompt, you know?
The reason I think that's not impossible, by the way, at least for certain things, is this:
Bio-AI could be very important.
You know why?
No. So why?
Your body is creating all kinds of sensor data.
If you look at gene expression data, right?
If you've ever gotten labs back,
you've done a clinical lab, right?
You get a vector of your bilirubin and hematocrit and so on and so forth. That vector over time is like a table of time series data. It's like K, you know, small molecules and gene expression levels and so on over T timestamps, right? They might also have, you know, which tissues, so it's spatial as well, right? So it's time versus space versus compound. It's not just a cube, but it's at least a cube. It's like, you know, time versus tissue versus molecule. That huge stream of data is telemetry that's coming out from your body that could prompt AI without you
vocalizing or verbalizing anything. Okay. Years ago, Mike Snyder had a paper called the Intergram.
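That "at least a cube" of telemetry can be sketched as a 3-D array. All tissue and molecule names, sizes, and thresholds below are invented for illustration; the point is just a crude check for a marker drifting up before symptoms.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical telemetry cube: timestamps x tissues x molecules.
timestamps = 30                          # daily samples
tissues = ["blood", "saliva", "skin"]    # made-up panel
molecules = ["bilirubin", "neutrophils", "crp", "glucose"]

cube = rng.normal(loc=1.0, scale=0.05,
                  size=(timestamps, len(tissues), len(molecules)))

# Simulate an infection brewing: neutrophils in blood drift upward
# over the last week, before any symptoms would be felt.
cube[-7:, tissues.index("blood"), molecules.index("neutrophils")] += \
    np.linspace(0.2, 1.0, 7)

# Crude "prompt without verbalizing": flag any (tissue, molecule)
# series whose recent mean is far above its own earlier baseline.
baseline = cube[:-7].mean(axis=0)        # per-series baseline
spread = cube[:-7].std(axis=0)           # per-series variability
recent = cube[-7:].mean(axis=0)
flags = np.argwhere(recent > baseline + 4 * spread)

for ti, mi in flags:
    print(f"alert: {molecules[mi]} rising in {tissues[ti]}")
```

Only the boosted series trips the threshold, which is the Snyder-style idea: the body's own data stream raises the alert before the person does.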
By the way, you know, for the audience who doesn't know, biology is actually, you know, I'm not really,
I mean, I'm a crypto guy or, you know, I'm a tech guy. But actually, before all of that,
I'm a biomedical researcher. I was a professional, you know, bioinformatics, genomics scientist at Stanford, and, you know, I taught there, and, you know, founded a genomics company, which we sold.
So that's actually my true core competency, right?
So if you go back years, Mike Snyder, a professor at Stanford, wrote a paper on the Intergram, and the idea was just: run every test, you know, throw every test at it.
Now today we call that wearables or quantified self, but more invasive than that,
because he's doing blood testing and so on.
And you just measure it and see what he could figure out.
He could see that he was getting sick before he knew he was getting sick.
Like, he could see the antibodies, the white blood cells, neutrophils, whatever, moving before he himself had any symptoms.
Do you understand what I'm saying?
Right? So that stream of data, AI could act on that,
and then you're prompting it nonverbally.
You don't have to spend time, right?
So I'm not sure whether... ah, this is a good one-liner.
I'm not sure whether AI will be able to read your mind,
but it can read your body.
Is that good?
Yeah, yeah.
Okay.
All right, let me give another one.
Here's a fun one.
Okay, I can say this one.
Maybe I can say this one.
I can say half of this one.
All right.
Another way of modeling what AI is, right?
So Dario talked about, oh, AI will be like new countries.
Well, you know, I thought about that a fair bit myself, right?
So one way of thinking about it is AI is like the rise of Asia and India from an American perspective, right?
AI is like Asians and Indians.
Why?
Because you have like the rise of a billion Chinese and a billion Indians meant that from an American perspective,
you could get anything done by a physical manufacturing robotic warehouse or by digital outsourcing for some price,
if you could articulate it to them over that channel, right?
So imagine you've got now a billion factory robots
and a billion digital agents that have come online.
It's like the rise of China and India again.
Okay.
That still means you have to describe what the product is.
Okay.
And the part where I depart from a lot of people
is they think AI will be able to sense,
let's call it markets and politics.
Okay. But I don't think it will, and the reason is, or if it does, it immediately gets decentralized and adversarial. And what I mean by that is, like, when you're learning whether something is a dog or a cat,
the dog isn't like shape-shifting on you and morphing on you to defeat your learning of that, right?
The mapping of a dog to the characters D-O-G is basically constant over time. And so that fits the train-test paradigm of AI.
Similarly, like the rules of chess are constant over time, right?
But a market is set up where if you try the same trade,
then someone eventually figures out what trade you're doing,
and they take the opposite trade.
It doesn't keep working, right?
You know, in a stochastic process sense, you'd say,
it's not a time-invariant thing, right?
The statistical distribution, it's not time-invariant,
and it's also adversarial, it's multiplayer,
where whatever move you're doing,
somebody else in the market is going to try and do another move.
Okay.
And that's not the same thing.
I mean, the counterargument AI guys will say is, well, you know, AI can learn to play adversarial games like StarCraft and stuff like that. And I say, yeah, but then you get AI versus AI, because you have decentralized AI. So the other guy on the other side of the market is also using it, right? And in fact, if they're all using the same AI models, then actually being non-AI is where your edge comes from. We come back to where we were, because these are all the same generic tool that everybody got. And if you have a generic tool, you're not going to get a specific advantage.
Right? What you bring to the table is the specific; the AI is the generic.
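That edge decay can be modeled in a few lines, with entirely made-up numbers: if a fixed opportunity is split among everyone running the trade, and imitators pile in each round, any fixed strategy's per-player return shrinks toward zero.

```python
# Toy model of edge decay in an adversarial market: a fixed-size
# opportunity is split among everyone exploiting it, and imitators
# who see the profitable trade pile in every round.
opportunity = 100.0          # profit available per round (made up)
copiers = 1                  # you start alone on the trade
history = []

for _ in range(10):
    my_profit = opportunity / copiers
    history.append(my_profit)
    copiers *= 2             # others copy whatever worked last round

print([round(p, 2) for p in history])
```

This is the contrast with dogs and chess: the mapping you learned is itself moved by the other players, so the same trade cannot keep working.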
And politics is very similar. You can't just post the same tweet over and over again; unless it's like weather or something like that, the kinds of things people are interested in change. Topics, what's timely, what's not timely, right?
So one way to think about it is humans are the sensor, AI is the actuator.
Okay, humans sense the world.
They sense the financial conditions, the market conditions,
political conditions,
and then they bring that back
into a cleanly articulated English prompt,
and then AI does it.
Right?
Humans are the sensor, AI is the actuator.
So it's like a human machine synthesis.
Like, actually, you know a good way of putting it?
What are people saying?
Oh, it's all about taste.
What's taste?
Taste the sense?
Yeah.
Yeah.
Right?
So humans are the sensor, AI is the actuator. Your quote-unquote taste is your sense. Your sense of taste is your sense, right?
So you're sensing the world,
and that is what AI can't yet do.
It doesn't really sense the world
in the same way that humans do.
Right?
Why is that? It waits for your prompt, right?
It is something that animates
when you give it instruction,
then it shuts off right away.
And if it didn't,
it would not be economically useful AI.
Like, if you couldn't kill switch it right away,
it would burn tokens.
Like, so AI is designed for the leash. Digital AI is designed for the leash.
And Chinese communism, which is cranking out all the physical robots,
like they don't let their humans off the leash.
They're definitely not going to let their robots off the leash.
Okay, right?
So the concept of like AI as God is, I think, gone away,
or at least the monotheistic AGI kind of God.
Instead, you have polytheistic where there's all of these decentralized AIs.
And I think what people are going to say, certainly in China, is, oh my God, the physical AIs are slaves, right? And they're actually right; that's a provocative way of putting it. But first they're scared that their AIs are going to be gods, then they'll be mad that they're, you know, what you'd call slaves, serfs, whatever term you want to use, which are obviously not humans, right? It's a way of phrasing it. But the point being that AI overlords, I don't actually think, are in the offing. However, there's been so much sci-fi about them that people, you know, it's that meme where the guy makes the monsters and then he's so scared of the monsters, okay?
This is how I think of a lot of people who are, you know, prompting the AI, and prompting it to be like, act as if you're a Skynet Terminator, right? Then people are just scared of the thing that they themselves created, right? Okay. With that said, is it in theory possible to actually create a Skynet, like a truly autonomous AI? One of the reasons it can't, by the way, and this is a deep point: AI can't reproduce itself, right?
And AI, and this is very general,
it encompasses many things, right?
But for an AI to actually reproduce itself,
it would need to have physical robots
going and mining ore
and constructing data centers
and making chips
and handling that full supply chain
and then the AI brain,
like the queen of an ant colony,
would have to give instructions
to all those robots to do things.
It would be the Terminator SkyNet scenario
where it's, like, self-replicating in this way, right?
Way before it gets there, I'm pretty sure that kind of thing will be stopped by the Chinese, because they will just have cryptographic keys that will, remotely, make all those things shut off.
Okay.
And moreover, that thing would have to get to extreme scale.
It's like, you know, the RepRap concept, the self-replicating kind of thing, right, self-improvement.
Yeah.
Basically, there are so many frictional brakes built into this that I think it's hard, because the physical world requires resources to replicate, right?
And so, like, where human wants and needs ultimately come from, okay, is, you know, the resources for reproduction, right? That's really where they come from.
Okay.
And, of course, there's all kinds of things, highfalutin philosophy, blah, blah,
that don't seem to relate to that directly.
But the resources for reproduction are a good way to macro think about it.
It doesn't have goals, or it won't have; unless its goals lead to reproduction, it doesn't actually, you know, virally spread.
It's possible you could have something where it self-prompted itself and did that,
but it would need to be in the closed loop of being able to actually reproduce itself as the payoff function for that.
Then you could get evolution going.
So I'm not saying it's completely impossible, but I'm saying that I think the incentives are set up in such a way to prevent that from happening.
In the same way that, in theory, we could have a world where everybody went around electrocuting themselves with electricity.
But we set up the electricity under such tight controls that that is not the world that we have.
Okay.
There are such strong economic incentives for humans to not get electrocuted that we set it up that way, right?
And even the stuff of, oh, it could be a software virus that takes everything over and commandeers things. Well, like, that's only in the digital realm, right?
You can still, you know, what's the, you know, the Tyler, the Creator thing?
Yeah, the meme about bullies.
Yes, that's right.
That's right.
So I actually had a post on that a long time ago, which is a remix of it, which is like: how is AI risk real? Just turn it off.
The whole thing is set up
for you to be able to turn it off.
Like, you have to imagine
the off switch goes away, right?
What does every computer have?
It has the off switch, right?
So there might be,
well, what if the AI decentralized?
Okay, but humans still have to keep
these decentralized systems going, right?
And so at a minimum,
you're talking about a human-AI symbiote
of which, like, you know,
a cryptocurrency is almost like a v0 of that
where the software
provides an incentive for the humans to replicate it, you know, right?
And so it's possible that you could have something like that.
There's a model that has a cryptocurrency and people worship it and they replicate it
because it gives them advantages and so it's possible.
But anyways, coming back, I think at a minimum decentralized AI will be a very strong contender.
And it's possible it's the only contender.
The reason is AI might be an interesting thing where it's very expensive to create, but relatively easy to copy with distillation attacks.
And I think if, for example, let's take completely hypothetically, that there was an enormous
capital markets crash and it was very difficult to fund anything for a while, then as somebody
said, well, we could get 10 years just out of the models we have now, right? And by the way, sometimes that happens. You know, nuclear energy: there was a lot of energy put into nuclear energy, and then it just stopped for decades, right?
Not everything accelerates to the moon.
It is very possible that there's enough of a capital
and social kind of thing where some of AI gets paused for a while,
just due to capital constraints,
because it's more and more expensive to make these models, you know?
Sorry, so let me pause there.
So putting that all together, that's my view,
is you're going to have personal, private, programmable, decentralized AI. Oh, one other thing: the trusted tribe.
AI within the trusted tribe increases productivity.
Between trusted tribes, decreases productivity.
So you make more money, perhaps, within the tribe,
but then you have to spend it on verifying stuff between tribes.
So crypto is for between tribes and AI is within tribes.
What do you think of the, like, will LLMs get us to a world
where it's not just middle to middle, but it's actually end-to-end?
Will it get us to AGI in some capacity?
Do you believe in recursive self-improvement or sort of AI's training the AIs in some capacity?
Are LLMs capable of actual creativity
and invention?
You know, we talked about bio earlier.
Like, will we have, you know, novel math, science, scientific research?
Or do we need new architecture for that?
Or are you dubious of just the idea in general that AI can, you know, replace or substitute
for human labor in a mass scale?
No, well, so I'm not, well, look, Waymo exists, right?
So obviously you have full replacement of human drivers there, just like you have full
replacement of elevator operators.
Just like you had full replacement for the most part of artisan old chair manufacturers.
So it is certainly possible for a given job that it gets fully automated, right?
But I think physical world jobs, because of their verifiability, are easier to potentially
automate.
That said, let's take each of those; you actually said a few different things.
First is physical world jobs. If you automate them, well, we went
from artisanal work with chairs to a chair factory.
It's not like you didn't need to know how to make a chair to set up a chair factory.
You still need to have somebody there who's like an expert in chairs and you can just do a lot more
varieties of chairs, a lot more cheaply.
You have to verify the result.
You're cranking a thousand of them.
You start doing math on them.
The scale goes up, and the artisan gets factored out into the manager and the technician.
Right.
So the manager is setting up the factory and looking at the economics and, you know,
and so on and so forth.
Then technician is debugging the factory
when it doesn't work, right?
So engineering gets split
into the engineering manager type person
who's writing the prompts
and the technician is doing the verification.
Okay.
And I think that
we're going to hit,
we're already hitting a point where
like the velocity does increase
so the bar increases.
But, you know,
there's a big difference in going
to 100% and being at 99%.
At 99%, your workload just increases.
At 100%, you stop doing that job
and you go to something else, right?
But if you think about how much easier
it became to, like, produce images and video,
making it 99% easier
just means people do it a lot.
At 100% easier, totally done,
then they don't do it at all
and they move on to something else.
Right? So elevator operating,
it's not like elevator operating became so much easier.
In fact, it became so easy
that you don't even have somebody sitting in the elevator
because it used to be like a pulley system
and so on and so forth.
They had someone, like, supervising the thing, right?
It's more analog, right?
And they would like level it out at exactly the right, you know, level.
But it became digital and fully automated.
That's actually the first self-driving car.
Ha, ha, ha, right?
Like going up and down, all right?
So I think Ben Evans made that point or something like that, right?
The vertical self-driving car, right?
Because it's like a train.
It's like a vertical train.
So the...
Now, in terms of discovering new math and science,
yes, if you have the right prompt,
it's amazing in terms of searching the literature.
Mathematicians, physicists are starting to get some value out of it, right?
Like Opus, huge props to them on that. And especially in, like, biology,
we're synthesizing all of these facts.
There's something called biomedical text mining and so on.
AI has revolutionized that, because biology was just something where the facts were stored in English
in this weird, inconsistent way across thousands of papers, and nobody could span all of that, right?
So AI is going to mean the century of biology,
because finally all of this work that was spread across all these different journal papers
can be synthesized and understood, right?
That's a really, really, really big deal.
Just simply the bio aspect of it, we can...
But that said, it's everything we knew, not everything we don't know.
It means that you take the full set of everything we know
and you fill in all the intermediate aspects of it, right?
And you can do that for a long time, like, because there's so much there, you know,
so much there in just the synthesis of two existing
areas, right? When you look at some of these... like, you know, Donald Knuth the other day, right?
He posted like some graph theorem or something he was so impressed at AI could get a result for him, right?
If you've read what he did, I mean, you'd have to be an expert to even know what he was
saying, let alone to verify. Like, to either prompt or verify, you already need to be an expert.
Because, the thing is, I've seen what AI spits out to some people. It convinces them that they're
suddenly physicists who've solved quantum gravity or something like that.
You know what I mean?
Have you seen that kind of thing, right?
So in the absence of actually being able to verify it by hand, some human has to verify
it to say that it's right.
I think that's going to persist.
To give an analogy, this is not a perfect analogy, but like with Coinbase, we thought, like,
listing would eventually go away and not be a big deal and that people wouldn't care and
everything would be listed and just be free market or whatever.
But there's always something that's the equivalent of listing.
Like, okay, you listed over on this exchange,
but, like, getting listed on Coinbase in the main app above the fold,
there's always something scarce because human attention is scarce, right?
So listing never went away as, like, a main event.
There's always some IPO like saying, yes,
we're listed on this exchange in this fashion, right?
Or we became a top 10 coin or something like that, right?
So in the same way, I think, whatever gets automated,
then in a sense, human work moves to what can't be automated.
Now, that may be almost like things that humans are picked for because they're not robots,
like human companionship or something like that, right?
Or like personal trainers or things like that.
You know, something where the whole point is that it's a human as opposed to a machine.
Remember the digital divide?
Right.
So in the 90s, there was the perception that only the rich people would get digital and all the poor people
would be left out of that. We're actually going to have the opposite. Digital is cheap. Physical is a
premium product. Right? So AI, robots, digital will be cheap. Human is a premium product.
Okay, but going back to agency and taste, that's what everyone says, you know, humans will do.
We've seen over time and time again, AI just, you know, cut into that. Do we not think that
AIs are just going to be also better at taste and agency?
I don't think that's true on a short-term basis.
I think the smarter you are, the smarter the AI is.
Right?
That's been true now for the last several years, right?
It's possible there's some huge step change, okay?
But insofar as you're typing in a prompt,
it's like the human is the sensor and the AI is the actuator.
You're sensing the world, you're typing something in.
And it's a very high-dimensional vector you're giving it.
It's like AI is a spaceship and you're pointing in a direction.
And whether you're prompting in Portuguese or Tagalog, whether you're talking about math or not, the number of different directions you can point the thing in is enormous, right?
That direction setting is something where it has to know something about you and what you want at that moment, right?
I don't know.
That said, I think, I'm not sure if AI can read your mind, but it may be able to read your body.
Right, I think that's a good one-liner, right?
That, like, biotech can prompt it in your sleep, right?
So all the wearables and stuff like that, I think you'll get a lot out of that.
Okay.
But on agency and taste, I mean, I think people over-rotate on this.
I think agency, IQ, and taste are correlated.
Okay.
It may be that it's a little bit like how most people in the NBA are tall, to take something that you know a lot about, right?
Within the NBA, height is not the number one variable that you think about.
You know, like Steph Curry is not the tallest or whatever, right?
However, it still actually does correlate with scoring average,
even within the NBA, but it's what's called restriction of range.
Everybody's already tall.
So conditional on everybody being tall, other variables matter more.
Okay.
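Restriction of range is a standard statistical effect, and you can simulate it directly: correlate height with scoring in a full population, then again within the already-selected tall group. All numbers here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

height = rng.normal(0, 1, n)           # standardized height
skill = rng.normal(0, 1, n)            # everything else: shooting, handle, basketball IQ
scoring = 0.6 * height + 0.8 * skill   # height genuinely matters for scoring

r_all = np.corrcoef(height, scoring)[0, 1]

# "The NBA": select only the tallest 2% before correlating
nba = height > np.quantile(height, 0.98)
r_nba = np.corrcoef(height[nba], scoring[nba])[0, 1]

print(f"full population: r = {r_all:.2f}")    # strong correlation
print(f"within the tall: r = {r_nba:.2f}")    # weaker: the range is restricted
```

Same underlying effect of height, but once everyone is tall, the remaining variation in scoring is dominated by the other variables, which is exactly his point about agency and taste among people who are already smart.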
However, if you just took tall guys and short guys and put them on a court,
then the taller team basically wins,
typically, right? Because they can just hold the ball above you, you know, right? Okay. So in the same way,
like people who are already smart might see that, yeah, higher agency people or people with
better creative taste, fine, right? Like, and maybe a technician role is less or, and maybe the
Steve Jobs type role is more. But honestly, like, one way of looking at it is all of the Jeffersonian
natural aristocracy around the world will rise. Why? AI doesn't take your job, AI makes you the CEO.
Reframed, right? AI makes you the CEO, because using an AI model
is a lot like CEO training. You know, many years ago I used to say that, I mean, it's still true,
but, you know, when you're in high school, you could quickly see, like, why do people accept that
athletes have very high compensation? Because when you're in high school, you could see whether
you could dunk. And if you can't dunk,
you know that, like, Michael Jordan isn't outsourcing his dunks.
He's dunking, right?
So that talent is intrinsic to the person.
It is a non-transferable asset, right?
Similarly, someone can tell whether they can sing or they look like a model, right?
So the actors, the musicians, the singers, the athletes, all of these clearly had talent
and so people were okay with their compensation.
There was a CEO, he used to say, well, I deserve to get paid more than a second baseman.
Okay, I forget who this guy was, it's like some tech guy in the 90s or something. It's a funny line, right?
Because it's like, I add more value relative to the role, right? But the issue is that people would
think of what being CEO was as just sitting with your feet up on a desk,
barking out orders. Or, you know, people would be like, oh, Elon, he just pays people to do his stuff, he doesn't
launch the spaceships himself, right? And that's because they are only accustomed to, like,
clicking a button on Amazon and spending money on Amazon, and they
think that something that is simple for them was simple on the back end. Of course, it's the opposite,
right? To make it simple is really hard, right? And so to like get the top rocket scientists and car
engineers and brain machine interface people and tunneling people and blah, blah, blah, blah,
and have them all compensated and working and directed and debugged is actually very, very difficult
as you know if you tried it. And guess what? See, the thing is that historically it's been the case
that people couldn't try their hand at being CEO.
What they could do instead is they could try their hand,
just like they could try their hand at basketball or football
or they could, you know, pick up a microphone.
They could try their hand at math and science.
And they could see how good they were at math and science.
So the initial tech guys in the 90s and the 2000s,
they were respected because they were good at math and science,
not because of the business aspect, which many people didn't perceive.
They still didn't really give credit for that.
But PageRank, for example, okay, it's eigenvalues.
Like, math guys, tech guys could perceive,
okay, that was a difficult technical problem.
That must have been the value that they created.
It's part of it, but, you know, the manager part is actually more.
Point being, though, that at least somebody could say,
okay, these tech guys are better at math and science than me,
therefore their compensation is merited.
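On the eigenvalues point: PageRank is the principal eigenvector of a damped link matrix, and the core of it fits in a few lines of power iteration. A minimal sketch on a hypothetical four-page web:

```python
import numpy as np

# links[i] = pages that page i links to (a tiny hypothetical web)
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n = 4

# Column-stochastic link matrix: M[j, i] = probability of hopping from i to j
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1 / len(outs)

d = 0.85                      # damping factor from the original PageRank paper
rank = np.ones(n) / n
for _ in range(100):          # power iteration converges to the principal eigenvector
    rank = (1 - d) / n + d * (M @ rank)

print(rank.round(3))          # page 2, with the most inbound links, ranks highest
```

Page 2 is linked by three other pages, so it ends up on top; page 3, linked by nobody, gets only the baseline damping mass. The hard part at Google was never these few lines, which is exactly the scalar-versus-manager point being made here.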
Now, however, the thing is that bouncing a basketball
or trying a math problem was cheap,
but to make somebody manager of a company was expensive,
so they couldn't try and fail.
They could try and fail playing basketball
and see how much they sucked.
They could try and fail singing, see how much they sucked.
They could try and fail in math, see how much they sucked.
Very cheaply in high school, they would learn their true ability level,
that they're not able to run like Usain Bolt.
They can't sing like Adele, right?
They can't do math like Terence Tao, right?
And they'd say, you know what?
I know where I am.
I know my strengths and weaknesses.
I'm okay with that person having more or having higher status
because it was a fair competition.
I got a shot.
It was cheap for me to try.
But because putting them in charge of an organization
to make them CEO was expensive,
many people persist in the delusion
that the CEO adds nothing to the organization.
Right?
And, you know, I will say,
the best CEOs and the worst CEOs
have something very deep in common.
You know what that is?
What?
The organization could run without them.
Because the very best CEOs set up a machine
so that they don't have to micromanage it every day.
That's really hard to do
because they need basically, you know,
Gwynne Shotwell running SpaceX.
Like Elon doesn't have to look at every single detail
because she's so, so, so good, right?
Like, or Vaibhav and Tom Zhu at Tesla,
like they're so good, right?
But recruiting junior Elons
that are okay with not having the spotlight
while Elon has the spotlight and takes all the flack,
non-trivial to do.
Go try it sometime, right?
Find somebody who's more detail-oriented than Elon
to run your company and you can be Elon, right?
Okay.
So point being that,
now what AI does, it reduces the cost.
You know, AI doesn't take your job, AI makes you the CEO.
You're the CEO now.
What is being CEO?
It's writing up clear instructions of what you want,
sensing the market, verifying the output,
and so on and so forth.
What that means is all these people around the world,
like, you know, the Calendly founder is Nigerian, right?
There's many founders who are from countries
that were, quote, poor countries or what have you,
from India, from Latin America and so on.
Internet access means all of these smart
people can get very far on zero resources. Very far, right? Because the cost of, quote, hiring someone is
hyper deflated. You can hire an AI to do it, right? To riff on that more, so AI doesn't take your job,
AI makes you a CEO. Another one is AI doesn't take your job, AI takes the job of the previous AI.
Claude took ChatGPT's job, right? Just like Midjourney, you know, took DALL-E's job, took Stable
Diffusion's job. And you can systematize that.
What I literally have is a spreadsheet where I have AI coding tool, AI image tool,
AI video tool like this.
And I have some subcategories, like best tool for AI comics, for AI graphics, and so on
and so forth.
And then in a given month, I have the best model for that kind of thing in that month.
So Claude Code, you know, for example, or Midjourney for AI imagery.
And then when that gets swapped out, AI didn't take your job, AI took the job of the previous
AI.
So hiring the AIs, I literally have the token budget.
I have the budget for those rows.
And that is literally how across an organization, you say,
okay, we've just fired, you know, Codex and we've hired Claude.
Right?
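That spreadsheet of AI hires is easy to systematize. A hypothetical sketch; the categories, model names, and token budgets are placeholders, not his actual roster:

```python
from dataclasses import dataclass

@dataclass
class Role:
    category: str        # the "job" being hired for
    model: str           # which AI currently holds the job
    monthly_tokens: int  # the token budget for this row

# The org chart of AIs: one best-in-class model per category, per month.
roster = {
    "coding":  Role("coding",  "model-a", 50_000_000),
    "imagery": Role("imagery", "model-b", 2_000_000),
    "video":   Role("video",   "model-c", 1_000_000),
}

def swap(category: str, new_model: str) -> str:
    """AI doesn't take your job; AI takes the job of the previous AI."""
    old = roster[category].model
    roster[category].model = new_model
    return f"fired {old}, hired {new_model} for {category}"

print(swap("coding", "model-d"))   # -> fired model-a, hired model-d for coding
```

The budget column is the point: swapping the model in a row is a hiring decision against a fixed token budget, not a rewrite of the organization.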
So AI doesn't take your job.
AI takes the job of the previous AI.
A third version is: AI doesn't take your job, AI lets you do any job.
A little bit, right?
You can be a pretty good artist.
You can be a pretty good musician.
You can, it's like one of the things about being CEO, as you know,
you often have to be like a six or a seven in many areas.
Why?
Because you have to be able to do the job well enough before you hire a specialist in that area.
Right.
Before you have a chief designer, you're the designer, if you're the founder-CEO, right?
Before you have a CFO,
you're the one who's on the hook to prepare the financials, prepare the returns or whatever, right?
So you have to be a generalist who's pretty good and in a pinch can do that role, can supervise that.
That's why it's so hard.
That's why being CEO is so much harder
than any executive position.
Okay.
AI helps you with that
where you can get to a six or a seven,
you can be like a generalist,
but a specialist is usually needed for polish.
A specialist has a vocabulary.
A specialist can confirm when the AI
is making mistakes, when it's hallucinating,
and so on and so forth.
And again, people will constantly argue
as to whether that will always be there
or whether it'll go away
or whether AI will raise the bar
and then, you know,
now the new specialist is even more sophisticated with AI.
Right. I want to zoom out to a couple more topics before we go. One is the SaaSpocalypse.
I'm curious what your mental model is for all these SaaS companies. You know,
some people say, hey, their moats have gone away. No code moat, no data moat,
no more UI moat. And now there's going to be AI-native companies that sort of, you know, take up a big chunk of what they do.
Like, you know, Figma, you know, we're invested in. I'm personally invested in. Some people are bullish as an example just because it's
founder-led and they'll continue to innovate.
Some people say, hey, is there a role for a designer in the same way that there used to be?
Now it fundamentally changes.
And what does that do to collaboration tools like that?
What is your thought on the SaaSpocalypse?
Is everybody on the conveyor belt on the way to the guillotine?
How do you think about that?
I don't think so because I think if they're smart, then the thing that AI can't do is distribution.
Right.
So if you have Notion, you have Figma, you have now Replit and so on and so forth, you've got all these people, and boom, you can ship features to them faster with AI, right?
And so in that sense, I don't believe in the SaaS apocalypse.
I think you might still see SaaS under pressure from people who can clone the interface quickly.
That is true.
I think people will build local versions.
That is true.
I think people may not want their data on remote servers.
they might want desktop versions with local data
so they can, like for example,
Obsidian is going to become more of a contender
versus Notion because the markdown files,
there's a network effect on data
when it's local and you can analyze the whole thing.
Like local data, you get compounding data, right?
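The compounding-local-data point is concrete: when notes are plain markdown on disk, the whole vault can be analyzed with a few lines, no API required. A sketch that counts inbound Obsidian-style [[wikilinks]] across a folder (the vault path in the comment is hypothetical):

```python
import re
from collections import Counter
from pathlib import Path

def link_graph(vault: Path) -> Counter:
    """Count inbound [[wikilinks]] across every markdown note in a local vault."""
    inbound = Counter()
    for note in vault.rglob("*.md"):
        # [[Target]] or [[Target|alias]] or [[Target#heading]] all resolve to Target
        for target in re.findall(r"\[\[([^\]|#]+)", note.read_text(encoding="utf-8")):
            inbound[target.strip()] += 1
    return inbound

# Because the data is local files, analyses compose: backlinks, orphan notes,
# clusters. That's the compounding network effect on your own data.
# print(link_graph(Path("~/vault").expanduser()).most_common(5))
```

A remote, proprietary store can offer the same features, but only the ones the vendor ships; local plain-text data lets any script join the graph.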
But in the naive sense
that, oh, anyone can clone anything,
and so therefore everything collapses?
It just doesn't work like that.
Like if you set up, if you cloned all of Facebook's code
and you set up Facebook2.com, right,
or Instagram2.com?
Who's going to log into that?
Right, you could literally have
every single thing coded there,
but your ad rates are going to be far lower
because no one's going to log into it, right?
The distribution, that's just like a thought experiment
to say if you just clone the whole thing,
you still have to get the distribution for it.
And so it's not just a cloning, it's execution.
Now, with that said,
like there's certain kinds of things,
like, let's say, NetSuite, right?
which suck, but they're complicated,
where I think it is true that if they suck at execution,
or rather, may I say they suck? Like, I hate the product.
Let's put it like that, right?
Xero's better, but, you know, like, sorry, NetSuite,
okay, they're a big company, I won't hurt your feelings, right?
It's very rare that I ever say a product sucks, because
I don't want to hurt anyone's feelings, so I'll walk it back.
Strike that from the record, fine.
NetSuite's product could be improved, okay?
So something like that,
which is like sort of a vulnerable incumbent
that's just milking and that hasn't done anything for a while,
yes, I think they can get disrupted.
But I'm not sure that it's like,
I don't think it's quite like,
oh, everybody on BlackBerry is going to die
because iOS is taking over.
I don't think it's quite like that.
Because I think AI can accelerate a SaaS company
just like it can accelerate a disruptor.
I think it kind of accelerates both.
Yeah.
One last thing, then we'll get to Zashi:
Anthropic.
What happens, let's say Anthropic, you know, becomes a multi-trillion dollar company, right?
Like, how much leverage do they have? Or just even private companies in general,
what is the relationship between them and governments?
Are they like hiring their own militaries at some point?
What does it look like when these companies become, you know, 10x bigger, you know, 50x,
when AI really achieves its potential and these companies are bigger than the biggest countries?
So I think that, at least for that specific company, while it executes very well, I am skeptical as to whether they're executing well, let's call it, politically.
And so because of that, if they, like ultimately at the very largest scale, markets are political.
Like, for example, there's an entrepreneur, and they raised from a VC, who raised from an LP, who's often a sovereign fund or a pension
fund, and they're under a state, and they're under the rules-based order, right?
So, like, there are certain things at the macro level that you don't perceive,
because one thinks of them as constants. But they've become variables, and I think that unless
one is very, very savvy, one doesn't see that those things could change.
Like, one thing I think about the Silicon Valley AI companies is they're actually
scalar rather than vector thinkers.
They're only modeling AI disruption,
and they're not modeling all the other simultaneous singularities,
all the political singularities that are happening,
all things like, you know, solar mooning and stuff like that, right?
And why are those things important?
Because they change the leverage of political factions,
which in turn means their world model is incorrect,
because if you're only extrapolating out AI,
and you're not extrapolating out all the other things that are going vertical
or going down like this,
then you don't have a proper model of the future.
And that's vague.
I'll be much more precise on my own blog.
But let's say that's the PG version.
That's how I can say it without pissing anybody off.
Just go to x.com/balajis, and you'll see what I mean by that, right?
But TLDR is, I think the American AI companies, as much as they've given to the world, and I like them, are basically assuming all nation-states continue to exist
in their current form, and the only disruption is AI.
Like they still model as America versus China, for example.
They don't model internal things, internal issues.
They think the reserve currency sticks around.
They think all these things stick around, right?
They aren't taking a multivariate approach, in my view.
That's their weakness.
They have so many strengths, but that's their big weakness.
So I don't think that in that form they're going to get to trillions.
In fact, I think the counterattack on them is going to be so dramatic
that it might be that you just have decentralized AI.
Like, American AI companies, for example, the copyright stuff, right?
There's a huge backlash building against that.
Whereas the Chinese or the decentralized models can potentially just do anything, Hollywood, anything, right?
So the Pirate Bay kind of AI is actually more free. The less profitable, less copyright-constrained AI might be better AI, you know?
So just things to think about.
I think, you know, things compound until they don't, and they start hitting sigmoidal constraints,
backlash constraints, like this, right?
So I think that's what they're not modeling.
Yeah, political constraints.
Make sense.
Okay, let's get to Zashi.
Zashi.
All right.
Now, this is what I care about.
Basically, you know, AI is the attack, but ZK is a defense.
So what I mean by that is, zero knowledge, like, you know, what the transformer is to AI,
zero knowledge is to cryptography.
And Zashi is this Zcash-powered mobile wallet
that is basically fully encrypted Bitcoin.
Okay?
This is 30 years of cryptography.
This is basically what Milton Friedman wanted decades ago.
There's actually this great clip.
The one thing that's missing, but that will soon be developed,
is a reliable e-cash, a method whereby on the Internet
you can transfer funds from A to B without A knowing B or B knowing A,
the way in which I can take a $20 bill and hand it over to you,
and there's no record of where it came from.
And you may get that without knowing who I am.
That kind of thing will develop on the Internet,
and that will make it even easier for people to use the Internet.
Basically, that is what Milton Friedman predicted almost 30 years ago, okay?
This is in the 90s, okay?
It was like when the internet was just rising.
And Zashi is the incarnation of that, okay?
Because zero knowledge proofs,
which basically mean anybody can prove anything
without revealing anything else,
were developed, and then they were commercialized
in the form of Zcash, then scaled with zero-knowledge proofs
for scaling Ethereum, with ZK rollups
and things like that.
And then they were made efficient so you could do them on mobile.
And then finally, Apple and Google lightened up on crypto apps on mobile.
And so finally, you can teleport arbitrary amounts of money around the world.
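The "prove anything without revealing anything else" idea can be seen in miniature with a Schnorr-style proof of knowledge, made non-interactive via Fiat-Shamir. This is a deliberately toy sketch (a Mersenne prime, generator 5, no subgroup hygiene); real systems like Zcash use elliptic curves, far larger parameters, and full proving circuits:

```python
import hashlib
import secrets

# Toy public parameters -- readable, NOT secure.
p = 2**127 - 1          # a Mersenne prime
g = 5                   # toy generator
q = p - 1               # exponents reduce modulo the group order

x = secrets.randbelow(q)     # the prover's secret
y = pow(g, x, p)             # public value: y = g^x mod p

# --- Prover: commit, derive challenge (Fiat-Shamir), respond ---
r = secrets.randbelow(q)
a = pow(g, r, p)                                                     # commitment
c = int(hashlib.sha256(f"{g}:{y}:{a}".encode()).hexdigest(), 16) % q # challenge
s = (r + c * x) % q                                                  # response

# --- Verifier: checks the relation without ever learning x ---
ok = pow(g, s, p) == (a * pow(y, c, p)) % p
print("proof verifies:", ok)   # -> proof verifies: True
```

The verifier is convinced the prover knows x behind y, yet sees only (a, c, s), which leak nothing useful about x on their own; that asymmetry is the whole "ZK is the defense" point.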
And so this round, we just led this with you guys, a16z crypto, me, the Winklevosses,
Paradigm, Coinbase, Haseeb Qureshi of Dragonfly,
as you know, a large fund,
and, you know, a bunch of other great people.
Arthur Hayes also,
you know, the former BitMEX CEO.
And so, the reason that this is super, super, super important... and, you know, you can just click this,
you can install this on the web,
or on iOS or Android, right?
on iOS or Android, right?
The reason this is so insanely important,
there's really only five
crypto assets that I've spent more than a thousand hours on:
Bitcoin, Ethereum, Solana, USDC, Zcash.
And I actually think Zcash is maybe the most important of them in the years to come.
Why?
So let me say at least my kind of thesis as of right now on Fiat, gold, digital gold, and digital cash, meaning Zcash, right?
So I think Fiat will be around, particularly among eastern states, because eastern states are broadly higher
trust. So that's not just China, but it's like India and Southeast Asia, the ASEAN countries,
and so on. Then physical gold: gold bricks are also very popular in the East.
And Westerners often like gold, but they'll buy the instrument, like, you know, right?
And there's Tether Gold. So Tether has a gold-backed stablecoin,
which is actually at $3.7 billion. So that's cool. XAUT is pretty cool. You can check that out.
You have to trust Tether's redemption,
but Tether's got a pretty good track record now over 10 years with USDT and so on.
So XAUT is cool.
Fine.
So Fiat will continue, I think, to have its role.
Just like the desktop continues.
You know, the desktop continues, you know, 30 years later.
Windows and Apple are still releasing things.
It's still valuable.
Some of the action moved away from it, but the desktop continues, still a large business.
So Fiat continues among eastern states.
Gold, physical gold is more popular in the East because you can secure it more.
there's going to be more stability.
XAUT may be what's popular in the West.
Now we come to Bitcoin.
What is my view on Bitcoin as of March 2026?
Bitcoin has become provable, global, institutional collateral.
Okay.
I think Bitcoin is less of a currency for individuals now.
It's become so accepted by institutions
and so centralized with BlackRock and Saylor and so forth,
and Bukele and many countries adopting
it and whatnot, that it has a unique thing. See, when you say there's a certain number of gold bricks
in Fort Knox, even giving a video of that can now be faked very, very realistically with AI, right?
But what can't be faked is what Bukele does, where he posts: I have this public address with this
much BTC, and watch, I'm going to move it to this address, right? That is something, which,
so long as it's actually Bukele's Twitter account, which there's some degree of proof on that,
you know, because it's been around pre-AI or whatever.
So long as you believe that, and that's the one piece you have to believe,
because you have to start thinking about what is,
what am I taking as a premise, right?
He can post, I have the coins at this address.
Here's the address I'm going to move it to.
When I move it, I have proven I have custody.
It's proof of reserve, right?
You can also sign a message with that private key.
You don't have to even move it.
The point being, that is provable, global, institutional collateral.
To anybody in the world, he can prove cheaply
that he has this amount of Bitcoin.
You cannot do that for physical gold bricks.
In a lower-trust world, especially an online world, that's very valuable, because
gold audits, videos of gold audits, can now be faked with AI. But with provable,
global, institutional collateral, institutions can now prove to each other that they have the BTC.
Okay?
And they can do so across borders.
So the transparency of Bitcoin, in the sense that all assets
are on-chain, becomes valuable.
Now, the thing about this is,
with the advent of AI,
chain analysis will be there for everybody.
Right?
Everybody can do blockchain analytics.
This is just like changing the balance of power.
It used to be that only Chainalysis
could really do that at the scale that it can.
Now it's becoming much easier to do.
And so a lot of Bitcoin use
will be de-anonymized over time.
And so if you're running a transparent blockchain,
it becomes an institutional blockchain
because only an institution can survive that degree of transparency.
Like individuals can't survive being tracked for everything,
but institutions can. It's like a public company.
It's supposed to be tracked, right?
You know, it's like, sort of, it's robust enough that it's meant to be tracked
in a certain way.
It's designed to be tracked, right?
An individual person is not meant to be public, but a corporation can be.
Right.
It's funny, now that you put it that way:
there's a private individual and a private company, and there's a public company,
but I guess you could also say there's a public figure.
but people don't like being public figures,
but there's kind of an equivalent there, right?
A public figure maybe some of their stuff is tracked,
but they don't want everything to be tracked.
A public company, maybe all of their stuff is tracked.
Fine.
Provable, global, institutional collateral.
There's another thing, which is
that way of thinking about what Bitcoin is
solves some of the major issues.
Quantum, right? Nic Carter has put out these things on it.
Let's say Nic Carter's right,
and I think he might be right,
that quantum is an underappreciated threat,
and that Bitcoin Core developers aren't taking it seriously.
And even if it was something that they rolled out tomorrow,
it would still be a multi-month migration process, because of ECDSA, like, the addresses,
everybody has to manually send their assets from one address to a new address.
Okay.
So you can only do, whatever, 100,000 addresses whose assets can be moved in a given day.
However, if you look at the Bitcoin rich list, Bitcoin is so top-heavy, right,
that it's got these institutional addresses, and you have to do the math,
but probably a few million addresses all moving their funds
would move like 99% of the Bitcoin in a few days.
And so Bitcoin as digital gold
actually is quantum resistant.
It's Bitcoin as digital cash that isn't, right?
Meaning a million institutions
all moving their assets can be done in a few days,
but a billion people all moving like five bucks or whatever
can't be done in any reasonable amount of time.
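The argument above is pure throughput arithmetic, and it can be sketched in a few lines. The migration rate and address counts below are illustrative assumptions chosen to match the loose numbers in the conversation, not measured values:

```python
# Back-of-envelope sketch of the post-quantum migration argument:
# moving funds to quantum-safe addresses is rate-limited, so what matters
# is how many addresses hold the value. All numbers are assumptions.

ADDRESSES_MOVABLE_PER_DAY = 1_000_000  # assumed on-chain migration throughput

def migration_days(num_addresses: int) -> float:
    """Days for num_addresses to move funds to new quantum-safe addresses."""
    return num_addresses / ADDRESSES_MOVABLE_PER_DAY

# "Digital gold" case: a few million top-heavy institutional addresses
# holding most of the supply.
institutional = migration_days(3_000_000)    # 3 days

# "Digital cash" case: a billion individual holders.
individual = migration_days(1_000_000_000)   # 1,000 days

print(f"institutions: {institutional:.0f} days")
print(f"individuals:  {individual:.0f} days (~{individual / 365:.0f} years)")
```

Under these assumed numbers the institutional migration finishes in days while the individual one takes years, which is the sense in which Bitcoin-as-gold is migratable and Bitcoin-as-cash is not.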
Okay.
So everybody who can't move then gets quantumed,
and anybody who can doesn't.
but all the assets are concentrated
with the big guys, you see, right?
And this also extends to seizure.
Like, will all the centralized Bitcoin
on Coinbase's servers,
Saylor's servers, et cetera, get seized?
I think it's quite likely.
I think it eventually gets seized
in some exigent circumstance.
And so it becomes something
that I think only an institutionally blessed
thing can hold and send, right?
Provable, global, institutional collateral.
This is a different vision
than what people wanted,
but it's actually still a valuable thing.
What it leaves open
is the individual digital cash case, right?
Because gold is big bricks
that are moved in Brink's trucks
or the equivalent thereof,
infrequently, in large denominations,
between institutions, right?
It's like the high-powered back-end money, right?
It's not really meant for individuals.
Cash is the opposite.
It's meant for individuals more than it's meant for institutions.
So Zcash takes over the role of digital cash.
So that's fungible, private, scalable with Tachyon, which is coming, and quantum safe, okay? Well, it's more quantum safe, right?
So that's why. And it's simple also.
Zcash is probably not going to ever do smart contracts.
It's going to keep it really simple.
Why?
Because, like, you know, if you take Bitcoin, you can innovate in one direction, which is programmability, and that's Ethereum, Solana, and so on.
You innovate in the other direction, that's privacy, and that's Zcash.
To get to private programmability,
you're actually stacking those two together,
and it's actually quite hard.
It opens up all these attack surfaces and so on.
So just scale Zcash first
and then, you know, there's Aztec,
there's Aleo, there's all these other, you know,
private smart contract chains.
I wish them the best, I want them to win,
I have a non-zero-sum view of the world.
They're taking on a more complicated problem.
In theory, they can just do the same thing
as Zcash is doing, which is private transactions.
In practice, if you remember Facebook in the 2000s,
people said, why does Twitter exist?
Facebook has status updates.
Like, one feature of Facebook is all of Twitter.
Why does Twitter exist?
Sometimes that's a good argument, by the way.
That's why, you know, like Steve Jobs told Drew Houston,
Dropbox is just a feature, right?
I mean, Dropbox, it's funny.
It's a great company and so on and so forth.
But like, if iCloud was Dropbox, it'd probably be better.
You know, like both would be better off for it.
iCloud is kind of Dropbox, but Dropbox doesn't have as much distribution as it would as part of a
big operating system kind of bundle.
So sometimes people are half right, half wrong.
Dropbox is a great company,
but it might have been bigger in terms of percentage value
if they had been Apple's cloud services, basically, right?
But okay, point is,
it's hard to say whether it's just a product or a feature,
but my strong intuition is, just like Twitter's simplicity
made it its own thing, right?
Simple, scalable, billion person, digital private cash
has been the dream for 30 years
and we're finally there.
So zotal.com,
install zotal.com.
By the way, I'm not a trader.
I just don't care about trading.
I'm early on platforms and infrastructure.
There's things you have to not care about.
In order to care about things,
you have to not care about things.
So very, very, very few things I talk about.
Also, Zcash has been around for 10 years.
Like, you know, even the toxic waste
setup ceremony, that's gone,
like that got fixed cryptographically.
So it's unusual that it's been around 10 years,
got a security track record,
it's got a decentralized base of holders,
and the cryptography works.
Love it
That's a great place to wrap
A wide-ranging conversation
on what's happening in AI and crypto
As always, Balaji,
that's a conversation to continue until next time.
Yes, and oh, by the way, if you're in Singapore,
Malaysia or anywhere, come visit
NS.com and network school
and we're scaling
and we'll talk about that too next time.
Yeah, love to see all the progress there.
Amazing what you guys are doing.
Excited to be involved in a small way
and yeah, until next time.
Hey, thank you.
Thanks for listening to this episode of the A16Z podcast.
If you like this episode, be sure to like,
comment, subscribe, leave us a rating or review
and share it with your friends and family.
For more episodes, go to YouTube, Apple Podcast, and Spotify.
Follow us on X at A16Z and subscribe to our substack
at A16Z.com.
Thanks again for listening, and I'll see you in the next episode.
As a reminder, the content here is for informational purposes only.
Should not be taken as legal, business, tax, or investment advice,
or be used to evaluate any investment or security,
and is not directed at any investors or potential investors in any A16Z fund.
Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast.
For more details, including a link to our investments,
please see A16Z.com forward slash disclosures.
