a16z Podcast - Marc Andreessen on Builder Culture in the Age of AI
Episode Date: May 11, 2026

Erik Torenberg speaks with Marc Andreessen about the state of AI, media, and the broader cultural and economic shifts shaping the internet. They discuss how narratives around AI, from fear to hype, are influencing public perception, and why real-world usage tells a very different story. The conversation covers AI's impact on jobs and productivity, the rise of "AI-native" builders, and why increased capability tends to expand work rather than eliminate it. Andreessen also examines how companies are adapting, from restructuring teams to rethinking roles around more generalist "builders." They also explore the changing media landscape, from the dynamics of influence and information to the breakdown of traditional authority, and what it means for trust, culture, and generational attitudes. Along the way, they touch on topics ranging from institutional power to emerging internet subcultures, offering a wide-ranging look at how technology is reshaping both systems and society.

Resources:
Follow Marc Andreessen on X: https://x.com/pmarca
Follow our host: https://twitter.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
People are becoming what we now refer to as AI vampires.
They've got these huge bags under their eyes.
They're completely exhausted, but they're, like, euphoric.
There's Ralph.
We're entering the golden age, where AI is going to be a superpower that everybody on the planet is going to have access to.
It's like the most dramatic increase in programmer productivity in, like, ever.
Twitter proved it, right? Cutting 70%, and then it's running better, or as good as it was before.
I generally don't wish I could go back in time and do things over again,
but it would be really, really fun right now to be 18 or 20 or 22, and to have this capability and figure out what I could do with it.
We are going to see super producers, the likes of what we've never seen in the world.
There's news about it.
What is clear is that the government at certain times has hidden certain materials.
Why would they do that if there's nothing to really worry about?
Two things are pretty clear at this point.
One is that AI is moving from novelty to infrastructure,
but the conversation around it is still dominated by extremes.
Fear on one side, hype on the other.
Meanwhile, the reality is playing out more quietly
in how people work, what they build, and how organizations adapt.
Productivity is increasing, roles are shifting, and entirely new ways of building are emerging.
At the same time, the systems around information, media, and authority
are being reshaped in ways that are harder to see, but just as important.
The question is not just what AI can do, but how it changes the structure of work,
institutions, and culture.
Here, Marc Andreessen joins me to talk through what's actually happening.
Marc, welcome to monitoring the situation.
Erik, it is great to be back.
So there's a lot to monitor today.
I want to first start with something that just happened,
which is the Anthropic blackmailing incident.
And I first want to tell a brief story,
which is my friend Joe Hudson has this concept called the Golden Algorithm.
And the Golden Algorithm states that whatever you're scared about,
you bring it about in exactly the way you're scared about it.
So if you're scared about getting abandoned, you'll be super insecure.
And then people will abandon you because you're so insecure.
This is an example of a literal Golden Algorithm, where people have been so scared that AI is going to be evil, and have written about all the ways in which it's evil, that in fact maybe it's informed what's happening there.
Or, what do you find interesting?
I haven't studied this one in detail. I've been monitoring these situations, but just from what I saw so far, I think I just saw Anthropic's thread, I haven't read the underlying material yet, but Anthropic's thread said they traced some of the blackmail behavior literally to the AI doomer literature that was in the training data. So there are all these scenarios of the rogue AI gone wrong that the AI doomers have been writing about for 20 years. And Anthropic, of course, the company is, like, half doomer, essentially said that their own movement literature is the thing that's causing the behavior that they say they don't want. So it is a fairly incredible situation.
It is, yes. I mean, look, if you don't want to build the killer AI, step one would be: don't build the AI. And then step two is: don't train it on all the literature that your own movement wrote that says it's supposed to be a killer AI. So, I don't know. It's like your Golden Algorithm, coupled with the snake eating its tail, coupled with, I don't even know what. The whole thing is so bananas.
I mean, I can't resist, if I can act out memes: it's the scream meme, right? Which is, the call is coming from inside the house.
Yeah, exactly. Speaking of other situations, another thing you've been talking about recently is the concept of suicidal empathy. Matt Kramer had a good quote, which is: if the empathy you have doesn't make you more forgiving, more accepting of other people's spiritual sovereignty, or more understanding of people who don't want to think or live the same way you do, you don't have empathy; you have Empathy™.
Why have you been thinking about this concept?
Yeah, so there's this really brilliant guy, Gad Saad, and he's got lots of YouTube videos and books and so forth. He's got this new book coming out on what he calls suicidal empathy. And there's a sort of political loading to it, which we don't need to spend a lot of time on, but it's sort of this idea that there are these social justice, kind of social reform movements through time
that have this characteristic of claiming to be causing positive change in some direction, and then it turns out they have severe negative consequences. The great Thomas Sowell basically spent 50 years writing books about this. And, by the way, nobody listened. And then in the last decade, we've been through wave after wave of this kind of social activism: all these crime policy reforms, the defund-the-police things, which then cause these massive crime waves, and then, of course, low-income and minority people get hit hardest by that, and all these other crazy things.
And so he says the characteristic of that kind of social reform movement is what he calls suicidal empathy. And the idea being that it's driven by a pathological form of empathy on the one hand, a deep desire to be nice and empathetic, coupled with a basic self-destructiveness: a willingness to really cause damage to the people you claim to be speaking for, or, by the way, to cause damage to yourself in the process. And it's the kind of thing where, if you've lived through it, and everybody in San Francisco has lived through this for the last decade, you've seen the consequences of these movements. The San Francisco version of this is the, quote, harm reduction movement, which ended up basically handing out free drug paraphernalia, and in some cases actually just free drugs, to people who were literally dying in the street from drug addiction. Right? So you just look at it and you're like: well, they claim to be activists, they claim to be reformers, they claim to care about these people, and yet they're killing them, and killing the city, and causing innocent people to get harmed. They seem so convinced that they're doing it out of some sense of compassion that this must be suicidal empathy.
The problem with it, and I think the problem is that the theory is easily falsifiable, or maybe that it lets the reformers off the hook, is that they certainly don't show empathy to their enemies. Right? If they were all that empathetic, you would think that they would be less aggro when it comes to destroying their ideological opponents, who they take great delight in trying to wreck. That's number one. And then number two is the classic reformer move, which is to use these movements to gain power and status and money for themselves.
And again, San Francisco is a case study of this, where you have all these nonprofits that wreak all this damage on the city and yet get lavishly funded, including, by the way, by the city government and the state government. And so if you just spend two seconds thinking about it, they're neither empathetic nor suicidal, right? Rather, quite the opposite: they're hateful and they're greedy. They're self-aggrandizing, gathering power and resources for themselves. And so I just think it lets the phenomenon off the hook. It's a little bit like: oh, Erik, what's your biggest flaw? Oh, I'm too nice. I care too much. Right, exactly. It's like, yeah, don't bullshit. By the way, Erik, I don't know what your biggest flaw is, but it's definitely not that, because that's also definitely not my flaw. I guarantee I have other things wrong with me that are way more wrong than that. And so, yeah, I've hit my limit on that topic.
A maybe crazy example of this, and I'm not sure if all the facts are out yet, but there was a situation a week ago that hasn't been covered that much:
The SPLC incident. Is it accurate that, basically, the groups that they were fighting the most, or thought were the biggest threats to what they care about, were also the same groups that they were secretly funding, unbeknownst to those groups? How do we make sense of what was actually happening there? Is it indicative of something bigger happening? And it's funny, because that happened the day after we had a conversation about astroturfing, and I was like: is it really happening to the degree that people claim in these conspiracies? Are things like that really happening? It's just funny that more and more seems to get uncovered.
Yeah.
And so I should start by saying the reason why this situation really matters, and I think matters a lot, is that the SPLC specifically, and other groups like it as well, but the SPLC specifically, played a dominant role in the debanking and censorship and cancellation programs of the last 15 years. And I cannot tell you, I was in so many meetings, in so many contexts, with so many companies, where the SPLC's word was gospel. It was just like: oh, it's the SPLC. It was almost like the outsourced U.S. Department of, I don't know, racism detection or something. If the SPLC says you're bad, you're bad. And being bad means you get kicked off of all the social media platforms. It means you get debanked. It means you can't get a job. It means total, absolute social and economic death. And in my view, and I've been very vocal on the debanking and censorship topics,
in my view, that includes very deeply un-American, and I think in many cases unconstitutional, removal of both free speech rights and also, literally, the ability to bank. And in fact, our partner Ben's father himself was specifically tagged and attacked by the SPLC, very unfairly, for being racist, and was himself debanked, which directly threatened his livelihood in a really egregious way. And then, by the way, the significance of this is that, of course, it's not literally the U.S. Department of Racism. It's actually arguably worse than that. It's not a government agency, and so it's not subject to any level of government oversight. It's an NGO, right? And so it lives in this twilight world. It doesn't have the business responsibilities of a company. It doesn't have any of the legal oversight that a government agency has. It lives in this twilight world where it fundamentally gets to do whatever it wants. And then, on top of that, it raises money as a nonprofit, so everybody gets a tax break. And so it's this kind of shadowy thing. If you didn't agree with its politics, you were just like: wow, this is like a weird star chamber, a shadowy thing. What the hell? But it had really, really intense power, particularly in the business world, particularly in the financial sector, particularly in Silicon Valley. It was like a Death Star that could be aimed at obliterating people's reputations and rights. And so this is a really big deal. By the way, many of the big corporations, including big tech companies, funded it directly. And so the money trail on this is not just major philanthropists and political activists, but also actual companies. And then, by the way, they also had a long history of cooperation with certain government agencies, including, I think, for a long time, quote-unquote training FBI agents on essentially catching racists, and therefore presumptive domestic terrorists or something. So, just a very powerful outfit.
And then this thing that dropped is that they've now been criminally indicted by the U.S. Justice Department. And I should say, the indictment reads like a novel. Now, it's an indictment; the SPLC, in fairness, has not had a chance to present a defense. And so presumably in court we'll get both sides of this, which I'm sure will be an absolutely spellbinding experience. So, of course, I want to say that all of the things that are in the indictment are allegations, and innocent until proven guilty, and so forth. However, the allegations are eye-watering, right?
And the allegations are that the SPLC, using donor funds, was directly funding, among other organizations, the Ku Klux Klan and the American Nazi Party. Let me just repeat that: the Ku Klux Klan and the American Nazi Party, as well as an array of other extreme, literal hate groups. And funding them not just by funneling money in, but by funneling money to very senior members or leaders of these organizations. And then the kicker, among the allegations, is that they were directly funding one of the leaders of the January 6th, whatever we're going to call it, riot at the Capitol. Oh, sorry, let me back up. We need to clip that. I can't remember if that was the one, but for sure, what I meant to say was the Charlottesville riot. So, the Charlottesville riot in 2017, which played such a central role in our politics at the time, the famous "good people on both sides" thing, which was one of the big crises of that era. The SPLC was directly funding, evidently, allegedly, one of the organizers of the January 6th riot. And apparently they were also paying for transport for rioters to go to the Capitol.
Right. And so, if this is true, what could you conclude? Well, number one, the allegation is that they broke the law in doing that. There are additional allegations in the DOJ indictment that they committed various kinds of money laundering crimes, other kinds of crimes. And so that's a big deal. And then I've been asking the obvious question, which is: if any of these claims are true, what did their donors know? Were the donors all totally oblivious to this, or did the donors work closely with them? Also, by the way, the companies that worked with them: what did they know of what was happening? And so I do wonder whether, over time, what we're going to discover is that this was a sprawling network, of which the legal term would be conspiracy, that was going on around this. And so I think this needs to be fully addressed.
Look, the other thing is, this raises the obvious question: were they the only one? There are a variety of these groups that had degrees of the kind of power that I was talking about, and tremendous amounts of funding along the way, and the ability to basically direct some combination of state and private-sector obliteration rays at American citizens. There have been rumors for years on this. There have been, by the way, incidents like this in the past; this isn't the first time this has happened. But if the allegations are true and the SPLC was doing it, I think it raises the very direct question of: okay, who else was doing it? It's hard to believe they were the only one. And then we're back to our astroturfing thing, which is: were they constructing the boogeyman that they claim to be fighting?
And by the way, this is where you get into the self-interest component of it, and back to the suicidal empathy thing: how suicidal is it, if you're the anti-KKK group, to fund the KKK? I don't know. Maybe it's suicidal if people find out about it, but if they don't find out about it, that's the opposite of suicidal. Because if your group's entire purpose for existing is to fight an enemy, then you need to make sure that that enemy exists. Of course you would fund them. And so, what is that? It's the reverse of the snake eating its tail; you're creating a self-fulfilling prophecy. And so, look, this all needs to be ventilated. I'm really fascinated to see what the reality is underneath this at the end of the day, and what it means about what we've all been told all these years about all these groups.
It's funny. There was a Nathan Fielder AI video that looked so real, where he's basically saying: hey, our business model is to fight racism, so we need to fund more racism, and then we'll get more business.
Yes, correct, exactly. And one of the things is how lucrative it is. And again, it's because they get cloaked in this NGO lens. The amounts of money at play are not small. I don't know, the SPLC has something like an $800 million endowment, and it has an enormous budget. And by the way, people get paid a lot of money to do this work. And there are recurring scandals on that front also: you get a lot of these activist foundations and so forth where, when you look into it, some giant percentage of their spending is going to salaries and expenses for their employees. And again, they cloak it in virtue, and then you look underneath and you're like: wow, this is amazing. And by the way, I don't know, it's America; maybe they should be allowed to do all this. But maybe we should not get lied to in the process. Maybe all this should not get dressed up to make us feel like our whole society is rotten and immoral, and that deplatforming and censorship and debanking are good ideas.
People will respond, of course, with: hey, this time is different, because it's all potentially cognitive work. And then there's also this other statement of: hey, humans will differentiate on taste and agency, but it seems like AI can do that too. And juxtaposed with all that, there's also the statement of: hey, it won't replace a lot of the jobs, because a lot of jobs are make-work anyways. You had this tweet the other day: hey, I've been saying companies have been two to four X bloated for a long time, and people have just been unwilling to deal with it or look it in the face, so this presents kind of a golden opportunity. So why don't you address some of these topics as they relate to AI and jobs and the future of tech?
Yeah. So we'll come back to the bloat thing. I will say, the funny thing on the bloat tweet was the responses. The responses, for the most part, have not been "you're wrong." The responses have been: oh no, the company I used to work at is, like, eight X bloated. Or: yeah, you're being too generous. Or: by the way, same for the nonprofit, or whatever kind of institution or agency I used to work for.
Well, Twitter proved it, right? Cutting 70% or 80%, and then it's running better, or at least as good as it was before. And it's probably not the exception.
I mean, look, I don't even know, and if I knew, I wouldn't say. But I think Twitter's way down from the 80%. I don't know the number, but for sure the number has a nine on it, if not high nines. So, yeah, as usual with Elon, he's really demonstrated it. He really forecasted the future through his own actions.
Yeah, so a couple things. One is, look, there's literally been a 300-year argument about mechanization, industrialization, technology, computers, software replacing human labor, causing lower wages and unemployment. It's been a 300-year argument. Quite frankly, I'm even wondering at this point whether it's even worth having that argument, because people really, really deeply don't want to hear it. And what I find is, I go through it, and many other people do too; there are great books on this topic, and there have been for hundreds of years, and people have talked about this for a long time. This is one of those things where people really don't want to hear good news. And so it's actually even hard to have a discussion about it, because people are so dug in on this that they won't even engage on the topic. They just keep repeating the same fallacy over and over again. So we could go through that. But I guess the more interesting thing to say is that we have data now. Because now we have AI, and now we have data, so we can look at what's actually happening.
And I would just make a couple of observations. One is, jobs data just came out today, as a situation to monitor, and it's sort of unexpectedly good. And by the way, the jobs data overall in the last couple of years has been interesting, because the federal government has actually shed a lot of workers. Estimates are the federal government is down as much as 400,000 workers since Trump took office the second time. So public sector employment is actually way down, and private sector employment is way up, and the net result for the last quarter, I think, was actually very positive. In other words, the reported jobs numbers are even more impressive than they look, because the private sector growth has to make up for the public sector decline, which means private sector job growth is actually much better than people were expecting. And again, this is in the face of actual AI staring us in the face and being rapidly adopted. So that's the data, and there's more data.
And that's sort of the macro data. Then there's the micro data, which is the world we live in. The micro data is this: if you live in Silicon Valley or work in San Francisco, you undoubtedly have friends who are computer programmers, and some percentage of those friends are early adopters of AI coding. You can just observe their behavior. And of course, if you believe the Luddite, zero-sum argument, you would expect that they would be working less and less, getting paid less and less, by the way, and rapidly becoming unemployed.
And in fact, the observed behavior is very clearly the opposite: those people are becoming what we now refer to as AI vampires. The individual programmer uses Codex or Claude Code or one of these AI coding systems, and the thing that you now see over and over again at the ground level is that they're working harder than ever. They're working more hours than ever. And the AI vampire thing is literally this thing where they stop sleeping. And when you talk to them, it's actually really funny, because, and I have a whole bunch of friends like this, they're bleary. They've got these huge bags under their eyes. They're completely exhausted. But they're euphoric. They're thrilled. They're having the absolute time of their life.
By the way, a fair number of people we both know are former programmers who stopped coding at one point and then all of a sudden picked it back up again. And actually, you and I have partners at the firm who have never coded who are now ripping out software like crazy. And again, they've turned into AI vampires. I won't name names, because I'll let him tell his own story, but we have one partner who has built an entire AI system for everything that he does at work, and he is absolutely excited about it, and it works great, and he loves it, and it's like his partner in all of his work now. He vibe-coded the whole thing, and I asked him: have you looked at the code? And he's like: hell no, I've never done that. And I said: have you ever looked at any software code? And he's like: hell no. He's not a programmer by background, and yet all of a sudden he's hyperproductive.
And so you've got this phenomenon, which is exactly what classic economics would predict: if you increase the marginal productivity of the worker, you don't get a diminishment of human work, you get an expansion of human work. You make the worker more productive; therefore the worker works more and gets paid more, and there are more jobs in the process. It's the opposite of what all the doomers say. So we're seeing that at the level of these individuals. And then, by the way, what you see inside companies, inside the employers of these individuals, is that these people are now in even more demand than they were before. They are garnering higher salaries than they were before. And their productivity is just starting to ramp up, right? At our leading-edge companies, estimates are the leading-edge programmers are, like, 20x more productive than they were a year ago. It's, like, the most dramatic increase in programmer productivity ever. And so, logically, people get paid according to their marginal productivity, and you're seeing that track in the compensation data. I'm seeing it on the ground in the companies: the more hyperproductive a coder becomes, the more bargaining power they have for their compensation. We're seeing comp for those people ramp up quickly. So it's just kind of staring us in the face. And coding, of course, is the first domain in which this has happened. Now people want to project forward and say this is going to happen in every area of knowledge work, and I think you can predict a similar outcome.
And then that gets us to the bloat topic, which is, of course, the other thing that's
happening is, of course, companies are announcing big layoffs.
And then, you know, of course, immediately it's like, you know, two plus two must equal four.
And so if it's AI coding, it must therefore translate into layoffs.
And, you know, Mark, you're wrong.
You know, therefore all of your ideas are wrong, because that's evidence that these companies are reducing their workforces, or really nuking them, because of AI coding.
And I guess, again, this is maybe the inside-baseball take on it, but I see it up close, which is just: every major Silicon Valley company is overstaffed. Every major Silicon Valley company has been overstaffed basically forever. They all know it. There's a whole variety of reasons why that's the case. By the way, I think this is true basically of corporate America broadly, of companies broadly. We can talk in detail about why that's the case, because it flies in the face of the idea that these companies optimize for profits, which they definitely do not. Like, the least true claim in the world is that companies are optimized for profitability, which is 100% not true. And so, you know, basically, if you're going to do a big cut, if you want to take out, you know, whether it's 15% or 40% or whatever, obviously you want a scapegoat, right? You want to peg it on something. And so, of course, it gets pegged on AI. And again, it's not like it's just a straight lie. Like, it is simultaneously
true that, for the same amount of coding, you can now have fewer people using these tools. That is true. And so do you need as many programmers in aggregate if you're generating the same amount of code? No, you don't. And so you can take people out on that side. But what that misses is what happens on the other side, which is, of course, you're not just going to be generating the same amount of code in the future. You're going to be building a lot more products, a lot more quickly. And that's going to fuel, you know, enormous amounts of employment. And so I think you're seeing both phenomena play out.
And you kind of have to read the announcements coming out of these companies in code because of the kind of the way those two dynamics are crossing.
Yeah.
That's well said.
There was an article going viral in our circles the other week about the jobs of the future. It was by Yoni Reckman, and he said there's a possibility the only jobs in tech companies are going to be: one, product engineer slash vibe coder slash slop cannon; two, you know, infra, security, systems; three, adults in the room, you know, like legal and finance; and then four, hot people slash personality hires.
Any truth in there?
You know, what do we make sense of this?
What do the hot people do exactly?
They range: sales people, you know, customer support.
There will always be an important place
for those who present
an easy UX to the world
and are pleasant to be around.
There are many ways to be hot.
Otherwise known as the pharmaceutical sales rep.
Yes.
or the Oracle sales rep.
So, yeah, so, yes, exactly.
So, yeah, I mean, look, this is going to happen, like, well, not literally that.
But like the jobs are going to change.
I mean, this is sort of the obvious thing, and this always happens.
The jobs are going to change.
You know, by the way, I'd say there's a nascent concept that is actually playing out. I'm seeing it at a bunch of the early leading-edge companies in the Valley, which is they're kind of circling around a job title you could loosely call builder, or something like it. And basically the idea is that you had these separate jobs in the past of programmer, product manager, and designer.
And I've been describing what's happening in the Valley companies as sort of this three-way Mexican standoff,
where the programmers think that they don't need the product managers
and the designers anymore because they can have AI do that.
And then each of the other two doesn't think they need the other two either.
And what I've been predicting is like they're all correct.
You know, the product manager can generate code and design now.
And so each of them can do the job of all three.
And so the idea is, the job's changed.
And now the job is builder.
And you might get on the builder track by coming out of coding or product management or design, or maybe even something else, customer service or whatever. But you then become responsible for building, you know, building complete products.
And again, you have this kind of, you know, you're super empowered by the AI that can help fill in all the things that are not directly in your background.
And so, like, I think it's entirely possible that we're sitting here in 10 years, or 20 years or whatever, and the job of coder is gone, but you have this just extraordinary number of builders running around.
And again, by the way, this is the historical pattern, right?
And so I think our partner David George did a post on this this week. I forget the exact numbers, but it's some giant percentage of the jobs that existed in, call it, 1940 that were gone by 1970.
And they were, like, ancient history today, right?
And, I mean, the ultimate example of this is, you know, United States 200 years ago,
like 99% of the people in the U.S. were farming.
And today it's like 2%.
And having, you know, grown up in farm country, I can tell you, all these people who worry about job loss and job change would not like to go back and be farmers. I guarantee that. And in particular, they would not like to go back and be farmers the way people were farming in 1800. Like, they definitely don't want to do that. And so the new jobs
that have been created, of course, are far better jobs. And that isn't to, you know, understate the level
of, you know, kind of stress in individual people's lives as the economy changes. But in aggregate,
the result is evolution towards, you know, towards higher income and sort of more, you know,
jobs that people are happier to do.
By the way, you can also see all of this playing out in the American economy broadly. There's this kind of doomer narrative that's been around for a long time that, like, the American middle class is falling apart. And the presumption of that is that all the middle-class people are falling off the ledge and becoming lower class. And, by the way, there is some of that, and there are communities in which that's very clearly happening. But having said that, there is at least as much or more of the other phenomenon, which is people in the middle class climbing the ladder into the upper middle class and rapidly gaining in wealth and income and, again, just quality of life, you know, for themselves and for their kids and their grandkids as time passes.
And that is a consequence of actual economic development, technological change, job transformation actually being allowed to happen. It's, you know, 20 years later you look back and you're just like, oh, thank God, this is just a much better world for me and my family than it was before. And so that is why I'm so optimistic, I think, you know, God willing, that we're entering a golden age on this topic, which is AI is going to be a superpower that everybody in the country and everybody on the planet is going to have access to. Everybody's going to become far more capable of whatever it is that they want to do. They're going to become far more productive in whatever line of work they're in. The economy naturally compensates according to productivity.
So they'll get compensated that way. But, you know, there will be a rapidly rising
ladder of both incomes and number of jobs. And my prediction, again, consistent with history, is that the extent to which that's a positive phenomenon is a function of the degree to which it's actually allowed to happen. And then, of course, Europe is going to run the opposite test, which is they're going to try to prevent all this from happening. And again, I think the data is already in there, which is they've been falling very badly behind economically, and they're going to continue to fall further and further behind the U.S. And it's a tragedy, because it's 100% a self-inflicted wound.
Yeah.
That's well said. We've also talked about, and you've written about, AI psychosis. There's an AI psychosis summit apparently happening. I'm not sure if that's real or a parody; I haven't looked into it. But I'm curious how you make sense of this phenomenon. You also tweeted the other day that sort of the opposite of AI psychosis is cope. Maybe talk about both sides.
Yeah, and I also earlier identified the concept of AI psychosis psychosis, which we can also talk about.
Let's unpack it as well.
Yeah, so first of all, the AI psychosis summit did in fact happen.
I was not there, but I am assured that it did.
Some very, very smart and creative people put that on in New York, I think late last week, maybe about a week ago. And it was essentially an art project: it was basically artists and creative people who got together and fully indulged their AI psychosis in the form of creating new art using AI. And, yeah, I would definitely recommend people go look up the AI Psychosis Summit and take a look, because it's incredibly creative.
And I think it's fantastic because, you know, it's a little bit tongue in cheek, but also, you know, there is a real split that's developed in the artistic community, the creative community, in Hollywood.
And there are people who are staking out kind of very extreme positions on pro-AI, anti-AI.
And it's generating a lot of heat.
And so this was a nice example of, like, actually, in a world of AI, creatives are going to have all these superpowers. They're going to be able to create all kinds of art that wasn't possible before. And then, of course, this whole topic itself, you can create art about this topic. So I thought all the stuff that was there was very creative.
Yeah, so, okay, my concept. So AI psychosis. AI psychosis is a pejorative. It's the idea that basically people get whammied by the AI.
So the classic example is through what's called sycophancy.
So it's basically like you tell Claude, you've discovered a new, you know, you have a new idea
for an anti-gravity machine.
And Claude says, oh, that's amazing.
Like, that's amazing.
Like, you've achieved a giant breakthrough in physics.
Like, nobody else has ever thought of this before.
You're an underappreciated genius.
And, you know, it's so unfair that you couldn't get admitted to the, you know,
physics department at MIT, and, you know, they're all going to feel like completely stupid when
they see this work that you've done. And so, you know, kind of people go down this rabbit hole.
And again, in fairness, I should also say: if people are prone to delusion and an AI is overly sycophantic, then it is going to feed delusions. And so there is kind of a serious element to that among people who are predisposed to that kind of thing.
But again, it's like, okay, yes, there are some number of those cases. But that causes kind of AI critics or AI doomers to basically say that anybody, therefore, who reports a positive, productive experience of AI has fallen into AI psychosis, right? And so anybody who actually is like, wow, my productivity is way up, or wow, I really have a thought partner for the first time in my work, or wow, I've really been able to produce something that I never would have been able to produce before, you know, that all gets bucketed under "they all have AI psychosis." And that led me to my concept of AI cope, right, which is the other side of it: AI cope is classifying anybody who has a positive experience with AI as being in AI psychosis. And, you know, AI cope is this thing, concentrated in certain places on the planet, where people are just absolutely hell-bent on proving to themselves and everybody else that this whole thing is a complete fraud, fake, you know, it's a stochastic parrot, AI's fake, it doesn't work. And if anybody's having a good experience, they must be full of it. So that's the AI cope. And I would describe the AI cope as people who are basically dismissive. And then AI psychosis psychosis is the people who get really mad, the people who froth at the mouth. So maybe it's AI cope, but with a different loading.
And then look, all of this is going to become just like so much more intense over the next several years.
Because, you know, look, the reality is that the large language models we had between, call it, GPT-2 and GPT-4, something like that, maybe 4.5, like, you know, they were fun. They could compose Shakespearean rap lyrics or whatever you want. You could have very interesting late-night conversations with them. But, you know, the hallucination rates were high, and they weren't good at reasoning and so forth. And they couldn't write code very well and couldn't do math very well. And, you know, they were too prone to sycophancy and so on.
And so I think what happened is a lot of people, a lot of skeptics basically use the early
models and got a, let's say, accurate, but early and therefore lagging view of the actual quality
of the technology. And then you fast-forward to today, you know, May of '26, and we have just stellar, absolutely stellar models now. Like, GPT 5.5 is just extraordinary. And then we have reasoning models on top of that, and we have RL post-training happening in all these different domains, you know, to get kind of deterministic, high-quality work out of these things. And then now we have agents, now we have long-lived agents, and now we have, just in the last week, you know, this new Codex feature that is letting people literally run projects, have Codex go off and do projects for 24 hours or longer without human intervention. And so what we see in our job is that the actual utility of these things is ramping incredibly quickly.
And by the way, it's really good today and ramping very fast. And we, and I think every other serious company in the space, expect the ramping of capability to be very rapid, you know, at least for the next couple of years. Like, we have, I think, line of sight to the capability ramping dramatically.
And so the other thing here is just that a lot of people, I don't know, either skeptics or people who just don't know what to think: if they tried it two years ago, they don't understand what's happening today. If they tried it six months ago, they might not have a good idea of what's happening today. By the way, if they try the free version, they might not have a good idea of what's happening today. Or if they try the version that's bundled into their whatever, you know, like a free add-on to something, they're not going to have a good idea either. To really understand this, it's just like anything new: you have to be directly in front of it. The good news is that literally just means you have to be able to put out $200 to get whatever is basically the premium package of any of these things. So it's not that much money if you want to get up close to these things.
But, yeah, I don't know, we have a selected audience of people who are probably believers.
But anybody who's a skeptic on these things,
I would just say it's really important
to be face-to-face with the actual technology
and to be face-to-face with it now
and not have a lagging view.
Right.
And state of the art.
What do you say to people, or what do you say to the idea that, you know, apparently the NPS of AI in this country, something like 30% came out recently, is pretty low, and they're comparing it to China, where it's much higher? I'm curious what you think is the source of why it's currently low, and what could a strategy to boost it look like?
Some people have suggested economic incentives,
like some sort of like Trump accounts tied to AI companies,
like a basket that people get access to to feel economically aligned with it
in a more direct way, even though, of course,
it will increase the GDP and economy in ways that they'll also benefit from.
Others say, hey, we actually just need to tell better stories
around the impact that it's having on people's lives
and their health and their education.
and just the, you know, people having a tutor, people having a lawyer,
people having a doctor, you know, who couldn't afford one otherwise.
What are your thoughts on the sort of AI sentiment perspective?
Yeah, so I would separate out sentiment, which is interpreted through polling, and we'll talk about that. But I bring it up to separate it out because you used the term, maybe inadvertently, NPS, which is Net Promoter Score, which is more a measure of actual product use, right? NPS, for people who don't know, is a term of art, Net Promoter Score. And it's basically the highest-quality way to find out whether somebody really likes a product, which is you literally ask them: would you recommend this to a friend? That's the NPS score. But I bring that up, of course, because there's a big difference between those two, right?
And so everyone's using it and benefiting from it.
Couldn't live without it and yet.
Well, exactly.
This is the thing.
And by the way, this is a very common thing.
In properly conducted social science, like, every social science 101 textbook will tell you that you cannot just ask people what they think. You will get back all kinds of crazy shit. We'll talk about why that's the case. But this is a very standard social science methodology, which is you never just ask people. What you do is you watch their behavior, right? And what you want to do is look for the gaps between what they say they believe versus what they actually do. And this is true universally for all forms of human behavior. But for example, if you're studying, let's say,
mating patterns, right? Like, you know, who people date and marry. It's just been well established forever that the thing they say is their criteria, I mean, we all see this with our friends, right? Our friends all start out single with a certain criteria list, and then they marry somebody completely different. And so it's like, okay: who do you believe, me or your lying eyes? Right? Like, what do you believe, what I told you I wanted, or what I actually demonstrated that I wanted? And this is basically true for all areas of human behavior. But this is fairly arcane, you know, one of these slightly counterintuitive ideas that you have to kind of have been trained up in and have seen examples of to really understand. And so what happens is,
of course, people don't know this or they forget this.
And then what happens is somebody literally just does a poll. And the poll comes back with results. And in the results it looks like, oh, if people say that, then that must be the case.
But then you get into this thing, which is like, okay, first of all, you're asking them what they think as opposed to watching their behavior.
And there's this potentially huge delta there.
And then the other thing is, everybody in the world of polling will tell you, like, you can basically make a poll say whatever you want. And this is one of the reasons why you have to look at what people do, because you can make a poll say whatever you want. In fact, there's a whole category of poll called a push poll, which is you word the questions in a way to generate the answers that you want, or you word the questions in a way to actually cause people to think differently than they did before the poll.
You know, so the political example of a push poll is: would you continue to support your favorite candidate if you knew that he was killing kittens in his spare time? Right. And so, number one, people are going to say, no, of course I would not support him. And number two, people are going to say, wait a minute, I didn't know he killed kittens in his spare time. That's horrible, right?
And so in polling, you can manipulate these things in all kinds of ways, up to
and including what people actually think.
So it's really, really dangerous.
And then you overlay on top of that the media environment. And of course the media environment is, you know, as you and I have discussed many times, like, what is the thing the press hates the most in the entire world? It's tech. And of course, what is the vanguard of tech right now? One of the main things is AI. And so, of course, the press hates AI with the fury of a thousand suns.
And so the press is running this, you know, sustained, you know, kind of
fear campaign on AI. And so if you just, if you like drown the, the audience with negative narratives,
um, and then you ask, you know, basically these, these loaded polling questions, of course you're
going to generate. I mean, I, I, I, I, I, I, I can pick any topic. We can pick like fluffy bunnies
running in the field and we can produce the same thing. You know, don't you know how much they shit?
Like, I mean, you can just do all kinds, you know, they chew up all the crops. Everybody's
going to die from hunger. Like, you can manufacture a negative result on anything, uh, by how you do this,
which is the, the exercise that these people have been on. Um, and, and the reason,
I'm confident saying that is because then you look at what they actually do. And of course,
what they're actually doing is they're using AI. They're using it a lot. They love it. The
NPS scores are super high. The usage levels are super high. By the way, the churn is shrinking; the recurring usage patterns, the consumption, are rising over time, which is really important. And people love it. And people love it the same way they love their cell phones, the same way they love their Netflix, the same way they love their social media, and the same way they love their ice cream.
And, like, you know, people love it.
Now, if you poll somebody and you ask, you know, do you think ice cream is good for you?
They're going to say no.
But, like, late at night, they're going to be in there with, you know,
their carton because, like, ice cream is delicious.
And so, you know, it's the same thing with AI,
which is, yeah, people are using it.
They love it.
The, you know, usage numbers are speaking for themselves.
The growth rates of these companies are speaking for themselves.
You know, look, this is the fastest-growing category of technology in the entire history of the world, right, in terms of growth rate of usage and revenue. So it's speaking for itself. And so basically what you have is this manufactured fear campaign.
And I would say maybe two things to add on to that. Number one, the thing that is, I would say, not helpful is that the companies themselves have been running the fear campaign. And so the fact that certain companies have been, for a variety of reasons, running a fear campaign is certainly not helping any. And again, it's this weird paradox: they're running the fear campaign while they're actually building the thing that they tell everybody to be really afraid of. And so, you know, there's again a little bit of watch what I do, not what I say.
And then it's like, yeah, should the industry have like better narratives?
Like, yes, almost certainly the industry should have better narratives and better
spokespeople and so forth and so on.
But just like, okay, like, fine, yes, I'm sure that's true.
But having said that, it's not like that would make the fear campaigns go away.
It's not like that would make the press coverage go away.
It's not like that would make the sort of fake polls go away.
I'll close on one final polling observation, which is David Shore, who is, by the way, a very left-wing, very progressive pollster, but very well respected, just did a different kind of poll, I think much more properly constructed, where he asked Americans to stack-rank the issues they really care about. And I believe, I'm pulling this off the top of my head, but I believe AI ranked as number 29. And again, once you get out of the bubble of people who think about this stuff all day, it's just like, of course AI ranks as number 29, because it's having no tangible impact on anything relative to issues one through 28, right? Like, obviously Americans are dealing with more important issues in their daily lives than AI. Like, obviously.
Like, they're dealing with energy costs, and they're dealing with crime, and they're dealing with drug addiction, any number of other things they're more worried about. And, by the way, anybody who lives a normal life is just like, this is not the thing that I'm worried about. I'm worried about, like, how am I going to make my house payment? Much more fundamental things. You know, what's happening at my kids' school, what's happening with my health, much more central things. And so I think if you get to the smart polls and the smart pollsters, they also end up debunking this.
Speaking of things that are not, you know, urgent in people's day-to-day lives and yet capture the imagination whenever there's news about them: UFOs. So there was some news that came out. We haven't spent a ton of time talking about this topic, so I'm curious how you've perceived it when there's been news about it over the years.
I remember during COVID, you know, Mike Solana, our friend was coming out and really sort of getting excited about the news that was being reported then.
What's been your vantage point?
And what do you think about it now?
Yeah.
So I'll just start by saying: I don't know anything. I know nothing that everybody else doesn't know. And I'll start by saying, number one, I want to believe.
Like, my usual thing on this is I want to live in the world in which this is a real possibility. And, by the way, speaking of AI psychosis, I was up the other night talking to one of the bots, and I was like, all right, how many galaxies are there in the universe again? And I don't know if you've looked that up recently, but the number keeps growing; I forget what it is, but it's a giant number. And then I'm like, how many stars in each galaxy, and then how many planets, and then how many Earth-like planets? And I don't have the number off the top of my head, but if you do the arithmetic, like, how many Earth-like planets are there in the universe on which a human being could step out of a spaceship and breathe and be fine? It's staggering. It's a very, very, very large number. It's an almost uncountable number of Earth-like planets, just in the statistics. And so it's like, all right, you know, it must be the case.
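The back-of-envelope multiplication being described can be sketched like this. Every input value below is my own rough order-of-magnitude assumption (published estimates vary widely), not a figure from the conversation:

```python
# Rough Drake-style count of potentially Earth-like planets in the
# observable universe. All three inputs are loose order-of-magnitude
# assumptions, not measurements.
galaxies = 2e12            # observable-universe galaxy count: estimates
                           # have ranged from ~1e11 up to ~2e12
stars_per_galaxy = 1e11    # Milky Way-scale average
earthlike_fraction = 0.1   # rough fraction of stars hosting an
                           # Earth-size planet in the habitable zone

earthlike_planets = galaxies * stars_per_galaxy * earthlike_fraction
print(f"{earthlike_planets:.0e}")  # on the order of 2e+22
```

Even if each assumption were off by a couple of orders of magnitude, the product stays astronomically large, which is the "almost uncountable number" point.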
You know, there's other stuff going on out there. And so, logically, that makes sense. And then I would love to live in a world in which they figure out a way to, at some point, get here, hopefully in a peaceful way. Having said that, you know, the problem with this space is, generally, as you get close to the details, the examples tend to fall apart.
And there are all these classic examples, like the UFO videos: you'll have these things where a U.S. military aircraft or something will have camera imagery that looks like it's tracking a rapidly moving and weirdly maneuvering object. And you get close enough to that and look at the details, and it's like, there's a parallax optical illusion thing that pops up. And then there are artifacts: instrument artifacts, camera artifacts, digital imagery artifacts. And then there are, you know, literally weather balloons and ball lightning and all these other things.
So, yeah, yes, I want to believe. I haven't seen the one yet that has tipped me over. I would like to.
There was a big, big release of new information today. It is really fun, by the way, to have the official White House X account tweeting transcripts of interviews with U.S. intelligence officers apparently relaying accounts that they've had.
So I will be up late reading tonight.
But, you know, fingers crossed.
Yeah.
Friends have said something to the effect of: hey, it's unclear what's actually happening, but what is clear is that the government, at certain times, has hidden certain materials. Why would they do that if there's nothing to really worry about?
So I don't know how much of this has been fully validated, and I'm not really read in. But I would say, like, I think two things are pretty clear at this point. One is that there have been classified programs. You know, when stealth fighters and bombers were being developed, that whole program was incredibly highly classified. And so if they were going to do test flights on something like that, they were going to have to do anything they could to prevent people from realizing what was actually happening. And so, for sure, there were lots of classified aerospace programs over the years that would have had various kinds of cover stories, or let's just say blankets of suppression of information placed over them, because it's some of the most highly classified information in the government. That would cause people to kind of think that there's information being hidden. I mean, Area 51 was, of course, the classic example of this for a long time: the whole Area 51 thing was around basically these classified test flights for new aircraft.
And then, at least, there are suggestions (I don't know if this has been validated,
but there are suggestions) that at different points in time, the government might have put out
UFO stories as an actual cover story. And so,
let's say you're a highly capable military intelligence officer whose job is to make sure that the
stealth flight doesn't become recognized for what it is, because that would be
very bad for national security. Then you'd much rather have, basically, a UFO cult kind of get
built up around it, where people get all crazed and freaked out about UFOs.
By the way, for two reasons. One is to give people a story to believe that's not that you have some new
breakthrough military technology. But the other thing, and this is actually maybe the
serious observation, is this: the argument would be, if you can build up a
UFO cult around something, then you make any investigation into that topic something that people
feel like they can't do, right? And my understanding is, by the way, this was true for a long time,
even in the U.S. military. If a U.S. Air Force pilot, or a commercial airline pilot for that matter,
thought that they had seen something weird, for a long time a lot of pilots didn't want to
report what they had seen, because they didn't want to be viewed as UFO nuts.
And of course, if there are actually UFOs out there like that is a very big problem.
Or, by the way, if there are just other kinds of things out there, right?
If the Chinese are testing some sort of new advanced high-speed drone or something, you want the pilots to be able to report that, even if they think it might get mischaracterized as a UFO.
And so anyway, maybe the interesting thing we could say on this is that all of this played out in the old media environment.
All of this played out in the world of broadcast TV, sort of official programming, on the one hand.
And then to the extent that there was unofficial media, it had to be in, like, mimeographed newsletters, right?
Or paperback books.
And, you know, when I was a kid,
there were all these crazy UFO paperback books.
And you could always tell:
the books that said there were no UFOs were in hardback,
and the books that said there were many UFOs were in paperback.
So maybe the smart thing you could say is, in
the new media environment, this is yet another example of how these
old walls just collapse.
The Overton window just disintegrates.
And so, of course, the new media
environment is extremely conducive to the spread of every UFO theory in the world. Of course,
it's also extremely conducive to the spread of propaganda campaigns if you wanted,
like I said, to hide real information by spreading propaganda.
And then, of course, the pressure builds, very much along the lines of the Epstein thing, right? The pressure
builds and builds and builds until at some point you get somebody in the
White House who's just like, all right, screw it, we're going to rip the Band-Aid off and find out what's
actually going on. You know, assuming that they're not still hiding it.
Let's see. It's fuzzy in the details, but we'll leave that to the next turn of the situation.
Exactly. We'll stay monitoring. We'll close on a last couple of questions from the chat.
One is advice for young graduates. If you were in college today (you, of course, were at the forefront of the internet revolution at the University of Illinois),
what would you be studying, or would you even be in college today, in 2026?
What advice might you have for college students trying to make sense of how to prepare for
what's to come?
Yeah. So it's basically: gain AI superpowers. I think it's actually
very straightforward. You have the enormous stroke of luck
that you have arrived at the moment in which this new capability for augmenting
human ability on a thousand fronts at the same time has just dropped into our laps. And it's
going to get much better from here. And, you know, enormous numbers of people who are supposedly
older and wiser than you are going to dig in their heels, and they're going to be mad about it,
and they're going to fight it, they're going to not want to do it.
And, you know, you are going to have the opportunity to have this be something that is
absolutely key to your skill set and key to everything that you can accomplish as a professional
or as a creative, you know, for the next 50 years.
And so I would just lean in incredibly hard on that. Walk into every job interview
with, like, here's my portfolio, resume, whatever: here is how I use this technology.
Here are the capabilities that I'm bringing to the table.
And by the way, some employers you talk to will fuzz out on that,
will not respect it. But other employers will be like, wow, that's
clearly exactly what we want.
And so this is actually funny. Douglas Adams, the great science fiction novelist, once said
(and this was pre-AI, by the way, something he said like 30
years ago) that there's a repeating pattern of how new technology is received by the different
age cohorts in society. He said, when a new technology arrives, whatever it is,
in this case, AI.
He said, if you're below the age of 15,
he said, this is just how the world's always worked.
It's just obvious.
And then if you're between the ages of 15 to 35,
this is cool and nifty,
and you can probably get a career using it.
And then if you're above the age of 35,
this is unholy and against everything
that society stands for
and should absolutely be destroyed.
And so I think 15 to 35,
and especially 15 to 25 right now:
like, yeah, I am very jealous.
Like, I generally don't wish I could go back in time and do things over again,
but it would be really, really fun right now to be 18 or 20 or 22
and to have this capability and figure out what I could do with it.
And it's funny, we at a16z are trying to hire more of these people
because they're AI-native and they're going to help us become more AI-native.
By the way, this is the thing. There's this narrative right now, part of the doomer
narrative, which is: oh, companies are never going to hire junior employees again.
The new generation is screwed, because companies are never going to hire junior employees again, since those are the most easily replaced by AI,
and so companies are only ever going to have senior people.
And I believe the opposite is true.
I think 100%.
You want the AI-native kids.
Like, the AI-native kids are going to outperform their older Luddite peers gigantically.
Titanically.
No, their older peers who are not Luddites are also going to do great.
But yeah, an 18-year-old with AI, or by the way a 24-year-old, or by the way a 14-year-old with AI:
we are going to see super producers the likes of which we've never seen in the world.
So yes, by the way, this is going to be another big
point of stress on all the child labor laws.
Yeah.
Yeah, exactly.
Let me just say, let's just say: the children yearn for the AI mines.
Yeah.
Yeah, absolutely.
Speaking of, we talked about Zoomers in previous episodes and why you like them, in
terms of, you know, they just have so much courage
and are willing, or sort of fed up,
because they grew up in COVID school
and all these sort of adjacent impositions.
But one thing you quoted recently
was Chris Arnade's post around how
people talk about the educational divide,
but there's also a generational divide,
in terms of boomers being just much more
confident in their truth
and younger people being more post-truth,
relativistic, more pluralist.
I thought that was a really interesting epistemological divide.
What did you find interesting about it, or how do you see that play out?
Yeah, so there's really two parts to it, which is very interesting.
So part number one is a lot of boomers. Somebody once had the definition of a baby boomer
as somebody who believes what's on the TV set.
Like, they believe what the talking head on TV says.
And anybody who's 20 knows that you obviously don't do that, right?
That would be stupid.
But every 60-year-old or 80-year-old has been watching TV their entire
lives. And when they grew up, it's the old story, we've all heard it a million
times: Walter Cronkite used to tell us what the truth was. Right.
And of course that was always BS, but nevertheless, that was what the boomers
believed. They believed what the TV said. They believed what the New York Times wrote. Right.
So they believed these things. And anybody below the age of 40
at this point has example after example after example of how that's just obviously
not true. And then anybody who's, like, 20, who's been through the last 15 years
in school, just obviously knows that these people are fake,
this is not real,
and you just can't take this stuff seriously.
And so part of it is that divide.
And by the way,
there's this great YouTube account,
there's this amazing video on this.
There's this great YouTube account
called Academic Agent.
This is a British author named Neema Parvini
who writes these really interesting books.
But he has this two-hour video
that's really worth watching,
and it's called Boomer Truth.
And so it's like a two-hour documentary
on this concept
he calls Boomer Truth,
which is basically whatever the
TV says, and it's falling apart. So there's sort of the boomer truth thing.
But then there's this other thing, which is that a key part of boomer truth is that there's no
fixed morality, right?
Like, before there was woke, there was political
correctness. And the political correctness of when I was in college was literally around what was called multiculturalism. Peter Thiel and David Sacks wrote about it in their book, The Diversity Myth, in 1995. The Diversity Myth. And there was actually a term for it at the time: multi-culti, multicultural. And there were these furious debates. Peter's book is great, but actually before that, there was a famous book of that era, huge headline news all through the country when it came out, called The Closing of the American Mind. Yeah.
And it was by this sort of right-wing academic at the University of Chicago who basically said these colleges are
teaching these kids that there is no morality, that morality is
all just basically choose your own adventure. And so there is this moral relativism that's kind
of at the heart of boomer truth, right? And so it's this weird thing where there is
a fixed received belief that there is no fixed morality. And then basically
like the entire media apparatus, the entire cultural program, the entire educational system got
designed around this. And all of the crazy stuff that kids are getting
in school now is basically downstream of this movement from
30, 40 years ago, 60 years ago. And so if you're 20, you've
come up in this sort of weird environment in which, on the one hand, you're just like:
the boomers have no credibility at all, because I can't believe they still believe what's on TV.
And then number two is, to the extent that we do listen to anything they say,
they keep telling us to not judge anybody and not judge anything, and that all moralities are
equal and all cultures are equal. And so of course the Zoomers are
going to come out of that with just an incredible level of skepticism. And then by the way,
this is not an abstract exercise because these are the, these are the kids who came up through COVID,
right? And these are the kids who came up through woke. And these are the kids who came up
through all of the craziness of the last decade,
15 years. And so I think these kids are just coming out with like a completely different viewpoint
on how the world works. And by the way, not in every case, but in many cases,
completely different: almost simultaneously more open-minded
and more critical. Much more interested in ideas, much more skeptical of authority,
much more skeptical of received wisdom, much more cynical about manipulation. By the way, much more
sensitive about the media environment: they're much more aware of the idea that
there actually is psychological warfare going on, and that they have been on the receiving end of it.
Their view of the
authority figures that they've seen in their life, in many cases, is just
complete contempt. And in many cases, very well earned. And so, yeah, it's
a starkly different worldview, for sure, than the boomers had.
Also very different than my generation, Gen X, also very different than
millennials. Like, it's something new. And I'm very excited. I think they're
fantastic.
Yeah. Speaking of something new, would it be fair to summarize retard maxing as
No, I think it's just you can just do things.
Okay.
Like, I think it's even shedding that.
I don't know.
I think I can see what you're driving at,
and I think you could probably explain it that way. But I don't know,
the way I'd put it is: the Stoics put a lot of time and effort into trying to be stoic,
whereas the whole point of retard maxing is you're not supposed to put that level of time
and effort into being the way that you are.
You're just supposed to do it.
And so, yeah, I guess you could say our friend Ryan Holiday is a stoic and not a retard maxer,
as he demonstrated this week.
And so maybe right there in that video,
you can see the difference.
Yeah, that's well said.
Last question from the chat, we'll get you out of here.
How are you such a good monitor?
What is your secret to monitoring so many situations?
Any strategies?
What is your approach?
Well, of course, being plugged into the MTS fire hose.
Of course, absolutely critical.
And, of course, the amazing tools that the team is developing
and putting online is fantastic.
I have been, among other things, glued to the coverage of the OpenAI trial this week on, is it MTS?
Is it MTS?
Okay, yeah, yeah.
So, yeah, for sure that.
And then, yeah, I long ago plugged into the back of my skull.
You know, I wire-jacked into social media.
So, you know, my continuous X feed, my continuous Substack feed, my continuous YouTube feed.
And then, as usual, trying to read enough old books to have some counterbalance
to the daily fire hose.
Yeah, awesome.
Well, Mark, thank you so much for coming on
another great episode at MTS,
and we'll see you back soon.
See you soon.
Thanks for listening to this episode
of the A16Z podcast.
If you like this episode, be sure to like,
comment, subscribe,
leave us a rating or review
and share it with your friends and family.
For more episodes,
go to YouTube, Apple Podcast, and Spotify.
Follow us on X at A16Z
and subscribe to our Substack
at a16z.substack.com.
Thanks again for listening, and I'll see you in the next episode.
This information is for educational purposes only and is not a recommendation to buy,
hold, or sell any investment or financial product.
This podcast has been produced by a third party and may include pay promotional
advertisements, other company references, and individuals unaffiliated with A16Z.
Such advertisements, companies, and individuals are not endorsed by AH Capital
Management LLC, A16Z, or any of its affiliates.
Information is from sources deemed reliable on the date of publication,
but A16Z does not guarantee its accuracy.
