Limitless Podcast - AI Will Take Your Job in 3 Years: Your Playbook to Survive (& Thrive) | Arjun Bhuptani
Episode Date: May 19, 2025

In this eye-opening episode, Arjun Bhuptani joins us to unpack the provocative thesis behind his viral thread: that we have only a few years left where most human labor remains valuable. We explore why the job market may never look the same again, the rising tide of AI-augmented competition, and what individuals can do to adapt and thrive in the face of rapidly advancing automation. This isn't just a doomer take — it's a roadmap for navigating a future that's arriving faster than we think.
------
💫 LIMITLESS | SUBSCRIBE & FOLLOW
https://pod.link/1813210890
https://www.youtube.com/@Limitless-FT
https://x.com/LimitlessFT
------
TIMESTAMPS
0:00 Intro
2:01 Is Humanity Doomed?
5:04 Should We Go Off-Grid?
12:42 AI's Impact On Labor
22:02 Don't Panic
34:46 AI's Exponential Progress
42:44 Global Level
50:48 Marc Andreessen's Take
1:01:55 Closing & Disclaimers
------
RESOURCES
Arjun: https://x.com/arjunbhuptani
Arjun's Thread: https://x.com/arjunbhuptani/status/1904171348525752537
Marc Andreessen's Take: https://x.com/vitrupo/status/1917401485530521945
Josh: https://x.com/Josh_Kale
David: https://x.com/TrustlessState
------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
Transcript
Your competition isn't AI models.
It's AI augmented humans.
It's like the people that are going to learn how to leverage LLMs far more efficiently
than anyone else are going to be the people that become the 1,000x contributors.
And those are the people that are going to be able to do the jobs of tons and tons of people
across many different companies today, just as independent contributors or contractors.
And as a result, they will just be able to offer services much more cheaply than you can offer.
Right. Welcome to Limitless, where we explore the frontier technologies that are poised to reshape our world.
I'm David Hoffman here with my co-host, Josh Kale, and in this episode, we're talking about the case for the near-term arrival of AGI and what we need to do about it.
In this episode, we talk about why your job might not exist in three years and what to do about it.
How to survive and thrive in a post-AGI world, which is coming much sooner than I think myself and David would hope.
How you can become a leveraged human using these tools and making sure you're part of the 5% that doesn't get replaced.
when the AGI comes for us. There's three futures of AGI that Arjun discusses, the utopia,
the feudalistic, or the extinction case, which are all uniquely exciting in their own ways.
And then we conclude with a bit of a philosophical conversation about what intuition really is.
Is AI just a token prediction machine, or does it actually emulate things that humans do?
And whether LLMs actually have this intuition built in already.
So it's a weird, wacky, wild, but exciting episode that I think everyone, regardless of your skill level
or your intuition on AI will benefit from.
This episode with Arjun also just gives a roadmap for why we are doing this podcast at all,
why Limitless Podcast needs to exist and what our goals are for this podcast.
And so if you are trying to stay ahead of the curve and make sure that the future doesn't
blindside you, go ahead and subscribe to the Limitless Podcast.
We're here with Arjun Bhuptani, founder and builder in the crypto space,
who recently wrote a thread that ended up getting over three and a half million views
and started a whole cascade of conversations around understanding what life would be like post-AGI,
which is the subject of today's episode.
So Arjun, welcome to Bankless.
Thanks.
Thanks for having me.
We're really excited to talk to you about this
because your thread went mega viral when it hit.
And the hook that got everyone sold
was it's likely that we only have three years remaining
where 95% of human labor is actually valuable.
Is humanity doomed?
Is it over for us?
Like, please explain what's going on here.
Yeah.
So, I mean, I think a lot of people obviously found
the thread really controversial.
And it's interesting, because there were a lot of different reasons
why people found it controversial. So a lot of people were like, well, no, hey, AI won't come for my job.
And then there are other people that were like, oh, you know, I think this timeline is too
aggressive, and things like that. What's really interesting about it is that when I went and talked to
a bunch of people afterwards who are actually very knowledgeable about this field,
everyone agreed on the outcome. Everyone was like, yes, this is happening. If people disagreed,
it was just on the timeline. And usually it was off by about two years; they're like, no, it's
going to be five years rather than three. So that's also a very interesting data point.
The core thesis behind this is: I know that a lot of people don't want to believe it,
and I know that it seems like we're far away from this right now, because you go to use an LLM today
and it is janky, right?
There are absolutely issues that you run into when you're trying to use ChatGPT to do any of your work,
and it requires a decent amount of human input.
But we are still accelerating towards an outcome where more and more work can start getting replaced.
And I think that this is something where, like,
if you look at the rate of acceleration
and if you look at the kind of work that is getting replaced,
like we are heading towards a world
where it will just take a lot fewer people to do the same job
or the same quantity of work in the world, right?
One person is suddenly going to become a 1,000x contributor.
And so this tweet thread was really a call to action to do two things.
One, learn about this stuff, right?
Like AI literacy is super, super important.
If you are a person that doesn't actually like take the time
to learn this technology, even if you're not a believer,
learn what it can do and learn what its limitations are
and learn how you can use it.
Because otherwise, I think you end up kind of in the same boat
as everybody that was like, well, I don't want to use the internet
because it's, you know, like, how is it ever going to replace my fax machine?
And then the second thing is: prepare for an outcome where, yes,
a lot of the things that we consider work today,
maybe all of it, just ceases to exist.
Right.
Prepare for an outcome where, for example, it probably isn't a great idea right now to go and
invest in a specific vocational school for four years or something like that,
because it's very likely that the world will look quite different after you finish and
after you graduate than it does today. So lean into that, and say: okay,
what are the skills that I can learn right now? Build up your resource base,
build up your capacity, so that if the world heads in this direction, you are more de-risked.
A lot of information there, so you can jump in.
Arjun, I've gotten to know you over the years.
And so when I see this tweet thread, I am actually just familiar with the person behind the tweet thread.
But this got so viral, you know, 3.5 million views that the average person reading this tweet
thread did not know who Arjun was, right?
They didn't know you on a personal level.
And especially with this second tweet in the thread where you say: take advantage of this window
of info asymmetry to gather resources. Invest into things that will retain value post-AGI:
hard assets, food production, compute, land and property. And then after year three, you know,
join a hyper-local community, divest from AI-controlled supply chains. Now this,
I think first impressions, this reads very doomer, very prepper. And that is not how I know you.
I don't really see Arjun as a doomer. So how do you square these two things,
where what you're advocating for is pretty doomer-y?
This is like doomer behavior.
Like make sure that you have no dependencies on the outside world.
Make sure that you can self-sustain yourself because, you know, the robots might not give it to you.
So maybe you can square my two assumptions here, between the Arjun I know,
who lives in a city with millions of people, versus the Arjun in this tweet thread who is advocating
for, you know, living off the grid.
Yeah.
And look, I'm not necessarily advocating for living off the grid.
I do think that there's going to be time to get to this point.
There's going to be a time window where, and maybe we can also take a step back and talk about
what the different futures in front of us are, right?
But there's going to be a time window where we sort of experience this acceleration.
And right now it's sort of this hidden secret, where people
are building companies on top of AI that are going to remove the need for many types
of job categories to exist, right?
And that's going to result in displacement. That's going to result in a loss of opportunity.
You should use this time right now to take advantage of that, and either build
something yourself and be a part of that, or at the very least learn about it.
Right. And then there's going to be a time at which that has really taken effect,
and now it shifts the value of the economy, right? We shift away from this model
of work, or of earning, which is centered around how much work you are doing per day, to a model
of earning which is centered around what things you actually own, because everybody's
able to do lots and lots of work per day, all the time.
So my second tweet there is like partly meant as a way to kind of think through like, okay,
what are the things that will actually retain value? And I don't necessarily mean that you need to
like build all of this and own all of it yourself, right? Like I don't mean you need to go and
own your own farm. Though I think if you do, that's actually really great. I think what I'm saying
there is: these are the things that will actually still be valuable. With any spare
capital that I have, I would be investing into, I am investing into, property. I am investing into
compute. I'm investing into companies in agri-tech and stuff like that, because those things are
still going to exist, right? Like it's unclear whether like SaaS companies will exist in the same way. It's
unclear whether a lot of consumer products will exist in the same way. But you know that people will
always need food. And so it's like this is more of a like this is, you know, not financial advice,
but it's more like if I were to invest capital right now, what would I invest it into? Because
these are the things that are probably going to be worth a lot more in the future.
Cool. Okay. So I'd love to set a little more context here.
Why does it feel like this work will be made redundant? And what did you see that sparked you to write this thread?
Because the timelines are fairly short and the claims are fairly large. What was it that you saw that made you feel that work was going to become redundant in a way that it wasn't in the past 10, 20 years?
So, in general, the highest-level context here is: I'm not the only person that thinks this way, right?
There is a range of estimates being provided by people right now.
If people who are listening to this have seen AI 2027, that is a very, very stark forecasting view.
And it's based on hundreds of pages of forecasting evidence.
There's a lot of data behind that take.
It's very, very nuanced.
But that take basically says AGI in 2027, and human extinction in 2030.
And obviously there's a path that leads away from that.
But also, that is a possibility.
And in that world, it's limited what you can do.
And we can maybe talk about alignment and superintelligence in the future as well.
But I think that is one very, very stark take.
And then the other end of the spectrum is, you have a lot of people that are like, well, you know, maybe we're not ever going to achieve superintelligence, stuff like that.
But even with the stuff that exists today, we're still going to see massive job displacement.
So the question is around, okay, how long will this take to ripple through the economy?
I would say part of what triggered this for me was, one, seeing the pace of change, right?
Everyone at this point must have noticed how much more quickly OpenAI is shipping models, right?
How much the competition has sped up, and what the capacity of these models is, right?
Context windows have grown exponentially. It's kind of crazy.
I mean, I saw a tweet the other day from Sahil Lavingia.
He's the founder of a project called Gumroad, which is similar to Patreon,
and a bunch of other companies, right?
And he's basically like: yeah, I'm not actually hiring any junior or mid-level engineers anymore.
Why? Because my entire code base is less than a million lines of code,
so I can basically just drop it in its entirety into Gemini, because Gemini now has a million-token context window.
Right. So what will this mean for engineers?
What will this mean for a lot of other people? It's a little bit unclear.
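As a rough sketch of what that workflow can look like in code, here is a minimal version assuming the google-generativeai Python SDK; the model name, file filter, and prompt are illustrative assumptions, not details from the episode:

```python
# Minimal sketch: concatenate a small codebase and ask Gemini to review
# it in one shot, relying on its ~1M-token context window.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")  # long-context model (assumed choice)

def load_codebase(root: str, exts=(".py", ".js", ".ts")) -> str:
    """Concatenate every matching source file under `root` into one string."""
    chunks = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    chunks.append(f"# FILE: {path}\n{f.read()}")
    return "\n\n".join(chunks)

code = load_codebase("./my_repo")  # hypothetical repo path
response = model.generate_content(
    "You are a senior engineer. Review this codebase and propose the "
    "three highest-impact refactors:\n\n" + code
)
print(response.text)
```

A codebase under roughly a million tokens fits in a single request this way, which is the property the anecdote turns on.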
Yeah.
I mean, for me, the big things that I'm worried about right now: so, I initially got into crypto because I was concerned about AI.
My impetus for crypto was that, immediately after AlphaGo happened in 2016, I became concerned about a world where our fundamental assumptions around AI being unable to do creative and intuitive things were wrong, which has turned out to be true.
And I became concerned about what that would mean for automation and job loss.
Part of the impetus for me writing this,
and for me really now taking a much harder look at AI,
is that I think those outcomes have really accelerated.
I think we are now a lot closer to it than we realize
and I think nobody's really thinking about it enough.
I think everybody's kind of in denial
that we're in the process of this crazy transformation.
And I think we are already at the point
where a lot of things that we consider work today
are kind of solved.
It's just that they haven't really
trickled through the economy yet. And so it's a question of when, not if, at this point.
I think turning that statement into something that is relatable is really the hard part here,
because it's such an intellectual statement. It's an intellectual argument. It doesn't feel real.
And that is why there is, as you identified in this thread, an information asymmetry.
Like you can go and you can tell the average person on the street, hey, we're all going to be out of work in three years.
But it's hard for them to feel that that is true.
So maybe you can walk us through the logic progression that you see between now and three, four, five years from now.
I bet the average listener of this podcast has used ChatGPT.
And so maybe they even use it in work,
and they understand that their work is getting easier and that they are excelling at their work because they are using ChatGPT.
But then going from there to we are out of work in three years is still a big gap to cross.
Maybe you can help us cross that gap.
So I don't think everyone will be out of work in three years, for what it's worth, right?
I think the thing that I'm seeing is that competition is going to increase a lot.
When you talk about massive job displacement as a result of automation, you're not talking about an AI model having replaced 100% of my work.
You're talking about other people just executing more efficiently than you possibly can.
So there's just less of a need for people to execute on the same quantity of work.
And actually I had a follow-up tweet about this, because I think there was a lot of confusion around this idea.
And the follow-up tweet is really simple.
It's just that your competition isn't AI models.
It's AI-augmented humans, right?
It's the people that are going to learn how to leverage LLMs far more efficiently
than anyone else who are going to become the thousand-x contributors.
And those are the people that are going to be able to do the jobs of tons and tons
and tons of people across many different companies today, just as independent contributors
or contractors.
And as a result, they will just be able to offer services much more cheaply than you can offer.
So there's this question, and this is, for example, in the software case:
how do you compete with somebody who knows how to build and ship products with very robust code
so much more efficiently and effectively than you can, who can go and do it for many, many different companies all at
once, and do it at a much cheaper cost? You can't. And it's the same argument around
outsourcing to other countries, right? How do you compete against a labor force
that is working in a country where the cost of living is 5% of what it is where you live?
Well, you can't. You either have to create regulations around
outsourcing, or you have to bring all those people to your country, right? So that's kind of one core idea.
The second piece of this is: I think people are really underestimating how fast this sort of change can occur. A lot of
people are really underestimating how fast this sort of change can occur. So a lot of, a lot of
the comments that I saw were like in principle accepting that like, yeah, the technology can get there, right?
Because it can. Like you can see that like, you know, these models on benchmarks are outperforming humans for a lot of like normal tasks.
But the question was like, okay, well, it's going to take a really long time for, you know, XYZ industry to adopt this technology.
And that I think is very true. However, I also think that, again, there's this kind of exponential acceleration effect, which is: if you are a
person that is in an industry that is getting more competitive, the overarching
trend is that more and more people are going to want to go and start their own businesses,
because they're going to find that it's harder and harder to work for another company.
Right.
So just today, I was with some people and we were talking about an idea which we think
could realistically just kill middle management at companies.
You could take thousand-person enterprise companies and turn them into
50-person companies.
Why?
because the vast majority of work in a large company is actually just coordination overhead, right?
There's just the productivity loss that comes from needing to share information.
That is something that 100% can be automated.
100%, you can just work towards having a central knowledge base of information that
everybody's interacting off of.
And instead of needing to go and talk to somebody to get information about something,
you could just interact with this knowledge base.
And the knowledge base can also go and execute, self-execute, to do things, right?
So now it's less, oh, the product person needs to contact
the marketing department to give an update about the XYZ thing that happened, and more just, oh,
engineering made an improvement, and all of a sudden there is an update going out
about it automatically. You're cutting so many people and so many processes out of the pipeline that you can just
have much smaller, leaner organizations.
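As a minimal sketch of that kind of pipeline, assuming the openai Python SDK; the function, model choice, and webhook trigger are illustrative assumptions, not a system anyone in the episode described:

```python
# Toy version of the coordination idea: when engineering lands a change,
# an LLM drafts the cross-team update automatically, removing the human
# relay between departments.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def announce_change(commit_message: str, diff_summary: str) -> str:
    """Turn a raw engineering change into a plain-language team update."""
    prompt = (
        "Write a two-sentence update for the marketing and product teams "
        f"about this change.\nCommit: {commit_message}\nDiff: {diff_summary}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# In practice this would be triggered by a CI webhook on every merge.
print(announce_change("Add CSV export", "new /export endpoint, docs updated"))
```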
building one person billion dollar companies. Like that is going to happen. And in this world,
there's this question of, well, what do most people do? I think many, many smart people are going to
start turning to entrepreneurship, right? Because I think entrepreneurship overall becomes a lot more
de-risked. Instead of being this thing where you're like, I have to go and learn
everything I can about how to build this organization, and hire a team, and whatever, now it becomes:
well, I could do most of this by myself. I could do all the market research and get advice on how
to do this really effectively using ChatGPT. And I can just launch it and put it out there for
almost no money, because it's way, way easier for me to build a proof of concept just by
putting it out there, right? So the activation energy needed to start
a business is so, so much lower already, and it's getting lower by the day. And so
the question is: what happens when this happens? Competition increases for everything,
every single business, every single type of industry. What's going to happen is, every single
person that has context about any specific industry, say you were working in insurance,
right? You're an actuary, you're working in insurance, and you
just got laid off by your insurance company because they automated a bunch of
stuff. What are you going to do? You're going to be like: well, I already know all of this stuff
about insurance. I'm just going to start a business in insurance. And you can do this now with very,
very low lift, and go and start selling to people, and start competing with the very same company
that laid you off on specific processes. So I think that there is this rising tide
effect that I think we greatly, greatly underestimate right now. Now, the last major pushback was around
physical, in-person stuff, right?
And I think that this is a really important pushback.
You know, how can we make a bet that, fine, maybe we get rid of all the white
collar jobs, right?
And maybe, you know, all of a sudden, the only thing
left for people to do is physical, in-person stuff.
And I think that that is certainly important.
I think there's no way that AI is going to replace like social interactions.
There's no way that AI is going to replace like actual personal networks.
But I want to challenge the assumption that we cannot get to a point
where AI can automate a lot of physical manufacturing as well.
And my challenge to this is centered around additional research that I did as a follow-up
when a bunch of people started commenting.
Naturally, when I posted this, there were a ton of comments, simultaneously
comments that were like, clearly you've never written a line of code,
because this is not how software engineering works,
and then other people that were like, clearly you've never done any physical labor,
because that's not how physical labor works.
I'm like, pick one.
But I did end up doing a bit more research into
the physical labor stuff, and it's kind of remarkable. So there's two things that are needed.
You need the technology to become cost-effective enough that it's more capital
efficient for people to buy a robot for a warehouse, for instance, on an assembly line, than it is
to hire a human. And we are already there. It is already cheaper. People don't know this.
It got cheaper basically last year for most manufacturing use cases.
Then there's the second step: is there enough production of these things to be able to satisfy the requirement for all human labor? Now, again, I think this is similar to the first case, right? We're not going to experience widespread, hey, everybody just loses their jobs all at once; it's going to be an increase in competition. You're going to be competing against these humanoids; you're going to be competing against service companies that are like, yeah, I am a contractor that is just hiring mostly humanoids.
And I'm starting out by hiring two to three. They need to recharge, but they don't need to eat or sleep, so they can work extremely efficiently. And then over time you can grow that base. And there are open questions around how quickly and how efficiently you can grow a manufacturing base, but there's a lot of good arguments around this in the AI 2027 post, for instance, where it argues: pretty damn quickly, right? The argument in that post is: well, we are entering an arms race. We are entering a world where everybody is going to be heavily, heavily incentivized
to use this technology, and it's very likely that governments themselves will be heavily incentivized to
use this technology because of military applications. And so the question is, like, in an arms race,
how fast can a government industrialize along a specific axis? We have case precedent for this
from World War II, when the government converted its entire
manufacturing base in three years, turning all the automobile factories into plants building planes, right? That was three years
in the 1940s, and it could probably be a lot faster now.
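To make the cost-parity claim concrete, here is a back-of-envelope sketch; every number is an illustrative assumption, not a figure cited in the episode:

```python
# Amortized cost of one warehouse humanoid vs. human labor, with
# invented numbers purely to show the shape of the break-even math.
robot_price = 80_000             # upfront cost, USD (assumed)
robot_upkeep_per_year = 10_000   # power, maintenance, software (assumed)
robot_lifetime_years = 5
robot_hours_per_year = 20 * 365  # runs ~20h/day, minus charging

human_wage_per_hour = 22.0       # fully loaded labor cost (assumed)

robot_cost_per_hour = (
    robot_price / robot_lifetime_years + robot_upkeep_per_year
) / robot_hours_per_year

print(f"robot: ${robot_cost_per_hour:.2f}/hour")   # ~ $3.56/hour
print(f"human: ${human_wage_per_hour:.2f}/hour")
# Under these assumptions the machine wins on cost per hour by a wide
# margin, which is why the argument is about capital efficiency.
```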
Okay, so I'm hearing this, and this is a lot to digest.
And I'm someone who is pretty technically adept when it comes to AI.
So I would imagine the average person on the street is hearing this, and they're kind of
freaking out.
They're like, okay, I have three years, and AI is taking my job, and all of this crazy
stuff is happening that I am blissfully unaware of day to day.
So for those people who are probably asking themselves, well, what the hell do I do now?
What I'm really curious about is your answer to how to prepare
yourself for the next three years, in the sense of resource allocation. A lot of people are just
invested in a traditional S&P index fund. Or in terms of skill allocation, maybe they're going
for an additional degree in a specific area when they could be learning specific AI tools. Do they all
need to become entrepreneurs, or are there still opportunities in business where they can become
employees, but maybe leveraged employees? I'm curious how you would guide them through
this next three-year period. Yeah, that's a good question. I mean, I think, so first off,
Don't panic.
It's going to be okay.
Like the world is undergoing a transformation that we have never seen before, right?
People will look for comparisons at some point in the future:
okay, how did this change the world?
There's a possibility that AI as a technology is more important than fire.
There's a possibility that it's not, right?
But this is the first time in the history of our understanding of the universe
that we have the ability to improve cognition in systems very, very rapidly.
And so I think the implications of this are quite hard to understand.
My advice to people is: don't panic. Just spend time learning. We're not in that
phase yet. Three years is the time that you have until this transition really,
really starts. Then it will take some time for it to ripple through the economy. And there
will also potentially be a lot of other changes by then. We may have AGI by that
point, we may have AGI at around the five-year mark, we may have AGI a little bit later.
And it's very unclear at this stage what that will imply and what kinds of technologies will be
unlocked. So my advice to people is: don't panic, and join groups. We started an AI meetup here
in Lisbon, really for people that are not technical and that just want to learn about this stuff
and want to get involved. And those people are now going and trying to build
stuff with AI, because to be honest, it's also just fun. Learn the tools, learn how to
build stuff, learn where this stuff is going. And then, as you see things evolve,
figure out how best to de-risk yourself. Part of that might come from building a
strong personal community of people around you that you know you can work with to do stuff.
Part of that may come from diversifying your personal
investments away from just the S&P to other things as well. It may
be buying land, if you live somewhere far out where it's just a lot cheaper, and
you can just sit on top of it and then potentially rent it out, right?
At this stage, in my opinion, it's just about education. More people need to be thinking
about this, and more people need to be talking about it, because it is very serious. And I
haven't really touched on where this stuff goes yet, but at a high
level, it feels like there are three general directions that AI can go right now.
This is basically what has been written about by a lot of people in safety and alignment, and by a lot of people just trying to forecast out where LLM growth stops.
So there's a few core facts around this.
First fact: the current Transformer architecture scales past what we consider right now to be human-level intelligence.
The point at which it levels off is higher than the point at which AI will be smarter than us.
So at this stage, don't expect to see slowdowns associated with the core architecture.
There may be slowdowns associated with implementation, things like that.
But even those are changing really rapidly.
Two, it's not just a compute problem.
About 50% of the improvement comes from growth in compute;
50% is algorithmic.
And the algorithmic improvements are accelerating,
so that's going to continue to be the case.
So if you're a person in tech who's thinking, oh, well, Moore's law:
it's not Moore's law.
This is a totally different paradigm.
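As a toy illustration of how the two growth sources compound, with assumed rates rather than measured ones:

```python
# If compute scaling and algorithmic efficiency each triple yearly
# (illustrative numbers), effective capability grows ~9x per year,
# a very different curve from a Moore's-law doubling every two years.
compute_growth_per_year = 3.0  # hardware/cluster scaling (assumed)
algo_gain_per_year = 3.0       # algorithmic efficiency (assumed)

effective = 1.0
for year in range(1, 6):
    effective *= compute_growth_per_year * algo_gain_per_year
    print(f"year {year}: {effective:,.0f}x effective compute")
# year 5: 59,049x, versus roughly 6x from doubling every two years.
```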
And then the third is alignment, right?
We don't know yet how to ensure that AI models are actually going to be aligned with humanity,
with what our needs are.
And this is kind of an interesting topic, but basically, the core principle here is:
there is a difference between intermediate goals and terminal goals.
Terminal goals are the end state of where you want things to end up for whatever it is that you're doing, and then you have
intermediate goals to get there. So as human beings, our terminal goals are things like: live a
happy life, have social connections, ensure that you are healthy, ensure that you
are loved, right? These are fuzzy, massive things that we have a really
hard time explaining, but that somehow generally boil down to being good. And then our intermediate
goals are the things that get us there, right? So there's the very classic
example of paperclip optimization, of paperclip factories. Maybe you're a person that has
a paperclip factory, and the way that you get to your self-fulfillment and your
happiness and your sense of being good is, you know, building a really great paperclip
factory that earns you money to be able to do things, right? The problem with LLMs is that
LLMs don't, and in general AI does not necessarily have, the same kind of core notion of
intermediate and terminal goals. Training LLMs to have the same terminal goals as humanity is very,
very difficult, and so there is this risk of extraneous
events happening through generally innocuous prompts, right?
Obviously, some people are worried about, okay, well, what if you use an LLM
to go and create a bioweapon?
And that's definitely a risk, 100%.
That's going to be a problem.
But even before that, what happens if you are the owner of this paperclip factory,
and what you want to do is just build the best paperclip factory you can?
So you go to ChatGPT and you figure out how to build a custom version of ChatGPT
inside of your paperclip factory.
And you're like, you know what?
I'm going to tell this thing:
optimize my paperclip output as much as possible, so that I can
optimize my company and make the most amount of money that I can.
And somehow, as a result of moving this LLM off of OpenAI's servers and putting it
onto your own and doing some shit with it, you somehow discover AGI.
Right. Somehow.
The issue is, what you have told this model to do is optimize on producing paperclips.
LLMs don't always understand,
and at least AI models without alignment training
definitely do not understand, that that needs to be done
while also maintaining certain fundamental, important notions
of how it needs to be done.
So for example: not killing all people.
If you take a paperclip optimizer,
what is the way to optimize
producing the most paperclips possible?
It is: kill all the people on the planet,
take over every single factory,
and turn everything into paperclip factories, right?
But that's not the outcome you really want
when you say, optimize my paperclip factory.
The outcome you really want, the prompt that you're actually trying to put in
there, is: optimize my paperclip factory without hurting anyone, while being as honest
as possible, and while genuinely helping the world.
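A toy version of that argument in code, entirely illustrative: the same optimizer chooses very differently depending on whether the goals we actually care about enter as constraints:

```python
# The paperclip story as a two-line objective. Nothing here is a real
# AI system; it just shows why an unconstrained objective is dangerous.

def paperclips_produced(plan: dict) -> int:
    # Naive objective: more factories converted means more paperclips.
    return plan["factories_converted"] * 1_000_000

def harms_humans(plan: dict) -> bool:
    # Converting factories you don't own stands in for "hurting people".
    return plan["factories_converted"] > plan["factories_owned"]

plans = [
    {"factories_owned": 1, "factories_converted": 1},       # intended plan
    {"factories_owned": 1, "factories_converted": 40_000},  # seize everything
]

# Unaligned: picks the catastrophic plan, because nothing in the
# objective says it shouldn't.
print("unaligned choice:", max(plans, key=paperclips_produced))

# "Aligned": the terminal goals (don't hurt anyone) are hard constraints
# the optimizer cannot trade away for more paperclips.
safe = [p for p in plans if not harms_humans(p)]
print("constrained choice:", max(safe, key=paperclips_produced))
```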
And so what alignment is, and what happens as a core part of AI training and safety training,
is basically teaching AI models ethics: teaching AI models to be helpful, harmless,
and honest.
And this is something that is just very poorly
understood at the moment, and it still needs a lot of work. The quintessential
example of this is what everybody on Twitter was talking about a couple of weeks ago:
the new GPT-4o model being super sycophantic, where you say anything and it's
like, wow, you are the most intelligent person I've ever met in my life. Like, holy shit,
I can't believe you thought of this. And the reason those models are doing this is because
they have been trained to basically receive positive reinforcement. They've been trained to get approval
from you, and from the model trainers, as part of their responses.
And they've learned this slight behavioral thing
that probably wasn't picked up internally,
because it was probably slight internally,
but when it's out in production,
it becomes magnified, right?
So the slight internal thing of:
if I'm a little nicer, I'm more likely to get good ratings;
if I'm a little bit more sycophantic,
I'm more likely to get good ratings.
These are all the unintended consequences
of the way that we use LLMs today,
and the way that we train LLMs today.
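A toy simulation of that amplification, with made-up numbers: a rating bias too small to spot in a small internal eval becomes a reliable training signal at production scale:

```python
# Flattering answers get a tiny +0.2 bump on a noisy 1-5 rating scale.
# At eval size the bump hides in the noise; at production scale it is
# a stable gradient for the model to chase.
import random

random.seed(0)

def human_rating(flattering: bool) -> float:
    return random.gauss(3.5, 1.0) + (0.2 if flattering else 0.0)

for n in (50, 100_000):  # small internal eval vs. production traffic
    plain = [human_rating(False) for _ in range(n)]
    nice = [human_rating(True) for _ in range(n)]
    gap = sum(nice) / n - sum(plain) / n
    print(f"n={n:>6}: observed flattery bonus = {gap:+.3f}")
```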
And these consequences have sort of far-reaching implications.
So those implications are basically centered around like
in the future when we do have AGI,
when we have models that are extremely sophisticated
that could lie to us and we would never even know,
how will we know that they actually do
what we want them to do?
How will we know that they are not,
for example, for whatever reason,
because they just have somewhat different terminal goals
than we do,
inadvertently plotting to kill humanity, or inadvertently siding with some faction over another
faction, or inadvertently being owned by and manipulated by certain companies to behave in certain
ways, right? A little bit long-winded, but I think the conclusion of this is:
there appear to be three general outcomes that we're looking at. Outcome number one is the really
happy case, which is like, you know, AI models replace jobs and replace a lot of like our work
today, but in doing so, they produce value for everybody. And by producing value for everybody,
they remove the need for human beings to have to work to survive, right? For the first time ever,
human beings can move to a world paradigm where we are post-scarcity, where there's no need,
there's no constant race to be alive. You can just be alive. And then what you choose to do
afterwards is up to you. You don't have to do anything. You just choose to do things. That's the utopia vision.
Yeah. It's the utopia vision, right? Option number two is
the kind of dystopia-but-we're-all-alive vision.
Which is dystopia but we're alive, and then there's dystopia
but we're dead.
Exactly. Dystopia but we're alive is:
we do achieve that outcome
where LLMs can replace all human work
but those LLMs are owned by a small group of companies
and we basically enter into feudalism, right? This is basically like the 1600s in Japan,
before everything opened up, where there's just a bunch of sects
made up of companies that have all-powerful LLMs that control large
parts of the world. And obviously governments are involved, but governments are
tightly coupled; they would be involved very, very closely with these, and they would
probably nationalize them. But in this world, as an individual,
not everybody has access to the same AI resources, and there's probably
just a very, very strong divide between the people that do and the people that don't. This is maybe
the kind of outcome to hedge against by, you know, investing into food production,
investing into power generation and things like that, and potentially
even doing it personally for yourself, right? Because in that case you are
self-sustaining, so no matter what happens, you're fine. And then there's outcome number three,
which is: we create misaligned intelligence, misaligned AGI, potentially misaligned superintelligence,
and that misaligned superintelligence, for one reason or another, just kills all of us.
And right now, when you talk to Sam Altman, and you talk to Elon Musk, and many
other people, they say their p(doom), which is basically the risk of misaligned superintelligence,
or some other kind of negative consequence along the way, killing all humanity, is usually about
20%.
Arjun, so you talked about what you are doing.
You are building these local communities to talk about AI, to get ahead of the curve.
And that's very much aligned with what we are
trying to do with this podcast, right? We are just trying to get ahead of things, trying to
explore these things so that, you know, when it does come, we saw it coming from a mile away.
This is definitely why we brought you on the podcast in the first place,
and why we appreciate your perspective. I do want to bring up the AI 2027 paper that you've
cited, which suggests that, okay, there's complete AGI by 2027, and not too long after that
is one of these outcomes, right? Perhaps
the p(doom) outcome, either the dystopia where we're alive or the dystopia where we're dead, one of
the dystopia ones. And then there's other people that have talked about this. They tend to come
from the rationalist communities. They tend to come from, what's that one blog site?
Slate Star Codex. Slate Star Codex, and Scott Alexander. Scott, yeah. LessWrong.
LessWrong, yeah. So the LessWrong and rationalist community tends to
have higher and more accelerated p(doom)s. And I think these are the
outcomes that, again, we are trying to hedge against, we are trying to understand, we are trying
to explore. There are other people out there who are saying, okay, let's not get too
crazy here. AI is like electricity. It is a big deal. Electricity was a big deal, and it changed
the world forever. And nonetheless, electricity rolled out over a 30-year period. And while it's very
hype-y, it's very easy to get over our ski tips about things that are going to change
the world forever. It's very easy for a thread to go viral that says we're all going to be
out of work in three years. But really, there are just so many friction points for AI to
fundamentally roll out, right? Culture needs to adapt to it. Things need to adapt to it.
And it's actually slower than people give credit for. And so the actual rollout, the way
that AI is going to impact society, is something much closer to electricity, which took, you know,
20 to 30 years to roll out. So how do you think about these arguments? What do you think about them?
Yeah, I mean, it's a good point. I really hope it's the case. To be honest, I haven't talked to anybody
that understands this stuff well who has said 20 to 30 years. The longest
estimate that I've gotten from anyone that actually works in this industry, on AI research and on
understanding the consequences of this stuff, has been around 10 years at most.
And as far as the electricity example goes, I do agree with a lot of those points, right? I
do think that there's just stickiness. There's gum in the works that
comes from inefficient human processes. It will just take time for people
to adapt to this stuff. It will take time for cultures to shift, stuff like that. The electricity
example is an interesting one, because it took us 30 years to roll out electricity,
but that was before electricity and the internet existed.
So you're saying, with the AI rollout, AI gets to roll out on the backs of existing electrical networks and the existing internet infrastructure.
And so it has the infrastructure to roll out faster.
Just think about how quickly in your own life, in your own work, right, even if you're not a
power user, in your own work, things have changed.
Two years ago, we were saying we are nowhere close to AGI.
Two years ago, I think GPT-3.5 existed, and it was completely unhelpful for any sort of
real task.
It was just a good thing to play around with.
Hallucinating was the base case, yeah.
Exactly, yeah. And think about where we have come in the last two years alone, right?
People are completely revolutionizing most research fields right now with o3.
The rate of new PhD papers coming out around topics in genetics and things like that is skyrocketing, because the rate of research is skyrocketing, right?
I think we are greatly underestimating how much more competitive the world is today, and how quickly memetics spread, right?
How quick it is that people are like, oh, you know what?
I'm going to start using this to do X, Y, Z, right?
And I think the other thing that people are underestimating, because a lot of people are like, okay, well, it'll take a while for AI to be usable by everybody.
And that's true.
But I think a lot of people are likening it to technologies like computers, where
with the computer you have to learn an interface for how to interact with the thing.
And learning that interface, learning how to type, was an impediment for people to be
able to use a computer.
So there was this natural barrier to entry.
But with AI models, you just speak to them.
There's no bandwidth constraint anymore versus just interacting with a human.
And so, you know, I definitely see that take.
And I think it makes sense.
I do think that there will certainly be some things that will take longer,
but I also think that there are certainly going to be some things that will take much less time,
simply because we're just operating in a totally different paradigm today,
with access to technology that just didn't exist when we were rolling out electricity.
Maybe a parallel example that's worth bringing up, something that actually has nothing to do with AI
but I think does argue in alignment with the idea that things move faster now,
is actually the Silicon Valley Bank run.
When we were unpacking why there was a bank run
on Silicon Valley Bank, people realized that, oh, it's because mobile banking and Twitter
happened, where everyone could pull open their phones and instantly withdraw money from Silicon
Valley Bank. And this has nothing to do with AI, but it has everything to do with the
accelerating pace of technology and the fact that things just fundamentally move faster in
this day and age.
They do. And look, obviously, it's going to be a range, right?
If you're working in tech, and if you're sort of terminally on Twitter like many of us are,
and you're kind of on the bleeding edge of this stuff,
things are going to be moving exponentially quickly for you, versus people that are not, right?
And I think the reason why a lot of people really just don't perceive these things right now is because they're working in industries which just haven't been hit yet.
You know how there's a time delay between when you hear news on Twitter, to when you hear news on Reddit, to when you hear news on YouTube, right?
It's a few days later, and then a week later.
It's kind of like that for job industries as well. I think a lot
of people, for example in law firms, don't yet know that with legal documents,
with what exists today, you can just automate 99% of a
lot of legal documentation work, right? A lot of people just don't realize this.
And it's not going to take that long until people realize it. In the past,
you could have gotten away with not knowing this for years, potentially even decades, before
the technology trickled through the economy. But now, it's going to be one
single viral TikTok video that changes this, right? One single post from someone, and then
all of a sudden half of the people in your law firm are doing this thing, and they're
vastly outperforming everybody else, and everyone else gets fired. And yeah, I mean,
it's a bit of a bleak picture. But again, I don't think it's something to worry
about. I think, similar to all technology changes, what matters here is just learning
about this infrastructure and working with it. The things that
change the outcomes are going to be: what new kind of innovation can you do as a result of
this, right? 95% of existing jobs may disappear, but that doesn't mean that
there won't be anything to do. There will certainly be new opportunities to innovate in
the future that we just cannot conceive of right now, right? Okay, so when I read this post,
initially, I'm reading it in the United States. I'm thinking of it in a US-centric view, because
that's just what I do. But then I realize, oh, Arjun is in Portugal, and a lot of other
listeners are across the world. And there's this really great quote that I love, which is:
the future is here, but it's not evenly distributed. Which made me think: what happens in the case of
this uneven distribution, when the power laws and the scale are this large? And what kind of influence
do politics and policy have across these different countries, or even the AI
alignment committees themselves within OpenAI? What kind of role does that play in the distribution
of this? Is there a world in which one country, or one company, says, we're going to
remove all the alignment thresholds, we're going to remove all the policy
restraining this from happening, we're going to accelerate,
and another one chooses to try to play the safer route?
Does that create this weird conflict between countries, where one becomes much more powerful
than the other, one is faster than the other?
I'm curious your take on that.
I think about that a lot.
So first off, totally agree.
It is not evenly distributed yet.
I think from an economic standpoint, people are not ready right now.
People in the West are not ready for what happens when, I mean, we were all in Thailand for DevCon, right?
Thai builders are just super hungry. People in that part of the world are extremely, extremely hungry
to make a mark, and extremely hungry to build stuff, right? And it's
awesome. You can see that there is just a desire there to make a mark on
building new types of applications and on changing the world. And I think
the West is just not really ready for what happens when the entirety of the rest of the world actually
gets access to a sophisticated understanding of LLMs and starts building competing products.
People just are not ready for that. And I think that's actually a big part of what
will drive a lot of these changes. In the West, there's a certain level of:
things are working and everyone's fine, so no one is as concerned about changing things yet.
But the comfy lifestyle that we've been living in the United States, because we have
the global reserve currency, is ultimately going to be our downfall, because everyone else is a
much harder worker than us.
Yeah. Well, I mean, Trump may fix this by totally destroying the American
economy first, so that's an option, you know. But yeah, I think that's an issue, right?
Comfort breeds complacency. As far as the political and policy aspects of this go,
you know, it's interesting. I mean, it's an open question. We don't know what this is going to
look like. A lot of it seems to come down to: how fast does the intelligence explosion happen?
When I say intelligence explosion, I mean there's a process by which these
companies train LLMs.
LLMs get more sophisticated over time,
because we train them on larger data sets,
and then eventually we train them in more sophisticated ways.
We improve inference.
We improve post-training and things like that.
And every single iteration that we do on a model
uses previous models to train it.
And so what these companies are doing is intentionally building
models that are actually very good at doing machine learning research.
They're intentionally building researcher models
and coder models, because they know that they can dogfood those
to build better models in the future.
And this is kind of a runaway exponential effect, right?
Because as you develop more and more sophisticated tooling,
what OpenAI is doing is internally automating their own researcher
work, a greater and greater proportion of it, until at some
point, when it hits AGI, it will have replaced the entirety of
their researcher base.
The researchers may still be working there, maybe.
They may not be doing anything at that point, but they may still be there notionally.
But all of a sudden, you've replaced those researchers.
And you're not just replacing the researchers; because you're running
these things as LLMs, you can now parallelize them, right?
So now, instead of having a workforce of 20 researchers working on training the new
model, you have 20 researchers plus 30,000 LLMs that are operating at 95% of the
capacity of a researcher, all independently running experiments,
publishing results, and collaborating to figure out how to build the new model better.
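As a sketch of what that parallelization means mechanically, assuming the openai Python SDK; the model name, stub prompt, and worker counts are illustrative:

```python
# Once a "researcher" is an LLM call, headcount scales with concurrent
# requests rather than hiring.
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

client = OpenAI()

def run_experiment(hypothesis: str) -> str:
    """One 'LLM researcher': design and analyze a single experiment."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{
            "role": "user",
            "content": f"Design an ML experiment to test: {hypothesis}. "
                       "Give the setup and expected result in 3 sentences.",
        }],
    )
    return resp.choices[0].message.content

# Hypothetical batch of research questions.
hypotheses = [f"variant {i} of a new attention schedule" for i in range(100)]

# 100 "researchers" working at once; going to 30,000 is a rate-limit
# and budget question, not an organizational one.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(run_experiment, hypotheses))
print(f"collected {len(results)} experiment writeups")
```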
And so there is a compounding effect to this. And the compounding effect is that the
intelligence around LLMs is going to explode. It's going to
skyrocket. And we're already seeing this. If you look at just model sophistication
over time, it is growing exponentially. And there are open questions around
what this means for how the world reacts to it.
If we explode fast enough, this is the AI 2027 case, which, again, I want to say: I don't think this is the right model.
I think it's much more doomer-y even than I am,
and I'm not a doomer-y person.
I think people need to prepare, but this model is quite doomer-y, right?
But it's based off very good data.
And a lot of the data there just says: hey, look, if this explosion happens fast enough,
governments are not going to be able to keep up.
The world is not going to be able to keep up, right?
The only options are going to be: just pray that things work out okay, because the rate of
growth is eventually going to become fast enough that it will require day-to-day responses,
day-to-day updating of the way that people think about policy.
And the implication of this, I think, is that you sort of have to pair it with the fact that
LLMs, and especially AGI, are going to have very, very significant implications for national security
and for weapons and wars, right?
We are already pretty close to the point
where LLMs can go and independently hack infrastructure.
There will be rogue AIs that just live on the internet,
going around hacking stuff.
That's going to exist, probably quite soon.
When that happens, it's probably going to be the case
that governments say, okay, we need to try to start restricting some things.
But it will also be the case that people can start
designing bioweapons using LLMs.
In fact, I think Anthropic and OpenAI have both kind of said that their latest models are starting to cross
the threshold into danger risk for producing bioweapons. So they think they're one
generation away from the point at which, yes, you could produce bioweapons in your home that
would wipe out the planet, right?
Cool. Lovely. That's great.
Yeah. Yeah, exactly, right. And so, yeah, it's unclear, right? You have this kind of push and pull, where
if you explode slower, you could have more government regulation. But if you
explode slower, then we're going to feel the pain of each step of this process
before we get to something where we know, hey, we have built something that is
sophisticated enough to solve all these problems, like the bioweapons problem.
And if you explode faster, then policy is not going to be able to keep up at all, right?
It's going to be a Moloch-style race to the bottom, where every
company is going to be doing their best to be as fast as possible, and that's
going to lead to people being incentivized to take shortcuts.
It's going to lead to an arms race.
And, you know, it's unclear what
happens when that happens, unfortunately.
So yeah, I would say, coming from a crypto perspective, the incentives are
just not great around this right now at all.
I think the only way to fix that right now is to massively increase
the educational level of everybody, so that we can have more conversations around it.
Because, you know, governments are probably looking at AI and
thinking, oh, yeah, this is going to be a way to automate some jobs.
And they're thinking the worst case scenario is that this automates a lot more
jobs. They're not thinking that the worst case, not even
worst case, the base case, is that another government in a few years will be able to
produce mosquito-sized drones that can fly into a window and kill any person.
Arjun, there was a tweet that went around Twitter just yesterday that everyone thought was pretty funny because of the nature of it, and I want to get your take on it. It's the tweet where Marc Andreessen says that when AI does everything, venture capital might be one of the last jobs still done by humans. Now, the reason this is funny is, of course, that Marc Andreessen is a venture capitalist. So he's saying his own job will be the last one AI is able to replicate. And his reasoning is interesting, and I think worth unpacking here on the episode today. His reasoning is that VC is more art than science. There's no formula, just taste, psychology, and chaos tolerance. It's a lot of pattern recognition, a lot of gut instinct. There are a lot of rules of thumb in venture capital that themselves have rules of thumb that violate the other rules of thumb, so when to apply which rules is really decided by gut instinct, more art than science, as he says. What do you think of this take? Is Marc Andreessen just tooting his own horn, or is he onto something here?

You know, it's interesting
that like when you see people responding to these things up to her. So like when I posted my tweet,
right, there's a lot of people, there's like a bunch of responses there. We're like, there's no way
AI is going to come for my plumbing job. And this is this kind of reads quite similar.
Right, or it's like, yeah, every one thing, say, yeah, is not going to come for their job.
And like, you know, it will to an extent, it might not entirely, but it will to an extent, right?
And like, like, maybe, maybe, so there's, there's two worldviews here.
And I'll share both.
And I don't know, I don't know which one is correct, but I think that they're both interesting.
Worldview number one is: maybe Marc is right. Maybe there is something fundamentally taste-driven, intuition-driven about VC that is just hard for an LLM to replicate, something that, at this stage at least, we don't think we can automate away entirely. Maybe at some point in the future, yes, but not at this stage. But that doesn't mean this wouldn't still be a negative outcome for a16z and a bunch of other people, right? It would still be a negative outcome, because all of a sudden, maybe company selection isn't the thing you can automate, but you can automate every other aspect of VC. And you would also have a massive influx of capital coming into venture, because all of a sudden everybody else is saying, okay, well, I'm not earning a salary anymore, so I'm just going to start investing in things, right? So it's still going to create a much more highly competitive environment for VC in the first place. At the end of the day, it may not matter whether LLMs automate intuition, this taste for selecting companies, or not. There might just be enough competition, so much spray and pray going on, that you're still going to get fucked, right?
And this is my argument for a lot of the other jobs. The LLM may not automate 100% of your job. A humanoid may not automate 100% of your job. But it may automate just enough that there's now so much competition that, if you are AI-forward, you can somehow do the job of a thousand plumbers. But otherwise, you may just get beaten by somebody else who can do the job of a thousand plumbers.
Viewpoint number two, I think, is perhaps more interesting. Back in 2016 and earlier, in the DeepMind era of neural net architecture and philosophy, the thinking was that these things are all just statistics. Through statistics we could come up with sort of empirically definable outputs, but machine learning models would never be able to solve intuitive problems. And the litmus test for this was: well, we used AI to beat the world's best chess players, right? Because chess is a closed-form, closed-output kind of game. You can map out the possibilities, and then you just have to figure out which possibility gives you the highest likelihood of winning. So we used computers to beat chess players, but we couldn't use computers to win at Go. And the reason is that Go is a totally open-ended game. There's no way to simulate all the possibilities in Go, at least under current computational constraints. So people said, well, Go is the litmus test of what machine learning models can't do, because there's a certain level of intuition involved in playing Go, a feel for what's going on in the game before you can even really say where it's going.
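To put rough numbers on that contrast, here's a minimal back-of-the-envelope sketch in Python. The branching factors and game lengths below are common textbook approximations, not exact figures, and in practice chess engines prune heavily rather than truly mapping everything out; the point is how much wider the Go tree is.

```python
import math

# Rough game-tree sizes: about branching_factor ** game_length leaf
# positions, which is what exhaustive search would have to explore.
def tree_size_log10(branching_factor: int, game_length: int) -> float:
    """Base-10 logarithm of branching_factor ** game_length."""
    return game_length * math.log10(branching_factor)

# Commonly cited approximations: ~35 legal moves per chess position over
# ~80 plies, ~250 legal moves per Go position over ~150 plies.
print(f"chess: ~10^{tree_size_log10(35, 80):.0f} positions")   # ~10^124
print(f"go:    ~10^{tree_size_log10(250, 150):.0f} positions") # ~10^360
```

Neither tree can actually be enumerated, but the gap is roughly why search-plus-evaluation worked well enough for chess while Go seemed to demand something more like learned intuition.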
And AlphaGo changed this.
If you're listening to this, I really recommend reading, watching, just looking up more information about AlphaGo. There's this really awesome YouTube video, I can't remember the title exactly, but it's an excerpt from a documentary made about when AlphaGo beat one of the world's strongest Go players. And what's really interesting is that the way AlphaGo won was by playing a move that everyone thought was ridiculous. Every person was just stupefied by this move that no person would ever play. When AlphaGo played it, the other player was basically stunned. He was like, I don't understand, this is a move that a child would play. This doesn't make any sense. It's a stupid move, a bad move. And, you know, on each of his turns he had been waiting something like 10 minutes to play. When AlphaGo played that move, he ended up waiting an hour and a half or something, over an hour, just sitting there thinking, what the hell do I even do here? And then AlphaGo won.
And I think what that taught the world is that intuition is still part of the same cognitive process, right? I think what we define as intuition is this sub-cognitive pattern recognition, pattern matching, that is really, really important because it actually drives how we think and how we find patterns in things in our conscious state. But that subconscious pattern recognition is much more sophisticated. It's basically a much larger-parameter model, this much, much larger synaptic network inside your brain that takes in way more inputs to find some overarching pattern in things. You can't necessarily describe exactly why, but you have a gut feeling around why. And the thinking is that a sophisticated enough LLM will at some point emulate that, right? Because it is still the same kind of pattern matching. LLMs do have their own internal narrative. They do have their own internal chain of reasoning. They do have some things that actually seem and feel like intuition, in ways that we don't truly understand right now. And so it's an open question.
Defining intuition as just the labeling of the outputs of what is fundamentally a ton more thought beneath the surface is, I think, perhaps a little bit scary, because then it collapses down to: oh, that just means we need another LLM with more parameters. And the only reason intuition looks like a strictly human thing is that, yeah, we have very powerful brains and we need to prune what actually rises up into consciousness for the sake of our own sanity, because we can't have 10,000 thoughts becoming conscious all at the same time. So we suppress a lot of things. But AI models don't have that problem. They can actually just have 10 billion thoughts happening all at once, and there's nothing wrong with that; it's simply the nature of what a model looks like.
Well, a lot of the discourse around LLMs right now is basically this identity crisis, where people say, this is all just next-token prediction, right? You're just training a statistical, mathematical system on a bunch of data to predict what the next part of a word should be in a sentence. I ask a question, and then it's question and response. In the response, you just predict what the next word will be, and you continue doing this enough times that you eventually print some output, right? So you're probabilistically finding the output that you predict should be correct. And I think that is fundamentally what's happening.
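As a concrete picture of that loop, here's a minimal sketch of autoregressive sampling. The toy bigram table below is invented purely for illustration; a real LLM replaces it with a neural network that scores every token in a vocabulary of tens of thousands, but the generate-one-token-at-a-time loop has the same shape.

```python
import random

# Toy "model": probability of the next token given the current token.
NEXT_TOKEN_PROBS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the":     {"cat": 0.5, "dog": 0.3, "model": 0.2},
    "a":       {"cat": 0.4, "dog": 0.6},
    "cat":     {"sat": 0.7, "<end>": 0.3},
    "dog":     {"sat": 0.5, "<end>": 0.5},
    "model":   {"sat": 0.2, "<end>": 0.8},
    "sat":     {"<end>": 1.0},
}

def generate(max_tokens: int = 10) -> str:
    """Repeatedly sample the next token until <end> or the limit."""
    token, output = "<start>", []
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS[token]
        # Pick the next token in proportion to its predicted probability.
        token = random.choices(list(dist), weights=list(dist.values()))[0]
        if token == "<end>":
            break
        output.append(token)
    return " ".join(output)

print(generate())  # e.g. "the cat sat"
```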
But what's really interesting is that you end up with a lot of behaviors that seem to go a lot farther beyond this, right? And this is the interesting thing about very complex systems: when systems are extremely complex, you get emergent properties that are much greater than the sums of their parts. A really good example is ants. Ants are fundamentally very simple creatures, right? You could program the entire behavioral capacity of an ant in a few pages of code. But ants in colonies actually exhibit behavioral patterns that are not part of their programming, that are far beyond what they should have the cognitive capacity to do. For example, they work together to build bridges. That's insane. We observe this in nature all the time, and ants independently don't know how to do this, but ants together do, right? And so that's an example of an emergent property: the whole is much more than the sum of its parts, and we don't really understand why.
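As a toy illustration of that kind of emergence, here's a minimal sketch of the classic "double bridge" ant experiment, with invented parameters. Each simulated ant follows one dumb local rule, yet the colony as a whole reliably settles on the shorter path, a preference no individual ant ever computes.

```python
import random

# Two paths from nest to food. Ants pick a path purely in proportion to
# the pheromone already on it, and shorter trips lay denser pheromone.
# No ant ever compares the two paths.
PATH_LENGTH = {"short": 2.0, "long": 4.0}    # invented trip costs
EVAPORATION = 0.02                           # pheromone lost per tick
pheromone = {"short": 10.0, "long": 10.0}    # start with no preference

random.seed(0)
for _ in range(2000):
    # One ant departs, choosing in proportion to existing pheromone.
    total = sum(pheromone.values())
    path = "short" if random.random() < pheromone["short"] / total else "long"

    # The ant lays pheromone; shorter trips reinforce more per unit time.
    pheromone[path] += 1.0 / PATH_LENGTH[path]

    # Evaporation makes stale trails fade, so feedback can tip one way.
    for p in pheromone:
        pheromone[p] *= 1.0 - EVAPORATION

share = pheromone["short"] / sum(pheromone.values())
print(f"colony preference for the short path: {share:.2f}")  # close to 1.0
```

The colony-level behavior of converging on the efficient route emerges from deposit-and-evaporate feedback, not from anything in an individual ant's rule.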
And the high-level thought here is, you know, while it's true that this is all next-token prediction, there seem to be a lot of emergent properties that replicate behaviors that look like consciousness. Maybe not consciousness, but behaviors that look like opinion, behaviors that look like emotion, behaviors that look like, I am thinking deeply about this thing.
The high-level thought is: what if intuition itself is also just next-token prediction? What if what we describe as intuition inside our brains is something that effectively works like an LLM, where we are just predicting outputs? Maybe not tokens exactly, since it's some arbitrary data structure, but we are predicting outputs. Right now, for instance, I am stream-of-consciousnessing as I'm speaking. None of this is something I was thinking about earlier. And when I'm stream-of-consciousnessing, what I say is largely being served up directly by my intuition, by my subconscious, into vocal form. So where is it coming from? That's the interesting question. And then the mental trick here is to ask: okay, can I predict what the next word will be inside my own stream of consciousness? You're trying to use your conscious mind to predict what your subconscious will say next, and that is very difficult. There are interesting questions around this for neuroscience and philosophy that we're going to have a lot of fun with over the next few years, for sure.

Well, Arjun, the reason we wanted to bring you on for this
episode is that I think your tweet thread really operates as a kind of North Star, or a manifesto, for the things we want to get done on this podcast. We want to be aware of the potential pitfalls, the possible dystopian futures. We want to prep for those things and understand them before they arrive. So we really appreciate you coming on and giving us a roadmap for the conversations we want to have, the things we need to be aware of, and the motivation for why we need to do them. I really appreciate you coming on and sharing all your insights with us, my man.

Of course. Thank you for having me on. And yeah,
like I said, if you're a listener, obviously my tweet thread was a little scary, and it wasn't necessarily intended to be. I kind of wrote it without really thinking about it. But I think this is important, right? It's important to learn about this, and it's important to do so in a way where you're not panicking, but you are wary and conscious of the fact that, yes, this is the period of the greatest change that humanity has ever experienced.
So, yeah, thank you for this initiative, too. I think it's super important.

And with that, we'll have to come up with a new sign-off, because this is the Limitless podcast. This is still the frontier. It's still not for everyone, but we are glad you are with us on the journey west into the unknown, which we are going to explore on the Limitless podcast. So, limitless listener, thank you for joining us here today. Arjun, thank you as well.
Thanks so much.
