Algorithms + Data Structures = Programs - Episode 270: 2026 Predictions - AI, The Future, Books & More!
Episode Date: January 23, 2026
In this episode, Conor and Bryce make their 2026 predictions and chat about the future!
Link to Episode 270 on Website
Discuss this episode, leave a comment, or ask a question (on GitHub)
Socials
ADSP: The Podcast: Twitter
Conor Hoekstra: LinkTree / Bio
Bryce Adelstein Lelbach: Twitter
Show Notes
Date Recorded: 2026-01-13
Date Released: 2026-01-23
VOTE FOR YOUR FAVORITE ADSP EPISODES OF 2025
ADSP Episode 111: C++23 Ranges, 2022 Retro & Star Wars
ADSP Episode 97: C++ vs Carbon vs Circle vs CppFront with Sean Baxter
trueup Tech Layoffs Tracker
trueup Big Tech Employee Counts
trueup Important Dates in Modern Tech History
Artificial Analysis
The Psychology of Awakening by Gay Watson
The Resonance of Emptiness by Gay Watson
Permutation City by Greg Egan
The Peterman Pod
Boris Cherny (Creator of Claude Code) On What Grew His Career And Building at Anthropic (Peterman Pod)
Pantheon
The Metamorphosis of Prime Intellect by Roger Williams
Foundation Series by Isaac Asimov
Robot Series by Isaac Asimov
Intro Song Info
Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic
Creative Commons — Attribution 3.0 Unported — CC BY 3.0
Free Download / Stream: http://bit.ly/l-miss-you
Music promoted by Audio Library https://youtu.be/iYYxnasvfx8
Transcript
Probably my boldest prediction is that people will stop treating these models as unintelligent.
All these different features that we are adding on to LLMs.
First it was RAG, retrieval augmentation, then it was memories.
You know, that's what they're kind of currently working on.
On top of that is like context compression, which you could argue is a form of synthesizing information.
You're just describing the process of, like, designing intelligence.
Just because a plane doesn't fly like a bird doesn't mean it can't fly.
Yeah.
We are building intelligence systems.
We have put intelligence in a bottle and we are adding intelligence features to these models.
We should have had a global holiday for passing the Turing test.
And it's just, yeah, anyways, as my wife tells me, I have, like, radical views on all these topics.
And most people, you know, she says, she tells me that my views are outside the Overton window.
And welcome to ADSP.
the podcast episode 270 recorded on January 13th,
2026.
My name is Connor.
Today with my co-host, Bryce,
we make our 2026 predictions,
talk about the future, AI, books, and more.
2025, it's in the rear-view mirror.
26 is here.
It is January 13th as we record this.
What do we want to say?
I mean, one of my favorite episodes of all time,
I believe is episode 11, I think it is.
And it was the Star Wars-introed episode where I used the Star Wars theme.
I believe, was it episode 111? Yes. C++23 Ranges, 2022 retro and Star Wars.
So it was, I guess, it was three years ago when we were doing the 2022 retro.
And you started talking about how 2023 was going to be the year that the Empire strikes back.
So do you have any more great analogies?
They can be Star Wars themed again.
I mean, Andor came out.
Obviously, everyone agrees.
It's one of the best shows of all time.
What's in store for 2026?
Is it the AI bubble popping?
Which obviously it's not going to pop, folks.
It was just a wobble.
It was just a wobble.
I've been listening to the Acquired podcast a lot.
And in more recent episodes, they've started off by saying,
this is not investment advice.
They put a little disclaimer at the start of their podcast.
Maybe we need that.
I'm going to speculate.
I'm going to make some very specific speculations about some specific people and what they
may do.
Are we going to name these people?
Yeah, of course, I'm going to name these people.
So first of all, first of all, I will speculate that sometime either this year or next
year that you, Conor, are going to get nerd sniped into, like, ML kernel design.
And I think at some point, what I've hinted at on past episodes, that at some point, we're going to be like, all right, we should go train our own language model and figure out how to do that.
At some point the next two years, I think that'll happen.
Because I think at some point, you're going to get nerd sniped by this.
And I say that having somewhat been nerd sniped by this now myself, and there's all sorts of cool algorithms underneath these ML models.
And it's like a place where you can really spend a lot of time and effort finding the
perfect algorithm. And there is great value and return on investment in that. So I think that'll
happen to you. Well, I have the power to make you wrong. But anyways, in this case, I don't,
I don't think so. I don't think so. Time will tell. My second, my second prediction is that I think
we will have Sean Baxter back on this podcast sometime in the next year or two.
All right.
So you started off with one where you gave me the power to, you know, have you be wrong.
And technically you've done the same thing again, but also you have more ability to make that one come true.
But I could just be like, that's it.
We're not having Sean on.
Playing with fire here.
There's reason.
I mean, you're not even exploring the interesting part of that, which is, like,
why would Bryce want to have Sean back on the podcast?
I mean, obviously, Sean is the best person to talk to.
Some of the conversations we had with him were like our best episodes ever.
That one that was like Circle versus C++ versus whatever.
Yes, but is that, but you should ask yourself, why is it that Bryce thinks that we're going to have Sean back on this podcast?
I mean, we work for the same company.
I know why.
I don't think you do because the thing, the thing is very, very recent.
It's very, very recent.
Okay, maybe I don't know.
Maybe I don't know.
Yes.
I think Sean is working on some very exciting things.
And as soon as they are things that he can talk about, I'm going to say, Sean
should come back onto the podcast.
So those are my two predictions.
Bigger, like macro world predictions, I think we'll see a lot
more AI in the workplace the next year, like companies trying to produce the sort of results
with AI that we've seen for coding. They'll try to take that to other white-collar jobs,
and we'll see more like tools and frameworks built out around that. We'll see a lot more
adoption of that. And I think that there will be a lot more talk about edge AI and robots and
automation and AI and our various devices.
I think we'll see a lot of that.
And I think it's a great time to be a staff software engineer,
and it is an awful time to be anybody more junior than that.
I think there's an increase.
Somebody just asked me today about their programming in like Java and some other
languages and they're asking, like, what recommendations would you have for me if I want to get
into lower-level languages, C++, et cetera, if I want to get into more systems software? And what I told
him is, and he asked, oh, should I learn, like, modern best practices and, like, the APIs and, like,
good design philosophy? And I was like, no, don't do any of that. Learn performance analysis
and optimization, how to benchmark things, and how to reason about hardware and performance,
debugging, et cetera, I think those are skills that are still very valuable in an era where we can have
AI automate a lot of the laborious sort of authoring of the code. Obviously, being able to design
the system is still an important skill. But I think that there's a lot of people who have those
design skills. I think a far more in demand skill is being able to reason about
performance and optimize code and then also debug code.
So I think those will be increasingly important skills.
Everything lower than staff engineer, though, really?
That's the threshold?
Yeah, I think so.
I think that companies that are hiring are only hiring, you know, the very top, very senior
people or people who are, you know, I imagine there will
be, like, more product-manager-y sort of roles in the future because it's a lot easier for people,
it's a lot easier for, like, a person to be, like, a one-stop shop for a whole, like, product or
feature because they can be the person who like meets with the customers figures out what the
product should be goes and builds the product using AI and and then you know brings it to market
does the outreach, does the marketing, et cetera.
I think we'll see a lot more sort of integration
of multiple different parts of software development
and software publication into a couple,
into fewer people.
I think people's roles will get more general
because there will be less of a demand for people
whose specialty is just outputting code manually themselves.
So people who have diverse skill sets
are going to benefit.
Or people who have very, very deep skill sets,
but in a very specific way,
like people who are, you know,
leaders in their fields of like mathematics
or, you know, performance optimization
or something like more domain specific.
But just, you know, being a junior software engineer right now,
I think it's a bad time for that.
Do you think that's going to change at any point,
or is this the way it is,
a new paradigm shift? There's a paradigm shift and this is just the world we live in now?
I think that yes, I think it will change because if you believe that we're in the midst
of an industrial revolution, then in the future there will be a lot more things that are
like powered under the hood by software. And so eventually the AI revolution will mean
that there is a lot more demand for software in general.
And so, like, every time there's been increases in efficiency and, like, worker efficiency,
we've seen, like, demand increased, too, because the cost of the thing goes down.
So I think we'll see that happen here, too.
Yeah, the amount of output that one engineer can do will go up, but that does not mean
that we'll have fewer engineers.
It just means that eventually the number of,
the amount of work out there will grow.
But there are like going to be growing pains.
And I think we also have the problem that a lot of people
have gone and pursued degrees in careers in computer science
because it's been a very lucrative field for a long time.
and there may be a glut, there may be an oversupply right now.
It's kind of hard to say.
I mean, hiring is very tight at a lot of tech companies.
And for a period, it seemed like that was because of the, you know, there was, was it
2021 or 2022 where there was just a really bad year for tech? The market contracted a bit.
There was a lot of like uncertainty because of world events.
And like COVID was kind of weird for the hiring market in general.
But after 2023, 2024, 2025, it's hard to argue that like big tech is in a bad place, right?
Like big tech companies in general, not just big tech, but tech companies in general seem to be doing, seem to be having a pretty good time.
So why is there less demand for engineers?
Is it still really just the case that despite the good times, tech companies feel like there's a lot of uncertainty and so they don't want to,
you know, hire a lot of people and expand their, you know, how much they're paying out every two weeks
in salary? Maybe. Is it that they're seeing a lot of, you know, returns from using AI tools internally?
Maybe. It could also just be that a lot of the thing that's driving the good times in tech right now,
a lot of the things that are driving AI revolutions, the AI revolution, is powered by a relatively small number of people.
And that may be the case.
I mean, you see like these crazy deals where supposedly meta is paying, you know,
ludicrous sums of money to hire people.
And, you know, why are tech companies paying so much for people who are experts in ML?
And it seems to be because they're getting very good results out of these relatively small teams
who have this very rare knowledge of how to build these AI models.
and they don't need, you know, 10 or 20,000 engineers to go, you know, turn out a whole pile of code.
They need a small team of people who can go, you know, train and design a model.
And that maybe just doesn't require the same type of software engineer that is on the market right now.
Anyway, so those were a lot of words.
Well, let me, we'll describe this for the listener.
I was interested while you were talking about that.
I mean, there's the
the website
TrueUp.io slash layoffs
which tracks layoffs.
And admittedly December,
and we're only halfway through January,
but it's quite low.
But I had never noticed
that there's all this other
information slash stats.
And the one that was relevant
to what I guess we're talking about right now
is kind of hiring and outlook.
And if you can look at big tech employees,
it shows two graphs actually
and goes back to 2014,
and you see this little black line
that by 2016 has disappeared off the top of the chart,
and that's Amazon,
because they have roughly 1.6 million employees at this point.
And you can notice that, yeah,
since 2022, December 31st,
so I guess the beginning of 23,
it's kind of been flat,
if not down, on average.
Like definitely Amazon peaked back at the end of 2021,
and then their employee count has dropped and not recovered to that point.
And then definitely, I'm sure you've heard the reporting that's been done of Amazon talking
about how they're going to be employing, quote unquote, employing or using robots more
and have less need to.
Yeah, but the interesting thing here, you know, some people have the theory that, oh,
this is happening because AI is replacing the software engineering jobs.
And I think that it's very unlikely that that's true
because go back to that graph for a second.
Actually, go back to what you just had.
What was,
it just had a list of important dates.
Yeah, yeah, well,
there's a ton of stuff.
I mean,
just to skip around to all this different stuff,
I'll link this website in the show notes.
Go back, and what was it?
It had the date for when ChatGPT launched.
3.1 years ago on November 30th.
Yeah, 2022.
Go back to that chart.
The hiring sort of like peaked in 2022.
Because I think was it, 2021 was the bad year for tech, maybe it was 2022.
It was one of those two years.
I think it's hard to believe that in 2022, that across the board, tech company leadership
said, oh, hey, this AI thing is here.
That means that we should stop hiring because, you know, our software engineers are going to be so
much more efficient because of this. Like, 2022, I don't think everybody had drank the Kool-Aid yet,
right? Like, maybe some people did, but like even you and I, you know, we were not on the AI hype
train in 2022. So I think that there's some other explanation going on. I don't think that
Amazon or Microsoft are hiring more people or doing layoffs because they decided in 2022
that AI could make their software engineering
workforce more efficient.
I don't buy that.
I think it's some combination of other factors.
And I think, like, maybe part of it is a lot of these companies have to deploy their
capital now to make these huge investments in hardware to train these AI models.
And so maybe that's where they're investing their money instead of in personnel.
Maybe it's because, you know, they don't need as big teams to build these models.
I don't know.
I mean, yes.
So I switched over to Artificial Analysis, which, funnily enough, only goes back to March of
2023, which is four months after November of 2022, which is when ChatGPT launched.
And I recall really being like, what is the word, AI pilled when Claude 3.5 was out.
So that was like middle to late 2024.
So I've been using Cursor ever since early 2025.
I think I bought my first subscription in January and then realized within a month that
Nvidia was paying for our subscription.
So I only paid for a month.
But I had been using, like, the online, you know, ChatGPT and friends for helping out with coding.
And so, yeah, it wasn't until 2024.
So it took a full year and a
half until I started using these tools.
And I didn't have my mind blown until, I don't know, the Claude 3.7, which was in April, March.
Similar for me.
So I think it's hard to believe that AI, that the increase in worker efficiency in the software
field due to AI is the sole reason for tech hiring falling off.
And I think it's more likely that it's like a combination of multiple different factors.
And do you have any theories for if it's not, if it's not AI?
Kind of what I illustrated: why is it that, like, everybody is doing well in tech right now?
It's pretty much doing well because of AI in some way, shape, or form.
All the big, all the big tech players who are doing well are doing well because they have some part in the AI economy.
And I think that you don't need, you need a different sort of workforce to be an AI tech company
than you needed before.
I think that's what it is.
That look at, look at like Microsoft or Google, like what's important for them right now?
It's building out like their AI, their hyper-skiller cloud and in training the world's best
models. Do you need, is hiring 10,000 junior engineers going to help make Gemini better?
No, probably the thing that's going to help make Gemini better is going in hiring, you know,
the five, you know, key experts who know the thing that you need for this particular part of,
this particular niche area of AI where you've got a problem right now.
That, I don't, I don't think it's, these companies need, like, rare
expertise right now. Rare expertise is driving the tech economy right now, not, you know,
these large workforces of, not like just solely person power, you know, not just code monkeys
spitting out code. And I think that, I think that also, because these companies are deploying
so much money to build out data centers, to build out compute hardware, they,
they, of course, naturally, you can't grow your workforce by 10,000 employees and also, you know,
go build, you know, billions and billions of dollars of data centers.
Like, you do have a limited amount of money.
And so I think that's the, they're choosing to invest in, in the compute resources.
And I think there's other factors.
I do think that there's a lot of uncertainty in the economy.
There's uncertainty about whether there's an AI bubble,
there's uncertainty because of, you know,
the politics of the world right now.
And so I think that's part of it too.
But that last argument,
I find less compelling now than I did in 2020.
Interesting.
What do you think?
What's your theories here?
I mean, on the topic of tech hiring
and corporate capital allocation
towards, you know, infrastructure build or compute spend versus employee spend.
I don't think I have a ton of, I don't really spend a lot of time thinking about that.
I will say to echo what, you know, you just talked about,
I have had the evolving view, or not evolving view, but I had a kind of moment where
I realized that experts really are required.
to use these tools correctly.
And as an expert in a domain,
you don't really realize it
because you're just asking these tools
to do what you want it to do
because you know that's the thing you want.
But when you go to a vertical
that you are not an expert in
and you do care about code quality.
So, you know, vibe coding, if you don't care,
it doesn't matter.
But if you are trying to get like
the idiomatic solution in some language,
whether it's Python or Rust,
and you're not an expert,
and you don't know the surface area of the libraries and the algorithms in the libraries
or the APIs and the methods in the APIs.
And then you post that as a PR somewhere.
You are going to realize very quickly when experts take a look at it that it's not necessarily
doing like the best practice idiomatic thing.
And I had that experience when working with Rust recently.
And so there's a difference.
You know, when you can't,
or sorry, when you don't care about the code quality, vibe code away.
But when you do care, expertise is like not only necessary, I would say essential,
depending on the complexity of the task that you're trying to do.
And I think as like someone that operates typically in a domain of expertise or like my
expertise, you don't really notice how much you're correcting it, because anytime it does something
that you know is not what you wanted, you just tell it.
No, no, no, no, no.
please use this thing from this library.
And anyway, so that kind of echoes what you're saying is that, yeah,
I don't know if it's a staff-level engineer that is all that people need,
but definitely I think expertise,
even though these models are really, really good still,
that a lot of the times it goes down the wrong path and you need to be able to correct that.
And if you don't know that you've gone down a wrong path,
which is probably going to be the case for a lot of junior folks,
that's going to be a problem.
But I don't even think that that's necessarily the problem because I think we're still very early in the adoption of AI, even within the software engineering.
I don't know what percentage of, I don't know whether there's public data on what percentage of employees at big or small tech companies are using AI.
But I bet you, I bet you it's a lot less than you think because, as I always like to say, inertia is king in the software industry, and switching to new
tools and to new ways of working is scary and it takes a long time for people to actually go and do that.
So if you told me that only 25% of employees at big tech companies were using, like 25% of
software engineers at big tech companies were using AI, I wouldn't be surprised.
I think it's probably company dependent.
I think there's probably some companies where it's like 80, 90%, if not all.
And then there's other companies where it's really, really low.
And I don't think that there's any, I don't think that any of the big companies are anywhere near that high because I don't think that like a company of like Google scale or even like at an Nvidia scale.
I don't think that you can get to those numbers that quickly.
I think, yeah, maybe a smaller company.
Yeah, but these really large tech companies, they don't have, like, there, there's a lot of, like, localized practice and culture and it takes a long time for stuff to trickle down throughout the entire
company. Yeah, that's true. I mean, it's way, way easier for a startup with, you know, a handful of
folks to all be leveraging these tools, aka 100% of them, versus some company with tens of
thousands of employees. And just think what, just think what it's going to be like once,
once, you know, give it, give it five years and then we'll start seeing saturation in this
industry, I think. No, it's not, I don't think it's going to take five years, I think,
to get to 80 to 90%.
Like at some point,
let me put it this way.
At some point,
there was like a time before version control in the software industry, right?
And now it's unthinkable to not use version control.
And like almost to some degree,
it's almost unthinkable to not use Git specifically.
And like I think it'll take five years for AI-assisted coding
to reach that level,
where it's unthinkable that people aren't using AI tools
to write code.
How many years?
Five years.
I disagree.
I think it's going to be way sooner.
And I don't,
we can't,
do we know how many engineers?
How many employees does NVIDIA have?
Like 30,000?
36,000 as of 2024 end of year.
So that might be out of date.
And I mean, we have to redact this.
But in the Cursor dashboard,
people that are using Cursor.
And of our...
Yeah, dude, we're the AI company.
I mean, we're the biggest company in the world.
You just said the big companies.
And also, how many employees does Adobe have?
Because Sean said there was like 8,000 people at Adobe.
How many employees at Adobe?
Over 30,000?
Okay, so that's a lower ratio.
But like, of the 36,000 roughly people at Nvidia,
how many do you think are in engineering,
or, better, writing code?
I don't know.
I don't know.
That's a good question.
I don't know what like the percentage is of engineering versus non-engineering.
But it's also complicated for Nvidia because we have both software and hardware engineers.
Yeah.
Anyways, I think it's, when I was giving a talk recently, you know, I asked the audience what percentage of folks, you know, consider AI as, I don't want to go back
to... I think it's about 18,000 are engineers at NVIDIA.
So, that many out of 18,000, or at least have logged into Cursor?
I don't know that. I think based on that, I think that that just means that I strongly suspect
that that would mean that we've auto-enrolled some large part in Cursor, like some large
engineering deal. I don't think it's auto-enrollment. I think you literally have to punch your
email in and download it in order for you to show up in these
metrics. Okay, that is pretty impressive. I would be more interested in seeing like numbers about
who's like actively using it, not just who's logged in one time. But, but that's just that,
because there's probably research people and product management and other things that fills out
some amount of roles. Anyways, my, my theory is that this stuff is, is already, like, we can't go back
because it's so useful. And I also think that probably my boldest prediction
is that people will stop treating these models as unintelligent.
A lot of people say that this is not, you know, it's not,
I don't want to use the word sentient,
but it's not the equivalent of like human intelligence.
And I think that all these different features that we are adding on to LLMs,
first it was RAG, retrieval augmentation,
then it was memories.
You know, that's what they're kind of currently working on.
On top of that is like context compression, which you could argue is a form of synthesizing information.
Like you're just...
Look at you.
You're just...
You think you're not going to get nerd sniped into working on ML things.
We'll see.
My point being here, though, is that in my opinion, you're just describing the process of like designing intelligence.
Like what is human intelligence, you know?
Our ability to retrieve information.
We do that.
search engines, our ability to like synthesize information from different sources and come up with,
you know, a synthesized fact or whatever. Memory, you know, a lot of intelligence, jeopardy,
all the game shows. It's just storing information in a way where you can, like we're literally
going. If you had to describe the different aspects of the intelligence of humans, it is literally
the different things that people are
adding on to like
LLMs. It started off just as this, like,
token generator, and,
you know, feed tokens in, tokens out, whatever,
but now it's like all of that plus
this stuff, you know, context
compression and, and
information retrieval and memory.
And at what point
do we, like, admit that we're just,
like, we're designing intelligent systems?
That, like, oh, okay, it's not, it doesn't
have consciousness and therefore
it's not sentient. But, you know,
just because a plane doesn't fly like a bird doesn't mean it can't fly, you know what I mean? Yeah. It is, we are
building intelligence systems. We have put intelligence in a bottle and we are adding intelligence features
to these models, and it is mind-blowing what they do sometimes, you know. And I think there's just a lot
of, I don't know if it's just the hubris of humans and, like, arrogance, that, like, oh, you know, it's, it's
not, it's not like us, you know, it doesn't think like us. Well, okay, sure, planes don't
fly like birds. Doesn't mean they can't fly. Some would argue they fly even better than birds.
And I think that like we're, we're approaching a point where these systems are going to
think better than humans. And how is that not like it's, it's better than human level intelligence
at a certain point, you know? We're not there yet. I'm just saying we're approaching it,
but I listen to all these podcasts of people being like, ah, you know, we, we passed the Turing test,
but it's, you know, we kind of disprove that. And so it's, what are you talking about? We blew past it.
It's amazing. We should have had a global holiday for passing the Turing test. And it's just, yeah, anyways. But as my wife tells me, I have like radical views on all these topics. And most people, you know, she says, she tells me that my views are outside the Overton window and that people aren't ready to accept our machine god overlords that are around the corner as much as I am.
We got to pause for a second because I have a book recommendation, but I have to go get the book. All right.
But I mean, I guess we can actually talk as I walk.
It's just that, you know, I have to walk into the...
I hope this book.
I hope this book is the book that I just finished reading two and a half books ago.
I guarantee you.
I guarantee you you have not read this book because...
All right.
It's in a completely separate category.
I will give my book recommendation after Bryce comes back with his book.
Are you still here?
Yes.
I went to grab a protein bar, but I am back at my desk.
You missed some gold
that I think you will have to cut because Ramona was making a doctor's appointment.
But no, I was not looking for the tech books.
I was looking for the religion books.
Did you know?
I'm not sure if this has come out in the podcast, but my degree from LSU, which my origin
story has been told on here in the past, but my degree from LSU, I have a major in applied
mathematics, but a minor in comparative religious studies because I had to take one course
on religion as like an elective.
And the professor was really good.
She was really tough, but she was really good.
And I was like, this is fun.
And so I took a bunch of other courses.
And so I find the study of various religions very fascinating.
And we had, that was right round when I was getting to tech.
And I ended up taking this one course and sort of like the Buddhist philosophy of the mind.
And one of these two books,
has a great section talking about, like, Buddhist theory on how the mind works. And I remember
reading it. And there was a particular chapter on, like, memory, and, like, talking about
how do our minds retrieve a memory? Like, how does that work? And I remember reading that and thinking,
huh, this is fascinating. I wonder if someday this will be applicable to people building intelligent, you know,
computers. And the books are, I think it's this one, the psychology of awakening or this one,
the resonance of emptiness. Let me look in the table of contents. Are these textbooks?
Or are they? These are textbooks. Yes. I see. All right. I'll put them on my list. I got a lot of
books on my list right now, though. But if you are able to pinpoint which one it is, although I'm
pretty sure an LLM could answer this.
I'm sure it could.
And one of them in particular had a section on memory.
It could be possibly one of the other books, but I'm almost certain it's one of these two.
We're using Gemini 3, folks, because ChatGPT has been broken for a number of days.
Both books explore the intersection of Buddhism and psychology, but they approach the topic of memory in different ways.
The book that focuses more explicitly on the psychological and philosophical mechanics of memory is The Resonance of Emptiness:
A Buddhist Inspiration for Contemporary Psychotherapy by Gay Watson.
Memory in The Resonance of Emptiness.
In this book, Watson discusses memory primarily in the context of how our sense of self,
in quotations, is constructed.
She explores how memory acts as a bridge that creates a false sense of continuity and identity.
The key chapter, memory is discussed most significantly in chapter four,
the consciousness that views some ideas of the self.
The argument Watson explains that our mental continuity
is the result of habit, memory, and conditioning.
She argues that what we perceive as a solid, in quotation's eye,
is actually a construct built from aggregates of past sensory perceptions
that have, quote, unquote, hardened into memories.
Did that sound right?
Yes. I'll have to read a little bit later and see if I can find the specific.
Yeah, that sounds about right.
To be confirmed next episode when Bryce has read a
little bit, and once he has, once he has nailed it down, I will add it to my books to read, folks.
The book that I was hoping he was going to say was Permutation City by Greg Egan.
Have you heard of that book?
I have not.
I was, this will be our last story because we got to go soon.
I was listening to another podcast and a future podcast.
Got to go?
Where are we going?
What?
I got to go to dinner at my now-wife's mother's tonight.
And anyways, there will be a future episode that I'm going to record in the next week or so called
The Podcast Purge, which is about all the podcasts that I've removed from my list of podcasts,
which it'll be a backup episode for when, whatever, I've run out of episodes and I don't
have time to record one.
And anyways, this podcast did not get purged.
It's called the Peterman Pod.
And it is interviews with high-level ICs at different companies and talking
about how to get promoted and advice, etc.
It's pretty decent more because he talks to folks at different companies,
meta, alphabet, blah, blah, blah.
In one of the more recent interviews,
he talks to Boris Cherny, who is the author of Claude Code.
And in that episode, Link in the Show Notes,
he discusses the interview process when he was interviewing at Anthropic.
And he mentions that at some point,
his favorite book or something comes up, and he says it's a book by Greg Egan. He doesn't
actually name the book. So I don't actually know if this is the book he mentioned, but I asked
Gemini, there was a Peterman Pod where Boris Cherny was being interviewed, and he mentioned a book
by Greg Egan. What was the name of the book? And the book that the LLM came up with was
Permutation City. And the remark that Boris made was that, in general, whenever he brings
this book up, no one's ever heard about it, let alone read it. And he was
talking or being interviewed by a panel of folks.
And not only had all like three of them or all of them heard of the book, they had all read it.
And then instantly they were like, oh, yeah, yeah, that book is good.
What about these books?
And the book is fantastic.
It's not as good as The Three-Body Problem trilogy, which are my favorite books of all time.
But it is very thought-provoking.
If you like the TV show Pantheon, which apparently Pantheon is based on a set of short stories by Ken Liu,
who is the individual that translated The Three-Body Problem,
which was originally authored, I believe, by Cixin Liu,
who is a Chinese author and wrote it in Chinese.
Anyway, so I do plan to go read the Ken Liu short stories
that Pantheon was based on.
I think that was Joao that recommended that show.
Once again, not the world's best show,
but it is very thought-provoking.
If anybody has suggestions, similar to Permutation City,
I'm on a massive AI-uploaded intelligence kick,
and I just finished reading Diaspora by Greg Egan.
Wasn't as good as Permutation City.
I'm in the middle of reading, although this is rated R.
So if you are below a certain age, do not go read this book.
I was shocked at how rated R this book is.
It's called the...
Rated R in what way?
Like the content.
Like it's...
Like violence or sex, Connor?
All of the above and more.
I'm in fact, one of my best friends, you met him at the wedding.
Victor, he recommended this book to me a while ago.
and apparently also recommended Permutation City and I, for whatever reason,
I usually read whatever he tells me right away,
but for some reason this slipped through the cracks,
and I am surprised he did not give me a warning about reading this book.
I mean, I'm an adult, so it's totally fine,
but it's like when you recommend someone Black Mirror,
if they're going to start with Season 1, Episode 1,
which is the one that involves the pig, you have to give people a warning,
and I actually tell people just skip that episode because it's just too disturbing.
Anyways, the name of the book that's rated R is The Metamorphosis of Prime Intellect.
So far, it's not lived up to the hype that my buddy Victor sold it because he said,
if you like Permutation City, you have to read this book.
And so far it's not as good as Permutation City.
Anyways, do you have any other book recommendations while we're wrapping up this
2026 AI in predictions episode?
I've been trying to work my way through the entire works of Asimov.
I've read a lot of his later stuff.
I haven't read some of his like earlier stuff.
What is, is the earlier stuff Foundation or is the later stuff Foundation?
Sorry, by later stuff I mean like in his timeline.
Like I've read, like, Foundation and, like, onwards.
I've read, I had not read a lot of the like earlier things like earlier in his timeline.
One, have you seen the Apple TV Plus show?
And two.
Yes.
And two, are you not bothered?
Because I tried to get through.
I think I got through one and a half of the Foundation books or however many there are.
and I just could not stand the time jumps.
It's like I get attached to these characters and then boom, they're all gone except for one.
And they're like different.
No, I was always kind of fine with that.
It just drives me nuts.
And I think that they did it very, very well in the TV show.
Right now I'm on the Robot series.
I'm on The Robots of Dawn, which is, it's like this series of three books about a detective
from Earth who goes and solves some crimes that involve robots.
But my favorite one was the, is, I think, is it I, Robot?
Yeah, it's I, Robot, which is, it's a collection of short stories by Asimov. It's from 1950. And that's the book that introduces the three laws of robotics. And the short stories all involve various contradictions or paradoxes involving those three laws. And I think it's a very good book. I think I read I, Robot about like a year or two ago. It really got me thinking about,
How are we going to actually design, you know, robots that don't kill us all?
And it's not my job, but it's somebody else's job.
But good luck with that, whoever's job that is.
You know, good luck with that.
That seems like that's a hard one.
It's crazy that you just mentioned that because in The Metamorphosis of Prime Intellect,
I'm only 40% of the way through it.
It literally brings up the Asimov three rules of robotics or laws of robotics or whatever it is.
And literally they're talking about that contradiction that,
at one point, you know, there's this sentient, intelligent, whatever, I don't even know what it is, system algorithm.
And it's called the prime intellect.
And it is told to do something, but it would have harmed or potentially put a human at risk.
And so he says, well, I can't do that because of my, the first rule.
The first rule overrides the second rule.
And then military corporate interests are upset because, you know, they can't harm people.
Yeah.
All right.
Well, I should let you get going to your...
All right.
We're supposed to still, on the docket of topics is parrot.
We still have not actually had a full parrot deep dive.
But we've not.
We've not.
I mean, that was one of the last things we mentioned was back in October.
We got to have a parrot breakdown.
But coming in the future, folks.
Yeah.
Well, we can record.
At some point, we can record when you give me an update on parrot.
And then people can experience.
All right.
Till next time, folks.
That is the 2026 look-ahead episode.
And we look forward to all of your comments and feedback on the last episode.
If you want to leave comments and feedback on this one,
there will be another GitHub discussion as well.
Until then,
be sure to check these show notes,
either in your podcast app or at ADSPthePodcast.com
for links to anything we mentioned in today's episode,
as well as a link to a GitHub discussion
where you can leave thoughts, comments, and questions.
Thanks for listening.
We hope you enjoyed and have a great day.
Low quality,
high quantity.
That is the tagline of our podcast.
It's not the tagline.
Our tagline is chaos with sprinkles of information.
