The Breakdown - An AI Primer for Bitcoiners
Episode Date: April 14, 2023
On today's show, NLW gives a primer on the rapidly evolving artificial intelligence (AI) space, including introducing key concepts such as generative AI, LLMs and more. He also looks at why Bitcoiners have a particularly important vantage point in the emerging AI conversation. - "The Breakdown" is written, produced and narrated by Nathaniel Whittemore aka NLW, with editing by Michele Musso and research by Scott Hill. Jared Schwartz is our executive producer and our theme music is "Countdown" by Neon Beach. Music behind our sponsor today is "Foothill Blvd" by Sam Barsh. Image credit: Midjourney. Join the discussion at discord.gg/VrKRrfKCz8.
Transcript
In the wake of one of the most tumultuous years in crypto history, the conversations happening
at Consensus 2023 have never been more timely and important.
This April, CoinDesk is bringing together all sides of the crypto, blockchain, and
Web3 community to find solutions to crypto's thorniest challenges, and finally deliver on the
technology's transformative potential.
Join developers, investors, founders, brands, policymakers, and more in Austin,
Texas, April 26th to 28th for Consensus 2023.
Listeners of The Breakdown can take 15% off registration with code Breakdown.
Register now at consensus.coindesk.com and join CoinDesk at Consensus 2023.
Welcome back to The Breakdown with me, NLW.
It's a daily podcast on macro, Bitcoin, and the big picture power shifts remaking our world.
The Breakdown is produced and distributed by CoinDesk.
What's going on, guys? It is Thursday, April 13th, and today we are doing an AI
primer, but for Bitcoiners.
Before we dive in, a quick reminder: I announced earlier this week that The Breakdown is expanding
to become The Breakdown Network.
We've launched a new show, Bitcoin Builders, which you can find anywhere you listen to podcasts.
And most importantly, if you are listening to this show on the CoinDesk Podcast Network feed
and you want to continue to listen to the main Breakdown show, you will need to switch over
to the Breakdown-only feed.
After April 23rd, The Breakdown will only be available on that Breakdown-only feed.
I'm so excited to have you guys along for the next phase of this journey.
All right, to today's show, crypto remains fairly quiet this week.
In the land of 2022 cleanup, the FTX estate came out with new info suggesting that they'd
recovered $7.3 billion worth of assets and were actively considering restarting the exchange.
Meanwhile, Ethereum successfully completed its Shapella set of upgrades, which, among other things,
allows for the withdrawal of staked ETH.
I'm likely going to hit on both of those topics on the weekly recap on Saturday.
For today, though, I wanted to do a bit of a 101 episode that connects the dots between a number of
the things I've been thinking about recently. If you saw that announcement for the breakdown network
and Bitcoin Builders, you probably noticed me talking about AI as another big-picture power shift
with fairly dramatic implications. What I want to do today is give a pretty rudimentary background
on the AI space, but with a bit of a lens of Bitcoin and crypto industry folks in terms of why
they might care and what they potentially have to contribute to the conversation. First, let's talk about
what we mean when we say AI. For many years, people involved in the industry were careful to use really
clear and precise language. Over the last year, or honestly even less, though, AI has come to be used
as a blanket term for these big swaths of software that are effectively in the business of
simulating or replicating human intelligence processes. It sort of functions as a linguistic
supercategory to cover a whole lot of things including deep learning, machine learning, natural
language processing, and more. The type of AI that has exploded into consumer consciousness
recently is what people refer to as generative AI. Put simply, generative AI is a type of AI that can
create new content including things like images, text, audio, code, and videos. Within generative
AI, there are two types of tools that have really captured mainstream attention more than
even the others. One of these is text to image AI. These are models that generate images from
natural language descriptions. So, for example, you can prompt one of these generators with
the description such as, selfie stick photo of Shakespeare and the Lord Chamberlain's men in the
Globe Theatre in 1596 in London, smiling faces, happy, crowd in the background, Victorian
clothes. And it produces, well, a selfie-stick photo of Shakespeare and the Lord Chamberlain's
men in the Globe Theater in 1596 in London with smiling faces and a happy crowd in the background
wearing Victorian clothes. And yes, this is a prompt I actually recently used. Now, these
types of models have been around for just a couple of years. Some of the best known are DALL-E
by OpenAI, a company we'll talk more about throughout this show; Stable Diffusion,
which comes from the startup Stability AI, which takes a slightly different technical approach
and which released its code publicly; and Midjourney, which is the service that I use most often.
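To make the prompting workflow a bit more concrete, here is a minimal illustrative sketch of sending a natural-language prompt to a text-to-image model programmatically. It assumes the pre-v1 openai Python package and OpenAI's DALL-E image endpoint (Midjourney, by contrast, is prompted through Discord rather than a public API); the API key is a placeholder, not anything from the episode.

```python
# Minimal sketch: sending a natural-language prompt to a text-to-image model.
# Assumes the pre-v1 "openai" Python package (pip install openai) and an API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

prompt = (
    "selfie stick photo of Shakespeare and the Lord Chamberlain's Men "
    "in the Globe Theatre in 1596 in London, smiling faces, happy, "
    "crowd in the background, Victorian clothes"
)

# One request, one generated image; the model fills in everything not specified.
response = openai.Image.create(prompt=prompt, n=1, size="1024x1024")
print(response["data"][0]["url"])  # URL of the generated image
```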
It's really been in the last year, and especially the last six months, that these tools have been tuned to the degree that they're really capturing notice.
If you've been on the internet in the last couple months, you might have seen the Pope in a puffer jacket photo, or perhaps a Midjourney imagination of Trump getting arrested.
You also might have seen historical selfies like the one I described above.
These all come from these types of text-to-image AI tools.
The other tool that has captured incredible amounts of consumer attention and wonder, frankly, is ChatGPT.
ChatGPT is a chatbot layer that sits atop a large language model.
Now I'm trying to keep this show very high level, but I think it is worth going into a few terms
and acronyms here.
As I said, ChatGPT sits on top of GPT-3.5 and GPT-4.
GPT stands for generative pre-trained transformer.
It's a large language model that comes from the same company that made DALL-E, which is called
OpenAI.
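To make that "chatbot layer on top of a large language model" idea concrete, here is a minimal sketch of calling the underlying model directly through OpenAI's chat completions endpoint. It assumes the pre-v1 openai Python client; the model name and API key are placeholders rather than anything specified in the episode.

```python
# Minimal sketch: ChatGPT-style usage is essentially a list of chat messages
# sent to an underlying GPT model. Assumes the pre-v1 "openai" Python package.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # availability of gpt-4 depends on account access
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what a large language model is in two sentences."},
    ],
)

# The chatbot layer is largely about managing this message history for you.
print(response["choices"][0]["message"]["content"])
```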
Large language model, or LLM, is again a slightly nebulous term, but it usually refers to
deep learning models that are, one, general-purpose models, as opposed to being trained for one
specific use case, and two, trained on incredibly large quantities of data.
For the sake of this conversation, I'm not going to get too deep into what it means to
train AI, but the short idea is that training AI means teaching it to interpret data correctly
and learn from that data, with the goal of using that learning to perform tasks.
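As a purely illustrative toy, and not a description of how LLMs are actually trained, here is what "learning from data" looks like in miniature: a one-parameter model repeatedly nudged toward lower error on a few example pairs.

```python
# Toy illustration of "training": fit y ≈ w * x by repeatedly adjusting w
# to reduce the error on example data. Real LLM training is this same idea
# at an enormously larger scale, over text rather than numbers.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, desired output) pairs

w = 0.0             # the single "weight" the model will learn
learning_rate = 0.01

for step in range(1000):
    # Gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad  # nudge w in the direction that reduces error

print(f"learned w ≈ {w:.2f}")  # close to 2, the slope hidden in the data
```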
So OpenAI has been working on these LLMs and their GPT system for a number of years.
But ChatGPT was a seminal inflection-point moment, because for the first time, a
chatbot gave regular people who didn't have any technical background at all the ability to
actually interface with an LLM and start to discover all the things it was capable of.
And discover they did.
In the six months since the November announcement of ChatGPT, people have flocked
to explore just about every use case you could ever imagine, from using it to help produce new
content, to reviews and translations of legal contracts, to research, to creative experiments
in starting businesses, and so on and so forth.
For a sense of the scale and breadth of the activity here, by January, two months after launch,
ChatGPT reached 100 million monthly active users.
That makes it easily the fastest growing consumer application in history.
For comparison, it took Instagram two and a half years to get to 100 million users,
and even TikTok nine months.
That means that ChatGPT got there four times faster than TikTok did.
I think often world-shifting moments are the product of numerous things converging all at once.
The confluence of incredible advances in text-to-image generation,
coming at the same time that people got to start interacting with LLMs via ChatGPT,
made the end of 2022 and the beginning of 2023
something that I believe history will demarcate as a clear before-and-after moment.
Now, I want to be clear here that these two areas of AI and even generative AI
are just the tip of the iceberg of what's being built out there.
FutureTools.io is a site designed to help people discover the right AI tools
for whatever they happen to need,
and it currently lists 1,384 tools, and by the way, you'll probably get to use that stat
as a way to date this video slash podcast in the future. Some of the categories include AI
detection, generative video, motion capture, text to video, image improvement, music, self-improvement,
translation, image scanning, podcasting, uh-oh, generative art, productivity, speech to text,
voice modulation, and more. Okay, so we've clearly hit an inflection point moment. There's an incredible
flourishing of tools, let's talk then about the discussion swirling around the space and the way
that people are engaging with it. For the bulk of consumers and professionals discovering these AI
tools, the big questions are about how they use it, how it could allow them to create art in
different ways, how it could help them build new businesses or side hustles, how it could change the
way they do their jobs. You really feel this part of the conversation when you go look up these tools on
YouTube. So many channels have sprung up to help people learn about entirely new ways of working
and new ways of creating. One term, for example, you've probably heard is prompting. Prompting is what you
input into generative AI tools, whether they're LLMs, text-to-image, or other text-input tools, to try to
produce the desired outcome. A huge amount of content is springing up around how to prompt on various
tools. There are also a huge number of emergent communities organizing themselves around this
type of mutual learning on places like Discord. I would highly encourage anyone who wants to really
engage with debates about AI and all of its complexity to go check this side of the space out.
By that I mean both go see how individuals and communities are imagining possibilities and opportunities
expand in front of their eyes, but I also mean go actually try these tools and feel a bit of
that wonder yourself.
Using these tools, I'm often reminded of Arthur C. Clarke's suggestion that sufficiently advanced
technology is indistinguishable from magic.
I also recently saw a tweet from Goth600.
It's an image of a wizard sitting at an old Apple II-looking computer, and the caption is,
casting spells with gods today.
I couldn't describe it better if I tried.
Anyway, I do really feel passionately that to engage with the complexity of the questions that arise from AI,
we need participants in that conversation to glimpse this stratospheric expansion of possibility as part of the conversation.
However, expanded possibility or not, there are big questions that AI brings up.
One set of those has to do with its likely disruption to industries and even more entire categories of work.
There are many people in our society that have grown complacent about the idea of technology replacing various aspects of blue-collar jobs.
This new set of AI definitely has that type of disruptive force, but it's for many white-collar jobs.
Epsilon Theory's Ben Hunt writes,
We started using GPT-4 in some of our processes, and it's literally a 100x improvement.
Yes, you have to review work for errors.
Yes, you have to refine prompts, just like you would a human analyst.
I don't know why you'd ever hire a junior banker slash analyst slash lawyer again.
But he also zooms out and generalizes.
Every analyst slash associate slash junior on the sell side or buy side is now obsolete.
Seriously, you're about to be replaced.
GPT-4 is as profoundly disruptive as the internet.
It changes everything in businesses based on knowledge work and symbolic manipulation.
If you don't see that GPT-4 is an industrial revolution-level event, you're just not paying attention.
GPT is insanely deflationary, which is insanely nasty for our modern political and economic
system.
Now, part of the intensity of Ben's language is that it is quite difficult, once you've grokked
the disruption at the door, to get really huffy about things like whether inflation came in at the
5.2% expected or the 5% we got with the latest CPI print. It all just seems so quaint,
like dinosaurs discussing the weather when an asteroid is barreling towards them. I actually think
that the work disruption conversation itself has multiple parts. One, what jobs and industries
are most susceptible to this disruption? Two, how do we deal with the massive deflationary
pressures this disruption brings? Three, how does it challenge our conception of our own self-worth?
Do we think in fundamentally different ways about work and the value of our contributions after this?
And are these changes inevitable?
Even if we prohibit AI from doing certain types of human jobs for the sake of keeping our jobs,
would we become resentful knowing a machine could do it better?
Okay, so now we've got two categories of discussion.
We've got the regular folks out there exploring the opportunities for things like Midjourney and ChatGPT.
We've got the work disruption conversation, which has so many dimensions.
Then we have a third discussion, which is whether we survive.
A month ago, Bankless invited Eliezer Yudkowsky on the show, and it took an unexpected turn,
unexpected at least to the hosts.
They eventually named the episode, We're All Going to Die, and that pretty much sums up the thesis.
Now, this is a real and important part of the discussion around AI: whether the creation
of an AGI, an artificial general intelligence, would lead inevitably to the end of the human species.
This concern sounds outlandish or overblown to many who first hear it, but many of the people
deepest in the AI space ascribe it some meaningful percentage chance of happening.
In AI circles, there are a few relevant terms here.
AGI refers to the idea of a hypothetical agent that can understand or learn any task that a
human could.
AI safety refers to the field that is focused on preventing the harmful consequences
that could arise from AI.
AI alignment refers to the idea of processes that steer AI systems towards their intended
goals and away from misaligned goals which could cause harm.
X-risk refers to the idea of a, quote, astronomically large
negative consequence for humanity, such as human extinction or permanent global totalitarianism.
That's the definition from the forum started by Eliezer called LessWrong.
Philosopher Nick Bostrom introduced the term existential risk in 2002, and defined it as,
quote, one where an adverse outcome would either annihilate Earth-originated intelligent life
or permanently and drastically curtail its potential.
This is heady stuff, right?
Well, it might not surprise you then to see that these questions about the possibility of human extinction
have spawned extremely intense and passionate communities organized around how much they believe
in their likelihood.
The people who are convinced that this is a problem tend to think it's the only problem
that matters to work on.
One of the big groups that tends to have this belief is the effective altruists, which
is a group you might have heard of in association with Sam Bankman-Fried.
But of course, there are entirely alternative and opposite points of view.
For example, there are the e/accs, or effective accelerationists.
This is a group that believes that AI can lead to effectively a post-scarcity technological
utopia. Now, going too deep on this debate is beyond the scope of this particular episode,
but at least you now have some of the terms. The point that I'm trying to make is that I think
it's impossible to fully discuss AI without understanding it from each of these vantage points,
the expansion of human creativity and opportunity, the disruption to work and industries,
and the existential risk. But now we come to the Bitcoiners and the crypto kids. There is an
undeniable interest overlap between these communities, and one that I would argue is rising rapidly.
For some observing from the outside, the explanation is cynical. Crypto is out of favor with the money people,
while AI startups are raising lots, so people abandon crypto for AI. This is a pattern we've seen
before. I saw a hell of a lot of people, from L.A. in particular, leave crypto in 2018 because they
had rediscovered their first passion in the legal cannabis industry or whatever it was.
Elon Musk even joked about this on March 3rd. He tweeted, I used to be in crypto, but now I got
interested in AI. As you might guess, I think it's a little bit less cynical than this. First of all,
The type of person who is going to be interested in Bitcoin is also probably the type of person
who's interested in big global systems and the disruptions of those systems in general.
They're probably more comfortable zooming out to think about society-level issues.
They're probably comfortable with interdisciplinary thinking that goes beyond economics to philosophy
and more.
So in that way, you have an intellectual personality alignment.
There's also the fact that native digital monies seem kind of well suited for an AI era
where digital agents communicate with each other.
Imagine an Auto-GPT, which is a new type of AI that everyone is talking about this week,
that can use internet search, has memory, and can potentially spin up other AIs to accomplish
its goals. Imagine that an Auto-GPT was assigned to manage pizza deliveries and had to pay an
intermediate AI that helped with its solution. Would it use USD or would it use a native
digital currency? That's obviously entirely speculative at this point, but not unreasonable to ask.
And not, I don't think, unreasonable to assume that an internet-native currency is going to be a preferred option.
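As a purely hypothetical illustration of that thought experiment, here is a sketch of an Auto-GPT-style loop: plan, use a tool, remember the result, and, speculatively, pay a helper service in an internet-native unit. Every function and amount here is an invented stand-in, not Auto-GPT's actual code or any real payment API.

```python
# Hypothetical sketch of an Auto-GPT-style loop: plan, use tools, remember,
# and (speculatively) pay a helper service in an internet-native currency.
# Every function below is a stub invented for illustration only.

memory = []  # the "memory" the episode mentions: results the agent keeps around

def plan_next_step(goal, memory):
    # Stand-in for an LLM call that decides what to do next.
    return "search" if not memory else "pay_helper"

def search_web(query):
    # Stand-in for an internet search tool.
    return f"results for: {query}"

def pay_helper_ai(task, amount_sats):
    # Stand-in for paying an intermediate AI, e.g. over a Lightning-style rail.
    return f"paid {amount_sats} sats for: {task}"

goal = "manage tonight's pizza deliveries"
for _ in range(3):  # a bounded loop instead of "run until done"
    action = plan_next_step(goal, memory)
    if action == "search":
        memory.append(search_web("pizza delivery routing"))
    else:
        memory.append(pay_helper_ai("route optimization", amount_sats=500))

print(memory)
```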
A third reason that it makes sense to me that Bitcoiners and crypto folks more broadly are
getting interested in AI is that in many ways the decentralization space has inverse
and even adversarial dynamics to some of the problems of AI. Decentralized systems like
blockchains are, effectively, decentralized sources of truth. That truth could be extremely
important in a world of deepfakes and misinformation, a world where we have to assume things
are not real in our current sense of the term, but were in fact created, or, perhaps a better
term, generated. What's more, blockchain systems are harder to tamper with because of their
decentralized architecture that makes them resistant to attacks of all types. If you're interested
in a deeper discussion on this set of points, you will definitely want to check out my conversation
with Sergey Nazarov from Chainlink tomorrow. It's a huge part of what we discuss. A final reason
that there's more overlap than it might seem at first between the AI space and the Bitcoin
space is that there are many in the Bitcoin community who are convinced of the importance of
redesigning the financial system around a secular shift to a deflationary economy. This is the core
argument at the center of Jeff Booth's 2020 book, The Price of Tomorrow: Why Deflation Is the Key
to an Abundant Future. In it, Jeff argued that the world must find a way to make the financial
and political system compatible with persistent deflation during an era where technology
pushes the price of goods and services down at an ever-increasing rate. The traditional
solution to this problem has been to print money and artificially drive up inflation,
essentially for the purposes of avoiding a massive debt default. Booth considers that method
unsustainable and suggests that the solution may need to involve moving away from a debt-based monetary
system, which requires infinite growth and infinitely expanding debt. Is there an alignment, then,
between that argument for moving to a new deflationary model and an AI-driven
future of deflationary abundance? A potential solution to this paradox could be the adoption of
digital hard money as a replacement for brittle fiat-based financial systems, or at least that's
an argument that some are interested in exploring. Now, the point of all this is not to try to overly lump
AI and Bitcoin and crypto together. It's just to point out that there is some amount of fellow traveling in similarly
strange times that I do think weaves them closer than it might have initially seemed. By the way,
one last note. I mentioned before the effective altruists and the portion of that community that is
focused on X-risk. The EA community was of course best known for its connection with SBF.
Subsequent to the failure and revelation of SBF's fraud at FTX, we've gotten a much more robust
picture about just how deep the connections between Sam and EA were. While the popular narrative was
of SBF making EA, it now sort of seems like it was the other way around. The original money
for Alameda came from effective altruists, and it seemed like there was at least in part an explicit
mission to make as much money as possible to use for EA goals. Now, there is a part of the
EA community that genuinely believes that effectively no other problems are worth working on
besides AI safety. I'm not sure that Sam was in the extreme on that. What's undeniable
is that he spent a ton of money on it. It was a central pillar of the foundation he began. Much
has been made of effective altruism's tendency to think in ends justify the means terms.
Is it possible that some of these AI safety concerns and X-risk issues drove Sam to simply
not care about the consequences of his wanton theft? At this point, it's not at all clear.
Sam hasn't even admitted stealing funds, much less trying to explain his motivations.
But I do think it dramatizes the stakes of these conversations.
Anyways, guys, that is the primer for today. If you are one of my Bitcoiners, but you're interested
in this AI stuff, go check out my new
AI Breakdown YouTube channel. I think you'll find a lot there that is resonant or at least
useful. And for those of you who aren't that interested, but at least want a conversational
understanding of all these terms being thrown around, I hope that this helped. Until tomorrow,
guys, be safe and take care of each other. Peace.
