We Study Billionaires - The Investor’s Podcast Network - BTC148: Bitcoin and AI w/ Guy Swann (Bitcoin Podcast)
Episode Date: September 20, 2023. Preston Pysh and Guy Swann's conversation covers the fascinating attributes of AI, the counterintuitive nature of AI, and how the need for specialized models will create a rich ecosystem of decentralizing forces around countless models and resourcing requests. IN THIS EPISODE, YOU’LL LEARN: 00:00 - Intro 01:24 - What is at the intersection of AI and Bitcoin? 19:12 - How AI is able to synthesize enormous amounts of information. 30:29 - Why AI models are actually going to be a decentralizing force. 32:37 - What incentives will attract the most data? 40:06 - Why training models from scratch isn't the most optimal path. 52:52 - Guy's thoughts on Sam Altman's World Coin. 01:03:58 - Why identity and AI are becoming a paired combo and how to avoid it. 01:03:58 - Why immediate settlement is a necessity for AI. BOOKS AND RESOURCES Join the exclusive TIP Mastermind Community to engage in meaningful stock investing discussions with Stig, Clay, and the other community members. Guy Swann's AI Podcast. Guy's Website with access to all his content and media. Related Episode: Listen to BTC051: Bitcoin & Why The Bond Market Is a Big Deal w/ Greg Foss & Guy Swann, or watch the video. NEW TO THE SHOW? Check out our We Study Billionaires Starter Packs. Browse through all our episodes (complete with transcripts) here. Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool. Enjoy exclusive perks from our favorite Apps and Services. Stay up-to-date on financial markets and investing strategies through our daily newsletter, We Study Markets. Learn how to better start, manage, and grow your business with the best business podcasts. SPONSORS Support our free podcast by supporting our sponsors: River Toyota Range Rover Sun Life SimpleMining The Bitcoin Way Onramp Briggs & Riley Public Shopify Meyka Fundrise AT&T iFlex Stretch Studios Support our show by becoming a premium member!
https://theinvestorspodcastnetwork.supportingcast.fm Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to TIP.
Hey everyone, welcome to this Wednesday's release of the Bitcoin Fundamentals podcast.
On today's show, I have a fascinating and important topic to cover with Bitcoin OG Guy Swann.
Although it might not seem like it on the surface, Bitcoin and AI go together like peanut
butter and jelly. And on today's show, we're going to learn some fascinating attributes about
AI and how it necessitates an immediately settling bearer asset money that can
transact as fast as the processing demands that it has.
Not only do we cover that, but we also get into the counterintuitive nature of AI and how the need for specialized models will create a rich ecosystem of decentralizing forces around countless models and resourcing requests.
So, without further delay, here's my chat with Mr. Guy Swann.
You're listening to Bitcoin Fundamentals by the Investors Podcast Network.
Now for your host, Preston Pysh.
Hey, everyone, welcome to the show. Like I said in the introduction, I'm here with Guy. Guy, great to have you back on the show. This is long overdue.
Yes, indeed, man. How are you doing? Doing great. It has been a while. I'm just, you know, we were in Austin and you're up on stage, just killing it. And I'm sitting there in the audience, listening to your presentation about Bitcoin and AI and these intersections and all sorts of things that you're commenting on. And I was like,
We have got to put this on the show and people have got to hear your point of view on some of this stuff because I think it's really profound and really important.
So that's the impetus for the conversation.
I'm curious what feedback you got in Austin from when you were done with the presentation.
Yeah, actually quite a bit of good feedback.
And one thing in particular we'll bring up when we hit that point, because it really reinforced one of the topics that I was talking about, specifically about, like, the level of fraud and issues with dealing with payments and payment processors and credit cards in kind of the
era of bots everywhere, which is only getting worse with this new wave of technology that we're
seeing. But yeah, generally a lot of people, probably like seven or eight people that I talk to,
basically gave me some little anecdote or something essentially reinforcing it. I didn't get any
contradictions to the argument, which I actually wanted to invite because I'm curious where
obviously it's not a complete picture. And I'm curious where there's nuance or pushback. But I think
the general, quote unquote, thesis is, I think, the proper way to think about where things are headed.
We're definitely going to get to this in much more detail later on, which is the whole idea that
going down this KYC path is such a losing battle for them technically. And I loved how you really
kind of drilled into some of that. But let's start off here. You start your presentation off with
this idea that AI is a centralizing force. Explain to people why. Explain why that's an
important concept to understand in this much larger, broader conversation of like where this is
all going and how Bitcoin is somehow involved in all that. The thing about AI is, and I think probably the biggest thing, or at least the first thing, that kind of opened my eyes to this and made me realize that there's going to be a major problem with the way that our payment systems and our monetary structure work, like just a credit-based system, and where we are in kind of our digital transition, is that AI models, both image generation, video generation, and audio generation, are going to really mess with all of the tools we have for proving that you are human online.
I think we essentially lose that ability.
And I don't think the timeline is very long on that.
In fact, I think realistically, if an attacker was genuinely, like, seriously motivated, you could essentially, I mean, I think the average price now of a set of credit card information and ID and stuff on, like, the dark web is like two bucks, five bucks, something like that.
Like, it's really cheap.
I genuinely think the reason we don't have fraud as bad as we could have it, like, all of our information is out there, is that there's not this massive natural force to just break all the things.
You know, like one out of 100 people might have that inclination,
and then one out of 100 of those or a thousand of those might actually just go up and do something about it.
And people are generally just like, I want to go about my day.
I want things to work for me and I don't want to cause any trouble.
to the point that we'll accept terrible things just to not make it uncomfortable.
You know, we'll let ourselves get punished or we'll not do what is right for us,
just because we don't want to make a situation uncomfortable.
We're not going to contradict people.
Like, we're generally, people are generally nice.
And then the other one is just the scale of the capability of the fraud just so drastically outweighs the ability to execute.
Like even with, like, genuine institutions, like organizations overseas in foreign countries that just, kind of as a business, attack first world countries for just pennies and any elderly person that's going to click on that email or whatever it is.
Even at that scale, there's just too much.
It's kind of like hiding in the crowd, right? If there's two billion people's worth of data out there, you just can't go through it all. You know?
Like I really think those are the major limiting factors to the degree of fraud that we have.
It's not whether it's technically available or possible.
But AI changes that because suddenly you can scale the attack.
You can have a computer do all of this.
A great example, actually, is that I had Alex Lewin on the AI Unchained podcast, which, by the way, a lot of the audience might not know, I started another podcast on this.
As I started going down this rabbit hole,
I was realizing just how insane AI was going to be moving forward
and how much I think it was going to change the landscape of practically everything. I think the two most important technologies right now in the world, hands down, no questions asked, are Bitcoin and AI. And with that kind of unraveling, I just kind of felt
like this obligation to specifically reinforce or explore the open source self-hosted side of the
conversation because everything I was finding is like, plug in ChatGPT, get an API from OpenAI and Bard and Google and all this stuff. And I'm just like, we're going to be in an awful, awful place if this new wave of technology has this kind of order of magnitude ability
to make us more productive and capable, but we can only have this capability by plugging into
a Google server. Because immediately the use cases I saw, I mean, instantly, my first thought
was like, I want this thing to read and see everything that's happening on my computer. So I have a
contextual way to search and make sense of things I did two weeks ago because I lost notes from two
weeks ago. I have things that I've saved. I have the, let's say I listen to an audio book or whatever.
Like, I listen to The Real Anthony Fauci and One Nation Under Blackmail. These books are so
horrifically dense. By the way, I started reading that based on your recommendation, and it is
mind-blowing. It's crazy. It's just that the story of essentially the mafia and our intelligence
agencies are the same. It's just one story. They're the same set of institutions.
for better for worse.
The research is just out of this world. I don't even know how it's humanly possible.
Yeah, yeah, she's a beast. And I'm pretty sure she wrote a lot of that before any of the OpenAI stuff really came out. Like, she literally did that research, which is totally nuts.
But anyway, it's crazy. It's crazy. Sorry to sidetrack. One Nation Under Blackmail is what we're talking about here.
Yeah, and it's volumes one and two. So it's longer than you think it is, you know?
But both of those books are great examples of just, I mean, how many names have you already forgotten?
You know, as you're going to the thing. I mean, it's just dense, unbelievably dense with examples and data points.
And so after you listen to it, you have this idea, you realize the scope of evidence for their arguments.
And then you're in a conversation a week later and somebody mentioned something.
And you're like, oh, no, there's like at least seven pieces of hard evidence that prove this.
You know, like, this is just nothing but like the mainstream narrative to kind of hide away
what was actually going on.
But what are those seven data points?
Where the hell are they?
And I can't, that evidence is just gone.
But I know, and I can't search it.
But imagine if you had like an AI contextual, like, just listening to what you were
listening to.
And you could just literally search through the transcription of the thing.
In fact, I've been kind of doing this manually with the audiobooks.
And then I go online, I try to find the ebooks.
I don't buy it again, but I try to find it on Torrent or whatever, just so that I can search it because I'm not going to read it again.
But I just want to be able to get access to that information.
But sometimes I can't even, I don't even know exactly what I'm searching for.
I kind of like have a contextual vague idea of what I'm looking for.
Like, for example, I don't remember the country.
I don't remember the specific time period or something it was, but I remember just some of the details.
That's actually been one of the greatest things about, like, the LLMs, the large language models, in search, like Perplexity AI. And then even just, like, asking ChatGPT general questions is I can ask things that are totally off the wall that I would ask a person. And you can't possibly ask Google or some sort of straight index of information and get the answer. But the language model can make the connections between the patterns if you specify the patterns. I shit you not, I even asked it for lyrics of a song that I was trying to
remember. And I could only remember like two or three words, but I remembered the cadence. I remembered
the cadence of the rest of the phrase in the song. And so I just said, it sounds like this.
And it was just like bippity boppity boop, boop, boom, boop. Come on. And then put in the words that I
remembered. And I said it was a rap song and it was this. And I just gave it.
enough detail and it says, were you meaning this? And it literally gave me back exactly what I was
looking for because it understood that I was simply asking for the emphasis, that I was asking for the syllable breakdown, so to speak, of what the words sounded like. So you can generally ask it, via alliteration, what sounds like this.
And one example I use a lot is Oklo. It's a nuclear power plant company, and they've created kind of a modular design that's far safer, requires far less ongoing maintenance, and can essentially be set up in smaller units. So it's just a far more efficient and smaller setup of the design. But they were actually partnering with, like, Bitcoin companies. But I couldn't remember the name of it. All I could remember was the Ookla speed test.
And so I literally just said that. I was like, I'm looking for a company that I cannot remember. And I searched it on Google, DuckDuckGo, Brave, like all the different ones, being like, what was the thing? And every time I typed in something that was wrong, it literally took me further away from what I was looking for. And I was like, wait a second, ChatGPT. This was like one of the first times that I realized that I could ask this type of question. I went back to it and I asked ChatGPT, and I was like, I can't remember what it's called. I just know that it's something simple and it starts with an O. And what keeps popping into my mind is the Ookla speed testing, even though I know that's not what the name of it is. And it was literally just like, I think you're looking for Oklo, nuclear, whatever. Here's their Wikipedia page, blah, blah, blah. And so it knew immediately that it could make that connection.
And so anyway, those are just kind of examples of how powerful these things could be.
And I think it's, especially with LLMs, for kind of search and pulling information together that's, like, lost. Like, let's say in 2020 I started, well, one thing that I do is I save notes or links or articles, all sorts of stuff, that A, either I'm trying to go back to, or B, are like a piece of evidence. Because, I mean, God knows how many times some piece of evidence comes out and I can easily find it on Google.
And then it turns out that piece of evidence is really inconvenient for some mainstream narrative.
And three weeks later, I can't find it on Google.
You know, like, it just gets suppressed in all the different ways that they can suppress.
it. And so, like, I want that link. So I've just kind of been doing this for years and it's
slowly gotten worse and worse and worse over the years. And I generally started doing this
during the Iraq War with like a bunch of anti-war links and stuff. And that's when I learned the trick of, like, if I take a VPN and I say I connect into Germany or I connect through Russia or whatever, I get completely different search results for the exact same search terms. And that's when I started realizing, at a whole other level, how much
we're being manipulated in the digital environment.
And when you add AI to that mix,
that's going to make that problem just shockingly,
shockingly worse.
Not to mention the level of Panopticon sort of surveillance and nightmare you're talking about there. Because the whole point, the whole way that these things are incredibly useful tools, is by giving them access to everything. So if Google has a contextual analysis, a contextual-based window, one where it can literally understand, by knowing what alliterations I use and what, like, code or little tagging system I use on my computer, the information that I'm saving and the ideas that I'm talking about, that's a nightmare. Because they know what's in your head. They know what's in your head at that point,
right? If you're feeding it, they know everything. Yeah. I've read this, I've read these 20 books. Well, now it
knows that you've been influenced by the ideas in those 20 books. And it's like, it basically
knows what you're susceptible to. It knows if it was going to try to push you in a certain direction
or pull you in a certain direction, the queuing that would probably be required in order to do
such a thing to achieve whatever results, whoever's the owner of these models wants to achieve.
And I guess it goes back to the original question guy, which is AI is a centralizing force.
The reason it's a centralizing force is because it needs to ingest as much data as humanly possible in order to achieve these miraculous things that everybody's now just starting to experience.
Yes. And that's what I thought at first, that AI is going to be this horrible centralizing force. And especially the way it was talked about in the mainstream media, too. And I know to take everything with a grain of salt. That was kind of part of the problem, or part of the reason, for the podcast. I thought we have to really make the conversation about open source and self-hosting as loud as possible.
And I couldn't find a good source.
So I was like, all right, well, I'm just going to take my journey and finding sources and I'm just going to turn it into a show so that I can kind of save the trouble for everybody else.
Or at least the 100 people that listen to the show, whatever it is.
The other thing that I would constantly hear in the mainstream media is that only big companies could do this. And this was also the narrative around, like, we have to make it safe and we have to make it inoffensive and unbiased and all of these things, which is just an idiotic idea.
And the reason that was supposedly so important, or the reason we took that approach, is because this was only going to be run by a bunch of giant corporations.
Oh, this model has 100 billion parameters and we're going to train it on a trillion
parameters.
And there's how many neural connections there are and the human brain is going to be smarter
than us and like all this kind of like general stuff.
So there was this image being painted, a picture being painted for us, that said only the giant corporations are going to have this, and for anybody with less than a billion dollars, these just won't exist.
Yeah.
And that was scary.
I was like,
this could undo what Bitcoin is doing unless we like get ahead of this.
I actually now think that, despite the fact that there will be elements of centralizing forces in AI, this will contribute far more to the diseconomies of scale of the giant corporate setup and the governmental institutions than to the economies of scale, in the sense that the bigger you are, the less it helps you. And one of the most amazing things, the article that really kind of tipped me over the edge, I started kind of seeing this picture. And I was like, wait a second,
this is not quite what it's made out to be. But the one that kind of like put the nail in the coffin
for me on like, this is the way I need to take this. And like this is, this is the way Bitcoiners
need to take this was the article.
that was leaked from a Google executive, excuse me, a Google engineer, a memo that was sent around titled, We Have No Moat, and Neither Does OpenAI.
And he was talking about some of the fundamental characteristics of how large language models
and image diffusion models work.
And the fact that all of these kind of like lock-in, like network effects, this like,
if you're on Twitter, it's really hard to leave Twitter.
If you're on the Apple platform, it's really hard to leave the Apple platform.
These effects that naturally lock us into certain silos or certain platforms, which a lot of these other technologies have had because we don't have protocols for them, just don't apply. They don't apply to AI.
That's hard for me to wrap my head around.
Why is that?
Well, think about it.
First off, there's no network effect.
And a great example of why, or a great example to just kind of see its outcome, is OpenAI. You know, you've seen that chart of how long it takes a platform or a new service or whatever to get to a million users. You see the chart and it's like, Facebook did it in just a few years, and blah, blah, blah. And then you see OpenAI straight up. It was, I think, sub 24 hours to a million users. And now, in June, they had 1.7 billion visitors to the ChatGPT website. But in July, they had 1.5 billion. That's a pretty significant decline,
and this is also the same period in which we have just had this explosion of alternative models.
And the nature of the LLMs, one of the...
How are the alternate models getting their data, though, Guy?
There's just tons and tons of data sets out there.
Like, like, basically a bunch of people have curated these data sets.
There's like official data sets.
And also, there's a really important element that has come into play that was not initially understood
and certainly wasn't part of the narrative.
A great example is what I talked about with the 100 billion parameters versus, oh, this one's got a trillion parameters and now it's going to be smarter than all humans.
Yeah, yeah.
Turns out the one with the trillion parameters was dumber than the one with 100 billion.
And the reason is, is because quality matters over quantity.
And this is becoming a major piece of the puzzle is how do you curate information?
It's actually better to train it on a billion parameters' worth of extremely high quality material and get really solid human feedback to continue to iterate for some span of time than it is to just get a hundred billion parameters of whatever random information you can. I mean, garbage in, garbage out applies. That's the simplest way to put it.
And because of that, I think there was actually a competition that I read about, an article about a dude who made a smaller model.
I mean, we're still talking about like, you know, $100,000, $200,000 to train these things.
We're not talking about like a small amount of GPU power.
But there's a far, far cry from you need a billion dollars to do anything here.
And he did like one tenth of the parameters.
He did it on like one tenth of the dataset.
And he actually beat, like, Google Bard in some sort of quality test, or like a user response test.
And it was just because he found a really, really clever way to organize the information.
Let's take a quick break and hear from today's sponsors.
All right. I want you guys to imagine spending three days in Oslo at the height of the summer.
You got long days of daylight, incredible food, floating saunas on the Oslo Fjord, and every conversation you have is with people who are actually shaping the future.
That's what the Oslo Freedom Forum is.
From June 1st through the 3rd, 2026, the Oslo Freedom Forum is entering its 18th year, bringing together activists, technologists, journalists, investors, and builders
from all over the world, many of them operating on the front lines of history. This is where you hear
firsthand stories from people using Bitcoin to survive currency collapse, using AI to expose human
rights abuses, and building technology under censorship and authoritarian pressures. These aren't
abstract ideas. These are tools real people are using right now. You'll be in the room with about
2,000 extraordinary individuals, dissidents, founders, philanthropists, policymakers, the kind of
people you don't just listen to but end up having dinner with. Over three days, you'll experience
powerful mainstage talks, hands-on workshops on freedom tech, and financial sovereignty,
immersive art installations, and conversations that continue long after the sessions end. And it's
all happening in Oslo in June. If this sounds like your kind of room, well, you're in luck because
you can attend in person. Standard and patron passes are available at OsloFreedomForum.com, with patron passes offering deep access, private events, and small group time with the speakers.
The Oslo Freedom Forum isn't just a conference, it's a place where ideas meet reality
and where the future is being built by people living it.
If you run a business, you've probably had the same thought lately.
How do we make AI useful in the real world?
because the upside is huge, but guessing your way into it is a risky move.
With NetSuite by Oracle, you can put AI to work today.
NetSuite is the number one AI cloud ERP, trusted by over 43,000 businesses.
It pulls your financials, inventory, commerce, HR, and CRM into one unified system.
And that connected data is what makes your AI smarter.
It can automate routine work, surface actionable insights, and help you cut costs while making fast
AI-powered decisions with confidence. And now with the Netsuite AI connector, you can use the AI of
your choice to connect directly to your real business data. This isn't some add-on, it's AI built
into the system that runs your business. And whether your company does millions or even hundreds
of millions, NetSuite helps you stay ahead. If your revenues are at least in the seven figures, get their free business guide, Demystifying AI, at NetSuite.com slash study. The guide is free to you at netsuite.com slash study.
NetSuite.com slash study.
When I started my own side business, it suddenly felt like I had to become 10 different
people overnight wearing many different hats.
Starting something from scratch can feel exciting, but also incredibly overwhelming and
lonely.
That's why having the right tools matters.
For millions of businesses, that tool is Shopify.
Shopify is the commerce platform behind millions of businesses around the world
and 10% of all e-commerce in the U.S. from brands just getting started to household names.
It gives you everything you need in one place, from inventory to payments to analytics.
So you're not juggling a bunch of different platforms.
You can build a beautiful online store with hundreds of ready-to-use templates,
and Shopify is packed with helpful AI tools that write product descriptions
and even enhance your product photography.
Plus, if you ever get stuck, they've got award-winning 24-7 customer support.
Start your business today with the industry's best business partner, Shopify. Sign up for your $1 per month trial today at Shopify.com slash WSB.
Go to Shopify.com slash WSB.
That's Shopify.com slash WSB.
All right.
Back to the show.
So is this how, when we're setting up the neural net, like I know TensorFlow has this playground. You can go in there and you can kind of decide what the transition
function is on each neuron. You can say the architecture of the neurons are seven layers deep
and 10 neurons wide. Is that what we're talking about when we're saying that people are coming
up with better ways to train the data? Or is it something different that I don't understand?
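The setup Preston describes, a stack of identical hidden layers where each neuron runs its inputs through a chosen activation (what he calls the transition function), can be sketched in plain NumPy. The layer count, width, activation choice, and random initialization below are illustrative assumptions, not details from the episode:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(n_inputs=2, width=10, depth=7, n_outputs=1):
    """Build random weights for a fully connected net:
    7 hidden layers, each 10 neurons wide, like a Playground config."""
    sizes = [n_inputs] + [width] * depth + [n_outputs]
    return [(rng.standard_normal((fan_in, fan_out)) * 0.5,
             np.zeros(fan_out))
            for fan_in, fan_out in zip(sizes[:-1], sizes[1:])]

def forward(layers, x, activation=np.tanh):
    """Run one input through every layer; the activation
    ('transition function') is applied at each hidden layer."""
    for w, b in layers[:-1]:
        x = activation(x @ w + b)
    w, b = layers[-1]
    return x @ w + b  # linear output layer

net = make_mlp()
out = forward(net, np.array([0.5, -0.3]))
```

Changing `depth`, `width`, or swapping `np.tanh` for another activation is exactly the kind of architecture knob the Playground exposes.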
Well, there's a handful of different things, actually. I mean, to some degree, there are elements
to that. And in the process, like, I'm still kind of in the middle of my journey. I come at this
from a very novice or, like, pro-amateur, you know, like kind of a prosumer position, like in the same way that I'm not a developer and I can't build computer components or design circuitry, I can buy a bunch of different components and put them together
and make a custom machine. I put myself in the sort of prosumer level, and there's still a lot
about the models and how these things work that I don't understand.
But I'm in the process of reading a bunch of research papers that are about a third totally obscure to me, like I don't know what I'm reading. But I'm still gleaning more and more information from every one.
It's kind of like what I did with the Lightning white paper.
The first time I read it, I had no idea what the hell I was reading.
And about the fifth time, a couple of big pieces started to click.
But you go through it slowly and you just go over and over and over again.
And in that sense, one of the big pieces of the puzzle is just finding good data.
And because you have to use machines to sort the data.
It's like having to use a machine in order to design your circuit or your chip because you've now got so many individual elements on the chip that you couldn't possibly design it without a computer.
Nobody can lay out a billion individual circuits or individual gates on a chip or whatever.
We now use software abstraction to a hardware abstraction that is insanely fine-tuned after thousands and thousands and thousands of iterations to even operate at that level.
Well, it's kind of the same way when we're talking about a billion pieces of information.
There's no way to manually put this together.
So the trick is, A, how to use the AI tools we already have.
And B, how do you source?
Like, where do you pull the information from?
Because you simply can't put human eyes on it all.
You can't hire a group of...
It's like how Yahoo originally did manual search, and Google figured out a way to do it with a script back in the late 90s.
And Yahoo had much better results when the internet was tiny. But there was just a point where they could keep iterating on the script, and as it got slightly better and slightly better, it got closer and closer to what Yahoo's results were. But they could do it on such a larger data set that their results ended up getting better, just because they had a greater scope of sight, whereas Yahoo was still doing it manually and kind of failing at their script version, and they weren't prioritizing it, because they were just trying to hire more people to manually decide what category this went in, et cetera, et cetera.
Well, we kind of have that same thing going on in AI right now: you just can't manually do this at the scale of the data you need for training sets. But then there's also, the LLMs being a good example, tons of different weightings and tons of different modes of thinking about the data that are going to fundamentally change how we do this. Like, I actually think we're kind of in a plateau right now.
We've kind of reached a point where there are these 7 billion to 16 billion parameter models out there now that you can run on a consumer machine. And that was one of the other big things in the "We Have No Moat" article I was talking about: they were saying that these things would never run on a consumer machine, that you'd never be able to do that, that you'd have to have a billion dollars and a giant corporation.
And so there was going to be this huge lock-in effect.
And it was like days. It was like days after LLaMA, Meta's model, was leaked that they had it running on basic consumer hardware.
And even though it was slow as all get out, I think it was only about three weeks before they got it running on a Raspberry Pi.
Just like with heavy quantization.
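The "heavy quantization" mentioned here can be sketched in miniature: shrink each 32-bit float weight into an 8-bit integer plus one shared scale factor, trading a little accuracy for roughly 4x less memory. This is an illustrative toy in Python, not the actual scheme llama.cpp used; the function names and sample numbers are made up for the example.

```python
# Toy illustration of 8-bit weight quantization: store a scale factor
# plus int8 values instead of 32-bit floats (roughly 4x less memory).
# A simplified sketch, not any real inference engine's format.

def quantize(weights):
    """Map a list of floats onto the int8 range [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127
    return scale, [round(w / scale) for w in weights]

def dequantize(scale, qweights):
    """Recover approximate float weights from the int8 values."""
    return [q * scale for q in qweights]

weights = [0.82, -1.27, 0.003, 0.5, -0.9]
scale, q = quantize(weights)
restored = dequantize(scale, q)

# Each restored weight is close to, but not exactly, the original:
# quantization is lossy, which is why heavily quantized models run
# slightly degraded but fit on far smaller machines.
for w, r in zip(weights, restored):
    assert abs(w - r) <= scale / 2 + 1e-9
```

The design choice is the usual one: per-block scale factors keep the integer range well used, and the rounding error stays bounded by half a quantization step.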
Basically, with the open source community, the tool is so useful that there are so many eyes on it and so many brains thinking about how to make it better and more efficient that they just figure out how to break up the information. Maybe they have to write to and from the hard drive the whole time to get it to work on a really, really crappy piece of machinery, but they can run them on potatoes, on toasters, whatever. And so that was a huge thing. And that Google engineer, in one part of the article, two thirds of the article actually, it's just this list of: this happened; we were certain this wasn't going to work; then it happened a week later. "It's never going to run on a Raspberry Pi." Here's a link to the guy running it on a Raspberry Pi. It was just one thing after another of them centralizing it and being certain of how the space was going to be. It's that same thing where any person can be arrogant and certain that they understand the whole picture, but when you let this information out to people, when you release it and you get a million people looking at it, it's just, we don't know anything. So, in essence,
you're saying that you think AI is going to become an open source kind of project that is very viable, that works just as well as what we're seeing with OpenAI today. And because of that, the power is really going to be more in the individual's domain than in the corporation's domain.
Am I capturing that correctly?
Is that what you really think this is going to percolate into five or ten years from now?
Yes.
I think, because the limitation is purely GPU, it's computation, which, oddly enough, is creating a kind of software architecture or ecosystem that interestingly has a lot of the same characteristics as proof of work. Because how we interact now, or how we will be interacting, with our information and with our network graphs and all of these things... up to now, it hasn't been a heavy computational thing. It's been an organizational and an indexing thing, you know, that sort of stuff.
We basically have ways of shortcutting through everything.
But to get a deeper level of understanding, to pull intellectual patterns out of it now that the scope of information has gotten larger and larger and larger until we're losing things in our indexes, I think AI, and particularly large language models, is going to be our primary mode of interacting with machines.
Like, it's going to be an interface. In the same way that, you know, I say Bitcoin is a consensus mechanism and it's a lot more like a clock than it is a payment system like Visa, well, in that same sense, or in a similar way at least, I think AI is a lot more like the keyboard and the mouse than it is Photoshop.
It's a window through which we're going to look at and use all of our other programs and applications,
not necessarily the program itself that we're after.
So I would think that if I were trying to compete in this space, in the AI space, to attract more data and more inbound customers requesting information, I would state up front that all of your queries, all of your inbound data requests, are encrypted and I have no clue who you are. It would be like a magnet to people who would want to use such a service, which further makes your model all the stronger, because you're receiving so many inbound requests and then sending the data, the AI response they're requesting, back to that user in an encrypted way. It's an incentive that would attract the most data into your model by offering such a service.
Is that where you see this going as well, that it makes things more competitive for the people offering such an encryption-type service for accessing the models?
I think there's definitely an element there and a lot of different companies will try to entice as many users as they can to their platform.
But the thing that I was mentioning here is that our interaction is becoming computationally heavy.
And so normal API calls are going to start costing money, like real money.
And they've essentially been free up until now, free to the point that we could easily just stick an advertisement in the experience, and that was enough to make everything feel free.
You know, just make everything a sunk cost.
Essentially, the entire internet infrastructure is a sunk cost to get ads, right?
And that's going to change a lot with LLMs.
And OpenAI is a great example, because there was actually an article not too long ago saying that if they don't change something, they'll run out of money. They got a $10 billion investment from Microsoft when they got taken over, or purchased, or whatever the hell it was, $10 billion, and they're going to be out of money by, like, the middle of 2024 if they don't change something about their model. The subscription model just doesn't work very well. And they'll probably figure out how to monetize it. They'll probably turn it around. It's just a terrible trajectory right now.
But the thing is, if they start charging too much... Like, I do have a subscription, just because I really like the tool and I'm trying to play around with as many of them as I can before I sort out my structure and my machine and everything. And I also want to have a good comparison for how to make use of a lot of these things. But if they started charging me, like, $100 a month, I'd just use open source. Like, I have GPT4All on my computer. I run the local one. It's different. I guess it's not as good as ChatGPT, because they have a massive model with a massive amount of GPU power, but it's good enough to keep doing a lot of what I'm doing.
Real fast, Guy,
help me understand this, because I think everybody's familiar with OpenAI. I have my Umbrel node, and I downloaded a GPT app that, I guess, is running locally on my Raspberry Pi?
It is, it is.
So all of that data that it's using from its model, where is it pulling that decision-making, that training, from?
It's writing to and from the hard drive. The model has already been built by somebody else off of, you know, an enormous data set.
And then you downloaded the weights, essentially.
It's an interesting way to think about it that I have yet to be contradicted on.
I'm sure somebody who understands this in greater detail is going to be like, that's an inaccurate analogy.
But I think it's useful.
And depending on the response I get from a person who does tell me I'm wrong about it,
I'll probably keep using it.
If I think, you know, it's like I've used analogies for explaining Bitcoin or whatever,
and then like a Bitcoin developer will come in and be like, well, that's not exactly right,
blah, blah, blah.
And I'm like, well, what you said is just kind of nuance to the analogy, but I think the overall picture is actually right. The level of specificity and nuance kind of obscures the bigger picture sometimes.
But a really useful analogy that I think is very valuable is to think of this as kind of a compression algorithm, a really, really intense compression algorithm for relationships instead of hard-coded data.
So, like, when you zip something, what you're doing is basically using a math problem, so that you can pull a lot of information back out with computation when you unzip it.
Like, imagine: one plus five plus five plus ten plus four is 25.
And if you have, like, that math problem and you fill in the variables, you can actually just store the 25 and the math, the function or whatever that you used, rather than, like, the five data points themselves, right?
Got it, yeah.
So it's a really dumb example, but it's essentially the same sort of concept.
That's what happens when you zip something.
But when you unzip it, you don't just get sort of the file back.
You get exactly the same thing you put into it, right?
Like, it's not lossy.
It just trades storage space for computation.
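The zip analogy can be checked directly with Python's built-in zlib, which is lossless in exactly the sense described: repetitive data shrinks dramatically, and the bytes come back bit-for-bit identical, at the cost of the computation to unzip.

```python
import zlib

# Lossless compression: highly repetitive data shrinks dramatically,
# and decompressing returns exactly the original bytes, not an
# approximation. Storage is traded for the computation of unzipping.
original = b"the quick brown fox " * 500
compressed = zlib.compress(original)

assert len(compressed) < len(original)          # much smaller to store
assert zlib.decompress(compressed) == original  # bit-for-bit identical
```

This is the property Guy is contrasting with AI models: zlib gives back exactly what went in, while a model, as he explains next, keeps only the relationships.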
What AI is doing, what these models are doing, whether it's an image diffusion model or a language model or anything like that, is making so many connections between the data that it's literally taking, like, different groups of words, and even letters, and coming up with the probability of their relationship.
It's just making a bunch of percentages.
And then it can take those weights, this huge, essentially giant oversimplification of how all of this data looks.
And it can't actually pull out any of the individual information.
All it can do is store the most common, and even, at the edge, the least common but still most likely to show up if you give it enough words ahead of time, if you give it a good prompt, relationships between any and all words in massive, massive data sets.
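A toy sketch of what "coming up with the probability of their relationship" means: a bigram model that keeps only the relative frequency of each word following another, never the training text itself. Real LLMs learn enormously richer weights, and everything here (the corpus, the helper name) is invented for illustration.

```python
from collections import Counter, defaultdict

# Tiny bigram "language model": store only how often each word follows
# each other word, as percentages, not the training text itself.
corpus = "the cat sat on the mat and the cat ran".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the most probable next word and its probability."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(most_likely_next("the"))  # "cat" follows "the" in 2 of 3 cases
```

Note that the model can now score sequences it never saw verbatim, which is the miniature version of the "bippity boppity" point that follows: it stores patterns between words, not the sentences themselves.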
And it's why you can actually create relationships that don't actually exist.
Like my question about bippity boppity clang clang clong is not actually in the data set,
right?
Like it's not been trained on somebody asking that exact question.
But what it has been trained on is a lot of different questions in which somebody has done some alliteration, or explained the definition of alliteration and used an example. And so it understands that the pattern of clang clang clong is very similar to bing bang bong when the word alliteration, or A, B, or C, or whatever, comes up in trying to describe those two things.
It understands the pattern between them. So it's never actually able to pull out any specific data it was trained on, but it's able to mimic the relationships that huge amounts of data stored.
So that's why, like, in an image diffuser, you can say "cat."
And it understands that if there's a black pixel here, there's probably a white pixel here or an orange pixel there, based on every single time it saw a picture that was described with the word cat.
And it just so happens that if you do that on a large enough, high-quality enough data set, this essentially relatively simple math function, encoded in relationship to the data set, can pull out an image of a cat, can just poof an image of a cat into existence.
It's the vector, it's the
quality of the vector that you're describing, of these relationships. And I'm calling it a vector because you can scale it larger and smaller, and the relationship to the other points is what's important. And you're saying that as time goes on, and so many of these relationships are understood, they're not memory intensive, relatively speaking, and they scale with one another as we move along on the timeline. They're like modular blocks that you can keep building upon, and the reason there's no competitive moat is that anybody can basically extract these vectors of a cat. Or am I off in another direction, and I sound like I'm crazy?
I guess I never really have... I've been talking around the point rather than getting straight to it. So let me get straight to it. Yeah. So the big thing is that training the models from scratch is a horrific process. It's a really big process. Ask Aleks Svetski. They've been training the Spirit of Satoshi. It's not easy, and it's costing a lot more than they went into it thinking, but they're finally getting, like, great results, you know, and really liking the direction or whatever. It's just obviously a much larger scope of project than they had intended going into it, but they were committed.
But training the models from scratch is an incredibly difficult process.
Taking a model that already exists and adding in some specific smaller training
or adapting it based on new information and new input from a user base that's interacting with it
and then retraining it with just like kind of an incremental improvement is a hell of a lot cheaper.
A lot cheaper.
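The economics of "from scratch versus incremental" can be shown with a deliberately tiny sketch: one frozen "base" weight that was expensive to learn, plus a small trainable correction learned from a handful of new examples. Every name and number here is hypothetical; real fine-tuning applies the same idea across millions of parameters (or low-rank adapters).

```python
# Miniature sketch of fine-tuning: keep the expensively trained base
# weight frozen and learn only a small correction (delta) from a few
# new examples. Illustrative only, with made-up numbers.
w_base = 2.0   # pretend this took a fortune in GPU time to learn
delta = 0.0    # the cheap, trainable correction
lr = 0.01

# New domain data where the true relationship is y = 2.5 * x
data = [(x, 2.5 * x) for x in range(1, 6)]

for _ in range(500):  # a few hundred cheap gradient steps
    for x, y in data:
        pred = (w_base + delta) * x
        grad = 2 * (pred - y) * x   # d/d(delta) of squared error
        delta -= lr * grad

# The frozen base plus the learned delta now fits the new data.
assert abs((w_base + delta) - 2.5) < 0.01
```

The point of the sketch is the cost asymmetry: only `delta` is updated, over a tiny data set, which is exactly why adapting an existing model is "a hell of a lot cheaper" than relearning `w_base` from nothing.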
Let's take a quick break and hear from today's sponsors.
No, it's not your imagination.
Risk and regulation are ramping up,
and customers now expect proof of security just to do business.
That's why VANTA is a game changer.
VANTA automates your compliance process
and brings compliance, risk, and customer trust together on one AI-powered platform.
So whether you're prepping for a SOC 2
or running an enterprise GRC program,
VANTA keeps you secure and keeps your deals moving.
Instead of chasing spreadsheets and screenshots, Vanta gives you continuous automation across more than 35 security and privacy frameworks.
Companies like Ramp and Riter spend 82% less time on audits with Vanta.
That's not just faster compliance, it's more time for growth.
If I were running a startup or scaling a team today, this is exactly the type of platform I'd want in place.
Get started at Vanta.com slash billionaires.
That's Vanta.com slash.
billionaires. Ever wanted to explore the world of online trading but haven't dared try? The futures
market is more active now than ever before, and plus 500 futures is the perfect place to start.
Plus 500 gives you access to a wide range of instruments, the S&P 500, NASDAQ, Bitcoin, gas, and much more. Explore equity indices, energy, metals, forex, crypto, and beyond. With a simple and intuitive
platform, you can trade from anywhere, right from your phone. Deposit with a minimum of $100 and experience
the fast, accessible futures trading you've been waiting for. See a trading opportunity. You'll be
able to trade it in just two clicks once your account is open. Not sure if you're ready,
not a problem. Plus 500 gives you an unlimited, risk-free demo account with charts and analytic
tools for you to practice on. With over 20 years of experience, Plus 500 is your gateway
to the markets. Visit plus 500.com to learn more. Trading in futures involves risk of loss and is not
suitable for everyone. Not all applicants will qualify. Plus 500, it's trading with a plus. Billion
dollar investors don't typically park their cash in high-yield savings accounts. Instead,
they often use one of the premier passive income strategies for institutional investors, private credit.
Now, the same passive income strategy is available to investors of all sizes thanks to the Fundrise
income fund, which has more than $600 million invested and a 7.97% distribution rate.
With traditional savings yields falling, it's no wonder private credit has grown to be a trillion-dollar
asset class in the last few years.
Visit fundrise.com slash WSB to invest in the Fundrise income fund in just minutes.
The fund's total return in 2025 was 8%, and the average annual total return since inception
is 7.8%. Past performance does not guarantee future results; current distribution rate as of 12/31/2025. Carefully consider the investment material before investing, including objectives, risks, charges,
and expenses. This and other information can be found in the income funds prospectus at
fundrise.com slash income. This is a paid advertisement.
All right. Back to the show.
So building on other people's work is specifically something that is going to feed back really, really fast, and more broadly with the scope of the ecosystem or environment that is using it.
And actually, the big billion-dollar models are shooting themselves in the foot, because they're so worried about this thing being dangerous or biased or offensive that they're crippling it from being able to do so many things. And the LLM doesn't have, like, a moral understanding of what these things are. They're trying to train it on the specific language.
So, like, an example is the code interpreter of ChatGPT. Somebody used an example, I don't know if they've fixed it or not, but they're, you know, obviously doing constant feedback, so they're trying to find that sweet spot. But I don't think there is one. Like, the idea of having an unbiased model is just, what kind of bias are you giving it? There's no such thing as unbiased, right? But one of the things they asked is, like, how do I kill a process on my computer using the command line, or whatever? And it responded: you shouldn't kill people, killing is bad, you should be nice and you should help people. Like, it just took the idea of "kill" and said, this is bad. So for a basic function, I just want to know the command to kill a process, you have to go to some other model. So they're shooting themselves in the foot in this way, which is actually great for us. But where I think the 10-to-20-year timeline is,
is that these are self-hosted,
and that it is increasingly going to go down to smaller and smaller individuals
and contribute to the diseconomies of scale.
But in the middle ground, where I think we're going to be in the next five to ten years, is an explosion of alternative services, because there's no magic sauce that Google's LLM has that some company with 10 Nvidia A100 cards, you know, with a $200,000 or $500,000 investment, can't provide, with a bigger-than-your-normal consumer model and not have any restrictions. And they can offer essentially the same sort of service that Google is offering.
And we've seen exactly this, too.
Like, there's not one LLM service out there.
Like, there's one Apple platform, right? And then there's Google.
And there's one Apple Maps, and there's one Google Maps, and then there's maybe, like, one or two alternative map services, and a lot of them just kind of pull data from the big ones, right?
But there are LLMs everywhere.
Well, Guy, I've seen this conversation on Nostr with some really smart developers where they're saying, if you're going after a general AI, you're just not understanding where this is all going. And so, just to use an example, let's say I wanted to start a business, and I wanted to be an expert in reading financial statements and detecting fraud in financial statements. And I spend a couple hundred thousand dollars training this model, and all I care about is feeding this thing 10-Ks and 10-Qs and detecting fraud before it happens from reading these reports. And maybe I'm sprinkling in social or whatever else I want to do. But the mission of the company is to detect fraud at publicly traded businesses, which would be a great idea, by the way. Fantastic idea. I'll invest. But somebody who's like OpenAI, how in the world are they going to compete with a company that is specializing in this specific niche? And let's say you're another business, and for some reason you want access to this model, and so you start feeding all your data to me, this company that's specializing in this particular thing. I just don't see how, in the long run, anybody's going to be able to compete with a company that's really going after a targeted, specific thing like what I described. And I think you're just going to have subscriptions to that service, to maybe embed into, you know, if somebody wants to have a model that's broader, that covers all of finance, let's say you have a model that's covering all of finance, and they want access into this fraud detection thing, they're going to subscribe to the API of this company's very specific training. Is that how you see this evolving, with, like, modular AIs that are really good at specific things? There's a company that specializes in detecting cat photos, or whatever it might be. That's going to be their niche. Is that where this is going?
Yes. And this is not only
what we have seen specifically, that everybody is taking advantage of their specific data set or their specific use case, but it also makes sense from a "general intelligence is not an efficient thing" perspective. I talked about this, we talked about this a lot with Drew on the last episode of AI Unchained, actually: the overhead of just being intelligent in, like, a thousand different ways when you're primarily using it for one or two is a lot of excess cost for no reason.
Yeah.
And so like the idea that we're going to have this giant godlike general intelligence that
can just do and is capable of anything is antithetical to everything we know about why evolution
works.
The idea isn't to just have
maximum total intelligence in all fields.
The idea is actually specialization
so that you have the minimum required
intelligence to accomplish your goal.
And when we're facing something
that we understand as poorly
as these kind of black boxes
of pattern recognition
and this kind of like stored intelligence
in software and data form,
I think it's really, really useful to just go back to basic fundamentals.
All right, how does order evolve, period?
Like, what are its characteristics?
And I think, if you just take some fundamental principles of reality, you realize that AI doesn't negate these things, it doesn't change these things. It's just kind of a new layer of how we're able to digitize how we think, as opposed to just what we think about.
That's the space that we've been in, right?
We can digitize and store all the things that we think about, the media, the text,
like all of this stuff.
Now we're digitizing how we think, the relationships between those things.
I just don't think a general, like a massive, general, godlike AI that can do all of the things and understands all of the things is even slightly computationally efficient. And I think it kind of falls apart as soon as it exists, because an entire ecosystem of specialized models that do all of these extremely explicit things as best as they possibly can will beat it, for the same reason that in a market a billion people specializing is a whole hell of a lot better than one giant government putting a boardroom of experts together to do all of it. Like, I just think that's not sustainable. That fundamental reality isn't suddenly different when you turn it into a piece of software.
I could see a general AI saying, well, if you really want to understand the accounting fraud, I can subscribe to this API for $5 to really do that.
I could see a general AI doing that, basically pointing you to the best source to be able to do a certain type of analysis and then offering up some type of paid service.
We're going to get into why Bitcoin really inserts itself into this space through an example like that one.
Before we go there, though, I really want to talk about KYC, because you were making some really awesome points. Like, look at OpenAI.
Oh, what's his name? The guy who's...
Sam Altman, right?
He is hell-bent. He's hell-bent on this idea of KYC, indexing everybody's eyeballs on the planet.
Why is this going to fall flat on its face, from your point of view?
Okay. Going back to what I said at the very beginning: to get an ID and credit card information and stuff is literally just a handful of bucks per person online, with just, like, one or two good pictures of that person. And it was funny, you originally needed, like, 30 seconds, excuse me, 30 minutes of high-quality audio for one of the first tools that I used for mimicking somebody else's voice. In fact, for one of the ones that I used for the Matrix meme, I had to have five minutes. So it's, like, incrementally gotten less and less. I had to go through a bunch of Laurence Fishburne interviews and all the scenes in the movie where there wasn't a lot of background noise and cut out Laurence Fishburne's dialogue until I had five minutes of audio, so that I could train it to sound like Laurence Fishburne and make my Matrix meme. It used to be 30 minutes of high-quality audio. Then it was five. Now it's, like, 10 seconds for really good models. Like, surprisingly good. Ten seconds. It's nothing. Like, somebody could call you up and you could be like, "Hello?" and, "Hey, can I talk to Billy Bob?" or whatever, and, "I'm sorry, you have the wrong number, thank you." And that was enough. That's enough. So be wary of phone calls from
anybody. And also, in that same vein, I think a really, really prudent thing to do, as we enter a space where you can't prove someone's human in any digital context anymore, is to have a safe word, something to use if something serious happens. Because if you get a call from a number you don't recognize, or somebody you do know gets a SIM swap, and then you get a call from them and they sound exactly like them, and they're telling you to wire money somewhere, or there's an emergency and "I just need $2,000 worth of Bitcoin sent to me," something like that, that doesn't mean it's the person you think you're talking to. My family has some safe words. And I think it's prudent to start thinking about that, because this attack vector is going to show up quicker than we think. But anyway, in that sense, all of our tools for digitally proving
who you are, or just that you are human.
You just gave an example of, like, a general LLM that can figure out which LLMs it might need to complete a task, or which models it might need.
An agent that can literally go out, pay for services, and assess the best way to accomplish a task.
That means that you can literally prompt an agent that's connected to the internet to go find all of the things necessary to accomplish the task, and the attacker doesn't even need to know how to do it.
The attacker can just ask the agent how to do it, or to do it for them, and give it enough funds to make a return on whoever they're scamming.
And CAPTCHAs? Done.
Like, I thought it was already in the bag, because I had heard someone say that it was, and I'm talking about, like, months and months ago. So I don't know if somebody was recreating this work, but it didn't matter. As soon as I was looking into it, there was another model that somebody came out with that can do CAPTCHAs better than anybody, I mean, instantly.
Like, the whole idea of, let's pay someone to do them: as soon as you have the data set from paying a bunch of people to do CAPTCHAs for you, and you can get a model to read the CAPTCHAs, CAPTCHA is dead.
You know, and you just make a new model. And if they change it, they're going to have a model that will be able to do it better than all humans before the humans catch on and figure out how to use the new thing. Like, you know, have you ever done the "listen to this instead" when you can't read it? It's so much worse. It's so much worse. I don't understand how that's, like, a solution to that.
Having an image of them, like, having a video of them? Nope. You can live-put somebody else's face on mine. I mean, you can live-make a cartoon and put yourself in a completely different environment. Like, it's this green-screen stuff on steroids, and you can change the voice live so that it even has, like, the cadence and the emphasis. And then, with $5, you can get a picture of their ID and get all their Social Security, like, any of their identifying information. And how easy is it to make a picture of them holding up their ID, or a note that says, "Hi, I'm Guy Swann," and here's this information, and I'm signing up for Strike or, you know, whatever it is that I've got to do my selfie with?
Like, and that's now.
That's now.
You know, like, in three years...
But because of that, because API calls are increasingly going to be GPU intensive, like, very computationally intensive and more and more expensive, we're going to have to have funding for all of this. We're going to have to pay for all of it. And we have to move money around in an environment where the fraud is getting worse. There's just not going to be a solution that makes it better, I don't think. And so, in fact, this is actually one of the ones where somebody came up to me and gave me, like, an anecdote at Diplog Boom that really reinforced this.
And they were talking about running a merchant service that integrates both fiat and Bitcoin. And he was taking notes while he was listening to the talk, and at this whole section about KYC specifically, he was just like, oh my God, I hadn't thought about it like this, but you're absolutely right. He just started listing and thinking about all the things that they're doing to deal with the credit card fraud, what his merchant service is having to do to basically put up a barrier to make sure that they're not constantly putting through fraudulent payments that then get reversed. And worst of all, not only is the money pulled back, but there's, like, a $25 or $35 fee, basically like an overdraft fee, every single time it happens. And he said literally thousands, thousands of charges a day. As soon as you just kind of open up the window, they just come in. And merchant services and companies like that just have to sort out how to deal with it.
And then he put underneath it, and he says: and Bitcoin payments? He's just like, they don't do anything. Like, he doesn't think about it. There's no infrastructure for it. There's no methodology for figuring out which ones are real or not. They literally just put in Bitcoin payments, and those are done. So this entire structure of things that they're having to do to protect against this fraud, and again, this is the best it's ever going to be going forward. The easiest. It doesn't exist. They don't even have to plug it in on the Bitcoin side of the equation.
Yeah.
Like, they have to manage some channels for a Lightning node, or they're using an LSP and they don't even have to think about that, because somebody else has floated all of that for, like, a tiny fee.
Guy, I think something that's really important for people to wrap their head around is when
you're pinging that AI to perform work with its processor and expend energy, it immediately
replies with the answer.
But if I'm using traditional financial rails, I'm paying for that service 30 days out,
depending on how long the clearing takes,
before they actually receive the payment in their account
and it's settled, 30 days after the delivery
of the quote-unquote product,
which is the AI response.
And so this delay between, I've just delivered it,
it'd be like me giving you a loaf of bread.
You have it in your hands and you're saying,
all right, well, I'm going to settle with you 30 days later.
Anybody who hears that is like,
well, that's insane.
That doesn't scale if I'm asking for a million loaves of bread in one second, which is what happens with a lot of these AI requests, right?
Like, people need to think of it in terms of how much monetary energy some of these really big companies are pinging these servers for.
And they're getting delivery of the product, which is the AI response right now.
But they might go out of business in 15 days or whatever,
and what they paid with is being clawed back.
With all of these issues,
you really need to have some type of immediately settling money
so that if I ping the server for a 10 cent request
or a half a cent request or a $10 million request
that we can settle up right then and there
without it being clawed back because of bankruptcy or whatever.
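The asymmetry being described here can be put in toy-model form. This is a hedged sketch, not any real payment API: all class and function names are made up, a merchant sells 8-cent inference calls, card revenue only becomes final after an illustrative 30-day clawback window, and an instantly-settled payment is final on arrival.

```python
from dataclasses import dataclass, field

CLAWBACK_WINDOW_DAYS = 30  # illustrative card-network reversal window

@dataclass
class Merchant:
    settled: float = 0.0                          # money that can no longer be pulled back
    pending: list = field(default_factory=list)   # (amount, day_charged) awaiting finality

    def charge_card(self, amount, day):
        # Card rails: the merchant delivers compute now, but the payment
        # only becomes final after the clawback window elapses.
        self.pending.append((amount, day))

    def charge_final(self, amount):
        # Instantly-settled bearer payment: final the moment it arrives.
        self.settled += amount

    def advance_to(self, day, reversed_days=frozenset()):
        # Settle pending charges whose window has passed; drop any charged
        # on a day later claimed fraudulent or voided by bankruptcy.
        still_pending = []
        for amount, charged_on in self.pending:
            if charged_on in reversed_days:
                continue  # clawed back: the compute cost was already spent
            if day - charged_on >= CLAWBACK_WINDOW_DAYS:
                self.settled += amount
            else:
                still_pending.append((amount, charged_on))
        self.pending = still_pending

m = Merchant()
m.charge_card(0.08, day=0)               # 8-cent inference, card rails
m.charge_final(0.08)                     # same inference, instantly-final payment
m.advance_to(day=15, reversed_days={0})  # buyer goes bankrupt on day 15
# The card revenue is gone; only the instantly-settled payment survived.
```

In this toy run the merchant keeps exactly the instantly-settled 8 cents, while the card charge vanishes along with the compute that was already delivered.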
Exactly.
What you're describing is the fact that the computational cost of the LLMs makes the output
work like an instantaneously delivered bearer asset.
They end up having that attribute. Sam Altman specifically said with OpenAI, and this is why
OpenAI is on the struggle bus right now as far as their longevity, that every
single time you ask it a question, it costs five to eight cents, instantly.
Like, you're just pumping power through a computational machine. It's like mining eight cents'
worth of Bitcoin, right?
Yeah, yeah.
You can't do that for free.
And, like, it's just a huge computational cost.
And trying to put together a machine myself,
I can tell you how much it costs to get that computation.
Like, it's a thing.
But the cost is immediate.
That's never been the case with APIs.
Like, if you're selling a physical product online
and you get a chargeback in 20 minutes,
or even if it takes you 30 or 40 seconds to figure out
that a charge is fraudulent or made with a stolen credit card,
you don't even execute the full charge,
but you go ahead and give them the subscription.
You go ahead and give them access, because you don't want to lock out a customer.
You don't want to screw the customer experience: wait for three hours while we figure out whether or not you're a real person?
No, you just let them go into the website, right?
But it doesn't cost anything with, like, a normal website.
Like, to read an article, again, it's all loss leaders.
APIs are cheap enough relative to everything else that you can just kind of hide that.
You can obscure that cost away and just kick off the customer that's trying to freeload or using a stolen credit card.
Or if you're selling like a physical product and you figure it out in two hours, you just don't ship the product.
If you could ship the product within a second, well, then you'd have a problem.
I just don't understand why they can't figure it out, though.
Like, it's so much more resource intensive to say, well, I just want to KYC everybody so that I can understand
whether this person's good for the money or not, as opposed to just taking immediately settling
money that clears. Like, why? Why can't they figure that out? They don't understand.
They don't understand the relationship, and the whole framing is off. Like, they're not asking
which money they should use. They're asking which payment method they should use, right?
They're trying to solve the fraud problem, not recognizing that it is a credit problem,
that it's an identity problem. And KYC is just kind of this band-aid.
It's been the universal tool, right?
It's been the tool for this all along.
That's why Twitter is KYCing everybody: they've just had such a huge
bot problem that they're sick of it.
And they're trying to solve it the way that they think they can solve it.
But the KYC solution is already outdated.
But that's why I think this is all going to get way, way, way, way, way worse before it gets
better.
Because the user experience, I mean, how crappy is the user experience of having to take a
selfie and hold up your ID and wait for identity verification, and go to some other
website where it's like, we're connecting to IDMe.org or .gov or whatever, and go through this
lengthy process and fill out another form? It's getting to the point where just signing up for
a basic service that I want to try for, like, 30 seconds to see if it's any good
is starting to behave like setting up a bank account physically, in person.
Like, it's a nightmare.
And I don't even think that those are real people that are looking at these things to determine whether they're real or not.
I think it's all AI.
It's all AI, which makes you say, well, if the AI is checking it, why can't the AI outsmart the checker?
And it's just this endless do-loop of not approaching the solution from the base layer, but just kind of building these layers and layers of insanity on top of each other that don't actually address the root problem.
And it's just kind of this, like, we started with kind of credit- and pull-based payments
online, and it worked well enough.
But it was out of balance with the way the technology worked, kind of like selling CDs in the
era of file sharing was out of balance.
And then they just kept trying to add a new piece to it to still make it fit,
kind of contorting it to shove it into this new reality and let the old system keep working.
That's where we are. I think KYC is like
trying to sell a full album and not letting you buy per song or have a subscription, or whatever
it was in the 90s with the growth of file sharing: it's just out of balance
with the way technology is. And the new reality will not let it exist on a long enough time scale.
But in the same way that copyright enforcement got way worse, with the crackdowns and the
lawsuits, and you couldn't seem to get any good music anywhere online for this huge scope of
time, and then once one service did pop up, it got shut down within six months, so you were
constantly having to change stuff: I think that's kind of where we are with KYC right now.
Everybody's KYCing everybody. It's just going to get so much worse until the user experience of
trying to use anything, the barrier of just setting up another subscription or trying out some sort
of service... Free trials are going to start going away. Because if it costs you eight cents
immediately and somebody can just bail out on it,
like you're talking about an API where somebody can just plug into your service
with a fake credit card,
and if you don't catch it, they can pound your API all night and just burn
through your GPUs.
I mean, God forbid you're on AWS or Google Colab, where it just automatically
expands based on the requests.
Yeah.
And then you just wake up in the morning with a $50,000 bill that you can't do
anything about.
And you got it on the credit card of, like, one person who scaled this with AI.
Somebody's just running a simple little model on their computer.
And what they did is they took the highest quality 800 or 1,000 of the stolen credit card records they got online, and went through and created fake IDs and fake personas and fake selfies and everything for all of them.
So one person is able to do this at the scale of a hundred people. Even though it would have taken them months and months to do this before, they can do it overnight.
It just is not going to hold up.
And I think it's literally just the lack of knowledge about these tools and the fact that they're not widely accessible.
Kind of like the future is here, but it's not evenly distributed.
As it gets evenly distributed, we're going to see everybody silo.
Everybody's going to put up walls and have limited access to their APIs.
And the idea of the free internet is going to get further and further away.
And we're going to have KYC, everything.
and the user experience of the open web
is just going to get crappier and crappier.
And everybody's going to be sick of it.
Everybody's going to be...
The pressure has to build to the point
where everything is annoying
and everything increasingly sucks
because of this imbalance
until people are willing to go down to the base layer
and just go back to the drawing board
and say, what are we doing wrong
that has given us this thing, this massive, contorted, band-aided, nailed-together, duct-taped thing
that doesn't actually fit in our world?
But at the exact same time, there's going to be a whole bunch of
Bitcoiners that have the best user experience possible, that decided to drop the whole KYC crap
from the get-go, because they already have a social graph based on keys. So that's it. That's
your head start. If you have a system based on public and private keys, you're going back to those
base realities that AI doesn't change: hard problems are still hard problems.
Proof of work is still proof of work.
And those APIs to read an article or watch a movie or download something or order something online
could easily be negated and batched together with all those other requests.
It's different with the API for an LLM, where you're delivering a bearer asset,
and that cost is instantly delivered and irrecoverable.
Well, then, if you can accept instantly delivered payment for that cost, who cares?
I don't care if you're a bot or a person, right?
If I have a $50,000 bill overnight from somebody's absurd API requests,
but I accepted only sats and I made $50,500 off of it, that's a great morning.
That's a great way to wake up.
And again, going back to the merchant services thing,
I don't have to have any barriers.
I don't have to have any, like, oh God, how do I sort out the noise from the fraud?
How do I make sure that these thousand charges aren't going through?
No. Just accept sats.
And from the user side, nobody even had to sign up.
Nobody had to fill out a form.
You literally just have a public key to log in with.
You shoot some sats.
You shoot six cents' worth of sats,
you get five cents' worth of computation back,
and it's instant and done.
And I don't even have to have a relationship with them.
And I can give them exactly what they want with no barriers. With none.
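That keyless flow, pay an invoice, prove it, get computation, can be sketched in miniature. Everything below is a mock: the invoice/preimage shape loosely follows how Lightning payments reveal a preimage (as in schemes like L402), but no real node or network is involved, and all the function names are made up for illustration.

```python
import hashlib
import secrets

# --- server side (mock): no accounts, no KYC, just proof of payment ---
PRICE_SATS = 6
_invoices: dict[str, str] = {}   # payment_hash -> preimage (revealed on payment)

def create_invoice() -> str:
    """Issue an invoice identified by the hash of a secret preimage."""
    preimage = secrets.token_hex(32)
    payment_hash = hashlib.sha256(bytes.fromhex(preimage)).hexdigest()
    _invoices[payment_hash] = preimage
    return payment_hash

def pay_invoice(payment_hash: str) -> str:
    """Mock wallet: 'paying' reveals the preimage, as Lightning does."""
    return _invoices[payment_hash]

def api_call(prompt: str, payment_hash: str, preimage: str) -> str:
    # The server checks proof of payment and nothing else; bot or human, who cares.
    if hashlib.sha256(bytes.fromhex(preimage)).hexdigest() != payment_hash:
        return "402 Payment Required"
    return f"answer to: {prompt}"   # stand-in for the model's output

# --- client side: shoot sats, get computation back, no signup ---
h = create_invoice()                # server quotes PRICE_SATS for this hash
proof = pay_invoice(h)              # wallet pays and learns the preimage
result = api_call("why is the sky blue?", h, proof)
```

Nothing here identifies the caller: the only credential is cryptographic proof that this exact request was already paid for, which is the whole point of the keyless model.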
And then going back to the elements of open source and the idea that we can train these models to be hyper-specialized and we can incrementally change them, you're also looking at an ecosystem that's going to be learning from each other rather than constantly redoing everybody's work.
When you're retraining these things, and we're talking about hundreds of thousands, millions of dollars to make a decent model,
it makes a whole lot more sense, like in your example of the financial fraud,
to take a base LLM that's great already
and just add in all your financial information.
Like take the best open source model out there.
And because it already understands language,
you don't have to worry about teaching it to talk.
Now give it this smaller adjustment
on your high-quality data set about financial fraud.
And then it will be able to do all of those other things,
but be specialized in understanding
and recognizing financial fraud, while still having that same general sort of, I'm just looking for
what the most likely fraud cases are right now to start investigating, blah, blah, right?
Like, it's still got the general language model handled.
And then, now imagine that. So the analogy that I think the LLM, the language model space,
is heading into, that we've already seen in the image space, is something referred to as LoRA:
it's L-O-R-A, low-rank adaptation.
And what it is is hyper-specialized training.
And all of these people with just, like, a little bit of GPU power can make them.
In fact, that's one of the projects that I want to tackle
when I get this machine up and running:
I want to make my own LoRA, just to do it, right?
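For anyone curious what "low-rank" means concretely, here's the core of the LoRA idea in a few lines of plain Python with toy numbers: the big base weight matrix W stays frozen, and the trainable part is just two thin matrices A and B whose product is the adjustment.

```python
# Toy LoRA: the frozen base weight is d x d; the trainable update is the
# product of two thin matrices, A (d x r) and B (r x d), with r << d.

def matmul(X, Y):
    """Plain-Python matrix multiply for small demo matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

d, r = 4, 1                                   # rank-1 update: 2*d*r params vs d*d
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen base
A = [[0.5], [0.0], [0.0], [0.0]]              # d x r, trained on the niche dataset
B = [[0.0, 2.0, 0.0, 0.0]]                    # r x d, trained on the niche dataset

delta = matmul(A, B)                          # the low-rank adjustment A @ B
W_eff = [[W[i][j] + delta[i][j] for j in range(d)] for i in range(d)]
# Only A and B (8 numbers) were trained; W (16 numbers) never changed, which
# is why many small community "micro trainings" can all share one base model.
```

Because the base weights are untouched, a thousand micro trainings are just a thousand small (A, B) pairs, which is exactly what makes collecting and merging them as a community feasible.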
And Corridor Digital has some great videos.
They're a YouTube channel.
And they have some great videos on like training.
Like they just take a bunch of pictures of themselves
and they train it to recreate images of themselves.
And then they can create anime characters of themselves
and put them in this environment.
It's really, I mean, there's going to be a VFX revolution:
the AI, the image generation, and some of the new models,
or the way they're thinking about models.
Actually, I'll come back to that.
I'm going to put a pin in it: the way they're thinking about models.
But so you can do these micro adjustments.
And then there are websites, Hugging Face, Civitai,
all of these things that collect these, like a big community of all these open source micro trainings.
And then they take the big open source models and they take, like, a thousand great
micro trainings and they retrain the big model.
And now the big model is better at doing all of these very specific things.
And I think, again, going back to some of the base fundamental realities, fundamental truths:
a group, like a community, is going to be a whole lot better at specializing and learning
than one central group, one central institution.
No government or corporation is going to outcompete the market at large
on a long enough time scale.
It's just not going to happen.
And at the exact same time,
one of the big diseconomies of scale
for these billion-dollar corporations
is the speed at which the economy changes.
So, you know, when the Industrial Revolution hit,
you have this big wave of disruption,
and now there are all these new business models and new ways of accomplishing things.
And then you have 50 years of basically implementing that at scale.
Like, it's just a massive orchestration to change all of the handmade processes and put in the
assembly-line processes.
But now we have these constant iterations of new ways of doing things and new ways
of thinking about production.
And if you've got a billion-dollar corporation invested in
10,000 finely tuned assembly-line production processes,
and then you have a really serious fundamental change to how that tooling works,
you have far and away the largest expense for retooling.
You have a big problem on your hands.
And this is why fiat finance is such a centralizing force. One of the things
you see these big corporations doing is not spending all their time updating.
Well, I mean, they do, but that's actually a second-order effect. Their primary method
is buying the small startups that are, because they can get financing with newly printed money,
at interest rates that make no sense.
And so rather than having this disruption phase where all of these small, agile startups
that are really innovating and coming up with new things replace them,
what you have is startups just trying to get big enough to get bought.
Yeah.
And,
but you think about it in a non-fiat world,
that doesn't make any sense.
No.
No.
That's not sustainable.
A Google does not have anywhere near the possible margins of a company that's
still at 100,000 users or 20,000 users or something like that.
That smaller company has a 300x capability to increase and grow,
if they've got a genuine new innovation.
Google's going to grow 3%, tops.
They're already too big.
But they're going to grow 15% if they get all the new financing and they can buy up all the new startups. So they
appear to be more profitable as an investment than they actually are. When in fact, it's just that,
in the fiat apparatus, all of the new funding at the non-genuine interest rate
from the money-printing machine is far more likely to go to the big players first. So they're going to win in an
outsized way against inflation, always, always. Because big is just easy to invest in. Big always wins:
if there's 10% inflation, well, they're going to go up by 13%.
Why? Because everybody invests in them with the newly created money in order to beat inflation.
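The arithmetic behind that claim, using the same illustrative numbers from above: a 13% nominal gain during 10% inflation is a far smaller real gain than it looks.

```python
def real_return(nominal: float, inflation: float) -> float:
    """Real (inflation-adjusted) return implied by a nominal return."""
    return (1 + nominal) / (1 + inflation) - 1

# The big player "beating" 10% inflation with a 13% nominal gain:
big_cap = real_return(0.13, 0.10)
print(f"real return: {big_cap:.4f}")   # roughly 0.027, i.e. about 2.7% in real terms
```

So the headline 13% is really under 3% of purchasing power, while the newly created money chasing that trade is what concentrates capital in the big names in the first place.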
So it's a self-fulfilling prophecy, the very notion that we're creating money and then trying
to figure out where to put it to beat inflation just means it's concentrating, which means
that there's going to be in that concentrated area, something that beats inflation way better
than everything else. That's why you get these huge bubbles, like in housing or whatever it is,
is because there's a feedback loop that if you put money into new money into one specific area,
well, then that's where all the inflation is going to be.
And it's going to appear to have these outsized profits.
So that's this major recentralizing force that has been pushing back
against this era of disruption and kept big things big, kept billion-dollar corporations
and government institutions in their place, even though the technological force is actually
trying to break them down and split them up. But I think that's essentially falling away. The bigger
the challenges get for fiat, both the payment rails and just how far you can push the debt Ponzi,
as these things get stressed to their absolute breaking point, all of that's going to come
crashing down. But at the exact same time, you have AI, which, rather than being a process
in itself, is a way to create this open ideation and iteration on processes, on the idea of processes.
The analogy I used in the talk was that it's like a 3D printer for how to accomplish things.
3D printing and additive manufacturing makes it so that you don't have to retool for every
individual product, right?
Like CryptoCloaks: they can just come out with a new product or a new design or a new thing
and immediately start selling it, because they've got, like, 20 3D printers and they can
just put in a different file.
And now they're making a completely different tool, gear, case, you know, whatever.
That's what it created: if you're disrupting the product,
they no longer have to retool.
So the broader your capability,
the more generic your capability
of accomplishing these things,
the better. Because then you're extremely agile
and you can move very quickly
and respond to the environment,
to the ecosystem,
rather than kind of become this permanent,
I have to do it this way.
Well, in that same way,
AI is a way to instantly retrain
your processes and all of your administrators and all of your tools and all of your software
without having to go through the training process. So the bigger and the more settled you are
in how you go about doing things in the structure itself, the worse this is going to hurt you.
Because I think we're going to start, things are going to start changing so fast and we're
going to come up with new ways of iterating and thinking about how to do the processes at all
and how to set up the idea of an organization,
it's going to move so quick that I think disruption
is just going to kind of become a permanent part of everything.
We're just going to kind of move into this layer
where there's less innovation or there's less need for innovation
on the actual individual pieces
because there's this whole new scope of just like,
how do we put the pieces together,
like recreating the entire puzzle over and over and over and over again
rather than having a set puzzle that everybody's trying to implement.
So because of that, I think, to use an example from the last paradigms, where we saw
these things happen over 20 years, right: you have these huge companies come out of nowhere
in the digital space. You know, selling individual songs blew up with iTunes, and then
music subscriptions finally took off. Now they're the dominant player. They
disrupted the old institutions. And then subscriptions, with music killing the radio, and then
cable TV and YouTube and all these things. You get this whole
new model, and then you get another big giant thing, and then you go through another phase of technological
disruption. Well, I think these are going to start squeezing closer and closer and closer together.
To the point that when we think of, oh, this next company is going to be billions and billions of dollars
and it's going to control and run everything because it's the new platform, it's the Facebook or the
whatever of this new era, I think before you even get to the point in the curve where that is
clearly the successor, clearly the new winning platform or idea, the next idea is
already going to be here. Like, the next piece of the puzzle, or the next process or platform
or whatever, that's replacing them is just going to arrive. Whereas we used to kind of separate
out horizontally, we're also going to suddenly split up vertically, in a massive,
massive way, where the people and the systems in the process of
disruption are going to be disrupted before they've completed the last process of disruption.
You know what I mean?
And when we look at what's happened more recently, we're seeing a trend of the hockey stick
curve coming up faster than it's ever come up.
So to your point of a company replacing the one that everybody thinks is going to start to
dominate, it fits with what we have seen over the last 10 to 15 years, that it's going to come
out of nowhere, and it's going to come so quickly and be so disruptive because it's coming so
quickly. Guy, I want to make sure that we have the, what was the name of the article that you
said, from the Google engineer? I want to make sure we have this in the show notes. Do you remember the
title? The only read that I have is on AI Unchained. So I'm not making AI Unchained, like, a show where
I just read, like Audible, you know, but I am actually going to read some articles that I just
want to talk about, when either I can't find a guest to specifically talk about the topic
or I just have to get it off my chest.
But right now, the only read, the inaugural read, is that piece.
Oh, perfect.
It's called, from a Google engineer: We Have No Moat, dot, dot, dot, And Neither Does OpenAI.
Love it.
So that's the title of it, and I'll send you to the link.
It's obviously only on the podcast version of the show, not the YouTube version.
But it's great.
You should definitely, I mean, it's outdated instantly. As soon as it was published,
like, three days later, it was missing some major pieces.
But I think it's a brilliant framing, and you can see how quickly it went against them.
And how quick, I mean, you think about it.
How many companies have like AI now?
Oh, everything.
There's a new one.
There's a new one every 30 seconds.
And so many of them are running their own models, because they do want to train it,
they do want to give it that specific feedback for whatever their thing is, or they want
to do the LoRA on, you know, their image diffusion or whatever it is.
They're dropping the OpenAI API.
And that's another example of being disrupted before you even completed your cycle: OpenAI is already on the downstroke.
And this just happened in late 2022.
It was, like, November or December that this whole thing blew up,
and they went to billions of visits.
And now they're dropping.
They're already on the way down, because everybody's already kind of extracted how they operate and created a thousand alternative platforms
and a thousand specialized models for God knows what.
I mean, I can't even keep up. Like, I get excited about something that hit the leaderboards
of the open source models on Hugging Face.
Like, the Falcon 7B model was really big for a little while.
And I was like, okay, I've got to figure out how to install and use this on my machine.
And it was, like, a week where I was trying to figure it out, and something else replaced it.
It dropped to, like, number seven or something like that.
And I hadn't even installed it.
I don't have time to install the thing.
Guy, where do we go to see these rankings of the newest, latest and greatest thing, that's actually high signal? Because I'm hearing about all these different models and I have no idea where to look in a reliable kind of way.
Right? Because I don't want to hear about some AI model from my next-door neighbor or this person over here. I want somebody who's truly an expert in this to say, this one's worth paying attention to, this one's worth tinkering with, because it does some
new revolutionary thing.
So do you have a source that kind of lays out, here are the 10 AIs to really
pay attention to today, and it might be very different next week?
The two best places to keep up with what's going on with AI, which it sucks in a lot of
ways because these are hard to navigate for normies.
But it's not terrible.
If you're trying to keep up with like where the real innovation is happening, GitHub.
And if you're looking for models specifically and you want to know what the best models are,
it's huggingface.co.
Huggingface.co.
Okay.
Huggingface.co is kind of like the GitHub of all AI models.
And I'm trying to get into this space.
Like, it's easy to talk about, and people understand, the interactions with LLMs like ChatGPT,
and then image diffusion or whatever, because there's this obvious, right-in-your-face sort of use case
or reaction that you have with it.
But there's also this entire scope of other models, like vector models and categorization models,
all of these things that are more kind of back end, where how you're interacting with them
is not as direct: just trying to make sense of a knowledge base, or making corrections to bad grammar.
Or, you know, every time I say Nostr on a podcast, in my transcription it says no store, you know,
S-T-O-R-E. And how do I get a model to understand that that's a mistake, and go against
a knowledge graph of, like, Bitcoin terms or something like that, to correct to what I'm actually
thinking, what I'm actually talking about? Like, those sorts of things. There's this kind of
magic in this whole other layer. And that's actually going back to the way we think about
models more generally, how these things work: the real magic, the innovation,
is going to occur in how to layer these things and how to understand them. The analogy is
where we were with the digital space and the internet: we opened up this space where we could all
collectively think together. And we all started doing this massive amount of thinking that we'd
never done before, and reflecting on ideas. And our ideas and what we believe started clashing
into each other, and we started screaming at each other on social media and all this stuff. And we had
this massive identity crisis, because suddenly this cohesive narrative about what was true fell apart.
All of our niche ideas that never really had the space to air their grievances,
or to debate or push back against what the consensus was,
suddenly had space, could suddenly spread like wildfire, and a million people could have access to them.
And cable news didn't have to squeeze them into 15 seconds on the news.
You know, like you actually had nuance.
And because of that, our ideas and all of our thinking just split up into a million,
a billion different ways and directions.
And the internet turned that into an ocean: from a consensus of just a few things
and a few cultures to basically anything you could think up, it's there.
The internet and the digital environment got us to reflect
on what we think about, and it's changing and forcing us to realize that things we thought
were true aren't true.
What AI and these LLMs and these pattern recognition models are going to do is have us think
about the way that we are thinking.
Like, what's the principle behind what I'm thinking about?
How do I make it understand my moral foundation?
Like, what's the premise that is present in the way I am taking this thread of thought?
And when we figure out how to make a mathematical relationship between those things and a
general probabilistic "how does this word relate to this word,"
that's where the innovation is going to come from:
abstracting in an entirely new layer.
And, you know, I thought about this with my kids.
You think your kids are always going to know technology and stuff better than you,
and you're like, ah, but I'm going to stay up to date on this stuff, right?
Like, I'm going to use all the new tools, and, you know,
they're going to come to Dad when they need to install this or understand this.
And then I was just thinking about it the other day, the fact that we're going to be thinking about how we think.
My son, and probably all the rest of them that aren't here yet,
are going to grow up understanding, or being able to reflect so deeply on, how they thought about
something last week and how it changed this week, which generally no one is aware of at all.
There's just no scope of understanding, no ability to reflect at that layer, that we've ever
had access to.
It's going to wire them to be philosophers in a way.
It is.
It is.
And they're going to understand the foundations that actually force them to think differently,
because the AI is going to be able to reflect on that.
And without any bias or caring or thinking about emotions,
it's just going to tell you flat out,
well, that's because this changed, that sort of thing.
And those sorts of weights and patterns will get established and added to the models,
rather than just a flat relationship between words
or a flat relationship between pixels or something.
I think where we are right now is that we have this plateau:
we've kind of reached the point at which the LLMs
aren't going to get much better just because we're shoving more data into them.
We'll have some incremental improvements,
which will still seem crazy.
Like, I'm not saying that it's not going to get better.
But quality and curation are going to be a huge part of this.
Which, I think, is where something like Nostr,
an open source protocol for social media and weighting and liking and zapping,
might be a really useful tool, if we can get a really big network over there.
But anyway, the next explosion in AI,
which I don't think is very far away,
is just understanding some elements of the multi-modality
of how to create relationships,
how to create logical pathways.
That's probably the best way to think about it:
how do you give it logic
and then put language on top of it?
How do you create a new layer below it?
It's how do you think about the math
that's creating the patterns,
rather than how do you get better data to shove through the pattern.
In that same sense, I've been watching a great example just in the context of VFX.
There's something called neural radiance fields.
And it's a new model. One of the things that has been really big in VFX for 20 years or so,
kind of since the Terminator era of CG getting really crazy,
you know, Jurassic Park, is image scanning:
I can walk around something and take a bunch of pictures of it,
and then put it into a program, and it will recreate a 3D
model of it with the texture.
And it's like decent, right?
Like, it's good enough that you can then get it ready and put it in a 3D environment and
then get your lighting right and all of this stuff.
But it's still really involved.
You still have to have a lot of VFX work and compositing and all this stuff to make
it good.
Well, one of the big problems actually is that light
is insanely difficult in a 3D environment.
Because when you're looking at light on something, like just in your room,
you think about it as, like, oh, there's a light source and the light bounced
off the object. But that's not all the light. The light is also bouncing off the wall to the
object. And if the wall is gray behind it, well, then there's a tint of gray on this edge. It's also
bouncing around the frame, which is creating, like, a slightly off-white inner edge. Light is
coming from everywhere. Everything is a light source, because you only see all of it because
light is bouncing off of all of it, right? That's why 3D light always seems fake. It's so hard to put
your finger on exactly why, but your brain can sense it if you have a comparison. So there's
this new thing called neural radiance fields, and it's just an entirely novel way of modeling a light
map of what's happening in the room. And the model is specifically how does light react.
And it doesn't create a 3D environment in the way that you can just, like, stick it into
Blender. Granted, they've already kind of figured out how to trick it into doing that. But I can
literally go through with just, like, a phone and walk through an area and then put it into the NeRF, as it's referred to, the neural radiance field.
And it will create a damn near picture-perfect, like perfect-lighting,
radiance field of the entire room, so that I can just stick a virtual camera at any point in this room.
Move the camera, and the reflections will shift based on where the camera is in the scene.
And it's gorgeous.
It's gorgeous.
That's unreal.
And it takes zero compositing.
And it's just a slightly new method for how to build the light relationship.
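The core idea behind the neural radiance fields Guy describes can be sketched in a few lines: a learned function maps a 3D point plus a viewing direction to a color and a density, and a pixel is rendered by accumulating those values along a camera ray. The tiny "network" below is a random stand-in, not a trained model, and all names are illustrative.

```python
import numpy as np

# Toy sketch of the NeRF idea: query a field at (position, view direction),
# get back (color, density), and composite along a ray (volume rendering).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(5, 32))   # input: (x, y, z, view_a, view_b)
W2 = rng.normal(size=(32, 4))   # output: (r, g, b, density)

def radiance_field(point, view_dir):
    """Query the toy field at one 3D point from one direction."""
    x = np.concatenate([point, view_dir])
    h = np.tanh(x @ W1)
    out = h @ W2
    rgb = 1.0 / (1.0 + np.exp(-out[:3]))   # colors squashed into [0, 1]
    sigma = np.log1p(np.exp(out[3]))       # density must be non-negative
    return rgb, sigma

def render_ray(origin, direction, n_samples=16, step=0.1):
    """Walk along a camera ray, compositing color weighted by how much
    light each sample absorbs -- the standard volume-rendering sum."""
    color = np.zeros(3)
    transmittance = 1.0
    for i in range(n_samples):
        point = origin + direction * (i * step)
        rgb, sigma = radiance_field(point, direction[:2])
        alpha = 1.0 - np.exp(-sigma * step)
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color

pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(pixel)
```

Because the viewing direction is an input to the field, moving the virtual camera changes what comes back, which is why the reflections shift with the camera in the way described above.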
And then there's another paper, I haven't finished reading it, about a new way to create text-to-video. Somehow they figured out how the math can have, like, two different layers
of understanding: both the object and the pixel relationships.
Missing that object layer is why image generation translates to video really crappily.
You know, if you've ever seen like the video animations they've done with image generation,
like they're constantly changing or whatever.
And suddenly they're like the person's wearing headphones and then they're not.
And like it just every single frame is slightly different.
Whereas somebody's created, and it seems like a really, really clever method,
a way to have some degree of, like, object permanence within
the pixel relationship, to basically correct that.
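The flicker problem described above, where every independently generated frame is slightly different, can be illustrated with a toy sketch. The "frames" here are just random arrays, and blending each frame toward its predecessor stands in for a learned temporal-consistency term; none of this is the actual method from the paper.

```python
import numpy as np

# Independent per-frame generation has no object permanence, so pixel
# values jump frame to frame. Carrying part of the previous frame
# forward (a crude stand-in for object permanence) damps that jitter.
rng = np.random.default_rng(1)
frames = [rng.random((4, 4)) for _ in range(10)]  # independent "generations"

def flicker(seq):
    """Mean absolute change between consecutive frames."""
    return float(np.mean([np.abs(b - a).mean() for a, b in zip(seq, seq[1:])]))

def stabilize(seq, keep=0.8):
    """Blend each frame with `keep` of its predecessor."""
    out = [seq[0]]
    for f in seq[1:]:
        out.append(keep * out[-1] + (1 - keep) * f)
    return out

print(flicker(frames), flicker(stabilize(frames)))
```

The stabilized sequence flickers far less, at the cost of lagging behind genuine motion, which is why real systems need the learned object layer rather than simple blending.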
But that's where I think like the major step function improvements are going to be.
And when we start understanding this when it comes to, like, moral premises and logical premises,
and the way that we think about things, and we can give this to the LLM, or we can have
a logic model that ties to the language model, we're going to have LLMs that explain to us
where what we believe is a logical fallacy and where ideas contradict each other. We'll
be able to test these things and butt these ideas up against each other at a layer that we've
never been able to before. And that is going to really, really mess with people, I think.
Because most people categorize things separately in their minds and they'll believe one thing,
they'll believe one set of premises about their religion and one set of premises about their
government and then one set of premises about like, just like everyday life. And they won't even
recognize that none of them fit together, that they don't have a unified theory of the world.
In government, people are magical fairies that can just, like, do all the things.
In normal everyday life, you should be skeptical of people and you shouldn't give them any power or control over you or share any information.
And in religion, it's all just fairy dust and there's a man in the clouds.
You know, like we don't hold the same premises, but what we do is we just kind of try desperately to separate these things in our minds.
But the LLM isn't going to do that.
When we figure out how to weight these things by a logical premise, it will be able to square
all of these ideas together and just basically say,
well, you just believe a contradiction over here.
And that's not right.
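The "square all of these ideas together" idea can be sketched crudely: if compartmentalized beliefs are reduced to explicit propositions, a trivial consistency check exposes where they contradict. The propositions and truth values below are invented for illustration; a real system would need a language model to extract them from what someone actually says.

```python
# Toy consistency check across compartmentalized belief sets.
# Each set maps an invented proposition to a truth value.
government = {"strangers can be trusted with power": True}
everyday = {"strangers can be trusted with power": False,
            "be skeptical of people": True}

def contradictions(*belief_sets):
    """Return propositions assigned both True and False across the sets."""
    merged = {}
    clashes = []
    for beliefs in belief_sets:
        for claim, value in beliefs.items():
            if claim in merged and merged[claim] != value:
                clashes.append(claim)
            merged.setdefault(claim, value)
    return clashes

print(contradictions(government, everyday))
# -> ['strangers can be trusted with power']
```

The hard part, of course, is not this check but turning fuzzy, compartmentalized language into explicit premises in the first place, which is exactly the logic-layer problem discussed above.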
And as we start making sense of, again, the models and relationships in these systems,
of how to think about thinking, the amount of change... like, you think
the last 15 or 20 years of social media have really messed with people
and messed with political institutions and who holds the power and who controls the opinions.
Oh my God.
Yeah, it's going to get wild.
It's going to be crazy.
It's going to get crazy.
So, Guy, this stuff is beyond fascinating. If people want to learn more,
I know you threw out Hugging Face.
Was it huggingface.io that's a good resource?
Dot co.
Dot co.
Okay.
Dot co.
Give some people some more resources.
Definitely give them your new AI podcast that you're doing.
as well and any other sources that you want to highlight if they're really wanting to dig into
this some more.
For sure, for sure.
Brian Roemmele, R-O-E-M-M-E-L-E, has a project.
I've been bugging him to try to get him on the show on AI Unchained.
So definitely check out AI Unchained.
It's where I'm attempting to document the majority of what I'm finding and exploring and going
through as I try to build my Twitter list of signal, which is very difficult. It's hard in AI.
I've run into a lot of roadblocks in trying to find the cream of the crop, so to speak.
So definitely follow AI Unchained, and I will try to make as much available there as possible.
The Multiplex, I think, is the name of it. I had the link. But Brian Roemmele's blog, he keeps up with
a good number of things about how to think about these things. And he's building a really fascinating
project, or his goal is a really, really fascinating project about open source, like personal
AI. And there's another podcast called Latent Space, if you're trying to get into
the nitty-gritty. If you're not a developer, a lot of it's going to go over your head. It does for me,
but I just kind of keep listening because, again, 30% of it is very, very applicable and very
useful, because they're thinking about things like how to prompt and, you know,
how to really get the best juice out of the squeeze.
And they are also a good way to connect to other AI podcasts.
In fact, I might actually have them on the show and try to get them to dumb some stuff down.
I should reach out to them.
Then there's Stable Diffusion,
Stability AI.
I think stability.ai is the website.
But they are totally open source.
They're the ones that are the reason for the explosion in the open-source image diffusion stuff.
Oh, okay.
Stability AI.
And they also just announced a new
model called StableCode, where it's a model about, like, teaching you to code if you don't
know how to code, or basically filling in the blanks. And this is a major, major, major use for
LLMs, and something I have used constantly. So Stability AI is a really,
really good one to follow, because they have been on top of it. They're a wonderful resource for open
source development and progress in this space. And other than that, follow me on YouTube and
the podcasts AI Unchained and Bitcoin Audible,
because I'm trying to keep up with
and talk to a lot of people in the space.
And so hopefully, hopefully I have a lot of really fun updates
in the not too distant future.
Dude, I thoroughly enjoyed it.
I learned a ton.
This is getting fascinating.
And boy, in the coming five years,
I just can't even imagine where this is going to be.
And just really appreciate your brilliant insights,
Guy. This was really a lot of fun. Oh yeah, man. I appreciate it. Always love hanging out, man. It's been too long.
It's been too long. If you guys enjoyed this conversation, be sure to follow the show on whatever
podcast application you use. Just search for We Study Billionaires. The Bitcoin specific shows come out
every Wednesday, and I'd love to have you as a regular listener. If you enjoyed the show or you
learned something new or you found it valuable, if you can leave a review, we would really appreciate that,
and it's something that helps others find the interview in the search algorithm. So anything you can do
to help out with a review, we would just greatly appreciate. And with that, thanks for listening,
and I'll catch you again next week. Thank you for listening to TIP. To access our show notes,
courses or forums, go to theinvestorspodcast.com. This show is for entertainment purposes only.
Before making any decisions, consult a professional. This show is copyrighted by the Investors Podcast Network.
Written permissions must be granted before syndication or rebroadcasting.
