Unchained - The Chopping Block: Why AI Will Change the Course of History in Crypto - Ep. 471
Episode Date: March 22, 2023

Welcome to "The Chopping Block" – where crypto insiders Haseeb Qureshi, Tom Schmidt, and Tarun Chitra chop it up about the latest news. This week, NEAR co-founder and former Google AI researcher Illia Polosukhin joins the show to discuss the intersection of crypto and machine learning.

Show highlights:
- whether Signature was shut down because of its crypto arm
- whether banking crypto is riskier than banking other industries
- how Balaji's $1 million BTC bet is just highly effective marketing
- whether the Fed is doing quantitative easing again
- how the artificial intelligence industry has changed over the years
- why Illia says that "open source always wins"
- the intersection of blockchain technology and AI
- whether a broken Tesla could become the world's greatest investor

Hosts:
- Haseeb Qureshi, managing partner at Dragonfly
- Tarun Chitra, managing partner at Robot Ventures
- Tom Schmidt, general partner at Dragonfly

Guest: Illia Polosukhin, co-founder of NEAR

Disclosures

Links:

Signature and Banking:
- Unchained: $4 Billion in Crypto Deposits Not Included in Flagstar Signature Deal
- Was Signature Bank Actually Insolvent?
- Regulators Close Signature Bank Following SVB Collapse
- Jim Bianco on Why the Banking System Has Always Been Broken
- Nic Carter: Operation Choke Point 2.0 Is Underway, And Crypto Is In Its Crosshairs
- Arthur Hayes: Kaiseki

Balaji's bet:
- Coin Edition: Balaji Bets $1M on Bitcoin Price, Says US Hyperinflation Is Underway

Artificial Intelligence:
- Richard Sutton: The Bitter Lesson
- Facebook: Introducing LLaMA: A foundational, 65-billion-parameter large language model
- Stanford: Alpaca: A Strong, Replicable Instruction-Following Model

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Not a dividend.
It's a tale of two Kwan.
Now, your losses are on someone else's balance.
Generally speaking, airdrops are kind of pointless anyway.
Unnamed trading firms who are very involved.
DeFi is the ultimate policy.
DeFi protocols are the antidote to this problem.
Hello, everybody.
Welcome to the chopping block.
Every couple weeks, the four of us get together and give the industry insider's
perspective on the crypto topics of the day.
So quick intros.
First, you've got Tom, the DeFi Maven and Master of Memes.
Next, we've got Tarun, the Giga Brain and Grand Poobah of Gauntlet.
Today we've got a special guest with us, Ilya, who is the founder of NEAR.
So I actually, for today, because we're going to be talking about AI, I got GPT-4 to write some
intros for Ilya, and you tell me which one you like.
First one is: Ilya, the NEAR network's noble navigator.
Ilya, the brilliant blockchain builder behind NEAR. Introducing Ilya, NEAR protocol's pioneering pilot. Ilya, the founder and fearless frontiersman of NEAR.
That one doesn't rhyme with NEAR, but that's fine.
And then: Ilya, the NEAR protocol pioneer with a visionary vibe.
Welcome to the show, Ilya. ChatGPT, welcome to you as well. And then you've got myself. I'm the head hype man of Dragonfly.
So we are early-stage investors in crypto, but I want to caveat that nothing we say here is investment advice, legal advice, or even life advice. Please see choppingblock.xyz for more disclosures. I should also mention, before we kick off the show: Dragonfly, we are investors in NEAR protocol. Tarun, are you also an investor in NEAR?
We are. Okay, cool. So just so that we're all fully disclosed: Ilya and us go way back, and he's a good friend.
And we're excited to have him on the show because, in addition to all the crazy news that's been going on, there's been a lot of hype around AI and all the advances in machine learning and large language models.
And we wanted to finally get a show to cut through some of the noise with somebody who knows about this more than almost anybody, especially in the crypto industry.
But we are going to get to that.
First, I want to get through some news, because the last couple weeks have been all about this banking crisis that's been going on in the U.S. Things are now winding down.
And so we're going to take the first half of the show, roughly, just talk about the news,
and then we'll jump into a deep dive on AI.
So let's recap what's happened over the last week that's significant on the crypto side.
So the first thing was that Signature Bank was one of the banks that was wound down. And there was this broad discussion within crypto about: is Signature being headshot because of its proximity to crypto and its crypto banking activities?
So there was a report that the FDIC, which took over the auction process for Signature, was requiring bidders to wind down the crypto business. This was reported widely, and crypto was like, oh my God, it's happening, Operation Choke Point 2.0, it's really true. And then an FDIC official denied in a public report that this was the case. They said, look, that's not true. This is a bank; we want to get the best value for the bank. So, you know, any activities that are revenue-producing, feel free to take them over. Then yesterday, which was Sunday, it was announced that Signature was in a purchase agreement with New York Community Bank. So New York Community Bank was announced as the buyer for Signature, although the deal is not fully closed yet. And New York Community Bank is winding down the crypto business. So now, was this off instruction from the FDIC? They're not only winding down the crypto business (meaning probably Signet, which is the 24/7 real-time settlement system for Signature), but they are also going to be debanking the crypto clients. So they're going to be asking the crypto clients: take your money, buzz off, go somewhere else, we don't want you here. So one, this again raises the question: was this the FDIC kind of, you know, nudge-nudge saying, hey, if you want this bank, get rid of this crypto stuff? Or was this just New York Community Bank not liking this thing and saying, look, after Silvergate, I don't want crypto deposit money; it's not worth the headache, and it seems to invite more trouble than it's worth. One way or another, this seems to be an indication that, whether it was the FDIC directly or whether it was indirect (through the treatment of all the crypto banks, the fact that all this stuff is in the news now, and the fact that you can't help but notice that banks that bank crypto clients get a lot more attention from the regulators than they otherwise would), this is causing a stigma around banking crypto clients that's likely going to continue in the U.S.
What was your guys' perspective on this story about Signature and the debanking of Signature's crypto clients?
Yeah, I think part of it is definitely a narrative that is being propagated in a lot of
press outlets with respect to, oh, crypto caused these bank runs and crypto caused this banking
crisis.
And, you know, I think there's not a lot of truth to that, but obviously it's a great
narrative to tell.
I think we'll see what happens with SVB. Obviously, SVB is still looking for a buyer. They weren't a big crypto bank; like, you know, crypto is a very small percentage of their business. But if whoever buys SVB also winds down their crypto business, there might be more to the story. For now this feels maybe a bit isolated in some ways. I know there are actually some other big banks that started debanking crypto companies even before SVB's case started. So that message was already out there, and banking regulators, I think, were already propagating it. So I don't know if it's the FDIC itself, but I think it's already word on the street that crypto banking is risky and people should stay away from it on the banking side.
Yeah, so Nic Carter made this point when he wrote this blog post that, you know, at the time got some attention; now the attention on it has exploded. He describes this Operation Choke Point 2.0, which is basically a kind of full-court press from the executive branch to try to put pressure on anybody who touches crypto from the banking side. And, you know, it's now really manifesting in that we're seeing bank failures. It's a perfect time to paint crypto as a scapegoat, as you said, Tom.
What I'm curious about, so one, obviously there are banks that are starting to position themselves as
being crypto-friendly.
Obviously, these are mostly smaller banks because, you know, they can afford to take more risk.
Obviously, as a small bank, you're more like a startup and you're more willing to, like,
try to do something risky in order to win a big business.
So Cross River is, you know, sort of effectively a startup bank that is doing this.
and there are a few others that have positioned themselves as crypto-friendly.
A lot of the banks in Europe are taking the opposite tack.
Europe doesn't seem to be quite as aggressive.
I know you're based in Europe at the moment.
What is the vibe you're seeing from the European side with respect to crypto banking?
Is this purely a U.S. thing?
Or is this spreading to Europe as well?
Because obviously with Credit Suisse and all this stuff; obviously Credit Suisse had nothing to do with crypto.
But there is a broader banking crisis going on in Europe now as well.
Yeah, I mean, I haven't seen anything.
on the European side per se.
I mean, Switzerland has been always very welcoming.
I think they're even more welcoming right now,
as this is kind of unfolding.
The UK is also trying to set up kind of better rules around crypto. They're actually, as far as I saw, adding crypto to tax returns explicitly. So that, you know...
I don't know if that's being friendly to crypto, or just wanting your pound of flesh from crypto.
Well, as soon as you say, hey, you should pay taxes on this, that means it's everywhere else: it's in accounting software, it's in all of the systems. So then the question is what rates they're going to tax it at. But I actually think it's a way to...
You're saying that, you know, weed in many U.S. states is the same as crypto in the UK, basically. Are there tax returns on weed?
Now they ask you directly in a lot of state filings where it's legal. Yeah, I think the UK is finally catching up to, like, 2019 IRS rules, which is when it changed.
Do you think Switzerland's going to continue to actually be positive after this current stuff
they have?
Yeah, why not?
I mean, like for them, what's the difference?
Well, I just feel like their banking sector is probably consolidating, which probably means that the smaller banks don't survive. And the smaller banks were the ones, just like here, that gave a lot of the crypto companies stuff. Like, what was that bank, SEBA or whatever, that did a lot of the 2017 Layer 1 ICO stuff? I'm just curious if you think they'll survive.
So Switzerland has a weird system where there are canton banks, which can actually just hold all their deposits and not lend them. And the cantons really want business; they really want people to come. Like, when we were looking for a foundation, canton governments were cold emailing me, and I'm like, what government does that?
Fascinating. Well, okay, so amidst this broader banking crisis, there's been one story that's been dominating Twitter; basically, Twitter just can't stop talking about it. It's a good friend of the show, Balaji, who was on the show, I think, a little while back. So he has made a bet. And his bet is basically a million-dollar bet, a million dollars in USD terms. He's basically betting a million dollars to one Bitcoin:
essentially he believes that one Bitcoin is going to go to a million dollars within 90 days,
essentially because he believes that the U.S. dollar is going to hyperinflate. So he's claiming
very loudly and very aggressively on Twitter that the banks are insolvent, that the Bank Term Funding Program, which is the sort of liquidity injection that the Fed is providing to banks that need liquidity against some of their held-to-maturity Treasuries and mortgage-backed securities,
that this program is pure quantitative easing and that in this banking crisis,
which is going to extend everywhere in the world,
all the banks or the majority of the banks are insolvent already,
the Fed knew it all along,
they're going to hyperinflate the currency in order to, you know,
kind of prop up the dollar system,
and that Bitcoin is going to go to basically infinity or, you know, a million dollars.
And so, curiously, he's making two of these bets. So he's made one already; I think he's going to make another one, or has he already made it? I don't know. So he's betting, I guess, two million USD that he's putting up against people's, you know, sort of Bitcoin in return. And this seems a little bit crazy. I've gotten a lot of people hitting me up and being like, hey, what do you think about this Balaji thing? Do you think he's right? Should I be worried? Should I take my money out of USD? What's your guys' take on this Balaji end-of-the-world bet?
Many observers have made the same point, but it's a great marketing spend at the very least; you know, Bitcoin's already moved up like 10-plus percent.
Is it a great marketing spend?
If he already owns more than, you know, 10 million worth of BTC, which is very likely, he's already sort of EV-positive.
I think the other thing
is that this is the type of thing
where it's directionally correct.
Like, yes, there is going to be,
like, I mean, just look at the overnight banking operations
changes over the weekend for all the central banks
coordinating to provide liquidity.
That's a little bit, that's very 2008.
And I think directionally, this is correct.
There are a ton of operations that are basically quantitative easing 2.0, 3.0, whatever.
I don't know.
It depends on how you want to index it.
There's a sense in which such a bet will cause, you know,
if enough people glom onto it, will cause things to move in that direction,
even if it's negative EV.
And right now all he's doing is basically recouping costs.
But as long as Bitcoin still goes up, you know, 10 to 20 percent, he's fine.
Okay, so you take a cynical view that you think that he doesn't actually think Bitcoin's going to a million, but he owns enough Bitcoin that if he can meme a price rally in Bitcoin, which, you know, obviously Bitcoin is rallying, outperforming everything else right now, that it pays for itself.
It's not just that it pays for itself. It's also that it's directionally correct. So, like, even if it doesn't hit a million, he's always going to be able to be like, look, banks did hyperinflate; even if I made the wrong bet, I got the right direction, and it just didn't hyperinflate enough. It has a little bit of, like, I can fall back on that. It's not just, oh, I got it wrong. Yeah, I got the magnitude wrong, I got the direction right.
All right. Yeah, I agree with that. I think it's one of those
things where, you know, even if you're off by like a fewfold, you can still sort of claim victory.
It's kind of like people who thought, you know, there'd be a million people dead in the US from COVID in, like, early 2020. It's like, okay, well, you got, you know, the timeframe wrong and the numbers wrong, but you were sort of right in ringing the alarm bell. I think that's kind of what he's going for.
But yeah, I think I kind of disagree with his interpretation of what's happening with, like, the BTFP. And yeah, it seems more like... I mean, we're talking about it on the show.
Everybody's talking about it.
It's actually very difficult to get that kind of earned media for like a million dollars.
That is very true.
Super Bowl ads are a lot more expensive than that.
And I don't think they have the reach of this.
As much as I think Gabriel Leydon, who founded Limit Break, is a marketing genius on the internet, I'm not sure I would call the $5 to $7 million, whatever, spent on the DigiDaigaku ad at the Super Bowl very good. Whereas this is, like, the most persistent marketing for a million bucks. If you compare it, it's unreal how good this marketing has been.
Yeah, very, very fair point. I guess, yeah, the issue that I take with it is that a million dollars is a 60x increase in the Bitcoin price, right? This is not a directional call where Bitcoin went up and now it's worth, you know, 35 instead of 20. It's like, oh, you're still off.
How many, how many claims in marketing are, like, we're going to improve your life by
1.5% if you buy this product.
No, they're all like, we're going to improve your life by 5x if you buy this product.
And this is just the same type of.
60x.
I hear you, but 60x is, like, just a big hurdle, you know.
But anything else would not make people pay attention, right?
Even if it was actually 100K.
If it was 100K, people would be like, yeah, it's kind of plausible. You know, Bitcoin Twitter would be, like, super excited. You know, crypto Twitter would be like that.
It's interesting.
And it would kind of die away.
$999,999 would not have memed, right?
Like you would not have got the attention.
You would not have got the articles.
Like there's also the fact about choosing the number.
It's like astrology.
Like choosing the right number somehow like.
I guess I resist the, like, Justin Sun kind of energy that you guys are describing in Balaji here.
But again, it's a directional bet, right?
Like, it's not 90 days.
It's, you know, two, five years.
But, like, you know, if the system doesn't fix itself, right, there's something
going to break.
It's a little more like MicroStrategy meets Justin Sun.
Yeah.
I think it was Matt Levine who was saying, you know, if you really think Bitcoin is going to $2 million, you should probably just spend that million dollars and buy Bitcoin, instead of buying one Bitcoin for a million dollars today.
But it sort of misses like the reflexivity of Bitcoin as a market and sort of almost manifesting
the price increase by sort of putting this idea into people's heads.
Like there's sort of a core reason why you would see that kind of inflows, but you sort of
create this meme around it and that can sort of self manifest.
Yeah.
I mean, look, it's hard to know; the counterfactual is hard to tell. Obviously, at a time when expectations of interest rates are cratering and banks are failing, like, okay, yeah, that's good for Bitcoin. So that meme was already happening before Balaji made this bet. So it's hard to tell, like, how much of a lift was Balaji's bet and the earned press, as you guys put it.
I mean, it does seem like it's having some impact
because so many people are messaging me about it
that I have to assume this is kind of, you know,
it's drilled itself into people's brains
that like, hey, maybe you should be worried about this.
The interesting thing is that, you know, unlike most of the rallies, altcoins are not following. So it's really just Bitcoin breaking way ahead of everything else, which maybe lends some credence, because most of the rallies you've seen in crypto, especially around interest rates, have been pretty broad. Like, everything kind of followed together. Now Bitcoin is really breaking away from the market.
And that may be, again, it's hard to ascribe something like that.
It's like, oh, well, this is because of this thing.
And, you know, like, who knows, markets are kind of crazy.
So, okay, let's set the showmanship of the bet aside. What about the claims in the bet?
So, you know, one of the things that Balaji
and a lot of people on Twitter
are arguing about right now
is whether or not the Bank Term Funding Program,
which is the sort of, you know,
the credit line that the Fed is offering to banks
on their treasuries, is it QE?
And if it is QE, how stimulative is it?
And should we be thinking about this as,
hey, the Fed has done a total about-face, now they're doing QE again.
And, you know, like we're basically going back
to, you know, the kind of profligate,
stimulative monetary policy that we had for the last, you know, 10-ish years.
If you asked someone in 2008, every time there was a failure, whether that was the last failure, they would have said yes, right?
There's sort of this like, you're living in the fog of war.
You don't really know what's going to happen.
Central banks seem to be incompetent at communication right now.
I only say that because of like this weekend stuff.
I mean, did anyone watch the Credit Suisse press conference?
That was a little embarrassing for the Swiss government.
I'm not going to lie.
That was like that did not sound good.
Like the media had some pretty hardball questions.
And like the banking officials, whether they were from UBS, Credit Suisse, or the Swiss government, were just like, yeah, we don't know.
It's fine.
We'll figure it out.
Like in a very like non-plussed way that did not give anyone any confidence.
For instance, you know, one of the biggest things people thought about in the U.S.
bailout last time was like, oh, do people get paid bonuses?
or do they get clawed back?
And there's a reporter who asked,
hey, it seems like you're not, like, cutting pay of any executives.
You know, is that planning on changing?
And they're like, no, no, no, no.
We just agreed on it.
We're like never going to change this.
And like then there was this huge uproar.
And then they were just like, oh, well, maybe we'll change it.
And it's like, well, I guess they're making their decisions live at the press conference.
It just like it didn't really inspire confidence.
And so I think this fog of war thing is also really true. So it's kind of hard to make such strong claims. That would be what I'd say.
I mean, so with this Bank Term Funding Program, I think a lot of the assumptions people are making around it claim that it's QE and it's kind of, you know, this broad paradigm shift in how central banks are approaching the situation. Like, almost all the takes I'm seeing about it trace back to a piece Arthur Hayes had; you know, we just had him on recently. I think it's called Kaiseki, where he talked about, like, hey, this is actually kind of disguised QE, and, you know, QE's back on, and central banks start printing again.
And I think almost every crypto take
that I've seen about this
that basically calls it QE
is assuming that this program
is going to keep rolling over.
Like the reality is that
the difference generally between QE,
quantitative easing,
and this bank term funding program,
is that in normal QE,
you buy assets from the market,
you generally buy toxic assets
and you often buy them at a premium,
and you just hold them on your balance sheet
for some uncertain period of time.
Like you might hold them
maturity or you might, you know, if they're like other assets, you know, like mortgages or
whatever, you might just hold them for a while and then eventually sell them back when the
market stabilizes, right? But you're basically, you're providing a lot of liquidity to the market
and absorbing toxic assets for an uncertain period of time. That is not what the bank term funding
program is doing ostensibly, right? In this case, the program is explicitly for one year.
It may be rolls over for two years, but, you know, it's in this program, instead of buying
the assets outright, what they're doing is they're allowing you to borrow against it. So they're
basically saying like, look, I'm going to be a pawn shop for you. You've got these toxic assets.
I'll let you borrow a lot against it, right? I'll let you borrow at par, even though the
market value of these mortgage-backed securities or these treasuries is down. And I'm going to charge
you 5%. I mean, basically I'm going to charge you the prevailing interest rate, right? So this is not cheap.
It's not cheap borrowing. But, you know, you can borrow from the Fed. And, you know,
basically you have a year to, like, kind of tie it over your liquidity needs. And if at the end
of a year, like, you can't figure out how to make money or satisfy your depository base,
you're done.
You're not going to make it, right?
It's okay.
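As an editorial aside, the pawn-shop mechanics described above can be sketched with a few lines of arithmetic. All numbers here are hypothetical, chosen only to illustrate the lend-at-par design; the actual BTFP rate was set off a market rate rather than a flat 5%:

```python
# Illustrative sketch of the BTFP's lend-at-par design (all numbers hypothetical).
face_value = 100_000_000   # par value of held-to-maturity Treasuries
market_value = 85_000_000  # what they'd fetch if sold after rate hikes
btfp_rate = 0.05           # stand-in for the prevailing rate; not cheap borrowing

# Selling realizes the mark-to-market loss; the BTFP instead lends against par.
liquidity_from_sale = market_value
liquidity_from_btfp = face_value
extra_liquidity = liquidity_from_btfp - liquidity_from_sale

# Cost of carrying the advance for the program's one-year term.
interest_cost = liquidity_from_btfp * btfp_rate

print(f"extra liquidity vs. fire sale: {extra_liquidity:,}")
print(f"interest owed after one year:  {interest_cost:,.0f}")
```

The point of the sketch is the contrast with QE: the assets never leave the bank's balance sheet, the liquidity is a loan at a market-ish rate rather than a purchase, and the clock runs out after a year.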
You can argue, like, well, are they really going to let the banks fail a year from now?
If they didn't let them fail today, like, what's really the difference?
I mean, I would argue a lot of what the Fed wants is no shocks.
They don't want shocks.
They don't want sudden things to happen.
But if these regional banks are just fucked, if they're just horribly mismanaged,
they took a lot of risk and, you know, they can't get recapitalized and they need to get bought,
then, okay, you've got a year to figure that shit out, right?
The thing that markets hate is uncertainty and volatility.
If you can smooth that volatility, that's kind of the purpose.
of a central bank. It's just smooth that volatility and basically say, like, hey, let's give
people a year to, like, buy each other, acquire each other, reorganize their assets, make sure that,
you know, we don't have, like, sudden collapses. But if these banks are not viable, they're not
viable. And, like, I don't think the U.S. is fundamentally committed to never letting another
bank fail. Like, banks do fail. Like, it's not that uncommon for banks to fail. The problem is
banks failing at the same time. That's the thing that the Fed doesn't want. Yeah, I hear your points with respect to the difference between QE and the BTFP, in terms of, like, you know, banks don't actually want to use the BTFP unless they have, like, an immediate, you know, liquidity constraint. And the nature and the scope of it is pretty limited. I think something that is somewhat different about Balaji's argument, too, is that, unlike '08 or maybe even, you know, 100 years ago, there's the proliferation of social media.
And basically the ability of that to instigate bank runs in a way that is an order of magnitude or two faster and greater than it was back then. Which you kind of saw with SVB, with all these group chats and all these people DMing each other, and basically within 48 hours everyone trying to withdraw their cash from the bank. And that basically creates a real strain and a need for liquidity, and those banks tapping into the BTFP pushes more dollars into the system.
And I think that's kind of where he imagines the hyperinflation coming from. But I also think
the hyperinflation thing is maybe a little melodramatic. Like, I see a lot of crypto people getting excited about the million-dollar bet and, oh yeah, you know, fuck the banks, fuck USD. I don't think anyone actually wants to live in any world where the U.S. is undergoing hyperinflation. It would be extremely bad; everything would go to shit, even if Bitcoin did well. Someone needs to buy enough Bitcoin to make Balaji's EV greater than zero on this bet, right? And those are the people who are buying it.
I mean, look, I do think that Bitcoin is going to do well, and I think it's plausible that this helps. This program is definitely stimulative to some degree, right? Obviously, it's providing more liquidity. But it's not pure stimulus in the way that QE is. It's more subtle than that. But I think it is going to be good for Bitcoin. And I do take Balaji's point that probably there's going to be
more inflation. Obviously the Fed has to back off now and interest rates are pricing in that, you know,
the Fed is not going to raise rates all the way to like five and a half or whatever it was originally.
They're probably going to back off before the end of the year, and that's going to make it harder to tamp down on inflation. So probably, yeah, inflation is going to be elevated in the U.S. for some time.
Now, for those of you who don't know,
hyperinflation is when you basically get into
like a reflexive inflationary loop
where inflation just increases
and increases and increases
and there's no psychological way
to kind of stop the expectation
that the money is just going to debase further and further.
So tomorrow it's going to inflate more
and the day after that it's going to inflate more.
And so you just want to get your money out
as fast as you can
and nobody wants to hold dollars anymore.
That kind of event has happened before.
You know, it happened in Zimbabwe. It happened in, you know, Weimar Germany. It happened in Ukraine in the beginning of the nineties.
Happened to you.
That's right. I remember buying bread for 10,000, then 100,000, then a million, within, like, the span of a year.
Oh, wow. Nothing like logarithmic scales.
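For a sense of scale, here is a quick back-of-the-envelope sketch of the compounding rate that kind of price path implies, assuming (as a rough illustration only) that the 10,000-to-a-million move in the anecdote took about twelve months:

```python
# Back-of-the-envelope: the monthly inflation rate implied by bread going from
# 10,000 to 1,000,000 over roughly a year (the twelve-month timeline is assumed).
start_price = 10_000
end_price = 1_000_000
months = 12

total_multiple = end_price / start_price             # 100x over the year
monthly_rate = total_multiple ** (1 / months) - 1    # compounded monthly

print(f"{total_multiple:.0f}x total, roughly {monthly_rate:.0%} per month")
```

Something like 47% per month, compounding, is what "logarithmic scales" means in practice, and it is why expectations, not any single rate decision, are what drive hyperinflations.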
It does happen, right?
But it happens mostly in failed states.
And it happens, you know,
maybe once a decade or so
there's like a hyperinflationary event.
The US dollar hyperinflating would be absolutely just batshit crazy. This is not a prediction you should take lightly, as in, oh, you know, the central bank is going to lower interest rates too fast, so it's going to hyperinflate.
This is also why it's very clearly a trade, not a real philosophical bet.
But I think there's an interesting question of, like, confidence in the dollar, right? Like, nobody's counting this, but there's, like, a global "confidence in the dollar," quote unquote, latent score: how much confidence all the countries outside the US are putting in it, how much they're indexing their reserves to it, how much of the economy is transacting in it. And are the events that have been happening going to help this confidence? No, right? So it's definitely going to reduce that index, which means these assets, for the central banks, the economies, the companies, need to go somewhere. And given what we see with other countries, a lot of them have similar problems, right? And so there's an interesting question of, yeah, let's just say it's not zero-sum, but, you know, that confidence needs to go somewhere. And so that's, I think, what, if I were Balaji, I would be banking on. It's like, hey, here's an asset where you can put confidence. The rules are known. The supply is known. A bunch of people already have it. So far it's still easy to come and go in it. And, you know, if a lot of people even put a fraction of their value into it, it's going to hit its reflexive state, where people see it going up and they're like, oh, actually, we need to get in because it's going up, right? And so, I mean, there's, like, a huge reflexivity in crypto in general, right? And obviously, Bitcoin has less reflexivity just being bigger, but, you know, it gets kick-started by world events. And the amount of attention right now on this is huge.
You know, actually, the reason I was laughing about this, and maybe it's a good segue for our AI segment: my friend who's at OpenAI, he's been in AI stuff for, like, maybe a decade. Okay, so my friends in the SF scene, especially EA people, are actually getting freaked out by the Balaji tweets. I literally got that text while Ilya was talking. So all I have to say is: amazing marketing, right? There's no way, for a million dollars, you get anywhere near this level of dispersion.
That is very true.
Yeah, somebody on Twitter said it's like Balaji paid $2 million to be the main character for the next 90 days on the Bluebird app. And it seems to be working. I mean, like, I just think people who aren't even paying attention to crypto are suddenly like, oh my God, what is this guy saying?
Okay, well, as long as we are kind of taking sides,
let me just register that I don't think the U.S. dollar is going to hyperinflate,
but I do think Bitcoin is going to do well over the next 90 days.
But let's take that segue and kind of transition over to the second part of this conversation, which is about crypto and AI.
So obviously, AI has been on the rise, you know: ChatGPT, large language models, diffusion models.
Everybody is going nuts now over this idea that AI is just, it's the next massive technology wave.
It's a platform.
It's deflationary.
It's going to change everything.
So, Illia, the reason why we brought you on the show is that you are kind of the natural person to talk about the intersection of crypto and AI.
So just a quick bit of background. So, Illia, you were originally at Google.
You were actually working on TensorFlow.
And you also were, right before you left Google, you were one of the co-authors on a very
famous paper in AI called Attention is All You Need, which is the paper that introduced
the transformer model, which is the model that is now used to train almost every large
model in AI.
It's basically kind of revolutionized, you know, basically machine learning at a scale that we've never seen before.
And so originally, before starting NEAR Protocol, you were starting a company called NEAR.AI, which was an AI company where you were trying to build a sort of GitHub Copilot type thing back in the day.
Obviously, you didn't have the resources that GitHub has.
They still own the domain.
And the pun was very much intended.
I'm sure it's going to go for a lot more today than it did back in '18.
And then in '18, you pivoted into blockchain.
And I remember the coffee shop where we initially met, and you pitched us on the very, very V0 version of NEAR Protocol,
which was totally broken and made zero sense.
But you guys were, you and your co-founder, Alex,
were some of the most brilliant guys that we'd met
working in the blockchain space.
So I'd love it if you could kind of talk us through what you have seen. So you were at Google, basically at the seat of this, when we were just starting to turn the corner on a lot of the problems that seemed to be insuperable back in 2017, 2018, right?
People were excited about, oh, hey, we've got things that kind of sound vaguely like humans. And back in 2018, for every single problem in AI, whether it's, like, you know, syntax parsing, or understanding natural language, or machine translation, there were all these individual models that people were training and fine-tuning on individual problems.
And basically, that is gone. Or, not completely gone, but a lot of it has just been kind of blown away by the generality of these large language models. Talk us through: what was it like
for you seeing that evolution in your time at Google and beyond since you came into the blockchain
space? Sure, yeah. So my journey in kind of neural networks and deep learning started early, actually. I was playing with this stuff when I was in high school. And I always thought it was interesting, but it didn't work, right? Like, the neural networks back then were basically, you know, just a basic classifier that you could not use for much.
And then I remember reading the paper, which was Andrew Ng and Jeff Dean, training using a large (at the time) GPU cluster inside Google. And they were pre-training a model to look at an image and output the image back, and kind of create a representation inside. And they found later, inspecting it, that it had a neuron that would recognize cats. And so it was kind of called the "cat neuron" back then.
And that, for me, was the signal that I needed to go into this. So that's when I applied to Google Research and kind of joined the team, because this was kind of the first time we did not teach a machine to recognize anything specific. It wasn't a classifier. It was just pre-training. And it found something that relates to human concepts.
At the same time, I always believed in language. Like, with images, there are, you know, thousands of species in the world that can actually navigate the world, look at things, they have eyes, et cetera. There's only one species in the world that speaks language. And so I always thought that language is the way to kind of actually train these models and get them to actually understand and reason and make logical conclusions and answer questions, with all this.
And so that's what I worked on.
Now, back then, the state of the art was recurrent models. And recurrent models means you kind of, one word at a time, pretty much read the sentence, and you process it. And so, first of all, it's highly inefficient, right? Second, because of the way these models are trained, it's actually a very unstable model: the so-called gradients explode when you try to train it. And so it was really hard to train them, and even harder to put them in production. There were no recurrent models ever put in production at Google scale, because it would take, you know, seconds to actually get a query out of it, and Google, you know, optimizes for millisecond response times. And so what we would do is we would take and dumb down these models to just words, independent words without order, and kind of train the dumbed-down model, which does not know anything about order in a sentence, to output the same prediction, and then try to launch that. And we launched a bunch of stuff like that.
And then the concept of attention came in. And that actually helped a lot with training these models, because it allowed you to bypass this recurrence, right? This kind of step-by-step evaluation, and look back into the sentence. Like, let's say if you're doing translation: look back at the words in the sentence you were translating when you're outputting the answer. And so that was the concept of attention. And then there were a few papers that tried self-attention as well. And so, combining all this, right, we knew that bag-of-words models were actually kind of okay without order. Self-attention was an interesting concept. And we needed something way more efficient to train. This all came together, and Jakob, kind of our manager and leader at the time, came up with this idea: why don't we try just not putting things in sequence, but processing all of it together, and then attending back into the whole sequence and using that to output. For a machine translation task, for example, outputting the translation. And so that's kind of how Transformers started.
And the idea was, I like to describe it for people who watched the movie Arrival: instead of saying one word at a time, like we usually do, the aliens were outputting the whole sentence at the same time. That's what that model learns to do. It's way more efficient and effective, which means you can train it longer, and at the same time the gradients, like the actual training methodology, are more robust. And so we had a first prototype, and that's when I left, and the team continued and got amazing results and published that. But the interesting thing is that that model architecture started to work for everything else, right? They started applying it to images, they started applying it to other tasks, and it just worked without changes. And that's where I think people started experimenting more and more, and why it now has, I think, over 60,000 citations, because now everybody's just leveraging it as a basic building block.
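Illia's description of attention (every position looking back over the whole sequence at once, instead of step-by-step recurrence) can be sketched in a few lines. This is a simplified, single-head illustration with the learned query/key/value projections omitted, not the full multi-head Transformer from the paper:

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a whole sequence at once.

    X: (seq_len, d) array of token vectors. For simplicity, the learned
    projections and multiple heads from the real Transformer are omitted.
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                  # every token scores every token
    scores -= scores.max(axis=1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over the sequence
    return weights @ X                             # each output mixes the whole sequence

# Nothing here is sequential: all positions are processed in parallel,
# which is what made training so much more efficient than recurrence.
out = self_attention(np.random.randn(5, 8))
print(out.shape)  # (5, 8)
```

The key design point from the conversation is visible in the code: there is no loop over positions, so the whole sentence is handled "at the same time," like the aliens in Arrival.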
I was only going to make one
comment, which is, you said there's only
one species that has language, but
there are dolphins.
Thank you, Tarun.
That's a very useful interjection.
Sorry, Illia, go on.
Yeah.
All right, next time we're
going to do a dolphin to human language
translation model.
Coming up next.
All right.
Powered by near.
But yeah, I think the big change that happened is this idea of pre-training, which, I mean, existed before, like we all pre-trained a lot of models, but it got applied at a huge scale: pretty much just feeding the whole internet to this model, saying, hey, just predict the next word. And then we can condition it and sample from it, to kind of try to output what it would produce if it had seen this prefix. That's what the GPT pre-training did, and, you know, it really started to explode, because at a reasonable scale, the model started to actually create representations, similar to that cat neuron example. It started to create representations of world knowledge, and to be able to do some reasoning, because it's seen so many times how people reason about things.
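The objective being described, "just predict the next word," can be illustrated with a toy counting model. This is a deliberately tiny stand-in for what GPT-style pre-training learns at internet scale, not how the real models work internally:

```python
from collections import Counter, defaultdict

# A miniature "internet" to learn from.
corpus = "the cat sat on the mat the cat ate".split()

# Count which word follows which: a toy stand-in for next-token prediction.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the most frequently observed next word given the last word seen.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" twice, vs "mat" once
```

Real models condition on the whole prefix with learned representations rather than raw counts, but the training signal is the same: given what came before, predict what comes next.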
So I remember, you know, I was following OpenAI's work from the very, very early days, when they first came out with GPT-2 and then eventually, of course, GPT-3, which is the one that took the world by storm.
And I remember being absolutely fucking amazed
that with unsupervised learning,
just basically just feeding lots and lots of text
into a model that it could figure out
such a wide variety of tasks
that seemed to be incredibly idiosyncratic.
And so I think a lot of people internalize
there's this great essay by Rich Sutton
called The Bitter Lesson,
where he describes basically the history of machine learning
was lots of people trying to solve these individual problems
and thinking that the way you solve these individual problems
is you, as a human being,
as sort of like the architect of the AI,
you have to encode into that particular AI
idiosyncratic features of this problem, right?
So like, oh, to understand a face,
well, there's like a nose and there's two eyes
and there's some symmetry
and there's all this other stuff that,
like, the model needs to know what we know about the world, and only if we encode that into the model is it going to get anywhere near the right answer. And that's what a lot of old-school machine learning approaches would do: embed kind of human-known features of the problem into the model.
And what we've learned over time,
especially within the last like three to four years,
is that that just doesn't scale.
The thing that scales and gets to like the real state of the art
for most problems that we have today,
obviously there's like some extra de rigueur fine-tuning that we do for most of these things. But most of the way that you get to these world-scale models today is just throw fuckloads of training at it.
Just like lots and lots of data,
lots and lots of training, lots and lots of money.
And if you just do that enough
for almost every category of problem
that we can think of, the machine figures it out way better than we can.
And in a way, like, the human kind of symbolic craftsmanship just ends up being actually worse than just raw data input, which seems to, you know, underlie an anxiety that a lot of people have now about AI, which is that, okay, now it seems more and more that the AI just kind of needs us to sort of shovel it oil, which is data, right? Yeah, there's some fine-tuning. There's some extra, okay, maybe we, you know, tweak the fucking training algorithm or whatever. But for the most part, we just need to generate lots and lots of training data. And the more training data we generate, especially, you know, with ChatGPT and reinforcement learning with human feedback, a lot of the way that these models are getting better and better is, like, they already have huge corpuses. They already have all the writing on the internet that we're feeding to them. The big thing is that we just need to train them to not lie to us, to be kind of friendly, to follow instructions.
we have these gigantic, you know, 11-dimensional monsters,
and we're trying to, like, use, you know,
just raw hours of human training to make it nice and be friendly.
And so more and more, I'm seeing this nervousness from people
about this new state because it was beautiful before,
this idea that, like, oh, we teach it about the nose
and we teach it the eyes and then the machine figures it out.
And it's like, no, no, no, don't tell me anything.
Just give me lots and lots of pictures and, like, I'll figure it out.
I don't really need you.
I don't know what your perspective is on that.
Yeah, I mean, I think that transition has been happening for the past 10 years. Like, again, that paper was in 2013, the idea that, hey, you don't need to handcraft things. Just throw a basic model at it, right? Back then it was just a convolutional network; now it's just a transformer. And it will build the representations that it needs to solve its task. And then those representations are actually extremely useful across many, many, many tasks.
And so there was this thing, embeddings, which still exist actually inside GPT models and everything else, and which represent pretty much the meaning of a word. It's like 100, 200 numbers, and these numbers represent the meaning of the word. And people were training the model to get these embeddings and then using the embeddings in a bunch of other tasks. We were doing that in, like, '15, '16. And it was extremely useful, because it would capture lots and lots of the dimensionality of our world without us, you know, even teaching it anything. And then we could use that dimensionality to decide, oh, it's a city, or a person, or, you know, are there some words that mean similar things, et cetera, et cetera. And I think this just kind of continues expanding. But we should remember, these are still tools. This is not a thing that has, like, you know, an "I want to do this." This is a tool we give instructions to, and it does things for us.
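The embeddings idea (a few hundred numbers per word that capture meaning and can be reused across tasks) can be sketched like this. The vectors below are hand-made for illustration; in real systems they are learned during training:

```python
import numpy as np

# Hand-made toy "embeddings": in real models, these hundred-plus numbers
# per word are learned from data, not written by hand.
emb = {
    "paris":  np.array([0.9, 0.1, 0.8]),
    "london": np.array([0.8, 0.2, 0.9]),
    "cat":    np.array([0.1, 0.9, 0.0]),
}

def cosine(a, b):
    # Cosine similarity: how closely two word vectors point the same way.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Similar concepts (two cities) end up near each other; unrelated ones don't.
print(cosine(emb["paris"], emb["london"]))  # high, close to 1
print(cosine(emb["paris"], emb["cat"]))     # much lower
```

This is exactly the "is it a city, is it a person, do these words mean similar things" reuse described above: once the vectors exist, many downstream tasks reduce to simple geometry on them.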
And so I think it's important to kind of understand that, at the base of it, it's a thing that ingested all the world's knowledge. It has common sense now. It has some, you know, resemblance of logical reasoning, although not always correct. But it's a tool that we feed input to, to produce output. Now, it's a really powerful tool, and the way people will start using this can be extremely dangerous, right? That's why, like, you know, teaching it not to do bad things is good, but people keep finding ways to kind of get around the teaching, right? And, like, you know, they close one level, and now people go, "pretend you're someone who is pretending that you're someone that's doing something," right? And, like, that now jailbreaks the system.
So, like, it's a tool that, you know, people will be using for things. And so we should look back, again, at people, and how this can be used, and what things people usually do with tools, you know, good or bad, and just magnify that by the abilities of the systems.
So actually, you brought up Sutton before. Sutton is sort of a famous author in that he, to some extent, coined the term reinforcement learning, you know, back in the '80s. But, you know, one of the reasons I think people missed the sort of "hey, we can throw more parameters at it and it'll eventually figure itself out" idea is that it's not just that we were encoding, like, hey, we need these features that are human-interpretable, like a nose. It's also that statistical theory still to this day doesn't justify overparameterized systems like this, where it's like, hey, we have way more parameters than data points, by, like, orders of magnitude. And there's really no way to know if you can ever have something that's stable, like, if I throw in one new data point, it doesn't completely destroy the model.
But the last sort of 10 years have been a resounding set of examples that,
hey, these models are sort of robust in a way that basically none of the existing,
you know, literature could ever describe correctly.
And I think, to some extent, maybe the limit of such overparameterized models is that you can't really stop them, in the sense that you can't really figure out what types of constraints to put in to avoid these types of jailbreak scenarios, precisely because you're like, okay, well, we're willing to just have way more directions to search in the model space than there are actual possible queries. And so there's always sort of some way of getting to whatever outcome state you want. It's sort of the opposite philosophy of crypto, which is, like, how do we restrict the number of output states quite dramatically?
I mean, what I've seen from OpenAI is that they seem confident that if you just do more and more reinforcement learning, eventually you will get it to, you know, sort of enclose that output space more and more, such that you close off these nooks and crannies that people are exploiting by trying to jailbreak the system, or get it to tell you how to hotwire a car, or how to hack a bank, or whatever, all these things that people have managed to get ChatGPT and Bing (really, Bing's Sydney) to tell you how to do. And the reality is that we're at the very infancy of this stuff, and it's only going to get better, right? It's already gotten better insanely fast. And no doubt that is going to accelerate as people realize the economic value that is, you know, going to be unleashed with all these large language models.
I think this is where it's interesting to think about. So OpenAI went from, you know, hey, we're going to build it and open-source it and everybody can use it, to, hey, we're going to control it because we're afraid of how people will use it, right? That's really quite a transition. And, you know, Ilya Sutskever actually mentioned that, like, hey, I was wrong. Like, if you have such a powerful tool, would you really give it out to everyone to leverage?
And this is where, I think, coming from a crypto, you know, blockchain, web3 perspective, and honestly from open source (like, I've always been doing open source in my life): open source always wins. Like, there is no product, so far, where over a long enough term open source did not take over. And I think the only one so far is search, and that may actually change because of these models. And so the reality here is, yes, open source will be lagging, maybe like one model, like one year of modeling, behind. But for anyone who is following in OpenAI's footsteps right now, it totally makes sense to open-source it, because they get so much street cred for doing that, and they don't lose anything, because, well, OpenAI kind of, there's no moat. And we already see a bunch of them are open source, like some of the models you can run on your laptop that are, you know, reasonably powerful. Obviously not near GPT-4, or even GPT-3.5, but, you know, they're starting to get there. And the reality is, it doesn't matter what OpenAI does to train it; there will be models that will be used in all kinds of ways.
Did Facebook get street cred?
That's my question for you.
Yeah, that was very, I was about to bring that up as well.
So Facebook, they released this model called LLaMA, which is much smaller, in terms of the number of weights, than GPT-3. GPT-4, we actually don't know; now OpenAI won't even tell us how big the model is. But GPT-3, we know, is, I think, 175 billion parameters. And with LLaMA, they released a range of model sizes, and the smallest one, I believe, is 7 billion,
which is, you know, enough that you can run
on your laptop. It can run on a mobile phone.
A mobile phone, even.
Oh, wow, I didn't realize. Yeah, someone has run it on a Pixel.
Oh, wow. Well, they demonstrated that, actually, instead of blowing up the size of the model, if you just train the model for longer, you can actually get a significant increase in performance that approximates a lot of what you get from, you know, something like a GPT-3. But not only that, there was a more recent paper that came out from Stanford, called Alpaca, that showed that if you basically use the outputs from a bigger, like, sort of more robust, better-trained language model, you can actually approximate that model really, really well, as long as the model you're training is big enough. And so you can sort of imagine the kind of gigantic blob, Big Brother GPT-4, training this little llama model running on your mobile phone, and actually getting your mobile phone to really closely approximate the big monster, which is surprisingly cheap.
I think they said like,
you know, roughly on the order
of like 100,000 input
examples, which is crazy,
which basically means that the edge of having a gigantic model that's, you know, hidden behind a wall, that nobody can access, where the weights are secret and the size is secret, that moat just might kind of melt away, if in fact this kind of, you know, sort of co-optive training can be done at scale, because you never know, when you're talking to somebody, whether this person is training another model, trying to steal my internal knowledge.
And that might really change the economics
for how these large language models
end up interacting with each other.
Yeah, so this is actually exactly what we were doing to launch stuff at Google. We would take an expensive model, and then we would train a way cheaper model, like bag-of-words or whatever, just by feeding the input to the bigger model and training the smaller model on the output. So this is, like, distilling, or, you know, there are a few different terms for it. And that's part of the reason. The other part, yeah, is that you can just query information out of these bigger models, even if they are closed source. So that's why I'm like, open source will win, right? It's like, we'll get this out. And smaller models can indeed approximate this very closely.
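The pattern Illia describes, feeding inputs to an expensive model and training a cheap one on its outputs, is usually called distillation. Here is a toy sketch of the idea, using a known function as the "big model" and a polynomial fit as the "small" one; real distillation (like Alpaca's) works on text, but the shape of the procedure is the same:

```python
import numpy as np

# Stand-in "big model": expensive but accurate (here, just a known function).
def big_model(x):
    return np.sin(x)

# Step 1: query the big model to build a training set for the small one.
xs = np.linspace(0, np.pi, 200)
ys = big_model(xs)

# Step 2: fit a cheap "small model" on the big model's input/output pairs.
coeffs = np.polyfit(xs, ys, deg=5)
small_model = np.poly1d(coeffs)

# The distilled model closely approximates its teacher on this domain,
# even though it never saw how the big model works internally.
err = float(np.max(np.abs(small_model(xs) - ys)))
print(f"max error on the queried range: {err:.5f}")
```

The point made in the conversation falls out directly: a closed model that answers queries is, in effect, a labeling service for training its own cheaper approximation.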
So I think the moat there is, again, it's just people, multiplied by compute, multiplied by data. I think the most interesting moat is actually product. It always was, right? Like, at the end, if everybody believes ChatGPT is the main place where you find the best state of the art, everybody goes there, everybody talks to it, everybody feeds data to it. That data then improves the models. It stays ahead of everything else, because the others just don't get this flow. And that's how Google won. Google won against Bing and other search engines because it became the state of the art: people kept feeding it, you know, queries and clicks, and those queries and clicks then fed back into improving the model. And so there was no way to turn that around, unless you completely changed how this thing interacts. And so I think the interesting moat OpenAI has is in product land, not in, like, model architecture or purely data. It's in this feedback loop that they've now built. Okay. So I think this is a good
place for us to bring it back to crypto. So obviously AI is huge. Everybody's thinking.
about it. And so naturally, given how hype-driven crypto is, there's a lot of people who are now
trying to take the two and mash them together and see what, you know, is there something,
is there something that we can do with crypto or blockchains to enable AI?
To be fair, to be fair, every cycle has had a lot of non-reputable, scammy versions of this. But I think our goal in this conversation is to focus on the actual real ones, not the marketing. Just as a disclaimer to add, for anyone who is listening who is like, oh, you missed AI Coin: well, AI Coin's Git repo is null.
Okay.
Okay.
Well, so there are a few threads that I think we keep coming back to.
And to be clear, this isn't just with the advent of, you know, ChatGPT and these large language models.
I remember when I first started getting into crypto investing in the 2017-2018 cycle,
there were also a bunch of AI hype-driven, blah, blah, blah,
you know, type blockchain projects.
But I think we keep coming back to a few core ideas, and I want to get your take on what you think about the intersection of crypto and machine learning. So three areas in particular, I think, are the ones that get the most attention. The first one is sort of private machine learning, whether it's, like, using zero-knowledge, or fully homomorphic encryption, or multi-party computation: one way or another, finding a way to make machine learning training happen in a way that is privacy-preserving.
The second is decentralized training. So obviously, you know, you've got all these companies
spending huge amounts of money on training these models.
Is there a way to decentralize that and do a kind of peer-to-peer, you know, Folding@home kind of thing?
And then the third is: what if you just put the fucking model on chain and do inference on chain? Obviously, you know, you wouldn't want to do training on chain, because training is super expensive, but inference is somewhat cheaper.
So does it make sense to just put models on chain and, you know, query them that way?
Which of these approaches do you think are the most interesting and why?
Walk us through them if you can.
So let's start maybe with the current state, right? So the current state is, these models are trained on supercomputers, which are built out of purpose-built hardware, which is called the Nvidia A100 (and there's a new version, which is the H100), or TPUs at Google, or there are a few others, like Trainium, in other organizations. But generally speaking, when it says Nvidia GPU, it's not the GPU you have in your, you know, gaming box to render graphics. It's specially designed for AI training. These GPUs cost $30,000 each. And for, like, a GPT-3-ish model, you would use about a thousand of them for three months. So this is about a million dollars' worth of cost to train a model like that. So there are not that many companies right now that can afford this.
And these GPUs, thousands of them, are interlinked with very high-speed connectivity. And when I say very high speed, this is like, I don't know, 600, 700 gigabytes per second. That is faster than the connectivity between the GPU itself and the local RAM, like the memory on the computer itself, by almost an order of magnitude. So it's literally easier to send data to another GPU, and recompute something later, than to save things and load them back. So that's the current state of how these models are trained. And so when somebody says, hey, let's do decentralized training on a network that can maybe barely push one megabyte per second, and we're talking about seven, eight hundred gigabytes per second, we're many, many orders of magnitude away from this actually happening.
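The gap Illia is pointing at can be made concrete with back-of-envelope arithmetic, using the rough figures from the conversation (illustrative, not precise hardware specs):

```python
# Rough figures from the discussion (illustrative, not exact specs).
cluster_interconnect_gb_per_s = 600   # GB/s between GPUs inside a training cluster
volunteer_node_mb_per_s = 1           # MB/s a typical home node might push

# How many times faster the in-cluster links are than a volunteer's uplink.
ratio = (cluster_interconnect_gb_per_s * 1024) / volunteer_node_mb_per_s
print(f"cluster links are ~{ratio:,.0f}x faster")  # ~614,400x, i.e. 5-6 orders of magnitude
```

Even if the bandwidth assumption for a home node is off by a factor of 100, the gap remains several orders of magnitude, which is the core of the argument against decentralized training of frontier-scale models today.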
And plus, people generally don't have this kind of hardware. People have, you know, let's say, leftovers from the GPU mining we were doing for Ethereum. Well, that's, again, orders of magnitude away from how these chips are usually used right now. So can you do decentralized training with the current setup, the current training? The answer is, nothing close to what you would need to train any of these models for real, right? And again, we're talking about, let's say, GPT-3, because we know the parameters: 175 billion parameters, right? You need, you know, I think, like, 20, 30 GPUs with 32 gigs of RAM each just to store that model on the thing, right? And then to be able to do a pass through it. So just coordinating that is so not effective.
And so anybody who is doing this for real right now will not be using anything custom. They don't even want to use custom hardware that doesn't work with either XLA or Nvidia right now, because betting on custom hardware, or a custom setup that's not kind of common, is just something that nobody who's rushing right now to build better models and out-compete each other will be doing. They're willing to spend more money just to be in the same kind of setup that is reproducible and doesn't have risks.
So I think that's probably the main thing when everybody's like, hey, let's do decentralized training, or, similarly, let's do private training, right? Well, private training is done either on secure enclaves, which don't have any accelerators, or MPC is possible, but, you know, you're adding a huge overhead on just the computing parts, the aggregating. It's going to be probably, you know, 10 to 100 times slower, and you still need the same level of compute, right? And, like, synchronizing between different clouds, for example, will be huge.
So I think, at the end, with, you know, what we'd call decentralized training, what does make sense right now is this: the marketplace for the hardware itself, for the supercomputers, is completely closed. Like, if I want to train something and I do have a million dollars, I need to call up Amazon, I need to call up Microsoft, and then you could call up Google, maybe a few other organizations, and negotiate a rate with them. And so I think what blockchain is really good at is opening up marketplaces. And so what we can do is open up the marketplace for supercomputers. So you can have better price discovery on where a supercomputer is, and better resource allocation, like, you know, auctioning off this compute around the world where people are building these clusters.
But like you said, these kinds of GPUs are sort of GPUs in name only. The kind of GPUs that you use to train these models are not consumer GPUs, right? These are not things that people running a node at home are going to have.
And you cannot actually buy them. So you need to be a large bulk buyer. I think you need to, like, I don't actually know what the minimum buy is, but Nvidia will not sell you, like, "can I buy a couple?"
We can back-estimate this based on the fundraise sizes of, like, Adept and Anthropic.
Like, so I don't think they actually bought their own hardware, though. So I don't know one startup that has access to this. Like, everybody else, you literally need to be a cloud, like a billion-dollar cloud level, to buy this. I know only Lambda, which is able to buy them from Nvidia. And they've been doing this, like, I've known them buying GPUs for the past almost 10 years. That's why they probably have the access. I will say that,
having seen a bunch of these fundraising decks, between Stable Diffusion and Adept, 90% of their fundraising decks said, like, 80% of our funding is going to build our own clusters. And they actually are really trying to convince people right now. So what I'm saying is, Nvidia's not going to be like, oh yeah, this, like, billion dollars of money raised, we can't sell to that. I think they're just going to probably charge them more. And it's pretty clear they're going to charge them a lot more. But my guess is the minimum order size is $100 million, based on the... Probably, yeah. And remember, it's like, you buy the GPUs and you need to buy all the other stuff. Like, there's networking stuff. For sure. Yeah, yeah. I agree. I mean, the, like, SLI, InfiniBand stuff itself is probably as much as the GPUs. But yeah. And you need
engineers who then can like maintain all that. Okay. So all right. So TLDR, any kind of, anything that's
going to be an impediment to training, either one, like you don't have the money or two, like the,
even the kind of machines you would need to train and decentralize a way are so expensive.
Very few people have access to them. So I think blockchains work well when you're coordinating
resources that lots and lots of people have and, you know, where the distribution of these resources is very decentralized. This is not the case for, like, you know, A100s from Nvidia. Very few people have them.
And so you don't need a blockchain to coordinate them.
Just go call up, like, the three big cloud guys; they're the ones who have all of these.
It won't stay that way though.
Like there's no doubt that someone will actually try to break the monopoly here.
And obviously AMD has tried and has failed so far.
But I think there's going to be a day that, like, some of these other accelerators are good enough and they're cheaper.
Right.
But it's hard to imagine that it's not going to be the case that the most cost-effective, right, like the most energy-efficient and cost-efficient approaches to training are going to be basically gatekept by the people who have the economies of scale.
But maybe not for these distilled models, right?
If you're bootstrapping off of just, like, training a smaller model off the OpenAI API.
True, true, true.
Yeah.
So if we're talking about smaller models here, it's a totally different story, right?
And, I mean, you can potentially get, like, a server with a few consumer-grade GPUs and train it.
And people are doing that, right, like researchers.
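As a toy illustration of this kind of bootstrapping, here is a minimal "distillation" sketch in pure Python: a small student model is trained only on labels queried from a stand-in teacher. The teacher here is just a fixed function, a hypothetical stand-in for a hosted large-model API, not any real service.

```python
import random

# Hypothetical stand-in for a hosted large-model API: the "teacher"
# here is just a fixed function we can query for labels.
def teacher_api(x: float) -> float:
    return 2.0 * x + 1.0

def distill(steps: int = 2000, lr: float = 0.01, seed: int = 0):
    """Train a tiny 'student' y = w*x + b purely on teacher outputs."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(steps):
        x = rng.uniform(-1.0, 1.0)
        y_teacher = teacher_api(x)   # query the big model for a label
        err = (w * x + b) - y_teacher
        w -= lr * err * x            # gradient step on squared error
        b -= lr * err
    return w, b

w, b = distill()  # the student recovers roughly w ~ 2, b ~ 1
```

The point of the sketch is the data flow: the student never sees ground truth, only the teacher's answers, which is the economics being discussed here.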
Yeah, so the day that you have a model on your phone
that approximates one of these large language models fairly well,
and you can do fine-tuning of that model through some cloud GPUs
that maybe, you know, are not quite, you know, sort of state-of-the-art grade.
Is that a case where you think that, okay, in this kind of situation,
you can imagine having some kind of GPU marketplace
and maybe, you know, there's enough demand there for this kind of consumer-level
fine-tuning of these kind of miniature models
that the economics can work.
What's your take?
I think the question is, like, if it's enough to have, like, a few GPUs, right, getting them off the cloud is actually pretty easy, or buying a server with, like, you know, four GPUs plugged in. Like, kind of, what's the reasoning, right? It's not that you need it, generally speaking, to be decentralized.
And so if it's a rentable resource, and there's a ton of demand and clouds are not able to satisfy the demand, that's when it can start spilling over, and, like, people buying these servers can, you know, maybe offer it for rent.
That means, like, you know, clouds need to really not satisfy the demand, or start censoring someone from using it.
Right? Like, those are the only reasons why this kind of movement would start.
Sure.
I was going to say, I mean, you're talking about, like, compute marketplaces, and obviously decentralized compute, verifiable compute, has been, like, kind of a meme in crypto for a long time.
But the other meme, right, has been owning your own data and data permissioning and, you know, data marketplaces.
And, like, I feel like that's been the other point of contention or sort of debate with these LLMs: you know, they're ingesting your images and text from the internet and, you know, training these models on them.
And, you know, specifically for, like, diffusion models and, you know, image output, artists feel maybe deceived or hurt that, like, their work is then being used to train these models that they don't really see any benefit from.
I believe Getty Images is actually suing Stability AI, Stable Diffusion, for basically training their models on Getty Images, even though they're not necessarily licensed to do that.
I'm curious to get your thoughts on, like, you know, sort of this, again, this crypto meme of owning your own data, making people or advertisers pay you to actually access your data and train on it.
Is there sort of a new life to this idea, you know, with sort of the rise of the LLMs, or is it still just kind of impossible to actually do this in practice?
Obviously, being on the crypto side, I want this to work, right?
But practically speaking, we still don't have tooling to do that, right?
And so I also think it was Stability that took out whatever that content was, trained the models, and they were pretty much kind of the same quality.
And so generally speaking, unless we actually flip the script and start creating provenance for all the content we create, that is cryptographic, that is potentially also enforced by law, where you need to include provenance as you process this content further, like, with the current systems we'll not be able to treat this data as if it belongs to users. And this is limited, I would say, by law and kind of regulatory enforcement.
So I think this goes back to: what we need to start doing is that the content we produce, and I mean AI as well, but especially what humans produce, needs to be cryptographically authenticated. It needs to have provenance, and that needs to be leveraged.
And I think this actually will become an even bigger problem, because these models are kind of effective tools to create an insane amount of content.
Right. And so one of the kind of core issues is that all of the societal systems actually run on language, right? They run on language. You know, you file things with language, you read news, you look at what a candidate's platform is, you know, or a video of that.
And so, like, all of the systems are highly susceptible already to manipulation, right? Like, you don't need AI to manipulate them.
People manipulate them all the time.
AI just gives you this extreme kind of leverage to create this.
And so, like, you can be reading a book which literally has all the same characters and all the same kind of overall story, and a completely different narrative. And you will not even know that, right?
Like, you can be going to, you know, this website and seeing the same titles, the same author, the same everything, and a completely different narrative.
And so, like, that's already possible to do now, to kind of create this deceptive content.
So we need an authentication path for everything.
Otherwise, we're going to actually live in this world where everybody will see their own version of reality that's completely different from what you think you're putting out.
So, you know, interestingly, I remember when GPT-3 first came out, and certainly ChatGPT, people started really worrying, like, oh my God, how will we ever know that a human being wrote something?
And then, you know, with Stable Diffusion, it's like, oh, how will we ever trust an image ever again?
And obviously, when we have good video models as well, people are going to say this about videos.
You know, how do I know that's, you know, Barack Obama making out with Mitt Romney or whatever? How do I know that's real?
And I think a lot of these things are a little bit of an overreaction, right? Like, we've had Photoshop now for, like, 20 years, and things are fine. Obviously Photoshop does affect things, and, you know, fake media does end up going viral sometimes.
But for the most part, like, it's not like civilization has collapsed because
Photoshop exists, right?
We find ways to, you know, figure out chains of provenance and authentication of what's real
and what's not.
And I think we're going to adapt, like, because that's what society does.
Society adapts to technology, period, every time it does.
That being said, I do agree with you that we do need to have better authentication of raw
inputs that come into society, right?
So one of the most obvious things is, you know, how will we know that an image that comes from a camera is actually a raw image from a camera and has not been manipulated, right?
And so if you have some kind of physically unforgeable signature from, you know, the camera itself that verifies that this image was taken from a camera, and maybe there's a small number of transformations that were applied that are not, you know, manipulative, as in the color was tuned or it was cropped in such and such a way,
like, already I think we're starting to see hardware that can do this, you know,
and I'm sure that we're going to see this with video as well,
not just cameras, and with audio as well.
You can verify this thing came from the real world
and it was physically produced
and we can have some certainty of that
and maybe someday your browser
when you right click on something,
it'll show you like, ah yes, this thing came from a,
you know, a Nikon, D7, blah, blah,
whatever on such and such date.
That I think is the path
that this stuff has to go
in order to co-evolve with the speed at which
generated content is going to compete with real content.
And I think it's plausible that blockchain crypto is going to have some role to play
in how that information gets authenticated, stored, tracked, et cetera.
Or maybe it would be way simpler than that.
Maybe you just hit the Nikon API and your browser just knows like the Nikon keys or, I don't know,
something like that.
What's your take on this question of like physically verifiable data?
Yeah.
So I think some cameras already do that.
Like there's a secure enclave that signs photos on some cameras.
I think like Sony added that and like a few others.
And there's actually, you know, metadata on Nikon and everything.
I think something similar needs to happen for, you know,
let's say we record this video.
Like we should all co-sign on it that indeed this is a video
that we produced and we talked about, right?
So things like that.
We kind of need to make this almost like a new normal, a new habit. But I totally agree: like, we will adapt. We have all the tools.
It's not like an unsolvable problem. We just need to do it. And I think the more things that will be breaking, the more we'll be fixing, and the faster we'll be fixing them. So it's kind of
introducing new habits around us, where, again, like, you know, our identities have
cryptographic information. And so you can, like, co-sign directly on YouTube that, like,
hey, yes, I recorded this video, or, like, I participated in this video, or this
quote is mine inside a newspaper article, right? And so that kind of creates, just like,
provenances that then browsers show robustly. I think the important part, though, is that some people
were like, oh, you know, we should enforce fingerprinting in the output of the AI models. I think that
part is, like, you know, just not going to happen. People will always go
and remove that fingerprint in the code and just do it without.
And so I think authenticating content as a source is the right way to do it.
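A minimal sketch of that kind of co-signing at the source, in pure Python. Real systems would use asymmetric signatures (e.g. Ed25519) with keys held in secure hardware; HMAC over a content hash stands in here only to keep the sketch dependency-free, and all names and keys are illustrative.

```python
import hashlib
import hmac

# Illustrative participant keys; a real system would use per-person
# asymmetric keypairs in secure enclaves, not shared secrets.
KEYS = {"alice": b"alice-secret", "bob": b"bob-secret"}

def content_id(data: bytes) -> bytes:
    """The content's identity is just its hash."""
    return hashlib.sha256(data).digest()

def cosign(data: bytes, participants) -> dict:
    """Each participant signs the hash of the content."""
    cid = content_id(data)
    return {name: hmac.new(KEYS[name], cid, hashlib.sha256).hexdigest()
            for name in participants}

def verify(data: bytes, signatures: dict) -> bool:
    """Check every signature against the content's current hash."""
    cid = content_id(data)
    return all(
        hmac.compare_digest(sig, hmac.new(KEYS[name], cid, hashlib.sha256).hexdigest())
        for name, sig in signatures.items()
    )

video = b"raw video bytes..."
sigs = cosign(video, ["alice", "bob"])
assert verify(video, sigs)             # intact content checks out
assert not verify(video + b"x", sigs)  # any tampering breaks it
```

The design choice being argued for is exactly this: sign at the point of creation, so any later edit invalidates the chain, rather than trying to watermark model outputs after the fact.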
Okay.
So we've talked about the things at the intersection of crypto and AI that maybe don't work
or are not likely to work anytime soon.
What are you bullish on at the intersection of crypto and AI?
What do you think is going to work?
I can say one thing that's been working is data collection.
So data collection in general, right, is how to get a lot of people to contribute
data for some income, some reward, right?
And this is literally what, you know, blockchain is really good at: coordinating a bunch of people doing some work. One may call it proof of work.
And so actually on NEAR there's been NEAR Crowd, a community-built project which has been running for the past two years, where one to two thousand people every day have been working and labeling data, like, various tasks, and creating, like, a massive dataset from that.
So that part is, I think, very straightforward. It's, you know, micropayments and kind of coordinating people, kind of a marketplace of tasks and people. And so that works really well.
I think that kind of continues to scale and, like, gets used in more different ways, because you can introduce that as part of some experience, right? Because, as Tom mentioned, like, data has value. And so, as people do something, they can receive a reward for that data, which then flows back into the model.
But then it does need to be, like, fully authenticated and kind of on-chain for that to happen.
So I think that that works.
I think there are interesting examples of, like, these models that can run on your device, so, like, more on edge computing, applied to your data.
And so that will be interesting, kind of, again, more in the conceptual Web3 world, not, like, specifically crypto: starting to have, like, a personalized model that's fine-tuned on your stuff without it leaving your device.
And, like, for that you don't actually need that much compute.
You probably don't have, you know, millions of data points anyway.
So you just kind of run a few backprops on that.
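A toy sketch of that "a few backprops on your own data" idea: start from generic pretrained weights and run a handful of gradient steps on a user's few local data points, all on-device. Every number here is made up for illustration.

```python
def personalize(w, b, user_data, lr=0.1, epochs=100):
    """Fine-tune a tiny pretrained model y = w*x + b on local data only."""
    for _ in range(epochs):
        for x, y in user_data:
            err = (w * x + b) - y
            w -= lr * err * x  # plain SGD; the data never leaves the loop
            b -= lr * err
    return w, b

# Generic pretrained model maps x -> x; this user's data is shifted up by 3.
user_data = [(0.0, 3.0), (1.0, 4.0), (2.0, 5.0)]
w, b = personalize(1.0, 0.0, user_data)  # adapts toward w ~ 1, b ~ 3
```

The scale is the point: three data points and a few hundred gradient steps run instantly on a phone, which is why personalization doesn't need datacenter GPUs.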
There's an interesting question that, like, I've always been excited about, and that's why we were doing the AI, which is on coding.
There's an interesting intersection of, like, decentralized data, right, data that belongs to users;
decentralized services, meaning they're, like, open, accessible, they're not going to disappear;
and coding, which, you know, if we think of this, like, end-user coding, where they're not going to
probably build, like, complex stuff, but by describing what they want, they can mix
existing services and existing data.
And it's really hard to do that in Web 2 because the services are closed.
You know, their APIs are not always known, the source code is closed, all those things.
In Web3 we actually have everything open.
And so saying, like, hey, can you build me a front end that, you know, combines Aave, Compound, and Uniswap and, like, creates me 10x leverage, right?
That's actually possible to do now, because, like, the smart contracts are public,
and for all the services the full data are public.
And so it can create that front end for the user in the moment, pretty much in a custom way.
And like I'm really excited about that.
That kind of is a vision of what we were trying to do originally.
And I think that will attract a lot more attention as well to how people interact with services, because now, like, UI problems may disappear or, like, maybe be reduced as well.
So I really like that, and it augments a lot of the way that I view the intersection of crypto and AI, which is that the way crypto and AI intersect, in my view, is probably not going to be that there are large tokens that you can invest in that are going to make a lot of money, the AI tokens, and those are going to pump.
Obviously, there is, right now,
there are a lot of AI tokens that are pumping
as, you know, the AI trend is getting more and more exciting.
But I think the two are interlinked in more subtle ways.
So one of the examples you mentioned is just the fact that,
obviously, as code generation and AI has become better at writing code
and building front ends, that's obviously going to be good for crypto
because crypto will have better front ends.
And, you know, programming will become more efficient and cheaper.
And, you know, eventually you're going to have,
I mean, already there have been examples
of ChatGPT sort of quote-unquote auditing code
and finding common vulnerabilities.
So I think all these things are accretive, right?
They sort of make human beings better
and making human beings better
makes blockchains better because blockchains
are made today by human beings.
I do think, I wrote a tweetstorm about this a little while ago,
one of my theses about the intersection of crypto and AI
that is, I think, a little bit more forward-looking,
which is that, you know, today,
I think you mentioned this earlier,
most of these models, almost all of them that we're interacting with,
almost all of them that we're interacting with,
like the large language models, we sort of make them kind of look and feel like people,
but it's a bit of a sleight of hand, right?
These are not actually agents.
They don't have any kind of persistent preferences or desires or anything like that.
We sort of make them pretend to because that's what human beings like.
But eventually we are going to have more agentic models that have long-term memory,
that are going to be goal-directed and are going to try to be doing things in the world.
And when we have those kinds of AIs, and we're still a ways away, I think, from having them
realistically beyond just, like, video game environments,
when we have those kinds of AIs,
I think those AIs are going to want to solve problems
that involve shared resources.
And we know how to solve problems
that involve shared resources.
We use money.
Money is the way that we negotiate access to shared resources,
whether it's a message bus,
whether it's turning into a lane,
whether it's asking somebody to do something for you
that's easier for them than it is for you.
The way we solve all those problems, this huge category of problems, is with money.
And so AIs are going to want to use money.
But they're not going to be able to use fiat money because, you know, Chase is not going to open a bank account for an AI.
They won't even open a bank account for a crypto startup.
So they're definitely not going to do one for an AI.
But the fastest way to get onboarded onto money is just by owning a private key.
If you can manipulate a private key, which pretty much any AI can, if you can rent a cloud GPU and stick the key in an enclave and then give it instructions, boom, now all of a sudden
you can use money just like everybody else.
And you can coordinate with other AIs.
You can get an AI to work for you.
You can end up employing somebody else to work for an AI
or an AI-driven organization.
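The "all you need is a private key" idea can be sketched in a few lines. Real chains use asymmetric signatures (ECDSA or Ed25519), ideally with the key held in a hardware enclave; to stay dependency-free this sketch substitutes an HMAC keyed on the private key, so unlike a real chain the verifier here also needs the key, and the address derivation is a toy.

```python
import hashlib
import hmac
import json
import secrets

def new_account():
    """An agent's entire identity: a random 32-byte private key."""
    priv = secrets.token_bytes(32)
    address = hashlib.sha256(priv).hexdigest()[:40]  # toy address derivation
    return priv, address

def sign_transfer(priv, sender, recipient, amount):
    """Authorize a payment by signing a canonical encoding of it."""
    tx = {"from": sender, "to": recipient, "amount": amount}
    payload = json.dumps(tx, sort_keys=True).encode()
    sig = hmac.new(priv, payload, hashlib.sha256).hexdigest()
    return tx, sig

def verify_transfer(priv, tx, sig):
    payload = json.dumps(tx, sort_keys=True).encode()
    expected = hmac.new(priv, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

priv, addr = new_account()
tx, sig = sign_transfer(priv, addr, "some-recipient", 100)
assert verify_transfer(priv, tx, sig)      # the agent can authorize payments
tx["amount"] = 10_000
assert not verify_transfer(priv, tx, sig)  # tampered amount is rejected
```

No bank, no KYC: anything that can generate random bytes and sign messages has an account, which is the whole argument for why agents would reach for crypto rails first.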
And so the ways in which these two things are going to intersect, I think, is not that there's, like, some big... well, maybe there are going to be some applications, certainly, that are going to be blockchain-accelerated, right?
Like, you know, potentially decentralized labeling and, you know, maybe generating training sets.
But the biggest thing is going to be that AIs are going to want to use money,
and they're going to use crypto
because it's just faster, it's easier,
it's digitally native.
And that, I think, is going to be
an accelerant, maybe in a scary way,
for how these AI suddenly start
interacting with us in the world.
And so you can imagine someday,
instead of Balaji making this crazy bet,
it's going to be like some very poorly calibrated AI
that's making bets on Twitter.
There's a very famous crypto investor,
who I won't say who it is,
but one of the first times I met them,
I remember,
they told me their vision
of the future in 2016 or 2017, which was, you know, the world's richest entity in 50 years
will be a broken Tesla because a Tesla that's broken and can't be used to be a cab will
have to train itself because it'll have all these GPUs on board to basically become an investor
because it like can't do its normal function as like a Tesla.
And then it becomes the world's greatest investor on its own.
Trust me. I was like, what? I did not...
What? What?
Who? This was an investor who told you? This is their thesis?
A famous crypto investor that you, everyone in this, this call knows.
So... I didn't know who it is.
Wait, is this, is this Kyle? Tell me this is Kyle.
I don't think he should disclose on the call, yeah.
Yeah. Okay.
But it was, that sort of matches what you just said, for the record, a little bit.
I don't know if I'd say that matches.
I wouldn't endorse that particular thesis that we're all going to get beaten in our investing prowess by a broken Tesla.
But, you know, I've been proven wrong before.
But I think the general idea that blockchain is a place for autonomous agents to pretty much interact with each other and the world in general is true.
And we're actually actively building more and more ways for them to do that.
And I mean, there's already agents interacting, right?
They're just very basic.
Like, maybe they just use basic machine learning to, you know, predict prices for arbitrage and, you know, do a few other things.
But, like, as there are more and more externalities for the blockchain and there are more ways to do this, right, I mean, imagine a very simple system where the model indeed is trying to, you know, beat the market by investing:
you give it $1,000, and you give it access to also ask people to do stuff, again on-chain,
which exists, you know, there's, like, a job market on-chain.
And so now it can, like, decide to, you know, buy a crypto coin, sell a crypto coin,
or ask, you know, humans to do something.
And so it may start to, like, invest, and may actually start its own project
that it's going to pump, you know, by posting stuff on decentralized social media.
That's actually possible now.
No, what we'll actually have is this broken Tesla starting to do NFT scams.
That is absolutely how this broken Tesla is going to end up making all this money.
Yeah, it generates NFTs with Midjourney and then...
Yeah, exactly.
But anyway, this is totally possible now.
This is not science fiction; this is possible now.
Yeah, interesting.
But the question is like who put it together to do that, right?
Like, at the end, it still was the will of someone.
I'm waiting for someone to make broken Tesla capital, which is their entire thesis,
their entire thesis is we're investing in the future of the broken Tesla that eventually becomes
the world's greatest investor.
I feel like now, the next bridge hack I see, I'm going to be like, shit, was this a broken Tesla that hacked this bridge?
All right.
I think we're at time, so we have to wrap.
Ilya, thank you so much for sharing your font of wisdom with us.
I hope that next time we're having this conversation, we can just generate you and we don't have to bother you out of your day. But for now, we appreciate you showing up in person.
That's it, everybody. Thanks, everyone. Thanks. See ya.
