Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Tarun Chitra: Gauntlet – The Simulation Platform for Blockchain Protocols
Episode Date: September 9, 2020

Gauntlet is a simulation platform for building financial models of blockchain protocols and applications. Their platform uses machine learning methods to simulate different environments with various user behaviors and see how the system holds up in those conditions. They perform analysis on things like core mechanisms to test for liveness and block propagation, and at higher layers like simulating markets. Their offering is complementary to security audits, as their analysis goes beyond code functionality and looks at how systems may behave in real-life conditions. Tarun Chitra, CEO & Founder of Gauntlet, talks about his previous life in chip manufacturing and how he built Gauntlet. He also goes deep into the machine learning and statistical analysis involved with Gauntlet. It is quite a fascinating concept, and when applied to blockchain systems, there is huge potential for this to become something that is expected by users, investors, and the community for new protocols.

Topics covered in this episode:
Tarun's background and how he got into crypto
Chip manufacturing in the US and its geopolitical implications
What sorts of things does Gauntlet model?
How does historical data play a part in predicting blockchain protocol behavior
Simulation models used by Gauntlet
Game design analysis
The parts of the crypto economic stack that would benefit from this simulation work today
How does Gauntlet complement a security audit for a new blockchain
What tools they use for simulation
What Gauntlet looks like as a business and what is the roadmap going forward

Episode links:
Gauntlet website
Gauntlet on Medium
Tarun on Medium
Tarun's papers
Gauntlet on Twitter
Tarun on Twitter

Sponsors:
Algorand: To learn more about Algorand and how its unique design makes it easy for developers to build sophisticated applications, visit algorand.com/epicenter - http://algorand.com/epicenter

This episode is hosted by Sebastien Couture & Sunny Aggarwal. Show notes and listening options: epicenter.tv/356
Transcript
This is Epicenter, episode 356 with guest Tarun Chitra.
Hi, I'm Sebastien Couture and you're listening to Epicenter, the podcast where we interview
crypto founders, builders and thought leaders. On this show, we dive deep to learn how things
work at a technical level and we fly high to understand visionary concepts and long-term
trends. If you like the show and you'd like to support us, the best way to do that is to leave
a review on Apple Podcasts. It helps people find the show, and it helps people know that we're one of the best podcasts in the crypto space. And we're always happy to read your reviews. So if you're on a Mac or iOS device, the easiest way to do that is to go to epicenter.rocks slash Apple.
Today our guest is Tarun Chitra. He's the founder and CEO of Gauntlet. Gauntlet is a simulation platform for building financial models of blockchain protocols and applications.
So when you're building a blockchain protocol these days, it's become expected to have the code audited. This is like an essential thing now.
And it's a good thing, of course. A security audit will look at the code to make sure there are no bugs, and usually the security audit goes into mechanism design. But Gauntlet goes even further: they do simulations on the mechanism and on different layers of the stack. So they use machine learning methods to simulate
different environments with various user behaviors and to see how the system holds up under those
conditions. So they perform analysis on things like the core mechanism to test for
liveness and block propagation. And they can also do higher level analysis and test entire
markets to see how the market will react to a certain protocol. So this was a really cool
interview because it's heavy on machine learning and statistical analysis, which is not an
area that I'm immensely comfortable with, but I find it really fascinating nonetheless.
And when it's applied to blockchains, I think there's a potential here for this to become,
you know, much like a security audit, it's something that becomes expected by users, by investors,
and by the community for new protocols. And, you know, that's probably a good thing because, as we've
seen recently, these protocols can lock in a lot of value. They're also doing research around
transaction fees, which apparently is an entirely new space in the area of mechanism design. A lot of
the mechanism design research apparently focuses around things like auctions for like Google ads and
things like that. And of course, this is an area in which, like, you don't really need to
consider transaction fees because, you know, transactions are abundant and free.
Whereas with blockchains, that's of course not the case.
And so given the current surge of transaction fees, for example, in Ethereum, it's an area
that's going to become increasingly interesting and increasingly relevant as bull markets
have, you know, these effects on the transaction fees in a network.
So at the beginning, we talked a lot about chip manufacturing, which is, since Tarun was previously
in that sector, he's got some interesting stories there.
And we also briefly, very briefly touched on Algorand, which is convenient, because they are sponsoring this episode. So if you haven't heard our episodes on Algorand, I would encourage you to go back and listen to them. We did one in January with Steve Kokinos and Silvio Micali, and we went way back in 2017 with Silvio. This was before Algorand was even a company. He'd just written the white paper at this point. Anyway, they're doing really cool stuff to improve developer experience, specifically around building DeFi applications.
And I'll tell you all about that a little bit later on during the interview. But for now,
I give you our conversation with Tarun Chitra.
We're here today with Tarun Chitra.
He is the CEO and founder of Gauntlet.
Gauntlet Network is the domain name.
Is it actually a network or is it sort of just a company?
I tried to get all the gauntlet domain names.
And fortunately, the only one that was really open was dot network.
I think now we're going to try to see if there's a TLD dot degen and buy gauntlet dot degen, but we have gone with dot fi. But basically, in 2017 and 18, I bought the dot network name and it kind of stuck. And we had to incorporate with some name, so I chose Networks.
So great to have you on. You have a really interesting background that gives you a bit of a
different perspective from a lot of the people in the crypto space or especially at least in the
DeFi space. Could you tell us a little bit about it? What were you doing before you got involved with crypto?
After I graduated college, I worked at this billionaire's research institute. It was called
D.E. Shaw Research. And this person who worked in trading, he wanted to spend his money on building
ASICs, so application-specific integrated circuits, so these custom hardware devices, for doing
computational biology and drug discovery and physics research.
Is it the same company as, like, the D.E. Shaw investment group, or is this separate?
It's the same thing. Basically, one of the branches of the investment group was working on building ASICs for physics research. So basically the story of David, not to kind of recursively
add some stories, is in the 1980s, he was an assistant professor. I think he didn't get tenure.
And so, you know, if you think about people you know who are really smart, but like they don't get tenure for some reason or another. In his case, it was, he was really working on non-von Neumann computers. And 1986 was the time when Intel was kind of like about to hit Moore's law. At that time,
no one wants to hire someone who's doing kind of custom computer architecture. Everyone's like,
no, Intel's going to win. So we're going to like hire people who are going to make Intel hit
Moore's law. And so then he went and worked in finance. Somehow, secretly hidden underneath him working in finance and trying to make money, was this idea that he still wanted to work on non-von Neumann computers. So non-von Neumann computers means that, like, a normal computer architecture kind of separates memory and compute, potentially separates those two. Von Neumann computer architectures are kind of what you have in your computer or your phone right now, for the most part, where compute and data are processed in the same sort of pipeline.
And so I think in the 80s people didn't know what would win, like what hardware would be in your devices that you own.
He was kind of always interested in that.
And one of the applications of these kind of very esoteric architectures was building supercomputers that were really good at solving physics problems and also solving sort of like computational biology problems at large.
And he kind of, in the back of his mind, was always like, hey, if I become rich enough, I'm just going to spend all my money on building these custom computer architectures for something useful for society. And so once he
became a billionaire, he started trying to prove theorems about whether it was possible to do better
than these like Intel style architectures. And he proved this theorem in sort of 2004, which is kind of
great because it uses only high school math to make this point, that you can actually do
significantly better, like very much better, in the sense that an Intel-style processor,
if you try to parallelize this computation,
will always take a finite amount of time
even if you had an infinite number of processors,
but there exists kind of crazy architectures
that would take basically zero time
as the number of processors goes to infinity.
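To give a rough sense of the flavor of that claim, here is the standard Amdahl's-law style argument, written out as an illustration rather than as the actual theorem being described: if a computation has any fixed serial portion, adding general-purpose processors can never drive the total time to zero, whereas an architecture that eliminates the serial portion can.

```latex
% Illustrative Amdahl's-law style bound, not the actual D.E. Shaw theorem.
T_{\text{general}}(N) = T_{\text{serial}} + \frac{T_{\text{parallel}}}{N}
  \xrightarrow{\;N \to \infty\;} T_{\text{serial}} > 0,
\qquad\text{whereas}\qquad
T_{\text{specialized}}(N) \xrightarrow{\;N \to \infty\;} 0 .
```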
So we actually, once he kind of proved this theorem,
I guess once you're a billionaire,
you're like, I'm going to build hardware
to prove that my theorem is correct.
So that's the Genesis story.
I worked there, and in 2011, there weren't many people building ASICs. Most of the people building ASICs were certainly not
Bitcoin miners. They were mainly telecom companies. So one of the things that's interesting is that
the only people who really needed custom hardware were people doing really low latency,
Fast Fourier transforms, FFTs. And most of the people doing that were in telecom. There were no neural nets. There were no self-driving cars. There was none of that type of stuff. When we were
talking to suppliers, because we're not someone who's like trying to be Samsung or Apple building a whole
infrastructure on it. We want to kind of like pay them for their excess supply. So it's like,
we want to build 10,000 chips. Intel and Apple and Samsung are each building a billion chips.
But the factories that build that stuff, they have excess supply. Sometimes, like, one factory might
have an excess 10,000 and they're like, oh, well, can we sign someone who wants to buy that? And so the way
that excess supply gets sold, it sort of gets auctioned off to different market participants.
And at that time, we were literally the only ones buying this type of space. It was like us and
like random telecom company in Japan. In 2011 and 12, one of the companies that sort of became Avalon
started buying a bunch of this chip space. And so we were like talking to our supplier and like,
hey, we sent you, you know, $25 million. When are our chips coming? And then we got ghosted. This is
getting ghosted pre-Tinder. So that was weird. And at some level, we were like, what the hell? We just gave you a bunch of money. You just disappeared. And then they came back eventually and were like, sorry, we're going to do your chips, you know, the next batch. How's a 10% discount sound? And that was the first time I'd really thought seriously about Bitcoin. After that,
I kind of started mining.
How did that get you into mining? So you found out the people who, like, took your order were basically Bitcoin miners?
They were, like, at a hardware fab. Let's say they have
10,000 spots. Imagine a physical wafer. Now cut up the wafer into 10,000 like one centimeter by
one centimeter little units. Let's say Apple takes up 9,000 of them. So there's 1,000 left. And what
they do is they auction off the physical space. Imagine block space, but physically auctioned off. Usually it's kind of a fair auction.
Like, you're like, okay, I put in a bid.
I wanted, you know, 8,000 of the spots for $100 each.
So we put in our bid and then we got told, hey, you won.
And then the person disappeared.
And so once you win, you have to put money in escrow.
So you post the money that you're supposed to post as collateral.
And then you're supposed to eventually get paid or eventually get your hardware.
And then they take your money.
They just kind of kept it in escrow for like,
six months, way longer than it was supposed to be. And we were like, what the hell? And so what happened
was this Bitcoin miner went to the supplier because the Bitcoin miner was in Taiwan and the supplier
was in Taiwan. And they like, just, I guess they knew each other. And they were like, look,
I know you closed this auction, but we need to make these miners by X date. And we will pay you anything.
It's kind of tangential to this conversation, but people don't really realize
the strategic importance of chip manufacturers, because there aren't that many.
I think you probably know this a lot better than I do, but like Intel, for example, is a chip designer and a chip fab.
And in the last 20 years or so, there's been a shift towards, you know, like more the chip fab model.
And like the designers and the fabs are now separated.
And so, like, companies like TSMC, which is, like, this big chip fab in Taiwan, have kind of, like, won that market.
And Apple and all these companies get their chips made there.
And the question here that's kind of really interesting in the current geopolitical context,
And this example that you have kind of exemplifies this very thing, is that, you know, these chip manufacturers are an arm's reach away from China and they're very far away from, you know, the U.S. or Europe and other Western countries.
And they're strategic to, you know, critical military and industrial infrastructure in the West.
The amount of chips that, you know, the U.S. can produce on its own without these suppliers is, like, insanely small compared to just what TSMC can put out, or, like, Samsung, for example.
What are your thoughts on that, knowing this ecosystem a lot better than I do?
I think the U.S. is actually completely uncompetitive at this point.
There's GlobalFoundries, which has this kind of big fab in upstate New York and Long Island, I guess, now.
They've kind of, like, split it into two.
And then Intel has their own fab, so they have one in Arizona.
And the U.S. government, especially with the current kind of strong-arm-administration type of nonsense, is trying to be like, hey, you have to, like, build your chips in the U.S. if you want to sell them here or something.
That's not a very good point of leverage in the long run, right?
Because one of the more impressive things about Moore's law is Moore's law actually is a
self-fulfilling prophecy.
Gordon Moore said this kind of apocryphal thing of like, oh, every 18 months, your chip
frequency is going to double.
It turned into its own kind of war, right?
So like every 18 months, these companies would have like the chip designers benchmark themselves on how close they were.
And then once they had a design that could achieve that sort of doubling rate, then they would go through this entire process of finding suppliers who would like be able to do that.
And the suppliers also had to follow the like, hey, we need to double every 18 months kind of rule.
And you had this cycle of, like, chip designer gives you a design; suppliers are like, oh man, we need to source this really rare material to, like, make this happen, we're going to spend all of our money trying to do that; that leads to the successful Moore's law thing, which leads to lots of chips sold, which leads to the chip designer forcing the supplier to do this again.
And there's an ecosystem effect kind of not unlike cars where like the car manufacturer isn't really the true end-all be-all manufacturer.
There's this whole network of suppliers who is necessary for it to kind of you to get the final product.
And the chip designers, the Intels of the world, plus the suppliers who are making the little subcomponents, had to work together cooperatively for this very long time period in order to achieve the current kind of status.
And the suppliers, not just the fabs themselves, are all in Asia, right?
The entire supply chain is completely in Asia.
There's literally nothing in the U.S.
I think it's a farce when the U.S. is like, we're going to take back all this.
It's a 30-year effort of building out multiple industries, right?
Like, one of the things that's very important to getting to sub 10 nanometers chips is something called extreme UV.
So it's building these really crazy lasers.
I think the same UVs that prevent the coronavirus?
Exactly.
It's like these really crazy lasers that are very, like, hard to build.
And Intel has claimed that, hey, they've been working on it for 20 years of, like, we can build these, like, really crazy lasers.
The reason you need these lasers is that when you have a chip, you have a piece of silicon, and then you build what's called a mask.
So the mask covers a piece of silicon, and then you shine some type of electromagnetic radiation on it, and it etches. It, like, kind of sketches out your circuit design.
But there's this whole industry of these like extreme UV lasers in order to get like the size of the little thing to be really small.
So you can pack more transistors on a chip.
You need to build these custom lasers.
The only place in the world that you can make the kind of crazy glass that you need for the laser is in sort of Mongolia, Inner Mongolia and China.
And there's just like little tidbits like that.
Like, oh, well, we need this type of glass for this type of thing or we need this type of silicon.
or we need this rare earth material,
those are all things that you need to build
if you want to vertically integrate the chip stack.
And I think the geopolitical thing is like,
well, Asia has spent 20 to 30 years
building the whole supply chain around this industry.
And you're not just going to like uproot the whole tree.
That's like saying the mycorrhizae or rhizome of, like, this industry is going to, like, migrate in two seconds.
I just don't think that's possible.
I think maybe, like, TSMC has been due to open a fab in the U.S. in, like, some time, or there's some kind of thing like that.
But, like, the fab itself isn't able to produce, like, these, you know, 10 nanometer chips.
I mean, I know very little about this, but from what I know, it seems like a very kind of interesting thing that most of people don't realize the geopolitical impacts.
For sure.
That's like this olive branch that was given to Trump because he's like, I want to have chip manufacturing in the U.S.
And it's like, that's not happening.
But the same thing happened to Boeing.
I know this is really off topic.
But part of the reason we had this whole 787 fiasco is that Boeing decided to decentralize its supply chain. And then, like, they stopped having control over, like, batteries, and then batteries exploded. Whereas they used to make their own batteries before.
Back in January, we interviewed Steve Kokinos and Silvio Micali of Algorand. And during our conversation, we talked about how Algorand's unique design makes it easy for developers to build sophisticated applications on their platform. So what's great about Algorand, beyond the fact that it's fast, it's secure, it scales, and it has instant finality, is the fact that they've designed a layer one protocol with primitives that are purpose built for DeFi. So what that means is that they've
taken some of the most common things that people do with smart contracts and they've embedded them
right in the system, right in the layer one. So things like issuing tokens, atomic transfers,
well, these are built into the layer one. And smart contracts are first class citizens on Algorand.
So with these essential building blocks at your disposal, you can build fast and secure DeFi apps in no time. To learn more about what Algorand brings to the table and how to get started, I would encourage you to check out algorand.com slash epicenter.
That lets them know that you heard about it from us, and it takes you where you need to go
to learn about their tech.
And with that, we'd like to thank Algorand for supporting the podcast.
I think our audience would also like to hear about Gauntlet and what you guys are doing.
I guess I didn't even explain how I got from hardware into crypto.
Bitcoin miners front run us.
I started mining.
And then in 2013, I sold all my Bitcoin because I was like.
This is going to blow up.
This is going to be a Ponzi scheme.
Very dumb idea, obviously, in retrospect.
But I started really paying more attention to the papers because we worked in distributed systems.
We were building this type of, you know, when you're building these ASICs, we built this data center to kind of run, like, millions of these machines.
So we had to kind of think about this type of stuff.
I started really getting convinced that there was something novel here when the GHOST paper came out.
So GHOST is kind of this fork choice rule that was in the early versions of Ethereum that
kind of promised you that you could handle like faster block production times if you chose a different
fork choice rule.
And GHOST was one of the first papers that really thought carefully about the incentive design
and also the networking and also this sort of like basic like architecture of like,
if I wrote this code, how would I write it?
And that was when I was like, wow, there's something serious here.
It's not just like, oh, ha ha, like a bunch of people on the internet, like, made, tried to, you know, usurp Leslie Lamport's Paxos.
It was like, oh, there's actually some novel thing here.
So, you know, I think before that I was like very, you know, maybe Bitcoin maximalist.
I think the ghost paper was one of the first papers that I was like, oh, wow, there's like cool ideas that the Bitcoiners are not paying attention to.
For sure.
It also made me realize like, oh, man, the design space of this thing is like bigger.
than anything that humanity has ever had.
Like, you have to, like, just combine so many things to say a simple result.
Like, that's kind of insane, right?
Like, you know, in other fields, you don't have to do as many things like that.
And then I worked in high frequency trading afterwards, and there we actually would do this
type of thing where we would make models of, you know, our trading strategies, and then we
make models of other people's trading strategies, and we'd have them run, kind of think like
AlphaGo style, where they would, like, play against each other and you try to optimize your
strategy. And that was around the time I think the Algorand paper came out. And I remember reading
the Algorand paper being like, this is amazing from a cryptography standpoint in the sense of like,
well, like, you can actually generate these verifiable random functions. I didn't study cryptography.
I had to go read the classical papers at the time because I didn't really know that existed.
But at the same time, I was like, this seems like a little bit like a derivative more than it
seems kind of like proof of work, like a one-way function. Like burning energy is a true, is like
nature's only one-way function that we know, like a perfectly one-way function. Whereas like in
cryptography, you tried to like emulate that, but it's never perfect. And so it's kind of, I was a little bit
like surprised that there's this whole proof of stake thing, but, like, people didn't really think about
the financial aspects. And then 2017 happened. And then I kind of started being like, hey, maybe this is a
real deal. And I'm writing simulations for fun based on the type of things we were doing in finance. And then in
2018, I kind of kept talking to a bunch of layer one protocols because I was curious if anyone
was doing this financial modeling. I quit trading and then started consulting for layer ones.
And then the big bad Libra tried to like buy my consultancy. And that was when I was like,
you know what? I think there is enough room that there are a lot of people who probably need
financial and actuarial modeling for this stuff. And I met my co-founder along the way, because he also
was in trading for a while. And then he actually worked on designing, like, incentives for drivers at Uber.
So we were both like, yeah, you know, like,
I think there's like a way to make this rigorous.
And so we started initially focusing on proof of stake,
especially because I think that was the genesis of my kind of interest
in really committing 100% of my time to this.
And then over time, it became much more clear that DeFi is really the place where there's a crazy amount of financial incentive modeling from multiple agents that exists, and there's just this open space of both research as well as actually deploying it to production.
And the 2010 to 12 shift in AI from like, it's like half academics who are washed up from the 90s
and half like people who are just making random stuff and like calling it like sentient,
but we don't know if it works, was really this thing where like these kind of hooligans turned into like
the people who are correct.
I really feel like that's starting to happen right now in crypto.
That's a very long-winded explanation of how I got here.
And so Gauntlet really is about taking these tools from finance, actuarial modeling, agent-based
simulation, and tooling them towards the kind of new problems in incentive design and
crypto.
Could you give an example of, like, what are the sort of things that you would model? Like, let's say I came to you with a new proof-of-stake consensus protocol. Are you modeling, like, are you testing the safety and liveness of my consensus protocol? Is it at some higher layer than that? Like, what specifically are you testing?
I think it starts in a bunch of different levels. I think that's certainly the first level.
One of the things I remember that tipped me off when I was first reading about proof of stake was this
idea that there were many different synchrony assumptions in all of the different papers,
but they were quite inequivalent. So,
people would say you're live if you eventually were able to process a block. Some people would say
you're live if greater than X percent of nodes agreed that a block had been produced. And some
people would say you're live if, and this is sort of the way the Avalanche paper kind of eventually got proved, if, like, in the limit of an infinite number of blocks, a non-zero fraction of them were actually reached by a large percentage of the nodes. Now, those all kind of sound equivalent,
but mathematically, when you're trying to write these proofs they're not. So the types of things
I was really first interested in simulating were things like, how long does it actually take
on different network topologies for these blocks to actually have disseminated enough such that
the network reaches consensus? And one of the things I realized you could only kind of answer via simulation, and it would be very hard to prove, is: given a network topology, what is the true partial synchrony constant? And what I mean by that is, like, what's the constant at which, if everyone receives all of the blocks within a certain time window, how long does that time window have to be for the network to, like, achieve liveness and safety? And so simulating that on different network topologies actually convinced me that even
Bitcoin has a lot of problems if the network topology is like too disconnected.
And so mathematicians have sort of ways of defining what it means to be too disconnected.
Without getting into too much detail, I think like the spectral gap of a graph is something that measures how long random walks take.
And so the idea is like if someone who's randomly walking on your network topology gets lost because they're too drunk, then your block may never reach everyone.
And so you kind of, like, assume, hey, I put a drunk person on the network graph and I see how long it takes them to reach everyone. That's kind of this model of, like, time that mathematicians use. And I was trying to map the model that people have formal proofs for in that land to, like, what cryptographers and distributed systems people were using. And simulation was the tool for that. So we start by assessing kind of some of these types of issues. I think safety is not the type of thing we assess. I think safety is a purely cryptographic property. But liveness of proof-of-stake protocols is very much a sort of statistical property. It depends on the network topology, it depends on the latencies, how random they are, what the 95th percentile of the latencies look like, et cetera.
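To make the kind of experiment being described concrete, here is a minimal Python sketch: gossip a single block over a random peer topology with random link latencies and measure how long it takes for a quorum of nodes to see it. The node count, peer degree, latency range, and quorum threshold are invented for illustration and are not Gauntlet's actual models or parameters.

```python
import heapq
import random

def time_to_quorum(n_nodes=200, degree=6, latency_ms=(20, 400), quorum=0.67):
    """Gossip one block over a random peer topology and return the time (ms)
    until `quorum` of the nodes have received it (toy model, invented numbers)."""
    # Each node forwards to `degree` randomly chosen peers.
    peers = {i: random.sample([j for j in range(n_nodes) if j != i], degree)
             for i in range(n_nodes)}
    arrival = {}                     # node -> earliest time the block arrives
    pq = [(0.0, 0)]                  # node 0 produces the block at t = 0
    while pq:
        t, node = heapq.heappop(pq)
        if node in arrival:
            continue                 # already settled with an earlier arrival
        arrival[node] = t
        for peer in peers[node]:
            if peer not in arrival:
                heapq.heappush(pq, (t + random.uniform(*latency_ms), peer))
    times = sorted(arrival.values())
    return times[min(int(quorum * n_nodes), len(times) - 1)]

# Repeat the experiment and look at the tail: the 95th percentile delay is the
# kind of statistic that would inform a partial-synchrony timeout assumption.
trials = sorted(time_to_quorum() for _ in range(500))
print("median:", round(trials[len(trials) // 2]), "ms,",
      "95th percentile:", round(trials[int(0.95 * len(trials))]), "ms")
```

Re-running the same sketch on sparser or more clustered topologies (lower degree, or a few weakly connected clusters) is where you would expect to see the "too disconnected" effect: the tail of the delay distribution blows up well before the median does.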
I'll use the example of Tendermint because obviously that's what I'm most familiar with.
You know, we also have this whole liveness and partial synchrony thing.
How it works is we basically have these rounds, and each round we say there's a timeout that nodes will wait.
Currently on most Tendermint networks, it's one second by default.
But then if we don't reach consensus in that one second, we go to the next round and we increase the timeout by a certain amount. So I think we increase it by a quarter of a second. Then if that round doesn't work, we go to a 1.25 second timeout. Then we go to a 1.5 second timeout.
And so these numbers for us, we just pulled these numbers out of a hat. You know, we did a little bit of testing. But if I was still at Tendermint, we'd go to you and basically say, like, hey, help us figure out the right numbers we should be putting here. Because if one second is too long, then we're wasting time that we could be making faster blocks. Meanwhile, if it's too short, we're causing unnecessary rounds for no reason. And so you would basically be able to help us parameterize that correctly.
Exactly. Yeah, it's like a bandwidth versus latency tradeoff of, like, how much
communication do you have to do. There's an expected number of rounds and the distribution of the
number of rounds in the thing you're talking about. Imagine you have 100 million blocks that
are produced. And for each block, I looked at the number of rounds it took before the network
agreed. And I look at that distribution. Now, that distribution is a function of these parameters you chose.
But the problem is that distribution also depends on the network topology. It depends on some lower
level details of protocol. And so, yeah, the type of thing we'd stress test is like,
how does that work under different models of users? Because you can have different types of users
who affect that behavior. One type of user might be the type of user that drops a lot of packets
because their computer goes off a lot.
They don't care about getting slashed
because they don't even know they're getting slashed
for being offline or something.
Another type of user might be one that's malicious,
who's purposely forwarding bad packets.
Another type of user might be one
who is kind of a big block producer
and is, like, not even just honest, but hyper-rational, in that they're just trying to, like, flood the network
so that their block always is first.
The different composition of users
also affects this distribution.
of like the expected number of rounds it takes.
And that's kind of where we model when it comes to proof of stake.
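A toy version of that parameter question, sketched in Python under invented assumptions (exponential per-validator message delays, a fixed fraction of validators simply offline): count how many rounds a Tendermint-style timeout schedule needs before more than two-thirds of validators respond in time, and look at the resulting distribution.

```python
import random

def rounds_to_commit(base_timeout=1.0, increment=0.25, n_validators=100,
                     mean_latency=0.3, offline_frac=0.05):
    """Number of rounds before a block commits under a growing timeout schedule.
    The latency model here is a toy assumption, not Gauntlet's or Tendermint's."""
    round_num = 0
    while True:
        timeout = base_timeout + increment * round_num
        # Draw a response delay for each validator; offline ones never respond.
        delays = sorted(float("inf") if random.random() < offline_frac
                        else random.expovariate(1.0 / mean_latency)
                        for _ in range(n_validators))
        # The round succeeds if more than 2/3 of validators respond in time.
        if delays[(2 * n_validators) // 3] <= timeout:
            return round_num + 1
        round_num += 1

# Empirical distribution of rounds for one parameter choice; sweeping
# base_timeout and increment is how you would compare candidate settings.
samples = [rounds_to_commit() for _ in range(10_000)]
print("mean rounds:", sum(samples) / len(samples), "max rounds:", max(samples))
```

Comparing those round distributions across a sweep of base_timeout and increment, under different latency and user models, is the bandwidth-versus-latency tradeoff being described.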
But we also model over time, we realized that we started with these networking models,
because that's what people in high frequency trading do a lot.
In high frequency trading, you model like, here's exchange one, here's exchange two,
here's exchange three.
Here's all the routers in exchange one to exchange two.
If I send a packet, how long does it take?
And if, you know, you kind of model the topology in sort of a similar way,
you would think about modeling validators.
And then you say, hey, if I have an adversary who's also thinking the same way as me,
are they also sending the same number of packets and who will reach first?
It's a similar type of problem.
It sounds like you're doing analysis at like different layers of the stack.
You're doing the mechanism analysis of the systems themselves, looking for liveness and availability and things like that.
So this is like the mechanism design part.
and this might take place when the team is building the system.
But you're also doing research and analysis and simulations at a higher level up the stack.
So I know like you're also doing, say, research on market participants' risks like in the compound protocol.
So this is happening at the economic layer, at the market layer.
Is that right?
Yeah.
So I like to think of when you do simulation, I think one of the reasons people oftentimes think like,
okay, this can never be real or it doesn't replicate reality or how do you know it replicates reality
is that a lot of people try to simulate everything all at once. And you really need to think of it like
an onion where here is a particular problem that I'm trying to solve. And here is the particular
instance of it. And here are kind of the bounds of like the worst case than best case. And I'm going to
try to analyze that in isolation. And then I add the next layer of the onion and I have it
interact with that layer. And then I add the next layer of the onion and I have to interact with
that layer. I think if you do it kind of in this incremental way, you can actually try to reason
about the whole complex system. You know, we start with things like this layer one liveness type of stuff, but you slowly build up to the economic incentives.
So how much of that complexity gets injected once you start thinking of things like interoperability between blockchains? For instance, the problem gets exponentially more difficult if you start factoring in multiple blockchains and interactions between all these different systems. That doesn't fit in my brain
space. It is certainly exponentially bigger. I mean, you're taking address space one,
address space two, you've doubled the number of bits, definitely increasing in an exponential
manner. But the key is to try to isolate the points of complexity that are most tangible to
think about how humans would interact with these systems. Because fundamentally, okay, I went from
128 bits of entropy to 256 bits of entropy for a two-chain interactive system, but humans are still
using those, right? And like methods and interfaces that you provide to the human as a developer
also dictate what usage you're going to get. And so you try to model things that replicate what
humans who are using those interfaces would look like. And then you kind of say, okay, let's
say I have 10 different versions of the same human, how would they use the system?
A hundred different versions of the same human.
How would they use a system?
And you kind of build from sort of a bottoms up approach where you try to like identify
behaviors, figure out which of those behaviors are consistent among group of people,
then figure out what math describes them, like what their utility function is, what value
they're getting out of calling this function cross-block chain transaction, what value
they're getting out of, hey, I'm willing to pay a transaction fee that's higher than the one on
my chain to move a cross chain. Then after that, what decisions do they make? Like, given a sort of
notion of how they can value a cross-block chain transaction, what actions can they take? One action
is certainly make cross-block chain transaction. Another one is don't. Another one is, is there a way for me to
do it on my current blockchain that gives me 80% of the same value, or 70%?
Or 50%.
Breaking it down in kind of this, hey, there's still a human using this thing or there's a human
writing a bot that's using this thing.
There's still this notion of like people's UX habits are not uniformly random, right?
They're not just like a fuzzer.
They're really kind of like using these interfaces in a very concrete way.
And reasoning about how different people would use it is really how you kind of try to start modeling these types of things.
In consensus protocols, we usually, like, think of it, like, okay, the three types of actors we have are like Byzantine, rational, and altruistic.
But so what you're essentially implying is that this is like way too simplistic and that we need to be much more specific.
It's not just these three categories.
It's way more of a spectrum or many more types of users or actors.
So how do you know you've modeled all the actors possible?
Or, like, how do you know you've covered the entire space? And how do you, like, deal with things that you just couldn't predict?
Like, imagine you wanted to try to predict, like, the distribution of SNX and how much, what it would be collateralizing.
But, like, you know, in what world could you have predicted that, like, a million SNX would be sitting here farming yams?
Craziness, how can you possibly build all of these into your models?
For sure.
So, I think one thing to remember from consensus protocols is this BAR model, Byzantine-altruistic-rational, is very unfair in one way, which is that Byzantine and altruistic are, like, one-dimensional things.
So, in the space of all possible strategies, I represent a strategy that a user takes given an interface. An interface, let's just say, is a set of functions that you can call.
And let's assume that all users have a valuation model. A valuation model means a utility function in traditional economics, or kind of an objective function in machine learning.
So you have an objective function and you have a decision function.
The objective function gives you a single number or some sort of numbers.
The decision function takes those numbers and gives you an action, like what the user does.
And, you know, not to digress, but if you read philosophy, there's a whole argument about whether this is how humans act or not. But ignoring the Kantian type of stuff.
That's how almost all models in machine learning for like,
AlphaGo and stuff, think about the world, right?
There's this kind of value function, decision function.
Now, Byzantine, the value function is: choose a random number, choose an action.
Altruistic, instead of a random number, it's like: choose the same number and choose this particular behavior, always.
Rational is much more like I'm actually observing the environment and like trying to figure
out a valuation and a value of these actions and changing over time.
Fundamentally, Byzantine and altruistic are actually very one-dimensional in that way.
One is purely random and one is purely deterministic.
But rational is actually this adaptive type of adversary.
And it's an infinite dimensional space of functions.
Like the set of functions that can give you utilities and values is infinite dimensional for rational.
But the other ones are actually zero-dimensional, like one-dimensional.
There's like a single function that everyone knows in advance.
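One compact way to see the asymmetry being described, as a hypothetical Python sketch (the action set and state variables are made up purely for illustration): Byzantine and altruistic agents are each a single fixed function, while "rational" is an entire family of functions indexed by a valuation model.

```python
import random

ACTIONS = ["stake", "unstake", "do_nothing"]

def byzantine(state):
    return random.choice(ACTIONS)           # pure noise: one fixed (random) rule

def altruistic(state):
    return "stake"                           # one fixed behavior, always

def make_rational(risk_aversion):
    """Every choice of valuation (here a single risk_aversion knob) yields a
    different rational agent, so 'rational' is a whole space of functions."""
    def agent(state):
        expected_yield, slash_prob = state
        utility = expected_yield - risk_aversion * slash_prob   # objective fn
        if utility > 0.02:                                       # decision fn
            return "stake"
        if utility < -0.02:
            return "unstake"
        return "do_nothing"
    return agent

state = (0.05, 0.01)   # hypothetical (expected yield, slashing probability)
agents = [byzantine, altruistic, make_rational(1.0), make_rational(10.0)]
print([a(state) for a in agents])
```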
And so the problem is by saying Byzantine altruistic rational, you're kind of assuming, hey, there are three equal categories.
But that's not true, because the rational category from a mathematical perspective is significantly larger, like infinitely larger and not just countably infinitely larger.
It's uncountably infinitely larger.
I think there's a lot of kind of classical functional analysis theorems that prove this.
I'd say Nash won his Nobel Prize was kind of related to proving this.
But the point is that, like, you can't actually, you know, when you, when you say, hey,
I've analyzed this protocol under Byzantine altruistic rational, 99.9999% of people who've done that are like,
hey, we're just going to say rational is optimizing quasilinear utility, or, hey, rational is, like, only caring about a certain type of thing.
And the problem is no matter how you reason about any of these systems,
you are fundamentally imbuing a notion of what you think rational is.
And you can never perfectly simulate these such things.
I will be the first person and the last person to tell you that.
But I do think you should be foliating the set of rational actors with a much broader
set of views than what you can do with formal proofs. I think in formal proofs, the problem is really
like, it's very hard to prescribe a model that, you know, kind of can cover this infinite
dimensional space. And I think cryptographers have this willingness to suspend disbelief of like,
hey, we're just going to pretend that the rational actor only does one type of rational action.
But that's just not true, right? And game theory and algorithmic game theory and stuff have
tons of examples of this happening in practice. So I think the better answer, and I think this is
what the biggest algorithmic game theory systems that are in production like Google ads and Facebook
do is they do tons of numerical stress testing of different types of users trying to commit fraud,
different types of users trying to do X, Y, Z type of action, and then you run these simulations
and say like, oh, okay, this parameter for our auction is correct. Or like, that's what we're going to
use tomorrow. And you keep updating it as you get new data. So the yam thing happens. Okay,
SNX whales are suddenly into vegetables. Like, didn't predict that. But now that we add it to our
little repository, that's a new type of rational agent. And so then the next time, so the day after
the yams happen, we can say, hey, look, here is this thing that actually causes this crazy
amount of risk to your system because people who have SNX, they're already leveraged. They printed
some sUSD. And then they took their sUSD, bought more SNX, and put it in YAM. Now your system,
even though you say it's a 700% collateralization ratio, it's actually much lower because
people have been doing this recycling and kind of like sort of weird, weird sort of financial
engineering that they might not even realize they're doing. And now we have a strategy that replicates that. So now when someone else, let's say YAM, adds ATOM, I don't know, let's pretend that there's, like, a synthetic ATOM on ETH that you can deposit, yeah, we can run the same strategy and say, like, this is the amount of risk the ATOM holders are taking. And so my point is,
it's an incremental thing where you're not going to predict everything, but you're going to try to
make your best guess by building the biggest library of possible things and then stress testing
against it. It's a lot like security auditing where you say either there exists formal
verification as this dream that will predict everything, or: here are the set of things I know that could happen, and I'm going to try to carefully look through each line of bytecode and say, this might happen, this might not happen. It's much closer to the latter, if that makes sense.
And so how does like historical data play a role into this? Do you like when you have the simulation,
do you like run it against historical data?
and modify your simulation models until they fit the historical data and then you start to use them to predict?
Or, yeah, how does that work?
Yeah, another kind of philosophical dichotomy that exists in the traditional world is the difference between the financial world and the actuarial world.
So in the financial world, people really care about what are called point estimates.
So point estimates are what your neural net does.
They give you an answer.
They say, like, hey, here's a function.
Here are a bunch of examples.
Train it on those examples so that it gets the right answer.
And then in the future, take a new example and give me a guess of like what the output is.
It doesn't give you any estimate of what the uncertainty is.
It doesn't give you any estimate of, like, how wrong this function that's mapping things can be.
Neural nets don't give you that.
But, like, sort of more traditional statistical methods do that. And so in finance, people care about point estimates because they're like, I want to maximize
my expected reward or economics, microeconomics in general. In actuarial studies, people care about,
like, hey, I have this life table for this insurance I'm underwriting, and I care about the variance
of how much I have to pay. Or, like, hey, yeah, sure, on average I only have to collect $100 in premiums from each person. But, like, there's this one dude who has asbestos poisoning, and it's going to cost us a billion dollars to cover his health insurance.
I'm making up something egregious.
So in insurance and actuarial studies, you care about this kind of like distributional effect.
Whereas in finance, and, you know, I would say machine learning in general, you care about kind of, like, predicting the average.
Although in finance, you care about the tail events blowing you out.
But there's kind of this dichotomy.
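That dichotomy fits in a few lines of Python: simulate a heavy-tailed payout distribution (an arbitrary Pareto toy, chosen only for illustration) and compare the point estimate a finance or ML view would report with the tail quantile an actuary would care about.

```python
import random

# Toy heavy-tailed claim sizes; the distribution and scale are invented.
payouts = sorted(random.paretovariate(1.3) * 100 for _ in range(100_000))

mean = sum(payouts) / len(payouts)            # the "point estimate" view
p99 = payouts[int(0.99 * len(payouts))]       # the "distributional" / tail view
print(f"mean payout ~ {mean:,.0f}, 99th percentile ~ {p99:,.0f}")
```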
And so one way we do this is we fit some of the rational actors' behaviors based on historical data. So we try to take the historical data and say, hey, these actions were done by this address repeatedly. Let's say you tag these addresses, and we're like, this is, you know, a DEX trader that does this type of action. So can we try to infer their utility function? Now that's one type of user in the library, one who's fit to historical data.
Another type of user is one where we leave it open; we say, hey, this user, we're going to parameterize them this way.
We're going to say, hey, they have a value function, but we're not going to say it's precisely these numbers.
And then we sample all the numbers.
We're like, we have a parameter that says how risky they are.
And when it's one, they're a complete gambler.
They just pull the slot machine every time.
And when it's zero, they're very risk averse.
And then we have another parameter that says, hey, how much do they value growth versus how much do they value kind of, like, safe growth.
So they're like, oh, I'm willing to invest in the S&P 500, versus, like, I'm willing to put all my money into Nikola or, I don't know, whatever the hot thing is now. Thanks to Robinhood, I don't even know what the, like, hot stock thing, you know, Portnoy's stock thing, is anymore.
But the idea is you try to say, hey, here's how we parameterize how this agent thinks
about risk and then we search through the whole parameter space.
So we say we're going to grid search from zero to one on their risk level and then show kind of
these heat maps or like these plots or these kind of more descriptive statistics about how at
each parameter, how the system behaves. So you kind of have to do both. That's maybe a long-winded answer
to that where you want some historical types of users, but you also want to try to make sure
you parameterize a space in a flexible enough way that you can search it.
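A minimal sketch of that parameter sweep, with an invented toy agent (the behavioral knobs and return distributions are assumptions, not Gauntlet's models): grid-search the risk parameters from zero to one, run a batch of simulations at each grid point, and record one summary statistic per cell, which is what would be rendered as a heat map.

```python
import random

def simulate_outcome(risk_level, growth_pref, n_steps=100):
    """Toy agent: allocates between a 'risky' and a 'safe' return stream
    according to two behavioral knobs, returns final wealth (invented model)."""
    wealth = 1.0
    risky_weight = risk_level * growth_pref
    for _ in range(n_steps):
        risky = random.gauss(0.001, 0.05)     # noisy, high-variance return
        safe = 0.0002                         # small, deterministic return
        wealth *= 1 + risky_weight * risky + (1 - risky_weight) * safe
    return wealth

# One heat-map cell per (risk, growth) pair: probability of losing >50%.
grid = [i / 10 for i in range(11)]
for risk in grid:
    row = []
    for growth in grid:
        outcomes = [simulate_outcome(risk, growth) for _ in range(200)]
        row.append(sum(w < 0.5 for w in outcomes) / len(outcomes))
    print(risk, ["%.2f" % p for p in row])
```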
I want to ask you about transaction fees, and if you're doing any research there,
and how important that is in the sort of mechanism design space.
Yeah, so I think in traditional mechanism design, it's not quite well studied. And, you know, one of our advisors is Tim Roughgarden, and we're constantly educating him a lot about this type of stuff. And he's really been like, hey, yeah, we just didn't really, you know, we spent the last 20 years building auctions for Google, because that's what mechanism designers and algorithmic game theory folks were doing.
Sorry, when you say in traditional mechanism design, you mean transaction fees applied to other mechanisms than blockchains? Like, in what other places do we see transaction fees as part of mechanism design?
For sure.
Yeah.
So, the biggest, I would say, practical user of mechanism design that exists in the world is online ad auctions.
Okay.
Sorry, what I meant by mechanism design was specifically, like, I was talking about cryptocurrency mechanisms, basically, like, cryptocurrency design.
Yeah.
I guess what I mean is a lot of the math that has been invented for traditional mechanism
design doesn't include transaction fees in the way that blockchains use transaction fees.
And what I mean by that is, when I'm, say, buying an ad or I'm connecting to a futures exchange, I don't pay per message I send to the exchange, right?
But in crypto, I have to actually pay per message that I send.
And so that actually changes the dynamics of a lot of the math.
And a lot of the math that works for ad auctions is completely invalid for blockchains.
Yeah, that makes sense.
Because of this.
It's like a new research space, essentially, is what you're saying.
Yeah, it's 100% new research, because people don't think about this pay-per-message aspect of it.
It's assumed that any user can send as many messages as they want to Google or Facebook
and that they don't have to pay.
They're kind of paying in like there's some DDoS prevention,
but there's not like a, like, hey, you actually have to pay for spam prevention.
And so what we do is we spend a lot of time modeling this.
We don't model it, say, in the way that we could probably prove a theorem about it.
But we do try to say, how should I value, if I'm a miner and I have a mempool, how should I value a certain permutation? Right? Because a mempool is an unordered set of transactions. And the kind of notion is, A, I chose some subset of it, and B, I chose an ordering of that subset. That's the value that's extracted by the miner, right?
And so we try to take kind of the more machine learning-ish, more statistical approach to it, which traditional mechanism designers would say, oh, well, like, how do you know it's optimal?
We just try to say, like, hey, well, any local optimum is good enough, which is sort of the machine learning approach of it.
Of like, what permutation, what subset can you pick that will maximize your value and then what ordering will maximize your value?
So we measure that both in terms of trying to predict distributions of delays.
So submit transaction.
How long is the delay given a fee?
And then we also tried to say what permutation is like most likely.
But the problem is prescribing value functions over permutations is very difficult because it's a very large space, you know, it's a factorial space. So you have to kind of come up with some heuristics for that. But roughly speaking, that's what you do.
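One such heuristic can be sketched as a greedy local optimum, with hypothetical numbers: sort the mempool by fee density, fill the block up to the gas limit, and take the resulting value. This deliberately ignores ordering-dependent value (the front-running and MEV part), which is exactly what makes the full subset-plus-permutation problem factorial-sized.

```python
from dataclasses import dataclass

@dataclass
class Tx:
    fee: float    # total fee offered by the transaction
    gas: int      # gas it consumes

def greedy_block(mempool, gas_limit):
    """Pick a subset and an ordering of the mempool greedily by fee density.
    A local optimum in the machine-learning sense, not the true optimum."""
    chosen, used = [], 0
    for tx in sorted(mempool, key=lambda t: t.fee / t.gas, reverse=True):
        if used + tx.gas <= gas_limit:
            chosen.append(tx)
            used += tx.gas
    return chosen, sum(t.fee for t in chosen)

# Hypothetical mempool: three transactions competing for limited block space.
mempool = [Tx(fee=0.9, gas=21_000), Tx(fee=2.5, gas=120_000), Tx(fee=0.3, gas=50_000)]
block, value = greedy_block(mempool, gas_limit=150_000)
print(len(block), "txs included, total fees:", value)
```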
The good news is that everyone who's writing front running bots is still a human.
And so, like, they write a certain set of strategies, right?
Like, it's not like they're really looking at a strategy that's like: compute the Ackermann function divided by the maximum value that it could have been, and then use that as a random number to flip a coin to decide on the ordering, right?
They're not going to choose some crazy thing whose complexity is like super factorial or something, right?
So so far, you know, we've been discussing this in the context of like simulating an existing designed game.
Do you guys also work on designing new games altogether?
So in HFT, for example, you could simulate the HFT or you could solve some of the problems by like inventing
batch execution. So when it comes to like, you know, for example, on Ethereum, there's like
crazy gas spikes that we've had for the past couple of weeks. You know, we could continue to
simulate this game, but it's probably not sustainable. Like, there's probably a good chance
that the game design itself is broken and we need to rethink how we do block space auctions
in the first place. And so would you be able to use similar methodologies to construct
new games, or is the construction of new games sort of something that has to be just
intuitive and then this stuff is only used to test them out? I think it's sort of,
there's a feedback loop, right, of like, I have an idea, I run a bunch of simulations,
I see if it works, and then I see what doesn't work, and then I mutate my idea until, like,
I get to some type of minimum, like optimum solution.
I think a lot of the problem with things like designing block space auctions is, like, there's a really well-established theory that's very attractive for people to use, which is the theory of ad auctions.
So a lot of the papers on that, I would say that, especially by crypto professors,
are just like cribbing algorithmic game theory results and saying like, hey, they apply here.
But I think that like a lot, there is certainly some theoretical innovation you have to make first.
I think you do have to write the correct mathematical framework and equations before you can really simulate.
But I do think simulation tells you when you're wrong.
It doesn't tell you you're right, but it definitely tells you when you're wrong.
So it's kind of like a property test.
Like, you know, you say this model should do X, kind of like in formal verification, except at a statistical level.
You say, on average, this type of block space thing should do X.
And you use simulation to verify, and then you find, hey, it doesn't work.
So, like, I must have made the wrong model.
So now I have to change something.
And when I say model here, I mean first price auction, second price auction,
weird, like, auction mechanic for block space that you choose, right?
Like, you somehow have to, you know, you should really think of simulation as a way of doing this property-testing identification.
So what piece of the crypto economic field or stack do you think would most benefit right now, like, today, from some of this simulation work?
So would it be like the proof of stake protocols?
Is it the fee models?
Is it some of these on-chain defy stuff like lending, dexes?
I used to think it was proof of stake.
I think the problem for proof of stake from a more practical standpoint is that people are just more risk averse, which is good.
You should be real risk averse for your base layer.
But that also means you're way too slow to, like, try to update.
Like, you know, simulation should be used like: we did something, we observed something, we try to predict what will happen given the new observations, and then we update, and you kind of have this feedback loop. Repeatedly applied, that's when it works best. So, like, that's what happens in trading. That's what happens in chip design, and with simulation tools in other places. But I think proof of stake is, like, very, very slow. And, like, DeFi is basically copying proof of stake, except it's replacing the stake with an insurance fund type of thing. And I think, yeah, DeFi is really the biggest deal right now, for sure, because, like, people are doing all the stuff that they said they would do in proof of stake, except they're doing it, like, recklessly. So I think in the long run, proof of stake will learn a lot of the lessons of failure from these DeFi things. But yeah, right now it's just so much more, you know, you can, like, make a prediction, someone does it, see how it works, use that as an example to add to your simulation. And, like, that's just happening all over DeFi right now. I just don't think it's really happening in proof of stake.
How much does like sort of governance actually impact a lot of this stuff? So,
when you do these simulations, or, like, this mechanism design, you have, like, some model of what the socially optimal utility is for the entire system. And if I was, like, a benevolent dictator and I wanted to maximize, like, the social optimum for this thing, you know, you could figure that out.
Tell me what the best mechanism is and I can go deploy it.
But now what happens when, you know, there's a governance token.
And so sometimes the holders of the governance token are not trying to maximize the social optimum.
They're trying to maximize, you know, they themselves are rational agents.
So it seems like it becomes this like very weird meta thing that you have to also account for.
You have to model first.
You have to model the game that's the mechanism.
So let's say, you know, it's something like Curve, but now you also have to
model the incentives of the
Curve governance token.
The DAO. Yeah.
I think the key is to inject simulation
into the decision-making process.
So, like, when someone is proposing a vote,
you run a bunch of simulations and you say, like,
here are the set of outcomes.
Given these types of utility functions,
put yourself in one of them.
And if you don't find yourself in one of them,
then you can complain.
But assuming you have these sets of value functions:
we've run these simulations under different edge cases, and here are the properties that hold, and here's the probability they hold with.
If everyone's a degen gambler, the probability of the system going to zero went from one basis point before this vote to 5%.
Like, okay, that's something, right?
And I can give you an uncertainty estimate.
So one of the things that I was saying before, about point estimates (finance, machine learning)
versus uncertainty estimates (actuaries, insurance, statisticians),
is that if you can provide good uncertainty estimates,
if I tell you, hey, I'm increasing the chance of SNX going to zero
from one basis point to 5%,
but 5% plus or minus 0.2%,
then you actually have a lot more to work with.
If I can give you more and more confidence that this is really increasing,
then you can impact governance in a way that's quantitative
and not like this emotional view of the world.
Because like at the end of the day, it's a new field.
People don't really, like they're voting kind of blindly.
And at least giving some sort of uncertainty estimate
lets people be like, okay, well, I'm rational,
but I'm not, am I rational in an iterated game?
Right.
Like if I'm taking a single step, it's rational for me to say increase the fees 99%, because I'm an SNX holder and I want all those fees.
But this kind of iterated simulation says, if the game lasts long enough, we might go to zero
if I try to collect too many fees. And if that happens, is that really worth it? And so it gives
people a way to figure out their own valuation of how much they want to risk adjust. So I think
simulation is not going to be able to perfectly predict these governance actions, but it's going to show you the outcomes of what happens when you choose them.
And so it's an integral part of giving quantitative justification for these things.
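To illustrate the uncertainty-estimate idea, here is a rough Python sketch: estimate the probability of a bad outcome by Monte Carlo and report a confidence band rather than a bare point estimate, the way the "5% plus or minus 0.2%" example above does. The failure model below is a toy placeholder, not anything Gauntlet actually uses.

```python
# Sketch of reporting a risk number with an uncertainty estimate.
# The failure model is a deliberately crude assumption for illustration.
import math
import random

def system_goes_to_zero(fee_rate: float) -> bool:
    """Toy placeholder: higher fees drive users away and raise failure risk."""
    failure_prob = min(1.0, 0.0001 + 0.05 * fee_rate)
    return random.random() < failure_prob

def estimate_failure_probability(fee_rate: float, trials: int = 100_000):
    failures = sum(system_goes_to_zero(fee_rate) for _ in range(trials))
    p_hat = failures / trials
    # Normal-approximation 95% confidence interval for a proportion.
    stderr = math.sqrt(p_hat * (1 - p_hat) / trials)
    return p_hat, 1.96 * stderr

if __name__ == "__main__":
    for fee_rate in (0.0, 1.0):  # before vs. after a hypothetical fee vote
        p, ci = estimate_failure_probability(fee_rate)
        print(f"fee_rate={fee_rate}: P(zero) ~ {p:.4f} +/- {ci:.4f}")
```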
And in the normal world, it's much harder, actually, to impact governance in a quantitative way.
Whereas in crypto, it actually feels like it's quite tangible that you're like giving people
risk assessments based on a lot of very clear financial data.
But in a lot of stuff you do in the real world, you have to infer whether the data is real, whether it's accurate.
Sometimes you're like, well, someone may have been like kind of injecting noise into the data.
But on-chain data being something you can trust as valid is actually quite important for this.
So I'd like to come back to an earlier point, which is, let's imagine that someone is building a new blockchain, as happens these days, right?
And, you know, a lot of times, I think teams are kind of focused on, on like building the product, growing in community.
And then, of course, one of the things that often comes up at some point is doing a security audit.
And a security audit will entail, like, a bunch of things.
But there's some like design aspects that are also part of that audit.
I'm curious how you consider your work to be complementary to that, or should it replace it? Is it better? Or is it somewhere in between? Like, where do you
put yourself in that sort of early-stage research when one is building a blockchain?
I think it's pretty complementary to both normal audits and formal verification. Because I think
one of the problems for formal verification is related to the thing you're talking about, which is
that naively, there's an exponential state-space blowup once I start composing systems,
like K systems. Once K systems interact, the number of bits you need to describe the state grows linearly in K, so the state space has blown up exponentially. But simulation
is more about, well, if I don't have to sample every possible
action, if I instead sample the most likely actions, as well as the ones that are near the most likely
actions, what happens? And so there are two different types of scenarios, right? One is the pure worst
case, which might take infinitely long to search through the set of tests. And the other is,
how do I kind of use the expected behavior to estimate risk in a way that is intuitive and
interpretable to the non-developer? And both of those, I think, are valid ways of stress testing,
but they're very different, and they are extremely complementary. So when an exchange like
the CME, the Chicago Mercantile Exchange, which is the biggest futures market in the
world, builds a new piece of software, they of course get audited and do
traditional cybersecurity audits, but they also do simulation. And they stress test things like, hey, did we
choose the right tick size as a parameter? Did we, like, are we resistant to kind of certain
types of malicious trading strategies that try to, like, block everyone out of the market? And there's a whole
literature of this. Like, the SEC themselves spends a bunch of time doing these stress tests on exchange
code to show that they meet compliance. And so they're very complementary, but they
stress very different things. One is really stress testing user behavior and the other
is stress testing like code behavior. And user behavior is about probabilities. Code behavior is
about determinism, pure determinism. But they're related. So in all of our simulation research,
and I think this is what distinguishes us from other people who've kind of tried to do this,
is we run everything against the real code.
So what we build is, kind of, think of OpenAI Gym or the AlphaGo training program.
What happens is people build a harness around the real piece of code.
And then the harness has a way, has a set of interfaces that you can model the different types of users in.
You make a domain-specific language
in which you can program the different types of users,
and then the users interact with the real code.
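Here is a minimal Python sketch of that harness pattern: agents are written against a small interface, and the "real code" sits behind a wrapper (stubbed out here with a toy price rule). The class and method names are hypothetical, not Gauntlet's actual DSL or interfaces.

```python
# Sketch of a Gym-style harness: agents act on observed chain state, and the
# harness forwards their transactions to the wrapped client (stubbed here).
from dataclasses import dataclass, field

@dataclass
class ChainState:
    """Stand-in for state read out of a real client (e.g. a forked node)."""
    block: int = 0
    pool_price: float = 100.0

class Agent:
    def act(self, state: ChainState) -> dict:
        raise NotImplementedError

class Arbitrageur(Agent):
    def __init__(self, external_price: float):
        self.external_price = external_price

    def act(self, state: ChainState) -> dict:
        side = "buy" if state.pool_price < self.external_price else "sell"
        return {"type": "swap", "side": side, "size": 1.0}

@dataclass
class Harness:
    """Feeds agent transactions into the wrapped client and advances blocks."""
    state: ChainState = field(default_factory=ChainState)

    def submit(self, tx: dict) -> None:
        # In a real harness this call would hit the actual node/VM; here it's a toy rule.
        self.state.pool_price *= 1.01 if tx["side"] == "buy" else 0.99

    def step(self, agents: list[Agent]) -> None:
        for agent in agents:
            self.submit(agent.act(self.state))
        self.state.block += 1

if __name__ == "__main__":
    harness = Harness()
    agents = [Arbitrageur(external_price=105.0)]
    for _ in range(10):
        harness.step(agents)
    print(harness.state)
```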
And I think I've seen a lot of simulations,
especially in 2017,
I saw a lot of kind of less rigorous simulation stuff
that kind of is like,
hey, well, we think the model of how the blockchain itself works is this,
and we're going to say, this is a Poisson process,
and this is a this thing, and this is a this thing.
And then we have models of users interacting with them.
The problem is in reality,
like a lot of these code things that cause problems
for formal verification or security auditors also will affect the economics in super edge cases.
So you want to minimize the amount of surface area that you cede to your model.
You want to say, hey, look, we're running this against the real code as much as possible.
And this is something people in trading do a lot.
And I think that I only really respected this once I saw the difference in trading between,
hey, like, this exchange happens to use only 18-bit fixed-point integers, and, like, all of a sudden,
the strategy loses money, right?
Like, that's the type of detail that a lot of people who are like,
oh, well, I just learned Python and used PyTorch, and I made a model of your blockchain,
are missing.
Like, they've never seen people lose money because you went from floating
point to 18-bit fixed-point integers randomly.
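A tiny, purely illustrative Python example of that failure mode: the same trade evaluated in floating point and in a coarse fixed-point representation. The fractional-bit count and prices are arbitrary assumptions, chosen only to show how a small rounding decision can erase a strategy's edge.

```python
# Illustrative only: switching a venue from floating point to a coarse
# fixed-point representation can silently change a strategy's economics.
FRACTIONAL_BITS = 8
SCALE = 1 << FRACTIONAL_BITS  # prices stored as integer ticks of 1/256

def to_fixed(x: float) -> int:
    return int(x * SCALE)  # truncation, as a venue might do

def from_fixed(n: int) -> float:
    return n / SCALE

buy_price, sell_price = 100.001, 100.003  # marginally profitable in floats
float_pnl = sell_price - buy_price
fixed_pnl = from_fixed(to_fixed(sell_price)) - from_fixed(to_fixed(buy_price))

print(f"float P&L: {float_pnl:+.6f}")   # small positive edge
print(f"fixed P&L: {fixed_pnl:+.6f}")   # rounds to zero: the edge disappears
```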
And I think that that's why you need to actually run this stuff against the real code.
Because there's just like tons of weird developer decisions, some random if statement somewhere that completely like takes all the money out.
And you don't realize why until you're actually running tons of simulations against the real thing.
So it's complementary, though.
I think security auditors are really focused on binary objective functions, this kind of yes/no property testing.
And, you know, I think what we're focused on is this kind of statistical version of that, but we still want to run against the real code, right?
We still, you know, compile against whatever Docker image you give us.
We run it against whatever kernel modules you say it should be running against.
Because I think you never know when some random piece of the code just changes the economics completely.
What kind of tools do you use to do this?
Like, are these custom in-house built tools, or are there like some, you know, public tools that you kind of use to do these sort of simulations, both, you know, with dummy code and then also when you want to test with real code?
Yeah, so similar to security auditors, we kind of build a lot of our own versions of the
virtual machines themselves and like add in extra tracing functionality and extra kind of like
tracking functionality for like agent submits a trade to uniswap and we track kind of like
where through the client that transaction goes and like oh did it get halted at a certain point
or, oh, did the networking layer think it looked malformed. But we spend a lot of time, I mean, certainly
now we're pretty much only on Ethereum. We were doing
a lot more other chains, but honestly, DeFi and Ethereum have the most need for this.
So we have kind of written our own fork of geth, where we have optimized
a bunch of things for doing simulation. One thing to remember when you're doing simulation is you're
controlling the threat model, you're describing all of the users in the system. So by controlling
a threat model, you can actually reduce a lot of the cryptography burden.
And by doing that, you can make the performance a lot better.
And so we've spent a lot of time building this client with extra tracing and kind of ways of having multiple agents interact with the same node.
Multiple agents kind of work off the same kind of simulated blockchain state, stuff like that.
And so, yeah, so we, we have that.
And then B, we have sort of a domain-specific language that is mainly
in Python, because I think from a data science perspective, anything else is just still too hard.
As much as I love Julia, I'm sorry, Julia fans.
It's just still not quite there.
But it's always one year away, right?
But, sorry, Julia is a scientific programming language that's like way faster than Python.
It compiles down like Rust and C++ do, and it's supposed to be like the real deal.
But if you talk to every data scientist in the world, they're going to tell
you they use Python or R or something.
So we have these Python bindings.
We have this DSL.
The DSL compiles to some bytecode that basically gets run against the virtual machine directly.
So there's like kind of a layer in between that takes the compiled agent code and
has it interact with the virtual machine.
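As a toy illustration of that pipeline, the Python sketch below "compiles" a tiny agent program into a flat list of instructions and replays it against a virtual-machine stub. Everything here, the op names, the program syntax, and the FakeVM, is invented for illustration and is not Gauntlet's actual DSL or bytecode.

```python
# Toy pipeline: tiny agent "program" -> compiled instruction list -> adapter
# that replays it against a VM interface (stubbed out).
AGENT_PROGRAM = """
approve token=DAI amount=1000
swap   pool=DAI-ETH amount=250
swap   pool=DAI-ETH amount=250
"""

def compile_program(source: str) -> list[tuple[str, dict]]:
    """'Compile' each line to (opcode, args); a real DSL would type-check here."""
    ops = []
    for line in source.strip().splitlines():
        opcode, *kvs = line.split()
        ops.append((opcode, dict(kv.split("=") for kv in kvs)))
    return ops

class FakeVM:
    """Stand-in for the real client; records the calls it receives."""
    def __init__(self):
        self.trace = []

    def execute(self, opcode: str, args: dict) -> None:
        self.trace.append((opcode, args))

def run(ops, vm: FakeVM) -> None:
    for opcode, args in ops:
        vm.execute(opcode, args)

if __name__ == "__main__":
    vm = FakeVM()
    run(compile_program(AGENT_PROGRAM), vm)
    for entry in vm.trace:
        print(entry)
```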
I think, you know, kind of in the same way that it took Trail of Bits forever
to open source a lot of their Crytic tooling,
I think we want to open source some of it,
but it's just going to take us a while.
But yeah, right now it's mainly that type of stuff.
A lot of what we use is based on a lot of the work
that Google and Facebook have done
on compiling Python models to C++.
That type of stuff is really deep in our stack
for increasing performance.
So before we wrap up here, I'd like to ask you a little bit about gauntlet, the business and what does the current business look like?
I mean, you guys put out all these reports and all this research, but who do you work for?
And then also, you know, what's the sort of roadmap and plans looking forward?
Yeah. So, you know, I think a lot of what we do right now is putting out reports and working with the protocols themselves, kind of like security auditors,
although we've been taking a lot more of an active role in governance.
So we're sort of the third largest COMP holder in governance by votes.
And we have a bunch of stuff we're working on right now to try to automate the actuarial
predictions I was telling you about earlier.
So imagine that there's a governance vote.
Someone says, hey, we want to change the collateral factor on Compound for WBTC to this value.
We will basically auto-generate a bunch of simulations and risk estimates for like what
the before and after of this particular vote look like to our best estimates and then present
them to the user in a way that's intuitive.
So you can be like, oh, well, you know, by making this change, we decrease the probability
of default by this amount, but then we also lower the revenue, the cash flow that the network gets, by this amount.
And then, you know, a user who's like maybe more financially educated,
but not so like in the weeds on like how the protocol works can be like,
oh, okay, I kind of get that.
This is what this change does.
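To make that before/after report concrete, here is a hedged Python sketch: run the same simulation under the current and the proposed parameter and summarize the deltas. The liquidation model and the revenue proxy are deliberately crude placeholders, and the 0.60 to 0.75 collateral-factor change is a hypothetical example, not a real proposal.

```python
# Sketch of auto-generating before/after risk estimates for a parameter vote.
import random

def simulate_defaults(collateral_factor: float, trials: int = 50_000) -> float:
    """Fraction of simulated price shocks in which collateral no longer covers debt."""
    defaults = 0
    for _ in range(trials):
        price_move = random.gauss(0.0, 0.25)        # one-period price shock
        if (1.0 + price_move) < collateral_factor:  # borrowed up to the factor, now underwater
            defaults += 1
    return defaults / trials

def vote_report(current: float, proposed: float) -> dict:
    before, after = simulate_defaults(current), simulate_defaults(proposed)
    # Toy revenue proxy: a higher collateral factor allows more borrowing, hence more fees.
    return {
        "default_prob_before": before,
        "default_prob_after": after,
        "default_prob_delta": after - before,
        "relative_revenue_change": (proposed - current) / current,
    }

if __name__ == "__main__":
    # Hypothetical vote: raise the WBTC collateral factor from 0.60 to 0.75.
    for key, value in vote_report(0.60, 0.75).items():
        print(f"{key}: {value:+.4f}")
```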
And we're also working on sort of what we call auto-gov,
which is a way where we monitor the markets and then auto-submit proposals.
So some of the proposals are more simple, but some of them need a little bit of program synthesis, where we generate the code for the proposal.
We run a bunch of simulations.
We say, hey, there's way too much risk in the system because a bunch of yield farmers decided to mint too much sUSD.
And we're going to submit a proposal that says increase the sUSD minting fee by
X, and here's the reasons why, and here's the code for doing it.
And so the dream is to have this sort of automated system that can monitor these things
and submit proposals to governance in a fully automated fashion.
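A very rough Python sketch of what such an "auto-gov" loop might look like: watch an on-chain metric, rerun a risk estimate, and emit a proposal when a threshold is crossed. The function names, the risk model, and the thresholds are all hypothetical assumptions, not a description of Gauntlet's system.

```python
# Hypothetical auto-gov tick: read chain data, estimate risk, maybe emit a proposal.
import random

def read_susd_outstanding() -> float:
    """Placeholder for reading the current sUSD supply from chain data."""
    return random.uniform(80e6, 140e6)

def estimate_risk(susd_outstanding: float) -> float:
    """Placeholder risk model: more outstanding supply, more insolvency risk."""
    return min(1.0, susd_outstanding / 200e6)

def make_proposal(risk: float) -> dict:
    fee_bump_bps = round(risk * 50)  # crude mapping from risk to a fee increase
    return {
        "title": f"Increase sUSD minting fee by {fee_bump_bps} bps",
        "rationale": f"Simulated insolvency risk at {risk:.1%} exceeds the target.",
        "calldata": f"setMintingFee(+{fee_bump_bps}bps)",  # stand-in for generated code
    }

def auto_gov_tick(threshold: float = 0.5):
    risk = estimate_risk(read_susd_outstanding())
    if risk > threshold:
        return make_proposal(risk)
    return None

if __name__ == "__main__":
    proposal = auto_gov_tick()
    print(proposal or "risk within bounds; no proposal submitted")
```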
And then the smart contracts pay for this.
It's a little bit of the opposite business model of most blockchains.
Most blockchains and smart contracts want to make their coin worth a lot,
whereas we want to kind of reduce the tragedy of the commons and be paid as a service provider, but in an automated way.
And how do you get the time to like spit out all these papers?
You know, I just remember I had this idea a couple of months ago of, like, oh, you can combine ideas from Stellar and Avalanche to get this new consensus protocol.
And then you're like, oh, me and my brother, we wrote this up like two months ago for fun.
And I'm like, what? Like, where do you get all the time to write all these papers? And is that part of the work you do? For example, you wrote this paper on Uniswap, or AMMs in general. Is that also work you're doing with these companies, or is that sort of just something you do on the side?
That is something I do on the side, but I think it's very closely related, in the sense that, you know, you were talking earlier
about like, can you discover new mechanisms by pure sort of simulation methods?
I think that there's kind of this interplay between the theory and actual discovery of these things.
So you actually need to make the theory so that you can simulate it.
Right.
Like once you have the theory, you can start saying, like, here's where the theory breaks.
And that's where we're going to simulate.
And that's where we're going to do kind of these stress testing type of things.
And I feel like right now the way that things look, especially in DeFi, it feels a lot like the late 2000s, early 2010s in machine learning.
It feels a lot like quantitative finance in the 1980s, where if you can figure out how to make the valuation model that people use, then you will like,
actually impact the usage of these systems. And so that is related in that, like, yes, we use the same
models and simulation, but also we have more people using these things because, like, they understand
these financial aspects. So there's kind of this dual play between, like, doing research and
convincing people that these risk metrics are correct. And I think, I think writing the research
is, is quite crucial to that. It's, it's the equivalent of open source software for this type
stuff. Cool. So where should people go to find out more about Gauntlet and your work? So our Twitter
is at Gauntlet Network. For me, my Twitter is at my name at Tarun Chitra. So T-A-R-U-N-C-H-I-T-R-A.
You know, we publish a lot of stuff, but I think we're going to be coming to a governance
vote near you soon. So, cool. If you're in that realm, you will see us, or you're already
seeing us in Compound, but I think that's the story.
Great. Thanks for coming on, Tarun.
Yep. Thanks.
It doesn't end here. There's more to this conversation, and you can hear it on Epicenter Premium.
As a premium subscriber, you'll get access to a private RSS feed where you can hear the
interview debrief and get enhanced features like full episode transcripts and chapters,
which allow you to easily skip to specific sections of the interview. You'll also get
exclusive access to roundtable conversations with Epicenter hosts and bonus content we put out from
time to time. Go to premium.epicenter.tv to become a subscriber and support the podcast.
