Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Eli Ben-Sasson: StarkWare – Productizing zk-STARKs to Provide Blockchain Scalability and Privacy
Episode Date: October 22, 2019

This past year, we have witnessed what some are calling a "Cambrian Explosion" in zero-knowledge proof systems. New proof systems based on a variety of cryptographic assumptions are popping up every week. And while zero-knowledge systems are known for their privacy-preserving characteristics, they have proven particularly useful for scaling blockchains through off-chain computations.

We're joined by Eli Ben-Sasson, Co-founder and Chief Scientist in the East at StarkWare. His company is developing a full proof-stack, which leverages STARKs. Pioneered by Eli, STARKs are zero-knowledge cryptographic proofs, which are succinct, transparent, and post-quantum secure. StarkWare has demonstrated how zk-STARKs may be leveraged to provide off-chain scalability by generating proofs of computational integrity which may be verified on-chain.

Topics covered in this episode:
- Eli's trajectory and transition from academia to the startup world
- The origin story of StarkWare and the founding team
- The current explosion of zero-knowledge proof research
- An overview of zero-knowledge proof systems and how they work
- What are STARKs and what are their properties
- How STARKs are different from SNARKs and Bulletproofs
- What are zk-Rollups and how they are used by StarkWare to achieve scalability
- The StarkDEX experiment and the scalability benefits it demonstrated
- The issue of data availability with layer-2 scaling solutions
- StarkWare's business model and the solutions they are building for customers

Episode links:
- StarkWare
- StarkWare resources
- StarkWare blog
- StarkWare Twitter
- StarkWare Sessions conference videos
- Epicenter Podcast SFBW Week Meetup
- DeFi Hackathon sponsored by Cosmos
- SF Blockchain Week 2019 (use the code EPICENTER for 20% off tickets)
- Macro.WTF

Sponsors:
- Vaultoro: Trade gold to Bitcoin instantly and securely starting at just 1mg - http://vaultoro.com
- B9Lab: Level up and become a Solidity smart contract auditor - 5% off with the code EPICENTER -
https://solidified.b9lab.com/epicenter

This episode is hosted by Sebastien Couture & Sunny Aggarwal. Show notes and listening options: epicenter.tv/310
Transcript
Hi, welcome to Epicenter.
My name is Sebastien Couture.
And my name is Sunny Aggarwal.
Hey, Sunny.
How's it feel to be back in the U.S. after DevCon?
Pretty good.
My trip was cut a little bit short in Japan due to the typhoon.
I decided to jump out of there as fast as possible.
But yeah.
Yeah, you made it in the nick of time, I think, right?
Like, your plane was... I was imagining you, like, flying out and the hurricane, like, coming up behind it.
I was just, like, it flew out into the clouds like in the movies. Or the typhoon, rather.
Yeah, I know a bunch of people who left even a few hours after me. Their planes were grounded, or they had to make it through.
Luckily, Osaka wasn't the brunt of the damage. It was mostly more towards Tokyo. But yeah, still quite a bit of, like, rain and wind and stuff. It would have put a damper on any vacation.
Yeah. So how was DevCon?
It was pretty good. It was a bit, like, weird in a way.
It felt very unlike other past DevCons in a lot of ways.
Keep in mind, I've only been to one DevCon before, which is DevCon 3 in Mexico City.
But it was cool.
I mean, I think there was a lot of talk about Ether 2.0 and whatnot.
I would say my overwhelming thing I would just say is it felt less like a DevCon
and more like Osaka Blockchain Week.
Yeah.
In the sense that it wasn't very Ethereum.
It didn't have the same Ethereum focus as it normally does.
Like, we walk into the main venue.
There's no... where have the rainbows and unicorns gone?
And all the main stage talks were, like, non-Ethereum stuff and whatnot.
Why do you think that is?
I think there's this interesting question of what is the Ethereum community.
Is it the core Ethereum technology, that specific chain?
Or is it this community of people who are interested in decentralized applications and Web3 and DeFi and whatnot?
And so if it's the latter, does that mean that other layer one platforms that are also trying
to push forward that same movement are also part of the Ethereum community or not?
And I think that was kind of the general debate, where it seemed the Ethereum Foundation
seems to think that they are,
and then a lot of the, or portions of the,
ETH community tend to think that they are not.
So I think that's kind of where some of the debate comes in.
I mean, I wasn't there, but I've heard a lot of similar things from people who were.
I heard on another podcast someone say that increasingly, you know,
other projects that are trying to launch their own layer one solutions,
in order to grow communities need the kind of buy-in from the ETH,
Ethereum community, and that being at this conference and presenting their ideas there is sort of like showing good faith and, sort of, patte blanche.
I don't know if you can say that in English.
It's a way to gain credibility in order to launch new layer one solutions.
Yeah.
I mean, exactly.
And I think that's kind of a little bit of what I meant as well where it's like, you know, there is this DAP community.
and they're all currently mostly in Ethereum.
And it can be very difficult for, like, new L1s
that are trying to solve a similar problem to Ethereum,
but maybe using different technology or different consensus protocol
or different staking mechanism or whatever.
But if they're still solving the same problem of being an L1 smart contracting system,
if all the smart contract developers are working on Ethereum already,
the only way to get them is to poach them, if you want to be cynical,
or attract them, if you want to be more lenient, generous.
Yeah, that's interesting.
It would be interesting to see how these ecosystems grow and what synergies will exist
between them in the future for the ones that will remain because I presume a lot of
these other layer one solutions that are being proposed might find their own niche applications
or might even just disappear in a couple of years.
Luckily for Cosmos, what's nice is we're able to make a pitch not just on technological changes. There are, I think, some deeper ideological and, like, structural differences between Cosmos and Ethereum, which is why I think it's a little bit more complementary, much more positive-sum. You know, in Cosmos we don't really believe in, or we're not focused on,
layer one, like, true and complete smart contracting systems, for example.
Right.
No, I kind of get that complementary aspect as well.
I feel that Ethereum and Cosmos are more complementary than, say, other platforms.
Cool.
Well, I'm glad you had a good time.
And I'm going to see you soon because we're going to be at SF Blockchain Week,
which we'll get to in a few minutes.
But first, I want to introduce our guest for today.
Our guest is Eli Ben-Sasson, who's the co-founder and chief scientist in the east at Starkware.
Yeah, so Starkware actually has two chief
scientists, one in the east and one in the west, which is crazy, and this goes to show the amount
of crazy research that is going on in that company. So prior to founding Starkware,
Eli was a professor at the Technion, the Israel Institute of Technology, and he held positions at MIT,
Harvard, Princeton. And his research focuses on theoretical computer science and proofs and
computations. And he's particularly interested in how you can apply these proof systems to
applications and decentralized applications and cryptocurrencies.
So if you're a long-time listener of the podcast, you'll remember Brian and Meher's interview
with Eli in 2016.
This was long before there was even an idea for Starkware.
In fact, it would be another two years before the release of the STARKs paper, which Eli
co-authored.
And he was also the co-founder of this little cryptocurrency, privacy preserving cryptocurrency
called Zcash, which we've also talked about a lot on the podcast.
So I really enjoyed this conversation because it was, we sort of broke down the types of zero
knowledge schemes which exist today, explaining the differences between Starks and Snarks and
bulletproofs and all of these terms that we've been hearing more and more lately.
And generally, I thought it was a really good refresher on zero knowledge proofs.
I mean, I know you spent a lot of time researching these things, but for me, it was a good
way to sort of get up to date on what the state of all these things are at.
And in a moment where, as he describes it, there's sort of a Cambrian explosion in the zero knowledge space, it's a useful conversation to have, to have a better idea of like where we're coming from, where things are at and where we're going.
What did you think of the conversation with Eli?
I think that especially in the last few months, this whole Cambrian explosion thing has been... you know, I try my best to, like, keep up with what's going on in zero knowledge a little bit.
I'm not the best cryptography expert, but I would try to at least keep in my head,
like, okay, these are the tradeoffs between these different types of zero-knowledge
implementations. Like, okay, this is how SNARKs compare to Bulletproofs to STARKs.
But now in the last few months, it's like, oh, man, there's Sonic and SuperSonic and PLONK and this
and that and that.
And I've just even stopped keeping track at this point.
And so it's good that there are some very smart people who are focused on, like, you know,
mapping all of this stuff out and, like, pushing that field
forward. And it's really cool that, you know, this is a company that isn't really focused on... they
don't have a chain or a project. Like, you know, they have some projects that they've worked on,
but they're really more sample projects to show off the technology. It's really
more of a research team that's been structured as a company. And so that was
kind of interesting. Yeah, I like their approach. And I'm curious to see what will come out of it.
like what are these products that that will be released?
I know that they're working on one product to help scale exchanges.
And so we talked about StarkDEX, which they've built,
which is really sort of a proof of concept to demonstrate the abilities of these future products.
But, you know, yeah, this will be a B2B company servicing companies in the crypto space
and not so much servicing sort of like crypto users, at least for now that seems to be the direction.
And yeah, you mentioned all these other projects.
Yeah, I mean, that's also an interesting aspect of what's going on at the moment.
There's just so many things going on.
And we're looking at having a lot of those folks on the podcast as well.
I think we'll probably get to meet a lot of them in SF.
So, yeah, before we get into the interview, let's talk about SF Blockchain Week.
It's coming fast.
There's lots going on.
And so let's break it down.
And don't fast forward this one because I'm going to tell you how you can get free tickets
to the main event conference.
So first and foremost, if you're going to SF, you know, if you want to meet
with me and Sunny, we're going to be having a casual drinks meet up on the evening of the 29th.
It'll be after day two of CESC, so we'll have it in Berkeley close to the venue. I know that
we've got a pretty big audience in SF, so I hope to see many of you Bay Area listeners come
and hang out with us. Drinks are on us, and you can register at epicenter.tv/sfmeetup.
I'll be, and I think you too, Sunny, will be at CESC on the 28th and 29th.
And then the main event, SF Blockchain Week Epicenter on the 31st and 1st.
I'm super excited to be emceeing the epicenter conference, which is quite fitting because
they have the same name as this podcast.
They're organized by Dekrypt Capital.
And Sunny, you're also going to be speaking at this conference.
Yeah, I'll be moderating some panels and I'll be speaking at the CESC event.
which CESC, by the way, is also it's being organized primarily by blockchain at Berkeley,
which is an organization I co-founded a couple years ago.
Cool.
And you're also putting on an event.
Why don't you tell us about that?
Yeah.
So I'm kind of also helping put on another two events, actually.
But one of them is this macro.wtf event.
So basically, back in Osaka, the day before DevCon, there was this awesome event called
defi.wtf. And it was just, like, a one-day conference where they invited a bunch of DeFi speakers.
And it was just a really good conference. Like all the speakers were really great. The vibes were
good. They had this like cool aesthetic, which I really like. And so yeah, it was really good event.
And so after the event, I was hanging out with them, and we were chatting about,
you know, I want to see more. And I was like, okay, we should turn this, like, dot-WTF thing
into a thing. Like, let's do another event.
And I've been reading a lot mostly on macroeconomics lately.
And so I want to do something where we put crypto in the context of macroeconomics.
And so that's what we're doing on the Wednesday of SF Blockchain Week.
So the day between CESC and Epicenter will have our own little event.
On the 30th.
Yes.
And if anyone's interested in speaking, we're still actively looking for speakers.
And so just feel free to message me on Twitter or on Telegram.
My handle is sunnya97 on both of those.
So that's macro.wtf for more details.
So yeah, so tickets are still available for CSC and the Epicenter Conference.
So if you want to get tickets, you can go to epicenter.tv/sfbwtickets and enter the code EPICENTER for 20% off.
And here's how you can get free tickets.
They've got a deal with eToro, who were a guest last week.
Yoni Assia was on the podcast.
So if you create an account with eToro and deposit $50,
you'll get a free ticket to the Epicenter conference.
I mean, that's a pretty good deal, right?
Like, a $50 deposit on eToro?
So to get in on this deal, you just need to go to sfblockchainweek.io/etoro,
and the details are there.
I think you need to do it before the 28th, which is fitting because that's when the conference starts.
Then we're also going to the DeFi hackathon over the weekend.
Sunny, tell us about this hackathon and why people should be excited about it.
Yeah, so Cosmos is one of the lead sponsors of the hackathon.
But, you know, I'm really excited to kind of show off some of the stuff that we've been working on on Cosmos and why it's a really cool platform for building cool DeFi applications, some of which I've been working on over the past couple of months.
You don't have to build on Cosmos for the hackathon.
You can build on anything you want.
And it'll be really interesting just to see a hackathon that's really focused on
DeFi products.
Yeah, I mean, there's a $50,000 prize pool in ATOMs for winning teams.
So that's a good reason to attend.
You know, it's going to be super DeFi focused.
So, I mean, you could build like a lending platform.
You could build like a stable coin on Cosmos, for example.
And I think IBC, yeah, IBC will be ready by then.
So you can also interoperate with the existing cosmos ecosystem so that there's all sorts
of possibilities here that opens up.
Yeah, that's one of the most exciting things. It'll be the first time IBC is kind of ready for people to play with. And so a lot of people are going to be coming in. And, like, I know, for example, Kava, for their project, they're working on an IBC demo of, like, moving atoms and stuff onto the Kava testnet and collateralizing them to create stablecoins. So yeah.
That's super cool. I'm, like, really excited about that. To register for the DeFi Hackathon, which is organized in
part by Cosmos, our sponsor, go to epicenter.tv/sfcosmos and be sure to let them know that you
heard about it on Epicenter. I'd also like to tell you about our other sponsors for today's episode.
So recently I started reading The Bitcoin Standard by the economist Saifedean Ammous, and he spends a
considerable amount of time at the beginning of the book talking about the history of metals as money.
And throughout the years, throughout the centuries, copper, silver, and gold have been used as money.
But what has remained consistent is gold's position as the most stable and trusted form of money.
And to some extent, that remains true today.
In fact, as he describes it, central banks are still buying and stockpiling gold in great amounts.
So if you're holding crypto and you'd like to buy gold, there's no better
way to do that than at vaultoro.com. Vaultoro is a leading gold-to-crypto exchange, and they've just
released their brand new V2 platform, which looks and feels great. You can create your account in just
a few minutes, and once you're verified you can transfer Bitcoin or Dash onto your Vaultoro account
and start trading gold. What I love the most about Vaultoro is that you're not buying some
gold derivative or some asset which is backed by gold. You're buying real, hard gold, man. It is
protected by Brink's, stashed in vaults deep in the Swiss mountains. And if you'd like,
I mean, you'd have to be a little crazy, but you can have that gold delivered to you. At Epicenter,
we've been friends and customers of Vaultoro for many years. We've held a portion of our
crypto assets in gold, and we've always been happy to have that stability whenever the markets
were really volatile. So to start trading, go to vaultoro.com and create your account. And when you do,
do me a favor: on the bottom right of the website, there's a little yellow
support icon. Click on it, and in the message box, just say, "I heard about you on Epicenter."
That would make me really happy. We'd like to thank Vaultoro for their support of the podcast.
I'm also really excited to tell you about our new sponsor this week, and that is B9 Lab.
And if you don't know about B9 Lab, they are the premier blockchain developer academy on the
market. I've been following them for many years. In fact, their co-founder, Elias Haase, is at about
every blockchain conference in Europe. So we've bumped into each other many times over the years.
And it's been great to see this company grow from a small team and just a handful of courses to now
over 15 people with professional instructors and a breadth of courses that spans just about every
blockchain protocol on the market. So they have Ethereum developer courses, courses for
Corda and Hyperledger Sawtooth and Fabric. And they're adding new courses all the time.
And one of those courses that they're adding is the brand new certified solidity and smart contract auditor course.
It's an eight-week program.
It's a part-time course.
And it's for experienced Ethereum developers who want to expand their skill set and become smart contract auditors.
This course will introduce you to the current state of the smart contract security ecosystem.
And you'll learn through exercises which pull from vulnerabilities that were found in the wild.
And it provides advice for those who are looking to enter this market.
So if you're an Ethereum developer, just think about what an eight-week smart contract auditing
course could mean for your hourly rate or the salary that you can ask for in an interview.
I mean, chances are you can ask for much more if you have this skill set.
So B9 Lab have partnered with Solidified, a smart contract auditing platform, and they're
offering three paid internships at the end of this course and the possibility to join their auditor
pool.
The course starts on November 18th, and there are only 100
seats available. So if you're interested in this course, register quickly because these seats are
going to go fast. To sign up, go to solidified.b9lab.com. The link will be in the show notes and use
the code epicenter at checkout to get 5% off the course price. And this code is valid on all the
courses on b9lab.com so you can get that 5% discount on anything that you see there that you find
interesting. We'd like to thank B9 Lab, the blockchain education specialists, for their support of
Epicenter. So with that, here is our interview with Eli Ben-Sasson. We're here with Eli Ben-Sasson,
who is a return guest on the podcast. He's been here before. It was almost four years ago,
and Eli has been gracious enough to give us some of his time while he's on vacation, and it's very
late at night where he is in Israel. Eli, thanks for coming back on the show.
Thanks, Sebastien and Sunny. Always a pleasure to be on this show.
Well, it's a pleasure to have you back. And yeah, so as I mentioned, I was looking before we started recording. The last time you were on was in 2016. This was long before I think there was even an idea to do something like Starkware. Back then you were a professor at Technion. Talk about your trajectory since then. What's it been like to go from the academic life to full on startup mode?
a lot of fun in one line.
Yeah, the trajectory, I think, started years earlier.
And my Eureka moment was in 2013, May 2013,
in the Bitcoin conference in San Jose,
where it sort of dawned on me that the research I was doing about
scalable proof systems, that that stuff can be very useful for blockchains.
And that was my turning point.
And in 2016, I was still doing research as a professor, advancing the science and
technology of the very particular brand of proof systems that I think we'll talk about later.
Then, I believe it was at the end of 2017 that sort of we realized that it might be time to
try and commercialize this stuff. This was right after we were ready to publish our
work on zk-STARKs that we'll talk about later on.
And we started this company almost two years ago.
So it was around the end of 2017.
And yeah, it's been a lot of fun since then.
Very exciting.
Before working on Starkware, you were also one of the co-founders of Zcash, right?
Yes.
Was Zcash already going on when you were on last time?
Or was that something that you got started with afterwards?
I was definitely involved in Zcash in 2016.
I think the coin launched at the end of 2016.
So I guess around the same time I was doing the interview with you,
but I was definitely involved with Zcash then
because the company has been working,
I mean, was working on it for a while since then.
Yeah.
So back then I was also involved in Zcash,
which is also a very exciting project
that I'm very proud of, you know, having contributed to,
along with my other founding scientists on that thing.
But, yeah, since then we sort of moved on to other technologies
and other endeavors.
And so one of your other co-founders at Zcash, Alessandro Chiesa,
now you guys are both chief scientists at Starkware.
So can you tell us a little bit about the origin story
of how you guys decided to, how Starkware came into being?
Yeah, so I guess for me, the origins go a very long way back to the days I started my postdoc in 2001 at MIT.
And I was doing research with Madhu Sudan, one of the leading figures in the development of the PCP theorem.
And it was this trajectory of both making these proof systems a bit more efficient and also bringing the stuff more and more towards practice.
And this is something that was our passion for a very long time.
So again, for me, it started maybe in 2001.
I started collaborating with Alessandro Chiesa around 2010,
shortly after Michael Riabzev, our third co-founder,
also started working with us and advancing both the science
and the technology of this thing.
and Uri Kolodny, our CEO and fourth co-founder.
So he's been a close friend of mine for more than 30 years
and for almost as many years was my business mentor,
and he certainly was following everything and helping us out
already with Zcash and other things.
So at some point, when it was clear that this particular technology
could benefit from a dedicated company,
the four of us sort of joined forces,
and that's how it started towards the end of 2017,
which is about two years ago.
Cool.
We met a couple of weeks ago in Tel Aviv.
Starkware organizes a fantastic conference,
which was called Starkware Sessions.
We've mentioned it a few times on the podcast since then.
And during your keynote talk, you gave this description of the zero knowledge space as going through a Cambrian explosion.
Can you describe what you meant there and talk about the unique time in which we live with regards to how ZKPs are evolving?
So this notion of the Cambrian explosion, there was this era about half a billion years ago where
from this primordial soup of microbes or things like that,
all of a sudden, in a very short span of time,
a lot of the more advanced creatures that we see today,
various plants and insects and other forms of life,
all spawned off in a very short time.
And with cryptographic proof systems,
commonly referred to as ZKPs,
although not all of them are formally zero-knowledge proof systems,
There's a big family of them.
So these proof systems, they've been researched in theory since 1985,
since this beautiful discovery by Goldwasser, Micali and Rackoff of interactive proofs.
And they've been pretty much confined mostly to theoretical works.
And then around sort of starting in 2005 to 2010,
then suddenly we see this emergence of more practical stuff.
And over the past decade, there's been a proliferation of different forms of proof systems
based on all kinds of different assumptions.
And it seems that the speed of release of new kinds of proof systems is sort of increasing
almost exponentially or very rapidly.
And just over the past three or four months,
we've heard about PLONK and Sonic and SuperSonic and DARK and Fractal and Marlin.
And by the time we'll end this interview, maybe a couple more will be released.
So it's really quite remarkable.
And where do you think this is all heading?
If we're now in the Cambrian explosion and you sort of equate that to the real Cambrian explosion
that happened with regards to life,
where can this all lead us
knowing that there's just so much more
ahead to discover?
Hard to say, okay, one thing I think is for sure
that we'll see a lot of systems deployed
and integrated into products.
That's one thing I think,
Starkware is leading with
in this particular adoption
of a particular form
of proof systems
and bringing them into live products
that will help scaling.
So we'll see a proliferation,
not just of academic research,
but also of dedicated productization
and robust code bases that will be used.
I think at the same time,
we'll also see a better understanding
of the different building blocks
by which you can compose and build
different proof systems.
We're already seeing this.
So this will continue,
and at some point we'll come to
a decent understanding.
And the most exciting stuff is that often with research,
you know, some completely unexpected new discoveries or challenges or open problems
might be discovered, but that, you know, one cannot predict what they will look like.
That's maybe, as a scientist, the most exciting aspect to me.
Do you think it's a matter of, like, we're going to be finding the perfect proof system
that, like, kind of satisfies most of the use cases?
Or are we going to get into an ecosystem where we have, like, many different proof systems,
each of which maybe is good for specific use cases?
I think that when the dust settles, there'll probably be a very small number of proof systems that are actually used at scale because, you know, they're built on different principles and mixing them is not as efficient as sticking with one or two of them.
And I'm biased, of course, but my bet is on STARKs for reasons that I can describe later.
So that's what I, it's just a little bit like, you know, if you look at other kinds of infrastructures in the computer science world.
So there's an abundance of communication protocols or ways to build operating systems or programming languages.
But at the end of the day, there is a very small number of them that actually stick
around and they're used by everyone. So I think it will be a little bit like that. Maybe not one,
but I think there'll be a small number. So not all the cool research will necessarily be
adopted as infrastructure by all systems.
So how would you say we should go about, like, thinking
about the tradeoffs and comparisons between different proof systems? What are some of the
parameters we should be looking at? Like the ones that come to mind, that I'm aware of,
are things like prover time, verification time, and proof size.
What are some other things that we should be looking at?
Yeah, so definitely a prover, verifier, and proof size.
I would say that especially if you look at scaling,
it's far more important to look also at the amortized costs,
which means, you know, for a certain batch,
you take, let's say, prover time and you divide it by the number of state updates
or transactions that this proof covers.
And the same thing with verification time or gas cost,
same thing with proof length.
That's far more important in terms of scaling.
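The division he describes is simple enough to sketch. In the toy calculation below, every batch-level number (prover seconds, verification gas, proof bytes) is a hypothetical placeholder, not a StarkWare measurement; the point is only that each per-transaction cost shrinks as the batch grows.

```python
def amortized_costs(batch_size, prover_time_s, verify_gas, proof_bytes):
    """Per-transaction costs when one proof covers a whole batch."""
    return {
        "prover_s_per_tx": prover_time_s / batch_size,
        "gas_per_tx": verify_gas / batch_size,
        "proof_bytes_per_tx": proof_bytes / batch_size,
    }

# Hypothetical batch: one proof covering 10,000 transactions.
costs = amortized_costs(
    batch_size=10_000,
    prover_time_s=60.0,     # time to generate the single proof
    verify_gas=3_000_000,   # on-chain cost to verify it once
    proof_bytes=80_000,     # size of the single proof
)
print(costs)  # gas_per_tx: 300.0, proof_bytes_per_tx: 8.0
```

Doubling the batch size halves every per-transaction figure, which is why amortized rather than absolute costs are the relevant metric for scaling.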
One other set of parameters that was, I think, a bit overlooked
and we were very interested in,
is the cryptographic assumptions on which you're building these systems.
And a way to think of it is in terms of future-proofing your systems.
So the more your assumptions are sort of fundamental, have been around for a while,
and the more other stuff is built on them,
it sort of implies that they're a little bit more future-proof
than more exotic assumptions that have been around for a shorter amount of time
and scrutinized by fewer peers.
So that's another dimension that was slightly overlooked
in evaluating these systems.
So I think this is a good segue
into the world of Starks.
And before we dive deep into Starks,
I think it would be helpful to get a brief refresher
on zero-knowledge proofs.
So everyone's on the same page.
Can you summarize very succinctly?
What is a zero-knowledge proof?
So a zero-knowledge proof,
there's a mathematical definition,
and it is one that covers privacy.
Basically, informally, a zero-knowledge proof is a proof.
Think of it as some sort of beefed-up grocery receipt
that when you look at it,
you're completely certain that the total sum that you need to pay is correct,
but you learn nothing other than this fact.
So it's a privacy preserving technique,
a magical one at that.
The term ZKP, by the way, by now,
has been sort of borrowed to cover a much larger range
of proof systems that I like to refer to as either cryptographic proofs in general or sometimes
as proofs of computational integrity. And both of these terms are terms that are not mathematically
and formally defined. So you could sort of loosely use them to define a very large range of proof
systems, including ones that mostly care about scalability and not privacy. So you have this
sort of receipt that tells you that a very large computation
or a very large batch of transactions has been processed correctly
without needing to pay the cost of checking each and every one of these transactions.
So this larger domain of cryptographic proofs or proofs of computational integrity,
there's a variety of technologies that allow you to scale systems up
and assert their correctness and computational integrity
and also to do so in a privacy preserving manner.
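As a concrete miniature of the interactive proofs mentioned above (my illustration, not something discussed in the episode), here is the textbook Goldwasser-Micali-Rackoff-style protocol for proving knowledge of a square root x of a public value y modulo n. In each round the verifier sees only random-looking values and learns nothing about x, while a cheating prover passes a round with probability about one half. The modulus below is a toy value; real deployments use cryptographic sizes and often replace the interactive challenge with a hash (Fiat-Shamir).

```python
import random

def zk_sqrt_round(n, y, x):
    """One round: prover knows x with x^2 = y (mod n); the verifier learns
    only that the check passes, not x itself."""
    r = random.randrange(1, n)                    # prover's fresh random mask
    t = (r * r) % n                               # commitment sent to verifier
    b = random.randrange(2)                       # verifier's challenge bit
    s = (r * pow(x, b, n)) % n                    # response: r, or r*x mod n
    return (s * s) % n == (t * pow(y, b, n)) % n  # verifier's check

# Toy parameters for illustration only.
n = 561          # composite modulus (3 * 11 * 17)
x = 47           # the prover's secret square root
y = (x * x) % n  # public statement: "I know a square root of y"
# Honest rounds always pass; a cheater survives k rounds w.p. ~2^-k.
assert all(zk_sqrt_round(n, y, x) for _ in range(20))
print("verified without revealing x")
```

The zero-knowledge intuition is that for either challenge bit, the transcript (t, b, s) could be simulated without knowing x, so the verifier gains nothing beyond the validity of the claim.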
Okay, thanks. That's a very clear explanation. And so what are Starks and how do they fit in this broader context?
So within the variety of proof systems out there, a proof system that satisfies basically two important parameters will be called a STARK. And these important parameters are, one, that it is scalable, which means that as the number of transactions you're processing goes to infinity,
proving time scales with it nearly linearly.
So it's almost the same cost to just compute the stuff as it is to generate a proof.
And at the same time, verifying a proof scales exponentially smaller than the amount of computation.
So a system that is scalable, that's the S in a STARK, and also transparent,
which means that there is no trusted setup.
The only ingredient that you need in order to make the system secure is a public source of randomness,
or you need to assume that the universe has some entropy in it.
So systems that are scalable and transparent are called STARKs,
and there's a very natural way of constructing such STARKs that leads to systems that are also post-quantum secure
and have a very efficient prover and verifier in concrete terms.
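A back-of-the-envelope look at the scalability property just described: prover cost grows quasi-linearly in the batch size T, while verifier cost grows only polylogarithmically. The cost functions below are illustrative stand-ins with made-up constants, not measurements of any real prover:

```python
import math

# Illustrative cost models (not benchmarks): a scalable system has
# prover cost ~ T log T and verifier cost ~ polylog(T).
def prover_cost(T: int) -> float:
    return T * math.log2(T)          # quasi-linear in the batch size

def verifier_cost(T: int) -> float:
    return math.log2(T) ** 2         # squared log as a polylog stand-in

# As batches grow 1000x, prover work grows ~1000x,
# but verifier work barely moves.
for T in (10**3, 10**4, 10**5, 10**6):
    print(f"T={T:>8}  prover≈{prover_cost(T):.2e}  verifier≈{verifier_cost(T):6.1f}")
```

This exponential gap between proving and verifying cost is what lets one powerful prover serve many cheap verifiers.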
So I knew that the T in STARK meant transparent,
but I actually didn't know that the S stands for scalable,
because in SNARKs, the S stands for succinct.
And so can you explain the reasoning behind that difference there?
Do STARKs not have this succinctness property that's present in SNARKs?
Yeah, well, the mathematical definition of succinct in a SNARK is one that sort of involves a security parameter
and talks about a constant-size proof, constant but sort of allowed to be polynomial in the security parameter.
So, sort of, I mean, I don't want to get bogged down in very technical details, but
one could say that the term succinct
refers only to one part of scalability
which is you want a verifier to be very efficient
but that's not enough for scalability.
You need something more
and that is that you need the proving time to scale really well
and the reason this is important is because we know
of theoretical constructions, for instance,
the PCP theorem where you can get amazingly succinct
verification time
but at a horrendous cost to proving time.
So succinctness is not enough for scalability.
It's necessary but not sufficient.
You also need this other aspect,
which is super efficient proving time.
So when we coined the definition of a STARK,
we wanted to make sure that we were also capturing that aspect as well,
which is why we think it's better to use scalability,
which has this sort of two-pronged definition: both efficient proving time and efficient verification
time. So when it comes to SNARKs, I think one of the things that's sometimes a little bit confusing,
and correct me if I'm wrong here, but the term SNARK, it refers to this idea of something
that's a succinct non-interactive argument of knowledge. And then the term zk-SNARK, at the same
time, also refers to a very specific construction. Is that true? And if so, is that also the same thing for
STARKs? Is it also referring to a specific construction, or is it just a general term?
So both the terms SNARK and zk-SNARK, and the same thing with STARK and zk-STARK, these are
general definitions that could cover potentially a very large variety of proof systems. But it is true that
both of these terms have been sort of associated with very specific systems.
So when people talk about SNARKs, they usually mean a very specific kind of zk-SNARK that is used by Zcash.
And I guess that when people talk about STARKs, they usually refer to the flavor of systems that is based on IOPs and uses things like, I mean, the FRI protocol for low-degree testing.
So I guess this is inevitable, that you have these general mathematical terms that then sort of get associated with very particular proof systems.
But that's, you know, it is what it is.
So could you now walk us through a little bit of the tradeoffs between, you know, the zk-SNARKs that are used in Zcash, versus the STARKs that you guys are working on, versus some of the other families such as Bulletproofs?
Yeah, sure. So let's talk about these three things again: the SNARKs of Zcash, the STARKs that we're building, acknowledging that there are other kinds of SNARKs and STARKs and there will be other kinds. But let's just associate SNARKs with the stuff of Zcash, STARKs with the stuff that we're building, and then there's Bulletproofs.
SNARKs have famously very short proofs, like around 200 bytes. Bulletproofs have longer proofs, around, let's say, two kilobytes or so. And STARKs have longer proofs, around 20 kilobytes. So you go one order of magnitude
up moving from SNARKs to Bulletproofs and then to STARKs. In terms of verification time,
SNARKs and STARKs are pretty similar. They're very, very fast. STARKs are a little bit faster in
verification, but, you know, 10 milliseconds in SNARKs versus, I don't know, 8 milliseconds or less
in STARKs. And then Bulletproofs are less so, because verification time in Bulletproofs actually scales
linearly with the amount of computation. So Bulletproofs are not scalable according to
our definition of the term. That's in terms of verification time. Proving time is fastest in STARKs,
and about one order of magnitude slower in SNARKs and Bulletproofs. And I think the most important
difference is in sort of this other dimension of future-proofing the systems, or what kind of
assumptions you're using. So STARKs require only the existence of some collision-resistant hash
function, which implies that they're plausibly post-quantum secure and they require very lean
cryptography. Bulletproofs require assumptions regarding the discrete log over elliptic
curve groups, which is a slightly more exotic problem, but it's been around for, I don't know,
like two decades or so. And then SNARKs require things called knowledge-of-exponent assumptions, which are
even more recent and slightly more exotic. So I guess that's sort of a comparison along the
four dimensions of proof length, proving time, verifying time, and future-proofing the system.
Okay, so just to summarize, in terms of proof size, there's a clear difference between Bulletproofs,
SNARKs, and STARKs, sort of one order of magnitude more than the previous for each system.
However, in terms of verification time, even though STARKs have a much larger proof, the verification time
will be lower than SNARKs and Bulletproofs, so we'll have faster verification.
And the real differentiating factor is that STARKs rely upon cryptographic assumptions
that have been around since the 70s. These are collision-resistant hashes,
which means that they're quantum-resistant and very lean, and presumably future-proof
because of this quantum-resistance feature. Does that sum it up correctly?
Yes, I think that's a good summary. And also,
there's a fourth axis, which is the proving time, which is, again, fastest with STARKs.
And we get the quantum resistance because there's no sort of like public key cryptography or
pairings or anything like that, right?
Quantum computers are known to be pretty good at solving problems related to hidden subgroups
and, you know, factoring and discrete log and things like that.
But they're not known to be able to break all cryptography.
And in particular, there's a widely held belief that most hash functions will be secure against quantum computers, which is why STARKs are too.
So if STARKs rely on cryptographic assumptions that have existed since the 70s, why did it take so long for them to come into existence?
Are there other things that needed to be invented before Starks could exist?
or did we just need you to figure it out?
That's a good question.
So a lot of the more practical cryptography in recent years has revolved around sort of
number-theoretic assumptions and elliptic curves and so on.
So there's this very wide class of researchers, sort of somewhere between theory
and practice, who are very familiar with cryptography that uses
elliptic curves and, you know, RSA and other things.
And the branch from which STARKs emerge,
which is known as computational complexity, or the PCP theorem and things like that,
has been, you know, sort of the playground
mostly of theoreticians and mathematicians,
and very few practitioners or more practically oriented researchers
have ventured into it.
Another factor was that some of the earlier constructions of things like STARKs were not as efficient,
and we needed to sort of tighten things and invent some new stuff, like the FRI protocol,
the Fast Reed-Solomon IOPP, and the IOP model.
So the IOP model is joint work with Alessandro Chiesa and
Nick Spooner. The FRI protocol is joint work with Michael Riabzev and Iddo Bentov and Yinon Horesh.
And then we've done some further improvements to FRI that also made things a bit tighter,
like DEEP-FRI, which most recently emerged and is joint work with Lior Goldberg from StarkWare
and two of our scientific advisors, Swastik Kopparty and Shubhangi Saraf.
So there's a little bit of advancement that needed to be done
and some new mathematical stuff that needed to be invented.
But I think it's also this cultural thing:
the class of folks that can build things like STARKs
used to be a very small set of people,
while those who are more familiar with the techniques around SNARKs
or Bulletproofs or other things are a wider set of researchers.
Okay, interesting.
How should we think about some of the newer stuff that's coming out as well,
especially within the IOP family, things like Aurora and stuff?
There's a wide variety of systems that are similar to STARKs in requiring only the existence of a hash function.
So Aurora is one.
There's ZKBoo, Ligero, and then there's more recent ones, I believe it's Fractal and Marlin,
and at least one of them doesn't really require, you know, anything but a collision-resistant
hash.
And they're all very similar in some of their techniques, which use basically interactive
oracle proofs and low-degree testing algorithms.
So they're similar.
I mean, in particular, Aurora is not scalable, because its verifier scales linearly with the size
of the input.
It's more geared towards, you know, circuits of unknown structure that the verifier must process,
whereas STARKs have scalable verification, or this exponential speedup.
That's the main difference between Aurora and STARKs.
There are other systems out there that are also similar, like Ligero, which has square-root proof length.
Its verification time is actually, again, linear in the size of the computation.
And then there's others.
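The low-degree testing mentioned here can be illustrated in a naive, non-succinct form: interpolate a candidate polynomial from d evaluations and spot-check the remaining ones. FRI achieves this goal with logarithmic work; the sketch below (function names are mine) does not attempt that, and is only meant to show what question a low-degree test answers:

```python
import numpy as np

# Naive low-degree test: are these evaluations consistent with *some*
# polynomial of degree < d? Interpolate d points exactly, then spot-check
# a few of the remaining points. (FRI answers the same question with
# logarithmically many queries; this sketch is quadratic and only
# illustrative.)
def is_low_degree(xs, ys, d, samples=5, tol=1e-6):
    coeffs = np.polyfit(xs[:d], ys[:d], d - 1)   # exact fit of degree < d
    check_x = xs[d:d + samples]                  # spot-check further points
    errs = np.abs(np.polyval(coeffs, check_x) - ys[d:d + samples])
    return bool(np.all(errs < tol))

xs = np.linspace(0, 1, 20)
good = 3 * xs**2 - xs + 1           # genuinely degree 2
bad = np.sin(7 * xs)                # not low degree
print(is_low_degree(xs, good, 3))   # True
print(is_low_degree(xs, bad, 3))    # False
```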
So looking at this broad set of zero-knowledge systems that we've described,
are there specific applications that are better suited for, say, STARKs or SNARKs or Bulletproofs?
How do they distinguish each other in how they're implemented?
Yeah, I mean, that's a great question.
I think that, so from my point of view, I think we're not that far from the optimum,
at least with respect to Starks.
And let me explain why.
There are some mathematically proven lower bounds that we're not that far off from.
So, for instance, let's use T for your computation's scaling parameter.
So as T goes to infinity, we know that verifier time must increase at least like the logarithm of T.
And we know that the prover time must scale at least like T.
Now, with the STARKs that we have right now, prover time scales almost linearly in T.
There's really just one fast Fourier transform there,
and the fast Fourier transform costs T times the logarithm of T.
And improving on the FFT is this longstanding open problem in all of math and algorithms.
So I think it's very safe to assume that, you know,
it would require a very major breakthrough in order to do something
better than T log T. That's my belief. Then, in terms of verification time, again,
STARKs already have log of T to a very small power. So you could reduce that power a little bit,
but you're very close to theoretical limits. And in terms of cryptographic assumptions, again,
I'm not aware of many assumptions that are weaker than assuming the existence
of a collision-resistant hash. So along almost any of the parameters that you look at,
there's just very little, you know, fat that you can hope to trim in the future.
So, I mean, that's part of the reason that we are so optimistic about, you know, the use of
STARKs.
But that's a really good question.
I mean, you know, when you get close to theoretical limits, you know, that you're pretty
safe, I would say.
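The T-log-T prover cost mentioned above boils down to the FFT: evaluating a degree-(T-1) polynomial at T points in O(T log T) rather than the naive O(T²). Here is a quick numerical check that the FFT really is bulk polynomial evaluation at roots of unity; real STARK provers do this over finite fields, but the complex case shows the same structure:

```python
import numpy as np

# np.fft.fft computes X_k = sum_n c_n * exp(-2*pi*i*k*n / T), which is
# exactly the polynomial p(x) = sum_n c_n x^n evaluated at the T-th
# roots of unity w_k = exp(-2*pi*i*k / T) -- in O(T log T) instead of
# the naive O(T^2) point-by-point evaluation.
T = 8
coeffs = np.arange(1, T + 1, dtype=complex)      # p(x) = 1 + 2x + ... + 8x^7
fft_vals = np.fft.fft(coeffs)                    # O(T log T) bulk evaluation

roots = np.exp(-2j * np.pi * np.arange(T) / T)   # the T evaluation points
naive_vals = np.array([np.polyval(coeffs[::-1], w) for w in roots])  # O(T^2)

print(np.allclose(fft_vals, naive_vals))  # True
```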
Recently, StarkWare announced two products or initiatives that they were working
on, StarkDEX and StarkPay. Both of these make use of what we call zk-Rollups, which we'll get to
in a little bit. Can you talk about how these two products fit into the broader mission of StarkWare?
Yes. So there's this principle of blockchains that I like to call inclusive accountability,
which means that everyone using their laptop is invited to sort of monitor the health of the whole system and verify everything that's going on.
But once you impose this principle of inclusive accountability on a financial system like Bitcoin or Ethereum, then two things get compromised.
First one is privacy because everyone verifies everything.
And the second one is scalability, because if you want to grow your system 10x,
then you need everyone who wants to monitor the system to sort of go and buy 10 laptops
instead of one, or increase their bandwidth 10x, which is unrealistic. And if you do that,
you'll be sort of throwing out a whole lot of folks from monitoring the health of the system.
So what you really want to do, and this is where something like STARKs comes in, is you would
like to use the scalability aspect and have one entity generate the proof for ever-increasing batches,
and use this magical aspect of STARKs, where verification time scales exponentially smaller than batch size,
in order to maintain inclusive accountability.
Still, everyone can check everything and make sure that the system is okay.
But you don't need to replace your laptop every time the system goes up 10x.
Now, we started asking ourselves, where can we deploy this functionality, this scalability in the best way?
And we looked around a little bit, and it seemed to us that
the simplest and fastest way to address a real problem seen by the world today
is in the area of basically transacting, that's payments and also trading,
because currently due to the low throughput of blockchains,
essentially if you want to use them either as payment systems
or you want to trade them using the principles of inclusive accountability,
and trust no one, so on and so forth,
you can't really do that.
So you have a wide variety of players,
the custodial exchanges that tell you,
okay, you know what, send your Bitcoins or Ether here,
park them with us,
we'll maintain the keys,
and then, you know, you'll do all your trading on our books.
And similar things happen with payment providers
where basically they sort of tell you, you know,
leave your payments with us.
At the end of the month, we'll sort of, you know,
check out all the books and send one big payment
to the various merchants and so on.
And we thought it would be really good to use STARKs
to show the world how you can maintain inclusive accountability,
not need to trust or hand over custody of your funds
or your payments at any point, and still scale the system,
even within its existing parameters,
without waiting for Plasma or Ethereum 2.0.
On the existing Ethereum, we can already, you know,
batch-settle and batch-pay tens of thousands of transactions, which is two to three orders of magnitude more than
Ethereum can do natively. So that's how we got to this line of products.
That's really cool. And I was really excited to hear about this at StarkWare Sessions,
when you first talked about it publicly. How do you arrive at this scalability of tens of thousands
of transactions per block in Ethereum,
and how are you making use of STARKs to do that?
That's a great question, Sebastian.
So remember that the S in STARK stands for scalable,
which means that as T, where T is now the number of trades that you're settling,
goes to infinity, proving time,
which is done on the cloud or on some huge server,
scales almost linearly with T,
so you can reach very large batch sizes.
At the same time, verifying a batch of T trades
does not scale linearly in T.
It scales actually like the logarithm of T,
which means that each time you go 10x on the number of transactions
you want to settle,
you're only doing plus one on the amount of gas
that you're paying on the chain.
So using this kind of math allows you to take, for instance, a batch of 32,000 trades and generate a single proof that they settled correctly.
And that single proof can be verified within the gas limit of a single Ethereum block.
So this gives you an amortized gas cost of around 200 gas per trade that is settled.
So you are using the scalability, the exponential speedup in verification, in order to exponentially
reduce the gas cost of settlements.
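Rough arithmetic behind these figures: if verification gas grows only logarithmically in the batch size T, the per-trade cost collapses as batches grow. The constants below are invented to reproduce the ballpark numbers from the episode (a ~32,000-trade proof verified within one ~8M-gas block, at roughly 200 gas per trade), not actual StarkDEX measurements:

```python
import math

# Hypothetical gas model: a fixed verification base cost plus a term that
# grows by `per_log10` gas each time the batch size T grows 10x. The
# constants are made up to land near the episode's ballpark figures.
BLOCK_GAS_LIMIT = 8_000_000

def verification_gas(T: int, base=5_000_000, per_log10=300_000) -> float:
    return base + per_log10 * math.log10(T)

for T in (1_000, 32_000, 1_000_000):
    gas = verification_gas(T)
    print(f"T={T:>9,}  total≈{gas:,.0f} gas  per-trade≈{gas / T:,.0f} gas  "
          f"fits in block: {gas < BLOCK_GAS_LIMIT}")
```

Because the total cost is dominated by the near-constant base term, going from 1,000 to 1,000,000 trades barely moves the total gas while dividing the per-trade cost by roughly a thousand.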
And why did you choose to focus on scalability rather than privacy, for instance, which is
what I think most people associate with zero-knowledge systems?
So back in the day, when we thought about which of the two aspects of STARKs we should pursue first, is it privacy or scalability?
Our thinking was something along these lines.
Okay, there are a lot of technologies that for a single shielded transaction are pretty good.
You have the SNARKs of Zcash; you had, already back then, Bulletproofs, which, again, for a single shielded transaction work pretty well,
But there was this huge need in scalability solutions.
I mean, there still is.
And there was no real technological alternative to the efficiency of STARKs in this respect.
And I think there still isn't.
This goes back to your question about, you know,
how far are we from, you know, the optimal proof system?
So even today, with all these newer systems, if you look head to head at huge batches of computations,
STARKs still outperform all of them.
And I think it's likely to continue in this way.
So it was very clear to us that scalability is an area where this technology can be applied in a very unique way.
And it's a very big need.
So that's why we addressed it first.
We will add privacy, have no doubt, but I think it will come later.
So one thing that's happening here is that in this model, you're batching within a block,
so you amortize only what's going on in a single block.
We can compare this to things like Coda, for example, or things that make use of recursive SNARKs,
where not only do you amortize the computation within a block,
but you amortize the computation over the entire system.
Would it be possible to recurse the STARKs?
Because if so, you only really have to publish the data onto Ethereum,
but you don't actually have to run the verifier on Ethereum
until someone actually wants to exit.
So would that be possible to do with STARKs?
So whenever you have a proof system in which verification scales sub-linearly with computation
size, you can compose it incrementally.
So this notion was first described in this beautiful paper by Paul Valiant.
And it's called incrementally verifiable computation.
So whenever you have a proof system in which the verifier running
time scales sublinearly with the computation size, you can use it for chopping up a computation
into steps, doing them one after the other, and proving that you ran a verifier, that ran a verifier,
and so on and so forth. So you can do that with STARKs as well, you know, quite efficiently.
Whether for a given problem this is your best line of attack, I'm skeptical. I think that for most
applications, you're better off just using the STARK, it will be more efficient, or you might want to
use limited recursion, let's say, you know, one level of recursion, which means you prove
that you checked a bunch of proofs. That's where you end. Not that you checked proofs
that checked proofs that checked proofs, and so on and so forth. So just to summarize,
you can do a recursive STARK. I think that practically, for most problems that you'll face,
you are better off not using it, even though you can.
Let's say in StarkDEX, let's say users only exit their coins once every 100 blocks, right?
Then if we use the recursive STARK, instead of having to run a verifier every single block,
we really only need to verify the state, prove the state to Ethereum,
when someone is actually trying to exit. So how many
blocks, for example, would that have to be for it to be worth it to use the recursive
system? I don't know, but, like, you could still, if you want to prove that you checked
a hundred proofs sequentially, it would still only be one level of recursion. Your statement
would be, I saw a sequence of 100 proofs one after the other.
This is not 100 levels of recursion.
A hundred levels of recursion would be,
I want with each block to verify, you know,
that the verifier run by the previous block,
which checked the verifier run by the previous,
and so on and so forth,
ad infinitum, that this thing worked.
And that's a very different construction.
That is also, you know,
its security analysis is sort of much trickier to do.
And if you really want to do it the right way,
then various parameters blow up very quickly.
So even for the use case that you're giving,
which is a very practical one,
you're probably better off with just having one level of recursion.
Every 100 blocks, you have a verifier proving to you
that it saw a sequence of 100 proofs
and it checked all of them.
This is still one level of recursion.
And you could have one for every 100 blocks,
and then you could have a daily proof for all blocks of a day, all blocks of a week, and so on and so forth.
So this is an example of a use case that I think you're better off using just one level of recursion
rather than the notion of infinite recursion.
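The "one level of recursion" shape described here can be sketched structurally: check a whole sequence of inner proofs, then emit a single outer attestation for the batch, rather than nesting verifier-inside-verifier ad infinitum. The "proofs" below are stand-in hash commitments, not real STARKs; only the aggregation shape is the point:

```python
import hashlib

# Stand-in proof system: a "proof" is just a hash commitment to block data.
def toy_proof(block_data: str) -> str:
    return hashlib.sha256(f"proof:{block_data}".encode()).hexdigest()

def toy_verify(block_data: str, proof: str) -> bool:
    return proof == toy_proof(block_data)

def aggregate(blocks_and_proofs) -> str:
    """One level of recursion: check every inner proof, then emit a single
    outer 'proof' committing to the whole verified sequence. This is the
    'I saw a sequence of 100 proofs and checked all of them' statement,
    NOT 100 nested levels of verifier-checking-verifier."""
    assert all(toy_verify(b, p) for b, p in blocks_and_proofs)
    transcript = "|".join(p for _, p in blocks_and_proofs)
    return hashlib.sha256(f"checked:{transcript}".encode()).hexdigest()

blocks = [f"block-{i}" for i in range(100)]
inner = [(b, toy_proof(b)) for b in blocks]
outer = aggregate(inner)    # one attestation covering 100 blocks
print(len(outer))           # 64: a single fixed-size hex commitment
```

Stacking another `aggregate` over a day's or a week's worth of these outer attestations stays at shallow recursion depth, matching the layered scheme described above.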
Oh, I see.
Okay.
Yeah, that makes sense.
And so would it be possible to sort of, like, compose these things together?
So let's say, you know, it's 150 blocks.
We can take one of 100 and then one of 50, and we can put these
together. But essentially, what I'm trying to think through here is: is there a way to offload the
verification gas cost onto the users, the user who wants to exit, rather than on the people who are
submitting the proofs? Well, the way it works right now is that the gas cost is not on the
provers. The provers are working very hard to generate the proof, but, oh,
you're right, the prover is submitting the proof and is paying with gas to this sort of network to
sort of check this thing. I think that if you don't have any proof out there for a while,
then you're risking all kinds of attacks, right? How do you know that the system is actually
evolving correctly until someone, 100 blocks later, comes and says, oh, I need to, you know,
take my money out. Maybe by then someone ran off with it and, you know, no proofs were provided.
So I'm not sure.
I think you still need proofs pretty frequently.
I believe, from what you guys wrote up,
that to solve data availability,
you're still pushing all the trade data onto Ethereum.
And so couldn't this proof data also be pushed on, in the same way that the rest of the data availability is handled,
but without actually running the verifier?
Yeah, you could do that.
But again, I think you're taking a risk.
If the main network doesn't really see or check the proof,
then someone could start deviating,
or just not posting proofs as they should.
Now I think you're sort of going away from this notion
of maintaining at all times a system that has integrity,
to one that requires something like a watchtower
or fraud proofs or something,
right up until you reach that checkpoint
a hundred blocks later.
This StarkDEX demonstration that you've built,
I believe it's live on Ethereum Mainnet.
Is that correct?
No, it's not.
It was just a demo that was run for some time
on the Ropsten testnet.
Okay.
What's the barrier to running this on Mainnet?
And are there further optimizations
that you could make
for it to be even more scalable?
Yeah.
So, I mean, to run it on Mainnet,
first of all, you have to sort of run a whole bunch of audits and add a lot of functionality.
For instance, it only supported the maker-taker model.
And basically, if you want to put it on mainnet, you want to add other kinds of order types:
limit orders, you know, partial fulfillment, cancellations, whatnot.
It also needed to be integrated.
This was sort of a settlement engine that has to be integrated with some exchange.
So you would need to integrate it with some relayer,
if it was to be over 0x or some other protocol.
So there was a lot of work that still needed to be done and could still be done if we, you know,
if relayers come up and want to work with us.
And definitely you can improve the functionality of it.
And that's precisely what our team has been doing pretty impressively since the launch of that thing.
So now we have a system with far greater functionality and scale than what appears in that demo alpha.
Are there any improvements that could be made if there were any precompiles that you were allowed to add?
Yeah, I mean, you would lower the gas cost, basically.
That's what would happen.
But we found ways to make everything work within the existing Ethereum system and without asking for any precompile.
So we can work pretty efficiently even over the existing Ethereum.
And, I mean, we're very proud and happy with that.
That's another aspect of the efficiency of Starks.
I mean, if you compare this to what happened with SNARKs, for instance,
without the precompiles, you couldn't really run them on Ethereum at all.
So that's why Ethereum went ahead and added some precompiles.
But with STARKs, they're already efficient enough without any changes to Ethereum.
So based on those numbers that you mentioned earlier, like, how many transactions per second could we get with 8 million gas?
So the limiting factor is no longer the gas limit for the verifier;
the limiting factor is now the proving time for the prover.
The gas limit per block is still a limiting aspect,
but we can sort of still put a lot of proofs out there.
So we reached 32,000 trades that we can settle in a single block.
Maybe we could push it also to 64K.
I should say, once Istanbul turns on with EIP-2028,
which we're very proud to have helped push forward,
then the only factor that will limit us is exactly what you said:
it's the amount of compute that we can generate off-chain.
But practically, I can't even compute what the limit will be;
it will probably be in the many, many millions, if not maybe billions, of trades.
That's impressive.
Thank you.
So let's move on to StarkWare the company, because you guys have built these demonstrations,
StarkDEX, and there's another initiative that came out recently called StarkPay,
which we didn't even have a chance to talk about.
But what is the goal of StarkWare the company, and what problems is it trying to solve,
and for which types of customers?
So our long-term vision is to help STARK technology become prevalent and used as infrastructure
in a whole variety of blockchain uses, and then also in uses outside of blockchain, just in the
standard world.
But, you know, we're, we only have 32, you know, folks right now and we need to move cautiously
one thing at a time.
So our first product, and I want to emphasize something here, is not a DEX.
So we are not currently building a DEX.
We are building scalability solutions for standard exchanges, you know, the kinds that are known as centralized exchanges.
So just a week ago, our team announced at DevCon 5 that by early 2020 we'll be launching
the StarkExchange engine, which is not a DEX,
and it will be serving DeversiFi,
which again is not a DEX itself.
It is an exchange that, you know,
sort of operates like Bitfinex,
with similar liquidity pools,
but against which you trade
without ever handing custody of your assets
to the exchange operator.
That's the big difference.
So very similar to, like, the 0x model.
Very similar in the sense that you do not transfer custody of your assets to
anyone while you're trading. But in other ways, it's very different. I think 0x is this
basic protocol that is used as, you know, a layer that others are supposed to build on,
whereas we are building a service that will be serving a particular customer,
in this case DeversiFi, even though we would very much like to offer this service of generating
proofs to other exchanges as well. So it's a different business model and a different sort of
system that we're building. But it is similar in allowing self-custodial trading.
And so I believe you used the term Prover as a service a little bit earlier in the conversation
to describe part of what Starkware does.
What is a prover as a service, and why is that useful?
StarkWare is a for-profit company.
Famously, we have not done an ICO.
We do not have a coin.
So we're sort of bound by the laws of physics
or of economics and business.
We have to sort of find ways to generate profits
and then sustain ourselves;
we can't just be burning our money.
And so we need to think about business models that make sense while we're advancing
this technology and infrastructure.
And the notion of Prover as a service is a very natural one, just like you have software
as a service and other service providers.
You know, it's this thing that as long as you're using it, you're sort of paying for it,
but, you know, you could turn it off whenever you want.
So our model currently, the one that we're using first is a Prover as a service.
So the exchanges and various companies that will be working with us will be one way or another renting or paying for these services.
And it makes a lot of economic sense on both sides.
Would you be licensing out the Prover software or would you be actually running the proofs yourself?
So in the Prover as a Service model, you don't license it out.
You run the servers and, you know, basically, for instance,
we'll be getting batches of settlements from DeversiFi and then generating a proof for that batch
and sending it to the verifier contract on the main chain.
That's the current model.
But I want to emphasize that it's not the only one that we're considering.
And definitely down the line, things like licensing or freemium and maybe, you know,
in three to five years maybe selling hardware or other things like that are all viable options
that, you know, we'll be exploring.
So hearing this, it sounds like you'll have some service running on a server,
and that service will be receiving transactions from an exchange, and then you'll be generating
a proof and sending that to the Ethereum chain, or whatever blockchain is being used,
I suppose.
To the uninformed ear, that would sound like you guys
are essentially centralizing that service. Can you talk about the liveness issues that this could cause,
and how you are ensuring that exchanges don't have to rely solely on you being available? How do you
reduce that dependency? So the first answer is that, just like other service providers, we actually
expect and embrace and welcome competition, which means, you know, just like with
your cloud provider, you can't be censored by, let's say, Amazon; you can just
move to Google Cloud or something like that. So over time, we're sure that there'll be other
prover-as-a-service competitors out there, which is one answer. Another answer is that, you know,
till then, and even when that happens, it's very important to allow customers and the end users to be able to
control and get their funds, even in catastrophic events where, let's say, StarkWare is hacked
or the exchange itself is hacked. So just to emphasize, in both these cases, no one, I mean,
if you hack into StarkWare, you can't really take the users' funds, because they are being traded
self-custodially. So only the users can do that. But someone could hack StarkWare and try to
shut down its service in order to prevent anything from happening.
So we've built a bunch of emergency hatches that are automatically invoked when folks want to take their funds out of the system and it just doesn't service them.
And to support that, we will be launching a variety of data availability solutions that ensure users have redundant access to the information they need in order to extract their funds
if StarkWare or the exchange is ever catastrophically hacked.
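The "emergency hatch" idea can be sketched as a minimal forced-withdrawal mechanism: a user requests a withdrawal, the operator normally services it, and if the operator fails to respond within a timeout the user can exit directly. This is a hypothetical illustration of the pattern, not StarkWare's actual contract logic; the timeout value, method names, and balance handling are all assumed:

```python
class EscapeHatch:
    """Hypothetical sketch of a forced-withdrawal escape hatch.
    If the operator fails to service a withdrawal request within
    FREEZE_TIMEOUT blocks, the user can exit on their own, assuming
    data availability lets them prove their balance."""

    FREEZE_TIMEOUT = 50_400  # assumed: roughly one week of blocks

    def __init__(self, balances):
        self.balances = dict(balances)  # user -> balance snapshot
        self.requests = {}              # user -> block height of request

    def request_withdrawal(self, user, height):
        """User signals they want their funds out."""
        self.requests[user] = height

    def operator_service(self, user):
        """Normal path: the operator processes the withdrawal."""
        self.requests.pop(user, None)
        return self.balances.pop(user, 0)

    def forced_exit(self, user, height):
        """Emergency path: only valid once the timeout has elapsed
        without the operator responding."""
        requested_at = self.requests.get(user)
        if requested_at is None or height - requested_at < self.FREEZE_TIMEOUT:
            raise RuntimeError("operator still has time to respond")
        self.requests.pop(user)
        return self.balances.pop(user, 0)
```

The data availability solutions mentioned above matter precisely for the emergency path: the user can only take this exit if they can reconstruct the state needed to prove what they are owed.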
Cool.
So is there anything you want to share that is coming soon to Starkware?
What's on the roadmap and where can people find you and get involved?
Yeah, so we're talking to a whole lot of exchanges and custodians and traders and services
around exchanges in this area, because we want to, first of all,
serve as many exchanges as are willing to use this technology.
And second, we would like to ensure that traders and users have a seamless experience
when they're using our technology.
And we'd like it to be of use to anyone who offers services to end users,
be it a custodian service or an OTC desk or a trading
firm or a broker, and so on and so forth.
So we're holding a lot of discussions and we see a lot of enthusiasm for integrating
with this technology.
So MetaMask announced that, you know, we were the first team to use their new API in order
to allow traders to trade on our systems using MetaMask.
And we're currently integrating our technology into Ledger so that, again, traders can sort
of seamlessly use it.
And I expect we'll be announcing a whole lot of other collaborations and integration projects
so that everyone can use this thing.
At the same time, we're also talking to a lot of exchanges, you know, big ones and small
ones.
And I think we'll see a few others joining DeversiFi in this move to a larger liquidity
pool for trading in a self-custodial way.
And this is very important for a variety of
reasons. First of all, it's safer for traders, which is good for business. It's also good for the
exchanges: insurance costs and security overhead are much lower. Another thing is that you can
move in and out of your positions across exchanges much more seamlessly, and that's very
important for, again, streamlining and making the blockchain ecosystem a bit more like the
traditional one. And lastly, there's a lot of fragmentation of liquidity
between different exchanges due to a variety of reasons,
you know, geo-fencing and geographic locations
and regulatory stuff and technological differences.
And we believe that our technology can enable, you know,
a defragmentation process of this liquidity pool
and a lot more market efficiency,
which is why I think the folks we're talking to
are also very enthusiastic about this.
Cool. Well, Eli, I want to thank you once again for coming on the show and for being so gracious with your time. I know you're on vacation, so I want to thank you again. I look forward to having you on again in the future. Maybe not in four years, but at some point.
Thank you, Sebastian, and thank you, Sunny. This was, as usual, a very delightful experience.
Thank you for joining us on this week's episode. We release new episodes every week. You can find and subscribe to the show on iTunes,
Spotify, YouTube, SoundCloud, or wherever you listen to podcasts.
And if you have a Google Home or Alexa device,
you can tell it to listen to the latest episode of the Epicenter podcast.
Go to epicenter.tv slash subscribe for a full list of places where you can watch and listen.
And while you're there, be sure to sign up for the newsletter,
so you get new episodes in your inbox as they're released.
If you want to interact with us, guests or other podcast listeners,
you can follow us on Twitter.
And please leave us a review on iTunes.
It helps people find the show, and we're always happy to read them.
So thanks so much, and we look forward to being back next week.
