Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Zac Williamson & Joe Andrews: Aztec - Privacy-preserving, hybrid ZK rollup
Episode Date: June 9, 2023

In order to achieve true transaction privacy, it is not enough to encrypt or build ZK proofs for transaction bundles as long as the underlying blockchain uses an account-based model. Aztec is building a ZKVM that superposes a UTXO model, so that balances are constantly updated as new, untraceable log entries. The upcoming Aztec 3 aims to enable privacy-preserving smart contracts using a hybrid, multi-layered rollup. This approach allows both public and private smart contracts to be executed simultaneously. We were joined by Zac Williamson & Joe Andrews to discuss the evolution of Aztec, from ZK Money to their upcoming privacy-preserving hybrid ZK rollup.

Topics covered in this episode:
- The evolution of Aztec, from ZK Money to Aztec 3
- How Aztec's ZKVM ensures privacy for Ethereum transactions
- The need for a UTXO model for privacy-preserving encrypted databases
- Interacting with public smart contracts in a private manner
- Technical roadmap and challenges faced along the way
- How Aztec differs from Mina Protocol
- Recursive proofs
- Estimating gas
- Mainnet sequencer decentralisation
- Goblin Plonk
- The future of programmable privacy

Episode links:
- Zac Williamson on Twitter
- Joe Andrews on Twitter
- Aztec on Twitter

This episode is hosted by Felix Lutsch. Show notes and listening options: epicenter.tv/499
Transcript
This is Epicenter, episode 499, with guests Zac Williamson and Joe Andrews from Aztec.
Welcome to Epicenter, the show which talks about the technologies, projects, and people driving decentralization and the blockchain revolution.
I'm Friederike Ernst and I'm here with Meher Roy.
Today we're speaking with Zac Williamson and Joe Andrews, who are the founders of Aztec and its CEO and company president, respectively.
Zac and Joe, you have been on the show before, twice actually, in your case, Zac. Can you briefly remind us what Aztec actually does and how the vision of the project has changed since inception?
So yeah, hi, hi everyone. Thanks for having us on. So Aztec has changed a lot over the years, also since we last spoke, I think. So it's been about, it's almost six years since we got started in this space.
And Aztec's always been a privacy-focused company.
So building privacy-preserving infrastructure for Web3.
However, the way in which we go about this has changed radically over the years
as the technology that we can work with has improved.
And so our main focus now, which I think Joe can speak more about,
is building a fully programmable privacy-preserving smart contract layer,
effectively an end-to-end encrypted blockchain.
Yeah, thanks for having us on.
I can fill in some more details.
I guess to speak to what Zac was hinting at,
over the years we've tried to build products
with what was currently the cutting edge of cryptography,
and a lot of that's been pushed by our internal research,
done by Zac and our kind of cryptography team.
But the goal has always been to kind of build a kind of programmable private version
of Ethereum. We just didn't have the technology to actually do it at various points over the years.
So we had to settle for, I guess, slightly less functional versions of the technology.
And some of those people will be familiar with. So we had kind of Aztec 2, which is the first time we had ZK Money, which did just basic Zcash-style private payments.
And then we spent about a year upgrading that to add Aztec Connect functionality, in the form of kind of private DeFi mediated on L1.
And all of that was really kind of trying to push out technology with the best of ZK SNARKs in 2021 and 2022.
But it wasn't really kind of the end goal.
It was kind of showing the state of the technology at that time.
And yeah, we're thrilled kind of over the recent months that we've been able to kind of
actually work on our end vision, which is abstracting.
those roll-ups into a fully generic version that other developers can deploy programs to.
And that's what we call Aztec 3.
We've shortened it now to just Aztec, kind of the realization of our ultimate vision.
So if you look at each of these stages: so basically, the first time we had you on, you guys had just launched ZK Money. You could deposit any ERC-20 token into your L2, you could transfer it within the L2 privately, and you could withdraw it back to the L1. And then in Aztec 2, which is the second time we had you on, you had all of, you know, these private token transfers on L2, plus you had some DeFi integrations. So basically you could send tokens from a shielded pool to some L1 DeFi contracts, and the tokens were in fact reshielded afterwards. So that kind of, in a way, gave some anonymity to DeFi users on L1. This was super useful, but as I understand it, Aztec 3 kind of radically expands on that vision. Can you give us an idea of what it is, and when it launches? And I think we have to be very clear that basically this is still early, there's not even a testnet yet, right? So basically you want to launch this by the end of next year.
Soon.
Can I ask it.
Yeah.
We could talk about the roadmap in a second, but maybe just to kind of highlight on the
point about the functionality changes.
So if you look at kind of just the payments version of Aztec, that was one circuit, or one kind of contract you can think of it as, written by the Aztec team. And it took probably six, seven months for the team to write that and audit it and deploy it, to make the basic ERC-20 L2 transfer functionality.
And then with kind of Aztec 2 and Aztec Connect, we upgraded that to add in a few more contracts or circuits: an account circuit, which gave aliases, and these DeFi circuits, which let you send a number of kinds of tokens to L1, as you described.
But those circuits took a very long time for our team of cryptographers to build and audit.
And the reason for that is that they were all acting on shared state.
So it's not really something that you can extend very easily.
So the main kind of breakthrough in functionality that Aztec 3 affords is that developers can now write their own circuits, and they can have siloed state or interoperable state between different contracts,
and kind of really generalizing what we were trying to do with Aztec 2
and removing us from having to write the circuits.
We'll write one execution environment, and then everyone else can deploy their own contracts and programs to that.
And that's kind of the premise behind Aztec.
There's a lot to unpack here.
And maybe I'll start with kind of the crux of the matter that I don't understand yet.
I hope I will understand it by the end of this episode.
So you're talking about different kinds of state.
And I think this is kind of what I want to understand.
I want our listeners to understand by the end of this episode.
So how do you actually have something without intermediaries where you can have different sets of state that are still consistent with one another?
So maybe let's kind of start at the beginning.
So what our listeners will be familiar with, hopefully, is the ZK EVM,
because we very recently had Jordi and David on.
So the way that the ZK EVM works is it basically takes every opcode on Ethereum and transpiles it into a corresponding ZK opcode.
And this is very much what you're not doing.
So kind of walk us through why.
Because in principle, that seems like such a good way of kind of making everything private, right?
Basically, you just take what you already have and what you know the EVM can deal with,
and you kind of build a ZK version of it.
So why did you not choose to go down this route?
And why? Because yours sounds way more complicated.
So, yeah, just walk us through this.
I can try to field that one.
So the reason why we've gone down this route is because you cannot just wrap the Ethereum Virtual Machine in a ZK SNARK and make it private.
And it's because of the information that's revealed when you modify state.
So a blockchain at its core is a glorified state machine.
You have some database of information, and transactions come in that change the database, and you validate that each of those transactions follows all the rules of your blockchain.
And the problem is that that database is public.
So even if you wrap the EVM in a ZK SNARK and you create ZK opcodes, the actual information that's going into this, the database that makes up the state of your chain, doesn't change. Wrapping the validation logic in the SNARK doesn't make the information that's being transmitted private; nothing changes about it.
You could go one step further and say, okay, well, now let's encrypt the Ethereum database. Let's say that every single storage slot on Ethereum is encrypted with some encryption key that somebody holds. Even then, that gives you very, very weak privacy, no privacy at all really. Because if you think about the Ethereum state database, it's basically a big key-value database. Take, for example, your Ethereum account: you have an address, and it's linked to a balance. Let's say you can encrypt that address, and you can encrypt the balance, and you can still, through ZK SNARKs, through ZK opcodes, modify that balance.
That still leaks a lot of information, because if you make repeated transactions from that address, the same position in the database is going to have its value change. So even if your addresses are encrypted and your values are encrypted, you can see those encrypted values changing per transaction, and therefore you can build up the transaction graph and build up an identity of who a person is, this time defined not by their Ethereum address but by their position in the Ethereum state tree where that encrypted balance lies. So functionally, it gives you very little just to turn Ethereum into a ZK SNARK. You need a bit of a more involved model to actually get strong user privacy. And the big part of what we're doing is, whilst the architecture is relatively complex, the abstractions that you can layer above it, and the heuristics you can apply to use it, at least if we do our jobs right, those will be very simple.
I'm just going to add a few things. Even if you could solve kind of the act of updating encrypted state, and there's been a few papers where people are trying to solve this,
you still end up in a pretty weird world where you have race conditions. So let's say I want
to pay Zac. And Friederike also wants to pay Zac. Well, we're both trying to modify the same
piece of encrypted state. So basically, you need something that resolves that. On Ethereum,
the entity that resolves that is the block builder who executes the transactions. So if we now
give those transactions to a block builder to execute, then again, you have further privacy leaks.
So, yeah, you really can't get strong privacy in the account-based model.
So you have to have a very different kind of model and data type to actually build a privacy chain.
And this is why we say kind of externally sometimes that EVM is not privacy compatible.
So that's why we don't build a ZK EVM.
We build a ZKVM, which can have kind of privacy-preserving properties, and maybe Zac can talk a little bit about how that actually works.
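To make this linkability point concrete, here is a minimal Python sketch. All names and structures here are illustrative assumptions, not Aztec's or Ethereum's actual code: even with every balance encrypted, the slot each transaction touches remains public metadata, so an observer can still build a transaction graph.

```python
# Hypothetical sketch: even if account balances are encrypted, an observer
# still sees WHICH slot each transaction touches, so repeated payments to
# one party remain linkable in an account-based model.
import hashlib

def encrypt(value: int, key: bytes) -> bytes:
    # Stand-in for real encryption: the observer cannot read the value.
    return hashlib.sha256(key + value.to_bytes(8, "big")).digest()

class AccountLedger:
    def __init__(self):
        self.slots = {}     # slot id -> encrypted balance
        self.observed = []  # what a public observer records per transaction

    def update(self, slot: str, new_balance: int, key: bytes):
        self.slots[slot] = encrypt(new_balance, key)
        self.observed.append(slot)  # the slot id is public metadata

ledger = AccountLedger()
key = b"recipient-secret-key"
# Three different senders all pay the same (encrypted) account:
for bal in (10, 25, 40):
    ledger.update("slot-17", bal, key)

# The observer never learns the balances themselves...
assert all(len(c) == 32 for c in ledger.slots.values())
# ...but sees the same slot touched three times: a linkable graph.
assert ledger.observed == ["slot-17", "slot-17", "slot-17"]
```

The point of the sketch is that encryption hides values, not access patterns; it is the access pattern that leaks the graph.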
So from my perspective, I think, like, restating what Zac has said,
maybe we start off imagining, you know, the ledger or the state as being, you know,
for simplicity like rows and columns with like each row indicating, let's say, an account to start with.
and then the data that's there is kind of like balances data.
And then there are transactions happening.
Like if it's a normal blockchain, there's transactions happening
and a transaction is subtracting from one of the rows
and it's adding to another row.
In this model, if you think of what a ZK EVM like Polygon's is doing, the fundamental thing it is doing is, when, let's say,
I send a transaction and I subtract from row two
and I add to row 17,
then the ZK EVM is in the end generating a proof that, when you execute my transaction, the ledger adds plus 10 to row 17 and subtracts 10 from row 2, and that this is correct.
So if there is another copy of the ledger
with the updated balances plus 10 and minus 10,
the proof just tells you that
that new ledger is,
indeed correct. What it doesn't tackle is the problem of sort of hiding that it is plus
10 in the first place. Ideally, what we want to be hit is the fact that 10 was shifted and we
also want to hide the fact that 10 was shifted from 2 to 17. The problem that ZK EVM deals with is
once you have the transaction, and this was the consequence of the transaction, the consequence is
correct. That is what a ZKM, EBM deals with, but it doesn't deal with the fact that how do you conceal
the N and how do you conceal rows 2 and rows 17? And that's kind of the problem that you are
starting to solve. Yes, that's exactly it. That's a great explanation. Thank you for that.
and yeah, you have to break away from the EVM model to solve it.
When initially we talked about ZK Money, or the first version of your system,
it was already solving this problem of: if you have amounts in row 2 and row 17, and there are these simple subtractions and simple additions, how do you kind of obfuscate all of that?
but this was already achieved by Zcash in some sense in prior history.
And now your jump is that if you imagine row number 16,
somehow that row number 16 could represent an entire smart contract with its code
and with a data set that is more complex than a simple balance.
Is that a good way of imagining it?
Yeah, it's good.
Let me try and expand on that.
So what a smart contract in Aztec does is,
effectively it controls a set of rows in the database.
What rows they are, it is up to the contract.
Depends how many storage slots you're using.
But you'll have some contract that defines the rules and the logic around which rows
in the database.
can be modified, how you can modify them, and how they're encrypted.
And then sort of what will happen is that a user will, when they submit a transaction,
they will basically, they will be submitting requests to modify some encrypted state.
So let's say, here's some old encrypted values, here's some new encrypted values.
This is a bit of a simplification, actually. I can explain why it doesn't actually work quite like that, but it's a simplification.
You can say, here's some old encrypted values, here are some new encrypted values.
And here is a ZK proof that proves that I followed the rules of the smart contract that controls these rows.
And therefore, if you verify the proof, you can trust, yep, okay, this is legit.
I don't need to know what's inside these encrypted values.
I can just, the node can just update the state.
Right.
So, I mean, if we kind of like imagine our huge accent sheet, maybe it has like tens of thousands of rows,
we can say that, let's say, like, rose
1,000 to rose
2000, they
are encrypted, right? Like, you cannot
make sense of any of it.
And let's say, like, they belong to a
smart contract,
which may be like a voting smart
contract, for example, right?
So,
rose 10, 1000 to 2000
belong to a voting smart contract.
And then what you're essentially
allowing me to do, so if I'm a voter,
in one of these elections,
I am basically sending a transaction
and it's going to make some changes
to rose 1,000 to 2,000,
some part of the state.
But what comes in is kind of like
1000 or 2,000 is kind of encrypted.
What comes out is also encrypted.
But I'm also, when I'm submitting a transaction,
I'm submitting a proof that
despite the input
being encrypted and the output being encrypted,
the transition is kind of correct
and my vote was counted correctly
and I did not interfere with the election process.
Yes. That's pretty much it.
It's kind of hard to do.
But I can, yeah, we can talk through
some of the details of how to,
how to make that happen if that would be of interest.
Yeah, why is it hard to do, maybe on a high level?
On a high level, there are two problems, possibly three.
One of them is that if you have an encrypted database,
if you have a database that needs to be privacy preserving,
you can't really, you have to use a UTXO model for your state.
So you can't use an account-based model, because it's not enough just to encrypt the information associated with a row.
when a transaction modifies a row in that database,
you also need to hide which row is being modified,
which is a little bit hard to do in an account-based model
where you're basically every state variable
is a key and a value,
and you can encrypt the value,
but the key describes what information are you actually adding
and changing in the database.
You can't really encrypt that.
To give an example of why
it's difficult to hide state changes in a private environment.
Let's consider the basic token transfer case.
Let's say I want to pay you, Meher, I want to pay Joe, I want to pay Friederike.
Then what I will be doing is I will be deducting tokens from my balance and adding into your balances.
The problem is that in an account-based state model, even if my balance is encrypted,
you can see three transactions coming in, changing the same balance.
So you know that those transactions are affecting a person.
You don't know it's me, but you know it's somebody.
And you know that for every transaction.
And therefore, you can sort of build up a transaction graph of what entities are interacting
with other entities, even if you don't actually know their addresses.
And so if you want a private database, you need it to be append-only.
So if you have a smart contract that modifies, that can control some private state,
it doesn't so much control rows in the database as it has the ability to add new rows to this database,
but it can't change existing rows. It can only add. And then you can use the same tools as Zcash used
to delete records in the database. So effectively, you have a data structure where you can add records,
you can destroy records, but you can't change them. Obviously, in the real world you do want to change values; you want to represent things like balances,
but you have to emulate that by having some
private state that represents a balance.
If I then want to send a token to Joe,
that balance gets destroyed,
and a new balance that represents my balance gets recreated
at a different row in the database.
That is one less than my old balance.
And you can craft your system such that you cannot link the creation and destruction
of data, which sounds a little complicated,
but it is essentially how Bitcoin works with its concept of unspent transaction outputs, just that we extend that so that, instead of representing cryptocurrency values, it represents whatever a smart contract wants it to represent.
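The create-and-destroy pattern described above can be sketched as an append-only commitment list plus a nullifier set, in the Zcash style. This is a simplified illustration under assumed names, not Aztec's actual data structures or hash choices:

```python
# Append-only note model: state is never edited in place. A note is
# destroyed by publishing its nullifier, and a new note is appended.
import hashlib
import os

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

class NoteTree:
    def __init__(self):
        self.commitments = []   # append-only list of note commitments
        self.nullifiers = set() # published when a note is consumed

    def create_note(self, value: int, owner_key: bytes) -> bytes:
        randomness = os.urandom(16)  # hides the value inside the commitment
        commitment = h(owner_key, value.to_bytes(8, "big"), randomness)
        self.commitments.append(commitment)
        return commitment

    def spend_note(self, commitment: bytes, owner_key: bytes):
        # Deterministic per note, so spending twice is detectable,
        # but unlinkable to the commitment without the owner's key.
        nullifier = h(b"nullifier", owner_key, commitment)
        if nullifier in self.nullifiers:
            raise ValueError("double spend")
        self.nullifiers.add(nullifier)  # the commitment itself stays put

# "Decrement a balance of 100 by 10": destroy the old note, append a 90 note.
tree = NoteTree()
alice = b"alice-key"
old = tree.create_note(100, alice)
tree.spend_note(old, alice)
tree.create_note(90, alice)

assert len(tree.commitments) == 2   # append-only: the old row stays
assert len(tree.nullifiers) == 1    # old note marked consumed
```

Note that the public record only grows: observers see that some note was consumed and some note was created, but nothing links the two.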
I think I understand this.
So it's a mash-up between the UTXO and the account-based model.
But what I'm wondering: isn't that terribly inefficient? Basically, every time you touch any state even marginally, you have to destroy that piece of the state and recreate it at the end of the Excel spreadsheet in your example, again?
Yes, yes, it is.
It is a problem with privacy-preserving systems.
So take Ethereum, for example. If I want to send some ether to somebody, my balance gets modified, which in Ethereum's cost model is 5,000 gas.
If I wanted to create a new account from scratch and add a balance which didn't exist before, it's 20,000 gas.
Privacy-preserving systems have this problem felt a bit more acutely, because if you want to modify an existing variable, you need to create some information that destroys it; that's one storage slot, you could consider. And then you have to recreate the variable somewhere else, which makes two storage slots.
So the data throughput of a privacy-preserving blockchain is a lot higher than that of a transparent public chain, which is why data-availability solutions are so important to us, and things like EIP-4844.
There are other ways to mitigate this cost as well. But maybe I'll mention something else.
I don't want to jump the gun a little bit by going into this,
but in order to build complex privacy preserving applications,
it's not enough to have private state
because the problem with private state is that it's encrypted
and therefore effectively is owned by somebody or a set of individuals.
The people who possess knowledge of the decryption key
effectively own that state.
And if you don't have that decryption key, you can't change it.
which creates some problems when it comes to creating complex applications.
For any smart contract that requires a global state, this model doesn't work.
Take, for example, if I'm building a DEX and I'm creating a liquidity pool,
so I need to know how many tokens I have of a given token type,
that needs to be public information that is constantly updated every time an entity deposits into the liquidity pool.
You can't make that private, not without very complex multi-party computations that are
slightly beyond the scope of what we're trying to do, at least in the next couple of years.
And so what you really need for complex private applications is hybrid state.
You need private state, which is encrypted.
It's owned by individuals.
You don't know when it's created.
You don't know when it's destroyed.
You don't know who owns it.
You don't know which contracts control the state.
And it's UTXO-based.
And then you have regular, normie, account-based public state, which operates exactly like Ethereum's state model does.
And smart contracts have the ability to modify both.
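One minimal way to picture the hybrid model, purely as an illustrative assumption about structure rather than Aztec's real data layout, is a single contract holding both kinds of state side by side:

```python
# Illustrative hybrid-state container: one contract holds both private,
# UTXO-style notes (append-only, opaque) and public, Ethereum-style
# key/value state (plaintext, mutable).
class HybridContractState:
    def __init__(self):
        self.private_notes = []  # encrypted commitments, append-only
        self.public_state = {}   # plaintext key -> value, mutable by rule

    def append_note(self, commitment: bytes):
        self.private_notes.append(commitment)  # no one can read inside

    def set_public(self, key: str, value: int):
        self.public_state[key] = value         # anyone can read this

pool = HybridContractState()
pool.set_public("dai_reserves", 1_000_000)  # AMM reserves must be public
pool.append_note(b"\x01" * 32)              # a user's private balance note

assert pool.public_state["dai_reserves"] == 1_000_000
assert len(pool.private_notes) == 1
```

The design point is that global, constantly-updated values (like reserves) live in the public half, while ownership lives in the private half.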
Okay, let's kind of dig into this a little bit.
Say, for instance, I have a private account with kind of private token balances,
no one knows it's me, sort of thing.
And I want to interact with a public AMM. How would that work, and what of the information that I divulge becomes public?
And how do you make sure that it's not traceable back to me?
I can start maybe with the user experience. So I guess it comes from the fact that on Aztec,
an account is defined by a private smart contract, not an externally owned account like Ethereum.
So effectively, whenever you do a transaction on Aztec, you satisfy the conditions of your private account.
So it starts as a private transaction.
So you modify some private state.
And then if your function call calls a uniswap contract, you'll probably set up that swap in the private realm.
So you'll migrate some of your DAI balance to the Uniswap private contract
before you kind of send it to the public realm for the AMM swap.
You'll do all of that in the private realm, client side, local to your device.
And then the last thing that the kind of private setup function in the Uniswap contract will do is make a public L2 call to kind of execute the swap.
So leading up to that point, all that you can see on the L2 is someone somewhere modified some
state that resulted in this public L2 call.
And then the contents of that L2 call will have some public information defined by
kind of the protocol you're calling.
So in Uniswap's case, it will likely be the asset you're trying to swap, maybe some slippage
and like a notional, but you can't see anything about what led up to that point to
kind of leak further information.
So you can't see my address, you can't see anything like that.
And you can just see that it's a valid private transaction leading up to that point.
And at that point, kind of the sequencer takes over and can kind of execute that much like a public transaction on any other L2.
I can try to summarize.
So this is what I'm about to describe is abstracted away from the user experience.
But functionally, what would happen under the hood is you unshield to a random address. So if I want to swap 10 ETH into DAI, I unshield 10 ETH to a random address. No one knows. So 10 ether has popped up, owned by 0x-something-or-other. No one knows who that is, because it was created just for this transaction.
Then that money gets put into Uniswap for, let's say, an ETH-to-DAI trade.
And so people will see that ETH got unshielded from somebody, but they don't know who. They will see that ETH gets put into Uniswap, and they'll see some DAI coming out. The DAI will be received by that random address, and then that DAI will get shielded into my private account.
and then that die will get shielded into my private account.
And again, you can construct it such that.
You don't know.
That shielding action also provides you with known information.
So you effectively, you get anonymity.
You hide the identities of the people interacting,
but the protocol logic is transparent.
So you still know the values going through Uniswap.
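The unshield-swap-reshield flow Zac walks through can be summarized in a short sketch. Every name and number here is illustrative; a real system replaces the dictionary with on-chain events and the burner address with a properly derived one-time address:

```python
# Sketch of: unshield to a fresh address, swap in public, reshield the output.
import os

def fresh_address() -> str:
    # One-time address, created just for this transaction, unlinkable to the user.
    return "0x" + os.urandom(20).hex()

def private_swap(shielded_eth: int, eth_to_dai_rate: int) -> dict:
    burner = fresh_address()
    # 1. Unshield: ETH appears at an address no observer can link to anyone.
    public_view = {"unshielded_to": burner, "eth_in": shielded_eth}
    # 2. Public swap: the protocol logic (asset pair, amounts) stays transparent.
    dai_out = shielded_eth * eth_to_dai_rate
    public_view["dai_out"] = dai_out
    # 3. Reshield: the DAI disappears back into the private note set.
    public_view["reshielded"] = True
    return public_view

view = private_swap(10, 1800)
assert view["dai_out"] == 18000                 # amounts are public...
assert view["unshielded_to"].startswith("0x")   # ...identities are not
```

This matches the trade-off described: anonymity for the participants, transparency for the protocol logic.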
The image that's cropping up in my mind is this idea of a public square and private houses, where, I mean, you have a public square in a town, and whatever happens in the public square is kind of trackable, seeable.
Then you have private houses, and in principle what happens inside my house is just known to me. And maybe in Aztec, the state can be visualized as the combination of both the public square and the private houses, where any time I interact privately with myself or with other users, I have an anonymity set of all of the private houses. So let's say I send money from my house to another house. From the outside, nobody knows which house the money came from and where it went, except the owners of these two houses. But then, when I want to interact with an AMM, I can suddenly teleport my money from one of these private houses into the public square, do a trade, which might be an exchange of one coin into another coin, in the public square. Everybody sees that the public-square activity is genuine. And then that resulting coin I can again transport back into my private house, and the anonymity set is again the set of all private houses, so nobody knows exactly which private house it went into, but it actually went into my private house.
That is a...
I think that's a great analogy.
I could extend it a bit with some kind of...
It's not just like going from the house
to a particular stall in the public square.
You can effectively imagine that you can route your funds
or your data through any house around the square
and you have to follow the logic defined in that.
house. So let's say there's a, let's say there's a kind of a customs officer who does some sort
of stamping that this is a legitimate trade coming through. You can write a stall in
the public square that will only accept kind of transactions that could have come through a
certain path, which is defined by the logic of a composable smart contract. And you could then
also have two public square stalls that could talk to each other. So there's a
full composability between all the houses kind of on the outside of the square and all the stalls inside the square. So it's not just a case of how it was in Aztec Connect, where it's a bit
more single shot where you can go and do one interaction with your asset. The interaction scope
is defined by the contract and the set of functions that the developer writes, which can cross
private to public boundaries.
So actually, that's really interesting, right? So now you can maybe imagine three sets of, you know, land. One is the public square. One is these private stalls, which are basically the private smart contracts, as I see it. And then maybe these houses are kind of like the end accounts of the users that are holding the coins. Maybe we can imagine it like that. And you've mentioned this kind of interaction where I could start from the end house, go into a stall, prove something to the private smart contract, and then that sends the funds into the public pool.
Something happens in the public pool.
Then it goes back to a stall, then it goes back to private.
Could you give me an example of a real-world application that would have such a flow?
Yeah, I can do an obvious one, I guess, which we don't see on chain right now, but kind of consumer finance.
So I could prove something about my private house, which is kind of the income coming into it from my job.
I could prove that to a private stall, which is kind of a loan broker.
And that stall can give me a credit score.
And I can take that credit score to the public square and receive a loan based on that credit score into my private account.
and kind of the only thing that's been made public in that interaction is that someone's asking for a loan
and they have a credit score above 600.
But it's not revealing which house it was or anything like that or any of the information about the proof that was required
by the kind of credit-score private stall.
So that was one analogy.
There's KYC checks.
Also some interesting game state that Zac's been thinking about as well.
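The consumer-finance example can be caricatured in a few lines. The `prove_income_above` function below is a stand-in for a real zero-knowledge proof, and the score logic is invented purely for illustration:

```python
# Toy version of the loan-broker flow: the broker learns only a yes/no
# threshold fact, never the income itself.
def prove_income_above(private_income: int, threshold: int) -> dict:
    # Stand-in for a ZK proof: reveals the claim, not the witness.
    return {"claim": f"income > {threshold}",
            "valid": private_income > threshold}

def broker_credit_score(proof: dict):
    # The broker sees only the proof, so the score can depend only on
    # the proven claim, not on the actual income figure.
    if not proof["valid"]:
        return None
    return 600  # invented score for this sketch

proof = prove_income_above(private_income=85_000, threshold=50_000)
score = broker_credit_score(proof)
assert score == 600
assert "85" not in proof["claim"]  # the income never leaves the "house"
```

The public square then only ever sees "someone with a score above 600 asked for a loan", exactly as described above.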
Cool.
Yeah, I think I understand the vision of where you guys want this to be at. Talk me through the technical roadmap and the challenges that you face along the way.
So, I mean, even for, I mean, all of us, we're eternally optimistic about timelines.
I mean, I've been wrong about this many times.
And for you guys to say, you will have this ready by the end of next year.
Sounds like there are some major roadblocks in the way.
I'll do the short-term roadmap, and then Zac can kind of fill in some of the major obstacles,
because there's definitely a few roadblocks in our path.
But what we've been able to do for the roadmap is to kind of condense which parts are needed for
main net and kind of put those in one bucket and then separate that bucket from which parts
are needed for developers to kind of start to write applications and test out what's possible
in this new design space.
And so in the second bucket,
we've actually managed to kind of remove a lot of the heavy lifting
away from the protocol and just define an optimistic version of the protocol,
which we're calling Aztec sandbox.
And we're hopefully going to have a release of that
in early Q3.
So you'll be able to kind of take that,
run it a bit like a local Ganache or Foundry node, and write programs in our smart contract language, Noir, against that. So you can test out these new types of applications as soon as
kind of Q3 this year. And then we'll kind of expand that functionality with a more
centralized actual test net with persistent state. So you can interact with other people's
applications, not just local ones, which we're targeting towards the end of the year.
And then we'll expand that again with a kind of decentralized test net in kind of early 2024.
And then the rest of it will be kind of filling in all the blanks to actually get this onto mainnet. Zac can talk a bit about the challenges on that journey, because it can no longer be optimistic at that point.
Yeah, so I guess one of the reasons for the long roadmap is because we're also aware of how optimistic timelines normally are in this industry.
And we've fallen prey to this ourselves in the past where we've internally been far too optimistic
about how long it would take us to execute on building Aztec, on Aztec Connect.
And so part of the reason for the end of 2024 timeline is because we actually want to hit this one.
Optimistically, it isn't two years, but one and a half now.
There are three key core technical components to Aztec.
One of them is decentralization.
As Joe said, we want to launch on mainnet decentralized from day one.
Then there's what I guess can be termed heavy engineering, which is just building out the node architecture, the software to run a node, to run a sequencer, the kind of core protocol-level technology. And then there's cryptography, which is actually building out our cryptography back-end software, the actual algorithms that will be constructing these zero-knowledge proofs and verifying them. And also, I guess, under cryptography is the circuit architecture: actually building out the ZK circuits that will govern this entire protocol.
Just the heavy engineering work alone, I would wager, is more complicated than in many other L2 projects out there, just because of the complexities around this hybrid state model.
And then on top of that, you have this decentralization track, where we're building out the decentralized sequencer, which is non-trivial. And then there's the cryptography, and there's a reason why no one's really attempted anything at this scale before: the crypto wasn't good enough. It took a lot of work to make it good enough.
There's been a ton of advancements over the last few years, some of which we played a hand in. But we're confident we're in a place now where the fundamental proving tech is fast enough to construct these very complicated zero-knowledge proofs on resource-constrained platforms, like laptops, like phones, in web browsers. But it's going to take a while to build that out and audit it and make sure that it's secure.
Yeah, to give you an idea, maybe, of the kind of cryptographic challenges. So for these programs to actually run in the browser, you have to prove the correct execution of a set of rules. So you have to know that a private program is only updating or adding UTXOs that it owns, that signatures are checked, that fees have been paid. All of these rules that are defined by the protocol have to be checked by a protocol-level circuit. And every time you call a smart contract on Aztec, you have to do that proof, and you do that through recursion.
With Aztec 2, we used to do recursion in our rollup circuit, and it would be done on a 32-core machine in AWS, and it would take minutes to do a single recursion. With some of the latest work we're doing for some L1 private voting work, we've got it down to 15 seconds in the browser.
But for Aztec 3, or the next version of Aztec, we need to be able to do multiple levels of recursion for a single transaction in a couple of seconds. So we've got to go down another order of magnitude, which is probably a nice segue into some of the cryptography research and Goblin Plonk that we've been working on.
Just to repackage this a little bit: basically, the entire computation has to happen client side, because you want to divulge as little information as possible, so you create the proofs client side instead of just sending it somewhere, right?
Basically, if I send a vanilla Ethereum transaction, that's literally zero computation overhead for me. I just send it to the mempool and that's it.
But basically because part of the system runs client side,
this kind of creates this overhead that needs to be handled in browser or in the client.
Correct, yeah.
The private side of the system all has to run client side.
So before we get into the cryptography, the novel cryptography, Goblin Plonk, that you have created, I am actually curious how the system you are targeting differs from Mina, which is another one of these systems that are building privacy-preserving smart contracts. Is it the case that these are radically different visions, or are they similar visions with different technical approaches?
I think they're somewhat similar visions, but from very different perspectives. The reason why we've only really recently embarked on building this programmable network is because we weren't comfortable with the state of ZK cryptography, and we figured what we really wanted to do would be too slow, the proofs would be too slow, it would be a bad user experience.
Effectively, we want what we build to look and talk and quack like a smart contract, where you don't need to be a cryptographer to write smart contracts. We've got this language called Noir, which has a sort of Rust-like syntax, but anybody using it, at least in its full version, should find it feels somewhat familiar to Solidity in terms of how it operates.
It differs slightly from Mina. Obviously I'm not an expert on Mina, so I don't want to mischaracterize what they're doing. So with Mina, getting the kind of composability that you get in Ethereum is possible, but it's a little bit more involved, where you need to sequence what would normally be one transaction over many transactions. And I don't believe they have their own specific smart contract programming language. At its core, the thing about Aztec's architecture is that we want full composability, which means we want contracts to be able to call other contracts, which can call contracts. And effectively, each contract call has to be its own zero-knowledge proof of correctness. And therefore, if you want that kind of composability, at a bare minimum you need some kind of higher-level ZK circuit that the client is writing a proof of, that will verify all of these private contract calls and sequence them properly, so that the call semantics are all correct and the rules are all being followed.
In reality, it's actually much more convenient and more practical to do multiple arbitrary layers of recursion. So you have lots of proofs verifying proofs verifying proofs. And my understanding is, at least when Mina launched, that wasn't really practical to do client side, so they've taken a different approach. But that's kind of where my understanding of the protocol ends, and I don't want to mischaracterize what they're doing; it's an extremely impressive project. I just think that they've made some slightly different design choices, and part of them are motivated by the fact that they're a layer 1, so they have different requirements to us on that front.
Right. So is it fair to think that
you're betting on this recursive proving architecture because, fundamentally, as we said, there are private houses, stalls, and then public squares? And my big transaction has to go from a private house to a stall to a public square, back to a private stall, to a private house. So it has to touch all of those pieces. And somehow the recursion is: you take one step and you generate a proof for it. And then the next step verifies the proof for the first step, does something, and generates a proof for both the steps together. Then it goes to the public square, that's step three: it verifies the proofs for the first two steps, does something, generates a proof for the entire thing, and so on. So this is essentially why you need recursion?
Sort of. Maybe I can try and rephrase that, because it's not quite that iterative. So what happens is: imagine you're moving between private houses and you're moving between private stores, and each action, each time you move between them, let's call that a step. What you really need is some entity monitoring these steps, checking that they're all following the rules. Basically, you have some kind of spy that's spying on your actions, going, okay, is this step correct? Is this step correct? This is what we call the kernel circuit. The name is borrowed from Zexe. So effectively, the kernel circuit is the entity that verifies all these steps are correct. And obviously, because it's basically acting like a giant spy, the proof of the kernel circuit must be made client side. You leak way too much information if you send it to a third party. And that kernel circuit is doing lots and lots of recursion, and it's more practical to architect it such that you actually have recursion at the kernel circuit level. So one way of thinking about it is that you have this kernel circuit, and what it'll do is verify that one of these steps between the private houses and the private stores is correct, and that's it. But what the kernel circuit also does is verify a previous iteration of itself. So you have a kernel circuit for step one that verifies step one. And then at step two, you have a kernel circuit that verifies step two, and the kernel circuit that verified step one. And on and on you go, until you've made all of your private steps. And then you step out into the public sphere, and that proof of the kernel circuit gets kicked off to a sequencer to complete the transaction. And so that requires a lot of complexity to pull off in a ZK system.
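The iterated kernel pattern described here can be sketched with hashes standing in for real recursive proofs. This is an analogy only, not Aztec's protocol: each "kernel proof" commits to one step and to the previous kernel proof, so a single final artifact covers the whole private call stack. All names are hypothetical.

```python
# Toy model of kernel-circuit recursion: a hash chain where each link
# "verifies" (commits to) the previous link, like each kernel proof
# verifying the previous kernel proof. Real proofs are ZK; hashes are not.
import hashlib

def prove_kernel(step: str, prev_proof: bytes) -> bytes:
    """'Prove' one kernel iteration: commit to this step and the prior proof."""
    return hashlib.sha256(step.encode() + prev_proof).digest()

def prove_private_call_stack(steps):
    """Fold a list of private calls into one final kernel 'proof', client side."""
    proof = b"\x00" * 32              # base case: an empty initial kernel proof
    for step in steps:
        proof = prove_kernel(step, proof)
    return proof

# The sequencer only ever sees the final artifact, not the individual steps.
final_proof = prove_private_call_stack(["house -> stall", "stall -> square"])
```

The point of the analogy: no matter how many private steps you take, what leaves your machine is one constant-size object.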
Right. And so, fundamentally, these Aztec wallets will need to be able to generate these recursive proofs inside the browser. And that's also going to be an engineering challenge, to build wallets of such complexity.
Yes. So we know how to do the cryptography. At least we're very confident we do. But turning it from paper to code is not a trivial process.
The plan is, and maybe Joe can speak about this more, that what we're calling an Aztec wallet is something that, if you're actually building a proper wallet to manage Aztec funds, you would incorporate our software within it. It basically acts a little bit like a miniature node. Consider web3.js or ethers.js, anything like that: we're building an equivalent, aztec.js, and you can make calls to it to construct these proofs. So the goal is to abstract all of that away from application developers.
Yeah, for an end webpage, you'll connect your version of aztec.js, which will be running an RPC client-server model, and your wallet is basically implementing that through our open source software. And that gives you access to private state management, so which UTXOs are yours, and also the ability to simulate and execute these private smart contract programs and ultimately construct the proofs. So we can do some of the engineering for wallets to make this a little bit easier, but there is still a lot of work ahead of us to actually get it fast enough to be a good user experience, which is kind of the extended timeline we have here.
So I assume that the amount of gas that you have to pay on Aztec is also linearly related to the computation. But it seems to me that, as a user, you can't always know how much computation you actually have to touch in order to do something, because basically you say, if you're touching a state that a thousand people have access rights to, you kind of need to update that accordingly, right? So basically, if you add one more person who can update something, you kind of need to deploy an entirely new contract to do that. I mean, if I do something on Ethereum, I kind of know what kind of gas is attached to any of the opcodes that I use and how it modifies the state. Is that necessarily apparent on Aztec?
I can try and answer for the private world. Zac, do you want to talk about gas metering in the public world?
Because I guess in the private world, you don't pay for compute
because the act of executing the kernel proof
is proving that you've done the compute correctly.
Yeah, you've done it yourself, right?
Yeah, so compute is effectively free. But what you're right in saying is that what you will pay for is the state reads and writes and nullifications that result from that compute. And that's where there will be a kind of gas cost for the private world. And you can see, based on what the smart contract's doing, kind of what that will be, but it could result in a variable gas cost for a certain transaction based on your UTXOs under the hood.
So that's kind of one thing we're hopeful that's going to be solved by EIP 4844.
If the cost of data goes down a lot, it should become cheap to do private transactions.
And then when you get to the public world, it does act a lot more like Ethereum because someone else is doing the compute there.
So the metering there, there will be opcodes in kind of the public land that will get metered.
and you'll pay a fee that has to cover the cost of the sequencer,
both executing that but also then constructing a proof of correct execution for the public transaction.
Yeah, just to expand on that and to summarize: for the private domain, you pay for state writes, and I don't believe you pay for state reads. It's just state writes. And things like events, which, again, are a kind of broadcast data. And in the public world, yeah, it's much more familiar gas metering, like it is on Ethereum.
But just like on Ethereum, most relatively complex contract calls do have a variable gas cost. If I'm calling Uniswap, I don't know how the routing logic affects how much I'm going to pay. So how do you figure out how much gas your transaction is going to be? Well, you simulate it client side before you send it. You do the same on Aztec. Simulation of transactions is very cheap, because you're not making any zero-knowledge proofs if you're just running the logic. So you can figure out ahead of time, okay, how much is it going to cost, before you actually go through the effort of building the ZK proof and sending the transaction.
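The flow described here, simulate cheaply client side and only then pay the cost of proving, can be sketched as follows. The function names and the per-item prices are hypothetical, not Aztec's actual fee schedule.

```python
# Minimal sketch of "simulate, then prove": a dry run executes the contract
# logic with no proving, just to count the state writes and nullifications
# it would produce, and the fee is priced from those effects.
COST_PER_WRITE = 100       # hypothetical gas per new note / state write
COST_PER_NULLIFIER = 60    # hypothetical gas per nullified (spent) UTXO

def simulate(call):
    """Cheap dry run: execute the logic, record effects, build no proofs."""
    writes, nullifiers = call()
    return writes, nullifiers

def estimate_fee(call):
    writes, nullifiers = simulate(call)
    return len(writes) * COST_PER_WRITE + len(nullifiers) * COST_PER_NULLIFIER

# e.g. a transfer that spends 2 UTXOs and creates 2 new notes:
fee = estimate_fee(lambda: (["note_bob", "note_change"], ["utxo_1", "utxo_2"]))
# fee == 2*100 + 2*60 == 320
```

Only once the user accepts the estimated fee does the expensive step, building the ZK proof, actually run.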
Cool. I have a couple more questions as to the roadmap. So you talked about the different buckets, and I kind of want to dive into the heavy engineering bucket for now. So you already talked about sequencer decentralization as part of that to-do list.
Will your sequencer be fully decentralized by the time mainnet launches? Because none of the other layer twos have done that, right?
So basically everyone's still fighting with the sequencer decentralization at this point.
And basically, if I were to know that any of the other L2s
were to be able to solve this by the end of next year,
that would greatly alleviate many of the concerns I have about the L2 ecosystem.
So is that already kind of priced in?
Yeah, I guess some of the rationale for this line of engineering is based on the nature of our network and our desire for it to be credibly neutral and have complete liveness. And I guess, so we're based in the UK, and some of our colleagues are based in Europe and some are based in the US, and no one at the moment can agree on crypto regulation. So we feel like for any one entity to run a sequencer that deals with encrypted state is a bit of a fool's errand. So we're looking for multiple sequencers to run, which can maybe conform to different regulatory requirements if they need to, or just act as a decentralized, credibly neutral layer, much like Ethereum does.
So that's some of the rationale.
And we face this pressure a bit more than some of the public zkEVM-based L2s, because they can just run centralized sequencers and have a path to decentralization, which I think is part of the reason why no one's made significant inroads here yet, because it's not the most pressing problem.
But for us, we feel like it's something that's needed for launch.
And I guess like how we get there, yeah, we're kind of still in the RFP design phase.
We've had some public kind of discussions on our forum and some great submissions from external teams on how best to do this.
But really, yeah, we're looking for a proposal that can take the technology we've built and ensure that, if enough entities are running it, the network can remain live, with good throughput and user experience for developers.
So that's kind of the high level goals here,
but it is going to be an engineering challenge
to actually launch this.
But that's why we're starting now,
not towards the end of next year.
Yes, so it is priced in,
but it is also the biggest unknown unknown on the roadmap.
Okay, so cryptography. You guys just started talking about this, and we kind of got sidetracked a little bit. Talk to us about Plonk and Goblin Plonk.
Happy to, yeah. So we put together Plonk in 2019. By the standards of how quickly ZK cryptography moves in this industry, Plonk is absolutely ancient. And it was really the first practical universal SNARK, as in you could write ZK circuits in it where you didn't need to perform this trusted setup process per circuit. Things have moved along a lot since 2019, where we've upgraded Plonk, from TurboPlonk to UltraPlonk, and now there's HyperPlonk. There's been a lot of innovations that have happened over the last two years. Particularly, one of them is that we, as in the collective cryptographic community, have cracked how to construct Plonkish-type systems using a cryptographic primitive called the sumcheck, which is much more efficient and faster than what we were using earlier. For us, this is a very big deal, because it removes the need to perform an algorithm called a fast Fourier transform, which is very, very memory-hungry. And so the latest iteration of Plonk we're building, which we're calling Honk, for highly optimized Plonk, uses sumchecks. And we're confident that memory consumption will be, once it's optimized, at most 20% of what it currently is with things like Aztec Connect. And so that's a huge deal, particularly when you're trying to run a proof in a browser tab. And then there's a lot of development and work that's been happening on what are called folding schemes, which are effectively a way of performing recursion, verifying proofs inside proofs, more efficiently than what's come before.
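The sumcheck primitive mentioned here can be illustrated with a toy prover/verifier exchange. This is a minimal sketch, not Aztec's implementation: it uses a small prime field, Python's `random` module in place of a Fiat-Shamir transcript, and represents the multilinear polynomial by its evaluation table on the boolean hypercube.

```python
# Toy sumcheck: the prover convinces the verifier that it knows
# S = sum of f(x) over all boolean inputs x, using n rounds of cheap
# checks instead of re-summing 2^n terms.
import random

P = 2**31 - 1  # a small prime field; real systems use ~256-bit fields

def fold(evals, r):
    """Fix the first variable of a multilinear evaluation table to r."""
    half = len(evals) // 2
    return [(evals[i] * (1 - r) + evals[half + i] * r) % P for i in range(half)]

def sumcheck(table):
    """Honest prover + verifier for sum_{x in {0,1}^n} f(x); True iff accepted."""
    n = len(table).bit_length() - 1
    evals = [t % P for t in table]
    claim = sum(evals) % P                  # the prover's claimed sum
    for _ in range(n):
        half = len(evals) // 2
        g0 = sum(evals[:half]) % P          # g_i(0): partial sum, current var = 0
        g1 = sum(evals[half:]) % P          # g_i(1): partial sum, current var = 1
        if (g0 + g1) % P != claim:          # verifier's round consistency check
            return False
        r = random.randrange(P)             # random challenge (Fiat-Shamir IRL)
        claim = (g0 * (1 - r) + g1 * r) % P # g_i(r), since g_i is linear here
        evals = fold(evals, r)
    # final check: a single evaluation of f at the random point
    return claim == evals[0]
```

The verifier's work per round is constant, which is part of why sumcheck-based systems can avoid the memory-hungry FFTs described above.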
And Goblin Plonk is a slightly orthogonal piece of research there which complements folding schemes; it's effectively a way of doing recursion efficiently. When you're using elliptic curves and you want to verify a proof inside a proof, you've got to do what's called non-native elliptic curve arithmetic: some very complicated mathematical operations that are difficult to emulate inside a ZK-SNARK circuit. And Goblin Plonk mostly removes that difficulty and makes them relatively easy to do. So you combine Goblin Plonk with folding schemes and Plonk with sumchecks, and you mash it all together into a ball, and you get a proving system which is fast enough for Aztec. It can do recursion client side super quickly, memory consumption is really light, which means you can run complex circuits in a browser, and proving is fast enough to be practical. At least, that's the goal.
That sounds wonderful. How worried are you about using novel cryptography in production? Because basically it's super complex math, and all it takes is a single vulnerability, right?
That is true. However, we have the luxury of our own in-house crypto R&D team. A lot of the research we publish ourselves, with our own proofs of security. So we understand the tech at a much deeper level than a typical software development outfit. And so we have a lot of confidence that the tech we deploy is secure at the cryptography layer, as in the proofs of security are good, the soundness is good, it's secure. Obviously, when you're implementing novel cryptography for the first time, the biggest risk is in the software: that your software doesn't actually do what the paper says, the maths in the paper; there's a bug, there's some issue, there's something you haven't seen, which causes it to become insecure. So that's one layer that errors can creep in at: the implementation of the cryptography. The other layer that errors creep in at is the circuit level.
One of the big issues with ZK-SNARK circuit design is under-constrained circuits. Basically, the circuit's supposed to be enforcing logical rules that make sure that your transaction is correct and it can't double spend. However, one of the most common types of bugs is where, actually, the logical rules you're applying are slightly more relaxed than you intended, because you've configured your circuit incorrectly. That's quite common. We experienced this when we deployed Aztec Connect and Aztec 2: we did have under-constrained circuit bugs. Some of them we found; a couple were reported externally by third parties for a bug bounty.
All of them we fixed and disclosed publicly, but it does happen. And so, yes, there are technical risks with building advanced cryptography systems and deploying them to production. It's one of the reasons why our timeline is so long: so that we have a longer time to audit everything. But ultimately, one of the frustrating things is that if any applied cryptographer comes to you and says, I can build secure cryptography software, 100%, no bugs, they're either lying to you or to themselves. So I can't say that when we launch on mainnet, everything's going to be perfect, everything's going to work, and there's not going to be any security issues. What I can say is that we will have done our absolute utmost to audit the security, to validate that what we're doing is correct, moving very methodically as we build this. And we'll have a very generous bug bounty to help incentivize third parties to review and audit. We wouldn't recommend that people put very large sums of money through Aztec in the first year of its operation.
Because ultimately, the only thing that really proves a cryptography protocol is time. This has been the case not just in ZK or in Web3, but forever, really. Even nowadays, the fundamental security assumption that backs elliptic curves, that you can't solve this computational problem called the discrete logarithm problem: we don't have a proof that you can't solve it. It's just that no one's cracked it for 30 years. So we just went, okay, fine, it's probably good. Yeah, basically, we're going to do our absolute utmost to internally validate our software and move methodically. You know, we've done this before, twice now, which is quite rare in this sector, to have deployed advanced ZK rollups to production twice. We've done that, we've got experience, and we're going to use all of what we've learned to make sure that we've done as good a job as possible. And then we will have a very generous bug bounty to engage with the community, to try and help us spot issues as and when they come up. And we would expect that people's comfort with Aztec, the volume of funds people are comfortable putting in, that we're comfortable seeing go into Aztec, will just increase over time, the longer that we remain bug-free.
There are some things as well which have extended our timeline, in terms of client diversity and working out what that looks like on Aztec. So there may be parts of the system where multiple implementations help get that confidence. So once we've got a reference implementation built, maybe it's for the rollup proofs, or maybe it's just for the verifiers, there may be ways to build in extra redundancy, where, I guess, two implementations have to agree on the same result. And that's something that we'll be starting in 2024, once we have the full protocol nailed down and working on testnet, for other teams to start building other implementations off our reference implementation, to get extra security and help spot bugs that way.
So maybe one final question. In the history of this blockchain technology, I've often encountered a pattern where some capability emerges, and then people realize that it's not enough, and then the next version emerges with more capability, and they realize that's not enough, and then a third version emerges, and they realize it's not enough, and then a fourth version emerges, and it turns out that actually addresses most use cases and becomes really widespread. A typical example: you started off with Bitcoin. I can send coins and I can write a bunch of Forth-like scripts. Then people realized, okay, it's not enough to just send coins in a decentralized fashion; we need to do derivatives contracts. So Mastercoin, colored coins, all those things came out. Then came Ethereum, with a virtual machine, and it turns out Ethereum is really enough for a massive spectrum of activities.
And then, maybe in the last 10 years, if you look at public smart contracts, very little has been built on top of the EVM itself. Maybe we can say Solana did parallelization of transaction processing, and maybe that's an innovation; people still doubt it, right? So that happened in that space. Similarly, in the privacy space, I see that in the beginning you started out with Zerocoin, then came Zcash, which did the cash aspect well. And once you did the cash aspect well, the natural consequence was, hey, can we do smart contracts well? And now you're tackling that.
And in your vision, if your vision is achieved, do you think there is a next layer of privacy-preserving smart contract systems that's still possible? Are there elements of your system where developers don't have certain functionality, and you will actually need to build an Aztec V4, or somebody will need to build the next generation of privacy-preserving smart contracts? Or do you think that your capability actually nails the privacy-preserving smart contract space, and that's the EVM of the future in that space?
I can field that question. Obviously, Joe and I are very biased on this. My answer to that would be yes, but just to go back a bit and speak widely: you're absolutely right in the observation about this iterative, incremental progress. I mean, this is really the fourth iteration of a private transaction that we've built, going back six years. And the frustration has always been that we've always wanted to do complex privacy-preserving transactions, but the tech's never been there: crypto isn't good enough, crypto isn't good enough. So we make the crypto good enough.
And it's always that, as soon as we crest some capability threshold, it's like, okay, we can do something more. So originally, Aztec was confidential transactions on layer one: very expensive, you didn't get anonymity, you just hid your balances. It was not that great, but it was the best we could do. Then we came up with Plonk, and I'm like, oh, great, okay, we can use this to now do something better. We can actually make a proper, basically, Zcash for Ethereum, where you can shield Ether. Let's do that. And we did that: Aztec 2. And I'm like, hmm, but this isn't quite good enough. We wanted full programmability. We couldn't get there, but we're like, okay, we've got this, we've got UltraPlonk, we've got recursion. Let's interact with DeFi. Let's do layer two to layer one DeFi interactions and transaction matching. We can do that. So we did that: we launched Aztec Connect. Now, finally, finally, we have the tech where we're like, okay, we can do now what we wanted to do all along, which was programmable privacy.
So the way I see things evolving, at least for Aztec, is that I don't see us downing our tools and going home once we've launched the Aztec network. But I do see that feature enhancements will be iterative and not revolutionary, and there'll be upgrades to the existing Aztec network and not completely new architectures that are deployed. Because I do think that we've summited the overwhelming majority of all of the thresholds that we've been trying to cross.
But speaking more widely, are there any other frontiers, any other game changers that can happen down the line? There are two that I can see. One is less speculative than the other. I think the next big threshold is multi-party computation. With the systems that we're enabling with Aztec, you can get single-user privacy, and then you interact with transparent protocols in this public/private square description that you mentioned earlier. What would be very nice is if there is no public square and everything is private. But that requires a large amount of coordination between a large number of entities that all cannot know anything about the people they're engaging with. That requires MPC. It's much more complicated to execute on: you have different, worse trust assumptions that you have to work around, and the transaction complexity is a lot higher, because you're interacting with a large number of other individuals. However, we do think that Aztec is basically the network where that will happen. One of the things we're excited about with Aztec is that we see it basically as an MPC incubator, because Aztec smart contracts are the perfect execution environment to run an MPC protocol on. Because you have this decentralized validation logic that everyone can trust, that can modify private state, shared state; it's what you need.
And we're going to be launching with MPC primitives built in to our programming language, Noir. However, we do anticipate it'll take a while for people to experiment with these and actually build advanced systems. The biggest game changer, which I think is a long way away, if it ever really happens; I'm still not sure it's going to happen within 10 years, or even 20, is fully homomorphic encryption, but more than that, verifiable fully homomorphic encryption.
So FHE is a bit of a holy grail. Instead of proving that a computation is correct, where I have some secret data and I can prove to you that it follows some rules, you encrypt your secrets, you give them to some other schmuck, and they run the program for you, but they don't know anything about the algorithm they're running, they don't know anything about the data. That's FHE. It's much harder to do, it's much slower, and the amount of information that's required is much higher. Both of these are problems in a blockchain protocol. And within a blockchain protocol, what you really actually want is to verify that an FHE computation has been correctly performed. So you want to say: I gave my encrypted data to some third-party schmuck, they ran an FHE program, here's the output, and by the way, this all happened; you don't have to trust me when I say the FHE program was run.
And so that means taking a very, very, very complex computing operation, like an FHE algorithm, and running that inside a SNARK. That's the real holy grail. But, to be honest, I don't see that happening for quite a long time to come, and Aztec isn't really built around that, because it's a bit too speculative.
So what is, like, the man-on-the-street capability? Meaning, okay, I understand the man-on-the-street capability for Aztec, which is: my coins are private, I can hit a smart contract, get a credit score, privately, and then I can go to the public square, borrow money based on my credit score, and then I can go back to the private world. MPC seems to be: yeah, you can do all that, but you really don't even need to go to the public square to take the loan. The loan will also be granted to you on some kind of private square. Everything's private. But what does FHE provide me beyond that? I already feel like that's enough for me.
I think so too, which is why we're not focusing on it, given its technical hurdles. But I think with FHE you might, maybe, be able to actually create an FHE Ethereum. Maybe. As in, just take regular Ethereum and wrap it not in ZK but in FHE, where you use oblivious RAM to do the state updates, and you might be able to make that fully private with some tweaks. So we could have the account model of Ethereum completely, and smart contract writing would behave exactly like Ethereum, but the whole thing is obfuscated.
Possibly. I'm a little uncertain myself, but you'd have a very different execution layer, basically, if it is possible. And I think, yeah, we did some early tests on using some FHE methods to help retrieve Aztec UTXOs, and the compute required is eye-watering. I think to even match Ethereum's throughput on today's protocols, it's just a complete non-starter. But it's fun to put pie in the sky and see where things can get to, and maybe it will become faster than we think, but it seems quite far away right now.
I have one more final question.
Seriously, last final question.
You have made the somewhat controversial decision to sunset Aztec Connect to focus on building Aztec 3, leaving everyone who had so far entrusted themselves to you and your company in building on Aztec Connect somewhat high and dry. Can you walk us through that decision process? Because I totally see that maintaining something where you don't know whether there's an upgrade path to the successor is a pain, but also, frankly, pissing off the people who have so far built on your protocol is probably not a great business move.
Yeah, it's a great question. I guess I would dispute the "high and dry" part.
We've made all of the code fully open source, and if people want to take the network and take on the costs of running it, from a maintenance standpoint and just resources, we encourage that. We can put the code links in the podcast.
And for the users, we've offered free withdrawals for the next year. So we're hopeful that, even though it's not a decision a lot of users agree with, because they were using the product and enjoying it, they shouldn't be out of pocket because of it.
And we're very hopeful that once Aztec 3, or Aztec the next version, goes live, they'll come across and try the wealth of new applications that are there. From our community, at least the developers, they're all very excited to try the sandbox. So I think we've managed to protect against burning bridges there where possible.
And I guess on the rationale: under the hood, Aztec Connect had a lot of usage, and that came with costs. We had close to 200,000 users, and for a company like ours, we're still small. We may have raised venture funding, but we were spending close to half of our engineering resources on keeping that live. And we felt that it was not just normal engineering resources that were replaceable and could be scaled up; these are ZK-SNARK-based brains that are not really scalable. So it really came down to: can we keep this live, work out how to upgrade it, and decentralize it, at the same time as building something as ambitious as Aztec? And the math just said no. So it came down to a business decision: get Aztec out by the end of 2024, or not really at all. That's what forced our hand there.
Okay.
That's fair enough.
If people wanted to keep abreast of developments going towards Aztec 3, where's the best place they can follow you on Twitter or Discord or your forum or what would you recommend?
Yeah, I'd say the forum. We're trying to do a lot of our protocol and product development in public, and so for key protocol decisions we'll be using our forum, which is at discourse.aztec.network. We will obviously be tweeting about that via our Twitter account, which is @aztecnetwork. But really, we're trying to build Aztec in public and get as much community involvement in the design and building of it as possible.
We also have lots of RFPs coming up where we encourage people to chime in on the design decisions we're making, everything from sequencer selection through to account abstraction and token models. So if you have thoughts on how that should work and want to debate some of our ideas, then yeah, please head to the forum.
Super cool.
Thank you guys for coming on again.
Yeah, thank you so much for having us.
Yes, thank you.
This has been a lot of fun.
Thank you for joining us on this week's episode.
We release new episodes every week.
You can find and subscribe to the show on iTunes, Spotify, YouTube, SoundCloud, or wherever
you listen to podcasts.
And if you have a Google Home or Alexa device, you can tell it to listen to the latest episode
of the Epicenter podcast.
Go to epicenter.tv slash subscribe for a full list of places where you can watch and listen.
And while you're there, be sure to sign up for the newsletter, so you get new episodes
in your inbox as they're released.
If you want to interact with us, guests, or other podcast listeners, you can follow us on Twitter.
And please leave us a review on iTunes.
It helps people find the show, and we're always happy to read them.
So thanks so much, and we look forward to being back next week.
