Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Rand Hindi: Zama - Fully Homomorphic Encryption in Blockchain Applications & Privacy
Episode Date: November 24, 2023

Homo- (Greek prefix meaning 'same'); -morphic (Greek suffix meaning 'having a specific shape/form'). Intuitively, one could deduce that homomorphic encryption indicates that the initial data and the encrypted result (ciphertext) share the same form. Based on this property, computation can be performed on the encrypted data without prior decryption. By decrypting the result, you get the same output as the computation performed on the unencrypted data. While homomorphic encryption can be either additive or multiplicative, fully homomorphic encryption supports both types of operations. Unlike ZKPs, which are proofs of computational integrity, fully homomorphic encryption allows for computation on encrypted data without revealing additional information about the original data. This could provide the missing link for ensuring private transactions on blockchains' public ledgers.

We were joined by Rand Hindi, CEO of Zama, to discuss fully homomorphic encryption solutions, how they differ from ZKPs & MPC, and how they can be leveraged to ensure compliant programmable privacy.

Topics covered in this episode:
- Rand's background and his interest in privacy
- Meeting Pascal and founding Zama
- Fully homomorphic encryption (FHE)
- Zero-knowledge proofs vs. multi-party computation vs. fully homomorphic encryption
- Taking fully homomorphic encryption 'mainstream'
- Zama's products
- fhEVM
- How multi-party computation would secure fhEVM
- Multi-key homomorphic encryption & functional encryption
- Deploying an FHE rollup
- FHE use cases
- Privacy
- Zama's business model

Episode links:
- Rand Hindi on Twitter
- Zama on Twitter
- Fhenix on Twitter

Sponsors:
- dYdX Foundation: The recently launched dYdX Chain features new governance and token economics that empower stakers and promote validator decentralisation. Bridge your DYDX tokens and contribute to the evolution of dYdX Chain, fully permissionless and community-driven.
- https://bit.ly/47kqG59

This episode is hosted by Friederike Ernst. Show notes and listening options: epicenter.tv/523
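To make the additive/multiplicative distinction above concrete: textbook RSA is the classic example of a multiplicatively homomorphic scheme, where the product of two ciphertexts decrypts to the product of the plaintexts. A toy Python sketch with tiny fixed parameters (illustration only; textbook RSA with small numbers is not secure encryption):

```python
# Textbook RSA is multiplicatively homomorphic: Enc(a) * Enc(b) = Enc(a * b mod n).
# Tiny classroom parameters for illustration only -- never secure in practice.
p, q = 61, 53
n = p * q                 # 3233
e, d = 17, 2753           # e * d = 1 (mod phi(n)), phi(n) = 3120

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

# Multiply the ciphertexts only -- the plaintexts 7 and 6 are never combined directly.
c = encrypt(7) * encrypt(6) % n
assert decrypt(c) == 42   # decrypts to the product of the plaintexts
```

Fully homomorphic encryption, the episode's subject, supports addition and multiplication (and hence arbitrary computation) in one scheme.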
Transcript
Introducing the next generation of DYDX and the next version of the DYDX token.
Welcome to the DYDX chain.
New token mechanics mean you can stake to secure the network.
Staking is fully decentralized and controlled by the DYDX token holders.
All fees are distributed to stakers.
Earn rewards from using the DYDX protocol,
with rewards planned for traders and early adopters too.
New governance means you are in control.
Trading has been democratized.
You can vote on protocol improvements, token distributions, and more.
Bridge your DYDX to seamlessly transition to DYDX chain.
Bridge now at bridge.dydx.trade.
Trade and contribute to the evolution of DYDX chain, open source and community-driven.
Run your own validator.
Validating is fully permissionless.
Join us on our mission to democratize access to financial opportunity today.
Welcome to Epicenter, the show which talks about the technologies, projects, and people driving decentralization in the blockchain revolution.
I'm Friederike Ernst, and today I'm speaking with Rand Hindi, the CEO and co-founder of Zama.
Zama leverages fully homomorphic encryption for all kinds of privacy stuff that we'll talk about in just a second.
Before we start properly, Rand, thanks for coming on. Can you tell us a little bit about yourself?
Sure. Thank you for having me. So my name is Rand.
I'm an entrepreneur and investor in deep tech.
I started coding when I was 10 years old, built my first company as a teenager,
then did a PhD in AI in 2007 before it was actually a cool thing to do.
I was then running an AI company focusing on privacy.
They got acquired in 2019.
And since 2020, I've been running Zama with my co-founder, Pascal Paillier.
And I also do quite a bit of investing in deep tech.
So my sweet spot as an investor is companies
that most other investors cannot understand yet.
Ah, that sounds fascinating.
What kind of, can you give us some examples of companies you've invested in
or you're thinking of investing in?
Sure.
At the moment, I'm looking a lot at biotech, specifically psychedelics, longevity.
My PhD was in AI applied to biology.
So I can read bio papers just as well as I can read AI papers.
And so I was able to get in early on a lot of things that people thought
were like sci-fi medicine.
So for example, I invested in a company
called Chipiron that's doing like a miniature MRI.
It's an MRI machine
that's 10 times smaller and cheaper
than existing ones.
So something you can put in the back
of like an ambulance and things like that.
When I invested,
nobody thought it was even possible to build it.
And now these guys just produced
their first MRI images.
Yeah, super cool.
And obviously, longevity has also seen
quite the boom over the last
couple of years. Any notable investments there? I mean, I've been investing in quite a few things.
My most recent investment is in a diagnostics company called GlycanAge, where they basically
analyze some very specific markers in your blood called glycans, which gives you a good
estimation of your biological age, but also of all kinds of different diseases that might
progress in the future. A great company, very strong science.
20 years of research, just something most people overlooked.
And so I always try to find those things, you know, those sort of non-obvious bets where the science
is very, very strong, but nobody really thought about making a company yet.
Super cool. I think we will kind of see how this segues into privacy in just a bit.
So basically all of these biotech companies, kind of the ones that kind of tell you your
biological age and kind of your risk markers and so on, I would be
absolutely thrilled to kind of know where I stand.
The reason I've never taken any of these tests is because I'm concerned about the privacy
of the data.
So going from that to kind of like a privacy focus space, it's quite a leap.
And then on some levels it's not.
So kind of tell us kind of what kind of drove you to kind of found something so intensely
mathematical and kind of like computation.
I mean, it's a tool set, right?
So basically it's kind of, it's not, yeah.
So basically what made you do it?
Actually, that dates back from when I was 14 years old.
So at the time, I was 14 in the 90s.
So it was the beginning of the internet.
You know, people were starting to build websites.
And since I was coding since I was a kid, for me, it was easy to start building websites.
And at the time with a friend, we had built a social network
that was quite popular in France.
So the reason why I'm saying this is because at some point at school,
I got bullied by an older kid, right?
I mean, you know, when you're 14, the guy is 16, he's twice my size.
What can I do about it, right?
And I got so fed up of this guy that I thought, well,
maybe he's a user of my social network.
So perhaps if I looked into his private messages,
I could find something compromising and blackmail him so he would stop.
So I looked into the database and I found
amazing things about him, which I challenged him with and said, hey, if you ever mess with me again,
everybody's going to know about your dirty little secret and your secret crush. He never
ever spoke to me again or approached me. So victory, I bullied the bully. But then I thought
something's wrong about what I just did. Just because I'm the one operating the service doesn't
mean I should be able to see everybody's data. And so from that point, in 1999,
I knew that privacy was going to be a very important topic for the future of the Internet and of, you know, just digitizing everything.
So as far as I can remember, I've always tried to think about privacy just as a necessity as something that you build by design in everything that you put out there.
That's very true.
I feel, makes me feel bad for the bully, which almost, you almost never do, right?
It's kind of like, but yeah, it's kind of, he probably didn't even think about the fact that you,
you had kind of clear text access to kind of messages you were sending.
Nobody thought about that in the 90s, right?
Like, it wasn't something people talked about.
And yeah, and so I think I don't feel bad about it because if anything, that particular episode
eventually made me focus on privacy, made me create Zama.
And if anything, more people will benefit from privacy,
because this guy bullied me and forced me to confront at the time the fact that I had access to so much data and that this was wrong.
So, you know, silver lining, he stopped bullying me and hopefully a lot of people will end up benefiting from that.
So you just said that you co-founded Zama with your CTO, Pascal.
Yeah.
Yeah.
How did you guys meet?
Pascal, Pascal Paillier, is super OG in homomorphic encryption.
He's one of the early inventors of homomorphic encryption.
He was the first one to invent homomorphic addition,
so the ability to add encrypted numbers in the 90s as well, actually.
So when I was doing my social network, he was inventing that.
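Paillier's 1999 scheme is the canonical example of homomorphic addition: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. A minimal Python sketch with toy primes (illustration of the idea only, not a secure implementation):

```python
import math
import random

# Toy Paillier cryptosystem: multiplying ciphertexts adds the plaintexts.
# Tiny primes for illustration only -- never use parameters this small.
p, q = 251, 257
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # mu = L(g^lam mod n^2)^-1 mod n

def encrypt(m):
    r = random.randrange(2, n)                # fresh randomizer, coprime to n
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

# Homomorphic addition: the server multiplies ciphertexts it cannot read.
c = encrypt(20) * encrypt(22) % n2
assert decrypt(c) == 42
```

Note the limitation the episode discusses: this scheme only supports addition on ciphertexts, not arbitrary computation.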
And him and I have been friends for a few years.
We kind of like hang out and we kept in touch.
And when he heard that I was selling my previous company,
he reached out and he said, hey, I just had a breakthrough in homomorphic encryption.
I think that was the right time to build a company.
Do you want to do it together?
And quite frankly, initially I thought, I don't know, man.
I'm kind of like tired.
I want to do the whole like sell my company,
take six months, travel the world kind of thing.
But then I thought, how often do you get a chance
to build a company at the right time
with the perfect co-founder based on that new scientific breakthrough?
And so I just couldn't resist.
So a week after I sold my company, we started Zama.
That was in 2019, or was it... before 2020? Officially 2020?
2020, sorry. Before the ZK boom. So kind of if you look at kind of like the trajectory
of ZK technology and kind of the mind space it's taken up in in the ecosystem
that was just the very very beginning and you guys weren't even primarily working on
ZK stuff but fully homomorphic encryption which is kind of like
two pay grades beyond
you know vanilla ZKPs
we'll talk about kind of the differences between all of these
technologies in just a bit
back then the only project that I'm aware of
who was aiming to use fully homomorphic encryption was Numerai.
I don't know whether that's still going but kind of
when you came out and said I will build something
on this basically fully homomorphic encryption
has been around for 50 years or so
but basically making it usable, this has kind of been the holy grail.
It's not really been done so far.
So kind of what, how did people react when you came out with this?
So fully homomorphic encryption was actually only figured out in 2010.
Before that, you could only do either additions or multiplications, but you couldn't do both.
2010 is the first time that someone invented a homomorphic technology where you could do any kind of computation.
The problem is it was extremely slow, very, very slow.
It was very hard to use for things that required complex computation.
And unless you had a PhD in cryptography, you couldn't really use it.
So, you know, the big contribution from Zama is that we actually created a homomorphic scheme that is very fast,
that can do any kind of computation.
So it can do any kind of thing you want to do, just as if the data was not
encrypted, and it's very easy to use. So as a developer, you don't have to learn anything about
cryptography. And so I think, you know, from a mathematical perspective, we've solved FHE. Now it's
purely about making it used by as many people as possible and improving performance
all the time. I think maybe this is the time to kind of explain all of these terms in depth, right?
So basically, in terms of privacy, people tend to think in different tiers.
So Tier 1 is ZK Proofs, which we have talked about on the shows many times.
So basically, it's this idea that you can prove your knowledge or something without revealing the thing itself.
And typically, generating the proof is computationally expensive but can be done off-chain,
and then anyone can verify it on chain.
Tier 2 that we don't actually talk that often about is multi-party computation.
And then in my head kind of tier three is kind of like the holy grail tier fully homomorphic encryption.
Can you explain the differences between ZKPs, MPCs and FHE?
That's a good question.
They're fundamentally very different technologies.
And I believe that all three of them have a role to play in building a privacy protocol.
Zero knowledge proofs are great when you want to prove something without revealing the data.
But it doesn't actually allow you to compute on the private data itself.
So whoever has to produce a proof has to actually have the data.
So for example, you know, if I want to create a ZK proof that I have enough tokens,
me, the prover, has to have access to the actual balance.
Otherwise, I cannot prove anything about it.
So if you wanted to actually compute on an encrypted balance,
you wouldn't do that with ZK.
That's really not what this is about.
ZK is a technology that creates proof of correctness.
It's not a technology about computing on private data.
If you want to compute on private data, you've got basically MPC,
multi-party computation, and FHE.
At least if you're talking about software-based solutions.
You have hardware-based solutions, but let's stick to software-based solutions.
The idea of multi-party computation is that instead of having one machine do the computation,
you basically split the data and the program to be executed on multiple machines,
each of them doing a piece of it and then putting the result back together.
So as long as a majority of those machines are honest,
nobody can retrieve the original private data.
They can only retrieve the final results.
MPC is great.
The only downside is that you're limited by networking time.
So at some point, it doesn't matter how fast the machines go because sending data back and forth is going to be the bottleneck.
FHE is basically running on a single server.
So you encrypt the data and you compute on the encrypted data itself without having to decrypt it.
And because this happens on one single server, you could always throw more computational power at it and make it faster, contrary to MPC.
So what we believe is the holy grail of privacy, and what we actually do at Zama, few people realize that, is we use FHE for computing on the private
encrypted data. We use ZKP to make sure the user is doing what they're supposed to do on their end.
And we use multi-party computation to secure the private key and decrypt the result of the homomorphic
computation whenever we need to. So MPC is great for managing the keys. ZK is great for proving
stuff about what users are doing, and FHE is great for doing the computation itself.
And that's really how people should think about combining them.
Okay.
Let me kind of see whether I got this right by kind of just recapping it.
So kind of if you look at fully homomorphic encryption, let me recap it by giving the
example of, say, my sequenced genome.
So I would kind of encrypt the genome that I sequenced, or kind of had someone
sequence, and send it to these biotech institutes that kind of can tell you what your risk factors
are and so on. And then they can do computation on my genome without knowing what exactly
they're looking at. They're just doing their computation on it. And then kind of they send me
the result back and I can decrypt it with my private key. Correct. Yes. That would be the simplest
way to think about FHE. So if you wanted to do that with ZK, it would be the other way.
around. So in the case of ZK, the user, me, would run the genomic analysis on my own computer,
on my unencrypted genome sequence, and I would then produce a proof that I have correctly
executed the program and send back the result alongside with the proof so that I don't need to
show them the actual genome inputs. So this would be typically if the research organization wants
only the result of the analysis, but they don't want to see your individual data. That's how
you could potentially do it. So you see, you're swapping things around. In the case of ZKP,
it's not the company that's providing a service doing the computation. It's you, the user,
doing the computation and just sending them the result and a proof that this result is for the correct
data. Okay, so basically it's kind of like, say, I want to take out new health insurance
and kind of they want proof that kind of I am generally healthy and I kind of, I want to be
charged like on the lowest here, so I don't need to tell them exactly what my ailments are,
but basically I can tell them that I'm generally a healthy person by kind of sending them
the ZK proof.
Exactly.
But you're the one doing the computation.
In the case of FHE, you would encrypt your data, send it to them, they would do the computation,
and they would then send you back the encrypted result.
That sounds like magic, because as someone who kind of who has processed large batches of
data before, typically the first thing that you do with data is you clean it up, right?
So basically that is in the right format.
And does FHE work even with data that's not cleaned up?
Say, for instance, I take a picture of a natural scene and kind of I want image processing about kind of what we see.
Say dog walking across the street and kind of traffic light and whatever kind of like automatic processing usually happens.
Can this happen on data that's not been tidied?
Absolutely. FHE is Turing complete. So anything you can do on unencrypted data, you can do on encrypted data with FHE. The question is, how efficient is it going to be? So in most cases, you probably want to do some of the pre-processing before you encrypt the data and send it to the FHE service, because if you can, why not? It's cheaper, right, at the end. But for image processing, we have an example. If you go on Hugging Face, Zama has a space where we actually have a demo of
an encrypted image filtering application.
So it's literally that.
You upload an image, it encrypts it,
send it to the server.
The server applies different filters on it,
and you get back the response.
So 100%.
Like FHE does not have any requirement
on what kind of data is being computed on.
You can do whatever you want.
Okay.
I'm still kind of struggling.
This seems like magic.
So basically kind of,
if I think about this,
say, for instance, I ask an LLM something, right?
Kind of like an embarrassing question that I don't want the makers of OpenAI to know that I have.
So basically, let's say I send an encrypted version of it.
But basically the way that kind of say LLMs work, and also kind of like these image processing
software, is that they kind of quote unquote understand something about the data, right?
basically kind of they have context.
So kind of how do the embeddings and the context work in terms of FHE?
When you think about AI or any other application, you have some inputs.
There is some data going into the model.
This data is then transformed.
You're applying all kinds of different algorithms on it.
So it could be looking up a value in a table for your embeddings,
or it could be applying like an activation function on the data, right?
All of these things, you could do in FHE in exactly the same way.
So the program itself doesn't change.
The only difference is that the inputs are encrypted.
And so if you look at the input itself, you're only going to see some gibberish.
Unless you have the key to put things back in order, you're not going to understand much.
but the particularity of the way that this was encrypted
is that it preserves some mathematical properties.
So for example, adding two encrypted numbers
would result in the same thing as adding two unencrypted numbers.
So a good analogy is like,
imagine you have a box, okay?
So you cannot see what's inside the box.
You can put something inside a box,
but you cannot really see what's inside a box.
But you know, that box has a few buttons
and a few knobs that you can turn.
You know, maybe one button,
squashes whatever is inside the box, maybe one button sprays red paint on it. So even though you don't
know what's inside the box, you can still press a button that will squash it. You can still press
the other button that would make it red. And when you take out whatever object was in the box,
it will be squashed and red. So the person who applied the transformation on the data didn't have
to actually see the data itself that was in the box. It just had to know that there was something
in the box and just press those buttons.
It's exactly the same idea here, right?
Basically, encrypting is just putting something into this box
and giving this box to you and you are the one pressing the buttons on it.
So you know exactly what you did.
You just don't know what you did it on.
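The box analogy can be sketched with a deliberately trivial "scheme", additive masking, where the server presses the buttons (adds values) on a box it cannot look inside. This is only an illustration of the homomorphic property, not real FHE, which uses lattice-based ciphertexts:

```python
import secrets

P = 2**61 - 1                      # prime modulus for the toy scheme

def encrypt(m, k):                 # one-time additive mask: the "closed box"
    return (m + k) % P

def add_cipher(c1, c2):            # the server "presses the add button" blindly
    return (c1 + c2) % P

def decrypt(c, k):
    return (c - k) % P

k1, k2 = secrets.randbelow(P), secrets.randbelow(P)
c = add_cipher(encrypt(20, k1), encrypt(22, k2))
assert decrypt(c, (k1 + k2) % P) == 42   # the sum of keys opens the combined box
```

The server saw only uniformly random-looking values, yet the addition it performed carried through to the plaintexts.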
I think on some level to me that makes sense,
but especially kind of when you have a large repository
that you kind of have trained your data on,
it seems intuitive to me that maybe you should have had to do the same thing on the data you used for training.
So that kind of you actually compare apples with apples rather than apples with really squished and green apples.
But it is the same thing, right?
It's just that it's translated into a different language, but it's fundamentally the same data.
So when you train your model, right, what you're doing is you're telling it,
You know, if that number is five, I want to multiply it by two.
If the number is six, I want to divide it by three.
Let's say, something like that.
Yeah.
That logic, the application itself still holds as long as the input that represents,
you know, the value you want to transform is able to support those operations you want
to do on it.
Okay.
So if you encrypted data, but the encrypted data can still be multiplied and divided and
things like that, then it doesn't matter, right? It's like you're basically just shifting the space
that you're operating in, but you're not changing the operations that you're applying on it.
Okay. And that was the big difficulty of FHE is supporting all of those different operators on the
encrypted data. Okay, but just to be perfectly clear, the only thing that kind of needs to be
encrypted is kind of the data that I sent. Basically, you're using kind of the plain text operations
that kind of I would use on unencrypted data as well.
Okay.
And that was exactly the challenge, by the way.
Like FHE for 50 years, the whole challenge was,
can we do any operation on the data,
not just additions or multiplications?
Okay.
Yeah.
So I hear you, and I think I've read this before,
still seems crazy to me.
But, I mean, this is, I mean, it's math, right?
Which is exactly the reason why we put so much effort
into building good developer tools
so that people don't have to figure it out.
Because at the end of the day, people don't care
about how it works, right?
They care about it working.
So the users want to know that the inputs
they're sending aren't readable.
The developers want to know the applications
they're building are going to behave as expected.
That's it.
And as long as you can guarantee that,
the internals and the mathematics,
which I agree sounds like black magic,
honestly,
most people are probably not going to care.
I mean, at this point, I'm the CEO of the company, and I'm pretty good at math and coding.
I can't even keep track of everything happening inside Zama.
There's like, there's so much complexity in terms of the underlying mathematics.
And it's okay, you know.
It's not our job to understand those things to use them.
And we've been told time and time again that commercial use of FHE is years in the future.
From what you're saying, it sounds like you disagree and it can be used for things now or things soon.
Where do you think kind of this disconnect comes in?
Well, I think the people saying that aren't the people working on it.
So it's very difficult to know what's possible unless you're yourself in the field.
At some point, it becomes evident to everyone.
But, you know, ZK today seems obvious to everyone.
But it was only obvious to a small group of people five years ago, right?
FHE is the same.
Today it's obvious to people working in FHE that this is working.
It hasn't yet transpired to everybody else.
So your short answer is it depends whom you're asking.
The longer answer is they're kind of right, because up until now, FHE was not yet practical.
There were three problems.
It was limited in terms of what you could do with it.
It was very difficult to use as a developer.
And it was very slow.
Zama made it very easy to use.
We've made it such that you can do anything you want with it.
You don't have to worry about what's coming in, what's coming out.
We've taken care of everything.
Performance is the last mile.
So right now, we can basically, let's take blockchain as an example.
You know, using homomorphic encryption in a blockchain.
Right now we can support between two and five transactions per second.
It's not bad, considering that most
L2s on average only actually have 5 TPS.
So FHE today already matches the average load of most EVM chains.
We believe that we can 10x that number just with better cryptography the next 18 months.
So we're going to get to like 15, 20 TPS in the next, you know, in the next couple of years.
So that basically is the same throughput as Ethereum.
So for blockchain, FHE works, period.
It's done. If you want to have a thousand transactions per second, then you need some additional
hardware accelerators. You know, you need like a kind of GPU for homomorphic encryption if you want
to go beyond this 10 or 20 transactions per second. So, you know, if you want to use it for something
that you would do on Ethereum, done. It works. If you want to use it for something you would do
on Solana, then you need this hardware accelerator. Okay. Maybe it's time to
kind of go into the products and services that you currently offer. So what's on offer?
So Zama is a full stack company. We basically have a solution all the way down from, you know,
FPGA and GPU acceleration, all the way up to solutions for blockchain and machine learning.
At the core of everything we do, there is a unique technology we built called TFHE,
which we basically have as an FHE developer library. And on top of that,
we've built one solution for machine learning where you can take some Python code,
so an existing model that you wrote in Python,
and we automatically convert it into a homomorphic equivalent that can work on encrypted data.
So as a data scientist, you don't have to learn anything about cryptography.
You just write Python code, and we take care of everything else.
On the blockchain side, things were a little bit more complicated.
So we basically created this product called the fhEVM,
which is a way to have confidential smart contracts
in EVM chains using homomorphic encryption.
And so that particular protocol works great,
but it's a little bit more than just FHE
because you have multiple users interacting with each other,
you need composability between contracts.
And so this is where we're using MPC, for example, for key management.
So our fhEVM protocol uses FHE for the on-chain secret computation,
but it also uses MPC for managing the secret key
of the network.
And this fhEVM, is it live today?
Can I run it on Ethereum?
Is it like its own chain?
What are the costs?
There is a set of pre-compiles that you need to integrate into your EVM chain
to support homomorphic encryption.
So it doesn't work on Ethereum.
It would work on any chain, any EVM chain that just basically implements those pre-compiles,
pretty much.
So it's a very easy integration, but it does require effectively to at least soft fork
the EVM itself. I don't think Ethereum will ever use FHE just because of the computational
requirements. It's just not in the spirit of Ethereum of running that like on cheap hardware,
right? I think FHE still needs some pretty powerful hardware. So there are a number of companies
integrating the fhEVM. The first one that's public who announced it is called Fhenix. So Fhenix
is a new L2 based on homomorphic encryption, using our technology, that's built by
the team behind Secret Network,
that's already like a privacy network.
So they're launching this new protocol called Fhenix.
Great guys, incredible team,
great investors too.
So they raised like a $7 million seed round
led by Multicoin.
So I think that's going to be probably
one of the very successful projects.
But let's just say that
if we do our job right,
homomorphic encryption will be
a commodity technology
in blockchain. Because when you think
about it, right now on a
blockchain, everything is public.
If you want to build any kind of
confidential application, you're
going to need something like homomorphic
encryption. Yeah, absolutely.
So kind of let's talk, I mean,
the use case to me is
absolutely clear. I'm still not
clear on kind of the technical
implementation. Say I want to launch
an FHE-enabled chain
as an L2 on Ethereum.
First of all, does everything need to be fully homomorphically encrypted?
Or can I do like plain text things and FHE things on the same chain?
That's a great question.
You can do both at the same time.
In fact, we don't actually change the EVM itself.
You can take an existing EVM.
So let's say you take a Go Ethereum.
You can take that.
And you basically add the Zama pre-compile
libraries, which are basically linking your EVM to our FHE library so that you can start
doing FHE stuff in Solidity. But the way you do that is that we expose some new data types
in Solidity, so basically an encrypted integer and an encrypted Boolean value. In your contract, you can
specify what's supposed to be encrypted, what's not encrypted. So you have full composability,
not just between encrypted FHE contracts, but also between encrypted and non-encrypted states.
And that's a very important part because you can take an existing chain that's already running,
and without changing anything, without breaking anything, you can add FHE capabilities on top of it.
Let's just stay on the same chain, right?
So basically, if everything kind of happens on one chain, how do you deal with composability
of like some parts are plain text and some parts are encrypted?
FHE operations can work between two encrypted values, between two ciphertexts,
but they can also work between a ciphertext and a plaintext value.
So the operators in FHE basically exist for both flavors.
So that part is, I would say, a natural feature of FHE technologies.
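Both operator "flavors" can be illustrated with the same toy additive-masking idea used for the box analogy: a ciphertext-ciphertext addition and a ciphertext-plaintext addition, the latter needing no key material on the server side. Again just a sketch of the property, not real FHE:

```python
import secrets

P = 2**61 - 1                  # prime modulus for the toy scheme

def encrypt(m, k):             # one-time additive mask
    return (m + k) % P

def add_cc(c1, c2):            # ciphertext + ciphertext
    return (c1 + c2) % P

def add_cp(c, p_val):          # ciphertext + plaintext: no key needed at all
    return (c + p_val) % P

def decrypt(c, k):
    return (c - k) % P

k1, k2 = secrets.randbelow(P), secrets.randbelow(P)
# (Enc(10) + Enc(12)) + 20 in the clear, evaluated entirely on masked values
c = add_cp(add_cc(encrypt(10, k1), encrypt(12, k2)), 20)
assert decrypt(c, (k1 + k2) % P) == 42
```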
What's really difficult is actually composability between encrypted states.
Because if you think about it, if you have multiple users or multiple contracts interacting with each other,
it does imply that they all encrypted their data under the same public key.
Because if the data is encrypted under different keys, it cannot be mixed, right?
It just won't work.
So it has to be under one global network key.
And so if there is one global network key that everybody's encrypting under, the question
is, who has the decryption key and how do you selectively determine who's allowed to see
which encrypted value, right?
How do you decrypt what for whom?
And this is where MPC comes in.
The smart contract itself can define access control logic.
It can say this user who owns this balance can decrypt its own balance.
Makes sense, right?
And the way this works is that the validators will split the private key of the network
in different pieces, and you need a majority threshold approval for something to be decrypted.
So it's called threshold decryption.
And by having this threshold decryption, combined with, you know,
your traditional blockchain, you can actually have, you know, this decentralized system where nobody
has a private key and where the smart contract dictates what can be decrypted or not.
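The threshold decryption Rand describes can be sketched with Shamir secret sharing, one standard way to split a key so that any majority of shares can reconstruct it while fewer reveal nothing. This is an illustrative toy, not Zama's actual KMS protocol:

```python
import random

PRIME = 2**61 - 1  # field modulus for share arithmetic (demo size)

def split_key(secret, n_shares, threshold):
    """Split `secret` into n_shares points on a random degree-(threshold-1) polynomial."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    """Lagrange-interpolate the polynomial at x=0 from >= threshold shares."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# Validators each hold one share of the network key; no single party has it.
network_key = 0xC0FFEE
shares = split_key(network_key, n_shares=5, threshold=3)
assert reconstruct(shares[:3]) == network_key  # any 3 of 5 suffice
assert reconstruct(shares[2:]) == network_key
```

In a real threshold protocol the key is never reassembled in one place; the parties jointly compute the decryption. This sketch only shows the share-splitting idea.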
I think I understand that.
So this means that kind of theoretically things can't be stolen from you, but kind of if enough
of the MPC keyholders collude, the network state could be frozen, right?
I mean, not necessarily.
If the MPC nodes collude, it doesn't change anything about the blockchain itself.
It only means that they would be able to decrypt anything they want.
So the worst case is that you would lose confidentiality, but you would never be able to double
spend or anything like that because of that.
But I would argue that, you know, if you don't have an honest majority, your protocol
is broken anyway, so you probably shouldn't use it.
But having said that, you know, securing an MPC protocol is a very, very tricky thing, very, very tricky.
So most likely we believe that this is going to go through a combination of this threshold MPC protocol running inside some kind of secure enclave.
So for example, if each of the MPC node participants is running the MPC software inside an HSM,
then you would need to break multiple HSMs at the same time,
faster than the keys are rotating.
And so, you know, arguably this means that, you know,
I don't even think a government could do it because if those nodes are in different
countries, nobody would have full access to all of them at the same time.
So you would need some kind of global international, you know, operation where people synchronize
to break the HSMs in a few minutes.
If they can do that, they can break into any bank.
So the goal here is to make this protocol bank grade security, right?
Okay. How fast are the keys rotated?
That you can determine pretty much any way you want.
I would say at least as often as validators rotate.
Okay. And in terms of kind of like MPC numbers,
how many parties do you recommend?
Because you need to cover different jurisdictions,
you need to have geographical independence and operating
independence and so on,
just to make sure that you don't run into either a
jurisdictional catastrophe or kind of like a dark DAO scenario.
These are very important questions.
These are very hard questions.
I think today the hardest question for FHE isn't FHE anymore.
It's the key management of the threshold protocol
for those, you know,
composable multi-user FHE,
you know, use cases like blockchain.
That's a very hard problem, right?
Because, you know, it's not about cryptography anymore.
You know, this is about...
It's about opsec.
Security.
Yeah.
Exactly.
Opsec, pretty much.
So we believe that a combination...
So first of all, we believe that the threshold, you know,
MPC protocol is probably not going to be run
by the validators of the network itself
because it doesn't have to be.
Secondly,
We believe that this KMS will probably have much, much stricter requirements in terms of what
hardware it should be running on, who should be allowed to run it.
It might even be permissioned if people want to have it really extra secure.
So it's possible that you might have like one permissioned, you know, threshold network that
everybody's using.
And the people running that are going to be Apple, Huawei, Zama.
Like, you know, you basically have companies from different countries that have no
incentive to collaborate whatsoever running those things. So it could be, you know, five participants.
It could be 10. We even have a protocol for 50 participants. So, you know, the number doesn't really
matter. It's really more about, yeah, just who's running it effectively.
Okay. But there's no way of establishing a scheme such that you don't need
the same encryption key
for things to be able to operate, right?
There is something called multi-key homomorphic encryption.
The problem is that it requires every participant to be online for decryption,
and the size of the keys basically explode quadratically with the number of users.
So if it's like three people, sure, why not, right?
If it's like 100 million people on Ethereum, no way.
And plus not all of them will be there.
So, no, no, the correct way, 100%, the only
way that this will work is homomorphic encryption using a public key that everybody's sharing,
with some sort of threshold protocol for securing the private key. There is maybe a longer-term
idea where you could basically have what's called functional encryption combined with some
kind of ZK proof. So if you can provide a proof that the FHE computation was done correctly,
there is a technology called functional encryption
that takes an encrypted input
and produces an encrypted output
only if certain conditions are satisfied.
The problem is that this technology
is so, so, so slow and limited
that it's basically not possible right now to do it.
But, you know, maybe in 10 years,
you're going to have FHE running on the encrypted data,
ZK proving that the computation was done correctly,
and the proof of this ZK protocol will be used in a functional encryption scheme for decrypting
the actual ciphertext that was the result of the computation.
And here, you would have a completely trustless decentralized, no threshold, no MPC protocol.
And that would be a holy grail of, you know, FHE.
Super cool.
So let's talk about kind of the ways that it can be deployed today.
say I'm deploying an L2 on Ethereum, and I have all the necessary pre-compiles to enable the fhEVM.
Do I still use like regular vanilla ZK roll-up technology to kind of prove to Ethereum that my state is correct?
Unfortunately, right now, proving an FHE computation is much more costly than just redoing it.
So it doesn't really make sense to use a ZK roll-up for scalability in FHE
right now. Doing an optimistic FHE roll-up makes more sense. So I think that's probably what we're
going to see happening. Okay. And how do you then deal with fraud proofs that you need for the
optimistic roll-up? I mean, can everything that I need to show for a fraud proof be done on
layer one without the pre-compiles? That's a very good question. Not with the Optimism stack,
the OP stack, but with Arbitrum, you know, you can push a WASM executable, right? And so
theoretically, you could compile your contract, including our libraries into a WASM, you know,
executable and run that on the L1. That's possible. How practical it would be, I'm not sure.
There are people working on FHE roll-ups, but it's not yet solved, but it's doable, for sure,
doable, I think. So I think Arbitrum-style fraud proofs would work better than OP-style ones.
And that's what Fhenix is using, or how are they setting this up?
So Fhenix just recently published a white paper showing how you would actually do FHE roll-ups.
So they have a prototype working, and they're well on track to release that at some point in
2024 in production.
And they actually use
Arbitrum-style
WASM fraud proofs for it.
If you kind of look at
L2s and L1s,
obviously there's a huge
spectrum of possibilities for how to configure this.
So does this work with PoS, PoW,
PoA?
Can I just set up
an arbitrary EVM
chain as long as it kind of has the right
pre-compiles?
It should work with any EVM.
There are some, I would say, compromises if you're using a consensus protocol that doesn't have instant finality.
And the reason is that if you don't have instant finality and you're requesting to decrypt something of the state,
you might be decrypting something that gets rolled back at some point.
So you might be leaking information that isn't actually final state.
So that's why we think that like, you know,
instant finality protocols like Tendermint,
you know, CometBFT and all of these IBFT sorts of consensus
are better suited for this.
Okay, you don't want proof of work or anything.
Yeah.
Proof of work is probably not going to do it.
But proof of stake, proof of authority should be fine.
Okay.
Yeah, super interesting.
Do you expect, well,
do you expect there to be specific chains
that are FHE-enabled?
Or do you think we will see the future of DAP chains
where a DAP decides that kind of this is really what they need
and they kind of release their own chain?
Because then obviously kind of interoperability becomes a problem again
that, I mean, with optimistic roll-ups,
this is a much bigger problem than with ZK roll-ups.
I mean, bridging between different L2s is not satisfactorily
solved at the moment.
But I mean, I can see us getting there
with different ZK roll-ups
in due time
where it's kind of on optimism
or optimistic roll-ups generally
it's much harder.
Yeah.
I think most likely,
given the constraints in terms of
computational power needed for FHE,
I don't think that an L1
would be running FHE
natively right now.
I think FHE will be at the L2 level, or as a side chain, or as an app chain.
So I think you're probably going to have Ethereum, Solana, Polygon, all of these guys
as plaintext, unencrypted L1s with FHE L2s and L3 applications on them.
And whether you run that as app chains, as PoS sidechains, or as roll-ups,
doesn't really make much of a difference.
In terms of interoperability, it's actually not that much of
an issue because even though every network has its own key, you can re-encrypt from one key to another.
So when you're bridging, all you have to do is re-encrypt the value to bridge using the
public key of the network you're bridging into. And that's perfectly fine. Like, you can take
an existing bridging contract and just add that particular feature, and then you're done. So there
shouldn't be any more complexity for bridging on FHE chains than you would have on regular ones.
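A minimal sketch of that re-encryption step, under a big simplifying assumption: here one party holds the source network's secret key, whereas in the design Rand describes the threshold committee would perform this jointly so no single party ever sees the key. Toy Paillier with demo-sized primes, illustrative only:

```python
import math
import random

def keygen(p, q):
    """Toy Paillier keypair (tiny demo primes, NOT secure)."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)
    return (n, n + 1), (n, lam, pow(lam, -1, n))

def encrypt(pk, m):
    n, g = pk
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    n, lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

def reencrypt(sk_src, pk_dst, c):
    """Bridge step: move a ciphertext from one network key to another."""
    return encrypt(pk_dst, decrypt(sk_src, c))

pk_a, sk_a = keygen(293, 433)  # source chain's network key
pk_b, sk_b = keygen(241, 373)  # destination chain's network key

bridged = reencrypt(sk_a, pk_b, encrypt(pk_a, 1000))
assert decrypt(sk_b, bridged) == 1000  # value survives the bridge, still encrypted
```

The point of the sketch is just that bridging adds exactly one extra operation to an otherwise standard bridge contract.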
But that's only if you don't have a contest mode, right?
And on optimistic chains, you inherently have to have one.
So basically, you could only do this like after a week or so.
I mean, I guess people still use optimistic roll-ups, right?
And then they basically swap on those, you know, fee markets to,
they give away 10% of the value and then, you know, they don't have to wait a week.
So you could imagine that people might be okay with it.
It might not be the most secure thing to do, right?
But I think, you know, at the end of the day, the user can choose if they're fine and okay with the tradeoff.
I like the idea that a user is taking a risk, not the protocol.
I agree, but these bridge liquidity providers,
they work by verifying that the claim is correct and then paying out on this without waiting for the contest period.
You mean in terms of like a bridge.
Yes.
Yes, yes, yes, yes.
No, you're correct.
I don't think the bridge would be necessarily different here in this case.
I think it would be the same logic again.
I think, you know, arguably that week period in your optimistic roll-up could roll back spending, which is way worse than, you know,
losing confidentiality to some extent.
I don't think it would be any different, to be fair.
Or at least I don't see any particular problem right now.
Okay.
Let's talk about things that are actually currently being built.
So we already talked about chains who may use this imminently.
What kind of depths run on them that kind of make it necessary to have this level of privacy?
You know, it's a completely new design space.
So we don't know yet what people are going to build.
But I can tell you what I see people asking us if it's buildable.
So there is a very big use case around DeFi, obviously, the ability to have confidential
DeFi, whether it's preventing MEV by having your transaction encrypted up until the point
that it's executed.
So it's encrypted in the mempool,
it's encrypted during execution in the contract,
and it's only decrypted
and made public once a block is finalized,
for example. That would be
one example. Confidential
ERC-20 tokens, keeping the balance
and the amounts transferred
encrypted. So you still have
traceability. You still know that you and I made a
transfer. You just don't know for how much and how much
we both own. And related to
that, you have governance. In a
DAO right now, everybody
sees who's voting on
what with how many tokens.
That leads to a lot of blackmailing, bribery, social pressure.
If your vote was encrypted and people didn't know what you voted for and how many tokens you voted with,
you would have a much, much, much better, you know, system for governance that wouldn't be,
you know, subject to peer pressure and things like that.
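The encrypted-governance idea can be sketched with any additively homomorphic scheme: each ballot is an encrypted token weight, and only the aggregate tally is ever decrypted. A toy Paillier version follows, with demo primes and hypothetical weights; Zama's fhEVM would use TFHE, not Paillier:

```python
import math
import random

# Toy additively homomorphic tally (Paillier, tiny demo primes, NOT secure).
p, q = 293, 433
n, g = p * q, p * q + 1
n2 = n * n
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)
mu = pow(lam, -1, n)

def enc(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Each voter submits Enc(token_weight) for "yes" and Enc(0) for "no".
# Individual ballots stay opaque; only the aggregate is ever decrypted.
ballots = [enc(w) for w in (500, 0, 250, 0, 1250)]  # hypothetical weights

tally = 1
for b in ballots:
    tally = (tally * b) % n2  # homomorphic addition of encrypted weights

assert dec(tally) == 2000  # total "yes" weight; no single vote revealed
```

On chain, decrypting the final tally would itself go through the threshold protocol described earlier, so no individual can peek at a single ballot.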
One thing I'm particularly excited about is compliant applications.
If you want to be compliant, let's imagine something very simple:
you want to transfer tokens to someone else.
Maybe that person is in a different country.
So there might be regulations around that.
Maybe the government in your country should be allowed to see the detail of the transfer, right?
You have a lot of like those things.
With homomorphic encryption, with the fhEVM, you could have your identity encrypted in a smart contract.
So let's say you go through some KYC.
The KYC provider, you know, does your facial scan, takes your passport.
They encrypt your age, they encrypt your name, your address, your citizenship.
They put that in a smart contract.
And so whenever you want to use a Defi protocol, let's say you have to prove that you're
not American, you could do that on chain.
You wouldn't need any kind of off-chain attestation.
There wouldn't be any kind of off-chain thing.
You do the KYC once, they put it on the blockchain, and then from that point you can use
it to prove things about yourself.
What that means is that you can have composability between identities
and applications.
And that I think is huge, right?
And I'm pretty certain that that's going to create a kind of white market and dark
market on the blockchain, where you're going to have a layer of compliant
applications where everybody's KYC'd but confidential.
So people don't know it's you, right?
That's going to be most of the volume.
And then there's going to be like a dark net on blockchains, right, where people do things
without KYC.
It's going to put a very clear price on money laundering.
Well, yeah, and I mean, you know, look, at the end of the day, what people want is confidentiality.
They want privacy.
They don't want noncompliance.
Oh, yeah.
It's just that up until now, you know, compliance meant disclosing everything to your government and to everybody else.
Here we're talking about a way to have confidential compliance.
And that, I think, is really the trick to
make this work. Absolutely. So if you look at the wider context of privacy over the last,
say, 20 or 30 years, the expectation that the things we do are inherently private
has kind of eroded away. Right. So basically it's just because
we generate massive amounts of data that are analyzable. I mean, I think,
had the situation been different earlier,
I think we may have seen the same thing,
but basically there was no data collection
on almost all of the things that people said and did and so on.
And this is different now.
And there's already been this cultural shift
to kind of expect that kind of data that's generated
can be
used by law enforcement and other agencies and can be monetized.
So people are no longer willing to pay for services that previously would have had to charge
something just because their business model is that they use your data to kind of generate
income, right?
How do you think privacy solutions kind of fit into the space where kind of a lot of the
harm has already been done in terms of kind of expectations?
I think there is a distinction between accessing the actual data and doing something with the data.
For example, let's imagine, you know, you want a government to be able to prevent transfers
from one country to another.
Right now, they will need to see all of the data from everybody making a transfer, even those not
making a transfer to a blacklisted country. But imagine for a second that the data is encrypted
using homomorphic encryption. The government could still apply a filter on the encrypted
stream of financial transactions. You could say, I want you to homomorphically check where those
transactions are going. And if they're going to a blacklisted country, just make them zero.
Right? So basically you send zero instead of sending whatever amounts you're supposed to send.
Effectively, you're blocking the transfer by doing that, right? That would work. The government
would be able to prevent people from making transfers to, you know, like Russia or whatever
without seeing what the transfers are. So you could still argue, well, but you know, the government
in that case could apply any arbitrary filter. That's true. And that's where the transparency of a
blockchain comes in. If that
filter is a smart contract on a blockchain, everybody can see which filters are being applied.
Right. So you can have transparency of the regulation and the filters the governments are
applying on financial transactions and still have confidential financial transactions.
That I think is super powerful, right? And that I think is something that's uniquely enabled
by FHE because FHE doesn't remove traceability. It doesn't hide the application. It hides
the user data.
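A simplified sketch of that zeroing filter. In the scheme Rand describes, the destination check itself would run homomorphically on encrypted data; here, as a deliberate simplification, the destination is plaintext and the filter reduces to multiplying the encrypted amount by a plaintext 0-or-1 scalar, which additively homomorphic schemes like Paillier support (Enc(m)^k = Enc(m*k)). The country codes and primes are hypothetical demo values:

```python
import math
import random

# Toy Paillier (tiny demo primes, NOT secure).
p, q = 293, 433
n, g = p * q, p * q + 1
n2 = n * n
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)
mu = pow(lam, -1, n)

def enc(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

BLACKLIST = {"XX"}  # hypothetical sanctioned country codes

def filter_transfer(enc_amount, destination):
    """Zero out the encrypted amount when the destination is blacklisted.

    Ciphertext-times-plaintext-scalar: Enc(m)^k = Enc(m*k), so k=0 forces
    the transferred amount to zero without the filter ever seeing m."""
    allowed = 0 if destination in BLACKLIST else 1
    return pow(enc_amount, allowed, n2)

passed = filter_transfer(enc(750), "DE")
blocked = filter_transfer(enc(750), "XX")
assert dec(passed) == 750
assert dec(blocked) == 0  # transfer effectively blocked, amount never revealed
```

If the filter itself lives in an on-chain smart contract, anyone can audit which rule is applied, which is the transparency point made above.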
I think that's fair to a certain extent.
I mean, these rules can still be arbitrary,
and then I guess it's a policy fight.
But let's look at kind of business models
of kind of like big data companies, right?
So basically, what percentage of the population
do you think would be willing to kind of pay
for encrypted search, encrypted social network,
and so on?
Probably no one or very small percentage.
And I want to be clear, I don't think people care about privacy. I don't think they will care
about privacy. But you see, that's exactly the goal. The goal is that nobody cares, not because
it's not important, but because it basically becomes something that's guaranteed by design.
Privacy is something that people shouldn't think about because it shouldn't be a problem.
And so our goal at Zama is to make that happen, right? We're not trying to change people's
opinion on privacy. We're trying to change, if anything, developers' opinion on the importance
of making their applications private by design. But even kind of in cases where there are good
alternatives that kind of offer ostensibly the same service as kind of the data mining company,
say, for instance, you could use Proton Mail instead of Gmail or you could use DuckDuckGo
instead of Google. Despite the fact that the marginal cost of switching to these
is basically zero, people still don't, right?
So basically it's not even harder to use or more expensive to use.
It just seems a lot of people actually value their right to privacy at literally zero.
That's exactly why the person who should care is Google.
Google should make Gmail private by design.
The user shouldn't have to think about switching.
It's Google who should think about enabling that.
And it's not the first time we've seen this.
WhatsApp turned on
end-to-end encryption
for a billion people overnight.
So it's possible
for a large company
with a lot of users
to go from a zero privacy model
to a privacy by design model.
But the entire business model
of Google is kind of like data mining.
They can still do that
with the FHE-encrypted data.
They just wouldn't know
what they're mining and what they're serving.
You could literally take
an encrypted user profile and run an encrypted advertising matching algorithm on it;
the user would still see an ad. It's just Google wouldn't know who the user is, what the
profile is, or what ads were served. So it's possible, you see. That's the thing that's a little
bit subtle: it doesn't prevent any usage of your data. It just prevents the visibility of the data.
Okay, yeah, fair. I think I get the distinction. So how does Zama monetize this technology? So how are you guys funded?
So we've been very lucky. We've raised a lot of funding, a lot. So we have runway for multiple years. So we don't have to think too much about any short-term issue. Having said that, you know, we are a business. So clearly we're not a non-profit. We're not working for free. And even though
everything we do is open source. We do offer commercial licenses. So typically for a blockchain,
we would take a percentage of the token supply plus a percentage of the block fees generated
by the network. If there is a token, if there isn't any token, then it would basically be
some kind of fiat-based licensing model. We're not reinventing the wheel; we're doing something
super vanilla. We do have a few ideas of like hosted services long term, but for now, effectively,
you can use our technology as long as you get a license to use it commercially. So the philosophy is very
simple. It's completely free. It's completely open. But if you're going to make money with our
technology, we should make money too. That's it. Okay, that's fair. So where can people
stay in touch with you guys, kind of follow the news, join the community, learn, we know,
what can be built on top of Zama with Zama protocols? Our Twitter handle for Zama is
at Zama underscore FHE.
We also have a very active community
called fhe.org,
where people can learn about FHE.
There is a Discord server as well
that is very active
with people excited about FHE.
I'm also very easy to find
and reach out online.
My Twitter is @randhindi.
I try to answer as many DMs as possible,
although to be fair,
sometimes it's a little bit overwhelming.
But in general, you know,
if you're interested in building something
with FHE,
or if your company is doing some really cool science,
get in touch with me or with my team, and we'd love to help.
Fantastic. It's been a pleasure having you on Rand.
Thank you for joining us on this week's episode.
We release new episodes every week.
You can find and subscribe to the show on iTunes, Spotify, YouTube, SoundCloud,
or wherever you listen to podcasts.
And if you have a Google Home or Alexa device,
you can tell it to listen to the latest episode of the Epicenter podcast.
Go to epicenter.tv slash subscribe for a full list of
places where you can watch and listen. And while you're there, be sure to sign up for the
newsletter, so you get new episodes in your inbox as they're released. If you want to interact
with us, guests or other podcast listeners, you can follow us on Twitter. And please leave us a
review on iTunes. It helps people find the show, and we're always happy to read them. So thanks so
much, and we look forward to being back next week.
