Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Arcium: Parallelized Confidential Computing Network - Yannik Schrade
Episode Date: September 6, 2024

In this day and age, privacy and confidentiality are more important than ever. Advancements in the cryptographic research of zero-knowledge proofs (ZKPs), fully homomorphic encryption (FHE) and multi-party computation (MPC) paved the way for computational integrity and confidential computing. While FHE allows for computation to be performed on encrypted data without the need for prior decryption, it is MPC that enables compliance with regulations (e.g. AML). Arcium aims to build a global supercomputer for parallelised confidential computing, powered by custom MXEs (multi-party computation execution environments).

Topics covered in this episode:
- Yannik's background
- Confidentiality & decentralised compliance
- Confidential computing
- TEEs (trusted execution environments) & side-channel attacks
- ZKP vs. MPC vs. FHE
- Arcium's global supercomputer architecture
- How Arcium differentiates itself from other privacy protocols
- Use cases
- Censorship risks
- Ecosystem development

Episode links:
- Yannik Schrade on Twitter
- Arcium on Twitter

Sponsors:
- Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay, the world's first Decentralized Payment Network. Get started today at gnosis.io
- Chorus One: Chorus One is one of the largest node operators worldwide, supporting more than 100,000 delegators across 45 networks. The recently launched OPUS allows staking up to 8,000 ETH in a single transaction. Enjoy the highest yields and institutional-grade security at chorus.one

This episode is hosted by Sebastien Couture & Felix Lutsch.
Transcript
What we were designing was a system where users had this privacy utilizing zero knowledge proofs,
and every private transaction that one performed generated an encrypted transaction that was stored on chain.
And then we had a dedicated network of nodes that using MPC was able to screen those transactions, essentially.
But importantly, there was no trusted entity that could screen or block a transaction.
It was the network as a whole that ran secure multi-party computations over those encrypted transactions.
Arbitrary program execution in a confidential, encrypted way, but also having the program itself be fully encrypted, and having encrypted random access memory.
It's this sort of replacement for you purchasing some trusted execution environment.
Now you can use this global supercomputer to execute anything.
If you're looking to stake your crypto with confidence, look no further than Chorus One.
More than 150,000 delegators, including institutions like BitGo, Pantera Capital and Ledger, trust Chorus One with their assets.
They support over 50 blockchains and are leaders in governance on networks like Cosmos, ensuring your stake is responsibly managed.
Thanks to their advanced MEV research, you can also enjoy the highest staking rewards.
You can stake directly from your preferred wallet, set up a white-label node, restake your assets on EigenLayer or Symbiotic, or use the SDK for multi-chain staking in your app.
Learn more at chorus.one and start staking today.
This episode is proudly brought to you by Gnosis, a collective dedicated to advancing a decentralized future.
Gnosis leads innovation with Circles, Gnosis Pay and Metri, reshaping open banking and money.
With Hashi and Gnosis VPN, they're building a more resilient, privacy-focused internet.
If you're looking for an L1 to launch your project, Gnosis Chain offers the same development environment as Ethereum with lower transaction fees.
It's supported by over 200,000 validators, making Gnosis Chain a reliable and credibly neutral foundation for your applications.
GnosisDAO drives Gnosis governance, where every voice matters.
Join the Gnosis community in the GnosisDAO forum today.
Deploy on the EVM-compatible Gnosis Chain, or secure the network with just one GNO and affordable hardware.
Start your decentralization journey today at gnosis.io.
Welcome to Epicenter, the show which talks about the technologies, projects, and people driving decentralization and the global blockchain revolution.
I'm Sebastien Couture, and I'm here with my co-host, Felix Lutsch.
Today we're speaking with Yannik Schrade. He's the CEO and co-founder of Arcium, a private compute platform that allows for all sorts of interesting use cases, privacy in crypto and beyond.
So we'll be chatting with him today about Arcium, the architecture, their use of MPC, and much more.
Yannik, thanks for joining us today.
Thanks for having me, Sebastien and Felix.
I'm very excited.
Yeah, so tell us a little bit about your background, how you got involved in the encryption and privacy space, and how you ended up working on Arcium.
Yeah, sure.
So I think the reason why I'm in the space of building Arcium, building confidential computing,
decentralized confidential computing and privacy technology, at the end of the day,
it might boil down to me as a small child reading 1984.
I think that might be the honest answer.
I think that really shaped my perception of the world, and my views of how the world should work and how privacy and freedom matter.
And reading that as a child, I think really, really influenced me.
So at a similar age, I started programming, teaching myself programming, and then studied law for a bit.
And then, through founding my first startup, I pivoted to mathematics and computer science, in the process moving away from studying just law.
And then, through the study of mathematics and computer science, I met my co-founders, all of us basically caring about elliptic curves at that point, because that's what we learned about in university, and then getting into zero-knowledge proofs.
And through the process of that, we ended up founding Arcium, which back then was called Elusive.
And with Elusive, we focused on building transactional on-chain privacy, with the added twist, I guess, of adding trustless, decentralized compliance, so that there's both privacy and still safety on the end of the users. So unwanted illicit behavior could be excluded from this on-chain privacy architecture.
And for that, we leveraged confidential computing with secure multi-party computations.
And through the process of designing the system and building this technology, what we realized
is the huge potential and the revolutionary potential of being able to run arbitrary computations
over encrypted data without having to decrypt the data first. So data in use can remain fully
encrypted and anything can be processed on top of that data.
And so that was really a pivotal moment, this realization for us, which then allowed us to
evolve further into Arcium and fully focus just on providing this, what we like to call a
distributed global supercomputer that allows for confidential computing.
Awesome.
Thanks for the quick overview.
We're going to dive a lot into Arcium over the course of this episode.
I think for me it's also interesting, because that's how I got to know you: working on Elusive, and also working in the Solana ecosystem.
Can you maybe break down how you started in Solana, and why you came to the conclusion to add this compliance angle? I think that was kind of one of the main differentiators in what you were building on Solana.
Yeah, sure. So I think how we all
got into Solana is the story of many teams that are building on Solana.
It was really the developer ecosystem.
And so the final co-founder of our team, actually I met at a Solana hacker house.
So in 2022, there were a lot of hacker houses.
And this really allowed developers wanting to build new interesting projects to meet up,
to talk with experienced founders
and then, yeah,
test the systems that they were building.
So that's what we also did,
how we came about building on Solana.
And yeah, back in the day with Elusive,
what we were building was essentially
a Zcash-like on-chain privacy system,
which utilized zero knowledge proofs
as core primitive.
And the issue that
basically all of those systems face at some point really is compliance concerns, right?
Because we've seen that with Tornado Cash, essentially, right?
So with Tornado Cash, there were enforcement actions by different governmental agencies
because, yeah, they claimed that Tornado Cash didn't prevent malicious behavior on the platform.
And if one utilizes mathematics to provide perfect privacy guarantees, yeah, that's the issue that they run into.
And so what we were designing was a system where users had this privacy utilizing zero knowledge proofs.
And every private transaction that one performed generated an encrypted transaction that was stored on chain.
And then we had a dedicated network of nodes that, using MPC, was able to screen those transactions, essentially.
But importantly, there was no trusted entity that could screen or block a transaction.
It was the network as a whole that ran secure multi-party computations over those encrypted
transactions.
And essentially, within this virtual black box, the nodes were able to look into the transactions and assess what happened.
And then they could find consensus over, yeah, malicious parties that tried to use that system, and then, after the fact, reveal the corresponding information.
And so that was extremely powerful, because, yeah, prior systems really attempted to add compliance by screening transactions upfront.
And with this system, screening after the fact, ex post, was possible, which I think would be the ideal solution, and that's something that is possible to build with Arcium.
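The "network as a whole" idea described above can be sketched with additive secret sharing. This is a toy illustration, not Elusive's or Arcium's actual protocol: the modulus, node count, and function names are all assumptions made for the example. The point it shows is that no single screening node can read a transaction on its own; only the full set of nodes together can disclose it.

```python
import random

P = 2**61 - 1  # prime modulus; all share arithmetic is done mod P (illustrative choice)

def share(secret, n=3):
    """Split a transaction value into n additive shares, one per screening node."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

def reveal(shares):
    """Reconstruction needs every node's share: the network as a whole,
    not any single operator, decides when information is disclosed."""
    return sum(shares) % P

tx_amount = 1_000_000
node_shares = share(tx_amount)
assert reveal(node_shares) == tx_amount       # all nodes cooperating: value revealed
assert reveal(node_shares[:-1]) != tx_amount  # any strict subset sees only noise
```

Each share on its own is a uniformly random value, which is why a single node (or any strict subset) learns nothing about the transaction it is screening.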
But we really realized, I think, through the technology that we were developing, and we'll dive more into this later, I guess, that given the complexity of this MPC technology, there was just one area we had to focus on.
And we really saw our strengths in building this computing platform and optimizing the compute, both for trustlessness but also for performance.
And so that's what we ended up doing.
And in the case of, you know, you mentioned Tornado Cash. When I heard about that, I couldn't help but be reminded of the cypherpunk stories of the 90s.
We had Mark Miller on the podcast tell us about how he used to print the RSA algorithm on T-shirts, because the U.S. government considered encryption technologies to be weapons under U.S. law, to be munitions. And so, therefore, you couldn't export them.
And there was this whole legal threat and sort of threat of prosecution of those people in the 90s, you know, who were simply just freely expressing ideas and, more to the point, just making math public.
And so when I heard the Tornado Cash story, it felt to me very similar: these guys basically wrote math, and this stuff exists, it cannot be stopped. And I think that we'll look back on Tornado Cash in the same way that we look back on the cypherpunks in the 90s and what they were doing.
There's really little one can do to prevent malicious activity when you're using cryptographic technologies. I think it's a huge societal debate. But I hope that people who subscribe to the idea that encryption is just freedom of speech, that it's sort of like a right, I hope that those people will fall on the right side of history.
You know, you said earlier you came into the space kind of inspired to work in crypto, having read 1984. How did it make you feel that you were sort of constrained to build a technology that enabled compliance, when you had this very strong background and sort of ideological pedigree of non-compliance, in that sense?
Yeah, so I think, first of all, yeah, I am 100% on this cypherpunk track.
I think one of my greatest accomplishments in the privacy space would be getting my entire family to use Signal, and having the entire family group chat be present on Signal instead of WhatsApp.
So I think this compliance question
is an interesting one
and the way I think about it really is
that we gave control
in the hands of the people using it
at the end of the day.
there was no external action of enforcing compliance.
It was essentially being able to have distributed consensus
and giving users the option to use whatever system.
So the design that we built back in the day with Elusive really was decentralized compliance.
So no centralized actor should be able to have any control; it should be the users that have control.
And, yeah, we never ended up getting to the point of seeing that in action and experiencing those game theory and network effects.
But I think at the end of the day, it boils down to: do users want specific other users using the service for specific kinds of use cases?
And I think it's a bit easier to have these mechanisms when there are financial mechanisms attached, whereas it's quite different if it's just pure expression of ideas.
So I think that's a system that made sense, that at the end of the day found a natural equilibrium between both ends.
Yeah, it's hard not to also bring up the whole story around the Telegram founder, who was arrested in France, I think, like last week. Yeah, any thoughts on that, and, you know, what it means for the state of privacy technologies in Europe?
Yeah, I mean, to be honest, it's very difficult to say. I think Telegram is difficult for me, because on the encryption side of things, Telegram always, especially in more mainstream media, I think is being portrayed as this super encrypted, private chatting platform that cannot be hacked, whereas in reality there's no default, yeah, encrypted chatting between two peers, right? So I think the story is difficult.
Yeah, it's interesting to see, at least here in France. I've been watching mainstream media talk about Telegram as this encrypted messenger. In fact, they don't even use the word encrypted; they use this other word in French that doesn't even mean encrypted.
I mean, it's as if they were calling it a, you know, cyber messenger instead of an encrypted messenger. And it's really painted as this kind of black box thing, right?
You can tell that the story is spun in such a way, when in fact anybody who uses WhatsApp is in fact using encrypted technology, and anybody who uses Apple Messenger is using encrypted technology. And Telegram is really painted with this very negative sort of lens.
So it's interesting to see how even the mainstream media here, and I'm sure probably in Germany and other places as well, is very biased against these technologies for some reason.
But anyway, not to make this all about that.
We do want to talk about Arcium and what you guys are building.
So let's talk about confidential computing.
So what does that mean?
And what are the different ways that it has been implemented in the past
and what's kind of new about the way you guys are going about building it?
So yeah, I think confidential computing has been around for quite some time already.
And at the end of the day, it means that computations can be performed over encrypted data without having to decrypt that data.
So it's about so-called data in use, right?
We have data at rest, which just lies encrypted on some hard drive,
but data in use is the data that actively is being used in a computation.
And so confidential computing allows for that data in use to be secure.
That's what confidential computing tries to achieve.
And there's different use cases, I guess, for confidential computing.
For one, creating a secure execution environment, right? Having sensitive data, critical business data, for example, that has to be executed over in some cloud environment, and if that data were to be leaked, bad things could happen to companies, enterprises, governments or individuals.
So, building secure execution environments.
At the same time, confidential computing allows for data collaboration essentially.
So individuals can have their data remain private while being able to interact with others.
So those are two core aspects of what confidential computing tries to achieve.
And there are different kinds of technologies that have been utilized in the past to achieve this.
And I think the most prominent one so far has always been trusted execution environments.
And a few months back, I was in San Francisco at the so-called Confidential Computing Summit, which essentially is just a conference of trusted execution environment manufacturers: Intel, Microsoft, all those folks there, praising their trusted execution environment technology.
And yeah, I gave a talk there and I called it "trusted execution is dead, and we have killed it."
And so my take really is that
there's new kinds of technologies now
that can replace those legacy systems
that require trust.
So what we're trying to achieve at Arcium, and what the entire decentralized confidential computing, DCC, movement I think to some degree is trying to achieve, is to add more trustlessness to confidential computing.
And in our case, that's by utilizing multi-party computation.
So maybe, since you brought up TEEs already: we have seen over the years a lot of different types of attacks on these, like side-channel attacks and kind of supply-chain attacks.
And still, like you're saying, at the summit everyone's talking about them. I think also in the crypto space right now, TEEs seem to have another sort of resurgence in popularity. Why is that, maybe?
And how does MPC improve on that, or how do you tackle that at Arcium? And maybe explain also a little bit what these types of attacks actually are.
Yeah, sure.
So trusted execution environments are dedicated hardware
that comes with security and privacy promises,
essentially from the manufacturers.
So usually that's dedicated hardware, chips that have those so-called secure enclaves: virtual machines within those chips where data should remain secure, encrypted, and cannot be accessed outside of that enclave.
And an enclave has a code base associated with it, so only this specific code base can be executed over that state.
Which sounds like a great concept, and sounds like a difficult-to-realize concept at the same time, thinking about exploits and hackers.
So those systems suffer from many problems.
I think the easiest problem to grasp is actually the complexity of building architectures on top of trusted execution environments.
Anyone who has worked with TEEs will tell you that it's very difficult to build secure code on top of those enclaves, because developers really have to ensure that data is being handled correctly, and they need to give rigorous attention to detail so that the boundaries between the secure and non-secure environments are respected.
And that, at the same time, brings increased time to market, I think, because there needs to be a very thought-through development process.
And those systems, at the same time, also on their own come with quite high costs associated. It's dedicated hardware, and dedicated hardware requires specialized folks that are able to maintain and develop on those platforms.
Those are more of the soft factors that I don't like about working with trusted execution environments.
The bigger problems really are associated with the trust model. As the name already communicates, the execution here is trusted.
And the trust model, in my opinion, is fundamentally flawed, because, as you said, TEEs are really susceptible to a whole range of more or less sophisticated attacks.
And I don't know if you guys saw this, but one and a half weeks ago, something like that, I think there was a researcher on Twitter who was able to extract the root provisioning key from some Intel SGX family processors, which is a key that can be used to fake so-called attestation reports.
So trusted execution environments require this process of attestation, where they prove to you that they are a real trusted execution environment and you can trust this environment, so you can send your data in an encrypted way to that environment.
And this attestation service requires some third-party trust.
Regardless, on the processor side of things, there's side-channel attacks and hardware vulnerabilities.
And side-channel attacks really boil down to you, as the person having access to the processor, being able to just track the execution of some program.
And through this process of being able to look at, for example, power consumption, or tracking other metadata, I guess, that gets exposed through the execution of a program, you're able to understand the execution path in a program, right?
So if you think of an if-else statement: if some condition holds, something complex is being executed, and it takes one hour. And in the else case, the program just ends.
If the computation takes one hour, you will know, okay, the condition was true, because the first branch was executed.
And so that's the most simple form of performing a side-channel attack, right?
And so those processors are prone to these kinds of side-channel attacks, and there's many, many exploits that have occurred in the past, and many fixes for those exploits, but it's always new exploits that get identified.
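The if-else example above can be made concrete with a toy sketch. This is plain Python, nothing TEE-specific: the `sleep` stands in for the "one hour" branch, and the threshold is an arbitrary choice for the example. The attacker never sees the secret, only the wall-clock time, and still recovers which branch was taken:

```python
import time

def process(secret_condition: bool) -> None:
    """Victim program: its running time depends on a secret condition."""
    if secret_condition:
        time.sleep(0.2)  # the expensive branch (the "takes one hour" case)
    # else: the program just ends immediately

def timing_attack(run) -> bool:
    """Attacker: observes only how long the execution took, nothing else."""
    start = time.perf_counter()
    run()
    elapsed = time.perf_counter() - start
    return elapsed > 0.1  # a long run means the secret condition was true

assert timing_attack(lambda: process(True)) is True
assert timing_attack(lambda: process(False)) is False
```

Real side channels use far subtler signals (cache timing, power draw), but the principle is the same: data-dependent control flow leaks data, which is why constant-time code is so hard to get right inside enclaves.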
So in the blockchain space, we've seen a number of those kinds of exploits, actually.
I think the most notable one would have been Secret Network. I think it was at the end of 2022, where their, I think it was called a seed key or something like that, some master key that was shared between all the nodes in the network, all the TEEs there, was exfiltrated.
And then anyone who ever ran a node in the network would have been able to remove the privacy of all transactions, right?
So there's this inherent danger of side-channel attacks, firmware, microcode and SDK bugs.
It's just this large development stack with incredible complexity, both on the hardware and on the software, firmware, microcode level, where at the end of the day humans are building those systems, and so humans in most cases are also able to exfiltrate information.
Those are one kind of exploit, but I think more important, as you mentioned, Felix, are supply chains. Because when someone trusts a trusted execution environment, they're trusting a supply chain, usually a proprietary supply chain, which at the end of the day is just a chain of single points of failure.
Single points of failure that can occur in the attestation process, that can occur in the manufacturing process.
And I think a very striking example of how trusted these systems are is when Apple unveiled their Private Cloud Compute platform, which is their own TEEs that they supply for model inference for Apple Intelligence.
I think something that they stated in their docs, in their release article, was something like, okay, they physically ensure that the machines from the factory get placed in their data center, right?
So you can imagine people standing there with their assault rifles, protecting crates of computer chips. Which is insane, right? If that's the trust model that you're working with, that there has been someone who was able to physically guard this chip from A to B, I think that's not something that we should put all of our trust in.
So I wanted to ask you: because we've been focusing on SGX here, and I think in crypto at least, you know, SGX has gotten a lot of negative attention because of the Secret Network hack and some other issues. And I'm here on the Wikipedia page for Intel SGX, and there's about a dozen different attack vulnerabilities that are mentioned.
But so why is it then that we never hear of, like, the Apple TEEs in our phones being compromised, or, you know, even perhaps more interestingly, Ledger devices? They claim that they're super secure, and they have this whole, what do they call it, the Donjon or something like that, their whole security lab that just tries all day to crack a Ledger.
So why are those TEEs different, or are they? Is it just a misconception or lack of understanding of the technology?
Yeah, I think SGX has gotten a lot of attention. The architectures are significantly different in the details, of course, but the core architecture and the core reliance on these kinds of supply chains remains the same.
And yeah, I think in general, and we can go more into how I think trusted execution environments can play a role in the future, I think they can play a role. But just this approach of using these systems and saying, well, so far we didn't see any exploits, so we can be sure that we can trust those systems and use them for all our future, I think that's a flawed approach.
Because at the end of the day, it's just this sort of non-provable, blind trust. In the case of Intel SGX, we've seen how fatal this can be, really.
And I think what's important, and that's something that I've, I think, never seen anyone talk about when talking about these kinds of supply chains: in the case of Intel SGX, something that we've seen with different kinds of teams that use the technology was that, okay, there's the supply chain and the trust in Intel, essentially, and in the person that has access to the computer chip. But there's also someone who deploys the code for the enclave.
And they upgrade the code. They give updates. And they have a private key they use to sign those updates.
And so there's a single signing authority that can update whatever code. They can have their own local trusted execution environment, update the code, and exfiltrate information.
And at the end of the day, in most crypto projects that use trusted execution environments, yeah, the easiest single point of failure really is some development agency sitting on some island, I guess, that has a private key with full control over everything.
I think that's the most shocking aspect, and that's overlooked a lot when talking about the trust in the manufacturer. Most of the time, it's even simpler than that.
It's the $5 wrench attack. You just have to get the boat to the island.
Yeah, exactly.
Okay, yeah, that's super interesting.
I guess we still haven't even talked about MPC, so let's get there.
You know, we've basically been focusing on these hardware-based models, but now we also have purely cryptographically based privacy techniques. How do they work? What are their trade-offs, and how do you navigate that trade-off space?
I guess that's going to be the discussion for the next few minutes, hopefully.
Yeah.
Yeah, sure.
So essentially, the other technologies are zero-knowledge proofs, fully homomorphic encryption and MPC.
Zero-knowledge proofs we've spent a lot of time working with; zero-knowledge proofs are a great technology.
And at the end of the day, zero-knowledge proofs are even part of those FHE and MPC proving systems.
But zk-SNARKs, those more prominent proving systems, yeah, they don't allow for what we are trying to achieve.
What we're trying to achieve really is to be able to have some encrypted data, send it to some cloud environment, some blockchain network, whatever, send it to some computer, and have it processed without the party that processes it having access to the data.
And zero-knowledge proofs are this two-party protocol where one party is the prover, which has access to the data, and the second party, the verifier, does not have access to the data.
So zero-knowledge proofs on their own can't allow for those autonomous confidential execution environments.
FHE, fully homomorphic encryption, allows for that.
Essentially, the definition of FHE would be to be able to execute arbitrary operations over encrypted data. There's an encrypted representation of some state, and with FHE two operations, essentially addition and multiplication, are homomorphic over that representation, which means that they yield an output that again lies in this representation.
So that is a very, very good concept. And the problem with FHE really is the practical usage of the technology, because the multiplications for fully homomorphic encryption result in noise accumulation, and noise accumulation requires bootstrapping. So bootstrapping operations essentially reduce the noise of this encrypted output, and if there's too much noise, that output can't be processed anymore. So bootstrapping is required.
And that results in very high performance latencies and computational overhead. And that means that when using these kinds of FHE systems, there's many orders of magnitude of performance latency associated.
And so, comparing the MPC protocols that we have with some FHE operations, it's not unusual for us to see something like us being 80,000 times faster, something like that. So, incredible performance differences.
And for very, very simple operations FHE can be used, but more complex computations make this technology fail, at the end of the day, in practical implementations.
And so the system that we are using is multi-party computation.
And this multi-party computation uses so-called somewhat homomorphic encryption, SHE. And somewhat homomorphic encryption basically uses the efficient aspects of fully homomorphic encryption, which we also utilize there, but for multiplications there's a smart trick associated, I guess, that allows us to efficiently perform those multiplications as well.
And so that's why this technology makes the most sense.
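One common instantiation of such a multiplication trick in secret-sharing MPC is precomputed Beaver triples (whether Arcium's protocol uses exactly this construction is not stated here). A two-party toy sketch with an implicit trusted dealer follows; in real protocols such as SPDZ, the triples are generated in an offline phase, often using somewhat homomorphic encryption, which is the connection to SHE mentioned above. Additions on shares are free and local; each multiplication consumes one triple and one round of openings:

```python
import random

P = 2**61 - 1  # prime modulus for additive secret sharing (illustrative choice)

def share(x):
    """Split x into two additive shares; each share alone reveals nothing."""
    r = random.randrange(P)
    return [r, (x - r) % P]

def reveal(shares):
    return sum(shares) % P

def add(xs, ys):
    """Addition is 'free': each party just adds its own shares locally."""
    return [(xi + yi) % P for xi, yi in zip(xs, ys)]

def beaver_triple():
    """Correlated randomness a*b = c, dealt ahead of time (the 'smart trick')."""
    a, b = random.randrange(P), random.randrange(P)
    return share(a), share(b), share((a * b) % P)

def mul(xs, ys):
    """Multiply shared values: open the masked differences d = x-a and e = y-b,
    then combine locally so the shares sum to x*y without revealing x or y."""
    a_sh, b_sh, c_sh = beaver_triple()
    d = reveal(add(xs, [(-ai) % P for ai in a_sh]))  # open x - a (looks random)
    e = reveal(add(ys, [(-bi) % P for bi in b_sh]))  # open y - b (looks random)
    z = [(c_sh[i] + d * b_sh[i] + e * a_sh[i]) % P for i in range(2)]
    z[0] = (z[0] + d * e) % P  # exactly one party adds the public d*e term
    return z

xs, ys = share(6), share(7)
assert reveal(add(xs, ys)) == 13
assert reveal(mul(xs, ys)) == 42
```

The opened values d and e are uniformly random because a and b are, so nothing about the inputs leaks; the cost moves into the offline triple generation, which is why this approach can be so much faster online than FHE multiplication.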
And on top of that, what's also very important to realize with FHE is that moving from encrypted representation to encrypted representation is very nice. But what if one wants to move out of the encrypted representation, right?
If we have confidential on-chain applications and we want some transparent settlement to occur, there has to be a way to selectively disclose information about the encrypted state. And that's not possible with FHE by default, but with MPC it becomes possible.
And so, trust-model-wise, the trust models for systems in practice are the same for MPC and FHE applications, with MPC being many orders of magnitude faster.
And so that's why we arrived at building those systems with MPC.
Super interesting.
And do you think, I guess in general, this is also something that FHE can ever address in a way? Like, is it something that, you know, over time, the algorithms will get better? Probably, right? I mean, but yeah, we'd be curious about that.
That mostly boils down to hardware.
acceleration. So hardware acceleration is what matters most for that. There will be significant
improvements for sure. And what's important is that those hardware acceleration improvements also
get utilized by this somewhat homomorphic encryption system in MPC. MPC still requiring network
communication. That's
one difference requiring
some more network communication.
At the same time, with FHE, in order to achieve verifiable compute, you also require consensus mechanisms if you do it in the blockchain setting. And if verifiable compute, which is highly important for any computation, especially confidential computations, is achieved without using consensus, there's an even more significant performance slowdown. So hardware acceleration is what it boils down to, and I think these kinds of hardware accelerations will also be utilizable for the MPC protocols, especially with recent advancements in those protocols greatly reducing the need for network communication. So there are different options to pack a lot more data into each communication round. So yeah, I think at the end of the day, FHE will become more performant. But at this point in time, MPC just makes more sense.
So it's always about being able to supply the required privacy technology at this point in time and for the next years. And it's the same with fusion energy, I guess, right? We also need energy now and for the next 10 years, without having fusion energy.
So does this work with any type of computation? Can you perform really complex computations using Arcium? And what languages can one write computations in? Does the implementation support specific languages? Can you maybe explain that?
What we are trying to achieve as the vision with Arcium, and that's why I like to call it a supercomputer, really is arbitrary program execution in a confidential, encrypted way, but to also have the program itself be fully encrypted, and to have encrypted random access memory, encrypted RAM, right? So to have a fully encrypted computer. You really have this sort of replacement for purchasing some trusted execution environment; now you can use this global supercomputer to execute arbitrary programs, essentially arbitrary encrypted RAM programs.
The current stage we are at is that we support arbitrary computation, and for that we have a dedicated compiler that compiles arbitrary Rust code into the opcode microcode format that our network understands. So one could essentially be writing assembly code on the opcode level that the network understands, but what we offer mainly is this dedicated Rust compiler. Now, depending on the use case, different languages make more sense. So what we have at Arcium right now is two distinct backends, two distinct MPC protocols. One MPC protocol is highly focused on being as trustless as possible, especially for on-chain applications. And we have another backend that is fine-tuned for floating point operations, in order to perform operations over large matrices with floating points for machine learning. And for that backend we support a Python SDK, essentially. So one can use TensorFlow and pandas as one normally would, and now have it be confidential: have one party hold this data, another party hold other data, and then collaboratively train some logistic regression or tree boosting model with that. So essentially, Rust, and Python for the machine learning applications, is what I would say.
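The collaborative training setup described here, multiple parties pooling data without revealing it, can be sketched with secure aggregation over additive secret shares. This is a toy Python illustration of the general technique, not Arcium's actual SDK; all names and numbers are made up.

```python
import secrets

P = 2**61 - 1  # prime modulus for additive secret sharing

def share_vector(vec, n):
    """Split each element of `vec` into n additive shares mod P."""
    shares = [[secrets.randbelow(P) for _ in vec] for _ in range(n - 1)]
    last = [(v - sum(s[i] for s in shares)) % P for i, v in enumerate(vec)]
    shares.append(last)
    return shares

# Three hospitals each hold a private local gradient (toy numbers).
hospital_grads = [[5, 2, 9], [1, 0, 4], [3, 3, 3]]
n_nodes = 3

# Each hospital secret-shares its gradient among the compute nodes.
shares_by_node = [[] for _ in range(n_nodes)]
for grad in hospital_grads:
    for k, s in enumerate(share_vector(grad, n_nodes)):
        shares_by_node[k].append(s)

# Addition is "free" in additive sharing: each node sums locally,
# with no extra network communication needed.
node_sums = [[sum(col) % P for col in zip(*held)] for held in shares_by_node]

# Reconstructing the node sums reveals only the aggregate gradient,
# never any individual hospital's input.
aggregate = [sum(col) % P for col in zip(*node_sums)]
assert aggregate == [9, 5, 16]  # 5+1+3, 2+0+3, 9+4+3
```

Iterating this secure-sum step over model updates is the basic pattern behind collaboratively training a model on pooled, never-revealed data.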
Yeah.
Awesome.
Let's maybe also take it back to Arcium as this network overall, right? So we talked about why you're using MPC and confidential computation, but to achieve this at scale, there is a network, and there's the crypto component of it. So can you provide us an overview of the Arcium structure architecturally, as a crypto protocol, and how that functions?
Essentially, Arcium itself is, you could say, a stateless computing network. So we utilize existing ledgers, existing blockchains, for state management and computation scheduling. So you can have your own smart contract on Solana, for example, that has confidential functionality, and this smart contract then calls an Arcium smart contract that inserts a new computation into the Arcium mempool; the network picks that up, executes the computation, and settles it back to the corresponding network. What's important is that for the Arcium network itself, you don't write a smart contract for Arcium. You can use our smart contract SDK and build a confidential smart contract, but the network itself just runs confidential computations, and it picks those up from dedicated on-chain mempools that can exist on different ledgers. And so I can build a confidential off-chain application and send such a computation request to the mempool, and it doesn't have to settle back to a dedicated smart contract.
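The request lifecycle just described, submit to an on-chain mempool, a cluster picks it up, the result settles back, can be sketched as a toy queue model. All names here are hypothetical and do not reflect Arcium's API.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Computation:
    program: str        # reference to an encrypted program (hypothetical)
    inputs: bytes       # encrypted inputs
    callback: Callable  # where the result settles back

@dataclass
class OnChainMempool:
    """Toy model of an on-chain queue that compute clusters poll."""
    queue: List[Computation] = field(default_factory=list)

    def submit(self, comp: Computation) -> None:
        self.queue.append(comp)

    def pick_up(self):
        return self.queue.pop(0) if self.queue else None

# A dApp's smart contract submits a computation request...
results = []
pool = OnChainMempool()
pool.submit(Computation("add_balances", b"<ciphertext>", results.append))

# ...a cluster picks it up, executes it off-chain, and settles back.
comp = pool.pick_up()
comp.callback(f"settled:{comp.program}")
assert results == ["settled:add_balances"]
```

The point of the model is the decoupling: the chain only stores the queue and the settlement, while the confidential execution happens off-chain in the cluster.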
Now, how the network itself functions: essentially, it's a permissionless network of nodes. And so the three of us could each deploy a node in Arcium, and we could open our so-called computation cluster. So our computation cluster could now be used by anyone to run confidential computations. Or we could restrict that computation cluster to say, okay, we don't care about providing computational services to third parties; we just want to run our own confidential computations. So it's fully configurable. You can have clusters containing nodes from the network, or you can run your own node. And what's so highly important about the trust model associated there is that it's the dishonest majority trust model. So any computation cluster only requires one participant to be honest, one participant that doesn't collaborate with all the other participants in a malicious way. So with that trust assumption, you can have arbitrarily sized computation clusters
and associate them with a so-called MXE, an MPC execution environment. So that's essentially this virtual machine instance that you create as a developer. You create this virtual machine instance where you have encrypted state, you have your functions that can be called, essentially analogous to a smart contract, if you will. And that environment has one or more clusters associated that then process these kinds of computations.
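The one-out-of-N honest (dishonest majority) trust model has a simple intuition in additive secret sharing: any N-1 shares are consistent with every possible secret, so even N-1 colluding nodes learn nothing. A toy Python check of that property:

```python
import secrets

P = 2**61 - 1  # prime modulus for additive secret sharing

def share(value, n):
    """Split `value` into n additive shares mod P."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

# A 5-node cluster; 4 nodes collude and pool everything they saw.
secret = 314159
shares = share(secret, 5)
colluders_view = sum(shares[:4]) % P  # all the dishonest nodes hold

# For ANY candidate secret there exists a value of the one missing
# share that is consistent with the colluders' view, so their view
# rules nothing out: a single honest node keeps the input private.
for guess in (0, 1, secret, P - 1):
    missing = (guess - colluders_view) % P
    assert (colluders_view + missing) % P == guess
```

This is exactly why cluster size is a liveness and performance choice rather than a privacy choice under this model: privacy holds as long as even one node stays honest.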
Now, the problem one would run into on a theoretical level is that such a computation cluster has this ideal trust assumption, only requiring that one node is honest, right? Whereas for our blockchains, with practical Byzantine fault tolerance, we have the honest majority trust assumption. And here this dishonest majority trust assumption, one out of N, really is perfect. But the problem is that this also becomes highly dangerous for censorship, because one malicious party could also censor computations. And that really has in the past been a practical limitation for those very efficient and trustless MPC protocols. Because, yeah, deploying them in this permissionless setting could really mean that some malicious party could just DDoS the operation, essentially.
And so what we have is a new kind of cryptographic protocol that allows us to enforce execution, and also correct execution, for these computation clusters. So if you have this computation cluster, our cluster in this example, and Sebastien just randomly decides to share some invalid data with me, at some point I will notice that invalid data has been shared, and I will stop the computation, as I'm an honest participant. The problem now is that by default, Felix and I wouldn't be able to cryptographically identify that Sebastien was the one that caused the computation to fail by sharing invalid data. Now with our new protocol, we are able, and actually everyone in the world viewing our computation from the outside is able, to identify that Sebastien is the one that caused the computation to fail.
And what that means now is that we can punish Sebastien for causing the computation to fail. And so that's where blockchains really come into play: we are able to enforce execution of those computations and enforce this kind of correct behavior. So we are able to add accountability to those computations, which makes it incredibly powerful, because then even smaller-sized clusters can performantly run those computations, and the deployer of these clusters can be certain that execution will occur. And so that's where our staking and slashing mechanism comes into play.
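The cheater-identification idea, publicly attributing a failed computation to the node that shared bad data, is in the spirit of what the MPC literature calls identifiable abort. A toy sketch using hash commitments, illustrative only and not Arcium's actual protocol:

```python
import hashlib
import secrets

def commit(share: int, nonce: str) -> str:
    """Hash commitment binding a node to the share it will later open."""
    return hashlib.sha256(f"{share}:{nonce}".encode()).hexdigest()

# Setup: each node publicly commits to its share before computing.
nodes = {}
for name, share in [("felix", 17), ("yannik", 23), ("sebastien", 42)]:
    nonce = secrets.token_hex(16)
    nodes[name] = {"nonce": nonce, "commitment": commit(share, nonce)}

# Later, "sebastien" opens a DIFFERENT value than he committed to.
opened = {
    "felix": (17, nodes["felix"]["nonce"]),
    "yannik": (23, nodes["yannik"]["nonce"]),
    "sebastien": (99, nodes["sebastien"]["nonce"]),  # invalid data!
}

# Anyone, inside or outside the cluster, can check each opening
# against the public commitments and attribute the failure.
cheaters = [name for name, (s, r) in opened.items()
            if commit(s, r) != nodes[name]["commitment"]]
assert cheaters == ["sebastien"]
```

Once misbehavior is publicly attributable like this, an on-chain staking and slashing mechanism can turn detection into enforcement.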
So compared to other privacy projects out there, how do you think Arcium is fundamentally different? What is really the key differentiator that you guys are offering as a product?
Yeah, I think that really boils down to being trustless and performant at the same time. Because what we've seen really in the past is that systems either require trust via trusted execution environments or have incredible slowdowns with fully homomorphic encryption. And this sounds stupid, saying that as the founder of Arcium, but what I've seen in practice, and we are right now moving into our private testnet and have teams building with Arcium, really is a lot of teams that have attempted building complex systems with FHE and, yeah, pivoted to using Arcium, because on the one hand, it's easier to use, and on the other hand, it just meets the performance requirements that applications have. And so I think that's really what it boils down to. And at the end of the day, all of that sort of is user experience, although trustlessness might be more hidden from the end user.
And in terms of applications, obviously there's applications here in crypto, and I'd love for you to maybe talk a little bit about some of the applications that you're seeing and that initial users are building. But applications here go beyond crypto, I think. If we're talking about the trusted execution environment market, potentially clients of TEEs, companies that use TEEs, could shift to using technologies like Arcium. Maybe also describe some of those use cases and efforts that you're making to address that part of the market as well.
Yeah, sure. So I think that's also one of the more interesting aspects of building Arcium: that it's not just solving problems in the crypto space, but solving problems for the entire world, for enterprises, governments, any kind of organization, I think.
And that's also essential to the architecture that we've designed. And I mentioned earlier that you as a developer don't have to deploy a smart contract or build a smart contract in order to run confidential computations. You're essentially just accessing this confidential computing network.
And what that means is that on the crypto side of things, there's a lot of use cases, especially in DeFi and DePIN. There's a lot of teams building amazing stuff with our technology. And also, and I think this is actually less important to the crypto space and more important to the traditional computing space, there's being able to run confidential AI, more specifically confidential machine learning, with Arcium.
Because in the past, confidential machine learning really hasn't been possible. And with this MPC-based confidential computing, what's so striking is that completely new types of digital interactions, I guess, become possible. Because all of us have isolated data silos, isolated data sets of highly sensitive data that on its own is valuable, right? All of us care about our privacy; we would never share that data. But if we were able to in some shape or form pool this data together and train AI models using it, that could become very valuable. And I think a very striking example for that would be healthcare, right? Very sensitive patient data can now be utilized in a trustless way; new models can emerge that can help all of us, without any one of us ever having to expose their sensitive data. And this really is this collaborative element. And what we've seen is even logistics firms, right? So we're talking with logistics firms that want to improve supply chains. They don't care about blockchain. All they care about is not giving their sensitive supply chain data to competitors, while at the same time still being able to improve their supply chains. They can all have this sort of win-win situation by using this trustless confidential computing technology. So I think that's really the power of decentralized confidential computing: to address traditional markets as well. And that's also what drives us. It's not just building applications for crypto's sake, but building applications for humanity's sake at the end of the day, I think.
When considering the Arcium network, I think one of the important things to consider here, and I'd love to get your take on this, is the censorship risk. So in the case of an application that would handle the healthcare data of an entire nation, if we had a small number of nodes, there would be a risk of censorship, and so therefore you'd probably want to have as many nodes as possible in there. Are there basically sort of exit mechanisms to prevent censorship? And also, is the network performance maintained as the number of nodes scales, or does it start decreasing as you add more nodes to the network?
Yeah, sure. So we have multiple mechanisms to combat censorship. I think the first mechanism, which I outlined earlier, is what on a technical MPC level is called cheater identification: to be able to cryptographically identify anyone who tries to stop a computation from occurring. And we can force everyone, game-theoretically, to run a computation; otherwise, they get slashed, they get punished.
Now, if such a misbehavior happens and the misbehaving party doesn't care about losing all of their staked assets, there are fallback mechanisms. There's so-called cluster migration, to be able to move from one cluster of nodes to a new cluster. There are also cluster forking mechanisms for nodes that want to opt out, because maybe there are some kinds of computations happening where one node says, okay, I'm not comfortable processing these kinds of computations. So there really is also this tension around censorship, because we want to force nodes to run any computation, while at the same time there might be, based on where the node is being operated, legal implications that prevent it from processing certain computations. And so we have both mechanisms. We can allow for nodes to say, okay, in the future I will not want to process more computations for this corresponding MXE, and they can do that; they need to pay for the so-called cluster forking then. And there can also be those forceful migrations if a node just doesn't jointly compute the function that they have to compute.
Right, maybe taking it back to the use cases for a potential final question. We talked a lot about Arcium being this platform, and actually you already mentioned that initially you were building this confidentiality product with Elusiv, right? Kind of more application-focused, or I guess in general the Elusiv product was more application. Now you're a platform, and obviously if you're a platform you have to do ecosystem building. You mentioned you've been speaking to a lot of logistics firms, healthcare, traditional businesses. And I think you also mentioned that you yourselves met at a hacker house in Solana, which I think is quite interesting and probably one of the best ecosystem building examples in all of crypto in recent years, seeing that it brought you together, and just generally how successful Solana has been with that. So yeah, the question basically is: how are you approaching this? How are you getting people to build on the Arcium platform?
Yeah.
So we currently have way too many people trying to build on Arcium. That's the current status. So a few months back, when we announced Arcium, we started accepting developer and node operator signups for the private testnet, which we started rolling out. And currently we are in the cohort one phase, so the first group of developers that get hands-on support from us, get access to all of the tooling, and can start building applications. And step by step we are expanding those cohorts and adding more teams. So folks can join our Discord and register to be accepted into those private testnet development rounds, and then they get full access to Arcium, can build their applications, and get development support from our end.
Yeah, congrats on the success there, for sure.
I hope we'll see a bunch of cool applications.
Seb, anything else from your side?
No, I'm just always amazed that these technologies, as great as they are, the technologies that we really need, like private compute, more privacy technologies, are not the ones in crypto that get the most attention or the most capital. And so I'm just really glad to see Arcium, which raised 5.5 million recently from some pretty big VCs, and hopefully see it take this technology to market, because I think it's really important.
I also, at some point, brought all my family onto Signal and thought that was a really big achievement, even though Signal might be backdoored and we have no real way of finding out. But yeah, I think this stuff is important, and it's also political. I think we shouldn't forget that encryption is a political issue more than just a technological issue. And so I'm glad we can talk about it here on this podcast. Thanks for coming on, Yannik.
Thanks for having me, guys.
Thank you for joining us on this week's episode. We release new episodes
every week. You can find and subscribe to the show on iTunes, Spotify, YouTube, SoundCloud,
or wherever you listen to podcasts.
And if you have a Google Home or Alexa device,
you can tell it to listen to the latest episode
of the Epicenter podcast.
Go to epicenter.tv slash subscribe
for a full list of places
where you can watch and listen.
And while you're there,
be sure to sign up for the newsletter,
so you get new episodes in your inbox as they're released.
If you want to interact with us,
guests or other podcast listeners,
you can follow us on Twitter.
And please leave us a review on iTunes.
It helps people find the show,
and we're always happy to read them.
So thanks so much,
and we look forward to being back next week.
Thank you.
