Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - MacLane Wilkison: NuCypher – Proxy Re-Encryption for Distributed Systems

Episode Date: September 15, 2020

NuCypher is a decentralized threshold cryptography network offering interfaces and runtimes for secrets management and dynamic access control. It provides cryptographic infrastructure for privacy-preserving applications and it exists as a smart contract on the Ethereum blockchain. They are currently in testnet, but once launched you will be able to use NuCypher to carry out proxy re-encryption.

Proxy re-encryption is a cryptographic method that allows you to delegate re-encryption of data to a third party, in this case validators in the NuCypher network. With this method you issue a re-encryption key that is assigned to a specific public key, and a third party then has the ability to re-encrypt data for that recipient to decrypt. In this case the NuCypher network acts as the third-party service. It is useful when you want to share data with the ability to revoke access at a later stage. They also have an interesting token distribution mechanism called the WorkLock. This is similar to a Lockdrop, but with work required from the participants.

MacLane Wilkison, co-founder and CEO of NuCypher, talks about how and why the protocol was created, the challenges faced with it, and the problems it solves.

Topics covered in this episode:
- MacLane's background and how he got into the blockchain space
- Challenges with working with corporate entities and getting people to understand privacy-preserving technologies
- The ZeroDB protocol and how that led to NuCypher
- What MacLane's goal was with starting NuCypher
- What proxy re-encryption is and why it's desirable over manual systems
- What threshold signatures are
- Attributability on the protocol
- Some use cases of using FHE on NuCypher and why they have chosen this for the protocol
- MacLane's thoughts on trusted hardware
- The infrastructure of NuCypher and why it is on Ethereum
- Results from the testnet
- WorkLock, their token distribution mechanism, and how it compares to Lockdrop
- How you can set up a node on NuCypher
- Current gas costs and the impact of this
- The NuCypher business model and how they make money
- Where to learn more about NuCypher

Episode links:
- [NuCypher Website](nucypher.com)
- [NuCypher Discord](https://discord.com/invite/7rmXa3S)
- [NuCypher blog](https://blog.nucypher.com/)
- [WorkLock](https://blog.nucypher.com/the-worklock/)
- [NuCypher Twitter](https://twitter.com/NuCypher)
- [MacLane's Twitter](https://twitter.com/MacLaneWilkison)

Show notes and listening options: [epicenter.tv/357](https://epicenter.tv/357)

Transcript
Starting point is 00:00:00 This is Epicenter, episode 357 with guest MacLane Wilkison. Hi, I'm Sebastien Couture, and you're listening to Epicenter, the podcast where we interview crypto founders, builders, and thought leaders. On this show, we dive deep to learn how things work at a technical level, and we fly high to understand visionary concepts and long-term trends. If you like Epicenter, there's only one way that you can support us that's super easy and costs nothing, and that's to leave a review on Apple Podcasts. If you're on a Mac, an iPhone, or an iPad, pick up your device and go to epicenter.rocks slash Apple. And that will take you exactly where you need to go to leave us an Apple Podcasts review. And I know it sounds complicated because you've got to think about
Starting point is 00:00:54 what you're going to write. But here, I'll give you some examples. You can tell us how long you've been listening to the show. You can tell us who's your favorite host. You can share, you know, one thing that you learned on this podcast. Heck, you can even share your super secret, you know, yield hacking, sushi farming strategy, if you like. All the reviews that we get help establish Epicenter as the leading technical podcast in the crypto space, and it helps people find it. So thanks so much to everybody who's left us a review already. I love getting those Slack notifications and being able to read those reviews. And thanks in advance to everybody who I know will leave us an Apple podcast review in the next couple days. So with that out of the way,
Starting point is 00:01:38 I'd like to now introduce our guest for today's episode. It's MacLane Wilkison, and he is the co-founder and CEO of NuCypher. NuCypher is a company that is providing cryptographic infrastructure for privacy-preserving applications, and this infrastructure exists as a smart contract on the Ethereum blockchain. They're in testnet currently, but they'll be launching soon, and when they do, you'll be able to use NuCypher to do something called proxy re-encryption. Proxy re-encryption is a cryptographic method that allows you to delegate re-encryption of data to a third party, in this case, validators in the NuCypher network.
Starting point is 00:02:12 So let's take an example. So you have Alice and she has encrypted data that's stored up in the cloud or on a centralized cloud service or decentralized cloud service like IPFS. And she wants that data to be sent to a third party, Bob. One way to do that would be to download the data, decrypt it, and then re-encrypt it with Bob's public key, put it back up in the cloud where Bob can have access to it. Well, that might work in some cases, but it's a little bit cumbersome as it implies that Alice has to do this operation every time she wants to send data to someone else. With proxy re-encryption,
Starting point is 00:02:42 you issue this re-encryption key that is assigned to a specific public key, and then a third party has the ability to re-encrypt data for that third party to decrypt. And so in this case, it is the NuCypher network that is acting as that third-party service. And of course, the owner of the data can then revoke that key if necessary. So there are interesting use cases for things like sharing medical records, but, you know, you could also imagine it used in something like a decentralized Dropbox where you might want to share files with a specific person and then revoke access later on. Another interesting thing about NuCypher is their token distribution mechanism. They're doing something they're calling a WorkLock, which is essentially like a Lockdrop,
Starting point is 00:03:24 but with actual work involved. So in a more quote-unquote traditional Lockdrop, like the Edgeware Lockdrop, for example, you lock up some ETH, and at the end of the lock period you get back your ETH and you get your allocated tokens. In the NuCypher WorkLock, the participants actually have to do some work. So they're going to have to run a validator for a period of six months. That validator is pinging the smart contract every day. And at the end of the lock period, they get back their ETH as well as the allocated tokens. And so this is an interesting way to bootstrap this type of network with actual participants, people who are actually providing value to the network. Of course, there's a number of challenges around this, specifically regarding transaction fees, which we get into
Starting point is 00:04:04 during the interview. This episode is brought to you by Algorand. Algorand has built a sophisticated platform that allows developers to build equally sophisticated applications. So they've built a layer-one protocol that has all the primitives that you need to build DeFi apps. So things like token issuance and atomic transfers are built right into the protocol. But I'll tell you a little bit more about that later on in the interview.
Starting point is 00:04:28 For now, though, here's our conversation with MacLane Wilkison. We're here with MacLane Wilkison. He's the co-founder and CEO of NuCypher. And NuCypher is a platform for, I'll let MacLane describe it, but I would describe it as building infrastructure for things like key management and secure multi-party computation or fully homomorphic encryption. I think we'll get into the nuts and bolts. Thanks for joining us. Thank you guys for having me. I'm very excited to chat about NuCypher.
Starting point is 00:05:02 Tell us about your background and how you became involved in the crypto space. My background is kind of this weird blend of technology and software engineering, but also traditional finance. So out of undergrad, I worked for a couple years in M&A and capital advisory at Morgan Stanley, helping large corporations in the tech, media, and telecom sector go public or raise debt capital, things like that, or mergers and acquisitions. And then I left after two years and moved out to the Bay Area. And sort of concurrent with that, I started getting very interested in Bitcoin and, you know, sort of this new thing that hadn't quite happened yet. Vitalik was on his roadshow for Ethereum, kind of pitching the upcoming ICO, and I was working out
Starting point is 00:05:47 of a hacker space and randomly ended up in kind of a meetup where Vitalik was pitching Ethereum and what it was, and randomly just happened to sit in on that and, like, didn't understand anything that he was talking about, but thought it sounded super interesting. I had been kind of aware of Bitcoin, obviously, when I had been working at Morgan Stanley, but never really had time to dig into it, but just got very interested in sort of this new or potentially new industry and felt like it was a really exciting time to be involved. This was back when sort of, you know, Counterparty and Mastercoin and things like that were happening. So we did some early experiments on top of those protocols, just to try to understand, like, what sorts of things could we build on
Starting point is 00:06:24 this, you know, is it interesting, like, what are the problems that it's trying to solve? And then I ended up actually kind of staying in the cryptography space and moving a little bit away from blockchain for a little while. My co-founder, Michael Egorov, who's now running Curve, and I actually started the company building an end-to-end encrypted database. It was kind of inspired by the sort of early experiments that we had been doing in blockchain with Counterparty, but really it was just, you know, a traditional database that allowed you to store and search encrypted data without the server ever being able to see that data. So we worked on that for a while.
Starting point is 00:07:00 That was called ZeroDB. It became kind of a relatively, like, popular open source project in, like, the Hacker News crowd, but we could never really figure out how to monetize it. Like, we were pitching large banks to use it for, you know, archiving data in the cloud, for example. But it was kind of a tough row to hoe on that one. And we worked on that for a couple years. That was the genesis of the early precursor to NuCypher.
Starting point is 00:07:22 And then we ended up doing Y Combinator in summer '16 with the company, and then right after that was when dApps started to become a real thing, a little bit more as opposed to kind of a theoretical thing. We realized that a lot of the stuff that we had built for ZeroDB, we could kind of repurpose for use by decentralized applications or other decentralized protocols, in an attempt to kind of bring this ability to do access control and share private data alongside a decentralized storage system. So it's kind of a long meandering path to get where we are, but we have kind of always worked
Starting point is 00:07:55 in and around the cryptography and privacy space. You mentioned ZeroDB and the sort of challenge that you had selling that to banks and corporations. I think a lot of this kind of zero-knowledge technology, or like these encryption technologies that allow you to perform computations on or, like, use encrypted data, are, like, super interesting to people like us who, like, think about privacy and that sort of thing. But I found, like, in my experience, you know, dealing with corporates and, like, large corporations, that a lot of these companies are not necessarily interested in this stuff. And so there's often, like, a disconnect between, like, the people who are, like, super interested and passionate, like, check out all this cool stuff we can build around it. And then, like, the real business world that doesn't really give a shit.
Starting point is 00:08:42 I don't know if that's kind of been your experience as well and, you know, how you got around that. It was definitely a super hard thing to try to sell encryption technology and, like, an encrypted database into a large enterprise, especially in the financial sector. I personally probably wouldn't try to do it again. I wouldn't recommend it as a first-choice go-to-market for someone working on a startup. There are people at these large institutions,
Starting point is 00:09:06 at these banks and other large enterprises that are very excited about and are very interested in it. But I think by and large, those don't overlap with the people who are able to like make the purchase decision. So you'll have these interesting technical conversations with people and you'll be excited about what they could eventually use it for, but ultimately they're not the person who actually decides
Starting point is 00:09:28 whether they're going to use it and pay for it or not. And I think that's really one of the most exciting things for me as an entrepreneur in the blockchain space, is that there is a huge amount of overlap between the people who are just intrinsically interested in this stuff and the people who have the ability to just immediately start using something, because either they're running a project or, you know, there is a high degree of overlap there. So it's, like, much more that the early adopters are able to actually deploy stuff early on.
Starting point is 00:09:56 So that's super exciting, you know, as a founder and a developer: as soon as you build something interesting, there's a much lower hurdle for people to actually start using it. What are some of those hurdles? I mean, you know, if you look at sort of privacy-preserving technologies more broadly, probably we're all users of Signal here, for example. We're super excited about privacy-preserving technologies, and we'll get our non-technical friends to get on these things and start using them. What are some of the hurdles that you've encountered in
Starting point is 00:10:25 getting people to understand the value and the kind of utility of privacy-preserving technologies? And, you know, are there some things that most people will just never really be able to grasp? Yeah, I think the unfortunate reality of trying to sell technology to large enterprises is that it's much more about, like, the enterprise sales motion and how good your org is at that than it is about the technology. So do you have this sort of enterprise sales org that can reach these customers, that can influence these decision makers? There's a whole lot of sort of committee and bureaucracy that you have to go through, particularly for something like this where it starts touching their core infrastructure, and it's encryption, it's very sensitive, and it
Starting point is 00:11:08 has regulatory and compliance implications for the end user. Like, there's a lot of layers of approval that you have to go through. You have to go through compliance. They're going to want to know who your institutional backers are. They're going to want to know who your existing customers are, because no one wants to be, like, the first bank to adopt something and then, you know, it goes wrong. I think, like, if you're building enterprise technology and selling top-down to, like, CIOs or CTOs, it's much less, unfortunately, about, like, the technology and much more about that enterprise sales motion and whether you have that expertise inside of your organization. And that's just not, I think, for me at least, like, I'm not really a salesperson.
Starting point is 00:11:43 That's not something that I'm especially, particularly, like, intrinsically interested in. With ZeroDB, what was some of the, like, technology that you had to use there, and how, like, related was it to, like, sort of the stuff you ended up building at NuCypher? So ZeroDB was actually quite simple. It was an encrypted database where you would encrypt everything client-side before you upload it to the server, and the server would never see the plaintext data. So usually when you say that, people are like, oh, it must be, like, fully homomorphic encryption. It was not. It was much, much simpler than that. So basically we would, like, remotely and incrementally traverse,
Starting point is 00:12:21 like, a B-tree index. So we'd encrypt the index client-side, send it to the server, and then, like, when we want to search for something, we'd ask the server for, you know, the encrypted root of the tree, fetch that, download and decrypt it. Then we know, like, which branch of the tree to go to next, and we just basically do that. So it was pretty sort of simple and kind of a little bit naive. It wasn't nearly like the fancy, like, fully homomorphic encryption sort of paragon everyone is looking for. So it's still sort of like a client-side search, but, like, a smarter way of, like, doing it, basically. Exactly. So, like, you couldn't really use it for applications that were performance-sensitive, but you could use it maybe for things where you didn't really need to get a
Starting point is 00:13:01 result back quickly. So, like, the use case that we were pitching to enterprises was, you can use us to archive stuff. So a lot of banks at the time, and still currently, I think, are basically just archiving everything on-prem, using, like, tapes. Whereas using ZeroDB, they could theoretically throw that into some cloud storage like AWS Glacier or something, but still never expose anything to Amazon, and be able to query it back, you know, if a regulator asked for something from, you know, 8, 10 years ago. So that was the basic technology. And then we also built in this functionality to allow other people to query that encrypted data as well. And that was how we discovered this technology called proxy re-encryption.
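The incremental traversal MacLane describes can be sketched as a toy client-side search over encrypted index nodes. This is not ZeroDB's actual protocol or data format; the `store_node`/`search` helpers and node layout are invented for this sketch, and the hash-based stream cipher is a stand-in for a real authenticated cipher:

```python
# Toy sketch of ZeroDB-style client-side index traversal: the server
# stores encrypted index nodes by opaque id and never sees plaintext;
# the client fetches one node at a time, decrypts locally, and decides
# which child to fetch next.
import hashlib, json, os

KEY = os.urandom(32)  # client-side secret; the server never sees it

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy stream cipher for illustration only; a real system would use
    # an authenticated cipher such as AES-GCM.
    nonce = os.urandom(16)
    ks = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, body = blob[:16], blob[16:]
    ks = _keystream(key, nonce, len(body))
    return bytes(a ^ b for a, b in zip(body, ks))

# "Server": a dumb key-value store of encrypted nodes.
server = {}

def store_node(node_id: str, node: dict) -> None:
    server[node_id] = encrypt(KEY, json.dumps(node).encode())

# A tiny sorted index built client-side: node = {key, value, left, right}.
store_node("n2", {"key": 20, "value": "record-20", "left": "n1", "right": "n3"})
store_node("n1", {"key": 10, "value": "record-10", "left": None, "right": None})
store_node("n3", {"key": 30, "value": "record-30", "left": None, "right": None})

def search(node_id, wanted):
    # Each round trip fetches exactly one encrypted node; the server only
    # learns which opaque ids were requested, never keys or values.
    while node_id is not None:
        node = json.loads(decrypt(KEY, server[node_id]))
        if wanted == node["key"]:
            return node["value"]
        node_id = node["left"] if wanted < node["key"] else node["right"]
    return None

print(search("n2", 30))  # -> record-30
```

As in the interview, the trade-off is latency: every level of the tree costs a round trip, which is fine for archival queries but not for performance-sensitive workloads.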
Starting point is 00:13:41 Without proxy re-encryption, obviously whoever has the key, basically the person who encrypted client-side, is the only one who can query a ZeroDB database. But we started getting interested in proxy re-encryption because then the enterprise could enable partners, whether it's, like, a supplier, a regulator, or a customer, whoever: they could delegate access to that encrypted data and allow them to do the search as well. And that ended up actually becoming, like, the most interesting part
Starting point is 00:14:12 of ZeroDB was this ability to sort of delegate access to encrypted data. And that's ultimately kind of what morphed into sort of the first iteration of the NuCypher network, which is basically a decentralized network of a bunch of nodes that perform this proxy re-encryption function. The basic idea of ZeroDB we sort of left behind, but this proxy re-encryption, or delegated access to encrypted data, is what we kind of pulled out, and that eventually became NuCypher. You briefly mentioned tape drives there. And, like, as you were talking about it, I Googled tape storage, and the first result is, like, IBM selling these massive... like, these things still exist and companies still buy machines to store, like, petabytes of data on tapes. This is wild.
Starting point is 00:14:49 I didn't know this was still a thing. I think it's not an infrequent scenario where, like, you have something stored on a tape and, you know, 10 years later a regulator comes and says, hey, I need access to this data. And you go and you get the tape. And it takes a couple days to get the tape out of storage, and you upload the tape. And then it turns out all the data is corrupted because the tape got, I don't know, damaged or something. Yeah, because, like, magnetic data, you know, magnetic storage, what could go wrong? It's pretty terrible.
Starting point is 00:15:14 So moving on to NuCypher. When you started working on NuCypher, like, what was your goal, and, like, what were you trying to achieve with this? And what kind of convinced you that there was a problem worth solving here? Basically, we made this sort of pivot back to blockchain. This would have been probably early 2017, right when we saw, like, hey, dApps are kind of becoming a thing, right before the whole totally bizarre craziness that happened after that
Starting point is 00:15:38 in the ICO world. But the reason why we had originally decided to build ZeroDB not for blockchain was just because it was too early. Like, at the time, you know, back in 2013, 2014, like, the idea of dApps was a thing that people were talking about, but there were no dApps at all, or no meaningful dApps. So, like, we basically decided, hey, whatever we build, like, it doesn't matter, because there's no one around that is actually going to use it. At the beginning of 2017, I think it felt like that was starting to change. So we sort of identified this gap that we saw, which was, you know, everyone's building
Starting point is 00:16:09 public blockchains, these decentralized storage layers like IPFS and Swarm and Sia, but there's not really a good way to build access controls into your application. Yes, you can encrypt data client-side before you upload it to, you know, IPFS storage or something like that, but if you want to share that data later on with another user of the application, it's kind of difficult. Either you can just give them the key, which is not particularly secure,
Starting point is 00:16:39 or whoever encrypted it originally has to download the data, decrypt it, encrypt it with the recipient's key, and send it to them. So it's a bit clunky. And so the idea was basically we could use proxy re-encryption to make this a lot smoother of an experience. As a user, let's say, you know, a simple example would be you're building some sort of electronic health records application on IPFS. You could encrypt those records before you upload them to the storage. And then later on, you can use NuCypher to delegate access
Starting point is 00:17:03 to that encrypted data to a doctor or hospital or insurance company, something like that. And what the NuCypher network can do is basically take that encrypted data and re-encrypt, or transform, it such that it's now encrypted under the recipient's key, without having to decrypt it in the middle. We built a lot on top of this basic proxy re-encryption functionality to make it a little bit more user-friendly. So you can attach, we call it a policy, where you can attach certain conditions to sharing that the network will enforce. So you only want to share between, you know, some certain time period, or maybe you want to revoke access later, things like that. Or maybe you want to condition it on the recipient paying you for access to the data first.
Starting point is 00:17:42 We've added a little bit of, like, fancy stuff on top, but that core functionality of being able to delegate access, of kind of, like, making it easier to work with private data in a decentralized app, was the gap that we saw. Back in January, we interviewed Steve Kokinos and Silvio Micali of Algorand, and during our conversation, we talked about how Algorand's unique design makes it easy for developers to build sophisticated applications on their platform. So what's great about Algorand, beyond the fact that it's fast, it's secure, it scales,
Starting point is 00:18:11 and it has instant finality, is the fact that they've designed a layer-one protocol with primitives that are purpose-built for DeFi. So what that means is that they've taken some of the most common things that people do with smart contracts, and they've embedded them right in the system, right in the layer one. So things like issuing tokens, atomic transfers, well, these are built into the layer one, and smart contracts are first-class citizens on Algorand. So with these essential building blocks at your disposal, you can build fast and secure DeFi apps in no time.
Starting point is 00:18:41 To learn more about what Algorand brings to the table and how to get started, I would encourage you to check out algorand.com slash epicenter. That lets them know that you heard about it from us, and it takes you where you need to go to learn about their tech. And with that, we'd like to thank Algorand for supporting the podcast. What are some use cases where proxy re-encryption would be desirable over the system where I just download the data myself, decrypt it, and re-encrypt it for someone else? Like, is it cheaper computationally, or where would I rather use proxy re-encryption?
Starting point is 00:19:21 We have a blog post that goes into, like, a couple specific details, but I think a big one is that it doesn't require you, let's say you're Alice, you're the person who encrypts and uploads the data. You can create these policies either at the time you upload the data or any time in the future. If you didn't have proxy re-encryption and you decided you wanted to share your health records with your doctor, you'd have to come back online, you'd have to download that data, you'd have to decrypt it, encrypt it, and send it to them. Whereas with a NuCypher policy, you can just specify this policy, say, share all of my data that gets created from my heart monitor, for example, with my doctor.
Starting point is 00:19:57 And then you can issue that policy to the network, and you don't have to come back online to download and re-share that data. So it's a little bit better of a user experience. It doesn't explicitly require Alice to be online. And it's also nice if, like, you're sharing the data across multiple recipients; it's just a lot cleaner to issue these policies as opposed to downloading the data and then, you know, encrypting it five times to share with five different people.
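The delegation being described can be made concrete with a toy proxy re-encryption sketch in the style of the classic BBS98 ElGamal-based scheme. To be clear, this is not NuCypher's actual Umbral scheme: Umbral derives the re-encryption key from Alice's secret key and Bob's public key, while the BBS98-style key below uses both secret keys, a known limitation of that older scheme. The parameters are demo-sized, not secure:

```python
# Toy, insecure sketch of proxy re-encryption (BBS98-style ElGamal).
# The proxy transforms a ciphertext from Alice's key to Bob's key
# without ever seeing the plaintext or either secret key.
import secrets

p, q = 1019, 509          # safe prime p = 2q + 1; subgroup of prime order q
g = 4                     # generator of the order-q subgroup mod p

# Key pairs: secret exponent in [1, q-1], public key g^sk mod p.
sk_alice, sk_bob = 123, 77
pk_alice = pow(g, sk_alice, p)
pk_bob = pow(g, sk_bob, p)

def encrypt(pk, m):
    # ElGamal-style: c1 hides the message, c2 binds it to pk's owner.
    k = secrets.randbelow(q - 1) + 1
    return (m * pow(g, k, p) % p, pow(pk, k, p))

def rekey(sk_from, sk_to):
    # Re-encryption key rk = sk_to / sk_from mod q. Knowing rk alone
    # reveals neither secret key; its only use is the transformation below.
    return sk_to * pow(sk_from, -1, q) % q

def reencrypt(rk, ct):
    c1, c2 = ct
    return (c1, pow(c2, rk, p))   # the proxy never sees the plaintext

def decrypt(sk, ct):
    c1, c2 = ct
    shared = pow(c2, pow(sk, -1, q), p)   # recovers g^k
    return c1 * pow(shared, -1, p) % p

ct_alice = encrypt(pk_alice, 42)          # encrypted under Alice's key
rk = rekey(sk_alice, sk_bob)              # Alice authorizes Bob
ct_bob = reencrypt(rk, ct_alice)          # the proxy (Ursula) transforms it
print(decrypt(sk_bob, ct_bob))            # -> 42
```

Note how the re-encryption key is tied to one recipient: `rk` only turns ciphertexts under Alice's key into ciphertexts under Bob's key, which matches the revocation model discussed in the episode (revoking Bob means deleting `rk`).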
Starting point is 00:20:37 Let's maybe pause on proxy re-encryption a little bit here and maybe dissect what that means and, like, who the participants are. So for example, let's use this medical records example. I have a medical record that I got from, say, one healthcare provider, and I want to pass it on to another healthcare provider. That data is encrypted. It's on some sort of cloud server. It could be, like, a centralized or decentralized cloud server. If I want to be able to pass that data on to a third party, I would have to decrypt that data and re-encrypt it with their keys so that they can decrypt it. And that requires, like you said, me to be online. And it's, like, an action that I have to take. Whereas with proxy re-encryption, you essentially delegate re-encryption authority, in this case to NuCypher, to a third party. And NuCypher, or the sort of delegate, re-encrypts the data in such a way that the third participant can decrypt it. Is that accurate? That's pretty close. So I think it's actually helpful just to think
Starting point is 00:21:24 of it in, like, the traditional, like, Alice-and-Bob kind of cryptography narrative. So you have Alice, who, I guess in this example, would be the patient. And she has some health data that is her personal private data. She's the one who encrypts that data and uploads it somewhere. That could be a decentralized storage layer. It could be S3. It could be wherever. It doesn't make a difference. At that point, she's the only one who can access the data. She's the only one who has the key. So along comes Bob, maybe her doctor, who she wants to share data with. Traditionally, she can basically either share her key with Bob, or she can share her key with the storage layer so the storage layer can decrypt it and share it with Bob.
Starting point is 00:22:02 Or she can download that data, decrypt it, encrypt it with Bob's key, and then send it directly to him. What proxy re-encryption does is basically introduce this third character into the Alice and Bob narrative. And at NuCypher, we call this character Ursula. And Ursula is basically this remote proxy who can re-encrypt the data, such that it goes from being encrypted under Alice's key to being encrypted under Bob's key. So you have Alice, the data owner, you have Bob, the data recipient, and you have Ursula, the proxy. And Ursula is untrusted, or is there, like, some trust required in Ursula? Ursula is trusted to be, like, available and online, but she is not trusted with any private key material or the plaintext. So she
Starting point is 00:22:47 has no ability to access the data. What Alice will do is she says, okay, I want to share my data with Bob. She will create something called a re-encryption key on her client. And this re-encryption key has basically two inputs. One is Alice's private key, and the other is Bob's public key. And the creation of a re-encryption key is one-way, so you can't take a re-encryption key and, like, pull out the private key afterwards. And the only thing this re-encryption key can do is this transformation, the re-encryption that we've been talking about. And so she will send that re-encryption key to Ursula, and Ursula will then be able to use that re-encryption key to transform and re-encrypt the data for Bob. And so the re-encryption key is tied to a particular
Starting point is 00:23:26 recipient, so Ursula can't re-encrypt that data for Charlie or Dave or Eve. She can only re-encrypt it for Bob. There's still an action on behalf of Alice for every new recipient. But since Ursula is always online, Ursula, or some kind of, like, server-side thing, or in this case the blockchain, can perhaps sort of notify Alice that Bob is trying to access this data, and say, here, can you create a re-encryption key, so that the process is kind of seamless? Is that kind of how to think about it? If Bob tried to access data that Ursula did not have a re-encryption key for, it just wouldn't work. So Ursula wouldn't necessarily notify Alice.
Starting point is 00:24:05 There would be some sort of side-channel way that, you know, Alice and Bob might communicate. Bob will tell Alice, hey, I want access to the data. And Alice will say, okay, I'll create this re-encryption key for you. And she can either do that at the time that she uploads the data, or she can do that any time later in the future. Like, if she switches doctors, she can tell Ursula, hey, please delete or revoke this re-encryption key for Bob and, you know, accept this new re-encryption key for Charlie, for example. So basically the NuCypher network is a decentralized service that provides this proxy re-encryption functionality. It's decentralized Ursulas, is what you're saying.
Starting point is 00:24:38 Correct, yeah. The NuCypher network is actually more than that. It's sort of more of a generic threshold cryptography network, and proxy re-encryption is kind of the first primitive on top of that. But theoretically, you know, people could deploy, you know, secret sharing or threshold signatures or really any kind of threshold cryptography onto the network through these Ursulas. It might be useful also, just because it's been a while since we've talked about it, it would be helpful for me to get a refresher. So we've talked about threshold signatures in the past. One episode comes to mind, when we talked with the folks at ZenGo; they have this threshold signature
Starting point is 00:25:08 Bitcoin wallet. Can you just kind of refresh our memories on what, like, threshold signatures are exactly, and, like, the dynamics there? Threshold signatures are basically just, if Alice can sign something, splitting up the ability to sign something across multiple parties. So instead of Alice being able to sign something unilaterally, it requires, say, M of N, you know, three of five of the sort of shareholders, to sign something in order
Starting point is 00:25:36 to create a valid signature. Why did you call Ursula Ursula? Is there some, like, cryptographer inside joke there? I think it was just meant to sort of hint that Ursula is untrusted. So untrusted Ursula. Depending on your definition of untrusted, that's correct. I would say it's more like trust-minimized. So I don't like the word trustless or untrusted, but that was kind of the original impetus. So with proxy re-encryption at least, Ursula has no ability to get access to the private key or the plaintext, but you still are reliant on Ursula, or in NuCypher's case, a quorum of Ursulas, to be online and available. So if the whole network
Starting point is 00:26:13 goes down, obviously, there's no one available to perform the proxy re-encryption service. So it's not trustless in that sense. You still are sort of reliant on them being online. And we try to build in certain economic guarantees through, you know, this staking process that Ursulas have to go through in order to encourage them to be online. But that's basically, yeah, the idea is that they are trust-minimized. How does this conditional proxy re-encryption work, though? Do I basically have to pre-give a re-encryption key to the Ursula, and then Ursula maintains it and, you know, decides when to do the re-encryption based on the policies, or do I only share the re-encryption key once the policy is met?
Starting point is 00:26:54 So it's the former. So we've been talking so far in the context of, like, there's one re-encryption key and one Ursula. The NuCypher network is actually a little bit more sophisticated, because we split that re-encryption key up into a bunch of shares, you know, similar to the idea of threshold signatures. So instead of having one Ursula with one re-encryption key, Alice will chop the key up into, you know, three or five or ten shares, this is configurable on her side, and then she will send those ten shares out to ten different nodes, or ten different Ursulas, in the network. And there'll be some quorum of them that have to all sort of come together to do the
Starting point is 00:27:28 re-encryption successfully. And a policy is basically, Alice will just attach some conditions that she wants the Ursulas to enforce. Unfortunately, this is not enforced at the cryptography layer; this is enforced basically by the Ursulas saying, I will enforce this condition. So it could be a time condition, where Alice only wants to share data after some certain date and time, or maybe later on Alice wants to revoke access. And as long as six of ten of those Ursulas obey, like act appropriately and delete the re-encryption key, the revocation will be successful. So the policy piece, or the conditional piece, is more trusted. It's at the blockchain layer, like at a smart contract layer, more so than at the cryptography layer.
Starting point is 00:28:10 Correct, yeah. And there are other sorts of punishments. So, you know, one of the challenges I know with a lot of threshold cryptography is that a lot of schemes are possible, but when you want attributability, that's when a lot of the stuff becomes more challenging. So how attributable are Byzantine faults, or faults in general, in this sort of scheme? So the proxy re-encryption piece is attributable. So basically what happens is, every time an Ursula does a re-encryption, she has to sign that re-encryption as well. So you will know which Ursula node did the re-encryption. And if an Ursula node provides a false re-encryption, like she just returns garbage data, for example, there'll be a proof attached that they can take and submit to the blockchain, and the Ursula's stake would get slashed. There's really two modes of misbehavior for an Ursula. One is incorrectness, just returning basically garbage data, and the other is just not replying at all, not responding at all. What about doing a re-encryption
Starting point is 00:29:12 when they weren't supposed to? Yeah, so that one's a little bit trickier. Currently, the network would not slash for that, because the policy is a condition that the Ursula herself knows; there's no way for the blockchain, at least currently, to be aware of these conditions. It would require a quorum of the Ursulas to all do that. Is there any way to, like, add this in in the future? Where, like, you know, let's say I go and bribe some Ursulas to re-encrypt for me, but then I actually turn around and submit that to the blockchain, where I say, hey, here's proof that these Ursulas re-encrypted data for me when the original owner did not give them the right, to slash them and give me part of the reward. So it kind of
Starting point is 00:29:53 like gives an incentive for people to try to trick the Ursulas, so that way they're more on their guard. Kind of the nice thing about proxy re-encryption specifically, and why we chose it as the first sort of primitive to deploy into the network, is that it's a little bit different from, let's say, Shamir's secret sharing or threshold signatures, in that even if you have a sufficient number of Ursulas colluding, they still can't really do anything on their own. So let's say you have a five-of-ten policy, and you get five malicious Ursulas that all agree to sort of re-encrypt outside of the conditions that were specified in the policy.
Starting point is 00:30:25 They can't really do anything without also colluding with Bob. So it requires Bob to collude with a quorum of Ursulas in order to get early access to the data, which isn't good, but it's better than the Ursulas just being able to do that on their own, which is one of the issues that you do have, you know, if you're doing threshold cryptography for a signature, or just Shamir secret sharing, where basically a quorum of Ursulas can reassemble the private key, essentially.
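The M-of-N splitting and quorum reconstruction discussed here can be sketched with textbook Shamir secret sharing over a small prime field. This is a toy illustration, not NuCypher's actual Umbral construction; it just shows why any M of N shares recover a secret, and hence why plain secret sharing, unlike proxy re-encryption, lets a colluding quorum reassemble the key on their own.

```python
# Toy Shamir secret sharing over a prime field (illustration only).
import random

P = 2**127 - 1  # a Mersenne prime large enough for the demo

def split(secret, m, n):
    """Split `secret` into n shares; any m of them reconstruct it."""
    # Random polynomial of degree m-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(m - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

secret = 123456789
shares = split(secret, m=3, n=5)
assert reconstruct(shares[:3]) == secret   # any 3 of 5 suffice
assert reconstruct(shares[1:4]) == secret  # a different quorum also works
```

With fewer than M shares, interpolation yields an essentially random value, which is exactly the property that makes a full quorum so powerful, and so dangerous if it colludes.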
Starting point is 00:30:53 So there are not, at least right now, good ways that we're aware of to prevent this. And it's one of the reasons why, when we talk about deploying other threshold cryptography primitives to the network, like signatures or things like that, unfortunately you have to rely a lot more, I think, on the sort of economic piece, which is obviously much less robust, and definitely scary, so we'd have to think through that quite a bit. And certainly, you know, to the extent that we or others deploy these other primitives onto the network, it would be something that we'd, you know, want a much more robust answer to first. I was talking to one of my friends, Dave Oja, and he was saying that, like, you know, there are actually not many libraries for even, like, normal proxy re-encryption, and sort of your guys' threshold proxy re-encryption is sort of one of the first working systems that does
Starting point is 00:31:43 proxy re-encryption. So is the reason for the threshold only to enable things like the policies and stuff like that, or are there actually reasons why, like, cryptographically, threshold proxy re-encryption is actually easier to accomplish than, you know, having one server provide proxy re-encryption? I don't know. I think threshold proxy re-encryption is not easier. I mean, with our library, which is pyUmbral, you can do just a one-of-one threshold. So technically you could do a single proxy. But I think the reason why we want the threshold is a couple of things. One, it makes the availability guarantees a lot higher. So I think if you have one proxy and that proxy goes offline for whatever reason, either maliciously or just non-maliciously, like, you're out of luck and you can't do anything. Whereas, you know, if you have a quorum of Ursulas, that's hopefully much less likely. I think historically there
Starting point is 00:32:31 have not been any sort of production-ready proxy re-encryption libraries before our library. Perhaps a lot of that is just, you know, I think where proxy re-encryption shines a lot more is in this sort of decentralized setting where you have many different Ursulas, as opposed to just one. Is there a way to, like, account for Ursulas changing? So let's say I, you know, initially had these 10 Ursulas assigned to my key, my data and my policies. But let's say a couple of them drop out.
Starting point is 00:33:04 Is it possible to add more Ursulas into this system? Can I use this on some, like, really long time frame? So, like, one of the examples I think would be really useful for this is some sort of dead man's switch for my data, where, you know, I have a repository that has all my passwords and my crypto keys and everything, and I want them to be transferred only upon, like, you know, some dead man's switch kind of mechanism. But, you know, who knows, that might be 30, 40 years from now. So is there a way to do this on that kind of time horizon? Yeah, I think that's a really interesting question.
Starting point is 00:33:38 So currently, the way that the network policy works is, as Alice, you basically specify some lifetime for the policy. And when you issue the policy, the network will only send shares of the policy to nodes that have staked for either the lifetime of your policy or longer. So let's say you specify a policy lifetime of, like, six months; that policy will only get issued to nodes in the network that have a remaining stake duration of six months or more. It won't send it to, you know, a node whose stake is going to expire in three months. But there's not a good way beyond that. Basically, once an Ursula has staked for a year, that kind of maxes out the sort of incentives for her. So she won't earn more inflation in that case beyond a stake duration of a year.
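The node-selection rule just described, where a policy's shares only go to Ursulas staked for at least the policy's lifetime, might be sketched like this. The `Node` structure and function names here are illustrative, not NuCypher's actual contract interface.

```python
# Sketch of the eligibility filter: shares of a policy are only issued to
# nodes whose remaining stake duration covers the policy lifetime.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    remaining_stake_months: int

def eligible_nodes(nodes, policy_months):
    """Only nodes staked for the policy's lifetime (or longer) qualify."""
    return [n for n in nodes if n.remaining_stake_months >= policy_months]

nodes = [
    Node("ursula-1", remaining_stake_months=12),
    Node("ursula-2", remaining_stake_months=3),   # expires too soon
    Node("ursula-3", remaining_stake_months=6),
]

chosen = eligible_nodes(nodes, policy_months=6)
assert [n.name for n in chosen] == ["ursula-1", "ursula-3"]
```

Since stakes max out at a year, a policy longer than the longest remaining stake simply finds no eligible nodes, which is why ultra-long-lived policies currently have to be reissued.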
Starting point is 00:34:26 So right now, there's not a great way to do these ultra-long-lived policies. So basically what you would do is, like, after the end of one year, you would just reissue the policy for another year. Although we are doing some work on trying to make the network stateless, so that you wouldn't have to specify a specific set of Ursulas for a policy, but really any Ursula in the network would be able to enforce the policy. So Tux on our team has been doing a lot of work on how we can make the network a stateless threshold cryptography network, which would have a lot of benefits, including sort of this, and it would open up all sorts of use cases. So even if a bunch of the Ursulas in your quorum go away forever, as long as the network's there, other Ursulas
Starting point is 00:35:11 would be able to step in. That's not there right now. That won't be there at the genesis of the network in October, but I think it's a really interesting medium- to longer-term piece that I do think is possible and that we're working on. Could you give just, like, a brief sketch of how that might work? Would that mean, like, every time an Ursula drops off, they would pass on their part of the key to a new Ursula, or what would that look like? I actually have not dug in yet to sort of the mechanics of how it works. Got it. So I can't really comment specifically on that. I just know that he's been working
Starting point is 00:35:47 quite a bit on that and apparently making pretty good progress. But I think probably in the next couple months we'll be in a position to publish some of the work that he's been doing on how that might work, at least at a sort of mathematics and cryptography level. And so on your website, you mentioned that, like, you know, you guys are working on this proxy re-encryption stuff. And the other thing you're working on is fully homomorphic encryption. So first off, like, is proxy re-encryption like a subset of FHE, or are these, like, two completely different technologies? I'd say they're pretty distinct. So with proxy re-encryption, you can maybe argue that it's like a subset of homomorphic
Starting point is 00:36:28 encryption, because it's operating on this encrypted data and it's doing this transformation. I don't know how useful it is to think of it in that way, but you could make sort of the argument that it is. And technically, you could actually implement this sort of re-encryption functionality within fully homomorphic encryption. It would be much less efficient, and I don't really see why you would do that, but, like, technically you could, since fully homomorphic encryption kind of implies this ability to do arbitrary computations on encrypted data. But yeah, so at NuCypher, we have done some work on FHE. Currently, I would say that's sort of distinct from the network.
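As a small taste of "computing on encrypted data," here is textbook Paillier encryption. Paillier is only additively homomorphic, so it is a much weaker cousin of the TFHE-style schemes discussed in this episode, but it shows the core idea: whoever holds the ciphertexts can combine them without ever seeing the plaintexts. Toy parameters, for illustration only.

```python
# Textbook Paillier: multiplying two ciphertexts adds the plaintexts.
# Tiny primes for demonstration; not secure, not for real use.
import math, random

p, q = 1000003, 1000033
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)            # Carmichael's lambda(n)
g = n + 1

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)     # precomputed decryption factor

def encrypt(m):
    r = random.randrange(2, n)          # assume gcd(r, n) == 1 (overwhelmingly likely here)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    return L(pow(c, lam, n2)) * mu % n

c1, c2 = encrypt(12), encrypt(30)
c_sum = c1 * c2 % n2                    # homomorphic addition on ciphertexts
assert decrypt(c_sum) == 42
```

Fully homomorphic schemes like TFHE extend this idea from one operation to arbitrary circuits, which is what makes them so much more powerful, and so much slower.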
Starting point is 00:37:06 So we think of, at least right now, the NuCypher network as this generic threshold cryptography network where we can deploy proxy re-encryption, Shamir secret sharing, threshold signatures, etc. And then the FHE piece is sort of separate. So right now we have this library called NuFHE, which is a GPU-accelerated library that uses an FHE scheme called TFHE. But currently the NuCypher network does not use NuFHE, and there aren't any sort of immediate plans for it to do so.
Starting point is 00:37:38 We are sort of in parallel doing a lot of longer-term research around whether FHE might have some applications for private smart contracts, but I think that's definitely a lot further out. So what are the kind of use cases that you would foresee here for, you know, FHE on NuCypher? Like, what's the primary application for a decentralized network that can do these kinds of encryption schemes? So, like, NuFHE is about 100x faster than just vanilla TFHE on a CPU, but it's still, like, on an absolute level, quite slow.
Starting point is 00:38:18 So when you think about what use cases it could be used for, smart contracts are kind of interesting, because they don't, in general, need to be fast anyway. I think some of the use cases that we've sort of been lightly exploring are, like, voting, or potentially, if there are use cases in DeFi, around preventing things like front-running, or just sort of generic private smart contracts where you can hide the inputs to the contract, or even using FHE to do private transfers on a blockchain. For that last use case, though, it's probably not the most efficient way. I think there are already better ways in production to do that now.
Starting point is 00:38:56 But that's approximately the set of areas that we're looking at for that. And, like, this is more of a general question, but FHE is one of these things that gets thrown around in the crypto space once in a while. How mature is that technology, and, you know, how is it used today in production environments, perhaps? Like, what's the, you know, state of FHE? Yeah. So I wouldn't say it's really ready for production. So our library, for example, is very much experimental, you know, it's not been audited. I wouldn't recommend using it in production.
Starting point is 00:39:30 And you can certainly pull it off GitHub and you can experiment with it. But I think it is not something that I would be comfortable using for an important application currently. In general, outside of blockchain, you know, outside of what we're doing at NuCypher, I think in academia and research, one of the areas that people have been spending a ton of time on with FHE is machine learning. So how you can train these machine learning models in a privacy-preserving way. The main library for that is, I think, cuFHE. So there are a couple different types of FHE libraries that are appropriate for sort of different use cases.
Starting point is 00:40:12 NuFHE does exact arithmetic operations, which is good if you need a precise result, but some of the libraries and schemes that are used for machine learning have more error. So they're not precise, but that's okay if you're just training a machine learning model and you don't need precise results. So they're basically a better fit for that type of use case, because they relax the error and precision requirements. But in general, I think outside of the blockchain space, this machine learning area is where a lot of the current work has been happening.
Starting point is 00:40:42 And I think that's still kind of at a POC, proof-of-concept, level, more so than, you know, there being people doing this in production right now. And why did you choose to focus on this? And what do you see as the promise of FHE in the context of NuCypher? Yeah, so the reason why we originally got interested in FHE, specifically for smart contracts, is because I think there had been several projects that have been
Starting point is 00:41:13 working on private smart contracts and trying to solve that using secure multi-party computation. And I think long-term, the issue with that is secure multi-party computation just requires a huge amount of network overhead. Whereas with FHE, you know, the schemes are getting better. You can accelerate them a lot at the hardware level. So I think on a longer horizon, it's possible that you'll be able to do things with FHE that just aren't practical with SMPC. You know, whether private smart contracts in general are that useful in a blockchain setting, I think, is still a little bit to be determined. But I think there were a lot of people working on SMPC and no one was
Starting point is 00:41:54 working on FHE, because the impression was that it just wasn't practical at the moment. But I think if you look at how the schemes work, the scope for improvement for FHE is potentially a lot bigger than what you'll be able to squeeze out of MPC, just because at the end of the day
Starting point is 00:42:12 MPC is kind of network-bound and requires a lot of chattiness and communication between the different nodes that are performing the computation. You can get more privacy out of FHE, too, right? Because from what I understand, with MPC, you can sort of hide the data that goes into the MPC, but the actual program is still public. But with FHE, you can actually make it so that the program itself, what the contract does itself, is private as well. Yeah.
Starting point is 00:42:46 So one of the interesting things about FHE is that the input is encrypted. So whoever's operating on the input doesn't see it. They don't see the output either. But at the same time, the person who's doing the computation can be using their own proprietary model, for example. So I think in the context of machine learning, this is one of the reasons why people are interested in it. Because let's say you're some startup who has, like, the best genetic analysis program, and it's proprietary, and you don't want to share that program or that model with others,
Starting point is 00:43:24 you can basically keep that on your own machine. People can upload their encrypted genome. You never see their genome. They never see your model. And you can give them back results that you don't see, but that they can decrypt. So basically keeping the model private and proprietary, with the data protected as well, is, I think, one of the benefits of FHE. So I wanted to ask you,
Starting point is 00:43:43 What is your take on, like, sort of trusted hardware, and, like, SGX and things like this? Where, like, you know, there's two approaches: you can either use all this very cool cryptography you guys have built, or, you know, SGX comes along and says, hey, here's this magic black-box piece of hardware where we can do everything you're saying, but, like, you know, no math, no fancy cryptography needed. Yep. Just your general thoughts on trusted hardware and where do you think it's going to go from here? Yeah, I mean, I think the short answer is that it doesn't really work, unfortunately. And the reason why
Starting point is 00:44:22 people like it is because, you know, as you said, like, it's fast. You can do stuff on it today. But I think the tradeoff is that, unfortunately, just like, you know, we've seen with SGX in particular, which is just the most popular one, I think, like, every other month it seems like there's some new attack that, you know, someone's come up with. I think maybe where TEEs have a place is as just one part of a broader, you know, security posture. But if your whole premise is that this is secure and private because I'm using a TEE, I think that's just not going to work out well. I think some folks at Berkeley are working on, you know, an open-source TEE called, I think it's Keystone or something like that, which is interesting. At least it's not
Starting point is 00:45:14 proprietary and it's open source. I don't know what the timing is for that. But I think ultimately, at the end of the day, it's very difficult to design a piece of hardware that's going to actually work, or at least meet all the promises and expectations that have been made around trusted execution environments.
Starting point is 00:45:35 What's the unique challenge there to building the hardware, as opposed to building things in software? Well, I think the nice thing about cryptography is you can create security proofs. So it just comes down to math, basically. And so you have these mathematical guarantees around whether or not a scheme is secure. Obviously, you have to assume that the implementation is done correctly as well. But at the end of the day, the root of security for a piece of cryptography is just math. Whereas with hardware, you know, it's not. Like, currently, if
Starting point is 00:46:05 you're using SGX, basically, you have to assume that the reason why it's private and confidential is because essentially Intel says it is. And so you just don't have this same root in mathematics as the origination of the security guarantees. Let's bring it back to NuCypher and talk a little bit about the infrastructure. So help us understand, you know, if we're talking about the blockchain itself, like, who are the participants, and what is the sort of incentive model here?
Starting point is 00:46:39 Is it a blockchain or a smart contract, first off? Yeah, so the NuCypher network is not its own blockchain. It sits on top of, currently, the Ethereum network, and basically it's composed of a network of these Ursulas that are providing proxy re-encryption or some generic threshold cryptography. And if you want to operate one of these Ursulas and run a node on the NuCypher network, you have to stake the native NuCypher token, which is NU. So this is what, I think, commonly people would call a work token. So staking the token allows you to join the network, run a node, receive work, and, in exchange for that work, get paid by users. So Alice will pay a fee in
Starting point is 00:47:23 ETH. And nodes also receive an inflation subsidy in NU. So there's these two components of incentivization to run the node. And then there's also the slashing protocol, where currently the network slashes very little, but the DAO basically can slash for going offline for extended periods of time or for providing basically false output. So the network itself is just basically a bunch of these Ursulas that are staking the token and providing this work. So it's a smart contract, and there's a network of nodes that are staking the token on the contract? Yep, so there's an escrow contract where nodes will stake their NU into this contract, and that will allow them to basically start getting selected for work orders or jobs or policies, and it will also allow them to basically receive some of this inflation subsidy. Okay, and how does the slashing work then? Like, how does one tell the contract that, say, the proofs or whatever the Ursulas are providing the user are valid?
Starting point is 00:48:34 Yeah. So generally, if Ursula is doing everything correctly, these proofs won't get submitted, because Bob will just receive his data and he'll be happy. But if Bob receives a garbage response, or just, like, a false re-encryption, he can act on it. As I think we said earlier, every time Ursula does a re-encryption, she will sign a proof of that re-encryption, which, along with the re-encrypted data, gets attached and sent to Bob. And so if Bob can't decrypt that re-encrypted data for some reason, because it's just garbage, he can take that proof that's signed by Ursula, and he can submit it to the blockchain. In this case, we call it the adjudicator contract, and the adjudicator contract would then go ahead and slash that Ursula's stake.
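The attributable-fault flow MacLane lays out, where each re-encryption response carries Ursula's signature so that a bad response becomes a self-contained slashable proof, can be sketched as a toy. HMAC stands in for a real public-key signature here (in a real system the adjudicator would verify against Ursula's public key rather than sharing her secret), and all names are illustrative, not NuCypher's actual contracts.

```python
# Toy sketch of attributable slashing: sign every response, slash on
# provably-wrong signed responses. HMAC is a stand-in for a signature.
import hmac, hashlib

class Ursula:
    def __init__(self, key: bytes, stake: int):
        self.key, self.stake = key, stake

    def respond(self, request: bytes, payload: bytes):
        # Ursula signs (request, payload) so the response is attributable.
        sig = hmac.new(self.key, request + payload, hashlib.sha256).digest()
        return payload, sig

def adjudicate(ursula: Ursula, request: bytes, payload: bytes,
               sig: bytes, expected: bytes) -> None:
    # The signature binds Ursula to this exact response...
    good_sig = hmac.new(ursula.key, request + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(good_sig, sig):
        return  # not her response; nothing to slash
    # ...so a signed-but-wrong response costs her stake.
    if payload != expected:
        ursula.stake //= 2

ursula = Ursula(key=b"ursula-secret", stake=1000)
payload, sig = ursula.respond(b"req-1", b"garbage instead of a cfrag")
adjudicate(ursula, b"req-1", payload, sig, expected=b"correct cfrag")
assert ursula.stake == 500
```

The key property is that Bob never needs a trusted third party: the signed proof alone convinces the contract which Ursula misbehaved.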
Starting point is 00:49:19 And so Ursula would be punished for that. How does the adjudicator contract know that the data is garbage? So the proof that's attached is like a proof of re-encryption, and it's signed by that particular Ursula. It's essentially a zero-knowledge proof that's been signed by Ursula, and it's easy for anyone to validate. Okay. So it's something that can be validated independently, without some kind of adjudication process where other parties are verifying. Correct.
Starting point is 00:49:49 Yeah. So adjudicator maybe is a little bit of a misnomer, because it's not, yeah, it's just all automatic. Okay. So this all happens automatically. How expensive is, like, submitting one of these proofs, especially with, like, current gas prices and stuff on Ethereum? That's a good question. I'm not immediately sure on the cost of submitting a proof for slashing. I will say that, in general, the current gas situation on Ethereum has made it very difficult to run a small NuCypher node. So each Ursula node, in order to receive its inflation subsidy, is required to basically do this sort of availability check-in daily, or every period. And they do this through a check-in transaction, which is about 200k gas. So previously that was, like, fine.
Starting point is 00:50:39 But now all of a sudden it's like five, ten, maybe more dollars a day. Unfortunately, particularly if you're running a node with a minimum stake, that can start to get very expensive. So yeah, in general,
Starting point is 00:50:52 this sort of gas situation on Ethereum has been impacting particularly smaller node operators. Is there any particular reason that NuCypher benefits from being on Ethereum? Like, is there some sort of composability benefit that comes from having this coordination system on Ethereum, or would it be possible to have the coordination system on its own independent blockchain, but still be able to provide this threshold proxy
Starting point is 00:51:23 re-encryption work for dapps? Yeah, so in principle, the network would not have to be on top of Ethereum. I think, like, in practice, the reason why it is on top of Ethereum is because when we started building it, Ethereum was the only option. And the reason why it's probably still on Ethereum right now is because that's where most of the potential users of dapps are, at least right now. But it would certainly be possible to take the network state and move it to another smart contract platform, whether it's Cosmos or Polkadot or wherever.
Starting point is 00:52:00 I think the benefit right now of it being on Ethereum is that nodes are paid for policies in ETH, and it's just easier for people right now to pay in ETH than in other layer-one currencies. There's just more people that hold it. But there's nothing intrinsic to the network that requires it to be on top of Ethereum. And I think for NuCypher in particular, this sort of composability aspect, which is always super important for, like, DeFi and things like that, you know, probably isn't as compelling. How big is the network today, and how many
Starting point is 00:52:22 Ursulas are on the network? So the network, or the mainnet, will be launching in October, so next month. Oh, that's right. Yeah. So after this WorkLock, yeah. So right now it's just a testnet. We had an incentivized testnet back in, like, February
Starting point is 00:52:50 and March of this year. And I think at peak we had something like a thousand-ish nodes that were running. So that was nice for stress-testing things, and you see whether the network holds up or falls over at that scale. Ultimately, I think about 550 node operators from the incentivized testnet basically performed well enough that they earned a reward. So actually quite a lot of nodes. I think it will be interesting to see
Starting point is 00:53:19 how many of those convert into mainnet nodes. Particularly, I think the gas situation right now makes it challenging for smaller node operators. So there might be some pressure towards people pooling into a couple larger nodes, or, you know, it may make some of the smaller nodes just not financially viable until either we come up with some, you know, solution to the gas issue or gas costs just come down. So that will be something that I think we'll be watching pretty closely as we transition to mainnet. But during the testnets, we had, you know,
Starting point is 00:53:51 quite a few nodes up. And what are some of the things that the testnet revealed, or that you've learned through that incentivized testnet? Yeah, so we went through actually a lot of iterations of testnet before we even got to the incentivized testnet. So, like, the first version of the network was literally, you know, we spun up one node internally, and we had, like, one NuCypher company Ursula, and then we opened it up to a few close partners. We had a federated network where there was no token, but there was basically like a
Starting point is 00:54:22 whitelist of people that would be running nodes. And then we introduced, you know, the token and the staking aspect, ran that for a couple of months, and then we basically turned on a testnet faucet where anyone could get the testnet token and they could stake it and run a node. And then we got to the incentivized testnet. So by the time we got to the incentivized testnet, the network was actually pretty mature. What it hadn't had was obviously, you know, the 1,000 nodes running at one time. So it was definitely interesting to see, you know, what happened in that scenario.
Starting point is 00:54:54 And then we did a ton of polishing around, you know, just rough edges and corners, you know, improving the staking UI and the experience around that. What was also interesting was that we did several testnet iterations of the WorkLock, which is basically this token distribution mechanism that we came up with. And that was kind of interesting, because participants came up with some theoretical edge cases and attacks on the WorkLock that influenced our ultimate end design for the mainnet WorkLock, which is happening now. So there were some scenarios where, like, people could do weird things around, like, putting in huge bids during the contribution period, but then, like, canceling some stuff at the last minute, so that they wound up getting, like, a disproportionate share of the WorkLock tokens.
Starting point is 00:55:42 And some weird sorts of attacks around that that we were able to address in the final design, which will hopefully, you know, obviously to be determined, but will hopefully end up in a better distribution of these WorkLock tokens than we otherwise would have seen. So what do you do in the WorkLock? Like, what is the sort of work? So I know it's somewhat similar to a Lockdrop, where, you know, I lock up some ETH for a little bit of time. But where does the work aspect come into it? Yep. So it's similar to the Lockdrop in that you're locking up or escrowing this
Starting point is 00:56:15 ETH. And then it's different in that it has this work requirement. And the work requirement in this case is, you know, running a NuCypher node for at least six months on mainnet. So basically we use the inflation subsidy that a node earns as a kind of proxy for them
Starting point is 00:56:32 having done work. So if you run a NuCypher node for six months and you produced the X amount of inflation you would be expected to earn during that period, we will consider you to have been available as a node and to have done the work. So the WorkLock contract, basically, during the escrow period, which is open right now,
Starting point is 00:56:51 allows anyone to escrow ETH into that contract. And then at the end of the WorkLock period, and once the network launches, everyone will be able to claim some pro-rata amount of staked NU tokens if they escrowed into the WorkLock contract. And those staked NU tokens are obviously locked for at least six months. But it allows them to run a NuCypher node. And if they run a NuCypher node correctly, eventually they'll be able to recover all of their escrowed ETH. And then the NU tokens as well will unlock and become transferable. And what happens is the WorkLock contract can basically just watch and see, is this
Starting point is 00:57:31 node earning inflation? If it is, then we'll start to unlock some of their escrowed ETH. If it's not, everything will just stay locked until they do the work. Okay. So you really have to do the work. Otherwise, your ETH will get locked up forever. Yep. Yeah.
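The escrow-and-unlock flow just described can be sketched roughly in Python. The class, its field names, and the strictly proportional unlock rule are illustrative assumptions for this sketch, not the actual NuCypher contract logic, which the episode doesn't spell out.

```python
# Illustrative sketch of the WorkLock escrow-and-unlock flow described above.
# The class, field names, and proportional unlock rule are assumptions,
# not the actual NuCypher contract logic.
from dataclasses import dataclass

@dataclass
class WorkLockPosition:
    escrowed_eth: float            # ETH locked during the contribution period
    expected_inflation: float      # NU the node would be expected to earn over six months
    earned_inflation: float = 0.0  # NU actually earned so far

    def record_inflation(self, nu_earned: float) -> None:
        """Inflation earned by the node is the proxy for work having been done."""
        self.earned_inflation += nu_earned

    def unlocked_eth(self) -> float:
        """ETH unlocks in proportion to the share of expected inflation earned."""
        if self.expected_inflation <= 0:
            return 0.0
        share = min(self.earned_inflation / self.expected_inflation, 1.0)
        return self.escrowed_eth * share

pos = WorkLockPosition(escrowed_eth=5.0, expected_inflation=1000.0)
pos.record_inflation(250.0)  # node has been online for roughly a quarter of the period
print(pos.unlocked_eth())    # 1.25 ETH unlocked; the rest stays escrowed
```

The key design point survives even in this toy version: the contract never needs to observe the node directly, it only watches whether the on-chain inflation subsidy is being earned.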
Starting point is 00:57:49 If you escrow ETH and you don't run a node, you won't lose your ETH. It doesn't get burned or anything like that. And we won't take it. It's just escrowed into the WorkLock contract. It will stay there until, hopefully, at some point in the future you come back and you run a node and do the work. Okay. Interesting. Where did you find this inspiration?
Starting point is 00:58:09 Is this sort of inspired by other networks doing similar things, or is this something you came up with? Yeah, I would say it was very heavily influenced by what the Livepeer team did with their Merkle mine. So they had an interesting sort of distribution strategy where anyone that had an ETH balance over a certain amount, I forget what amount, could basically do this Merkle mine. So they could produce this proof, and they could submit it to the blockchain in exchange for Livepeer tokens. So it was interesting because it was permissionless. Anyone could do it. It wasn't the Livepeer team,
Starting point is 00:58:46 you know, doing an ICO and deciding who could and who couldn't buy. It was kind of this permissionless sort of mimicking of, you know, mining a token. So that permissionless aspect was super interesting to us. And then, like, I think we just wanted to improve upon it a little bit and make it more applicable to the NuCypher network. Instead of this arbitrary Merkle proof, which is just sort of pointless work that's only meant to get Livepeer tokens, we basically changed the work requirement to be something useful for the actual network. So the idea with the WorkLock was for it to be permissionless,
Starting point is 00:59:23 but also, like, hopefully targeting a particular type of token holder, which in this case is people who will stake the NU token and run a node for the network, and trying to advantage them over, you know, people who just want to invest or trade or do things that maybe aren't as useful for the network long term. Yeah. And it is a bit more of a barrier than something like the Edgeware Lockdrop, for example, where you just put the tokens in and you're expected to get those tokens in return. You actually have to do some work here. And it does create this incentive to actually participate in the network and do something useful rather than just speculate.
Starting point is 01:00:04 Yep, exactly. Yeah, it looks a lot like maybe a sort of combination of the Merkle mine and a Lockdrop-style, Edgeware type of thing. Obviously, the barrier to entry versus a Lockdrop is higher, because you actually have to run the node. It's not a passive thing where you escrow your
Starting point is 01:00:22 ETH or you signal your ETH, or, for that matter, you know, a lot of the DeFi farming or yield farming stuff, where maybe you can draw some parallels there. But it definitely requires more of an active engagement with the network than just passively providing liquidity for a DEX or something. Do you mean to say that yield
Starting point is 01:00:42 farmers are not actively doing any work? I'm sure they're doing actually quite a bit of work, but maybe not, like, sort of technical work. Providing liquidity is valid, I guess. So, you know, if there are any validator operators out there, or people who are technical enough to set up a node, walk us through sort of what's required to set up one of these nodes, and, you know, what might that cost in terms of server infrastructure, this sort of thing. Yeah. So the NuCypher node itself is actually very lightweight. It's just doing elliptic curve cryptography, which is pretty computationally cheap.
Starting point is 01:01:20 Unfortunately, right now, probably the biggest cost for running a NuCypher node is the gas cost. So it's about 200K gas per node per day, which I think at recent prices has been like $5 to $10 most days. You know, that adds up pretty quickly, especially if you're staking the minimum amount. Yeah, over six months. Yeah, for the WorkLock especially. So if you're a larger node or you have a larger stake, maybe it's not as bad, because you're just spreading it out across a much bigger stake. But if you're staking, let's say, $2,000 worth of NU tokens, you're probably going to end up spending that much, or almost that much, or maybe even a little bit more, on gas. So that's tough. But the NuCypher node itself, pretty cheap.
Starting point is 01:02:05 I think in testnet, people were having pretty good success with, like, a $20 a month DigitalOcean droplet, for example. And then the Ursula nodes do have to have an Ethereum node to send these transactions, or broadcast these transactions, from. That can be, you know, a full node that you run locally,
Starting point is 01:02:24 or, you know, you can use Clef to just remote-sign with, like, Infura or Alchemy or something like this. So those are sort of the considerations for running a NuCypher node. And then for the WorkLock specifically, to participate, how that works is, during the escrow period, which opened a week ago
Starting point is 01:02:44 and lasts until September 28, so about three more weeks, you can lock up ETH, and there's a minimum ETH lock of five ETH. And if you put in five ETH, you're guaranteed to at least get the minimum NuCypher stake of 15,000 NU. Then if you lock up more than five ETH, you'll get some amount more. It's not linear.
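One way to picture the claim rule: a minimum bid guarantees the minimum stake, and anything above the minimum shares some bonus allocation. The function name and the pro-rata bonus shape below are assumptions for illustration only; the episode says the real formula is nonlinear and depends on total participation, without spelling it out.

```python
# Hypothetical sketch of the claim shape described: a guaranteed minimum stake
# for a 5 ETH bid, plus a share of a bonus pool for anything above the minimum.
# This is NOT the actual WorkLock formula, which the episode doesn't specify.
MIN_BID_ETH = 5.0
MIN_STAKE_NU = 15_000.0

def claimable_nu(bid_eth: float, total_bonus_eth: float, bonus_pool_nu: float) -> float:
    """Minimum stake plus a pro-rata slice of an assumed bonus pool."""
    if bid_eth < MIN_BID_ETH:
        return 0.0  # below the minimum bid, nothing to claim
    bonus_eth = bid_eth - MIN_BID_ETH
    if total_bonus_eth == 0:
        return MIN_STAKE_NU
    return MIN_STAKE_NU + bonus_pool_nu * (bonus_eth / total_bonus_eth)

# A 5 ETH bid gets exactly the guaranteed minimum stake:
print(claimable_nu(5.0, total_bonus_eth=20.0, bonus_pool_nu=1_000_000.0))   # 15000.0
# A 10 ETH bid holding a quarter of the above-minimum ETH gets a quarter of the pool:
print(claimable_nu(10.0, total_bonus_eth=20.0, bonus_pool_nu=1_000_000.0))  # 265000.0
```

The only property this sketch is meant to capture is the one stated in the episode: the minimum bid guarantees enough NU to run a node, and larger bids earn more in a way that depends on everyone else's participation.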
Starting point is 01:03:07 There's sort of this bonus pool that determines exactly how much NU you'll get. So it depends on the total amount of ETH escrowed and how many people end up participating. But if you do the minimum ETH escrow, you'll at least be guaranteed to have enough to stake and to run a node. And you have until the 28th to do that escrow. You can cancel it any time during the escrow period, and for up to two days afterwards, if you decide you don't want to run a node anymore. And then once the network launches, you'll be able to basically claim the stake-locked NU that you're entitled to from the WorkLock,
Starting point is 01:03:42 and spin up a node and run a node. And then once you've done that for a sufficient amount of time, you'll be able to recover all of your escrowed ETH. You'll be earning NU inflation along the way, and that stake-locked NU will vest, essentially, after six months. Are you concerned that, with current gas prices, the number of nodes that will basically bootstrap the network might not be as high as you would have wished,
Starting point is 01:04:12 and the network itself won't be as decentralized as it should be if it's to be reliable? And does that maybe put into question the mechanism by which nodes check in? So I think for sure, because of the gas situation, there will be fewer small nodes, unless somehow between now and the end of the WorkLock the gas situation gets radically better, which I don't think is going to be the case. So for sure, there will be fewer. Like, we probably are not going to have, you know, a thousand nodes on the network,
Starting point is 01:04:44 like we might have had, you know, if the gas situation was better. But I think, as of, I haven't looked this morning, but as of last night, I think there were something like 90-ish direct participants so far in the WorkLock, which means 90 independent nodes. And then we've also seen quite a bit of pooling.
Starting point is 01:05:05 So, for example, CoinList is basically offering a way to participate in the WorkLock through their platform. And so they're essentially pooling, you know, people's ETH to run one large node, or maybe a couple of large nodes. So basically they're able to spread this gas cost across a bunch of users. And so I think we've seen, you know, through CoinList and some other pools that have been happening sort of offline, you know, people that have worked with each other in the past pooling to participate and to run one large node as opposed to a bunch of small nodes. That's been a kind of phenomenon that we've been seeing, largely driven, I think, by this gas cost situation. And then separately, we are looking at ways to maybe alleviate the cost in gas of running a node.
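The pooling argument is simple amortization: one pooled node still pays only one daily check-in, so the gas cost per participant falls linearly with pool size. A quick illustration, with an assumed mid-range daily cost (the dollar figure is an assumption, not from the episode):

```python
# Illustrative arithmetic for why pooling amortizes the gas cost.
# The daily cost is an assumed mid-range value; pooling mechanics are simplified.
DAILY_GAS_COST_USD = 8.0   # assumed per-node daily check-in cost
DAYS = 180                 # roughly the six-month work period

def gas_cost_per_participant(pool_size: int) -> float:
    """One pooled node pays one check-in per day regardless of pool size."""
    return DAILY_GAS_COST_USD * DAYS / pool_size

print(gas_cost_per_participant(1))    # solo node: 1440.0 dollars over six months
print(gas_cost_per_participant(100))  # 100-person pool: 14.4 dollars each
```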
Starting point is 01:05:55 So, you know, potentially making the check-in not daily but weekly, or something like that. Those are potential solutions that we're looking at. I think it's sort of not determined yet if we'll actually implement any of this. How important is that decentralized check-in? Could one not just sign something and send it to you guys? Well, so basically this check-in goes to a smart contract, which distributes the inflation. So once that smart contract is deployed and it's running, we have no ability
Starting point is 01:06:31 to really influence it. So people couldn't just send their check-in to, like, our email address and have us send them inflation. No, but I mean, like, through some API call or something. I mean, yeah, basically it would imply that you have some kind of key that controls this distribution. Yeah, yeah. Yeah, so we do not.
Starting point is 01:06:48 So some of the contracts in the network and the protocol are not upgradable, or rather, most of them are not. A few are upgradable, and they are owned, or they will be owned, by the NuCypher DAO, which is basically composed of the NuCypher stakers. So if enough stakers decided that they wanted to change, maybe, the check-in requirement from a day to a week, or they wanted to change how quickly the inflation comes out, there are some parameters that the DAO itself can kind of tweak. But it would require, you know, a quorum of stakers to agree on that type of change.
Starting point is 01:07:22 But yeah, I think right now, gas costs are not great. It's definitely impacting us as well. Yeah, I mean, it's impacting everybody. So what's the business model here, and how are you guys making money from all this? Sure. So NuCypher, our company at least, we have some NuCypher tokens. And our business model as a company is that we will be staking some of those tokens
Starting point is 01:07:48 and running nodes ourselves. So particularly at the beginning, we'll be a pretty large staker on the network. But other than that, you know, we'll just be a standard staker. We won't have any sort of explicit advantage over any other staker. So hopefully the idea is, you know, we'll be on a relatively equal footing with other node operators in the network. And if the network is successful in getting a lot of
Starting point is 01:08:09 usage and people are paying fees into the network, we would benefit from that along with other stakers and other node operators. Cool. So where should people go to find you, to learn more about the WorkLock? And yeah, any last words?
Starting point is 01:08:27 Sure. So nucypher.com, and from there we link out to all of our channels. Probably Discord is the most active one, and that's focused on development and sort of the technical aspects of staking. You can also get testnet tokens from the faucet in Discord
Starting point is 01:08:44 if you want to spin up a testnet node and experiment with that before mainnet. On our Twitter, also, we tweet out any sort of important updates on the WorkLock or network announcements, and @NuCypher is our handle there. But yeah, I think the starting place is
Starting point is 01:09:02 nucypher.com, and then we have links to a blog post on the WorkLock, and that lays out all of the participation details and requirements and timing, and should have all the information that's relevant for participating in the WorkLock. Yeah, and we'll link to that blog post as well in the show notes. So if anybody's interested, they can check that out. Cool. Well, thanks, MacLane. Thanks for joining us today and telling us all about NuCypher, and looking forward to learning more
Starting point is 01:09:32 once the WorkLock period ends and the network launches. Absolutely. Great to be here. Thank you guys. Thanks. Thank you for joining us on this week's episode. We release new episodes every week. You can find and subscribe to the show
Starting point is 01:09:47 on iTunes, Spotify, YouTube, SoundCloud, or wherever you listen to podcasts. And if you have a Google Home or Alexa device, you can tell it to listen to the latest episode of the Epicenter podcast. Go to epicenter.com for a full list of places where you can watch and listen. And while you're there, be sure to sign up for the newsletter,
Starting point is 01:10:04 so you get new episodes in your inbox as they're released. If you want to interact with us, guests, or other podcast listeners, you can follow us on Twitter. And please leave us a review on iTunes. It helps people find the show, and we're always happy to read them. So thanks so much, and we look forward to being back next week.
