Unchained - DeFi Security: With So Many Hacks, Will It Ever Be Safe? - Ep.170

Episode Date: May 5, 2020

Dan Guido, cofounder and CEO of Trail of Bits, and Taylor Monahan, founder and CEO of MyCrypto, discuss all the recent hacks in DeFi, how it can be made safer, and who is responsible. We tackle:

- the Hegic security incident: whose responsibility it was to make sure the contract was secure: the auditor (Trail of Bits) or the team (Hegic)
- what Trail of Bits was saying in its audit summary, and how to read between the lines of an audit summary
- how long an audit should be
- upgradeability: particularly around when more advanced technology and contracts interface with older technology/contracts
- centralization vs. decentralization: whether contracts can be made safely while adhering to the principle of decentralization, why Taylor would prioritize centralization and security, and how teams can create different levels of risk for users
- bug bounties: why asking what amount they should be is the wrong question
- the security threats posed by oracles
- what a checklist for DeFi teams might look like

Thank you to our sponsors!

- Crypto.com: https://crypto.com
- Kraken: https://www.kraken.com
- Stellar: https://www.stellar.org

Episode links:

- Dan Guido: https://twitter.com/dguido
- Trail of Bits: https://www.trailofbits.com
- Taylor Monahan: https://twitter.com/tayvano_
- MyCrypto: https://mycrypto.com
- Initial tweet by Hegic calling the security issue a typo: https://twitter.com/HegicOptions/status/1253937104666742787?s=20
- Hegic tweet saying, "It's not a security issue": https://twitter.com/HegicOptions/status/1253954145113038849?s=20
- Trail of Bits saying it will no longer work with Hegic: https://twitter.com/dguido/status/1254260725431894020?s=20
- Taylor breaks down the audit summary: https://twitter.com/MyCrypto/status/1254058121342803968?s=20
- Molly Wintermute's Medium post on requesting a week audit vs. three-day review: https://medium.com/@molly.wintermute/post-mortem-hegic-unlock-function-bug-or-three-defi-development-mistakesthat-i-feel-sorry-about-5a23a7197bce
- Unconfirmed episode with Haseeb Qureshi on the Lendf.me attack: https://unchainedpodcast.com/haseeb-qureshi-on-the-unbelievable-story-of-the-25-million-lendf-me-hack/
- Unchained interview showing Matt Luongo's approach to kill switches and upgradeability with tBTC: https://unchainedpodcast.com/tbtc-what-happens-when-the-most-liquid-crypto-asset-hits-defi/
- Discussion of the bZx attacks on Unchained: https://unchainedpodcast.com/the-bzx-attacks-unethical-or-illegal-2-experts-weigh-in/
- Issue with Curve contract: https://blog.curve.fi/vulnerability-disclosure/
- Compound bug bounty program: https://compound.finance/docs/security#bug-bounty
- Taylor on "upgradeability makes things more insecure": https://twitter.com/tayvano_/status/1222564979657723904?s=20
- Synthetix oracle incident, allowing a bot to profit $1 billion: https://unchainedpodcast.com/how-synthetix-became-the-second-largest-defi-platform/
- Taylor's tips on how to get more ROI on an audit: https://twitter.com/MyCrypto/status/1254061500244713474?s=20
- Tips to follow before getting an audit: https://blog.openzeppelin.com/follow-this-quality-checklist-before-an-audit-8cc6a0e44845/

Resources for security in DeFi:

- Crytic's building-secure-contracts (guidelines and training material to write secure smart contracts): https://github.com/crytic/building-secure-contracts
- https://consensys.github.io/smart-contract-best-practices/
- https://forum.openzeppelin.com
- https://swcregistry.io
- https://diligence.consensys.net/blog/2020/03/new-offering-1-day-security-reviews/

Transcript
Starting point is 00:00:00 Hi everyone, welcome to Unchained, your no-hype resource for all things crypto. I'm your host, Laura Shin. Twitter fights, Medium posts, scammers, phishers, and promotional content. Want to cut through all the noise in crypto? Sign up for my weekly newsletter at Unchainedpodcast.com to get a quick and easy summary of the top news stories every week. The Stellar Network connects your business to the global financial infrastructure. Whether you're looking to power a payment application or issue digital assets like stablecoins or digital dollars, Stellar is easy to learn and fast to implement.
Starting point is 00:00:36 Start your journey today at Stellar.org slash Unchained. Kraken is the best exchange in the world for buying and selling digital assets. It has the tightest security, deep liquidity, and a great fee structure with no minimum or hidden fees. Whether you're looking for a simple fiat on-ramp or futures trading, Kraken is the place for you. In response to the challenging times, Crypto.com is waiving the 3.5% credit card fee for all crypto purchases for the next three months. Download the Crypto.com app today. Today's topic is security in DeFi. Here to discuss are Dan Guido, co-founder and CEO of Trail of Bits, and Taylor Monahan, founder and CEO of MyCrypto. Welcome, Dan and Taylor. Hey there, happy to be here. Yeah, super excited to talk about this. Before we dive into the meat of today's discussion, can you each describe what you do in
Starting point is 00:01:36 crypto and how you came to be involved in defy and or security? Why don't we start with Dan? Sure. So Trail of Bits is a software security research and development firm. We were founded eight years ago by myself and two other expert hackers in order to improve the foundation that we all build on. So I used to do just plain old code reviews for folks for many years. But in Trail of Bits, We try to actually engineer software and build new solutions that others can use to kind of lift all boats. So we do a lot of work with DARPA and the DOD to build really advanced tools and advance fundamental research. Companies hire us to build high assurance software for them on their behalf. And then we also do really detailed product security reviews and training that help engineering teams build more secure software.
Starting point is 00:02:25 In the Ethereum space, this was actually out of personal interest, we're out of blockchain. This is out of personal interest. We had a couple of folks on the team that were as excited about the technology as folks that were working in the field. And we saw this tremendous green field opportunity to come in and build the kinds of tools and techniques that other fields really ought to have adopted at their first steps, but did not. And that's what we did. So, so, you know, now in 2020, we have this massive suite of tools that people can use to build secure code and a vast amount of public knowledge that we've been able to communicate to folks that helps them build
Starting point is 00:03:01 secure code. And that's what we continue to do till today. And Taylor, what about you? I have a very different background. You know, I started out just because I was, you know, I was in crypto. Then I accidentally, as I say, you know, built this wallet that became immensely popular. And my security knowledge and the obsession I have towards everything that could possibly go wrong has sort of evolved over time. And as I've watched things go wrong again and again and again. And so my crypto is a wallet. Previously, I built my ether wallet. And, you know, both of these products have really, they're really interesting attack vectors, but also, you know, there's a lot of
Starting point is 00:03:52 unexpected things that happened as I was growing these products. And so today, really, I'm just quite obsessed with how things go wrong, how we can prevent things from going wrong, what steps we need to take to improve on both like the user side, on the general community side, and then obviously the technical side is a huge part of it as well. Yeah. So over the last several months, Defi has seen a number of security issues. And it's funny because when you look at the discussions around all these attacks, they're just so new that usually people are even arguing about what to call them. And I think, you know, obviously we can safely say that most of them are attacks or bugs. They're generally just ways in which the behavior of the protocol diverges from the
Starting point is 00:04:45 intention of the creators. And so let's start by talking about one of the most recent, one of these security flaws, which was on Hedchick. Dan, you were actually involved in this one. So let's actually have Taylor describe what happened first from an outside observer perspective, and then you can jump in afterwards. So, Taylor? Yeah. So Hedchick, you know, the way it came on my radar was I was scrolling through Twitter, as I often do, and stumbled upon this tweet by them. I didn't actually follow them, so it was retweeted by someone. you know, that said, uh, there's, there's been a typo. Uh, you need to take this action. Um, you know, warning, warning, warning.
Starting point is 00:05:30 And then a couple other tweets and then obviously, uh, you know, a whole bunch of replies to that original tweet with people kind of questioning how this happened, why it happened, um, you know, what exactly is going on, et cetera. Um, and as I dived deeper into everything about this tweet in discussion, I realized that there were some really overarching problems with just everything about it. So the first being that they initially called it a typo, which it obviously, while it may be factually correct, is not, it's just, it's painful to watch people try to downplay issues like this. And that was, that was pretty frustrating right off the bat. That put me in a bad mood. Yeah.
Starting point is 00:06:17 And just to clarify, and maybe I'm wrong. but but like essentially this quote unquote typo is that like you know if people kept their money in their the money would be frozen not like stolen but frozen right and so that was the second thing I'm already in a bad mood like I'm like 10 characters in I'm in a bad mood um and then they they like tried to clarify but they ended up saying it's not a security issue um it just It just ends up that everyone's funds are locked. And now I was very involved in the parody, the parody multi-sig situation,
Starting point is 00:07:00 which was, I guess, two years, two and a half years ago now, in which all the funds were locked, and it was a huge thing. And so to just kind of like juxtapose those two experiences and then have someone be like, oh, it's not a security issue. I just found it preposterous. And I would say, you know, it went downhill from there.
Starting point is 00:07:27 Yeah. Yeah, it was a lot of people were tweeting about how, you know, they should call it a bug, not a typo, and spelling out what could happen to people's funds if they didn't remove them. So, Dan, can you now explain how Trail of Bits was involved in the Hedgic incident? Sure. So in line with my introduction, we try to remain as open as we can and work with everyone that wants help in the Ethereum, in the blockchain community. There are folks that are at the beginning of their journey building a product, and there are folks that are, you know, part way through or at the end, right before they're about to deploy something. And there's always some guidance that we can provide. So I try to keep my door open when people show up and they say, I have written something and I need your help to secure it.
Starting point is 00:08:11 And that's what the folks behind Hedrick did. They showed up. They said, look, this is a small project. We don't have a lot of money. we'd like you to take a look at it. What can you do for us? And over the course of three days, we found a large number of issues
Starting point is 00:08:23 in a project that was at a very immature state of development, and we described those issues to them and said, here are the things that you need to do to improve the security of this app. Now, they took that, and we, you know, it can be difficult sometimes. Like there's all these competing interests in the blockchain community around how you describe to your users
Starting point is 00:08:43 that you are doing the diligence required to build something that others can rely on. And sometimes that gets boiled down to a tweet-sized bite of information that, hey, we work with trail of bits, or we work with XYZ firm, whether it's us or somebody else. And there are ways to do that that is like nuanced and correct. And there are ways to do that to transfer risk away to a third party. Because a lot of firms that build software, and we don't make any choices about how they built the software.
Starting point is 00:09:12 They choose what development methodology they'd like to use. what build tools they'd like to use, how adequately they'd like to test it, the architecture of it. And we just come in and we try to do our best to make sure that it gets better after we leave. So instead, in this case, we provide those recommendations of, hey, here's how you should talk about your security process. And they didn't do that. They said, look, our code is all safe because Trail of Bits used it and we're launching it today. Now, we put out a summary document that said, hey, here's what the project with the Hedgit looked like. We did three days of work for them, which is a very small amount of work, and we found a large
Starting point is 00:09:49 number of issues only two weeks ago. So from my perspective, this is a page and a half document that includes a page and a half of fairly negative information about the maturity of the product, but most other people didn't read it that way. Most other people looked at, well, Jailibitz found some bugs, had you fixed the bugs, therefore the code is safe, which is really not the right takeaway. and there's some, you know, things that I can do to improve that, but there's a lot about the community where the way that they are investigating,
Starting point is 00:10:20 what financial opportunities to provide their money to is kind of not producing the results they want. So there's a larger discussion here of like, what are the factors that I should use to trust a given project? And how can I interpret the information that's been provided to me about the safety or the viability of a given DFI project? project. But what is your question? Well, I wanted to ask about something that you mentioned at the very beginning when you said, you know, you will work with a project of any stage. Because, like, so based on the screenshots of the emails that the anonymous developer, Molly Wintermute, sent to you, this person wrote the letter Z for the word the and the numeral two for the preposition to and like just I mean obviously you know I'm a journalist so the way the grammar and spelling and all this
Starting point is 00:11:16 punctuation all those things are very important to me but you know just looking at that like that looked like a red flag even before like receiving the code you know it's like literally just the query itself to me and you know not I'm not like trying to be judgmental of like anybody who makes a grammatical mistake but this is this is like a totally different thing it's like somebody purposely you know not using right so but but but But really, it's really, really weird. It's really awkwardly. But my question is like, so.
Starting point is 00:11:45 So why work with someone like that at all? Isn't that such a, right? Because for me, it would be like, oh, like, you know, this business relationship may, may not turn out super well. Like, that is what I would think if somebody sent me a message like that. That's totally correct. And my thinking behind that is there are a lot of strange folks in the blockchain community.
Starting point is 00:12:06 We've worked with somebody named Barada Barada before. who ended up being an extraordinarily talented software engineer that helped provide input to build, to enhance the quality of a product that we've created called Critic. We worked with a pseudo-anonymous personality over at MakerDAO named Rain. He would show up on video calls as a black silhouette and never went by anything other than those works. So sometimes in this community,
Starting point is 00:12:32 because of the privacy and kind of like very like, you know, crypto punk cypherpunk kind of approach people want to remain pseudo-anonymous they don't want other people to know exactly who they are and they kind of approach their work in a in this way so that alone like well yes it's really weird it's really strange i didn't want to slam my door on working with this person because they were in in my view purposefully trying to obfuscate their identity um which is a thing that you've seen from other people by using that language but i mean i don't I don't think anybody actually types that way. What they probably did is they have some kind of script that they process all their text through
Starting point is 00:13:14 in order to create something that's more difficult to fingerprint. You know, you've seen this a lot back in the old days when they were like hacker crews and everybody went by all kinds of different handles and you'd try to obfuscate any of the publications that you made by processing it with some kind of, you know, script to eliminate the writing style that you have so that people couldn't figure out who you are. there's all these techniques that come from like stylometry and trying to figure out who Shakespeare was and whether he's like this guy or that guy based on their published body of work. You can apply that to software engineering. You can apply that to text files that people read on the internet.
Starting point is 00:13:50 And it's something that I have seen before of people trying to just avoid discussion of precisely who they are. Okay, but, okay. I don't know, like, I don't know if I totally think that obfuscating someone's writing style is the same thing. as switching out the letter Z for the word the, but, but anyway, like, I don't want to, I don't want to get too far down that. But Taylor, what did you want to say? So, yeah, there, I was, okay, so when I first saw her, I guess, writing style, I was, I was taken aback by it and had the same sort of feelings as you. And, yeah, I think that there's a couple different things going on. One is that, yeah, the crypto space is just weird.
Starting point is 00:14:35 And so when you see something like this, it's not as weird as if you're in normal corporate environment and you get an email like this. And the other is that the cypherpunk mentality and the value that these sort of cypherpunks can provide makes it so that sometimes we'll give people the benefit of the doubt when they don't necessarily deserve it and would never get it in another. you know, sort of industry or situation. And I think that's definitely what happened here. And Dan is not the only one who has brought up this obfuscation, you know, this on-purpose obfuscation. And one thing that lends itself to that theory is that Hedgick has a lot of
Starting point is 00:15:21 writing out there that is not written like this. The Hedgick Twitter account does not write like this. The white paper is not written like this. The website's not written like this. And so I'm not exactly sure why this personality, Molly, every time she writes, there's the Zs and the twos, when she's obviously capable of writing in, you know, a normal or what society deems as normal, you know, it's, yeah, welcome to crypto. That's all I'll say on that. And those are things that we looked at too. You know, I've turned, obviously we've been approached by people that are building pyramid schemes. And it's really easy to figure out. out when somebody is fraudulently trying to manipulate users in that way. But from the kinds of things
Starting point is 00:16:09 that Hedgwick was talking about, from the kinds of things that were documented in the white paper, you know, there were friends of mine that were following their Twitter account already. So they kind of had at least some items here that meant that, hey, maybe this is something that ends up becoming important in the defy space. And maybe I should look at them despite this weird interaction that I'm having by email. Okay. Well, one other thing, actually, that I did also want to ask about was so Taylor did such a great tweet storm dissecting the audit summary where she basically says okay you know all this is written in a very professional way but here's what they're really saying and um you know and correct me from or correct me for me or
Starting point is 00:16:53 Taylor if this was wrong but she was like oh you know trail of bits is saying like they didn't even do basic arithmetic correctly they didn't even do the basic thing of having documents. So I just wondered, like, you know, around things like documents, like, would it ever make sense for an auditing company to state that, like, potential clients need to meet certain requirements before they can have an audit? So I think there's never a point where it's too early to engage with a security professional. Like, that just kind of brings up this question again of, are there people that I should tell to go away from my queue of folks that are asking for help? And I don't think that's the right answer. However,
Starting point is 00:17:32 there are a couple things here. So first off, like the fact that we found, like, yes, we found all these critical issues in basic arithmetic. We found lots, like we found 10 issues that essentially could have stolen everybody's money two weeks before this project was going to deploy. They fixed only those specific issues and they didn't address any of the root causes of any of them, right? You can see that in a public GitHub repository and you can square that up with what we said in the summary report. that there was no substantial foundational improvements made to that code beyond patching individual lines of code.
Starting point is 00:18:10 So that's something that you need to understand about these security reviews is that they're usually focused on, hey, we spent X amount of time looking at the code and we found Y issues. If you found a lot of issues in a small period of time, it doesn't mean security improved dramatically. It means that the code is probably filled with bugs.
Starting point is 00:18:28 Like big number is bad. And I think most of the community, thinks the big number is good. Yeah, and that's, I think that there's like a couple of really important things that this is brought to light. And one of them is the way that I read audits and I look at teams is very different than a lot of other people look at them, I guess. And the way that I look at them is what does, not what does this audit say about the code,
Starting point is 00:18:57 but what does this audit say about the team or their, um, how they're approaching the code or how they're approaching defy. And so when I was reading this, you know, the review, I was like, this indicates to me that they're not really taking much seriously. You know, they went into this audit pretty unprepared. There were some pretty basic things that, you know, they could have fixed before sending it to audit, et cetera. People on the technical side kind of tend to miss how human,
Starting point is 00:19:30 all of this is. And people that don't have any technical experience don't realize that they can read these audits and apply sort of like the softer skills or the culture takeaways, even if they don't understand the literal technical underlying stuff. And that's one of the, I think the biggest missing pieces is like you have to have both sides. Like you have to fix the bugs, but you also have to, you know, try to understand why they even got in the situation in the first place. You know, why is this code being given, you know, basically handed to Dan in trail of bits with a chunk of money being like, okay, we're ready to go live except we want, you know, a third set of eyes on this. And if you, if you send it over in that state, that alone to me
Starting point is 00:20:22 as a red flag because, you know, in theory, you thought it was as good as it could be. To push back on that a little bit. I don't know what they're planning to do when they're sending me the code. A lot of people, like, I would have thought that a reasonable thing to do with this code was after review from us, put it on a test net and play around with it for a little bit. That this is a step in their development methodology where there are many more steps they have to take until they release it to main net. But instead, what they did is they shrunk all that down and they said,
Starting point is 00:20:52 okay, it's ready to go. Like ship now. Now is when we're going to do this. Because I think this would have been great if they just said like, okay, great, we've got some feedback about the maturity of our development. We've gotten consultation from experts. They said that the code is not great, and that they listed out all these things we should do. Let's show it to users, get it some testing, and continue to work on it. And that would have been the perfect use of an engagement with us, but that's not what they did. Well, wait, so I guess like maybe I just have, have a misconception around what role an audit should play in defy or like any kind of security, but, you know, if I sort of compare it to the way like a magazine article gets published or something,
Starting point is 00:21:34 like I would imagine that the audit would be one of the last steps where you would get the most bang for your buck if you put forth the best effort you can get it as ready as you can, like as close to perfect, as close to launch as you can. And then at that point, have somebody come in from the outside. Yeah, please don't do that. Please do not do that. So that's, I'll just point out before I let Dan finish because he's going to say, he's going to be right, but I just want to preface this by saying that the ideal way
Starting point is 00:22:05 to engage with security experts is not how anyone's doing it right now. And that said, Dan's going to tell you how it should be done. Yeah, sure. So a couple of things here. Before we get too far away from the point, do want to say that what Taylor brought up about the context around the code and kind of the organizational behaviors and their own maturity at dealing with security stuff is a really important thing for users to understand that I don't think shows up in many of these like security reviews,
Starting point is 00:22:38 these PDFs that come from vendors like mine. We try to do a good job at that. We always list out long-term recommendations that address the root cause of a bug being introduced. Not too many other security vendors do that. And we do, right? So, like, we're good in that respect, but I think we could do much better. So some of the things that we've discussed internally since this Hedgic thing kind of blew up is ways that we can provide literally a color-coded graph around the maturity level of various controls of projects in the DFI space and the blockchain space. This kind of, like, takes a lot of inspiration from the way that we do threat models for companies, which is a different kind of service.
Starting point is 00:23:18 To get back to your original question now, how should people engage with a security company is they should understand a little bit more from a strategic level where their risks are. And one of the ways that people do that is they use things like threat models. So threat models are a technique to understand what data you currently collect
Starting point is 00:23:37 and manage and process, the sensitivity of it, the components that do the processing, and the requirements of those components to properly protect it. If you have an understanding of that stuff early in your development cycle, then you've got a set of guardrails that make it much harder for you to get into situations where you inherit way too much risk, more risk than you're capable of mitigating. You know, examples are like if you engage with a security professional early, there might be ways that we can discuss with you your goals. And then within the context of those goals, help you avoid manipulating low-level solidity calls in order to achieve them.
Starting point is 00:24:14 because manipulating low-level solidity exposes you to a vast amount of risk that maybe you can avoid. And then there are hundreds of bugs that will just never enter into the code base at all. If you wait until the end and you've gone out on this limb and you've built all this code
Starting point is 00:24:31 and that's the first time that you're exposed to a security engineer, there might be a lot of cases where the code's been engineered in a way that is fundamentally unsafe and requires re-architecture. So there's no way that I've can secure a code base at that point. All I can do is point out all the things that are bad.
Starting point is 00:24:49 So basically there should be like multiple engagements. Is that what you're saying? Yeah. Well, and so there's there's sort of like, so what Dan's talking about, you know, when we, when we get into the solidity side, it can get really technical. But a perfect example of this is like when I was first, when we were first putting together the first version of my ether wallet and like sending it out into the world, we had assumptions. We, you know, You know, we wanted to make this tool, whatever. We weren't really thinking that it was going to blow up, et cetera, et cetera. But even down the line, we never engaged with, like, you know, someone who did this for a living who really, really understand security.
Starting point is 00:25:28 You know, fast forward to 2017, all of these things came out of the blue and just, like, hit us upside the face, right? Like the fishing attacks, the malicious browser extensions, on and on and on, all of these attacks. if we had talked to a security professional at any point before that point, they would have said flat out, no exceptions, don't put private keys in the browser. It's unsafe. The browser's unsafe. There's all these different ways that people will attack you that you can't control, BGP, the underpinnings of the internet. All of this is insecure. And instead, because we didn't, for various reasons, we basically built an insecure product that by the point we've realized how insecure it was, it's really hard to take steps back and move away from it. And I think that's a bit of a more accessible example, but the same exact thing applies to DeFi products, to, you know, smart contracts, to pretty much everything. You know, you don't want to get too deep into it before you realize that, you know, you're going to have to change the entire nature of your product or your system to ever be secure. And one other thing I wanted to ask about, because this was also a point of
Starting point is 00:26:40 dispute, but basically it looked a little bit like Molly Wintermute wanted maybe like a week long review, and then you guys were saying, oh, a three-day review should be sufficient. So like what amount of time would you recommend teams seek for an audit or like, I guess there's multiple audits. So maybe, you know, at different points in their project. Yeah. So I think part of the issue here is that we keep using the word audit as if it's this fundamental, like, scientific process where we can eliminate all the bugs from the code.
Starting point is 00:27:14 And Taylor and I are both of the opinion that instead, this is like a divining rod that lets us figure out where hotspots are and whether there's an underlying issue that needs to get remediated or needs to get re-architect or needs to get fixed somehow. So in the span of three days, yes, we got good coverage on the code. And what we found was that the code was bad. That was a sufficient amount of time for us to understand the current state of the project. So in those three days, we found 10 critical bugs that allowed us to steal everyone's money and manipulate all the things that you thought you could depend on. That is not a great result, and we only needed three days to get there. So the extra two days for a full week wouldn't have told us anything different.
Starting point is 00:27:55 It would have been a waste of money, in fact. So they already had now a list of things to do. Really, what a security vendor like ourselves is trying to provide to people is a backlog of activities and investments in security that you need to make. So we build up that backlog. There are now a half dozen or a dozen different things for you to do. And we want to try to provide the most information and guidance to you in the least amount of time, which is why I'm not going to oversell somebody on a project when I know that I can
Starting point is 00:28:28 provide the results they need within a smaller time period. So it sort of sounds like you were saying, yeah, that they were looking to you. you to fix all their problems or to kind of like, yeah, just, whereas like you're saying that really the responsibility lies with them and that, yeah, you can point out the, you know, the ways in which maybe their process or their culture around the way they're billing this, you know, is going to lead them into trouble. But you cannot be the ones responsible for the security of their project. Yeah, I mean, they've been working on this for weeks, months, years. many people on the team, and just by the virtue that I looked at the code for three days,
Starting point is 00:29:10 doesn't mean I'm ultimately responsible for the security of their entire company and product. That's just the bottom line. There's a lot of other questions that you could ask, too. Like, there's things that are outside the actual code that determine the security of the product that I'm sure Taylor knows about just as well. Things like the owner privileges in the DeFi space. People are obsessed with that. How decentralized is the application?
Starting point is 00:29:35 Things like the oracles that are providing feeds of information that the DeFi application might make decisions on. How many of them are there, and can I manipulate them? Things like upgradeability. There might be changes to the code that get made after a review is done. There could be things about monitoring. Like, do you even know when things have gone wrong in the future? And those could be things that firms ask me to help them with.
Starting point is 00:30:02 Like, I can help you build a process around security monitoring, so that instead of the public finding out that you've been hacked, you find out that you've been hacked first and can take some kind of remediation, maybe immediately issue a contract migration that saves some portion of your users' data or money. But there are many things that can go wrong, and it's really on the owner of the DeFi projects
Starting point is 00:30:27 to fully understand what those things are, and they can use our help when they ask for it, and I'll provide them the best guidance I can, all the best practices, all the new services, all the new solutions, and we'll bring in all the expertise we can to accelerate them. But ultimately, it is their responsibility to build a secure product. Yeah, yeah, this is actually a perfect moment to take a break because you basically listed a whole bunch of things that I'm going to ask you about in the second half of the episode. So here we'll get a quick word from the sponsors who make this show possible.
Starting point is 00:30:59 In response to the challenging times, crypto.com is introducing three measures to help the community. First, the 3.5% credit card fee for all crypto purchases will be waived for the next three months. Second, you could get up to 10% back by using the MCO Visa card on food delivery and grocery shopping at merchants like Uber Eats, McDonald's, Domino's Pizza, Walmart, and more. Don't have a card yet? Buy gift cards on the crypto.com app from merchants like Whole Foods, Safeway, Burger King, Chipotle, Papa John's, Domino's, and more. and get 20% back on food and 10% back on groceries. This is a global offer, so check out which merchants are available in your country. Download the crypto.com app today.
Starting point is 00:31:45 Today's episode is brought to you by Kraken. Kraken is the best exchange in the world for buying and selling digital assets. With all the recent exchange hacks and other troubles, you want to trade on an exchange you can trust. Kraken's focus on security is utterly amazing. Their liquidity is deep, and their fee structure is great, with no minimum or hidden fees. They even reward you for trading so you can make more trades for less. If you're a beginner, you will find an easy on-ramp from five fiat currencies,
Starting point is 00:32:18 and if you're an advanced trader, you'll love their 5x margin and futures trading. To learn more, please go to kraken.com. That's k-r-a-k-e-n.com. The Stellar Network connects people to global currencies and assets. Stellar lets you make near-instant payments in any currency with anyone, anywhere. It's an open blockchain network that acts as payment rails for applications and institutions around the world, and is designed so that existing financial systems can work together on a single platform. Transactions powered by Stellar are low-cost, transparent, and fast,
Starting point is 00:32:53 saving both businesses and end users the time and money associated with traditional payment networks. With Stellar, your business can issue digital dollars or exchange existing fia currencies without the need for complicated smart contracts or new programming languages. It's robust documentation, toolkits, and multilingual support let you quickly integrate Stellar into your existing products and services. Learn more about Stellar and start building today at Stellar.org slash Unchained. Back to my conversation with Dan Guido and Taylor Monaghan. So let's actually just now turn to another recent pair of attacks, these involving IMBT
Starting point is 00:33:32 on Uniswap and then also on the DeForce Protocol's LendFME platform. Hopefully the audience here caught my unconfirmed episode with Haseeb Qureshi on these incidents because it was actually really, really fun chatting with him. And definitely it's a crazy story, so you should check that out. But essentially, both of these attacks were caused by this ERC 777 token, which is sort of like a more kind of upgraded or advanced version that has basically just other kinds of functionality that ERC 20 tokens don't have. However, if an ERC 77 token is used in an older smart contract that does not recognize that then an attacker can perpetrate a reentency attack using that token. So I was wondering how you guys thought about situations like this. How do you think defy should handle situations where the technology advances but then opens up new attacks?
Starting point is 00:34:34 So that's, it's like this is actually the thing that scares me the most about smart contracts in general. Like I have no doubt that at some point we will get to the point where we can write secure solidity or whatever language is going to be. I have no doubt that we can get, you know, the community on board with, like, understanding what makes a secure team, et cetera, et cetera. But when you think about the fact that there are all of these systems, right, like there's the D4 system or the ERC-777 system or the UNISWP system or whatever it is, you can make all of the pieces secure and you can have them implemented by like, you know, good team. that are security-minded, and then you combine two of them, and everything goes out the window, and now there's problems. And when you think about just how many different combinations there are and the fact that you can combine two or three or ten of these systems, it's really hard to imagine on a purely
Starting point is 00:35:40 technical level, like there's no way to ever have the system as a whole. every single possibility, every single combination, there's no way that it's ever going to be perfectly secure. So I have a little bit of a different take on this one, actually, and I wonder what you think. Yeah. Okay, go for it. So in the Uniswap-D-Force case, they're affected by the world's most well-known bug class in Ethereum.
Starting point is 00:36:05 They're affected by re-entrancy, right? Yeah. Like everybody. Made famous by the DAO, in case people don't know what that was. Right. It is like, it's incredible. It's so funny because up until this point, since the DAO, there hasn't been a really exploited reentrancy attack on mainnet ever.
Starting point is 00:36:21 Everyone talks about it so much, but the kinds of things that are actually causing people harm are somewhere else. And now all of a sudden in 2020, we have a reentrancy that's used to steal real money. That was the most surprising part of this to me. And when I think about it a little bit, like why weren't they aware of a basic reentrancy flaw in a set of contracts that's got a lot of eyes on,
Starting point is 00:36:42 but, like, has actual development teams that are trying to do their diligence to build it? And you look at the technologies that are being used. So in the Uniswap case, it's Vyper. And the tools for a lot of secure development and bug finding and vulnerability discovery are written specifically for Solidity. Wait, wait, you know, Dan, just go back. So Vyper, what is that? So there's choices that you can make around what programming languages to write smart contracts in. Most people choose to use Solidity. Solidity is filled with footguns. There are many ways that you can step on sharp objects and end up really hurting yourself with Solidity. So there's a community of people that have developed a new language that looks a lot like Python, called Vyper.
Starting point is 00:37:26 Now, Viper, while it has a lot less footguns, a lot less like sharp objects all over the ground that you could potentially step on, there are still some fundamental things that you need to do correctly and avoiding re-entrancy is one of them. So the problem here is that a lot of the best tools in the space for detecting basic security flaws like this have trouble working with Viper. So the issues with adoption here of those tools may have created a scenario where it was more difficult to find in a uniswap and IMBT kind of scenario just because they've chosen to use different sorts of tech. Now, on the other hand, the DeForce folks are in a different position. because they did use solidity. And that's simply a question of there is a checkbox, yes, no answer that you can get of have you evaluated your code for known flaws and ensured the absence of them?
Starting point is 00:38:23 And for dForce, the answer was no, we have not, because this would have been detected immediately by any kind of off-the-shelf security scanner that exists in the space. Yeah, and just to, especially for people who didn't listen to the episode with Haseeb, I think it was OpenZeppelin did blog about this last summer in July. So it's been known for quite a while. But actually, one other thing I wanted to bring up about this is that one thing that Haseeb said was that, like, for instance, so dForce had copied Compound's code. But what he was saying is the reason that this issue didn't come up on Compound is because Compound, you know, knew about the issue and, like, made
Starting point is 00:39:04 sure that no ERC-777 tokens were put on Compound. But, like, you know, that happens because they have kind of more centralized control. So I feel like there's this tension between, like, the decentralization philosophy and then, you know, having good security. How do you guys think about that? Yeah. Common thing in the DeFi space, like, there is a lot of risk around composability, which is, I think, the word we've settled on to describe all these emergent behaviors and potential interactions between things that happen on chain and the security risks that come from them. When we work with projects, you know, we've worked with Compound as well, and a lot of the way that you have to approach this is by whitelisting the behaviors that you
Starting point is 00:39:46 have studied well enough that you trust, and slowly opening up the ability of your contracts to interoperate with other stuff. So if you don't fully understand all the repercussions of working with arbitrary ERC-777 contracts, then maybe you should wait until you're fully clear on what that means before you allow your contracts to do so. Now, that's, like, one strategy, but at some point, composability is unavoidable, right? Like, I don't know, as an example, you could buy insurance on a margin trade that's been collateralized with Dai, right? And then there's three systems that all interact with each other. So at some point, there's no real way that you can avoid that composability.
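Dan's "whitelist what you've studied" strategy can be sketched in a few lines (hypothetical names; a real implementation would live in the contract itself, not in Python):

```python
# Toy allowlist: the protocol refuses deposits in any token whose
# interactions the team hasn't explicitly reviewed and approved.

APPROVED_TOKENS = {"DAI", "USDC"}  # reviewed: no transfer hooks, no surprises

def deposit(ledger, token_symbol, amount):
    """Accept a deposit only in tokens on the allowlist."""
    if token_symbol not in APPROVED_TOKENS:
        raise ValueError(f"{token_symbol} is not on the allowlist")
    ledger[token_symbol] = ledger.get(token_symbol, 0) + amount
    return ledger
```

New tokens get added only after their behavior has been studied, which opens up composability gradually instead of accepting arbitrary contracts from day one.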
Starting point is 00:40:27 So it's really everybody's responsibility for ensuring that the contracts and systems they use are, like, that the interactions they have with them are safe. it's it's something that I haven't seen many defy projects fully internalize where I think most projects in the space still depend on outside experts like trailobits or someone to come in and advise them about what's going on and what they should pay attention to next and new objectives they should build towards but there's there's definitely a point where it makes sense to have a smart security person on staff like we've talked to a defy project where they had an arbitrage contract on chain that was abusing their app. And investigating that issue required them to identify the arbitrage contract, download the binary code, reverse engineer it with one of our tools, and then deeply understand the way that it was abusing their work. Like that's something that I think Defi projects are going to need to come to terms with, that they really need their own deep understanding of these issues to deal with them in the future. Right. But yeah, that still is, I think, a more centralized model. And then, but also like, and this kind of is also related to the upgradeability thing. So the way I asked the first upgradeability question was just about like when there's advancements in technology, then what do you do? Especially if you, you know, if your project at that point is more decentralized and you have less control. But then another question is just like, how should each system be upgraded? Like, you know, I had this discussion with Matt Luongo about TBTC the other day where he at different points in the interview.
Starting point is 00:42:06 Like, one time he was like, oh, we're going to, you know, set it and forget it, kind of attitude. And then later he talked about, like, the next version. And he was like, oh, yeah, well, actually there probably will be a V2. But yeah, I just wonder, like, how do you think? Because I can't imagine. So let's say DeFi becomes a thing. 10 years from now, we're not going to be using those current smart contracts, right? So how, but yet how do we get from here to there, you know, while keeping in mind all these different principles like decentralization and security
Starting point is 00:42:36 and upgradeability, et cetera. Yeah. So the way that I look at it is right now, the biggest threat is like we are writing bad code. We are creating insecure systems. And so in the short term, I would prioritize centralization and security over decentralization. That's not to say that we should just forget about decentralization and not have it be sort of part of our goals or our, our philosophy, but just right now, the worst things that can happen can either be mitigated or eliminated by having, you know, say, just like a kill switch. You know, and I really lived through the doubt,
Starting point is 00:43:20 and I can say that everyone who was there is in the same mindset, because we've watched what happened when you try to, like, fully decentralize everything and you're not ready to. Oh, wow. But I mean, I'm sure you're aware you just made a controversial statement. I mean, I do get it. And that's the thing. It's definitely a conflict within me because, like, I'm building on Ethereum. I love decentralization. I love what it empowers. But in the short term, we're never going to be able to get there if every single contract is the DAO and it just blows up and everyone loses their money. And so there's some really interesting ways where you can strike a balance in the short term. And then, you know, as the system becomes more secure and more mature, and you have confidence in it, you can, like, ramp down. Matt Luongo, obviously, he has one approach that's a bit too decentralized up front for my taste. I've talked to him about this.
Starting point is 00:44:19 But, you know, like, just as an example, you could have a smart contract where you have like a big red button where if something goes wrong, you push the button and it stops everything except it allows for like one function that allows the user to withdraw their money. And so now if a hacker or a flash loan or an arbitraiser comes in and starts screwing with your system in a negative way, you can like prevent them from doing that. You can prevent the bad things from happening, but you don't necessarily lock the user out. They can still go and withdraw their money. And you can also do that in a way where like the user can withdraw. all their money, but you can't, et cetera, et cetera, et cetera. And so these are the things that I think in the
Starting point is 00:45:06 short term, we should definitely actually be encouraging because, again, if everything blows up and every single project basically like launches, huge fanfare, and then everyone loses their money, we're never going to get to a point where any of this stuff is actually useful. So, yeah, baby stuff, please. Yeah. Yeah, there's been a lot of hacks. But one thing I just wanted to ask was when you said, like, the user should be allowed to withdraw their money, but you can't. When you said you, did you mean the developers of that protocol? Yeah, exactly. So whoever, like, just because, you know, I know this is like almost a meme at this point, but the decentralization is the spectrum, it really is true.
Starting point is 00:45:54 You know, you can create a mechanism that allows you to turn all of the smart contract off. and that's centralized, right? That's the developer making a centralized decision to turn it off. But that doesn't necessarily mean that you have to be able to turn it off and steal everyone's money, you know, as the developer. You can have a system where a centralized party can turn it off, but they can't touch the money, they can't withdraw the money, and still allow in a decentralized way each individual user to withdraw their money from the system. I find that idea really interesting because basically what you're doing with that is you're making the risk for different levels of people in the system different. So like if you're building it, then their risk has to be higher. They have to like put more effort into making it secure. But for users, their threshold is like a little bit lower. And you know, by the same token, they have more like ability to go in and out. So yeah. Exactly. And we have not seen. I don't. don't think we've seen like a defy specific product that has like exit scammed or taken advantage of the centralized mechanisms to steal everyone's money whether that's either like a a team pretending to be good and they're actually bad or a hacker like abusing no i lie
Starting point is 00:47:17 there have been hackers that have abused like the the admin functionality of a smart contract um you know but that's you know when dan mentioned earlier like threat modeling right uh is the team itself that's good or bad? Are there attackers on the outside, you know, coming in from the outside attacking? Are there users that are inadvertently doing bad things on accident or on purpose? You know, there's all these different parties. And you do have to be aware of them. You do have to try to protect against them.
Starting point is 00:47:48 It's never going to be, especially in the short term, perfect and like secure against every single party. And that's why for me personally, again, prioritizing. the safety and the pause buttons and those types of tools. Yeah, do that for now. And I just wanted to ask you guys about one other thing that isn't exactly in your wheelhouse, but I was so curious to know your opinion. So with the de-force attacks, they did call the Singaporean police on the attacker.
Starting point is 00:48:20 And I just wondered, in general, like, do you think the traditional legal system should be a way to deal with these kinds of defy attacks? And if so, like, would it be the developers of the protocol who would be responsible or, like, how would that all work? You're right. That is a little bit outside, I think, our area of expertise. But if it's an option for you, then I don't see why you shouldn't take it. I, you know, there are like two things that I want to address about what Taylor mentioned. The upgradeability conversation doesn't just affect, like, the security of your product.
Starting point is 00:48:57 you can also think about, like, your product may be safe today, but another contract on-chain could upgrade or change their behavior, and now their interactions with yours are unsafe. And that's where the whole flash loan thing comes in, where there have been contracts that have been deployed for weeks or months or years, and this changes the entire kind of threat landscape; all the bad things that can go wrong are suddenly much more severe and much more likely to occur. And it was through no code change of yours. Your code did not change a single line, but things outside of it did. So those are, you know, other things that you need to be aware of and have an
Starting point is 00:49:33 ability to respond. And I think empirically, right now, the level of decentralization in the DeFi space is very low. Like, you can go download all the code for all the DeFi apps and run it through Slither, our static analyzer, and you'll see all the owner privileges that just drop out. And it's extensive. I don't think anybody right now, very few people are really achieving that ideal goal of being fully decentralized, and I think that's okay. I'm with Taylor on this 100%. Like, you have to take baby steps to getting there, and it's going to be a long road. Yeah. Well, since you mentioned the bZx attacks, let's definitely talk about those. I guess actually something that interested me is what you just said. You kind of sort of call out the
Starting point is 00:50:15 flashloons as one of the issues. But actually, somebody else that I interviewed Lev Livnev when I had him on the show, he was saying that for the BZX attacks, he felt like they weren't necessarily the culprit. You know, obviously they made it cheaper to make an attack, but he felt that, you know, really this was more like actual bugs in their code. So I was curious to know, like, do you think flashloans are a problem? Because there definitely were other people at that time who were saying that they are a problem. Yeah. So there's some nuance there. Like there was a specific coding flaw in BZX that allowed this attack to happen, right? They had a short position that should have been closed because it was under collateralized.
Starting point is 00:50:54 But it wasn't. That's the bug, right? However, the ability for somebody to exploit this became significantly easier because flash loans were a thing. So what I think most defy projects need to understand is the bar has now been raised that issues that were low severity before are high severity now and that it's insufficient to only focus on like a couple of things that a firm like Trilibitz reports to you that you actually have to go through and fundamentally address.
Starting point is 00:51:21 address every issue. So, and, you know, this gets back to, like, how do you actually secure a DFI project or what is the process for securing a smart contract at all? And, like, ensuring that you're not exposed to known attacks is great, but at some point, you have to have a deep understanding of what your own code is supposed to do and be able to prove that it operates the way that you expect. And that's defining security properties and testing security properties during development. That's like the next layer of a pyramid that I, that I visualize of.
Starting point is 00:51:51 application security maturity, where a third level might be all the token economics and the incentives that you've created, which is just a whole other thing that very few people have a handle on. Yeah, I was about to say, like, in addition to all the technical issues that we should be scared of, there's the whole thing where financials and incentives and economics and tokens, when you start thinking about that, those are attack factors. Like, if your token economics, don't ensure that everyone is making money in the way that they expect to, bad things could happen. It may not be as drastic as, say, like, the Dow. But, you know, if you're promising a sustainable business and you're actually losing money every single month, that's unexpected behavior.
Starting point is 00:52:42 And I think we are going to see way more of that come to the forefront as these more and more DPI projects start launching. Yeah. Yeah, in a similar vein, I actually wanted to ask about bug bounties because this was actually another issue with the BX attacks where the attacker was unhappy with the amount of the bounty offered, which was $5,000. Whereas with compound, bug bounties range from as little as 500 to all the way up to 150,000. So I was wondering, like, how should protocol teams determine what amount their bounties should be? Like, what do you guys consider fair? or like how is that determined? So I actually don't think the conversation is about the bug bounty dollar amount.
Starting point is 00:53:25 Like, there are some people for which the dollar amount is, like, not a thing. They don't care. There are good people and there are bad people in the world, basically. There are some people that are going to do things to screw with you, and there's nothing you can do to convince them otherwise. And there are other people that are good people that just want to help you. And they're very receptive to any sort of assistance or acknowledgment or thanks or money that you provide to assist them. And kind of what you want to do is you want to make sure that all those good people that are out there that are willing to communicate with you are kind of incentivized.
Starting point is 00:53:59 And it is easy for them to contact you and get those issues fixed. You don't want them to not know where to go, to end up tweeting about it, to end up putting it on Reddit or like wherever else. You want to make sure that you actually hear all the things that people have to say. So providing that free flow of information is the most critical thing. for a bug bounty program, and that means describing things like safe harbors, where you have language on your page somewhere that says, here is how you can skip the support queue. You don't have to email support at whatever and create a Zendesk ticket. Like, no, you can reach our security team directly. And if you do so, we won't sue you. And here are all the different ways that we won't go back
Starting point is 00:54:39 and harm you. Like, it is safe to tell us things. So that's really important. Yeah, I'm with Dan on this one as well. Like the bug bounty number, you know there's there's all sorts of philosophies on it but it's that's the least important bit the most important bits are everything else because if you think about um like typically they're called like gray hats right there there are these people that are somewhere in between a perfectly good person and a perfectly bad person um you want to sort of like you want them on your side you want them to be white hats for you and so the ways that you can do this are essentially um by not pisses them off and by making it very easy for them to get you information. And both of those are
Starting point is 00:55:24 insanely important because you can imagine that if someone either accidentally stumbles upon something or is hunting for something and then they try to get a hold of you or they try to share it or they try to figure out what this piece, how it connects to that piece or whatever it may be, every single one of these steps is going to irritate them more and more and more. And it doesn't take that much to piss people off on the internet. And again, if the person is somewhere in between perfectly good and perfectly bad, you know, they may, they may either just like not disclose it, just like give up and be like, screw this. Or they may be like, hey, I don't know what the heck's going on, like, but here's this huge exploit and just dump it on Twitter. And we've seen this
Starting point is 00:56:06 again and again and again and again. So, yeah, bug bounties, like, you should have the page. You should encourage people. You should give them all the ways to communicate with you. You should respond to those really, really quickly and professionally. You should have sort of your security information everywhere. Dan has a repo called like the security blockchain security contact list. If you're not on that list, when say Sam's son finds an exploit, he has to like go into Telegram and be like, yo, anyone know how to get a hold of X team? And then we're all sitting there going, Jesus, you know, and again, Sam is, you know, this example of pretty damn close to perfectly good, you know, but most people aren't going to be sitting in a telegram with a whole bunch of blockchain people and, you know, ask for a contact and then get an answer in two seconds. So the other thing to say here, too, is it shouldn't, like, so if this person was truly motivated by the amount of money that was being offered, that person still should not be able to ruin your day by virtue of them tweeting about some books.
Starting point is 00:57:12 your contract. You can't depend on the fact that the bug bounty exists, that no zero days will ever get dropped on your system. So this goes back to that security response discussion. We had a few minutes earlier where you need to have processes and procedures in place where you know what to do and you can safeguard people's money and you can take appropriate steps to respond to issues when they come out. Because just because you've got a bug bounty doesn't guarantee that people are always going to do the quote unquote right thing. and work with you. Yeah. So it's still your responsibility. What does that process look like? Because with BZX, there was yet another issue where, like, later on, it was revealed that one inch exchange had actually
Starting point is 00:57:55 previously notified them of a different vulnerability and then took issue with the fact that BZX did not pause their protocol during the 16 hours in which they created and deployed a fix. And so user funds were basically vulnerable during that time. There was a similar incident with Curve. And during that time, they kind of couldn't figure out, you know, should they alert people in what's going on? Because if that happens, then Black Hat hackers could take their money. And what they ended up, and their contract actually didn't have any kill switch or upgrade ability. So they ended up deploying a new version. But what they did, and the new version had the fix, but they didn't disclose any of that.
Starting point is 00:58:37 And then they kind of waited until most people migrated over to the new contract. and then afterward they announced it. So just curious, like, how do you think teams should handle bugs when they find out about them? It all depends on the situation. You know, like, if we, the very first parody, that was like the huge conversation because the people that were discovering this situation were discovering it all based on public information.
Starting point is 00:59:06 Like, we were all just looking at the chain, which means that anyone else could discover it. And of course, you know, we're not the Parity team, and we're also not able to, like, hit a kill switch or anything. And again, this is one reason I'm a fan of kill switches, because if you can kill it, it takes a lot of the options off the plate. And, yeah, striking that balance between not telling people and keeping things secret, and the flip side of telling everyone and knowing that, you know, everyone also includes people that are just going to exploit it and steal all the money, it's a really, really, really tough position to be in. And this is why kill switches should exist, because it takes that decision away, right?
Starting point is 00:59:55 If Curve could have said, oh, shoot, and then just, like, pressed pause, they wouldn't even have to go down that path, because once you're going down that path, there's no right decision. There's no good decision. You're in that situation where, what's the least shitty position? Yeah, context certainly matters here. And the first part of any kind of incident response plan is to prepare your company to deal with those unforeseen circumstances. So what are the set of things that could go wrong, and how will we react to them when they do? You're not supposed to figure that out on the fly.
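The kill switch Taylor describes is usually an owner-controlled pause flag that every state-changing function checks before doing anything. Here is a minimal sketch of that logic in Python, purely for illustration: real implementations are Solidity modifiers (OpenZeppelin's Pausable is a common one), and the class and names below are hypothetical.

```python
class PausableVault:
    """Toy model of a contract with a circuit breaker (kill switch)."""

    def __init__(self, owner):
        self.owner = owner
        self.paused = False
        self.balances = {}

    def pause(self, caller):
        # Only a privileged key can flip the switch.
        if caller != self.owner:
            raise PermissionError("only owner can pause")
        self.paused = True

    def deposit(self, user, amount):
        # Every state-changing entry point checks the breaker first,
        # so an incident responder can freeze funds while shipping a fix.
        if self.paused:
            raise RuntimeError("contract is paused")
        self.balances[user] = self.balances.get(user, 0) + amount
```

In the Curve scenario, the team would have paused the vulnerable contract first and then decided what to disclose, instead of facing both decisions at once.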
Starting point is 01:00:26 You should ideally have that in place while you're developing the product. And there are many choices that you can make. Some, you know, are going to work out. Like Curve, in their case, maybe withholding a little bit of information, but then clearly explaining it after they took actions to secure their users' funds, might be the right decision for them, but it could be the wrong decision for somebody else. So I don't have any, like, specific concerns about what they did. I think that, kind of, the ends justify the means a little bit in that case, since, you know, you safeguarded people's money, but it really, really depends on context.
Starting point is 01:01:02 Yeah, exactly. And the thing is that with all the situations where people withheld information and then revealed all shortly thereafter, I don't take issue with that. It's when they don't reveal all, or when the information is so available yet, you know, this core group is denying, denying, denying. You know, once the swing has swung, you have to go all in and make sure that people do have all of the information. Now, let's discuss oracles. That's an area that's
Starting point is 01:01:37 pretty susceptible to attack, and they can also be ripe for manipulation. And last summer, there was an oracle for the price of the Korean won on Synthetix that was just incorrect, and somebody was able to obtain a billion dollars
Starting point is 01:01:54 in profit with their bot, yeah, exploiting that. So I just wondered what your opinion was on oracles. Is it too early to have reliable ones? And if not, are there any particular characteristics that give you more confidence in certain oracles versus others? Yeah, this is just a huge discussion around, like, the security of your code doesn't just depend on, like, your code itself. You also have to consider the environment around it, the environment that it operates inside.
Starting point is 01:02:23 So, you know, when I'm looking at judging the reliability of a DeFi project, some things I really want to know are: how many oracles do they rely on, and how many would have to be untrustworthy for there to be some kind of manipulation of the protocol in a way that abuses my funds or the intended use case? So this gets into the kind of, like, my pyramid thing, where you've got your known vulnerabilities at the bottom, you've got your application-specific stuff in the middle, and you've got your economic model up at the top. This is definitely a blend of, like, steps two and three here, where you need to actually model that behavior and think
Starting point is 01:03:01 through what could possibly happen. There are some tools that people can use to model that that are already available, but they're not purpose-built for this task, right? Like, you can use tools that come from Trail of Bits, like Echidna and Manticore, which are essentially a little EVM runtime, written in Python and written in Haskell, that you can use to evaluate your contract with different environmental data being provided to it. But they're really more meant for finding more, like, code security-related issues, and less about providing this feedback on the behavior of your code in response to all
Starting point is 01:03:40 these weird oracle things. So I think that's a part where the tooling and the knowledge could get a lot more mature over the next few months or year. And it's certainly an area where it's needed as this incident shows. Yeah, and I'll just point out that a lot of the exploits that have been responsibly disclosed in the last two, three months have also seen. surrounded either oracles directly, manipulation of the price that, you know, the Oracle is getting the information from.
Starting point is 01:04:10 Like that even played into the BZX incidents as well. And yeah, again, like there are so many different ways that these systems can be, you know, outright attacked or have like an accidental bug or be manipulated, you know, in any of these unexpected behaviors, you have to, you have to think about them up front because otherwise, otherwise, you know, they're going to hit you, they're going to hit you hard and you're not going to know how to respond. You're not going to be prepared. And that's why, you know, I think the overarching theme of this conversation is we're not
Starting point is 01:04:44 mature. We're not ready for this. What do people need to do? What do the teams need to do? What does the community need to do to get like a little bit better? You know, there's all these little things that they can do to prevent bugs. And there's these tools that you can use to write better solidity. or check your viper or whatever it is.
Starting point is 01:05:03 But at the end of the day, like, there's so much going on that, at least for me, what I look for is like a team that is really obsessed with security, that's paranoid, that understands that bad things can happen and that they probably don't even know what those bad things are. Because for me, that's the best hope. If a team's super paranoid, that's the best hope because there's so many unknowns. That's my hot tip for figuring out if a company's got a secure product, too is for non-blockchain software. I always just pop them open in LinkedIn,
Starting point is 01:05:36 and I search for security and then their company name, and I see how many people they have working for them that actually have a responsibility to secure their company. And if it's zero, then I know that, okay, this whole thing is probably a garbage fire, but am I okay with that? So it really goes back to the same thing of, does anybody working for this DFI project have experience
Starting point is 01:05:57 that would indicate they know about security stuff? Did they work in traditional finance at some point? to have that sort of background, or do they have a past history of development or publications or at least public communication that they understand what they're in for as they're building this product? And if no, that's a serious concern,
Starting point is 01:06:17 and that's really the underlying most fundamental concern that I could have about the project. Do I trust the owner? And that's a question you can ask from multiple angles. Do I trust them not to run away with all my money? And do I trust them to actually do what's responsible? to protect it. Yeah, exactly. And the answer to that is not ever, well, Trail of Bits audited them, therefore I trust them. And that's what I think like the fundamental, all of these like disagreements about audits and what they are and what they're not, why the whole thing is missing the bigger
Starting point is 01:06:49 picture, which is there's no one thing that any team can ever do to be perfectly secure. And so throwing it on Dan's head when something goes wrong is preposterous because you're not asking the right questions in the in the first place like you are not asking the right questions well so i might not be asking the right question here because i actually asked so so normally obviously for like certain episodes i don't tell the guess what the questions are but here i did um ask dan and taylor to come up with maybe like a checklist of things that they think defy protocol team should do before um launching a protocol because i you know i wanted that you know i wanted them to have a good answer ready that would be useful to the teams. But Dan already warned me that
Starting point is 01:07:35 he, like, maybe thought my question didn't make sense. So curious to know what your answers are. Okay, yeah, sure. Yeah, so I thought about this a lot over the last few days because of this incident with Hedgwick where it can be difficult for an outsider to understand the level of maturity of a project. And that's really what we're trying to get at is like, what are the long-term steps that someone should take to end up arriving at a secure product. And how do we evaluate those? How do we communicate those? And what are the important steps within them to have actually taken? So on one hand, we have a set of critical controls that are necessary for defy projects to have. They have to have access controls. They have to deal with numbers correctly. Like kind of important. The degree of
Starting point is 01:08:23 centralization or decentralization, their documentation and specs, the kind of key management that they use. Their security monitoring, the level of testing that they've gone through. Those are all kind of indicators. And what I think we're planning to do for our reports in the future is rank all those critical controls from weak to excellent, where each of them, there's no like overall rating. There's no like, hey, this is safe. At the end of the day, if you get like five out of seven, then like you're good.
Starting point is 01:08:53 But it'll at least provide some information from our team in our expert view where we think they are in terms of building a defensible system. Now, that's one way to take it, and that's sourced from the threat models. There's another ancient web kind of document that I love to cite. So if you go back to the year 2000, there's a guy on the internet named Joel Spolsky, kind of a famous guy. He created the fogs bugs system, one of the best bug tracking managers that people had before, like GitHub and GitLab were about.
Starting point is 01:09:30 created Trello, and it's kind of just been like a software engineering leader for many years. He came up with this thing called the Joel test, and it was a set of 12 yes or no questions that you could ask in 30 seconds or less to figure out the maturity of a development team building software. I love that because it took something that was so complicated at the time. Things like the capability maturity model, CMM, are a kind of really rigorous way to evaluate if a team builds good software. and he managed to simplify it down to a 30-second yes-no exercise. So what we've done is we tried to build that same thing for Ethereum, and we could call it the Dan Test, but it's also kind of the Dan Jocelyn test
Starting point is 01:10:10 since he came up with a lot of it with me. But, you know, there are some basics here. Like, can you compile without warnings on the latest compiler that you're using? Do you import third-party libraries from a package manager and track their versions? have you located and documented every privileged operation in the system? If you can't say yes to those questions, then you're probably not ready to go. So I have a big list of those. I'm going to publish them all next week.
Starting point is 01:10:40 Oh, great. When you do that, send me the link so I can put them in the show notes. Yes. I'm so glad you're doing this because it's really tough. And this is what I think that I asked on Twitter two weeks ago now, you know, what are are the things that like every developer should do before you know having 25 million dollars in their contract on main net you know what are the big red flags and there's a lot of like really deep in the weeds type you know type things that i think are really really important um but it was actually
Starting point is 01:11:13 interesting because some of the responses were like very different but also really enlightening and so you know one thing that came out of that conversation was you know if someone doesn't have an audit, that's a really big red flag. Like if they don't get anyone to look at their code, that's a red flag. You know, but that doesn't, just because they haven't on it, it does not mean that they're secure. It does not mean that they're ready for main net. It just means that like, you know, there's not a red flag in that area.
Starting point is 01:11:47 It doesn't put a green one there. It's just not a red flag. And then some of the other ones that I think were really interesting, you know, we're around the teams and the people and how sort of like how much effort and time they dedicated to the things that weren't the literal code. So a lot of teams obviously love to focus on the code.
Starting point is 01:12:10 They love to focus on the product. They want to build this awesome system. But did they spec out the project before starting to write that code and figure out what exactly the architecture is going to look like? did they document the intended behavior? You know, does the white paper, is it like a marketing piece or is it actually, you know, a technical document that dives in all the different situations?
Starting point is 01:12:38 Another really interesting one that I can't necessarily call it a red flag today because not a lot of people do it, but certainly would allow me to have more faith in a team is if they, anytime they sort of acknowledge the risks of their project or their code or their system, you know, if they've taken the time to, especially if they've taken the time to document and share where the bad things are that could happen, that shows me that they not only have like awareness around their code base, they also have awareness that bad things could happen, which is something that is surprisingly missing in this space. And it also shows that, you know, they've taken the time to write it down, and that provides, like, an additional level of accountability.
Starting point is 01:13:29 And so, you know, all of these sort of tools, you know, there's not one thing that's going to make a project trustworthy. There's not one thing that's going to make a project secure. But if you take them all together, you know, a team that is a team that has a better chance of success is a team that, you know, has documents. They've written tests. They have a specification. You know, they're engaged with the community for a long time. They're open to questions. They're open to answering the questions.
Starting point is 01:14:02 You know, they're aware that not everything is perfect and glorious all the time and that bad things can and probably will happen. And I'll say, I think the first conversation I ever had with Robert from Compound, I was very skeptical. And I was like, so you're just going to have all this money on the smart contract. and, you know, how are you ever going to know it's secure? And he literally just responded and he was like, well, there's always a non-zero risk. Like, it's never, there's never going to be a moment where I can go to sleep and be like, everything's perfect. Nothing bad will happen. And it really, it knocked me off my feet because I had been, you know, talking to so many people in the space where, you know, the answer would have been, oh, well, we had two audits by two different auditors and then we had it formally verified and, you know, we have 100% test coverage. But it's actually Robert that gives me more faith in his team that code the compound protocol because I know that today and tomorrow and the next day, you know, that that culture is going to
Starting point is 01:15:03 always be on the lookout, you know, whether that's the lookout for other hacks that may also affect the compound system, whether that's awareness of, you know, flash loans coming into existence, whatever it is. You know, they have a better chance of success than, you know, even someone that has had all of the audits and use all of the tools. Right. Yeah, that makes sense. And I love it that his honesty is actually what gave you confidence. All right. Well, this has been a fantastic conversation. I've really enjoyed it. Thank you both for coming on Unchained. Thanks a lot. Thank you so much, Laura. Happy to be here. Thanks for tuning in. To learn more about Dan, Taylor, and Defi Security,
Starting point is 01:15:44 be sure to check out the links in the show notes of your podcast player. Whatever your favorite crypto meme is, Lambo's, unicorns, or the Guy Fox mask, it's probably on the Unchained Rabbit Hole t-shirt. Check it out at shop.unchainedpodcast.com. And also be sure to check out our hats, mugs, and stickers, too. Unchained is produced by me, Laura Shin, with help from fractal recording, Anthony Yoon, Daniel Ness, Josh Durham, and the team at CLK transcription. Thanks for listening.
