Unchained - How Solana's Largest Perp DEX Was Exploited for $285 Million
Episode Date: April 4, 2026

Chaos Labs' Omer Goldberg unpacks the $285 million Drift Protocol exploit. Did the perp DEX fail to implement best practices?

Sponsored by Nexo: A crypto lending and borrowing platform that lets users earn interest on digital assets and access credit against their holdings. Now available in the US with exclusive privileges for new clients. Get started today: http://nexo.com/unchained

Solana's biggest perp DEX Drift Protocol was exploited for $285 million on April Fool's Day in a compromise observers have described as "methodical" and "chilling." Chaos Labs founder Omer Goldberg unpacks how the exploit, which is among the 10 largest in DeFi history, went down, including how hackers leveraged a Solana feature to lie in wait without triggering alarms and how the attack bore some resemblance to the Mango DAO and Resolv exploits. He also weighs in on criticism against Circle for its slow response and whether the exploit has the markings of a North Korean state-sponsored attack. In Omer's telling, the loss could have been avoided. Listen to find out more!

Guest: Omer Goldberg, Founder and CEO of Chaos Labs

Previous appearances on Unchained: How the Resolv Hack Was a Web2 Exploit, Not a Crypto One - Uneasy Money

Links

Unchained: Drift Protocol Suffers $285 Million Exploit After Admin Key Compromise and Oracle Manipulation
Uneasy Money: How the Resolv Hack Shows an Audit Doesn't Mean 'Secure'
The Mango Markets Attacker on Whether His 'Trade' Was Ethical or Not
North Korean Hackers Are Winning. Is the Crypto Industry Ready to Stop Them?

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Hi everyone, welcome to Unchained.
Your no-hype resource for all things crypto.
I'm your host, Laura Shin.
Thanks for joining this live stream.
Before we get started, a quick reminder.
Nothing we hear on Unchained is investment advice.
This show is for informational and entertainment purposes only, and my guest and I may hold assets discussed on the show.
For more disclosures, visit unchainedcrypto.com.
Introducing Nexo, the premier digital wealth platform.
Receive interest on your digital assets.
borrow against them without selling. Trade a variety of cryptocurrencies, all in one platform,
now available in the U.S. Get started today at nexo.com slash unchained.
Today's topic is the Drift Protocol Hack. Here to discuss is Omer Goldberg, founder and CEO of Chaos Labs.
Welcome, Omer.
Hi, Laura. Thanks for having me.
Solana's Drift Protocol, the largest decentralized perpetual futures exchange on the Solana blockchain, was hacked for $285 million, which, just for context, the protocol's total value locked before the attack was about $500 million.
So that was over half of the money in the protocol that was drained.
That also puts this hack amongst the top 10 DeFi hacks of all time and the biggest this
year thus far.
The Drift token dropped from over seven cents to 3.9 cents on the news and is now trading a bit above five cents. So the hack was pretty
multi-layered and also quite methodical, it seems. It sort of seemed chilling, reading about it,
and it made me feel a little uneasy. The attacker or attackers compromised the system a little
while ago, actually, and then they kind of waited. So, yeah, there are things about it that seem
similar to the Bybit attack. But anyway, Omer, why don't you walk us through what it was that it
appears these hackers did to perpetrate this hack? Yeah, definitely. And I really agree with what you
said in the opening that it is chilling. I think we've seen a lot of hacks, unfortunately,
already in this year. Many of them seem like, you know, could be someone who's potentially
less experienced and gains access to some key or admin privilege and kind of takes it from there.
But this one was very technical, well thought out.
And from what we know today, it spanned at least three weeks.
I can jump into kind of the end-to-end timeline.
Yeah, yeah, please do.
Cool.
So around, I think as of today, around 21 days ago, if I'm not mistaken, for the first time, Drift initiated a migration towards a multi-sig. This multi-sig was a 2-of-5 multi-sig.
Notably, it had zero time lock on any of the functions it could execute.
And for listeners, what time lock means is even though certain privileges in an application need to be signed by whitelisted addresses, a time lock basically says after they sign it, there's a gap before it actually executes.
And this is typically an additional security precaution to make sure that what was signed and the change enacted is indeed what you want it to be.
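The timelock pattern described here can be sketched in a few lines of Python. This is an illustrative model only, not Drift's or any protocol's actual implementation; the class name, action IDs, and the 24-hour delay are assumptions for the example.

```python
class Timelock:
    """Sketch of a timelock: a signed action is queued first and can only
    execute after a fixed delay, giving reviewers a window to inspect or
    cancel it. The 24h delay is illustrative."""

    def __init__(self, delay_seconds: int = 24 * 3600):
        self.delay = delay_seconds
        self.queue = {}  # action_id -> earliest allowed execution timestamp

    def queue_action(self, action_id: str, now: float) -> float:
        eta = now + self.delay
        self.queue[action_id] = eta
        return eta

    def cancel(self, action_id: str) -> None:
        # The review window is what lets a team kill a malicious change.
        self.queue.pop(action_id, None)

    def execute(self, action_id: str, now: float) -> bool:
        eta = self.queue.get(action_id)
        if eta is None or now < eta:
            return False  # not queued, cancelled, or still inside the delay window
        del self.queue[action_id]
        return True

tl = Timelock()
eta = tl.queue_action("add-collateral-CBT", now=0)
print(tl.execute("add-collateral-CBT", now=10))   # False: still locked
print(tl.execute("add-collateral-CBT", now=eta))  # True: delay elapsed
```

With zero timelock, as in this case, the `execute` step happens immediately after signing and the review window never exists.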
So this happened about 20 days ago.
And in parallel to this, there was a fake token set up called CBT, completely fake, with no kind of preexisting activity outside of this hack.
And the attacker waited.
I think some of the speculation was that they waited until April 1st for April Fool's Day so that when messages of the hack were being dispatched, there would be confusion about whether or not it was real or a prank.
And pretty swiftly, within seconds, at least for the first batch, the attacker executed a series of transactions that effectively enabled them to deposit and manipulate the price of the collateral into the Drift vaults and extract all of the blue chip assets.
So that was like the first part of the attack.
Later, there's how they kind of got out, bridged out, and into Ethereum.
but there were at least five or six discrete steps that the attacker had to do,
which for me indicates that this was not like a random person who stumbled upon the keys.
They studied the program.
They were methodical and strategic in how they planned everything and executed it.
Yeah, yeah.
And we'll break down more of these steps as we go.
But when I was looking at this, it really felt like the original sin here was around this, the admin key.
So explain how it was set up and how it appears to have been compromised.
Yeah.
So in contrast, last week we were talking about the Resolv hack. And the Resolv hack was unique in the sense that there was one key that had effectively unlimited privileges to mint as much USR as they wanted, which made it easier for the hacker in terms of how many keys needed to be obtained and compromised.
Here, it wasn't a single key.
It was a multi-sig.
However, it was a 2-of-5 multi-sig.
So this is like the minimum amount of signatures
that you would need in a multi-sig.
So it's one step above a single key.
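The m-of-n threshold logic Omer is describing can be sketched briefly. A minimal illustration, with made-up signer names, of why 2-of-5 is only one step above a single key: any two compromised keys out of the five are enough.

```python
def multisig_approved(signers: set, approvals: set, threshold: int) -> bool:
    """Sketch of an m-of-n multisig check: count approvals coming from
    distinct whitelisted signers and compare against the threshold."""
    valid = approvals & signers  # ignore signatures from unknown keys
    return len(valid) >= threshold

# Hypothetical 2-of-5 setup, mirroring the configuration discussed here.
signers = {"A", "B", "C", "D", "E"}
print(multisig_approved(signers, {"A"}, threshold=2))       # False: one signature
print(multisig_approved(signers, {"A", "D"}, threshold=2))  # True: 2-of-5 reached
```

A 3-of-5 threshold would have required the attacker to control a third key before any privileged action could execute.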
We're still waiting for an official post-mortem, I think. I think Squads have written a few updates.
As of right now, it doesn't look like
that their infrastructure was compromised.
They're the multi-sig provider on Solana.
Drift are still in an active war room.
We don't know exactly how, although there is speculation with the recent wave of supply chain attacks
that have been kind of perpetrated and executed by Lazarus or DPRK.
And by that you're talking about the Axios attack that, so I'm not a technical person,
but it's something like there's these libraries that just hundreds of millions of people
access kind of very frequently.
And so, you know, these exploits in a way could be.
sitting in a lot of different systems.
Yeah, it's exactly that.
Axios is but one of them.
Axios is a particularly popular package, but developers in crypto and Web2, they'll install hundreds, sometimes thousands, of different packages and dependencies
that are open source and written by other developers.
The way that this particular method works is it's actually quite clever.
So if you can actually gain control of one of these packages, you just make a tiny modification where you add a piece of code that, once run on any developer's machine, effectively gives you root access to the machine so you can read and write whatever you want.
And the second something like that happens, which we've seen with Axios last week and with LiteLLM, one of the biggest AI packages, though there have been hundreds of packages infected in this manner, you can do whatever you want on the machine.
And this is a much easier vector because you don't need to break any cryptography.
You just need to maybe do a little social engineering on one of the many open source developers who may not even realize who is using their package and how.
And you have access to the whole machine.
So this is like, it allows them to cast a wide net.
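One defense against the "tiny modification" Omer describes is pinning dependencies to content hashes, the way npm and yarn lockfile integrity fields work. This sketch is not from the episode; the package contents and lockfile shape are invented for illustration, showing why even a one-line injected change is detectable when installs verify hashes.

```python
import hashlib

def verify_dependency(name: str, contents: bytes, lockfile: dict) -> bool:
    """Sketch of lockfile integrity checking: each dependency is pinned to
    a content hash, so any modification to the published package changes
    the hash and fails the check."""
    expected = lockfile.get(name)
    actual = "sha256-" + hashlib.sha256(contents).hexdigest()
    return expected == actual

# Hypothetical lockfile entry recorded at the time of a trusted install.
lockfile = {"axios": "sha256-" + hashlib.sha256(b"original package code").hexdigest()}

print(verify_dependency("axios", b"original package code", lockfile))                    # True
print(verify_dependency("axios", b"original package code\n# injected line", lockfile))   # False
```

This catches a package that changed after the lockfile was written; it does not help if the malicious version was already present when the dependency was first pinned.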
Yeah.
So the other piece that was interesting is you tweeted about, you know, some of the steps that you saw.
And when you talked about that new multi-sig, you said basically, Drift created this new multi-sig, and it was a signer from the old multi-sig that created it.
But then you wrote, the signer did not add themselves to the new role.
And at first, I was like, this must be a typo.
I was so confused.
And then I realized, oh, wait, what he's saying is, you know, somebody had gotten a hold of the keys of the previous signer, created this multi-sig.
They probably weren't aware.
And then you noted that basically the new multi-sig was immediately signed by a second cosigner
one second after it was created.
And that met the two out of five threshold.
So when you see that kind of evidence of, you know, the steps leading to the hack,
what does that say to you about, you know, how this hack was pulled off and why the admin
key was so critical to it?
Yeah.
So this next part is a bit of how I interpret the event and trying to understand how it could have happened.
It's still very much in the fog of war.
But Drift had actually communicated about the migration to like a multi-sig.
And again, I think we mentioned that this might be the first time that this happened, at least for this specific role.
I'll speak on why.
I think it may have happened, like suddenly now a week before the hack.
but it looks like this was a planned event,
and I think that the hacker had some type of access
that the team didn't know about.
And so he may have had access
to the rest of the keys,
and that's where the two signatures came from.
Okay, okay.
So then the other thing that was so fascinating was,
so you briefly mentioned this CBT token
that they created,
and they gave it some wild parameters,
ones where if anybody had looked at it, they would have realized like this is not a legit coin.
What were those parameters and why would that flag that this coin could be so dangerous?
Yeah. So I think just to clarify between two things. The coin CBT itself is just like a scam token that was created for the purpose of this attack, had very low liquidity, and was just a regular kind of token on Solana. What did have the unlimited, infinite, max parameters set was the market. So this enabled the exploiter to add CBT as a new collateral asset on the Drift protocol.
So depositing it as collateral, they then continued to pump the price of that pool.
Because they also, as they configured the market, could decide which Oracle was being used.
So they created a token, spun up a fake oracle, or like a real oracle that was pointing to the fake pool, pumped the price, and then they had all of this kind of credit in the system that they could use to withdraw and drain Drift of all of the blue chip assets.
So this is again why I say it's sophisticated
because this attacker was preparing.
He spun up the feed, he was running fake volumes in the AMM where the CBT pool was being traded and where the oracle read the price from.
And then also created a fake market on Drift, with max risk parameters. All of this happened unnoticed, effectively, during the time of the exploit and with no restrictions.
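A listing guard is the control that should have caught this step. The sketch below is hypothetical: the field names, limits, and the idea that they map to Drift's actual market configuration are all assumptions; it only illustrates rejecting a new collateral market whose risk parameters are set to unlimited or max values.

```python
def validate_new_market(params: dict) -> list:
    """Sketch of a sanity check on new collateral listings: flag
    parameter combinations that should never pass review. Field
    names and limits are illustrative."""
    problems = []
    if params.get("max_deposit") in (None, float("inf")):
        problems.append("unbounded deposit cap")
    if params.get("initial_asset_weight", 0) >= 1.0:
        problems.append("collateral counted at full face value")
    if not params.get("oracle_allowlisted", False):
        problems.append("oracle source not on the approved list")
    return problems

# Parameters resembling the attacker-created market described here.
attacker_market = {"max_deposit": float("inf"),
                   "initial_asset_weight": 1.0,
                   "oracle_allowlisted": False}
print(validate_new_market(attacker_market))
```

Any non-empty result would block the listing (or at minimum page a reviewer) instead of letting the market go live with no restrictions.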
Yeah, yeah. So here's where we get to that original point we were talking about how this feels chilling,
because so far we can see that not only was there a social engineering component, but then there was an Oracle manipulation, and then there was market manipulation.
Like it's just, you know, using all the tools in the toolbox in, you know, I'm going to give them credit in a creative way.
So, you know, at that, so at that point, just talk about then exactly what they did with the with the coin to pull off the attack.
Yeah. So, A, in terms of all the toolbox, the oracle manipulation pump is very reminiscent of at least Mango Markets; getting access to all of the admin keys is very reminiscent of USR.
So definitely kind of drawing on a lot of different techniques
and stacking them together to do this.
The main difference here, however, was the manner
in which the market was created.
So in any kind of lending application or vault application,
it's a pretty big deal which assets can be whitelisted
as collateral because you're extending a credit line
against those assets.
So again, this was an unknown token. With those privileges of the admin, they were able to whitelist this token and decide where the oracle was coming from,
what kind of AMMs it was reading from.
And here, once they pumped the price of the asset on a very low liquidity pool, effectively they had hundreds of millions of dollars in collateral. At least that's what the Drift program viewed it as, and it gave them all this credit to extract all of the blue chip assets against it.
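The thin-pool pump can be shown with a constant-product AMM model. All numbers here are hypothetical, chosen only to show the mechanics: in a pool this small, a modest buy moves the spot price by 100x, and an oracle reading that pool then values the attacker's remaining token stash at millions.

```python
def buy_base(pool, quote_in):
    """Swap quote tokens into a constant-product (x*y=k) pool and
    return the new pool state and the new spot price."""
    base, quote = pool
    k = base * quote
    new_quote = quote + quote_in
    new_base = k / new_quote
    return (new_base, new_quote), new_quote / new_base  # spot price

# Hypothetical pool: 1M fake tokens against $10k, so spot price is $0.01.
pool = (1_000_000.0, 10_000.0)  # (CBT, USDC)
pool, price = buy_base(pool, quote_in=90_000.0)  # attacker buys with $90k
print(round(price, 2))  # 1.0 -- a 100x move for $90k

# An oracle reading this pool now values the attacker's pre-held stash:
attacker_cbt = 5_000_000.0  # illustrative pre-minted supply
print(f"collateral value: ${attacker_cbt * price:,.0f}")  # $5,000,000
```

The deeper the pool, the more capital a pump requires; the attack works precisely because the attacker controlled both the pool's liquidity and the oracle pointing at it.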
So there's a few extra steps here, but the actual method of extraction is something that we've seen time and time again.
Yeah. Honestly, this reminds me of when, for whatever reason, after Avi Eisenberg did the mango markets attack, he reached out to me saying he wanted to come on the show.
And when I interviewed him, I think I used the word market manipulation or something. I don't remember exactly what I said, like these actual words.
He said to me something like, oh, well, what is the legitimate price of a token?
Like he kind of disputed that it was manipulation.
He was saying that was the legitimate price at that time.
So, you know, again, that's, you know, the kind of maneuver that was pulled here,
like at least when it comes to the smart contract.
Interestingly, I don't think I've ever shared this before,
but Avi Eisenberg also contacted us at that time.
because after the Mango Markets attack, he tried to wage a similar attack on Aave.
And we had configured the parameters such that we have these machine learning models
which estimate the cost of the attack.
So Avi was asking us for access to it.
He was like, hey, can I see if my numbers match up with yours?
So that was one of the, yeah, that was one of the crazier DMs that we received at chaos.
Yeah, clearly he's not the typical person.
But anyway, okay, so let's talk about the next bit, because there's another component that I've seen has been generating some controversy on crypto Twitter, which is this notion about durable nonces, which is kind of a core part of how Solana works.
So explain what a durable nonce is, like what the purpose is for them, and then how that also became a potent attack vector.
Yeah, so the durable nonce, basically like what it solves for is that you can sign transactions that don't have time expiration.
And there are legitimate use cases for this.
Like, for example, in an application where the user doesn't want to be burdened with signing a bunch of transactions,
perhaps the application wants to ask the user to give a few permissions up front and then streamline the UX from there.
So this is like one of the things that it solves for.
But what it did here is as soon as the attacker had access to those keys,
they were able to sign on those transactions and not ring any alarms
and just wait for the perfect time to execute the attack.
Typically, without the durable nonce, they would have to sign and execute the attack,
I think, within a two-minute time frame.
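The difference Omer describes can be sketched as a validity check. This is a simplified model, not Solana's actual runtime logic: a normal transaction references a recent blockhash and expires after roughly 150 slots (on the order of a minute or two), while a durable-nonce transaction stays valid indefinitely, until the nonce account is advanced.

```python
def tx_still_valid(signed_at_slot: int, current_slot: int,
                   uses_durable_nonce: bool, nonce_advanced: bool,
                   max_age_slots: int = 150) -> bool:
    """Sketch: blockhash-based transactions expire quickly; durable-nonce
    transactions remain executable until the nonce is advanced."""
    if uses_durable_nonce:
        return not nonce_advanced
    return current_slot - signed_at_slot <= max_age_slots

# Ordinary tx signed roughly three weeks of slots ago: long expired.
print(tx_still_valid(0, 4_000_000, uses_durable_nonce=False, nonce_advanced=False))  # False
# Durable-nonce tx signed at the same time: still executable on demand.
print(tx_still_valid(0, 4_000_000, uses_durable_nonce=True, nonce_advanced=False))   # True
```

This is what let the attacker pre-sign transactions with the compromised keys and then sit quietly until April 1st.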
Oh, okay, okay.
So here's like a funny thing.
Or, oh, okay, no, that's, okay.
So, sorry, I'll explain in a moment what I meant by that.
So, okay, so Anatoly Yakovenko, the founder of Solana Labs, tweeted about the durable nonces.
He tweeted, quote, durable nonce is observed on chain.
So to sign it off chain, attacker creates the nonce on chain first and then transfers
authority to the target.
Your cold storage setup can actually monitor for this and flag it with PagerDuty.
There's no way to not have this feature because cold storage custodians have to have
arbitrary amount of time to recover the key. So any system with smart contracts will basically
end up with some version of this no matter what. So do you agree with him that there is no other
way to not have this, or do you think that durable nonces should be rethought in some way?
So my understanding of what Anatoly is tweeting, my understanding of that tweet, is that you can monitor and set alerts for this. I mean, in theory, you can sign any type of transaction.
Here, there was a transfer of authority,
which is actually something that we've seen in many, many DPRK attacks,
that they transfer authority to a different wallet in an attempt to obfuscate their addresses.
and an attempt to obfuscate like their addresses.
And all of this can definitely be alerted and monitored for.
So my understanding of the tweet, and just generally my thoughts on durable nonces, is that it was used the correct way.
It's not like a bug in the VM.
Unfortunately, it's been used for a lot of exploits.
It does have legitimate use cases, but for sure it's something that you can monitor and get alerted on.
And that's where PagerDuty comes in, especially for anyone that's dealing with hundreds of millions of dollars or doing hundreds of billions or even trillions on an annual basis from a notional perspective.
These are exactly the things that you want to be looking out for.
And PagerDuty is just a very common kind of software solution to alert you when these things happen, get through your do-not-disturb, and kind of get the team that's on call to start taking a look.
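The monitoring Omer and Anatoly describe amounts to scanning on-chain events for red flags and paging on a match. The event shapes and field names below are hypothetical; in a real deployment each alert would be pushed to a paging service rather than printed.

```python
def scan_for_red_flags(events: list) -> list:
    """Sketch of on-chain monitoring: flag event types that were all
    part of this attack and that should page an on-call human."""
    alerts = []
    for e in events:
        if e["type"] == "nonce_authority_transfer":
            alerts.append(f"durable-nonce authority moved to {e['new_authority']}")
        if e["type"] == "multisig_config_change":
            alerts.append("multisig membership or threshold changed")
        if e["type"] == "new_collateral_listed":
            alerts.append(f"new collateral market listed: {e['token']}")
    return alerts  # in production: push each to a pager, bypass do-not-disturb

# Hypothetical event stream resembling the lead-up to this exploit.
events = [{"type": "nonce_authority_transfer", "new_authority": "attacker111"},
          {"type": "new_collateral_listed", "token": "CBT"}]
for a in scan_for_red_flags(events):
    print("ALERT:", a)
```

Either of these two alerts, weeks or minutes before April 1st, would have been a chance to intervene.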
Okay, I guess the one thing is if the page is going off, then isn't it too late at that point?
Like already something's happened, right?
And like you might not be able to do anything about it.
So is that really the best solution?
No.
I mean, so basically, in anything security or risk management related, or operational security, the way to solve this is actually having, like, a good setup, sound architecture, and a robust system.
And that's number one.
This is like after the fact.
And if we look at even hacks from the past month, like USR and even here, it looks like it took the team hours to react.
And so that would solve for that second part of knowing that something bad has happened, but it's not necessarily going to prevent it.
Right. Okay. Yeah. That's why, hmm, I don't know. It feels to me like at the very least, potentially it could be structured differently, but I'm not a technical person.
And so anyway, okay.
It could be structured in many ways to your point, right?
Like, this is one of the big breakthroughs in EVM and SVM, that it's Turing complete.
You can program whatever you want.
And so, yeah, there's many ways to prevent it.
But when something bad happens or something irregular, you want the monitoring just to alert the team and even the partners who are integrating.
Yeah.
Okay.
Well, in a moment, we're going to talk about what potentially Drift Protocol should have done,
but first we're going to take a quick word from the sponsors who make this show possible.
Step into a new era of wealth.
Discover Nexo, the premier digital wealth platform.
Manage your crypto portfolio with confidence and control.
Receive interest on your digital assets.
Borrow against them without selling.
Trade a wide range of cryptocurrencies, all in one platform.
Now available in the U.S.
With 30 days of exclusive privileges for new clients.
Experience Wealth Club Premier.
Access enhanced interest rates, reduced borrowing costs,
and crypto cashback on swaps.
Get started today at nexo.com slash unchained.
Back to my conversation with Omer.
So one other thing that I wanted to note here was,
I mean, you had such great tweets kind of outlining what happened,
But one of the points you made was you said there's no time lock, no multi-sig, and no delays,
which is how all this happened.
I also saw this is very direct.
Beanie Maxi tweeted, quote, what makes this $280 million Drift hack so devastating is that it's not a smart contract exploit.
We've seen flash loans breach similar platforms before.
In this case, a dev was phished for an admin key.
It's inconceivable that no risk safeguards were in place anywhere.
And then he said, if I lost my money in this, I'd be certainly lawyering up right now.
So I'm just curious, like, when you kind of look at their setup, how would you say it differs from best practices?
I think there were a few ways.
It's not just one.
But time lock is definitely one of them.
And in this case, like, having a time lock between when the transactions were signed and when they actually executed would have given a window of time for the team to potentially stop it.
So that's number one.
The reason for a time lock, yes, it's hacks and exploits,
but it's just also to make sure that you're doing what
you think that you're doing.
So in lending markets, in derivative markets,
when you're listing new assets, when you're changing
parameters and you're making changes that are sensitive to the system,
it's very common that you'll have either time locks
that are a few hours or even a day for the thing to go through,
for more parties to kind of look at it,
whether it's auditors, the risk team, or the
core team itself. So that's number one. Like, even if everything else had gone wrong, with the time lock there still would have been a chance to potentially stop it, and definitely for all the 20-plus integrating partners to reduce and mitigate exposure. That's on the time lock. I can go more into
time locks or speak about maybe the multi-sig setup or some of the infrastructure.
Anything you feel like could have been done better. Yes, I mean, a 2-of-5 multi-sig, again, it's one step over a single key. So for something that houses so much TVL, you'd want to see more, like 3-of-5.
And ideally, you can use kind of things like biometrics to ensure that the person who's signing is
actually like who you think it is. And there's just so much you can do there today in terms of
the signing itself. And then it's just alerting. All of these safeguards and admin privileges were
compromised by the attacker. But all of this happened without any alert to the core team and not to
any of the teams that were integrating. So I think most of this news broke through Twitter. It took
several hours. By the time the news was out there that the core drift team had communicated,
there's also nothing that the 20 plus integrators could do. And then similar to kind of the contagion
with Morpho and USR that we spoke about last week, it's also on integrating applications. So,
understand the counter party risk of whatever they're integrating and set up alerts, circuit
breakers and risk guards as well.
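One form those circuit breakers could take is an outflow limiter. This is an illustrative sketch only: the 10% threshold, one-hour window, and TVL figure are assumptions, not anything Drift or its integrators actually run.

```python
from collections import deque

class OutflowBreaker:
    """Sketch of a withdrawal circuit breaker: halt withdrawals when
    outflow over a rolling window exceeds a fraction of TVL. The
    10%/1h parameters are illustrative."""

    def __init__(self, tvl: float, max_fraction: float = 0.10,
                 window_seconds: int = 3600):
        self.limit = tvl * max_fraction
        self.window = window_seconds
        self.events = deque()  # (timestamp, amount) within the window
        self.tripped = False

    def allow_withdrawal(self, now: float, amount: float) -> bool:
        if self.tripped:
            return False
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()  # drop withdrawals older than the window
        if sum(a for _, a in self.events) + amount > self.limit:
            self.tripped = True  # halt everything and page a human
            return False
        self.events.append((now, amount))
        return True

b = OutflowBreaker(tvl=500_000_000)  # roughly Drift's pre-hack TVL
print(b.allow_withdrawal(0, 5_000_000))    # True: normal flow
print(b.allow_withdrawal(60, 60_000_000))  # False: trips the breaker
print(b.allow_withdrawal(61, 1))           # False: stays halted
```

A drain of over half the TVL in seconds, as happened here, sails far past any sanely configured threshold, which is exactly why this guard buys time.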
Yeah, yeah.
It's honestly, like, so I don't know drift.
And I don't want them to like take what I'm saying as, you know, being harsh at a time
when they're sort of down bad.
But, you know, it just sort of looks like they weren't super paranoid.
Like, to me, that's how it, you know, reads, and I'm not a security person, but I've obviously written about it a lot.
So yeah, there was just like a sense to me like, like, yeah, it just wasn't paranoid enough.
But you started to talk about contagion, which definitely was an issue here.
So explain, you know, how this hack started affecting the other money legos in the Solana ecosystem.
Yeah. Maybe just a word like on the team before I jump into that and just for all teams.
I know some of the core contributors pretty well.
They spend time in New York, amongst other places that we frequent.
And they're a super hardworking team.
I've seen them work all hours of the day probably over the past few years.
So I don't know that they're not paranoid, and we'll wait for the post-mortem, so much as there are all of these audits and things that you should do in a Web3 system.
But again, all of these exploits that we're seeing are classic Web2, opsec kind of risk exploits, and there are systems like PagerDuty, but also CrowdStrike, Palo Alto Networks, all of these very big companies that are literally built to help mitigate exactly these things.
So I don't know about their security posture.
We'll wait for the post-mortem, but my experience is that this kind of understanding is something I've seen as less prevalent amongst Web3 teams.
Okay, so the contagion.
Yeah, the contagion.
And like you said, this was a top 10 hack in terms of TVL and funds lost, correct?
Yeah.
Yeah.
At least as of yesterday.
As of yesterday.
Yeah.
When I checked it yesterday.
As of yesterday.
And I think also every time that we check, it looks like the number in terms of
funds lost is continuing to increase as more exposure is being kind of disclosed.
I think this was number one in terms of discrete protocols that were affected, with over 20.
So I can pull up the list here, but effectively the-
Yeah, you can also share it in the chat and the engineer can show it on screen.
Sure.
Let me put it in chat.
Sorry, this is the wrong one.
It's actually on the Twitter thread of Chaos Labs.
So if you want to pull it up, we just publish an article that reviews everyone who was affected by this.
I'll drop the link in the chat.
Okay.
There we go.
Now we can talk about the contagion itself.
So roughly, you can break it down into three categories of applications that were affected, hit by it, and as a result, have lost kind of user deposits in one way or another.
The first is the vaults.
So Drift started as a pure perp DEX. They've expanded the product to offer the Drift vaults, effectively,
and a bunch of curators have created their own vaults on top.
So there's a curator called Prime Number that had over 10 million deposited into drift
via their vault at the time of exploit.
Guntlet had, I think, 6.4 million.
Nitrade was over 3 million as well.
So this is kind of the first, and this is the direct parallel
to kind of Morpho and USR.
The second is borrow-lend integrations.
So there are protocols like Pira that use a Drift integration in their application to offer the full borrowing and lending stack.
So here, their application was compromised
via dependency on drift as well.
The third is all of the emergence
of these new yield products.
So you have products like Reflect Money, Trade Neutral, Elemental, that have all integrated Drift as their source for generating yield on behalf of their users.
So I think now we've touched on nearly 10.
There are another 10 that fall into these different buckets.
But again, here, by the time that all these teams understood what was happening,
due to the lack of monitoring and alerting infrastructure, it was already too late.
Wow.
So let's now talk about the other kind of drama that was unfolding while this was going on,
which is, of course, now the hackers have almost $300 million, and they start moving it, you know, with the intention probably
of laundering it and cashing out. So the community was incensed to say the least that Circle did
nothing to stop the hackers. ZachXBT tweeted, and here he noted that they were using Circle's Cross-Chain Transfer Protocol, or CCTP, to, you know, move the money. And he said,
quote, six hours is how long Circle had to freeze stolen funds from the 280 million plus
dollar Drift hack. Why does our industry allow them to stay silent? And he even tagged the CEO, Jeremy Allaire. He tagged Circle. He tagged USDC. So why do you think Circle did nothing and generally
does not do anything in these types of situations? Yeah, I saw that tweet as well. And it's a good
question. I don't think, or at least I haven't seen, that Circle has fully published what their exact position is on these things. What I do think, and what I've seen based on their actions historically, is that they're kind of reluctant to act unilaterally when there's no legal cover, where it's not coming from something that's mandated by a court. And it's upsetting. But on the other hand,
I think the counterpoint to that is it sets a precedent.
And this one was, it seemed more clear cut than ones that we've seen previously.
But there are other instances perhaps where it's less clear.
And then what is Circle's role?
Should Circle be monitoring for all hacks?
I think some people would say that at their scale, there's legitimacy to that point,
but others would say that it's not their role.
Okay. Interesting.
Well, so do you have an opinion on what would be a better way for Circle to tackle these types of incidents?
I think it's not binary. It's not clear cut in the sense that, look, Circle today does have the ability, and they have historically blacklisted addresses.
For addresses that are blacklisted, I think many of the bridges will not allow for transferring those funds.
But again, those blacklists are usually derived from some legal process or some kind of very clear instruction they get from a different law enforcement agency.
So I don't know who spoke to Circle.
I don't know if anyone was able to reach them during the time of the attack.
I don't know what information was available.
And maybe in the post-mortem that will become more clear.
But yeah, I mean, if everything was clear, which it wasn't in the hours following the hack,
at least not to us, but perhaps there was different information available to internal parties,
that is something that could have stopped the funds from reaching Ethereum
and later being mixed.
But it looked like the attacker was actually pretty confident that they wouldn't freeze
because he moved into USDC and then used the CCTP protocol.
So they may have kind of made the bet that it wouldn't be frozen without any of the requirements
that we said earlier from a law enforcement agency perspective.
Yeah, yeah.
And I think, like, the urge to potentially freeze with imperfect information isn't necessarily a bad one, considering what I'm going to ask you about next, which is that many people observed that different elements of this attack resembled the Bybit hack. And as most people should know, that was executed by North
Korea. I know nothing's confirmed yet, but, you know, I saw, like, for instance, DivergSec
tweeted about like multiple different parallels they saw, you know, when comparing this to
previous hacks by Lazarus. So like, you know, when you look at it, do you also think that
there are certain kind of, like there's a certain fingerprint to this hack that leads you to believe
it could be North Korea?
Yes. So, I mean, there's a lot of documented attacks that have been attributed to them.
And we've seen some of the techniques they've used.
Certainly, again, this wasn't a random dev that
kind of stumbled upon these keys and decided that they
would give it a go.
This was very methodical.
The similarities between this and Bybit, for me, are that it was a different kind of form factor and attack vector, but deceptive key signing. So in Bybit, I think they even manipulated the UI on the machines
that were signing.
And the signers were actually signing something that they thought was legitimate, although it was the transaction which transferred the funds to the rogue wallets.
Here, so that's where it's similar, but the difference here is it took it one step further.
They didn't just sit on a transfer transaction.
They literally controlled the protocol in that moment.
So there were a few more steps involved, again, touching on the Oracle,
touching on the fake token that was created in the volumes that were run in that pool,
another layer of sophistication.
Yeah.
Yeah, so what would need to happen for the community, you know, or, you know, I don't know who it is that kind of makes the judgment on these things, but what needs to happen for there to be a level of confidence in saying who the hacker was?
I think right now, what most of the security teams, the Drift team itself, and whoever they're working with in an attempt to recover the funds are doing is tracing the funds.
They're probably seeing if it's associated with addresses that have either been blacklisted or are suspected of being tied to the North Korean regime.
So there are many ways to kind of, it's yet to be seen how they're going to try to get the funds off chain, or if they need to.
So I think these are the things that are going to happen now.
I don't recall DPRK claiming direct responsibility for
any of these, but there are tell-tale signs. The techniques, it could be a copycat, but this is like
the MO of some of the attacks they've been more successful with. Okay, so last question here is
potentially a little controversial. I saw Hayden Adams of Uniswap tweeted, quote, people might
accuse me of grave dancing for saying it, but we have to stop letting centralized things call
themselves DeFi. Admin key can drain all funds? CeFi. Otherwise, DeFi means nothing and its brand is
destroyed. No admin key can drain any version of Uniswap for any reason. So there's that view. And then I'm
going to ask you about one other one that was kind of on the flip side. Hasu tweeted,
every DeFi protocol should have, number one, circuit breakers for deposits, withdrawals, and possibly other
internal operations as well. Number two, time locks for any change, which you mentioned.
Number three, security councils that can shut down protocols immediately.
And then he said, we don't need insurance.
We need to start doing the fucking basics correctly.
It's too early for the space to drive without any training wheels.
I beg you, sacrifice a tiny bit of UX to gain a lot of peace of mind.
The worst possible UX is losing your users money.
So yeah, what do you think of those two tweets side by side?
I think they're both fair, right?
And I'll start with Hasu: the security council, the time locks, all the things that we've discussed today.
These exist in a lot of applications.
I was formerly on the Arbitrum Security Council.
LayerZero has a security council.
Aave has the risk contributors, led by us and other contributors, that need to sign
on sensitive transactions as well.
So these things do exist.
It's not that they don't.
And certainly it could have stopped what we saw yesterday.
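The timelock-plus-veto pattern Omer describes can be sketched in a few lines. This is a hypothetical, simplified illustration in Python, not Drift's (or any protocol's) actual implementation; the names (`queue_action`, `cancel`, `execute`) and the 48-hour delay are assumptions for the sketch:

```python
import time

TIMELOCK_DELAY = 48 * 3600  # assumed 48-hour public delay before execution


class Timelock:
    """Minimal sketch: sensitive admin actions must be queued, then wait
    out a public delay window before they can execute. During that window,
    guardians (e.g. a security council) can cancel the queued action."""

    def __init__(self, delay=TIMELOCK_DELAY):
        self.delay = delay
        self.queued = {}  # action id -> earliest allowed execution timestamp

    def queue_action(self, action_id, now=None):
        # Queuing is public on-chain, so monitors see the pending change.
        now = time.time() if now is None else now
        self.queued[action_id] = now + self.delay

    def cancel(self, action_id):
        # A security council member vetoes a suspicious queued action.
        self.queued.pop(action_id, None)

    def execute(self, action_id, now=None):
        now = time.time() if now is None else now
        eta = self.queued.get(action_id)
        if eta is None:
            raise PermissionError("action was never queued or was cancelled")
        if now < eta:
            raise PermissionError("timelock delay has not elapsed")
        del self.queued[action_id]
        return True
```

Under this scheme, a compromised admin key could still queue a malicious upgrade, but it could not act instantly: the delay window gives monitoring tools and a security council time to notice and cancel the action before any funds move, which is the friction-for-safety trade-off discussed here.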
So it adds friction.
I think it's a decision that the teams kind of like need to make.
I think it's definitely like the right decision.
And that's number one.
Number two, for what Hayden was speaking of, look, this is not just Drift by any stretch of the imagination.
Even Hyperliquid and some of the other perp DEXes, there are areas in the system that
are more centralized than not, right?
And I think every team, like when you're building,
it's not that you're either DeFi or pure CeFi.
There's a spectrum that you sit on.
And I think teams make decisions based on the product
that they want to go to market with and the customers
and users they want to cater towards.
So I don't think there's anything like wrong with that.
But if you are making those decisions,
they need to be disclosed.
You need to do them responsibly.
And you need to architect the system,
do the risk audits, do the security audits,
such that you're doing everything in your power,
to prevent what we saw yesterday.
And if those things are disclosed properly, users can decide:
do they want to be in a system that's law as code,
500 lines of code, and what you see is what you get?
Or do they want to be in a system that has a better UX
and, in the happy path, is more accessible to users?
All right, well, Omer, this has been super fun.
You know, well, fun is maybe not the word.
It's a sad day, but it's been really
interesting learning how this all went down. Thank you so much for sharing your insight on Unchained.
Thank you, Laura, for having me. And thanks to everyone who joined this live stream. We will catch you
tomorrow. Bye.
