Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - UMA & Across: Optimistic Oracles & Intent-Based Bridges for Unifying Ethereum - Hart Lambur
Episode Date: August 1, 2025

Universal Market Access (UMA) was founded by two ex-Goldman Sachs traders who wanted to make global markets universally accessible through financial smart contracts that used synthetic assets on Ethereum. However, this was taking place long before the massive boom of DeFi summer 2020. As a result, UMA shifted to building an optimistic oracle to power prediction markets as a decentralised 'truth machine', thus expanding oracle use cases. Through game-theoretic models, UMA managed to properly incentivise its token holders to act as voters, rewarding them for good predictions & disputes, and vice versa. Later on, Hart Lambur also co-founded Across, an intent-based optimistic bridge that set out to create a seamless UX for unifying EVM chains. Through their solver network, Across managed to achieve fast (as low as 2 seconds) and cheap bridging, abstracting away cross-chain complexities, without any security tradeoffs.

Topics covered in this episode:
- Hart's background
- Universal Market Access, from synthetic assets to oracles
- Building Across
- UMA's optimistic oracle
- Incentivizing voters & resolving disputes
- Dealing with invalid outcomes
- Optimistic security assumptions
- UMA x Across dual token interactions
- Across' intent-based bridge
- Pricing mechanism & solver competition
- ZK settlement
- Bridging fragmentation
- Abstracting & unifying cross-chain bridging
- Bridging between rollups
- UMA & Across governance systems

Episode links:
- Hart Lambur on X
- Across Protocol on X
- UMA Protocol on X

Sponsors:
- Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay, the world's first Decentralized Payment Network. Get started today at gnosis.io
- Chorus One: one of the largest node operators worldwide, trusted by 175,000+ accounts across more than 60 networks, Chorus One combines institutional-grade security with the highest yields at chorus.one

This episode is hosted by Friederike Ernst.
Transcript
Our initial thinking was we should write financial contracts on a blockchain.
It makes them globally accessible, whereas financial contracts in TradFi, you know, you have to be part of the legal jurisdiction of country XYZ.
Well, we need an Oracle to be able to resolve the data about what the outcome of this financial contract should be.
We were inspired by things like Vitalik's SchellingCoin blog post from 2014 and various
game-theoretic concepts and cryptoeconomic concepts.
But the core idea is pretty simple.
Anyone on the blockchain can propose a statement as true.
And then there's a challenge period.
And anyone else on the blockchain during that challenge period could say, I disagree.
There's bonds and there's incentives here to both propose correctly and to dispute correctly.
And there's penalties if you propose incorrectly or dispute incorrectly too.
What Across did is, like, our whole MO is we want bridging to be a two-second-or-less experience.
We want it to be fast.
and cheap, but we think like this speed is critically important.
Welcome to Epicenter, the show which talks about the technologies, projects, and people driving decentralization and the blockchain revolution.
I'm Friederike Ernst, and today I'm speaking with Hart Lambur, who is the co-founder of UMA and Across Protocol,
which are two interrelated projects in the wider Ethereum space.
So, UMA is a decentralized, optimistic oracle, and Across is a bridge that, in some flavors,
uses UMA under the hood, but we'll get to that in just a second.
Before I talk with Hart, let me tell you about our sponsors this week.
If you're looking to stake your crypto with confidence, look no further than Chorus One.
More than 150,000 delegators, including institutions like BitGo, Pantera Capital and Ledger,
trust Chorus One with their assets.
They support over 50 blockchains and are leaders in governance on networks like Cosmos,
ensuring your stake is responsibly managed.
Thanks to their advanced MEV research,
you can also enjoy the highest staking rewards.
You can stake directly from your preferred wallet,
set up a white-label node,
restake your assets on EigenLayer or Symbiotic,
or use their SDK for multi-chain staking in your app.
Learn more at chorus.one and start staking today.
Hey guys, I want to tell you about Gnosis,
a collective of builders creating real tools
for real people on the open internet.
Gnosis has been around since 2015.
In fact, it started as one of Ethereum's
very first projects. And today, it's grown into a whole ecosystem designed to make open finance
actually work for everyday people. At the center of it all is Gnosis Chain. It's a low-cost,
highly decentralized layer one that's compatible with Ethereum and secured by over 300,000 validators.
So whether you're building a dApp, experimenting with DeFi, or working on autonomous agents,
Gnosis Chain gives you a solid, neutral foundation to build on. But Gnosis is more than just
infrastructure. It's also tools that people can actually use. Circles, for
example, lets anyone issue their own digital currency through networks of trust, not banks.
And then there's Metri. It's their smart contract wallet that makes it easy to access
Circles, manage group currencies, and even spend anywhere Visa is accepted, thanks to their
integration with Gnosis Pay. All this is governed by GnosisDAO, where anyone can propose, vote,
and help guide the network. And if you want to get involved, running a validator is super easy.
All you need is one GNO and some basic hardware.
To learn more and start building on the open internet, head to gnosis.io.
Gnosis, building the open internet one block at a time.
So cool, Hart.
It's so nice to have you on.
Thank you so much for having me.
Good to be here.
Incredible as it sounds, you've never been on this podcast before.
So your co-founder, Alison, has been on.
So maybe before we kind of dive into kind of like all the awesome stuff that you're building,
tell us about who you are and how you got here.
Sure, yeah. You know, I was looking back. I was actually trying to remember if I'd been on Epicenter before, like years ago. But you're right. It was Allison, my co-founder, that was on here. TL;DR, I studied computer science. I then worked in financial services at Goldman Sachs as a bond trader, like U.S. Treasury bonds in the financial crisis. Allison was two years junior to me and worked beside me for four years
there, something like that, and we worked very closely together, and that's how we got to know each other in the first place.
Trading bonds during the financial crisis, U.S. Treasuries during the financial crisis was super interesting.
It's not my background, but I learned a lot about finance.
I learned about market structure.
I learned a lot about actually incentive structures and what incentivizes people.
Come back to that.
I left to do a fintech business that went kind of sideways, got acquired by an asset manager four years later,
and then was full-time in crypto starting beginning of 2018, and I recruited
Allison to work with me on this decentralized finance idea before DeFi was a term. If we go back to
2018, people hadn't really thought about that as a concept yet. And Friederike, you'd remember
this from, like, Gnosis days too. It was just kind of like, it was all sort of researchy stuff,
right? Like what could, what kind of financial applications or financial ideas could be built
on smart contract platforms and on Ethereum? Yeah, absolutely. So kind of the thing that you guys
started back then was called Universal Market Access, or UMA for short, and you actually started
with synthetic assets. I mean, you came from a trading background, so I guess it kind of like
it checks out. But how did you land on this? And when did you kind of decide to kind of pivot to kind
of an oracle? Well, it's interesting to think about it because in some ways we definitely pivoted,
and in other ways it was just kind of all part of the plan the whole way.
The way to think about this, so the way I like to think about it is our initial thinking was we should write financial contracts on a blockchain.
That makes all the sense in the world.
It makes them globally accessible.
Whereas financial contracts in TradFi, you know, you have to be part of the legal jurisdiction of country XYZ to have access to that financial product or
service. So, you know, the name Universal Market Access was all about how do we make finance globally
accessible? Well, we need an enforcement mechanism that is globally accessible. A blockchain
actually can do that pretty well with economic incentives. Okay, so we like this idea of financial
contracts. You could call them derivatives. Derivatives are just financial contracts, right? But we wanted to do
that on a blockchain. And we're like, well, we need an oracle to be able to resolve the data about
like what the outcome of this financial contract should be.
Concrete example, like you and I do a binary bet on whether the price of Bitcoin will be above
or below 100K.
We need to know whether that's true or not, etc.
So that's where we like came up with this Oracle design that we were super excited about
early on.
Then though, we needed a use case and we're like, okay, it's super early days.
Like we need something for somebody to use.
It's super early days with crypto.
So nobody wants to like trade financial derivatives yet.
That's like too sophisticated.
We were like, let's make tokens.
People like tokens.
Let's make tokens that look like financial contracts.
They're derivative contracts.
So synthetic assets is what we did.
Synthetic assets that track some other underlying object.
And we can use our Oracle to power that.
So that's kind of the path of how we got into that space at the time.
Yeah, super interesting.
It's funny how sometimes you start doing one thing and then kind of like you need to build
like parts to kind of power this and then kind of like the parts that power this kind of
become your main thing kind of later.
And this kind of happened again, right?
So you guys have since launched Across, which also kind of came out from
the UMA ecosystem.
And Across is a bridge, which is kind of already clear from the name.
But what prompted this?
So, honestly, we wanted more use cases for the UMA Oracle.
And we did basically, it was an internal hackathon.
Again, there might be, like, Gnosis parallels here too,
because you guys also kind of spawned a bunch of ideas along the way.
Actually, we always kind of looked at you guys for inspiration that way.
But we wanted more use cases for this UMA Oracle.
And we're like, okay, synthetic assets, they weren't really taking off.
This was, you know, four years ago, three, four years ago.
People want to trade crypto assets, not, you know, synthetic assets.
That might actually be changing right now.
We'll see.
But we're like, we have this really interesting Oracle that can give you data on anything.
It can even give you data on what's happening on an L2 faster than the seven-day bridge is going to let you get data from that L2.
And we actually used this internal hackathon to build a fast exit bridge from Optimism.
It went one direction only.
It only let you get off Optimism.
This was the first internal hackathon.
And then we're like, yeah, you know, there's something here.
And then we thought about it a lot harder, came up with these early intent-based designs,
which I'm sure we'll get into, and realized that we had a really compelling product in this
super fast, intent-based bridge to connect mostly EVM chains, and then launched Across as, like,
something that used UMA in the wild.
Yeah, super interesting.
Maybe we go sequentially here.
So maybe let's talk about UMA first.
So, UMA, you already alluded to this.
It's an optimistic oracle.
So what does this mean, and how does it differ from more traditional oracles like Chainlink?
I mean, Chainlink does a few things now, too, to be fair to them.
But the classic version of Chainlink is we're going to have a series of nodes or some set of nodes write price data to a blockchain.
And so it's very constrained in that it will only write the price data that those nodes know to write.
Works great for Bitcoin prices, Ethereum prices, that kind of stuff.
Okay, cool.
But we were trying to solve the generalized problem of: we want to get any bit of verifiable data onto a blockchain. And we're like, okay, well, we need a different system
to do this. We were inspired by things like Vitalik's SchellingCoin blog post from 2014
and various game-theoretic concepts and cryptoeconomic concepts. But the core idea is pretty
simple. We say, hey, anyone on the blockchain can propose a statement as true. So they could
propose, we'll take an election. They could propose Trump won the election. And then there's a
challenge period and anyone, anyone else on the blockchain during that challenge period could say,
I disagree. Right. First step, the happy path, the optimistic path is nobody disagrees. And so then
that proposal gets taken as true.
And there's bonds and there's incentives here to both propose correctly and to dispute correctly.
And there's penalties if you propose incorrectly or dispute incorrectly too.
So we have incentives lined up there.
But it's a very simple kind of challenge game where anyone can say, this is what I know to be true.
And anyone else on the blockchain can say I disagree.
That's the core concept.
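The propose-and-challenge flow described here — post a claim with a bond, wait out a liveness window, escalate only on dispute — can be sketched as a toy Python model. This is illustrative only, not UMA's actual contract logic; the window length and bond handling are made-up parameters.

```python
import time

# Illustrative parameter, not UMA's real value.
CHALLENGE_WINDOW = 2 * 60 * 60  # two-hour liveness window, in seconds

class Assertion:
    """One optimistic claim: proposed with a bond, optionally disputed, then settled."""

    def __init__(self, claim, proposer, bond):
        self.claim = claim
        self.proposer = proposer
        self.bond = bond              # proposer's stake, at risk if the claim is wrong
        self.proposed_at = time.time()
        self.disputer = None

    def dispute(self, disputer):
        # A disputer posts a matching bond inside the challenge window.
        assert time.time() < self.proposed_at + CHALLENGE_WINDOW, "window closed"
        assert self.disputer is None, "already disputed"
        self.disputer = disputer

    def settle(self, now=None):
        now = time.time() if now is None else now
        if self.disputer is not None:
            return ("escalated", None)       # both bonds ride on the token-holder vote
        if now >= self.proposed_at + CHALLENGE_WINDOW:
            return ("accepted", self.claim)  # happy path: taken as true
        return ("pending", None)
```

On the happy path nobody disputes, the window lapses, and the claim is simply taken as true; only a dispute escalates to the voting game described below.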
Walk me through what happens if I post
an untrue statement and someone calls me out on it.
I assume I kind of get a chance to redeem myself or prove that
what I stated initially was actually true.
Or is there some sort of escalation mechanism here?
Yeah.
So you don't necessarily get a chance.
You and everyone else can decide who's right.
But like, let's, let's go concretely.
So, Friederike, you propose that Harris won the election.
We'll use that example.
And you have to post a bond to do that.
The bond can be a parameter of the protocol or the system.
But you post a bond to do that.
Let's say it's $1,000.
And then I can be like, actually, I disagree with you.
I think that's untrue.
I post a matching bond of $1,000 to dispute you and say this is untrue.
Okay, so first thing that UMA does is now that there's a dispute,
that data won't get used in the underlying contract.
We're going to have to wait.
We wait until somebody proposes in a way that doesn't get challenged to use that data in the underlying contract.
So this is mostly true.
So if it was Polymarket right now, Polymarket would be like, okay, we're not going to use your proposal that Harris won.
We're going to wait for another proposal later on.
But we still need to resolve between two of us who was right to figure out where we get those bonds.
Do the $2,000 worth of bonds go to you or do they go to me?
And so then this is where we lean into, like, Vitalik's SchellingCoin blog post ideas, where,
and this is also a lot of concepts from Augur and Gnosis and early prediction market type stuff too.
But we go and we say to all our token holders, they go through a two-step voting process where they vote in secret what they believe is true,
whether your proposal or my dispute was correct,
and they vote in secret and then they reveal all at once.
And the game theory and economics of this design are such that you're incentivized to vote your own beliefs, your own truth.
And the majority will get rewarded with a reward at the expense of the minority that voted against the majority outcome.
And so we use this Schelling point concept to resolve which of the
two of us was correct by having this distributed, decentralized voter base.
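The two-step secret vote described here — commit a hash, then reveal, with the majority rewarded at the minority's expense — can be sketched as a toy commit-reveal tally. The hashing scheme and tie-breaking are assumptions for illustration, not UMA's DVM implementation.

```python
import hashlib

def commit(vote: str, salt: str) -> str:
    # Phase 1: each voter publishes only a hash, so votes stay secret during the round.
    return hashlib.sha256(f"{vote}:{salt}".encode()).hexdigest()

def tally(commitments: dict, reveals: dict):
    """Phase 2: reveals maps voter -> (vote, salt). Returns (outcome, majority, minority)."""
    valid = {}
    for voter, (vote, salt) in reveals.items():
        if commitments.get(voter) == commit(vote, salt):  # reveal must match the commit
            valid[voter] = vote
    counts = {}
    for vote in valid.values():
        counts[vote] = counts.get(vote, 0) + 1
    outcome = max(counts, key=counts.get)  # Schelling point; ties broken arbitrarily here
    majority = [v for v, vote in valid.items() if vote == outcome]
    minority = [v for v, vote in valid.items() if vote != outcome]
    return outcome, majority, minority  # majority rewarded at the minority's expense
```

Because commitments hide votes until the reveal, a voter cannot verify anyone else's claimed intentions, which is exactly the property Hart describes next.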
Okay, there's several super interesting things here.
So, is it possible that sometimes there are situations where you are
incentivized to actually vote against your beliefs, just because you know what other
people's beliefs are?
So, for instance, I'll give you an example.
Say, the COVID lab-leak hypothesis.
Right. So early on, there was already a very vocal minority who was advocating for this. And as time went on, it's turned out that they weren't crazy, and maybe it actually was kind of a lab sort of thing. But someone who would have known in the very beginning of COVID that this lab-leak hypothesis was true, maybe because
they were there on the ground, or maybe because it was them, or whatever,
they were involved in covering it up, or however they came to know this,
They would still have been incentivized to kind of vote against what they truly believe
because they know that this is not a consensus narrative, right?
Yeah.
I mean, I think you picked a very, very, very interesting example of where I think the,
like, I think if you ran a
long-term prediction market over, like, the COVID lab-leak theory,
Like you didn't let it resolve and you kept letting it run.
I think you'd see that prediction market move a lot over, you know, a four-year period, right?
So it's a really, really interesting example.
What I would say is, in theory, you can't trust what other
voters say they're going to vote. They actually have an incentive to, frankly, lie to you.
Because if they get you to vote in the minority, they get a bigger reward if they're the majority.
So there is this purposely built PvP kind of mechanism within the voting game itself,
where other voters are supposed to, like they literally have the game theoretic incentive
to tell you that they're going to vote the opposite of how they actually do vote.
And so, you know, the theory behind this is if you understand how this all works and or if you don't,
you're supposed to just be like, okay, I don't know what the noise out there is saying.
I'm going to do my own research, come to my own conclusion, and vote my own beliefs,
because I don't really know at all with any kind of certainty what others are going to say.
Now, there's a whole bunch of probably very interesting, like philosophical and theoretical thoughts here too.
Like, again, the lab-leak example is something that I
think is fascinating because, you know, personally, I thought myself that the lab-leak
theory was kind of crazy early on. And then my own beliefs evolved over time. And now, like,
I probably have to do a bunch more research to come to my own beliefs. But I don't think it's
crazy anymore. Let's put it that way. So it's a very interesting example. But
I think at any point in time when these votes come on, voters really do have the game-theoretic incentive to
vote their own true beliefs, because they don't know what other people are actually going to
do. What happens if you don't vote? Are you penalized? Yeah, you're penalized, but you're penalized less,
right? So the penalties here are also, they're parameters of the system. There's probably a lot more
modeling we can do to figure out how to optimally set them. But the penalties aren't like
100% or not. It's not like I vote and I lose all my voting tokens if I vote incorrectly.
We want to design the penalties so there's a strong incentive to do your own research and vote
correctly. And we want to design the penalties so there's a strong incentive to have people
continue to vote and show up all the time. But not so much of an incentive that, you know, if I miss a
vote or two, it's so painful that I exit the system and stop participating. Okay. That's fair. So let's go back to
the Harris example. So I posted the Harris won the presidential election statement. You contested this.
and there was a vote that kind of confirmed that you're right.
Is this statement now, in the negated form, usable as an input for smart contracts?
Or is it tainted and we have to have a new vote?
This is up to this is up to the integrator that wants to like use this Oracle.
Like they could go either way.
I prefer the design because this voting process also takes a while, right?
So one of the things that we've seen with Polymarket, for example, is somebody will propose an answer.
Often they propose the answer too early.
They're like a sports game is a good example.
They propose that someone's going to win the sports game, but they propose like five minutes before the end of the game.
And you're like, even if that team they proposed did win, you kind of can't say it's okay to propose five minutes early.
Like, it could have turned around, right?
So that will get disputed and it'll go through this voting process.
But we don't want that polymarket market to sit there and wait for this resolution process.
So we'll have somebody else make a second proposal that doesn't get disputed, because it's after the game completed and it has the right answer.
So that's like an implementation decision of like how polymarket would use the Oracle in this example.
I prefer it. I prefer the idea that you only take as true undisputed markets that are not tainted, and the dispute process, sorry, the voting process, is really to resolve disputes and, like, where the bonds go. I prefer that because the economic security behind that is much, much deeper in many ways. But it's a parameter. It's up to the integration decisions of the protocol.
Okay, but in that design, you could have, say, 500 statements that speak to one question, and 499 say no, one says yes. It still goes unchallenged somehow, because people missed it. And despite the fact that almost all of them point one way, you don't actually have to wait for any dispute to be resolved. You can already use the one yes statement to resolve, right?
Sorry, walk me through this.
How do you, where do the 500 statements come from?
So say there are 500 people who post a statement.
Who won the election?
Say 500 say Harris, and 499 of those get contested.
And one just gets missed, because there's also attrition here,
right, because it's an optimistic system.
Yeah, there's sort of a DoS vector, is kind of what you're describing:
spamming the system to overwhelm it. Yeah. So I'm skipping over some
details. In Polymarket's implementation, at least, you couldn't have,
they wouldn't let you have, 500 proposals for the same market, for these same reasons. So it's kind
of like anyone can propose the first one. Anyone can propose the second one.
After the second proposal, we actually wait for the resolution of the voting cycle.
Okay.
And then other people, there's like stuff that we do to kind of make sure eyes are focused on the right markets because you're right.
Otherwise, you can kind of just spam and overwhelm people.
And if there's actually humans voting on this, there are practical like human, human bandwidth constraints about what you can take care of.
Which, you know, as Polymarket is scaling and other use cases are scaling,
and you and I talked about this very briefly outside of the podcast,
There's lots and lots of use cases of where LLMs and AI can potentially scale this much better,
which I find super compelling and interesting.
But yes, your example is a fair one, and we have to put constraints around this too.
Can I grief the market creators, or the people who have money in the market, by creating conflicting or wrong statements and just
making sure that they incur the opportunity cost of not being able to get their capital out of the market?
Polymarket does not allow permissionless market creation right now.
I can't speak for them.
I don't want to speak for them, but one reason might be this problem, right?
And if you go back, and I think you have experience with this, we look at Augur and other examples.
And if you have a bunch of very similar markets, it also leads to this kind of fragmentation of
attention problem. I think it's a hard problem to solve. Again, I think you could, going forward,
if I were to design like a permissionless prediction market kind of system here, I think you could
use an AI to validate, like, are these markets similar and then not allow the creation of markets
that are super close. But I think these are all the problems that prediction markets have to solve
to keep scaling and growing.
So I think you have a lot of experience
and personal interest in this, I think.
And yeah, it's a good question.
But that's why Polymarket doesn't let anyone do it right now.
You could also tie the amount of the bond
you have to put up to the amount of money in the market.
So you can't do it at a flat rate;
it at least costs you a proportion of the funds you're tying up.
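The suggestion above — scaling the bond with the money in the market rather than using a flat rate — could look something like this. The rate and floor are purely hypothetical parameters, not how Polymarket or UMA actually set bonds.

```python
def required_bond(market_tvl: float, rate: float = 0.01, floor: float = 500.0) -> float:
    """Bond scales with the funds at stake; rate and floor are made-up parameters."""
    return max(floor, rate * market_tvl)
```

So proposing against a $1M market would cost a $10,000 bond here, while tiny markets still face the $500 floor, making spam proportionally expensive.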
Well, I was just going to say, the last point I'd make, the thing that's interesting with prediction markets,
so again, the UMA oracle does a lot more than just prediction markets, but
we're really, like, kind of the only oracle that's been used in production at scale that does
prediction market type stuff.
But the thing that I found interesting over the years and years of kind of doing this is there's
the theory, and then there's the
practical implementations. And there's a bunch of heuristics here, too,
where it's not obvious from a pure theory or research perspective; you've got to, like, do this
kind of like glue and band-aid type stuff to keep things working effectively. And, you know,
this is where I will give, like, Polymarket deserves a lot of credit for figuring out how to
actually grow and scale these markets. And even since the election, they have figured out how
to have many, many more markets than they did six months ago and have it, like, mostly work.
Yeah, I agree.
What's your take on fine-print-driven outcomes on prediction markets?
So for instance, on Augur, the invalid outcome was kind of a huge problem.
And one that comes to mind, where I actually think
UMA worked really well as an oracle, was,
do you remember that submarine that sank?
And there was a market on Polymarket on whether the submarine would be found.
And clearly it wasn't found, because it imploded, right?
But somehow,
but clearly that was not the intent behind the market.
And UMA actually resolved it to: yes, it was found.
And I remember there was some
upheaval on crypto Twitter
about whether this was
the right call. I think personally
it's the call that I would have made
because clearly that was kind of the intent
behind the market.
But it's really difficult.
So what's your take on
this? Should oracles weigh context,
or should they just strictly enforce
definitions? Well,
you can't
strictly enforce definitions because
the English language or any language is
imperfect. Right?
Friederike, like, it's so funny you brought up that example, because I agree with you personally.
And like, you know, in the moment, I don't have any opinion and I don't vote or participate in these markets myself at all.
But there was a bunch of people that were trying to argue that, you know, they found pieces of the submarine.
But they didn't find a big enough piece of the capsule to fully confirm it was imploded, right?
but like it was like a very nuanced reading of the rules that just didn't capture the spirit of it at all.
So yeah, the thing that is super interesting about prediction markets, if we go back to calling them financial contracts, they're financial contracts, but they're binary.
One side wins 100%, the other side loses everything.
And what that means is that if there's ambiguity in the rules, the losing side has a really, really, really big incentive to try to be loud and proud and try to scream as loudly as they possibly can.
Hey, hey, we're right.
The other side's wrong.
Or if they lose, they also have an incentive to scream.
The system is broken.
Scam manipulation, like all this other kind of attacking stuff.
which, you know, sometimes I'm on the receiving end of, which isn't particularly fun.
But like the game theory here is like, look, if it's something with some ambiguity, one side is going to be really upset because they're losing all their money.
And that's human nature, right?
So what's my take on this?
My take on this is that it's tricky.
And like you, Polymarket has a process now of issuing clarifications around markets, which are also sometimes very imperfect.
and they don't ever want to be the one just like deciding the market outcome here too.
But I think we really try to use the shelling point concept to come to the least wrong answer is the way I look at it.
Like what is the least wrong answer that we can offer to resolve these markets?
Because you also touched on another point that is very nuanced and not many people know of or think about.
But if you go back to Auger days,
Augur had a lot of invalid market outcomes.
So it just said,
this market isn't clear enough.
We're just going to not respond.
And UMA and Polymarket generally don't do that.
I actually don't know if we ever have.
It's also a terrible user experience, right?
So kind of, yeah.
Yeah, because it's a terrible user experience.
Exactly.
Right.
You're like, hey, I'm a prediction market trying to
predict the future and then you're like, you kind of like, eh, can't, can't answer it, right?
That's not good.
And so, but then that forces the oracle system to resolve slightly ambiguous outcomes too.
Yeah.
Can you give us an idea of how often disputes happen kind of in absolute and relative terms?
Yeah, the dispute rate is less than 1% of all proposals.
We're actually putting a lot of effort into getting it down more.
And there's a very cool project we're working on that's not far away that I think can actually lower it by an order of magnitude too.
Many of the disputes, and I need to double-check this, but it's something like 70% of the
disputes are actually from people proposing too early.
So they actually put the market up before they should, which is fascinating.
And yeah, it's really fascinating, too, that that happens.
So you can say that it's something like 0.3% of all proposals are like legitimately disputed
that aren't like too early, something like that.
And the system is currently processing between 250 and 500 proposals a day,
like market resolutions a day, which is up a lot from six months ago.
Give us an idea of who uses UMA's oracle today.
Yeah.
So the biggest user by number of proposals by far is Polymarket.
They've just kind of achieved escape velocity in the prediction market space here.
Across, we'll talk about that, maybe transition soon.
Across uses it also to validate this like complex data structure.
So it's a very different use case.
We're basically being like, here is this complex data structure that anybody can recreate,
but it would be hard to do on chain.
Anyone can do it off chain easily, though.
And Across uses UMA to validate whether that data structure was created correctly or not.
And then we have other interesting use cases that are experimenting, like Story Protocol.
They're kind of doing IP stuff.
And they use the UMA system to resolve some IP disputes.
That's just getting going.
And then there's like a long tail of smaller prediction market adjacent use cases too.
Super interesting. Maybe before we transition over to Across, talking about the optimistic security assumptions: where would you say they work well? And where do you think, maybe they're not fragile, but we can do better?
I think they work well in markets where there are lots of eyes on them. And eyes could be humans or machines too. We'll come back to that in a second.
They also work well where, like, false outcomes are very obvious, right?
So if we go back to the Trump Harris election, there were a ton of eyes on that market,
and the outcome is actually pretty obvious.
Maybe it's not obvious to resolve it, like, right in real time.
But it's obvious, like, if you say, this is too early or I shouldn't do this yet,
and you wait till there's, like, enough polling clarity, it's pretty obvious, right?
what the outcome is.
So in those use cases, I think these optimistic systems work really, really well.
Where things get scary, right, it would be like, okay, there's so many markets.
This goes back to kind of your griefing example.
There's so many markets, or they have small enough value in them, or for whatever reason,
there's not enough eyeballs watching them.
Then I think there's a legitimate question about, like, wait, is there anybody around
to dispute this if somebody does try to sneak a bad outcome through?
However, this is where all of the superintelligences that people are working on
become extremely useful. And you think about what an LLM can do. I don't think it completely
replaces humans yet, but it certainly gives us a machine to put a lot more eyeballs on a lot of
things. And I think that that actually strengthens many of the optimistic verification assumptions
in an extremely useful way.
Yeah, I think that's a fantastic answer. So, you already talked about this briefly in the beginning: you actually started a bridge initially as a consumer of UMA, the oracle.
Maybe the housekeeping first: how did this work in terms of ownership structure? These are two token projects, so how are the tokens related, and how did your first set of token holders feel about you guys working on a second token project? I mean, I'm very much asking not for a friend; we've done this before too.
So yeah, we looked at you guys for some of this stuff a little bit too.
And it was, it was interesting.
So the way we looked at it is, UMA users are happy to have other users of UMA, at least at the time when we did this.
It's like, okay, that makes sense.
And then we made an attempt to launch the Across token in a very fair way. And there are actually lots of really interesting decisions where, looking at them in hindsight, maybe we could have done things differently too.
In some ways, I feel like the Across token was rushed out, because we actually needed a token to gather some LP funds to help make the bridge usable.
Like, we needed token emissions to incentivize people to deposit into our protocol.
And we actually put this out just a little bit before people started doing the points program thing.
And it's funny because I think, first of all, I wish we invented that idea, which we didn't.
But if we had the points idea, we may have delayed the token much longer and used points to help incentivize some of the behavior we needed.
Anyways, neither here nor there.
We did try to put the token out in a very fair way.
We tried to have a fairly broad airdrop, and the UMA team actually had pretty little ownership in the Across token, at least early on too.
And so nobody really had a problem with that.
That's the way I'd answer that.
Yeah.
It's just kind of like we're building cool stuff in the space.
It's related.
It's good for both projects.
Good for both teams.
Let's go.
Cool.
So maybe let's talk about how Across used to work. So Across used to use a purely optimistic model. Let's talk about this first, and then how you've recently upgraded it in the ZK realm.
Yeah. So, you know, your listener base is pretty nerdy, so would you like to?
We'll take that as a compliment. A massive compliment. You know, they're not the crypto-curious folk; we can go deep on this stuff.
What does Across do? Let's actually start there, if that's okay. So Across is an intent-based bridge. The way I explain this, and this is the basic explanation: I want to get from blockchain A to blockchain B, we'll make it Arbitrum to Base. How can I do that? The naive way is I deposit something on Arbitrum, I send a message to Base, and then I release funds on Base to the user. And that's all well and good. That's how I think a lot of first-generation bridges work.
And the problem with that is I need to wait for finality or a high degree of finality on blockchain A before I send that message.
Because if that message is wrong, I'm going to like release funds from a pool or something or mint tokens that aren't backed if there's a reorg on A.
So I have to wait for finality and then I have to send that message.
And that's like a slow process.
So what Across did is, our whole MO is we want bridging to be a two-second-or-less experience. We want it to be fast and cheap, but we think
like this speed is critically important. So we're like, okay, well, we have finality constraints
on the origin chain that just are not going to let us do stuff in two seconds, at least with other people's money. However, we could introduce a third actor, a solver or relayer, same term, that is sophisticated enough to price the reorg risk or understand the risks they're taking.
And we could use them to front money to the user on the destination chain very quickly.
And then they get paid back later, after we verify the fill happened.
So this is the intent-based model where a user effectively deposits on chain A.
They deposit into escrow.
A race starts for the third-party solver to fill them on chain B.
Third-party solver fills them on chain B very quickly.
and then user goes on with their day, they're happy,
and then the protocol verifies that that fill happened before releasing the funds back to the solver.
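The deposit → fill → verify → repay flow described here can be sketched as a toy simulation; plain dicts stand in for chains, and the function names are illustrative only, not the actual Across contract interface:

```python
# Toy simulation of the intent flow: deposit into escrow on chain A,
# solver fronts funds on chain B, protocol verifies and repays.
# Plain dicts stand in for chains; names are illustrative, not Across's API.

escrow_a = {}                               # escrow on origin chain A
balances_b = {"user": 0, "solver": 1_000}   # balances on destination chain B
fills = []                                  # fills awaiting verification

def deposit(user, amount, intent_id):
    """User locks funds into escrow on chain A, expressing an intent."""
    escrow_a[intent_id] = {"user": user, "amount": amount}

def fill(solver, intent_id):
    """Solver fronts its own funds to the user on chain B, within seconds."""
    intent = escrow_a[intent_id]
    balances_b[solver] -= intent["amount"]
    balances_b[intent["user"]] += intent["amount"]
    fills.append({"intent_id": intent_id, "solver": solver})

def settle():
    """Later: the protocol verifies each fill happened, then repays solvers."""
    for f in fills:
        intent = escrow_a.pop(f["intent_id"])
        balances_b[f["solver"]] += intent["amount"]  # repayment, simplified

deposit("user", 100, intent_id=1)
fill("solver", intent_id=1)   # user has unencumbered funds immediately
settle()                      # solver is made whole after verification
print(balances_b)             # {'user': 100, 'solver': 1000}
```

The key point of the design is visible in the ordering: the user is paid at `fill` time, long before `settle` runs, and only the solver carries risk in between.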
Yeah, I mean, it's a slightly different design than many optimistic bridges, where you bridge optimistically and there's a message sent across that bridge. And then on the other side, someone fronts you the liquidity, so you can already use the funds on the destination chain, for a couple of basis points, until the optimistic time period has run out. So how would you pit these against one another?
I mean, there's no optimistic part in what I just described. We can get to that maybe in the verification of this. But the fact is, the user wants funds unencumbered on the
destination chain. So we just use the third-party solver to just give them their money. And then the
user goes and they have the money and they can do whatever they want. And there's no chance of like a
rollback. They just, they have the funds. Right. So it's very clear like there's nothing weird going on
there. And the solver just has some risk that, if there's a reorg on chain A, there's some risk of ruin that
they like lose their funds or something like that too. But they're very good at pricing this. And we use
market forces to price this reorg risk all over the place. And so it really does allow us to have
two-second bridge experiences between all the major L2s with very low fees.
Can anyone be a solver? So say, for instance, I had dirty funds, say from some sort of hack, that I wanted to distribute amongst innocent people. Could I just send this to people on the destination chain and reclaim the clean funds that they initially deposited?
Man, you're asking like the hard questions here.
Solving is permissionless with like asterisks and caveats.
We have not had any rogue or dirty solvers participate in our network, and we do have ways to prevent them from participating in our network while still maintaining
the permissionless ethos.
I don't know that I actually want to share all the ways that we have to prevent them for reasons.
But it's a very good question that we've actually gone pretty deep on.
There's also not a very good mechanism to wash funds either.
Like, you have to compete against our other solvers, which are very competitive, and win.
So it would actually require a lot of expertise to do this at any type of scale.
But so does all laundering.
Laundering is an incredibly labor-intensive business, right? This is why Lazarus has like 20,000 people doing this.
Is it 20,000 now?
Well, it's 25,000 in total, but a majority of them are supposedly doing laundering.
So, yeah.
Yeah.
Okay.
Well, for our friends out there, I'm not going to share how we can prevent this, but we can prevent it, I think, in a fairly robust way,
while also securing the ethos of permissionless participation in our network.
And we haven't seen any evidence of this to date, too.
How does the pricing work?
So there are lots of things that I have to factor in: how often can I use my funds one way or the other, is there a preferred way of bridging, do I have to route them back the long way, all of these things. I assume every solver does this on their end. But do you have any idea of what goes into this pricing mechanism?
Yeah, I have a lot.
And it's super nerdy and super fun.
But you can actually step back and be like, okay, so filling this intent, again, go back to our example from A to B, from Arbitrum to Base.
Filling this intent, we're actually kind of competing on two dimensions, competing on price and competing on speed.
Who can do it fastest?
And what we've chosen to do to date is actually just compete on speed.
So our API, you know, you can set your own fee.
It's permissionless that way.
But our API will tell most users that use like our API or use our front end.
We will suggest a fee of what the user should submit.
And then all the solvers are purely competing to win that fee on the destination chain,
competing based on speed.
And we've done this because we've really tried to sell the fast user experience. And so we definitely, in many ways, we charge, well, definitionally,
we charge too much. We charge more than the marginal cost, the purely competitive price of what
it would take to fill on the destination chain because we are trying to have all the solvers compete
on speed. And we've got our own heuristics to kind of set that price in a way that we think
is going to lead to good competition on the fill.
There's a very cool idea, probably going to be a white paper there, where we can do both. I'm not quite ready to talk about it, but I think there are very cool ways that we can do both that still have a good UX, which I'm excited about.
But yeah, that's what we're doing today.
And, you know, setting that fee: the fee is a matter of gas costs on the origin and destination chains.
Our gas costs of our system are very lightweight because we batch the verification,
which we can talk about, the UMA component, here too.
So our gas costs are extremely lightweight, both sides, but there's still a gas cost.
So that goes into the fee estimation.
The other costs would be, okay, how long before we repay the solver?
So they're making a loan.
What's the cost of capital?
How do we manage that?
The third cost would be rebalancing costs.
So the solver has to front money on this chain.
How easy or cheap or how long does it take to rebalance funds on and off of that chain,
which is a very difficult thing to measure, but you can kind of approximate it.
And then the fourth cost, we actually get to ignore.
The fourth cost would be like, what's the risk of a reorg on the origin chain, which would cause the solver to not get repaid?
That actually gets priced into the speed competition.
So solvers can be like, okay, how many blocks of finality on the origin chain do I need to see the user's deposit transaction into escrow before I feel confident that the risk of reorg is low enough for me to actually take this risk on?
So that's, that one kind of gets lumped into speed here.
And so we get to think about this pretty deeply.
It's pretty fun.
But that's the nerdy answer to your nerdy question.
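As a rough illustration, the cost components listed here could be combined into a fee estimate like this; the function and every number in it are hypothetical, not Across's actual heuristics:

```python
# Hypothetical fee estimate combining the cost components Hart lists:
# gas on both chains, the solver's cost of capital until repayment, and
# rebalancing. Reorg risk is deliberately absent: solvers absorb it by
# choosing how many origin-chain confirmations to wait for (the speed race).
# All numbers below are made up for illustration.

def suggested_fee(amount, gas_origin, gas_dest,
                  repay_delay_hours, annual_capital_rate, rebalance_bps):
    gas = gas_origin + gas_dest
    # cost of the solver's short-term loan until the protocol repays them
    capital = amount * annual_capital_rate * repay_delay_hours / (365 * 24)
    # approximate cost of moving inventory back onto the destination chain
    rebalance = amount * rebalance_bps / 10_000
    return gas + capital + rebalance

fee = suggested_fee(amount=10_000, gas_origin=0.05, gas_dest=0.10,
                    repay_delay_hours=2, annual_capital_rate=0.05,
                    rebalance_bps=1)
print(round(fee, 4))
```

Note how the capital term scales with the repayment delay: shortening the settlement window directly lowers the fee a solver needs to charge.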
It's kind of like a reverse Dutch auction.
But yeah, and you send the bid at the point in time where you feel like the costs balance.
So you recently switched to a ZK bridge, or a ZK verification mechanism, for part of the bridge experience, namely to BSC, right?
Correct.
So we've added ZK, ZK magic, and it really is magic.
We've added it to part of the Across intent verification.
Let's call it the settlement system here.
So I look at across and all intent systems in three layers.
There's like the auction layer of who's going to fill the intent.
There's the solver layer of who's the solver that's actually doing the work to fill the intent.
And there's the settlement layer where funds go into escrow.
And then we verify that the intent got filled before releasing funds.
So one thing that across does that I think has been critically important to us working well to date is we don't verify each intent individually.
So the naive system would be like, okay, this intent got filled, I now need to send a message proving that the intent got filled to release that user's funds from escrow. And that means I'm sending one message per bridge transaction, and, you know, we're doing 50,000-plus transactions a day. It's a lot of messages that are costly, just a lot of stuff. And so we've always taken the approach where it's like, well,
and you also have to send the message, it's pairwise, right? So if I support N chains, I have
N squared routes. And so I have to send messages in all these directions, and it's kind of a mess. So we've always had the approach that, actually, we're better off batching all of the verifications into a period, an epoch. And we make it about an hour. So we say,
okay, for every hour, we're going to look at all the fills across all the chains. And we're going to
create this data structure. We call it a bundle that says, here's all the fills that happen at all
the chains. We sum them up. If you're a solver that did like 10 fills on one route, we say we're going to pay you back once for all these fills, which also reduces gas costs, all this stuff.
And then we take this bundle. We optimistically verify it using UMA on mainnet. And then, once that bundle has not been challenged through the challenge window, we use the canonical message bridges to send it to all the L2s, to a contract that will release funds, if there are funds to be released, on all those chains.
So it's a long process I just walked through, but we're basically doing batching, we're optimistically verifying this batch, and then we're using the canonical bridges to send that bundle to all the chains.
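The hourly batching idea can be sketched roughly like this, with a made-up fill format (the real bundle data structure is more involved than a per-solver sum):

```python
# Sketch of the hourly batching described above: instead of one message per
# fill, fills are aggregated into one net repayment per (solver, route).
# The fill/bundle format here is illustrative, not the real data structure.

from collections import defaultdict

fills = [
    {"solver": "s1", "route": ("arbitrum", "base"), "amount": 100},
    {"solver": "s1", "route": ("arbitrum", "base"), "amount": 250},
    {"solver": "s2", "route": ("optimism", "base"), "amount": 400},
]

def build_bundle(fills):
    """Sum all fills in the epoch into one repayment per solver per route."""
    repayments = defaultdict(int)
    for f in fills:
        repayments[(f["solver"], f["route"])] += f["amount"]
    return dict(repayments)

bundle = build_bundle(fills)
print(bundle)
# anyone can recreate this bundle off-chain and dispute it via UMA if wrong
```

Because the aggregation is deterministic, any observer can rerun it over the same epoch of fills and check the proposed bundle, which is exactly what makes the optimistic verification work.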
That makes a lot of sense. Let me just quickly interject here: the people who verify this on the UMA network, are they in principle the same crowd you also go to for Polymarket statements and so on? Because verifying a data structure and saying who was elected president are two very different skill sets, and one is much more vanilla than the other, right? So how do you skill up your validators?
Well, I think you're sort of skipping one step here, because there's the disputer in the Across system.
So the disputers actually naturally are solvers in our system, too,
where they're like, wait, this bundle doesn't pay me back the money I'm owed.
I'm going to dispute it, right?
So we actually have a natural set of sophisticated disputers for the Across system, to be like, this bundle isn't right.
Once it's been disputed, then you're absolutely correct.
Then the same UMA voters that are voting on did Trump or Harris win the election are now voting on, was bundle A or bundle B correct?
Right.
Or rather, they're voting on was bundle A correct or did it miss something?
But there's now a period of time where somebody can go and say: if you run this code, which you can and should go do yourself, you can see that this bundle was incorrect for this reason.
And so then the voters are just able to capture that pretty effectively.
Okay.
Does that make sense?
Yeah, that makes sense.
Maybe let's talk about the wider bridging space.
Can I go back? Because we didn't talk about the ZK bit in Across yet, and I want to make sure we get that.
So first, if you go back to our design: we're doing this batching, optimistically verifying this bundle. And then, because we had this belief that we didn't want to add in additional third-party trust assumptions, we were using the canonical bridges to send this bundle to all the chains.
So a requirement in the across system before ZK was that you had a canonical message bridge
that could connect to Ethereum L1.
And there are chains that don't have that and they're growing, right?
So what we've done with Succinct is we've created a ZK proof of that bundle. And then we're able to take that ZK proof and bring it to any chain, including chains that aren't connected to Ethereum mainnet, like Binance Smart Chain. And we can then verify fills as if they had a canonical message bridge going to that chain.
So we're able to use ZK to basically delete the need to have a canonical message bridge
between Ethereum L1 and the L2.
And this has advantages also for L2s that do have canonical message bridges, because they're all different. It takes us engineering work and smart contract audits to add new L2s, because their message bridge might look different, whereas the ZK proof is the same on every chain.
So our new process is: batch these intents, verify them optimistically, create a ZK proof, and then send that to all the chains.
You can imagine that we might be able to do more ZK stuff. Hint, hint: we might be able to do more ZK stuff that lets us further compress that process and get to more chains faster too.
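The key property described here is that the same verification logic runs unchanged on every chain, so no per-chain canonical message bridge is needed. Below, a hash commitment stands in for the proof to keep the sketch self-contained; real ZK proving (e.g. with Succinct's tooling) is far more involved:

```python
# Stand-in for the property described above: identical verifier code on
# every chain replaces per-chain canonical message bridges. The "proof"
# here is just a hash commitment, not a real ZK proof.

import hashlib
import json

def commit(bundle):
    """Deterministic commitment to a bundle, same everywhere."""
    return hashlib.sha256(json.dumps(bundle, sort_keys=True).encode()).hexdigest()

def verify_on_chain(bundle, proof):
    """Identical verifier logic deployed on every chain."""
    return commit(bundle) == proof

bundle = {"s1 arbitrum->base": 350, "s2 optimism->base": 400}
proof = commit(bundle)

# works the same on chains with or without a message bridge to Ethereum L1
for chain in ("base", "arbitrum", "bsc"):
    assert verify_on_chain(bundle, proof)
print("bundle accepted on all chains")
```

A real ZK proof adds what the hash cannot: it convinces the destination chain that the bundle was computed correctly, not merely that it matches a previously seen commitment.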
Cool.
Yeah, no, that definitely makes a lot of sense.
How do you see the fragmentation in the bridging space? Do you think we'll converge towards one dominant model, or do you think we'll retain this balkanized state?
There's an incredible amount of nuance in that question.
But again, you have a nerdy listener base so we can touch on it, right?
Bridging is like an overloaded word that means a bunch of things here.
There are many bridges that are mint and burn bridges for specific tokens.
Example: Circle's USDC. They have their own mint-and-burn bridge called CCTP, the Circle Cross-Chain Transfer Protocol, that lets you burn USDC on one chain and mint it on another.
So it's a form of bridge, but it's quite different from what we do.
It's meant to be really secure, and they take 20 minutes coming from Ethereum and stuff like that,
because Circle does not want to just mint $10 million on some chain by mistake.
They would lose their own money.
Okay.
And then you've got layer zero with OFTs.
So they're using OFTs to do something similar with their own set of security assumptions,
with their own set of validators.
You have Wormhole doing NTTs, same deal, using their own validators.
But these are not the bridges that look like what we're doing.
These are like allowing you to mint and burn a specific token on different chains.
And that to me feels very fragmented.
And that's not the business that Across is in at all, nor a business we're going to enter.
And I do feel like there is going to be fragmentation there for a long time.
There's going to be like many teams that are warring for market share around minting and burning
tokens here. That's one thing. On the bridging side, when I think about the bridging architecture,
I think intents are clearly the only way that Ethereum is going to feel united, and broader crypto is going to feel united too. And I do not think that means everybody uses Across. I don't think that's right. But the intent standard, the intent concept, this need for speed, for having transactions take like two seconds or less,
I think that is critical from just a user experience in a multi-chain world.
So we created a standard for intents.
We co-authored a standard with Uniswap, called ERC-7683, that defines a common interface to express an intent.
And we're doing this as a public good so that many intent protocols can adopt the same
intent standard.
And solvers can listen to the same intent format, which makes it less work for solvers to support intent systems.
And in doing so, we can push this two-second-or-less user experience everywhere, where Across is meant to be a service provider. I hope to be the premier service provider of intents, but by no means the only one.
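The idea of a shared intent format can be illustrated with a minimal sketch; the field names below are illustrative only, not the actual ERC-7683 structs (consult the ERC text for those):

```python
# Minimal illustration of a shared intent format: many protocols emit
# intents in one shape, and any solver can parse and fill them. Field
# names are illustrative only; see ERC-7683 for the actual structs.

from dataclasses import dataclass

@dataclass
class CrossChainIntent:
    user: str
    origin_chain_id: int
    destination_chain_id: int
    output_token: str
    amount: int
    fill_deadline: int   # unix time by which a solver must fill

def can_fill(intent, solver_inventory, now):
    """A solver's minimal eligibility check against a standardized intent."""
    key = (intent.destination_chain_id, intent.output_token)
    return now < intent.fill_deadline and solver_inventory.get(key, 0) >= intent.amount

# hypothetical intent: 500 USDC from Arbitrum (42161) to Base (8453)
intent = CrossChainIntent("0xuser", 42161, 8453, "USDC", 500, 1_700_000_000)
inventory = {(8453, "USDC"): 10_000}
print(can_fill(intent, inventory, now=1_699_999_000))   # True
```

The point of the standard is exactly this: one `can_fill`-style check works across every protocol that emits the shared format, so solvers don't need bespoke integration work per intent system.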
Does that make sense?
It's a long answer.
That makes sense.
Do you think bridging is something that, in the future, mostly builders will have to worry about? Do you think this experience will be abstracted away at the user experience layer?
Yes, although I've thought that for a couple of years, and it's been actually pretty slow to happen.
So, you know, Across is integrated into Uniswap, which I think is kind of cool. It's been a big integration for us. And we've sold Across, or we've attempted to talk to a bunch of dApps that should integrate Across into their products too.
And we've had some traction and some interest.
But when we look at our volume and where it's coming from, the majority of volume still comes
from front ends or aggregators, places where users go and they say, I want to bridge.
and they specifically are going to, like, the across.to front end to bridge from chain A to chain B.
And it's been slower than I expected for that process to get integrated and abstracted into the dApps that builders make.
And I don't actually think I totally know why yet.
My working theories are: crypto still has a bunch of degens that actually like doing bridging manually. They enjoy the process and they enjoy doing it. That's one theory.
The other theory is that the infrastructure and the standards around bridges haven't matured enough for builders to feel comfortable integrating it or really pushing it.
Yeah.
I think that's the one that I would probably subscribe to. You don't want to bake something into your system that you don't know is going to be around six months from now, right? So I think, as a builder, saying, okay, look, we'll just be agnostic here, is often simpler.
Yeah.
I think you're probably right.
Or I think it's a working theory.
I should say that.
And I also think wallets play an important role here. So EIP-7702, and putting smart contract wallets everywhere, I think is huge for intents and huge for Across. And this is something that, frankly, we need to be a bit louder about. Other people need to be a bit louder about it too.
But if I have a smart contract wallet everywhere,
there is a design pattern where I could keep my money on a home chain.
Like, let's keep doing the A to B example.
So I keep my money on Arbitrum. That's just my preferred bank, like my home-chain bank,
but I can sign a user op, or sign a message, to do any arbitrary action on chain B,
even if I have zero money there, like absolutely zero money there,
because I can pay for that action from my home chain.
And basically, the abstraction here is I'm bridging the gas money to execute this user op on chain B.
And I'm doing that incredibly quickly, right?
And so, you know, once we have smart contract wallets everywhere, on all the chains, this idea of, you know, I'm not really sending funds to the destination chain.
I'm sending a message I want to execute.
And I'm like just in time delivering the money to pay for that message.
I mean, I think that's a very compelling future of how you unify all the chains and make everything feel like one network again. So that's the UX that I'm most excited about.
And I do think that's mainly going to be driven from the wallets, but then there's fragmentation, and there's still a bunch of people that have to figure that one out too.
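The home-chain pattern described here can be sketched as a toy accounting simulation; plain dicts stand in for chains, and this is not EIP-7702 mechanics, just the flow of funds:

```python
# Toy accounting for the home-chain pattern: funds stay on one "home chain",
# and an action on the destination chain is paid for just in time by a
# solver. Plain dicts stand in for chains; this is not EIP-7702 mechanics.

home = {"user": 1_000}             # user's funds live on the home chain
dest = {"user": 0, "solver": 50}   # user holds nothing on the destination

def execute_signed_action(action_cost):
    """User signs an action for the destination chain; a solver fronts it."""
    dest["solver"] -= action_cost   # solver delivers funds just in time
    dest["user"] += action_cost
    dest["user"] -= action_cost     # the action executes, spending them
    home["user"] -= action_cost     # solver repaid (fee omitted) from home
    dest["solver"] += action_cost

execute_signed_action(action_cost=5)
print(home, dest)   # user paid from home; destination balances net out
```

The user never needed a balance on the destination chain: from their perspective, they signed one message and the cost simply came out of their home-chain account.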
Yeah.
So I mean, the entire smart-wallets-on-all-chains thing, there's a bunch of underlying problems that I think are not generally appreciated enough. For instance, obviously now with CREATE2 you can deploy at the same address on every chain. But then what happens if you lose one of the EOAs on one of the chains, or do you have to hold all the EOAs on all of the chains? What happens with recovery? It's a little bit of a messy situation. And I've also always been in the camp that we need smart wallets, just because it's clearly the way better user experience, but there still are a bunch of unsolved user problems that come with it.
Yeah, well, you have experience here. You know this,
right? This is being in industry and doing the practical implementations, not just the theory. Yeah. And I think this is where, what worries me is just that it takes longer, because we've got to figure these problems out.
It takes longer to get the solution I want or I want to see in the world.
It doesn't change the fact that I still think we're heading in the right path.
Like you said, we need smart contract wallets everywhere.
And then we need infrastructure.
And I really think of what we're doing with intent bridging as a layer of abstraction. The intent is just this layer of abstraction on top of some pretty complicated piping about how these chains might actually be connected. And so the solvers might be using Binance to rebalance, or whatever; it doesn't matter. There's just all this other piping under the surface that users shouldn't have to see, because they're operating on this thin intent layer of abstraction. And I have a lot of
conviction that that's like a necessary piece. I just want to get it there as fast as we can
and then make the user experience be as abstracted as possible. But yeah, it might be a little bit
before we do that.
How do you see the interplay between bridges, rollups, and shared sequencing? So could Across one day integrate with shared sequencers?
It's a really good question and really nuanced.
There are so many innovative technologies around pieces of interoperability. And, you know, to throw something else in there that you didn't mention: think ZK proof aggregation, kind of like some of what the AggLayer stuff is doing too.
You're going to have really interesting ways to send messages between chains and send funds. Shared sequencing too: there are fun ways that a couple of chains, or some set of chains, could do somewhat synchronous things between them.
I think is great.
When I look at this, there's no one technique that's going to touch everything.
It's like there's going to be these pockets of interoperability.
And you see this with, like, okay, the Superchain might have some interoperability here. And there might be, you know, some shared sequencers over here, and some AggLayer proof aggregation over here, and then these chains are connected by Binance over here,
and you just have this Venn diagram that does not have everything under one circle.
And the way I look at it is, intents are that one circle, that layer of abstraction, where we basically put the solver in charge of figuring it out.
And they can use any of these underlying techniques to do it faster and cheaper,
which lets them deliver the intent product to users at a better price.
So that's kind of how I see the interplay
between all these things. Does that make any sense?
Yeah, makes a lot of sense. I think there are still a lot of things to be hashed out, but I think this is also just the state of the technology, right? I have one final thing I'd like to touch upon, because we haven't actually talked about it at all. So UMA and Across are super cool projects, and what I always find fascinating is how well your governance systems work. If you look at the DAOs, they are pretty alive, right? So if you look at your DAOs compared to the common DAO, what do you think you guys are doing right?
Wow. You know, it's funny because
we just got some weird FUD on the internet attacking our governance, and it was wild, because I generally agree. I think we've actually done a pretty good job on this. We've been around
for a while. We actually have a lot of token holders that are not investors that hold like
reasonable chunks of tokens too, I think. And again, I don't actually track token addresses or know
who people are. But by being around for a while, we have former employees that like us, are happy with us, and have chunks of tokens, and actually participate in governance, which I think is cool.
Yeah, Friederike, this is an interesting question.
I mean, I appreciate it.
But I try to direct our foundation that's doing the work, Risk Labs, to be relatively opinionated about what we think is good or bad, in a way that minimizes DAO infighting. That's kind of what we try to do.
And we try to make proposals or suggestions that we think make a lot of sense and are well reasoned.
And then we like, yeah, have a bit of an opinion.
So I think, on that spectrum, we are by no means operating like a centralized company, like a centralized Delaware C corp. But we're also not trying to be like, hey, community, you guys figure it all out.
Yeah.
We're trying to do something in the middle.
We were basically like, here's what we think should happen from the Risk Labs Foundation perspective.
And we are looking for community sign-off on it, which frankly is how Delaware C corp shareholders interact.
It's like a management team makes a shareholder proposal and we ask shareholders to vote on it.
So I think that's the kind of analogy we've been going for.
Cool. For people who want to become involved with either UMA or Across, or who want to build on top of you guys, where do we send them?
So you can follow me on Twitter at hal2001. It's my initials, HAL, 2001, which is also from 2001: A Space Odyssey. And then the two projects, it's Across Protocol and UMA Protocol on X, or Twitter.
Give us a follow there. We're hiring, and building actually a lot of fun stuff. Sorry, small plug. And, you know, we're trying to engage; there's a bunch of really interesting stuff going on, both on the Polymarket side: how do we actually scale this oracle? How do we use AI in this? And then on the bridging side: how do we make Ethereum, and make Web3, just feel like one network again? So I'm pretty pumped about our near-term objectives too.
Super cool. Thank you for being on.
Thank you so much for having me, really fun.
