Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Inside Nansen's AI Trading Agent Platform
Episode Date: March 30, 2026

In this episode, host Friederike Ernst is joined by Alex Svanevik, CEO of Nansen, to explore the platform's radical pivot from passive on-chain analytics to active, AI-driven agentic trading. Alex unpacks the technical hurdles of labeling over 500 million addresses, the transition from raw data into harmonized insights, and why true alpha now lies in attribution rather than raw data. He explains how Nansen uses ClickHouse databases and a mix of algorithmic heuristics, agentic teams, and human specialists to maintain the highest precision in the industry. The conversation dives deep into the intersection of LLMs and blockchain, exploring how standard AI models lack domain-specific common sense and why Nansen augments them with real-time data and visual "artifacts". Alex introduces "Nansen Gym", a simulated historical-replay environment for training trading agents, and teases the upcoming release of "Smart Money 2.0", which aims to predict future profitable addresses with a 2-3x uplift on precision. Finally, they discuss the existential risks of AI, the striking parallels between open-source AI and early DeFi, and why Alex believes agentic trading will be the absolute default by 2028.

Chapters
00:00 Intro & Context
04:15 Nansen's Evolution & Agentic Trading
09:30 Harmonizing Data & The Attribution Layer
15:00 Deterministic vs. Inferred Labeling (Uniswap vs. Binance)
21:45 Evaluating AI Agents: LLMs as Judges
27:10 User Privacy & Public Blockchain Realities
35:20 Building a Unified Trading OS
42:15 Smart Money 2.0: Predicting Which Wallets Win
49:00 The Limitations of Vanilla LLMs in Crypto
55:30 Nansen Gym & Time-Traveling AI Agents
59:45 The Open Source AI vs. DeFi Parallel

Links
Alex Svanevik on X: https://x.com/ASvanevik
Nansen: https://www.nansen.ai/
NEAR: https://near.ai/

Sponsors:
NEAR AI Cloud now lets developers deploy OpenClaw, the rapidly growing open-source AI agent platform, inside Trusted Execution Environments, providing hardware-level encryption with cryptographic attestations. With OpenClaw on NEAR AI Cloud, you can run agents with cloud convenience, but without traditional cloud data exposure. No hardware to manage. No trust assumptions required. Learn more at near.ai.
Transcript
Welcome to Epicenter, the show which talks about the technologies, projects and people driving decentralization and the blockchain revolution.
I'm Friederike Ernst, and today I'm speaking with Alex Svanevik, who is the CEO of Nansen.
Most people who are active on a chain are active because they're trading or investing.
What is your trading strategy?
What is your investment strategy?
You can tell that to the agent.
And then, of course, the first thing you would want the agent to do is to give you feedback on:
would this actually have worked?
If you deployed the strategy three months ago or one month ago, would you have made money?
That capability for a retail investor is like a superpower.
Do we want to be kind of a horizontal data company or analytics company that can serve many different use cases?
Or do we actually go vertical on the primary user segment that we know we can create the most value for and then do more than just data analytics?
Welcome to Epicenter, the show which talks about the technologies,
projects and people driving decentralization and the blockchain revolution.
I'm Friederike Ernst, and today I'm speaking with Alex Svanevik, who is the CEO of Nansen.
So Nansen started off as an on-chain analytics company and has recently moved into an AI
trading outfit.
And I am here to learn all about this today.
Hey, Alex, it's super nice to have you on again.
Yeah, good to be back.
Good to see again.
Cool.
So maybe for everyone who isn't familiar with Nansen,
can you give us the Nansen backstory in a minute?
Yes.
So my name is Alex.
I'm the co-founder and CEO.
My background is in AI initially.
My two co-founders are data engineers.
So we started out trying to label the entire blockchain, more or less,
because blockchains are pseudonymous or anonymous.
Some people think of them as: you don't know who's behind the address.
And so we tried to solve that.
And so we've built a data platform that has more than 500 million addresses labeled and a whole analytics infrastructure
that allows you to see the flows on chain in real time. That's kind of the backstory,
and we are mostly used by investors and traders in the space, although other people also find
our product useful, whether it's for journalism or reporting, analysis, even compliance and so on,
but the primary focus we have is on-chain investing and trading. And for that reason,
which I'm sure we'll get into,
we've started basically turning Nansen into a full stack trading,
on-chain, agentic trading product.
So, yeah, it's a very exciting moment in our history
and super excited to talk more about agentic trading.
Yeah, absolutely.
It's fascinating how, kind of lots of people
who have some AI background kind of found their way
into the blockchain space.
And now kind of it's all coming full circle.
It's exciting.
Maybe before we dive into the AI aspect, tell us about the technical goings on for on-chain analytics.
So kind of how do you go from raw blockchain transactions to kind of meaningful insights?
Yeah. So in a nutshell, the most basic thing you have to solve is to get the raw on-chain data into a more convenient storage and compute layer.
And so you can't really run fast queries on just a normal blockchain node.
So you pull the data out of that into, for example, ClickHouse, which is what we use,
which allows extremely large queries across tons of data to be, you know,
run in less than a second.
So that's kind of the first part.
The second part is that you need to kind of harmonize all that data, maybe across different
chains to make sure that you can look at things like dex volume for different chains that might
have different data schemas and so on and so forth. And then the third part, which we, I would say,
probably are the best in the world at, is the attribution layer, where you label the addresses.
And so, you know, as nice as it is to have like very fast queries on raw on chain data or even
like harmonized on chain data, it really starts becoming useful for investors and traders once you can
kind of know what these flows actually are, right?
If you just see $100 million going from address ABC to address 123,
like you don't necessarily learn very much from that other than,
oh, that was a large tether transaction.
But if you see that, oh, this is from FTX going out to one of their, you know,
fund clients, for example, that's kind of an interesting thing to know about,
especially if FTX, you know, says that they have closed off their withdrawals,
which literally happened.
They did that or they said that.
And then we could see on chain that flows were actually coming out of FTX.
So, yeah, so the attribution layer, which we could, in theory, spend the whole podcast talking about is very multifaceted.
We've kind of evolved our approach to labeling addresses over the years.
Right now, it's very agentic.
And so we have basically agent teams that can go out and figure out who is behind addresses.
We also do a lot of algorithmic work that is not agentic, but it's still super important.
And then we have human attribution specialists internally who also make sure that the quality is super high of our labeling, that we are prioritizing the right domains that need to be labeled and so on and so forth.
This episode is brought to you by Near AI Cloud.
OpenClaw is one of the biggest stories in AI right now.
Rapidly, it gained over 200,000 GitHub stars with adoption from Silicon Valley to Beijing.
Why? Because OpenClaw makes it easy to create an AI agent that actually does stuff, like manage emails, browse the web, schedule
appointments and remember contacts across weeks of interaction. But where do you securely store and run
an always-on agent that needs persistent access to your most sensitive data? Local deployment means
you have to manage expensive hardware at home, and traditional cloud means surrendering full access
to your data to some cloud provider. Near AI Cloud solves this problem by running OpenClaw
inside trusted execution environments. These are hardware-level secure enclaves where your agent operates
in encrypted memory that even Near AI can't inspect.
It's not a promise that they won't look.
It's cryptographic guarantees that they can't.
With OpenClaw on Near AI Cloud, you get cloud convenience without data exposure.
There's no hardware to manage and no trust assumptions required.
You can learn more at near.ai.
How do you go from raw blockchain transactions to meaningful insights?
Yeah.
So there's a few different layers
to it. The first part is you need to get the raw on-chain data into something that is more convenient
for running analytical queries, so typically a database. In our case, we use something called
ClickHouse, and we index all the data in-house. Well, we do run some nodes in-house, but a lot of
the node provisioning is actually outsourced because it is very commoditized. So we use node
providers like QuickNode, Alchemy, and so on, and then we index the data ourselves. The second layer is
you have to kind of harmonize the data across chains.
Think of DEX trades.
That's like a conceptual thing,
but actually it's different on every blockchain
and on every DEX.
So you need to make sure you have harmonized data
that you can aggregate.
And the third layer is attribution,
which is labeling addresses,
which is what we are, I think, the best in the world at doing.
And that's a very multifaceted area.
We run agentic teams to label addresses,
which is pretty exciting.
We've been doing that for years now, and we're looking to scale that up right now, actually.
We're also doing a lot of algorithmic work that is non-agentic.
You know, think of kind of heuristics that you can work out on the structure of addresses of centralized exchanges or dexes or what have you.
So those are probably the three layers I would think about.
There's the raw on-chain data.
There's the harmonized and cleaned up data.
And then there's the enriched attribution powered data at the top.
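The harmonization layer Alex describes can be sketched in miniature: different chains and DEXes emit differently shaped swap events, and the goal is one common schema you can aggregate across. Everything below is an illustrative Python sketch with hypothetical field names, not Nansen's actual schema or pipeline.

```python
# Illustrative sketch of the "harmonization" layer: adapt per-source
# raw event shapes into one common record you can aggregate across chains.
# All field names here are hypothetical stand-ins.

def harmonize_uniswap_v2(raw: dict) -> dict:
    # Uniswap v2-style Swap events split amounts into 0In/1In/0Out/1Out.
    return {
        "chain": "ethereum",
        "dex": "uniswap_v2",
        "pool": raw["pair"],
        "amount_in": raw["amount0In"] or raw["amount1In"],
        "amount_out": raw["amount0Out"] or raw["amount1Out"],
        "block_time": raw["timestamp"],
    }

def harmonize_solana_amm(raw: dict) -> dict:
    # A hypothetical Solana AMM logs a single swap record, differently named.
    return {
        "chain": "solana",
        "dex": raw["program"],
        "pool": raw["market"],
        "amount_in": raw["in_amount"],
        "amount_out": raw["out_amount"],
        "block_time": raw["block_time"],
    }

HARMONIZERS = {
    "uniswap_v2": harmonize_uniswap_v2,
    "solana_amm": harmonize_solana_amm,
}

def harmonize(source: str, raw: dict) -> dict:
    """Route a raw event through the right adapter into the common schema."""
    return HARMONIZERS[source](raw)
```

Once every source lands in the same shape, cross-chain aggregates like "DEX volume per chain" become a single group-by instead of per-DEX special cases.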
So I think I kind of understand how you go from kind of like the basic to the enriched.
But kind of how do you even come by the information that you need for the basic?
So say Binance spins up a new wallet; they don't have an API that you can pull that information from.
Right.
So kind of like how do you kind of glean these insights from the blockchain?
Yeah.
So you have to study the behaviors of different entities.
some behaviors are deterministic because they're smart contract driven.
So think of like a uniswap pool that's deterministic.
You can literally just look at the PairCreated events from, you know, one of the factory contracts.
And then you know, hey, if this contract was created with this event, then we can label the new smart contract as a Uniswap pool.
For Binance, because the logic is technically off-chain, you have to infer.
So you have to look at, okay, this is, you know, a Binance main wallet.
You know, it could either be like widely reported to be that,
or you can literally send some money to Binance and see where the flows go.
And then you could say, hey, this main wallet is where the funds ended up.
This is my deposit wallet.
Let's look at any characteristics of that that I could find.
And sometimes you have hybrid approaches.
Some centralized exchanges will also use smart contracts.
They used to use something called forwarder contracts.
I think BitGo maybe created those smart contracts back in the day.
But you have to rely on a combination of inference and deterministic approaches.
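The two labeling approaches Alex contrasts, deterministic (contract events) and inferred (behavioral heuristics), can be sketched as follows. This is a hypothetical illustration only; the event shapes and the deposit-wallet heuristic are simplified stand-ins, not Nansen's actual logic.

```python
# Sketch of deterministic vs. inferred labeling. Both functions take
# made-up event/transfer dicts; real pipelines decode these from chain data.

def label_from_factory_events(events):
    """Deterministic: a PairCreated-style event from a known factory
    lets you label the newly created pool address with certainty."""
    labels = {}
    for ev in events:
        if ev["event"] == "PairCreated":
            labels[ev["pair_address"]] = "uniswap_v2_pool"
    return labels

def label_deposit_wallets(transfers, known_hot_wallet):
    """Inferred: an address whose outgoing transfers all go to a known
    exchange hot wallet looks like a deposit wallet for that exchange.
    (A toy heuristic; real ones weigh many behavioral signals.)"""
    labels = {}
    by_sender = {}
    for t in transfers:
        by_sender.setdefault(t["from"], []).append(t)
    for sender, ts in by_sender.items():
        if all(t["to"] == known_hot_wallet for t in ts):
            labels[sender] = "exchange_deposit_wallet"
    return labels
```

The deterministic path yields high-confidence labels for free; the inferred path is where evidence compilation and human review, discussed next, earn their keep.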
Okay.
Do you have an error reporting path?
So kind of like if for instance I feel something's been mislabeled because kind of like some of it is inference?
And I'm not just saying this to brag, but it's very rare that we get reports of errors, because we do a lot of quality assurance in-house.
And for every label that is created in Nansen, we have the actual evidence compiled for it.
Literally every single label has evidence.
And because of that hard requirement, that you cannot add a label to the database unless you've compiled the evidence for it, whether you're
an agent or a human, it's much harder to screw things up. You can screw things up, of course.
But of course, we also have human attribution specialists internally who review both individual
submissions. Not all 500 million, admittedly. That wouldn't work. But they do take like the harder
cases, for example. And they also, of course, review the processes and the heuristics and the agents.
We also do a lot of evals.
This is kind of moving a bit into the AI area,
but people might be familiar with like benchmarks.
If a new model launches,
you look at model evals or benchmarks
to say, hey, you know,
Kimi K2.5 is better than, you know,
Minimax 2.1 or whatever.
These are often like model evals,
but you also have task evals
that you can run for specific tasks.
So we have our own attribution
evals internally that we run for our agents. And this is kind of following best practices for how
you run benchmarks and e-vals for AI agents. So we use, you know, LLM as a judge where other agents are
judging the work of the main agents. And then the humans judge the LLM as a judge. So it's kind of like a
meta thing where you have to like judge the judges. But yeah, so without going too much into the weeds,
we do focus a lot on quality assurance
because we've always kind of strongly believed
that it takes a long time to gain trust,
but it's very easy to lose trust.
So you have to make sure that the precision is extremely high.
And, you know, I'm proud to say that, compared to some of the other vendors I've seen out there,
we tend to have very few errors reported.
And I'm happy about that.
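A minimal sketch of the task-eval setup described here: agents propose labels, an "LLM as a judge" scores each against its compiled evidence, and humans audit the judge. The `judge` function below is a trivial stand-in for what would really be another model call; everything here is illustrative, not Nansen's eval harness.

```python
# Toy task-eval harness: score proposed labels both against gold answers
# and via a (stand-in) LLM judge that checks the compiled evidence.

def judge(label: str, evidence: str) -> bool:
    # Placeholder for "LLM as a judge": does the evidence support the label?
    # A real judge would be a separate model call with a grading rubric.
    return label.lower() in evidence.lower()

def run_eval(cases):
    """cases: list of (proposed_label, evidence, gold_label) tuples."""
    correct = judged_ok = 0
    for proposed, evidence, gold in cases:
        if judge(proposed, evidence):
            judged_ok += 1
        if proposed == gold:
            correct += 1
    n = len(cases)
    return {
        "precision_vs_gold": correct / n,   # what humans ultimately verify
        "judge_pass_rate": judged_ok / n,   # what the judge thinks
    }
```

Comparing the two rates is the "judge the judges" step: when they diverge, either the agents or the judge needs fixing, and that is what the human reviewers arbitrate.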
Yeah.
Your tools are so powerful that they allow labeling addresses as individual people, even if they're not particularly notable, right?
So where do you draw the line?
Having Vitalik's main wallet labeled is probably fine,
but I have wallets that I don't want attributable to me.
Where do you draw the line there?
Yeah, I think there are two kind of core principles that are worth thinking about for this.
The first one is that we rely on public information, right, information that's in the public domain.
So if we think about a specific example, let's say that you are active as a governance participant,
which is maybe not totally unlikely, right?
It's the same for myself.
I might write on a forum, hey, you know, I'm voting for this and here's my address and so on.
And that's information in the public domain.
And we take that as kind of a signal that, well, it's in the public domain.
So presumably it's fine to label that entity, whether it's an individual or a company,
because it's in the public domain.
Now, the other principle is that individuals are different from corporations and projects.
And so if you come to us and you say, hey, I actually want that label removed.
As an individual, you can do that and we will remove the label.
There are a few edge cases though, which maybe are uncomfortable for people, but it's worth knowing about.
We have had examples of people who maybe have an ENS name that they registered, not thinking about the fact that now you're basically potentially exposing the identity of the address.
And so there was one example where a user reached out because they had bought an ENS domain for their wife.
And it was the wife's name.
And so they wanted to remove the ENS name.
but then we had to educate them and say, look, this is actually on the blockchain.
Like, this is not, it's like immutable.
Even if we remove it from our database, the ENS name is always going to be etched into that address.
That's just life.
I'm sorry.
So I think that's kind of maybe a third principle that like, look, blockchains are public and transparent by default.
And so don't hate the player, hate the chain at the end of the day.
If you are not happy with that.
But it's like, you know, some people learn this the hard way.
Some people maybe think a bit about it before they do stuff.
But yeah, so those are kind of some of the basic principles we follow.
Yeah.
Makes complete sense.
Maybe let's talk about the trading aspect more.
So if you look at blockchain data, I mean, you guys are very good at it,
but it is all public.
Do you think in some way it's kind of become commoditized, or is there real alpha still in
it? I think raw on-chain data, definitely the data itself, is commoditized. There's no doubt about that.
I think what is not commoditized is the enrichment that you do on top of it. And so we constantly get,
you know, requests from other analytics or data providers to get access to our labels, for example,
because they know they're very valuable and hard to recreate. So if you know, like I
said before, that this is actually, you know, a fund investing into a
protocol's treasury, that's a very interesting signal. And there are
tons of examples where you could have seen that on chain before it's even announced, right?
And as an investor, I think most people want access to that, but it does depend on like the
attribution and the labeling of addresses. So I think that part is not yet commoditized. I don't
think it will be commoditized for a while, although I would say that, you know, we are trying to find
ways to rapidly scale this up so that the unit economics might be so favorable that we could
share more of our data, actually. But that's a work in progress. I think the other part is
it's also kind of non-trivial to do a lot of the infrastructure around on-chain data, especially
as chains have more and more throughput, right?
It's important to understand that if a chain brags about having a very low latency
or very high throughput, that means they're going to have a lot of data, right, if people
use the chain, which, if we're being honest, is not always the case with many blockchains.
But if you think about a chain like Solana, it has tons of data, right, because the throughput
is so high and people actually use it.
And so you need to have really good infrastructure where, yeah, you could potentially get all that data onto your, like, hobby, you know, computer set up at home or something like that.
But it's actually pretty hard to make sure that it stays in sync and has super good performance in terms of the query latency.
So some of those things are a bit harder to do as a hobbyist or, you know, if you're not super serious about it.
So I would say probably if I would kind of highlight two things that make it difficult would be number one, the enrichment and the attribution, which fundamentally is off-chain data that you're adding on top of the on-chain data.
And then the second part would be the infrastructure that's required.
But to your point, you know, we have kind of never wanted to be only a data company, right?
And we've seen over the years that the primary user segment that gets
the most value out of our product is the trader and the investor, right?
And this is, like, I think we should know, we should not pretend otherwise.
This is the most common persona in crypto, right?
There's been a lot of talk about kind of Web3 gaming, which I was very excited about,
as many others and other use cases for blockchains.
I'm not saying that those will never pan out, but the reality is that most people who are active
on a chain are active because they're trading or investing, right?
And that's not a bad thing, but it's just something we should acknowledge.
So we kind of had a choice when it comes to the strategy of the company.
Do we want to be kind of a horizontal data company or analytics company that can serve many
different use cases?
Or do we actually go vertical on the primary user segment that we know we can create the
most value for and then do more than just data analytics?
And we chose that second path.
Yeah.
So you saw your users discover signals on Nansen and then leave the platform to execute trades elsewhere, right?
Okay, I understand the rationale.
Tell me what it entails to build up this, almost like a trading operating system, right?
Yeah, and I want to be very clear.
The way I explain it now, it sounds like it's all coming from the company's perspective, like what is best for the company.
But I think anyone who has traded on chain will acknowledge that the user experience is pretty shit, right?
If you think about it, it's horrendous.
And like one reason why centralized exchanges have been dominant versus decentralized exchanges is because the user experience is so much better.
It is faster.
You know, it has historically had lower fees.
And it's all integrated into one product.
And so the realization we had was that actually,
we think the infrastructure now for on-chain is good enough to create a user experience
that is as good or better than a centralized exchange.
And I want to be very clear in our ambition to create the best on-chain trading products
in the universe, there's no chance we can do that without relying on incredibly good
infrastructure partners.
And so we use Privy, for example, for embedded wallets, self-custodial embedded wallets.
We use multiple different DEX aggregators under the hood.
You know, we use LI.FI, we use the OKX DEX API,
we use Jupiter on Solana.
We'll probably integrate more.
And then, of course, all the different blockchains, right,
that you can use, and layer twos.
And so I think the exciting thing from a user point of view
is that if you download our mobile app,
Nansen AI, and you go in,
and you literally start talking with the agent,
you now have an agentic way to explore what's happening on chain,
but also to say, hey, put $20 into this token
and it finds the best route for you.
Let me say the best route that we can find among our aggregators.
And of course we aspire for that to be the best in the universe.
And then you just execute in the same product.
So instead of this kind of go to an information product like Nansen,
then you go to, say, CowSwap or you go to Jupiter,
then you have to like connect with a wallet.
It's like three different products that you have to touch just to do that one thing.
We've tried to just kind of bundle it together in one unified, integrated user experience.
So I think like that's important.
It's not just that it's good for us to get the trading onto our product.
It's fundamentally just a better user experience at the end of the day.
That makes a ton of sense.
how do you seed the trust in that product, right?
Because people are kind of set in their ways, right?
Like, even if you look at how many people
will use Uniswap despite the fact that it probably gives you
worse prices than CowSwap, just because the UI is sticky, right?
It's kind of what you're used to.
And how do you unlearn that behavior?
Yeah, I mean, one benefit we have is that the
user is already on Nansen because they're already looking for the information.
And so this is kind of one of the realizations where like I do think we have an advantage
because we already, I'm not going to say we own the user.
That sounds kind of bad.
But we own the direct relationship with the user because we are already part of their
journey, their investor journey.
They come to Nansen to discover tokens.
They come to Nansen to do diligence on them.
They come to us to see what the smart money is doing.
And so we're already in their journey.
And so the thinking is that if we can just have like a very simple one click button
that allows them to execute the trade there too,
then it's actually very convenient.
And the key thing is that it also has to be, you know,
as good prices is possible to get as well as low fees.
Right.
So you shouldn't feel that, okay, cool,
Nansen has this, but, like, the fees are outrageous, you know.
And I'm not going to call out certain wallet providers
who charge a lot of fees to place a swap.
But yeah, the idea is that we're kind of already in the investor journey.
And I think a lot of them trust us because they know we have the best data.
And so we kind of have a bit of a starting point there.
And then I think we should also just be transparent on like what's happening under the hood.
Which infrastructure providers do we work with?
What the hell is this wallet?
you know, who we actually show in the UI as well, like which aggregator is going to fulfill
the transaction, where are we getting the quote from, right?
So one of our values at Nansen, as you can imagine, is transparency.
And so we also try to lean on that a lot.
I think if you're transparent, you can build trust faster.
Yeah.
Tell me about this Nansen AI agent.
So I open the app.
What can I ask it?
You can ask it anything about on-chain.
So the way I would think about it, if you have used the Nansen product,
you might have found it like overwhelming, which section do I go to, which dashboard do I look at.
And so now you can bypass all of that.
And you can just ask it what you want.
Like, hey, who's buying this token, right?
Or here's the transaction hash.
Just help me understand what the hell happened in this transaction.
Or, you know, what tokens are smart money buying right now?
What are the top holdings of DeFiance Capital?
Which tokens is Wintermute market making the most?
Like anything that you can imagine finding on a blockchain,
it will most likely have the best answer for that.
We've also gone a little bit beyond on-chain.
So we have integrations with X because you want to get social media feeds.
We're also integrating with prediction markets now.
That's a bit of an alpha leak.
That, I think, was rolled out yesterday into the agent, so people can, like, try that out.
You can ask about perps.
We have, like, probably the best data on Hyperliquid, for example.
So you can kind of, anything that's on-chain related, I would say, you can ask it about.
And it has all of the data in real time.
It's not like, we train the model, and then there's some historical snapshot.
It has access to, literally, data from seconds ago.
So you can make a transaction.
That's, like, one fun thing.
You can make a transaction, copy the hash.
dump it into the chat and just see, hey, does it understand what this transaction was?
So, yeah, you can do a lot of stuff.
I think people should just like download it and try it out and play around with it.
Yeah, super interesting.
Have you ever thought about kind of using this in a forward looking way rather than kind of a backward looking way?
So kind of, I mean, there are companies that specifically kind of screen transactions for you and kind of tell you what they do and flag if something puts you at risk.
So with all the data in the background, it seems like you would be ideally suited to kind of offer this.
Yeah.
I guess you're talking about, like, Chainalysis and Elliptic, TRM Labs, those kinds of compliance or AML-focused companies, right?
Yeah.
Yeah.
Yeah.
So about once every two weeks, someone asks me, like, when are you going to launch an AML or compliance product?
You have all the data, so why not do it?
And the short answer is that it's very different from our mission.
Our mission is to surface the signal and create winners,
and we are very focused on the crypto natives.
These companies are, I think, great companies
at what they do, but I think it would be a distraction for us.
I think we want to make sure that we can focus on
the core kind of on-chain investors and traders.
But on your point on forward-looking,
to have a different spin on it:
so kind of the security aspect, right?
So kind of like what Hypernative does, for instance,
where they screen
your transaction before you click send. Yeah, that's a great clarification. So there's
different ways to look at it. I think it makes sense to have some kind of risk score,
like, hey, this wallet, you know, has done certain nefarious things in the past. Again, I think
it's a little bit of a side quest for us, and we want to be laser-focused on the main quest. Some of
these things, I've often thought maybe we should just do a joint venture with someone, or someone else
could do that and focus on it and use our data.
But there is a third. So there's the compliance and AML one,
there's the security one that you mentioned.
And a third way to think about forward-looking at the address level,
which I'm very bullish on and excited about,
is can you actually predict which addresses are going to make money in the future?
And so that's something we are working on right now
and we have some promising research results
internally. And I think ultimately this is going to be released as what we refer to as Smart Money
2.0. So you can think of 1.0 as being backwards-looking: which addresses have had the highest
P&L in the last 30 days, 90 days, 180 days. But actually what you care about as an investor,
if you want to follow what the smart money is doing, is I want to look at the addresses that are
likely to make money in the future. Right. And in some ways, people might think this is, like,
impossible. And of course, it is impossible to do with 100% accuracy. But if you can get, like,
you know, depending on how you measure it, 60, 65% accuracy, as opposed to, like,
a random coin toss at 50% accuracy, and then you aggregate it, then I think it starts getting
pretty interesting. Right. So if you can with like a two to three X uplift on precision,
identify the addresses that end up in the top 1% of trading next week, that's pretty exciting.
So that is something we're looking at literally right now and we'll hopefully release in a matter
of weeks, not months, as Smart Money 2.0.
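Back-of-envelope arithmetic for that claim, as a sketch: if the positive class is "ends up in the top 1% of traders next week", a random pick has roughly 1% precision, so a 2-3x uplift on precision means roughly 2-3% of flagged addresses hit that top 1%. The numbers below are purely illustrative, not Nansen's results.

```python
# Illustrative math for a "2-3x uplift on precision" over a 1% base rate.

base_rate = 0.01          # fraction of all addresses in next week's top 1%
uplift = 2.5              # claimed 2-3x improvement over random selection
model_precision = base_rate * uplift

flagged = 10_000          # hypothetical number of addresses the model flags
expected_hits_random = flagged * base_rate      # ~100 top-1% addresses
expected_hits_model = flagged * model_precision  # ~250 top-1% addresses

print(expected_hits_random)
print(expected_hits_model)
```

Per address the signal is weak, which is why the aggregation step matters: a 2.5x concentration of winners across thousands of flagged addresses is a meaningfully better pool to follow than a random sample.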
Yeah, super interesting.
Looking forward to that for sure.
What are the challenge areas of trying to apply LLMs to blockchain data?
I mean, they have shortcomings, right?
Like, sometimes they hallucinate. So what breaks?
What do you need to pay special attention to so that it doesn't break?
It's a great question.
So let's consider taking a vanilla LLM and getting it to do on-chain related tasks.
Where does it break?
Well, firstly, the most obvious place it breaks is that it doesn't natively have the data, right?
So if you ask it about, you know, flows,
tokens, addresses, transactions. It just doesn't have it and it would have to go like on the web and
maybe if you're lucky, it would like find something about it. But it's likely going to be stale.
It's definitely not going to be real time. So that's the first part where it breaks. There are a few
other places it breaks to. If you wanted to sign a transaction or like make an actual transaction,
it doesn't natively have that. So you have to add that capability to it. And then you have to make sure that it can
do that reliably.
If I say, hey, buy $20 of Pengu, it needs to know that like Pengu is on Solana.
Here's the token address.
You know, here's the right amount of tokens to buy to match $20.
And then to execute it.
And those things an LLM cannot do natively.
You have to do, you know, tool usage somehow, whether it's through an MCP or a CLI or anything
like that.
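What that tool layer might look like, in the abstract: the model needs separate capabilities to resolve a ticker to a chain and token address, size the order in tokens, and route the trade. The sketch below is purely hypothetical, with made-up function names and a placeholder registry; it is not Nansen's actual tooling.

```python
# Hypothetical tool functions an agent could call to fulfill
# "buy $20 of Pengu". None of these names come from a real API.

def resolve_token(ticker: str) -> dict:
    # Stand-in for a canonical token registry lookup (chain + address).
    registry = {"PENGU": {"chain": "solana", "address": "<token-mint>"}}
    return registry[ticker.upper()]

def size_order(usd: float, price_usd: float) -> float:
    # Convert a dollar amount into a token quantity at the current price.
    return usd / price_usd

def buy(ticker: str, usd: float, price_usd: float) -> dict:
    token = resolve_token(ticker)
    qty = size_order(usd, price_usd)
    # A real implementation would route through a DEX aggregator for the
    # best quote and have the user's wallet sign the transaction.
    return {"chain": token["chain"], "token": token["address"], "qty": qty}
```

The point of the decomposition is reliability: each step is deterministic code the agent invokes, so the LLM's job shrinks to parsing intent and choosing tools rather than recalling token addresses from its weights.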
It also breaks down in some other ways.
Like, weirdly, it often doesn't have
a good understanding of significance.
So by significance I mean,
you know, if you say, hey,
go and find some alpha for me on chain,
look at the smart money flows, whatever,
it might come back and say, like, wow,
you know, this smart money address is buying, like,
$300 of this token.
And you're like, $300 is not that much money,
you know, that's not necessarily super interesting.
But it doesn't have like the common sense or the domain understanding to get that,
which sounds like a dumb, naive or trivial thing.
But if you just use it out of the box, it's going to make silly mistakes like that,
which makes it like less useful.
And then there's other things that are more generic like formatting in the responses.
Do you want it to be concise?
Do you want it to be very verbose and elaborate?
Mostly investors and traders want concise output.
So you have to like get that right.
And then another thing: when you invest, trade, and make decisions, most likely you're not going to be comfortable only looking at text.
And if you think about a lot of use cases for like other software or apps, you often want to see
something visual before you make a decision.
Like if you book a taxi, or do ride hailing through Grab here in Southeast Asia or Uber, you often want to see the map and get comfort that it's going to the right place and all these different things before you click the button.
And so as humans, I think we're very visual when it comes to decision making.
And LLMs are not natively very good, or sometimes they are not able to at all, visualize
the information.
So we have the concept of Nansen artifacts, which you can see in our mobile app occasionally, which are summoned at the right time. If you ask, hey, tell me about Lido tokenomics or something, it'll spawn a little token card at the top with the basic information about it, which just makes it feel a bit more delightful and also gives you more comfort in your decision making.
And later this year we're going to release the next version of Nansen, which is called Nansen 3. And in Nansen 3 we've kind of taken the concept of artifacts to the next level, where it's kind of like a science fiction user experience: you talk to the agent, and the agent, in sub-second time, spins up a visualization that is exactly what you want for that question.
And a lot of the magic there is that the artifacts are curated, right?
They're not like coded in React on the fly.
We have a library of the right artifacts and then we make sure that we can fine-tune the LLM to pick the right artifacts for the right questions.
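A toy sketch of the curated-artifact idea: a fixed library of visual templates plus a router that picks one per question. Here a keyword lookup stands in for the fine-tuned LLM described above, and all artifact names and fields are invented for illustration.

```python
# A curated artifact library: each artifact is a named visual template.
# In the approach described, a fine-tuned LLM picks the artifact; a keyword
# router stands in for that model here, purely for illustration.
ARTIFACTS = {
    "token_card": {"renders": ["price", "market_cap", "holders"]},
    "flow_chart": {"renders": ["inflows", "outflows"]},
    "wallet_table": {"renders": ["address", "pnl", "win_rate"]},
}

# Which keyword in a question maps to which curated artifact.
ROUTES = {
    "tokenomics": "token_card",
    "flows": "flow_chart",
    "wallets": "wallet_table",
}

def pick_artifact(question: str) -> str:
    """Return the id of the curated artifact to spawn alongside the answer.
    Because artifacts come from a fixed library, nothing is coded on the fly."""
    for keyword, artifact_id in ROUTES.items():
        if keyword in question.lower():
            return artifact_id
    return "token_card"  # safe default when no route matches

print(pick_artifact("Tell me about Lido tokenomics"))  # token_card
```

The design point is the same one made above: constraining the agent to a vetted library trades flexibility for speed and reliability.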
And so anyway, there's a bunch of different things. But I'd say those are some of the ways that a vanilla LLM falls short, and then you have to add a lot of capabilities on top of it.
Yeah, I hear that.
Seeing that you have to put guardrails on your agents: if there are literally thousands, tens of thousands of agents trading based on the same signals, what does that do to the market structure?
Yeah.
So first, I think it's going to be millions, if not billions, of agents that are trading in the next couple of years.
And I firmly believe that in like 2028 maybe, perhaps earlier,
the default way that you invest is going to be through agents.
You're not going to be picking individual tokens anymore.
And this is very analogous to how you're not writing lines of code anymore if you're an engineer. You orchestrate agentic engineering. And you create the build loop and the quality gates to make sure that you can really create software, but you're not actually writing the code.
And so I think it's totally analogous to what's going to happen with trading and investing.
But to your point, like, you can't just say, okay, start trading.
That's it.
I think you need some kind of strategic intent initially, right?
And so we talk about the concept of a trust ladder. Let's take self-driving cars as an analogy. Most people would not be comfortable just jumping right into a car from a brand they don't know, sitting in the back seat, and letting it drive, although admittedly I did that in Shenzhen, China quite recently. But most people would not be very comfortable doing that. And so you might sit in the driver's seat first. You might have your hand near the wheel. And then eventually you maybe make it to the back seat and you just let the car figure it out. And so I think it's similar: you will have a trust ladder. You have to earn the
user's trust by keeping them involved in the beginning. So I think in the beginning it's going to be
vibe trading where you're talking to the agent, but you actually pull the trigger for every trade.
That's what you do in the mobile app today that we have. You say, hey, you know, buy this token
and then it spawns the sort of modal that allows you to execute. And then you have to tap execute and do
a biometric scan for MFA to execute it.
But over time, people are going to be like, okay, I just like auto tap on execute because
I know the agent is going to be so good.
It doesn't hallucinate.
It finds the right quote.
It finds the right token.
So at that point, you might say, why don't I just like let it auto execute, you know,
for me?
And then after that, you might say, hey, actually, like, I don't feel like I need to micromanage
which tokens it's looking at.
I should just tell it my overall strategy.
Like, I want to buy, I don't know, tokenized commodities under these conditions, or I want to buy meme coins that have launched in the last 15 minutes, or whatever your strategy is. And then you just let the agent execute the strategy.
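The trust ladder described here can be sketched as a tiny state machine. The level names and the gating rule are illustrative assumptions, not Nansen's actual product states:

```python
from enum import Enum

class TrustLevel(Enum):
    """Rungs of the trust ladder, from most to least user involvement."""
    CONFIRM_EACH_TRADE = 1   # vibe trading: user taps execute + biometric MFA
    AUTO_EXECUTE = 2         # agent fills orders; user still picks the tokens
    STRATEGY_DELEGATED = 3   # user states a strategy; agent does the rest

def requires_user_tap(level: TrustLevel) -> bool:
    """Only the lowest rung of the ladder needs a per-trade confirmation."""
    return level is TrustLevel.CONFIRM_EACH_TRADE

print(requires_user_tap(TrustLevel.CONFIRM_EACH_TRADE))  # True
print(requires_user_tap(TrustLevel.STRATEGY_DELEGATED))  # False
```

A real product would presumably move users up the ladder gradually, as described, rather than exposing the levels directly.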
So to go back to your question, I think people will have different strategies, just like they take different actions when they look at data, right? And they will have different models, and they will use different tools, and they will have access to different elements of data.
And crypto is so large in terms of like the different corners of that market.
And it's just going to get larger as we get all assets on chain.
Commodities, real world assets, stocks, fixed income, all this stuff is coming on chain.
So the investment universe is just going to grow on chain.
So I think the probability that two agents will do exactly the same thing is actually pretty low, because there are so many parameters and variables that you can play with to get different outcomes.
And even LLMs are unpredictable. They have the notion of temperature, right? If you have high temperature they can do crazier things; if you have low temperature they're more predictable.
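For readers unfamiliar with temperature: it rescales the model's logits before sampling, so low values make picks near-deterministic and high values make them more varied. A self-contained sketch of the mechanism (a standard softmax sampler, not any particular model's internals):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample an index: divide logits by temperature, softmax, then draw.
    Low temperature sharpens the distribution (predictable picks);
    high temperature flattens it (more varied, 'crazier' picks)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                        # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e / total
        if r <= cum:
            return i
    return len(exps) - 1                   # guard against float rounding

rng = random.Random(42)
picks = [sample_with_temperature([2.0, 1.0, 0.1], 0.05, rng) for _ in range(20)]
print(picks.count(0))  # at T=0.05 the top logit wins essentially every time
```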
So that's generally my take. I think people are going to have different ways to use them, and because of all these different impact factors, and the fact that LLMs are not fully predictable, I think agents are actually going to do a bunch of different things on chain. And so that's also how we have to build our product.
We don't build a product such that
you and I have the exact
same user experience because
you have a different portfolio than I have
and you have a different intent, you have a
different strategy, you have different risk
tolerance, right? And all of these different
things. So
yeah, I think it's actually like extremely
unlikely that these agents are going to be doing the same
thing. At least
if you give it varied enough input, right? If I just go in and say, hi agent, nice to meet you, make me the most money, then obviously that's very undifferentiated.
Yeah, and I think another very exciting thing is the concept of co-creation of strategies, right? Like, what is your trading strategy, what is your investment strategy? You can tell that to the agent, and of course the first thing you would want the agent to do is to give you feedback on it. Not just
If you deployed the strategy three months ago or one month ago, would you have made money?
And so we have this concept that we call Nansen Gym. Sounds a little bit cheesy. But Nansen Gym is where the trading agents go to train. So the trading agents go to the Nansen Gym and they are given a simulated world. Well, it's actually not simulated. It's
a replay of all the on-chain history.
And the agent is unaware that it is a replay.
So the agent thinks this is happening right now.
How stressful for the agent. You can make it live through all of crypto's worst. Oh, FTX, you know, every day, kind of.
Oh, my God.
Yeah.
Oh, man, that's like Groundhog Day. That's like FTX live. Yeah, that's torture.
That's like a Nietzschean concept, isn't it? The eternal recurrence.
Yeah, that's funny. That sounds pretty dark, actually, when you think about it. But we will try to make sure that it doesn't have to live through FTX a million times.
But anyway, it goes back to the infrastructure and the data: you have to create an environment where you can, we call it time travel, right? Where you can time travel on chain, replay a certain section of history, place the agent in there, and then validate the agent. But do it really fast, right? You don't want to sit and wait for three months, obviously. So you speed it up, and you have to speed up the whole process of the agent living in this world.
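The time-travel replay idea can be sketched roughly like this. The `ChainEvent` type, the callback interface, and the speedup handling are all assumptions for illustration, not Nansen Gym's real API:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Tuple

@dataclass
class ChainEvent:
    timestamp: int   # unix seconds within the replayed slice of history
    payload: dict

def replay(events: Iterable[ChainEvent],
           agent: Callable[[ChainEvent], None],
           speedup: float = 1_000_000.0) -> Tuple[int, float]:
    """Feed historical on-chain events to `agent` in order, as if live.
    The agent only ever sees event timestamps, never the wall clock, so
    from its point of view the replay is the present. `speedup` compresses
    the simulated span: the returned wall_seconds is how long the replay
    would take if inter-event gaps were slept at 1/speedup scale."""
    count, sim_span, prev_ts = 0, 0, None
    for event in events:
        if prev_ts is not None:
            sim_span += event.timestamp - prev_ts
        agent(event)          # the agent observes/trades as if this were live
        prev_ts = event.timestamp
        count += 1
    return count, sim_span / speedup

# One simulated day compressed into one wall-clock second at speedup=86_400.
seen = []
count, wall = replay(
    [ChainEvent(0, {"tx": "a"}), ChainEvent(86_400, {"tx": "b"})],
    seen.append,
    speedup=86_400.0,
)
print(count, wall)  # 2 1.0
```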
And then, have you seen this Black Mirror episode where they do this dating thing? I'm spoiling the episode, but it's a dating program, and it turns out that the whole thing is just simulations that are run to match two people.
It's actually a fantastic episode.
It's an amazing Black Mirror episode.
It's probably one of my favorites.
And so it's like a little bit similar to that where you just do it over and over again
and you try to like optimize it and see that it actually works.
Right.
And so that capability, for a retail investor, is like a superpower, because historically it's only been Renaissance and, you know, these amazing hedge funds that had these kinds of capabilities, whereas the rest of us retail investors are like, yeah, this token has a cool name, I'll buy it, right? There's a lot of AI negativity in the world, but I'm really bullish on how AI can actually level the playing field and empower individuals a lot more. And you're kind of seeing that with OpenClaw, if you think about it. I think there are very strong parallels between open source AI and crypto. But without,
you know, going off on a complete tangent, I think the point here is that you have to have a
really strong environment where you can very quickly test these strategies through trading agents.
And I think that's going to make you a better retail investor at the end of the day. And so that's
very exciting for me. I have to admit, you know, there's a lot of institutional focus in crypto right now, like, oh yes, the TradFi funds are coming, hooray. Which to me feels like that moment in The Matrix where one of the guys backstabs the rest of the crew because he wants to go back into the Matrix. That's kind of what the TradFi wave feels like to me. I want to stay true to the cypherpunk, open source version of crypto. I think the opportunity we have is actually to empower individuals instead of making institutions do better. I'm not super excited about that, if I'm being totally honest. I think it's much more fun to
empower individuals and make sure that they're making better investment decisions.
How do you ensure that your product remains open to retail? Because institutions can just outspend retail every time. And if there's only so much alpha to be gleaned from on-chain data, wouldn't the logical business decision be to just make access more expensive, so it's more exclusive and you earn more, and your retail users basically become a second-level Nansen Gym for your agents?
Yeah, we've actually done
the exact opposite. You know, we used to be, I'm not going to say ridiculed, but some people were like, hey, Nansen is too expensive. It was $2,000 a month, which was super expensive. I mean, it depends how you think about it, but it was expensive for a retail user for sure.
But we've actually done the opposite.
Like we've reduced pricing to a flat $49 a month for the subscription tier.
That's the subscription tier.
Number one, for everyone.
There are no like plans above that.
It's just a $49 plan.
And you get everything you used to get in the $2,000 plan, by the way, and more.
That's number one.
And number two, we've actually made the product more open even for people who don't pay anything.
And so you can use Nansen. For example, Solana is fully free on Nansen. You can go on Nansen and see all of the labels we have for Solana. You can trade on Solana. You can even stake SOL. And it's fully free. And it's basically just subsidized through
staking revenues from Solana and through trading fees, like as people use our product. So I think
the business model here is actually pretty straightforward: if you have a lot of people trading well, you can get trading fees just like any other trading venue, right? And so in a way, I think that move makes the whole product more open, whereas if we just gatekept it, which arguably was the strategy in the past, then I think you're right: you have to keep raising the price and keep making it more exclusive. But now the approach we're taking is actually to open up everything more and more. And our command line interface, which everyone can try: if you have OpenClaw, if you have an AI agent that you're managing, you can give the Nansen CLI to it.
The CLI is open source.
Like you can even make contributions to the GitHub repository.
So we've actually done the opposite and gotten more open, whether it's the open source CLI, a more accessible free version of the product, or actually lower pricing.
So I'm actually pretty happy about that.
The AI doomerism has kind of died down. I mean, I think it had its moment and now it's become very quiet. I hope it's not because the AI killed those guys first. But are there any worries that you have with respect to AI and the survival of human civilization?
I mean, the short answer is yes, I do have some concerns.
I have many concerns.
At the same time, I do feel like the concerns get enough airtime, you know?
And like humans have a very strong cognitive bias, which is called loss aversion.
And so if you are averse to losses, you will tend to focus on that more.
And so the classic example is jobs, right? Loss aversion expressed toward AI is literally the fear of jobs going away. But at the same time,
and I don't mean to criticize humanity, I think it's just a fact that generally we have poor imagination.
Like let's say everyone is equipped with strong loss aversion.
Very few people are equipped with very great imagination.
And so the ability to imagine the jobs we're going to have is much harder
than to think about which jobs we think we're going to lose.
Right. So I think like everyone should
generally think about that.
The other thing is like the economic value
of an employee or a team member
who uses AI and is super productive
is much higher than it was before AI.
So like the economically rational thing to do
is to hire more people.
And so I see companies talking about AI as a justification for cutting people, but I think then they weren't very well run in the first place, or they didn't have the right people.
But on a more existential level, I mean, I think the risk of a total human wipeout is definitely real.
I'm not sure how I would rank the probability.
I think our best case scenario is to merge with machines.
At some point, you know, I think of AI as children of our minds.
And that is a beautiful and poetic thing to me.
But I'm biased.
I literally have a degree in AI, and that is what I'm excited about.
So I think most people would be better off thinking about
what are the opportunities that come with AI
and how should I lock in right now to skill-max
and learn as much as I can
because you're going to have a huge advantage,
I think as like a builder, as a team member, you know, as a human.
And that's what I would focus on.
I hear that.
So, AI has made me so much more productive. It's insane. But pushing back on the children metaphor: AI are like children, but children that are not necessarily aligned with you.
Yeah, I mean, your kids are also not necessarily aligned with you, right?
That's true.
But most people have somewhere between one and maybe five kids, while you can have millions of AI agents, right?
That's one.
And secondly, just from a biological evolution perspective, your kids are mostly like you, right? There are no step changes. Whereas with AIs, that is possible.
And I also find it curious how LLMs in particular spark fear, right? No one has a fear of AlphaFold taking over the world, right? It's always aimed at LLMs. So I think it's this uncanny valley thing: they are a lot like us, but not quite, and not containable. And yeah.
I'm afraid, Dave, right? It's the HAL 9000 feeling that you get.
Yeah.
So I think the other way to think about this, though, is game theory and the prisoner's dilemma.
And I would say that in the West,
you see a lot of these calls
to, like, data center moratorium
and, like, we need to regulate AI.
Yeah, I mean, you don't want that. Otherwise only the bad guys end up with the good AI, right?
Exactly.
Exactly.
And it's a shame that people don't recognize that logic. It's so obvious. It shouldn't even be a political thing, really. It should just be: look, it's inevitable, it's bound to happen. Even if you don't do AI, someone else will do AI. So I think the responsible thing to do, in a way, is to maximize our own research and focus in this area to get ahead of any potential bad actors.
Because why shouldn't AI also be able to be protectors of humanity?
Like, if we build them well and we have the right alignment, right? Then it becomes a bit more of an arms race where maybe you can keep things in check. But certainly, you know, over-regulating, or moratoriums, that kind of stuff is not going to help us. It's probably going to make things a lot worse, right?
So yeah, it is a very interesting area. I generally think that we have no choice but to accelerate. Whatever way you look at it, you just have to accelerate, and then try to make the best possible future based on that. And that's what I would advise people to do. And I think everyone should actually tinker and dabble with open source AI.
It's been extremely exciting to see how open source AI has gotten this massive upswing with OpenClaw.
And you're seeing a lot of fine-tuning of models as being done.
Distillation, which maybe is a little bit in the gray area.
But there's a lot of cool stuff happening in open source AI.
And there's a lot of interesting people to follow.
It feels a little bit like early DeFi. I see a lot of the same people I used to engage with during DeFi summer who are now deep in the weeds on OpenClaw or, you know, Pi agent or Hermes, and so on and so forth.
So my call to action to everyone listening would just be: don't be afraid. You have to do this anyway, so just lean into it and ride the wave.
Yeah.
Speaking of riding the wave, tell us what you're looking forward to with respect to Nansen over the next year or two.
Yeah.
So first of all, I want to see Nansen in every agentic stack, every agentic trading stack. So people who are hobbyists, tinkerers, using OpenClaw, Hermes, etc.
I want to see people use Nansen because it's going to make their agents so much better.
We have a lot of stuff. I mentioned like time traveling capabilities,
full feature parity with the core product in terms of what you can do with it,
more chains to support, even better liquidity through more aggregators.
So that's for the DIY folks and the tinkerers. And then I'm also very excited about Nansen 3, which is going to be more focused on the mass market and consumers who aren't necessarily deep down the rabbit hole on OpenClaw. Based on the early versions we have internally, I think there's no doubt it'll be the best product we've ever created at Nansen, maybe even the best product ever created in crypto, although maybe that sounds arrogant. But I think it's just going to be incredibly cool: the design of it, the user experience, the futuristic feel, but also the simplicity and beauty of the product. So that's another thing I'm extremely excited about. And yeah, I could go on, but agentic trading is overall the main thing we're focused on, whether it's for the DIY folks and tinkerers, or for the mass market consumers who don't want to tinker so much. They just want something that's great and works.
We will have really good offerings for both coming out this year.
Fantastic.
Where do we send people to find out more about Nansen?
Go to nansen.ai, our website. You can also install our CLI, for the technical folks, by typing npm install nansen-cli. And you can find us on X at nansen_ai.
Perfect.
Thank you so much for coming in again, Alex.
Thanks for having me.
