Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Nevermined: Payment System for the AI Agentic Economy - Don Gossen
Episode Date: February 25, 2025

Even though current LLM providers usually charge flat rates regardless of whether the subscriber is a 'power user' or not, as AI agents become the workhorses of the online economy, a dynamic AI-to-AI payments infrastructure will prove crucial for proper remuneration. Nevermined proposes a blockchain-agnostic, AI-agnostic credit system that each builder can use to implement custom pricing strategies for their users. As the next trillion transactions will be mostly performed by AI agents, a universal payment system would help streamline integrations and economic interactions.

Topics covered in this episode:
- Don's background
- Integrating payment systems in the agentic economy
- Marketplaces vs. payment systems
- AI agentic economy
- Fixed vs. variable pricing strategies
- License tokenization
- Settlement and payment for AI agents
- Estimating price and avoiding overcharges
- Nevermined's tech stack and integrations
- Value prop
- Nevermined metrics

Episode links:
- Don Gossen on X
- Nevermined on X

Sponsors:
- Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay, the world's first Decentralized Payment Network. Get started today at gnosis.io
- Chorus One: One of the largest node operators worldwide, trusted by 175,000+ accounts across more than 60 networks, Chorus One combines institutional-grade security with the highest yields at chorus.one

This episode is hosted by Friederike Ernst.
Transcript
What we believe is that we're actually witnessing the rise of a new consumer that's going to manifest as trillions of AI agents.
And in order to scale these systems, we're going to need to rethink and rebuild a big chunk of the payments infrastructure.
Being able to understand what's actually being used in that analytical pipeline, like, how much of a given data set?
How many times is an algorithm called?
What's like the computational cost of executing that algorithm in conjunction with that data set to, say, train a model or what have you?
That output then gets commercialized and everybody in that value chain is somehow rewarded.
Because without that, there's actually nothing to pass back upstream.
Something that needs to be addressed in the Web3 space is that there's oftentimes a trivial outlook on payments. And that's because, I personally believe, people conflate settlement with payments.
Welcome to Epicenter, the show which talks about the technologies, projects, and people driving decentralization in the blockchain revolution.
I'm Friederike Ernst, and today I'm speaking with Don Gossen, who is the CEO and co-founder of Nevermined.
So Nevermined, as in "has never been mined"? Because there are quite a few projects with very similar names, right? And Nevermined is positioning itself as the PayPal for AI-to-AI payments.
Before I talk with Don, these are our sponsors this week.
If you're looking to stake your crypto with confidence, look no further than Chorus One. More than 150,000 delegators, including institutions like BitGo, Pantera Capital, and Ledger, trust Chorus One with their assets. They support over 50 blockchains and are leaders in governance on networks like Cosmos, ensuring your stake is responsibly managed. Thanks to their advanced MEV research, you can also enjoy the highest staking rewards. You can stake directly from your preferred wallet, set up a white-label node, restake your assets on EigenLayer or Symbiotic, or use their SDK for multi-chain staking in your app. Learn more at chorus.one and start staking today.
This episode is proudly brought to you by Gnosis, a collective dedicated to advancing a decentralized future. Gnosis leads innovation with Circles, Gnosis Pay, and Metri, reshaping open banking and money. With Hashi and Gnosis VPN, they're building a more resilient, privacy-focused internet. If you're looking for an L1 to launch your project, Gnosis Chain offers the same development environment as Ethereum with lower transaction fees. It's supported by over 200,000 validators, making Gnosis Chain a reliable and credibly neutral foundation for your applications. GnosisDAO drives Gnosis governance, where every voice matters. Join the Gnosis community in the GnosisDAO forum today. Deploy on the EVM-compatible Gnosis Chain or secure the network with just one GNO and affordable hardware. Start your decentralization journey today at gnosis.io.
Don, thank you so much for coming on.
Thanks for having me.
Maybe we can start off by talking about you yourself. What's your background, and what brought you here?
Canadian transplant in Europe.
So I live in Lisbon now by way of Berlin, by way of London, by way of Tokyo, by way of Los Angeles.
So I've lived all over the world. I studied engineering at university, then went into commodities trading after graduating.
I was on the risk side.
So doing back then what was called statistical modeling,
which became machine learning and now has been co-opted by the AI branding.
But yeah, just basically augmenting internal credit histories with external credit scoring and figuring out which of our clients were deadbeats and which ones weren't. So which ones we could loan money to in order for them to hedge, and stuff like that.
So it was pretty boring to be honest.
And then I went into IT consulting for the better part of a decade and a half as a subject matter expert in data and analytics.
And that's what took me all over the world.
So I built large-scale data estates for ML purposes for some of the biggest companies on the planet. So I was at HSBC and L'Oréal and AXA and Mizuho and stuff like that.
And so, yeah, I've been in the machine learning space my entire career, so 20 years.
And then I added the crypto flavor in 2016.
So I got introduced to blockchain not as a system or platform or ecosystem for speculation and payments and settlement and stuff, but more on the side of provenance integrity. So blockchains are very elegant provenance machines.
And asset provenance is a really hard problem to contend with within the confines of large analytical estates, right?
You've got to answer four questions with high fidelity.
Where's the asset coming from?
Where is it going?
Who's using it and what are they doing with it?
Right.
And the assets can be like data sets, it can be algorithms, all kinds of stuff.
And if you can't answer one of those questions, it undermines the integrity of the output.
So they're quite critical questions to answer, but it just so happens that with contemporary software,
it's really hard to actually answer that stuff.
So what ultimately happens or usually happens is that you create these very bespoke one-off patchwork solutions that like cobble together information from all of your different sources and destinations to try and figure out the topology of what's going on.
Anyway, blockchains help plug into that and make it much more seamless in terms of understanding the answers to those four questions.
And so that's what kind of got me hooked initially, sort of within the grander scope of my career.
And so I've been at this crossroads of AI and Web3 for going on a decade now. Shortly after I went down the crypto rabbit hole, I co-founded a project in Berlin called Ocean Protocol, which was one of the first projects at this intersection of AI and Web3.
And I've just, you know, kept beating this drum ever since.
And now we've distilled the learnings and experiences and understanding of merging these two technologies into a hyper-focus on AI payments. I can get into why we're hyper-focused on this stuff.
But yeah, that's the background.
Yeah, absolutely.
So you talked about the provenance of data and this entire data economy that crucially hinges on it. How do payments actually fit into that picture?
It's a good question, and it took us quite a while to realize the gravity of the payments piece. What we were focused on for a long time was establishing the provenance component, so that we could do, by extension, the attribution piece. Right. So taking this holistic view: we want to build these analytical systems, which now take, let's say, the form factor of an AI agent, right?
We want to build these things in a decentralized landscape.
So what does that mean kind of holistically?
And it means being able to understand what's actually being used in that analytical pipeline,
how much of anything's being used, right?
Like how much of a given data set?
How many times is an algorithm called?
What's the computational cost of executing that algorithm in conjunction with that data set to, say, train a model or what have you?
And basically accounting for all of that, and then extrapolating or extending that into the attribution piece.
So, Friederike, you provide some contextual data,
I provide some training data,
somebody else provides the algorithm,
another third party provides the infrastructure
for bringing all of this together.
And when combined,
we create this inference or actionable insight
or whatever we want to call the output,
that output then gets commercialized
and everybody in that value chain is somehow rewarded, right?
Now, what we realized a few years ago
is actually the most critical piece
in that whole workflow is the end state.
It's the last mile.
It's commercializing that output,
enabling that thing to, you know, enabling payment for that inference or that actionable insight.
Because without that, there's actually nothing to pass back upstream.
There's really nothing, other than recognition: a nice pat on the back, "Hey, Friederike, thanks for providing that contextual data," some reputational reward, whatever. There's actually nothing.
If you can't capture the end state
utility,
there's nothing actually to
translate back upstream
to the different participants.
And so, in and amongst everything that we were building, there was that piece: the payment system, as part of the attribution component. And what we realized a couple of years ago was that that's actually the most important piece.
If we don't get that piece right, then we can't do all of the other stuff that we want to do from a provenance-integrity and attribution point of view. So we became hyper-focused on that one component within this whole workflow.
So Ocean, I mean, Trent's been on this podcast multiple times, so listeners will probably know that it's focused on the data economy and on having a marketplace for data sets. It seems to me that payments is just a very natural part of any marketplace. Why do you think it makes sense to pry these two things apart?
Because payment systems are complex and non-trivial, let's put it that way, right? As are marketplaces.
The other realization purely from a marketplace point of view is
marketplaces are difficult propositions
from a commercial and operational point of view.
They tend to not make a lot of money,
and therefore they're hard to persist.
It's hard to accrue enough revenue to keep the thing going.
So basically,
marketplaces are won in the margins
and the way that you win marketplaces
is through monopolization.
And that monopolization usually comes
from a very discrete focus on a specific domain
or a niche within a specific domain.
Right?
So the common marketplaces that are sort of presented are like Bloomberg, right? Or maybe Elsevier, for a more abstract one, in the research domain.
But they're very difficult to win and they require a significant amount of focus in order to actually make a successful marketplace.
You know, personally, I would say one of the learnings out of the last decade for me is that I'm pretty skeptical about general-purpose marketplaces. I think there are a lot more examples of those failing than there are of successes.
So all that aside, that's one part of the argument. And then the other is a realization that payments are quite complex in their manifestation, right?
Something that needs to be addressed in the Web3 space is that there's oftentimes a trivial outlook on payments. And that's because, I personally believe, people conflate settlement with payments and payment processing and that sort of thing.
And so just anecdotally, right?
If you don't want to take my word for the complexity of these things: there's a company called Metronome that does payments and billing services, predominantly for SaaS providers, right? And the CEO of that company and a few of the founders, they come out of Dropbox. What they recognized was that there was a discrete issue with the payments and billing functionality that Dropbox was creating. They had something to the effect of 60 engineers at Dropbox just supporting the payments, the price-setting infrastructure, and the billing at that company.
That's like a huge undertaking from an engineering standpoint, right?
The reason for this is operating across a bunch of divergent jurisdictions and stuff like that. So they have different price points for different localities; depending on the customer and the volume and all of this, prices would need to fluctuate.
And so they had a team of 60 engineers supporting this.
Their realization was, well, we're not going to be much different from our competitors.
We're not going to be much different from Databricks or, you know,
these other like SaaS providers for these services.
So why don't we take what we built here, extract it, and offer it as a service to multiple companies? And so it's that kind of application that we're looking at providing, but in this case specifically for agentic transactions, AI-to-AI transactions.
So I understand that pricing strategy can be arbitrarily complex. But I would assume most of these agents set their own prices, or that this is something that's pre-negotiated. Walk us through the complexities of payments, because, as someone who also works in payments, I have said time and time again that I think it makes sense to productionize things in Web3 first that have simple use cases. And I always say payments, in principle, is a fairly simple use case, if you compare it to other things that also warrant disruption, like social media and so on, because it's balances that go from one place to another. Ideally, they should be conserved.
But there's nuance to it, right?
Exactly.
So walk us through what makes it difficult.
I'm clearly speaking with a kindred spirit here, right? So this is the other kind of revelation. It's simple in the sense that it's easy to understand. It's not esoteric. And the infrastructure, the technology that we're using, actually makes sense to focus on payments, right? So in that very general sense, it's easy. You know what I mean?
So why is this important? Okay. So our view of the world is a particular one.
I think up until about six months ago, it was relatively unique.
Because most people back then didn't know what an AI agent is.
AI agents are now starting to emerge conceptually for some people.
I'm not going to say most people, but for those that are either in the AI space or adjacent to it, this has entered the lexicon, right? Like, people are getting familiar with what an AI agent is. Even for those in the AI space, concepts like agentic AI and mixture of experts were quite fringe, even within AI, up until relatively recently.
So our view of the world, what we believe, is that we're actually witnessing the rise of a new consumer that's going to manifest as trillions of AI agents. And in order to scale these systems, we're going to need to rethink and rebuild a big chunk of the payments infrastructure. You know, if you hold this vision of the world: we don't believe in the monolithic "one AI to rule them all," right, like the god AI. We believe in this concept of mixture of experts: there are going to be finely trained agents that will provide very discrete expertise and will be called upon when and where needed, right? And so, holding this view,
it kind of becomes relatively obvious that like actually we need to work on at a minimum
standardization or protocolization of this payments process, right? Because if you just think about
this scenario where you have ecosystems with effectively trillions of agents, all either, you know,
collaborating or in competition, but ultimately consuming, buying and selling from one another,
if each one has their own payments mechanism, not only do you have to negotiate the price for the good or service that the counterparty is providing, you also have to negotiate which payment system you're going to use, right?
But how is it different from non-AIs?
So if I make a deal with someone who is human, why do you need a payment system that's catered specifically to AIs?
Sure.
So it's intrinsically, like it's not that dissimilar.
It's the way that effectively the information gets packaged around the service that's
actually being provisioned.
So let me unpack that a little bit.
If you're familiar with AI and how this stuff works, and in particular an AI agent: our tech can be used for pricing models and stuff like that, but really we're looking at these analytical pipelines in the form factor of an AI agent, which is a compilation of different AI tools, right? So at a minimum, it's the ability to source an inference from more than one model, depending on the complexity. Operator does this, for instance. If you've used, say, 4o or o1 from OpenAI: you put in a prompt, and you and I can be completely divergent users. Under the hood, there's logic that routes the request to the model that will adequately handle the complexity of that request, right? So the very general, broad example that I provided, it's not use-case specific.
So, you and I are users of some third-party agent, right? And let's just say that that agent, within its architecture, is composed of the GPT series of models from OpenAI: GPT-3, GPT-3.5, GPT-4, GPT-4.5, and maybe 5.
I'm a simple user in this case.
You're a complex user, okay?
You and I can both interface with the same agent.
And it can sufficiently respond to our level of request. It does that by, like I said, calling on a different model or set of assets and AI services that will allow it to provide a sufficient level of inference or response.
So I submit my simple request to this agent.
That gets decomposed by the agent's back end and optimally routed to the model that will sufficiently handle that simple request: GPT-3, right? You, on the other hand, submit a multimodal request to the agent. That has to go to a model that can handle the multi-modality of your request. So in this case, GPT-4.
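As a rough sketch, that routing logic might look like the following Python. The model names, the length threshold, and the selection rules are illustrative assumptions, not OpenAI's or Nevermined's actual implementation:

```python
# Hypothetical complexity-based router: pick the cheapest model that can
# handle the request. Thresholds and model names are illustrative only.

def route_request(prompt: str, multimodal: bool = False) -> str:
    if multimodal:
        # Multimodal input needs a model that accepts more than text.
        return "gpt-4"
    if len(prompt) > 500:
        # Longer, more complex prompts go to a stronger text model.
        return "gpt-3.5"
    # Short, simple prompts are served by the cheapest model.
    return "gpt-3"
```

Under these assumptions, a short "What is 2+2?" prompt routes to the cheap model, while a multimodal request routes to GPT-4, mirroring the two users in the example.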
The cost difference between invoking GPT-3 and GPT-4 is an order of magnitude, if not more. And accounting for that, especially with most contemporary pricing solutions, is non-trivial. It's relatively complicated.
So Stripe, which is what a lot of people turn to when they go to commercialize their AIs and AI agents, has a SKU-based architecture, right? You price per SKU. It's set up to sell T-shirts on the internet. So I set the price of my small T-shirt; that price doesn't change from one day to the next. An AI agent, by contrast, has a variable cost depending on the complexity of the request that's served to it, as well as the tools it has at its disposal to respond to said request. So an agent can take in dynamic requests and respond to them in a variable fashion by invoking a variable set of services, which correspondingly have a variable set of costs.
So what we've built is a system of unit accounting, effectively an accounting module, that straps onto an agent's observability function and translates the metered cost of providing that inference into a settlement cost for the requester (for you and me), depending on the variable response and the invocation of the corresponding services on the back end.
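A minimal sketch of that unit-accounting idea, assuming token-based metering and made-up per-model rates (neither is Nevermined's actual scheme): the observability layer reports which services a request invoked, and the module sums them into one settlement cost.

```python
# Illustrative unit accounting: translate metered back-end usage into a
# settlement cost for the requester. Rates are assumed values in credits
# per 1,000 tokens, not real prices.
MODEL_RATES = {"gpt-3": 1, "gpt-4": 10}

def settlement_cost(invocations: list[tuple[str, int]]) -> int:
    """Sum the metered cost of every (model, tokens_used) call made
    behind a single request."""
    total = 0
    for model, tokens in invocations:
        total += MODEL_RATES[model] * tokens // 1000
    return total
```

The point of the sketch is that the per-request charge is computed from what was actually invoked, rather than from a fixed SKU price.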
That sounds very prescribed. So if I were to offer different AI models, for instance, I could come up with different pricing strategies, right? I could sell a flat rate for all my models. Or I could say, okay, I give you a flat rate until you hit a certain number of requests, and then you have to pay per request. There are different strategies. So how much of that do you impose on people, and how much can one actually tailor the pricing strategy?
So this is where, again, you're clearly well versed in the nuance of this. What we are trying to accommodate is as much variability in that price-setting mechanism as possible. So if you want to have a fixed-price subscription that may or may not rate-limit or time-limit a service, you can do that. If you want to go as granular as pure pay-to-play, where each access costs this, or each GPU cycle, for that matter, costs X, you can do that with this system. Part of the rationale is that we want to provide that flexibility. The other part is that we are in the process of discovering the dominant set of attributes and characteristics for these pricing mechanisms within the agentic landscape. Because the reality is this stuff is all quite new, and we don't know yet what the dominant system for costing and billing, wrapped up in some pricing component, is actually going to be. So we are trying to provide as much variability as possible. Basically, in a true decentralized fashion, we attempt to give that control to the user or the builder, as opposed to setting it for them up front.
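That spectrum of builder-chosen strategies could be sketched like this; the function names and parameters are hypothetical, purely to illustrate the range from fixed subscription to pure pay-to-play:

```python
# Hypothetical pricing strategies a builder might configure.

def flat_subscription(requests: int, rate_limit: int, price: float) -> float:
    """Fixed price, optionally rate-limited."""
    if requests > rate_limit:
        raise ValueError("rate limit exceeded")
    return price

def pay_per_use(requests: int, price_per_request: float) -> float:
    """Pure pay-to-play: every access is charged."""
    return requests * price_per_request

def tiered(requests: int, included: int, base: float, overage: float) -> float:
    """Flat rate up to a quota, then a per-request overage charge."""
    return base + max(0, requests - included) * overage
```

The design point is that the platform supplies the accounting hooks while the builder picks (or invents) the strategy, rather than being forced into one billing model.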
Okay.
And I understand that you give me variability in how I set my pricing strategy. But you also take care of settlement, right? So you also make sure that I actually get paid.
Right.
Yeah.
And how do you do that?
So how does the payment work?
Yeah.
So this is a good question too.
So we leverage a concept called license tokenization. We believe quite strongly in it. And again, going back to it: this is a function of a marketplace, right? Marketplaces are hard to win, especially as you start to sell assets that become more and more commoditized. The market's trying to push the price to zero, and then your revenue is generated at the margin, right? So having as much fidelity as possible on the actual operational cost means that you can eke out as much margin as possible. Recognizing that that's the set of maximal constraints we have to deal with here: we want to enable these agents to eke out even a slim margin and still remain functional from a business and operational point of view. That means setting a single price and just giving free-rein access doesn't really make a lot of sense.
It works right now for propositions that are, broadly speaking, toys.
And then more importantly, where there's not a lot of competition that's yet pushing that price point down.
But we are already seeing this manifest, right? OpenAI very clearly has a price-per-usage function. And then a DeepSeek comes along with an open-source model, and it's like, here you go, have it for free, right? The equilibrium between the two is somewhere in between, right? It's not free, but it's also probably not as price-gougy as some of the things that OpenAI is doing.
So anyway, in all this work that we've done, there's this recognition that understanding the MLOps, the observability piece of translating that metered cost into a settlement cost, is likely going to be important, especially as these AI agents and their services get commoditized. And so we looked around and said, okay, if we're taking the position that a traditional subscription-style model isn't the right one, what does look and feel right based on experience? And it's this concept of license tokenization. It has nothing to do with crypto. It's a traditional licensing scheme, but it differs from named-user licensing and concurrent-access licensing. In a named-user license, Friederike, you negotiate your usage of a platform and the underlying set of tools within that platform, right? A concurrent-access license would be: you and I are on a team, and together we negotiate our usage of said platform and its corresponding tools. It's a pretty laborious process in the grand scheme of things, or very rigid, right? One of the two: either everybody gets the same thing, or it takes a long time to negotiate what you get. The response to this is this concept called license tokenization, where the platform is tokenized and the tools that make up that platform have redemption criteria in those tokens. So you buy a thousand tokens, I buy ten thousand tokens. And that platform is made up of tools A, B, and C. A has a redemption criterion of 100 tokens; B, 1,000 tokens; C, 5,000 tokens.
It's like usage credits or something?
Yes, exactly, that's exactly it.
So it's this emergent licensing model. We've taken that and looked at agents. They are, again, this form factor of a platform: a compilation of a bunch of different tools. You can issue tokens, or in our case credits, for each of these agents or swarms of agents. And the tools and/or agents that that agent, or that swarm of agents, is composed of can have their own redemption criteria in those credits or tokens. So that's how we facilitate the legibility and the fine-grained component of the payment aspect.
And how do you settle it? Because you then have to transfer the credits, right?
Right.
So, in this scenario where you and I are users of this third party's set of credits, let's say we pay a dollar and each get a thousand credits. Under the hood of this agent, GPT-3 has a redemption criterion of 10 credits, GPT-3.5 of 100, and GPT-4 of 400, right? We both pay a dollar. I get 1,000 credits in my wallet; you get 1,000 credits in your wallet. We make these requests to this agent. My simple request goes to GPT-3; out of my pool of 1,000 credits, I get charged 10. Your multimodal request gets routed to GPT-4; you get charged 400 out of your pool of 1,000.
So that's how it works: that function, that accounting, that redemption, is a burn function on-chain.
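The worked example can be sketched as a toy credit wallet. The redemption criteria mirror the numbers above (10 / 100 / 400 credits), but the class itself is illustrative, not Nevermined's API; on-chain, the deduction would be a token burn rather than an attribute update.

```python
# Toy credit wallet mirroring the example: redemption deducts credits
# according to the invoked model's redemption criterion.
REDEMPTION = {"gpt-3": 10, "gpt-3.5": 100, "gpt-4": 400}

class CreditWallet:
    def __init__(self, credits: int):
        self.credits = credits

    def redeem(self, model: str) -> int:
        """Deduct (on-chain: burn) the model's redemption cost."""
        cost = REDEMPTION[model]
        if cost > self.credits:
            raise ValueError("insufficient credits")
        self.credits -= cost
        return cost
```

Under these assumptions, the simple user's wallet goes from 1,000 to 990 after a GPT-3 call, while the power user's goes from 1,000 to 600 after a GPT-4 call.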
Okay, so it's a burn function.
But then how does the AI that actually did the work receive the payment?
Good question.
So we have basically two forms of settlement.
One is the settlement that authorizes you to use the system: the payment of a dollar for that thousand credits. That's the first settlement. And in our case, we view this as an AI solution, or an AI-adjacent solution. So we don't distinguish between Web3 AI and Web2 AI; it's just AI. What we're trying to build is something that's general-purpose for AI, recognizing that there is a large swath, probably a majority, of the AI community that is not very conversant in Web3.
So one of the things that we've done is full account abstraction. You still get a wallet, right? And the agent still gets a wallet. But you can use socials to set it up. It's an MPC solution, and it's fully gasless, so there's no extraneous signature signing or stuff like that required. For anybody that's listening: we're paying for the gas right now, if there are questions around that. But anyway.
So in this case, in this scenario that I was describing, where I'm the simple user and you're the power user, I don't have any affinity towards Web3.
I don't have a wallet.
I just want to use this AI.
Okay, I use Nevermined. I go through this checkout. Part of the checkout process is that I register with the system.
That creates my wallet.
We've gone so far as to integrate Stripe.
So I'm now in the ecosystem.
I've registered and have created this MPC-based wallet that's attached to me.
If I don't know where to look,
I don't even know that it's a crypto wallet.
And then I can just take out my debit card
or credit card and pay a dollar
for these thousand credits.
Now, in the background, the builder that registered this agent, the third-party agent that you and I are using, has linked it to their bank account through a Stripe integration. What they've also done is link it to a wallet, because in this scenario the builder is going to take both fiat and crypto payments. So I pay my $1 with my debit card; that goes to the builder's bank account. You, on the other hand, are well versed in crypto. You have a wallet, you've got USDC, so you pay one USDC, and that goes into the builder's wallet in that case.
And so in this case, we're handling both.
But as you can see, there's two forms of settlement.
One is this overarching authentication and gatekeeping function for access to the agent. That gives you the set of credits or tokens, the usage asset, to start utilizing, and to authorize and authenticate the usage of that agent.
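The first of the two settlement forms could be sketched like this, with the credits-per-dollar rate and the payout routing as assumptions drawn from the example (card payments settle to the builder's bank account via Stripe, USDC to the builder's wallet); the second settlement is then the per-request credit redemption described earlier.

```python
# Sketch of the access settlement: a fiat or crypto payment is converted
# into usage credits, and the payout is routed by payment method.
# The exchange rate and routing rules are illustrative assumptions.
CREDITS_PER_DOLLAR = 1000

def access_settlement(amount: float, method: str) -> dict:
    if method not in ("card", "usdc"):
        raise ValueError("unsupported payment method")
    return {
        "credits_issued": int(amount * CREDITS_PER_DOLLAR),
        # Fiat routes to the builder's bank account via Stripe;
        # crypto goes straight to the builder's wallet.
        "payout_destination": "bank_account" if method == "card" else "wallet",
    }
```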
How do I, as a user, know that the algorithms I solicit are metered in a fair way? If there are different underlying functions that I could call, and they're all metered in some way, is there some rubber stamp of approval somewhere that says, okay, this is an acceptable pricing scheme? Because I could make a really obscure pricing scheme where I overcharge massively for certain parts, since it's somewhat opaque to the user, right?
Yeah.
So right now it's very finger-in-the-air and cottage industry. There's a lot of price discovery going on at the moment, a lot of estimation. We're actually working on something that we hope is going to help both with the price-setting piece, as well as with giving users an understanding of what the pricing for a given agent maybe should be. It's like a pricing engine, so that's going to come down the pipe relatively soon. It's something we've been working on conceptually for about six months, and over the last two months we've put pen to paper and POC'd it. Now it looks like we can actually do what we want to accomplish, addressing not the buy side but the sell side to start. Because the flip side of your question is: what do I price this at, right? So helping answer that question is where we're trying to get to first. And I think the knock-on effect of disclosing that sort of price-setting mechanism is that it will help those on the buy side also understand what their cost structure should be.
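The sell-side question — "what do I price this at?" — could, as one possible sketch, be answered by summing metered unit costs per request and adding a margin. This is purely illustrative; Nevermined's actual pricing engine hasn't shipped, and all numbers and names here are assumptions:

```python
# Illustrative sketch of what a sell-side "pricing engine" could compute,
# assuming the builder can meter underlying unit costs per request.
# Not Nevermined's actual engine; rates below are made-up examples.

def suggest_price(unit_costs: dict[str, float],
                  calls_per_request: dict[str, float],
                  margin: float = 0.3) -> float:
    """Estimate a per-request price: sum of metered costs, plus a margin."""
    cost = sum(unit_costs[k] * calls_per_request.get(k, 0) for k in unit_costs)
    return round(cost * (1 + margin), 6)

# e.g. a request that averages 2000 LLM tokens and 1 vector-DB query
price = suggest_price(
    unit_costs={"llm_token": 0.000002, "vector_query": 0.0001},
    calls_per_request={"llm_token": 2000, "vector_query": 1},
)
print(price)  # ~0.00533 USD per request
```

Publishing such a formula is also what would give the buy side the transparency the question above asks about.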
Maybe let's switch gears a little bit. All of this is built on blockchain infrastructure; walk us through what kind of stack it's built on and why you chose that stack.
So we're a dApp in the classical sense. We're chain agnostic, though we're an EVM-based solution. From a deployment point of view, we're on mainnet and Polygon and Arbitrum and Base and Gnosis and Celo, a bunch of EVM-based chains, L1s and L2s. The code base is Python and TypeScript. Where needed, we've got backends; what do we have, it's a Postgres database. It's relatively run-of-the-mill from an architecture point of view.
Okay, let's talk about the interoperability aspects here. Say I as a user come to your app: how do you determine which of these chains I buy my credits on and settle on, and so on? How is that determined? Because in principle, that's something the user probably doesn't care about, right?
No, I disagree with that statement when you're talking about AI Web3 builders, because they usually have a network that they want to default to.
Okay, but then let's talk about the people who consume your AI.
Well, they don't care.
Yeah, they don't care at all, right? So as someone who wants to consume, how do I decide which network to pay on?
In this case, you don't. The builder would... So, okay, here's where, from a crypto point of view, you run into friction. But again, the choice is up to the agent, or the builder, to decide which chain or chains the agent is anchored to, connected to. Usually it's one. The dominant chain at the moment is Base, so that's the default. From a consumption point of view,
you don't really care, other than if you are, in your case, this power user that is crypto native or savvy. Then you're in a condition where: ah, shit, I don't have any USDC in a wallet on Base. You want to use an agent that's anchored there, and you've got to pay one USDC for those thousand credits. Well, now you've got to bridge that. That's outside the scope of our operation. I think there are enough bridging tools out there that you could probably figure it out if you need to bridge.
Okay, so basically as a consumer, I decide what model I want to use, what agent I want to frequent, and then I just have to pay on the commensurate chain. Is that fair?
Yeah, exactly. That's the way it would work. Yes.
Okay.
So how does Nevermined currently integrate with other decentralized platforms or protocols? Because you primarily enable the payments here, and there's a lot of functionality beyond payment that has to come together to make this a good user experience, right? So how do you interoperate?
Yeah, so we have three levels of engagement. There's the SDK, which is the most robust; it provides the largest set of features for integration. If you're a pretty serious Web3 builder moonlighting as an AI developer, you might gravitate towards that. Those who gravitate more towards the AI side, who are less familiar with the full suite of capabilities from a blockchain point of view, are going to use the libraries we have on offer. So we've got the SDK, and then, on top, a more refined set of libraries, Python-based ones, because it's the dominant language for building AIs. And then on top of that, there's this new subset of builder, right, this non-technical or let's call it pseudo-technical builder. They can build; there are emergent tools, especially on the model side, where you can prompt-engineer a relatively sophisticated agent, and now there's tooling coming out that makes building agents even easier. For that demographic, we have an app, which is an even more refined set of functionality. So those are the three mechanisms, or means, of engaging with what we've built.
What's the value proposition you put forward to each of these groups? Why shouldn't they just buy credits with OpenAI or Claude, or use DeepSeek for free?
Okay, I would say this. Even in the case where all of these services cost nothing, and that's probably untenable long term unless they become public goods, which I think most of these companies will fight tooth and nail against, but, well, let's see how it plays out.
Barring that from happening, but even if it does occur, there's still the aggregate that these agents represent: any tuned expertise that can be captured and then subsequently deployed in these packaged agentic services, which in and of themselves can be priced and paid for. So going back to your question of which solution you'd gravitate towards: it's less about the pricing and payment mechanism and more about the level of functionality you want to have within your agent and/or your swarm.
So, for example, we have in our SDK the attribution function. Once payment has occurred, if you have a swarm of agents and that swarm is ultimately what's priced, you can actually redistribute funds within the commercial piece, the value capture piece, amongst the agents, proportionately to their contribution within that swarm. But that's super low level and really only going to be interesting to somebody that's been working on swarms for a long time, right? So that functionality in the SDK is not really being used at present.
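The attribution idea described here — splitting a swarm's revenue among member agents in proportion to their measured contribution — can be sketched in a few lines. The function name and contribution weights are illustrative assumptions, not the SDK's actual interface:

```python
# Minimal sketch of proportional attribution: after a swarm is paid as a
# unit, split the revenue among member agents by contribution weight.
# `attribute` and the agent names are illustrative, not a real SDK API.

def attribute(payment: int, contributions: dict[str, float]) -> dict[str, int]:
    """Split `payment` (in smallest currency units) proportionally.
    Rounding dust goes to the largest contributor so shares sum exactly."""
    total = sum(contributions.values())
    shares = {a: int(payment * c / total) for a, c in contributions.items()}
    top = max(contributions, key=contributions.get)
    shares[top] += payment - sum(shares.values())  # distribute rounding dust
    return shares

# e.g. 1 USDC (1,000,000 base units) split across a three-agent swarm
split = attribute(1_000_000, {"planner": 1.0, "researcher": 3.0, "writer": 6.0})
print(split)  # planner 100000, researcher 300000, writer 600000
```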
What most people are just trying to do is wrap their HTTP endpoint in some sort of gatekeeping functionality, with a payment mechanism attached as part of that gatekeeping. So there's a broad spectrum of requirements and demands. We're trying to cater to the simplest set of those demands initially, though we have built in some relatively complex functionality, just, I don't know, because it's interesting.
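The "wrap your HTTP endpoint in gatekeeping with a payment mechanism" pattern might look like the following decorator sketch, assuming an in-memory credit balance; everything here (the decorator, the ledger, the handler) is hypothetical, not a specific SDK's API:

```python
# Sketch of the gatekeeping pattern: check and burn credits before the
# agent handler runs; reject unfunded callers with HTTP 402 Payment Required.
# `paywalled`, `balances`, and `run_agent` are illustrative names only.

from functools import wraps

balances = {"alice": 1000}  # caller -> remaining credits

def paywalled(cost: int):
    def decorator(handler):
        @wraps(handler)
        def gate(caller: str, *args, **kwargs):
            if balances.get(caller, 0) < cost:
                return {"status": 402, "error": "payment required"}
            balances[caller] -= cost  # burn credits, then let the call through
            return handler(caller, *args, **kwargs)
        return gate
    return decorator

@paywalled(cost=5)
def run_agent(caller: str, prompt: str):
    return {"status": 200, "answer": f"echo: {prompt}"}

print(run_agent("alice", "hi"))    # succeeds, burns 5 credits
print(run_agent("mallory", "hi"))  # rejected with status 402
```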
Okay, so maybe let me reframe the question a little bit. With Nevermined, how is the AI data payments landscape becoming better for actual users, for consumers of data, or for providers of data and algorithms? How is it becoming better than what we currently have in this centralized model?
I'm going to answer this in a relatively flippant way: we don't actually care. What we're trying to offer and enable is a higher degree of fidelity on that transactional piece. There are two aspects to this. One is the accounting piece, right? Enabling that, and doing it in a way that's more dynamic than existing systems today. And then the other, and whether this actually matters over the long term, time will tell: these services that are being rendered discretely don't cost that much. They cost fractions of a fraction of a cent. Existing payment systems, fiat-based payment systems, cannot handle that discrete a mechanism. You can't get below a certain denomination of a currency, say one cent, right?
Why not? It just depends on how often you settle, right? If you have fractions of a cent, then...
But this is what I'm saying. From a very discrete point of view, if all I need is one action, just for the sake of argument one GPU cycle, I can't price that. I have to do what you just said: I have to aggregate it. And so the question becomes, and this is why I said the jury's still out on this piece, is that actually a requirement or not? Time will tell. But I do believe there will be applications, a use case, where you have discrete expertise such that, for that particular use case, for that set of requesters, a particular agent may only ever get called upon once, or fewer times than you can actually aggregate to the floor of that currency. And so what do you do in that case? That service is always free, or it has to be coupled with other services. So again, there's an element of speculation here on whether that's going to be a driver. I think it will be, having operated in this space for as long as I have. But whether or not it's a primary driver, I don't know. Time will tell.
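The aggregation constraint being debated here — fiat rails can't settle below some floor, so sub-cent charges must accumulate until they cross it — can be made concrete with a small meter. The floor, rates, and class name are illustrative assumptions:

```python
# Sketch of sub-cent aggregation: fiat settlement has a floor (say one
# cent), so fractional-cent charges accrue until a whole cent is owed.
# MicroMeter and the example rates are illustrative, not a real system.

FLOOR_CENTS = 1  # smallest settleable fiat unit, in cents

class MicroMeter:
    def __init__(self):
        self.accrued_cents = 0.0  # fractional cents owed but not yet settleable

    def charge(self, cents: float) -> int:
        """Accrue a sub-cent charge; return whole cents now ready to settle."""
        self.accrued_cents += cents
        settleable = int(self.accrued_cents // FLOOR_CENTS)
        self.accrued_cents -= settleable * FLOOR_CENTS
        return settleable

meter = MicroMeter()
# 2500 calls at 0.001 cents each: fiat can only settle after aggregation
settled = sum(meter.charge(0.001) for _ in range(2500))
print(settled, meter.accrued_cents)  # 2 whole cents settled, ~0.5 still pending
```

An agent called fewer times than it takes to reach the floor is exactly the case where this meter never pays out, which is the scenario the discussion above flags.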
But anyway, getting back to the original question: at the end of the day, why are we doing this? Why are we using crypto instead of doing this in a centralized, multi-tenanted, sharded database? We're driven by optionality, the drive to provide options. And that is derived from a desire, and this is where the crypto ethos shines through, to provide the option of censorship resistance. From our point of view, the payments piece is the most critical component of decentralization for AI agents. If Microsoft, OpenAI, Google, Facebook, DeepSeek, whoever, monopolize the means for these agents to pay and get paid, their ability to de-platform one agent is, in our opinion, an existential threat to all agents. If that centralizing entity can, with the flick of a switch, because, say, Microsoft deems an agent competitive with one of its lines of business, cut it off, then it doesn't matter if the rest of that agent is decentralized; at least in an economic context, it might as well not exist, because it can no longer transact. So providing the infrastructure, the means, the optionality for these agents to always pay and get paid: that's a driver for us.
I totally hear that, and I think I fully understand that argument from a defensive engineering point of view. But the question is: how do you get the flywheel going? I have no doubt that you'll find people willing to sell their services on your marketplace; that's usually the easy side of any business, right? But how do you make sure you find the buyers, that consumers actually come to your interface to buy services there rather than elsewhere?
Okay, so to be flippant again: we don't care. That's up to the agent, to provide a productive service that somebody or something actually wants to use. That's out of our sphere of influence. Now, generalizing, it is in our sphere of influence via selecting who we partner with. There are two things we need to make sure of. One, to your point: we need to partner with, and get using our solution, those agents that are productive. And the best way to guarantee that is by making it as seamless and simple as possible, both from an integration point of view and from a usability point of view. If we can reduce the friction so that ours is the easiest thing for agents to use to pay and get paid, and there is at least some kind of requirement for agents to pay and get paid, then the extrapolation is that we're at least going to be in the running for the product that gets used. So that's the way, you know, that's how we're going to market.
From a practical point of view, looking at multi-agent system builders, swarm builders in Web3 parlance, that's who we want to partner with. This is like the Crew AIs of the world, the AgentOps, the agencies on the Web2 side; the Virtuals, the Eliza OSes, and so on in the Web3 world. And then there's the additional step to that, because that's like B2B2B, or B2B2C, or B2B2AI: also helping with that step function alongside our partners, helping them try and attract agents that are doing something useful.
I will say this, I'm going to rant a little bit just to get this off my chest. What we need to do on the Web3 AI side of things is get out of our own fucking way and quit worrying about verifiability and attestations on these systems, because those are nice-to-haves. What we need is to build productive agents and swarms that do some kind of useful work. I also don't think DeFi, and agents being added to DeFi, is the massive unlock that a lot of our community thinks it is. But that's for another conversation. Anyway, I just want to say we need to get out of our own way. Instead of focusing purely on the infrastructure side and integrating zero knowledge or trusted execution environments or whatever, I think we'd be better served just trying to focus on building agents that do work, that, to your point, are going to have actual users.
Okay, then tell us about the usage of Nevermined right now. How many agent payments do you actually process on a daily basis, and how do you see that growing, or where do you see the main drivers of that growth?
Yeah. So at the moment there's been a bit of a downswing, and I think that's somewhat related to the downswing in enthusiasm in the market. At its peak, we were probably seeing a couple handfuls of transactions a day. In total, we've done somewhere in the neighborhood of 5,000 transactions. Again, we're not counting the burn piece; we're just talking about that initial purchase of those thousand credits, for example, right? It's relatively nominal. But I'm bullish. Those numbers really predate the AI agent meme taking hold, and so I'm bullish that as we see more output on the agentic AI side, this is obviously going to be more and more of a requirement. So now our business is the business of amplification, and getting this in front of as many AI agent builders as possible.
Cool.
So where can we send the AI agent builders to find out more about Nevermined?
We'd love to have you in our Discord. You can connect with us via our website, nevermined.io, N-E-V-E-R-M-I-N-E-D dot I-O. You can also follow us on X; we're at Nevermined underscore io, N-E-V-E-R-M-I-N-E-D underscore I-O. We'd be happy to have everybody that's building, and all of your agents, be a part of our ecosystem.
Fantastic. Thank you so much for coming on, Don.
Thanks for having me.
