Unchained - Ben Fielding: Gensyn, Decentralized AI, and the Prediction Market That Settles Itself: Bits + Bips
Episode Date: May 3, 2026. A prediction market trades on outcomes. An information market trades on knowledge. Fielding makes the case for the latter. --- Heads up! If you haven’t yet, be sure to subscribe to Bits + Bips, since the show will migrate there in a few weeks. Follow us on Apple Podcasts, YouTube, Spotify, X, Unchained and wherever you get your podcasts. ---- What if the biggest constraint on AI is not compute or data, but trust? Ben Fielding, CEO and co-founder of Gensyn, spent years as a machine learning researcher before concluding that decentralized hardware was the only path to true scale, and that blockchain was the only technology that could make machines trust each other without human intermediaries. With the launch of Delphi, Gensyn's onchain information market built on an OP Stack L2, Fielding puts his theory to the test while making the case that prediction markets have been asking the wrong question all along, and that the long tail of markets no one has thought to create yet is where the real opportunity lies. Host: Steve Ehrlich, Head of Research at SharpLink and Host of Bits + Bips: The Interview - https://x.com/Steven_Ehrlich Guest: Ben Fielding, CEO & Co-Founder, Gensyn @BenFielding Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Hi, everyone. Welcome to another episode of Bits and Bips, The Interview.
My name is Steve Ehrlich. I'm the head of research at Sharplink and also your host.
We've got a really exciting show for you today.
But before we do, just a couple of quick disclaimers,
nothing that you see or hear on the show should be construed as financial or investment advice.
For full disclosures, please see unchainedcrypto.com/bitsandbips.
And before we dive in, let's just take a very brief moment to hear from
some of the sponsors who make this show possible.
If you've been loving Bits and Bips,
don't forget that the show is transitioning to its own feeds
on X, YouTube, and your favorite podcast player.
If you're not already subscribed to Bits and Bips on its own channels,
go there now and hit that subscribe button,
so you can keep up with our twice weekly live streams
and Macro Meets Crypto breakdowns.
Bits and Bips will only be on the Unchained feed for a few more weeks.
So subscribe today to be ready for launch.
You can get all the links at unchainedcrypto.com/bitsandbips.
All right. So today's show really deals with the intersection of two very, very hot topics right now in crypto and the broader tech landscape: prediction markets and AI.
And we've got the perfect guest to discuss all of it. Ben Fielding, the CEO and co-founder of Gensyn. So welcome, Ben.
Thank you. Great to be here.
Yeah. Great to have you here too. And we're bringing you on as you are launching, I think, your first,
I guess, quote-unquote mainnet application for this decentralized AI platform that you built.
It's a prediction market built on top of an OP Stack layer two on Ethereum.
And I want to get into kind of all of that.
But before we do, since this is your first time on the show, I'd love for you to just briefly introduce yourself and your company.
Absolutely. Cool. So yeah, as you said, I'm Ben, co-founder and CEO of Gensyn.
My background originally before Gensyn was actually in machine learning research.
So I started my PhD back 11 years ago now in 2015, just as deep learning was starting to become viable on real devices.
So it was a few years after something called AlexNet was released, which proved that you could accelerate deep neural networks on GPUs for computer vision tasks.
I joined a computer vision department initially to do applied machine learning research.
So take these models that could be used in these computer vision contexts and apply them to new contexts.
I looked at things like diabetic retinopathy.
You take images of the retina and you detect whether somebody has diabetes, or skin cancer detection within images of lesions and things like that.
And so I was applying these models to those situations.
But something became clear to me really quickly, which was that machine learning back then,
these deep learning models, were being handcrafted and they didn't need to be handcrafted.
You could actually automate the process of generating a deep neural network for a specific task.
And very quickly, I focused my entire research on that problem.
It's an area called neural architecture search and it involves optimizing the structure of a deep neural network while you train that network.
That now is a kind of area called autoML.
It's automating the process of creating machine learning models.
There's one key piece to that, which really kind of stuck out to me and
led into what we do with Gensyn, which is that the techniques I used for my research then
are embarrassingly parallel. So I used evolutionary algorithms to optimize the structure of
these deep neural networks. Those evolutionary algorithms can be run on very distributed devices
because they don't depend on each other as they train and improve. Machine learning, for the most part,
in the world as it currently stands is done in a vertically scaled way. It's not embarrassingly
parallel. I can't split up the training of a deep neural network right now.
across many different devices because it just does not train in that way.
But I've seen and used the techniques.
I did research on the techniques to do it in an embarrassingly parallel way.
And so the research I was doing, I could do that over GPUs in people's homes if I wanted to.
It was very, very possible to distribute it like that.
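As a rough illustration of what "embarrassingly parallel" means here, the sketch below scores candidate network architectures independently across a worker pool; because no candidate's evaluation depends on any other, the same loop could be spread over GPUs in people's homes. The fitness function, mutation rule, and population sizes are toy placeholders, not Gensyn's actual method.

```python
# Hypothetical sketch of an embarrassingly parallel evolutionary architecture search.
# Each candidate is scored independently, so evaluation can be farmed out to any
# pool of devices with no coordination between them while they work.
import random
from multiprocessing import Pool

def evaluate(architecture):
    """Stand-in fitness function; in practice this would train and validate a network."""
    depth, width = architecture
    # Toy objective standing in for validation accuracy.
    return -abs(depth - 8) - abs(width - 64) / 16

def mutate(architecture):
    """Small random change to a parent architecture."""
    depth, width = architecture
    return (max(1, depth + random.choice([-1, 0, 1])),
            max(8, width + random.choice([-16, 0, 16])))

if __name__ == "__main__":
    population = [(random.randint(2, 16), random.choice([16, 32, 64, 128]))
                  for _ in range(32)]
    with Pool() as pool:  # each worker could just as well be a separate machine
        for _ in range(10):
            scores = pool.map(evaluate, population)  # independent evaluations, no cross-talk
            ranked = [arch for _, arch in sorted(zip(scores, population), reverse=True)]
            parents = ranked[:8]
            population = parents + [mutate(random.choice(parents)) for _ in range(24)]
    print("best architecture (depth, width):", ranked[0])
```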
And what became clear to me through my research was, A, I was constrained in the resources I
could access.
I just couldn't get access to enough to do as big kind of training runs as I wanted to.
And B, these techniques would scale in a way that the centralized techniques
don't scale. And so if I could get access to GPUs all over the planet, I could scale my techniques.
You can't necessarily scale the way that models are trained in the standard world. And so this
showed to me that we had the tech to scale it. We just needed to be able to access very, very distributed
hardware in order to scale machine learning. And so that was my kind of an early experience of how
machine learning could scale from my research. The only other thing is I previously founded a company
before Gensyn as well in the data privacy space.
And so I've kind of seen what happens when people give their information to companies,
their data to companies, and those companies use that data behind the scenes.
I tried to build technology that would alleviate some of those kind of discrepancies
between individuals and companies.
And that was my company prior to Gensyn.
Yeah, and that's a big issue too.
I think when we were talking during the pre-interview, I mentioned,
I think my very first ever crypto-related article was about how blockchain-based identity
services sort of aligned with what was to become the GDPR, the General Data Protection Regulation,
in Europe. So you've obviously got a very technical background. I want to just try to synthesize
what you said for the lay person who is frankly me. I mean, basically it seems that what you
figured out is that there's a way to efficiently run like sort of like AI learning, machine learning
type algorithms on a distributed set of devices that do not have to be like highly specific
like $10,000 computers or higher.
But it's something that is more generally usable.
Like, that's sort of what it sounds like you said you've unlocked through your
research.
In my research for specific tasks, yes, you can make them embarrassingly parallel in that way.
Underneath that, I essentially, or we
as a company, have a deep belief that machine learning needs a horizontal scalability moment.
And so this is what the kind of centralized technology has had multiple times.
The most clear example of this I can give is probably MapReduce from Google.
And so when they were scaling the page rank algorithm, initially they scaled that by just getting
more and more expensive bigger servers and just running the same algorithm on those servers,
until they hit a point where they couldn't scale like that anymore.
It didn't make sense.
It was diminishing returns from building a more powerful server: the cooling costs, the ability to cool something like that, actually having the power to power it, et cetera.
They hit these problems.
And what Google did then was invent something called MapReduce, which took the same problem that PageRank was solving.
And it converted that problem into something that was solvable across many different devices.
And so the difference is, vertical scaling just says, hey, just stack more compute on top and make this thing faster.
Horizontal scaling says, actually, split it up over 10 different devices and then you can keep scaling.
And MapReduce did that and unlocked a whole new level of scale.
Our belief is that machine learning is ready for its MapReduce moment.
It needs to move from vertical scaling, which proved, hey, we can make a model that's effective, and ChatGPT showed that, etc.
But we're hitting the scaling limits on the vertical approach.
We now need to say we want to solve the same problem, but we'll do it with a different set of techniques in a different way horizontally.
And Gensyn builds the infrastructure to allow that to happen.
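As a rough analogy in code for the vertical-versus-horizontal point, the sketch below splits a word count into independent map tasks and combines them with a reduce step. It is a toy stand-in for the idea, not Google's MapReduce and not Gensyn's infrastructure.

```python
# Toy map/reduce: split the job into independent chunks (map), run them on
# separate workers, then combine the partial results (reduce).
from collections import Counter
from functools import reduce
from multiprocessing import Pool

def map_count(chunk_of_lines):
    """Map step: count words in one chunk, independently of every other chunk."""
    counts = Counter()
    for line in chunk_of_lines:
        counts.update(line.split())
    return counts

def merge(total, partial):
    """Reduce step: fold a partial result into the running total."""
    total.update(partial)
    return total

if __name__ == "__main__":
    lines = ["the quick brown fox", "the lazy dog", "the quick dog"] * 1000
    chunks = [lines[i::4] for i in range(4)]  # horizontal split across 4 workers
    with Pool(4) as pool:
        partials = pool.map(map_count, chunks)
    totals = reduce(merge, partials, Counter())
    print(totals.most_common(3))
```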
Gotcha.
Okay.
So here's the big meta question that I would ask you.
And frankly,
it's one I ask anyone that comes to me with an AI blockchain solution.
Like,
Like,
how do they really fit together?
Because,
especially back in my days as a full-time journalist,
I was inundated with pitches about this,
but it just seems like a lot of marketing talk
and trying to join two hype trains together.
I'm intrigued by essentially what you're saying is that there's a way to scale AI in an efficient way across a distributed network of quote unquote like cheap or commoditized devices.
So maybe you have a better answer than I've heard in the past.
But like how do you see these two technologies working together?
Sure.
Great question.
And I would just highlight that I, and Gensyn as a company, came to blockchain and crypto, I wouldn't say begrudgingly, but
from a very sort of technical perspective.
So the reason we even discovered this technology was from research papers
describing how you solve disputes between two technical devices
without using the human owners of those devices.
We had to solve that problem.
And we discovered in kind of the depths of papers,
actually you can solve this through these kind of set of algorithms,
these consensus algorithms,
and those are already implemented in blockchain systems.
And so those consensus algorithms require
financial stake within them. And so a lot of the time, it doesn't make sense to kind of build another
one from scratch because at that point you have to bootstrap the security of the system, all of the
financial security, etc. And so, build another consensus algorithm, you mean? Yeah, exactly. If you want to
deploy a new kind of consensus algorithm, you're going to have to bootstrap the entire economics of
that system, which in the early days, Gensyn actually, we focused on doing that. But what we realized as
we matured was actually this class of existing crypto networks provides that security.
What we have to do is use it as a tool.
And so we need to be able to take that security and translate it down into our technology.
And then we have access to it.
And that's what we've done now as an OP Stack L2.
What we build is the machine learning side technology to take those gnarly machine learning
tasks, those operations performed on GPUs, and translate them into something that a blockchain
can agree on using its consensus system,
and then we get the security of the blockchain, etc.
The reason we do that underneath absolutely everything
is that we have to establish programmatic trust.
And so if you're connecting up any resources for machine learning,
you can connect those resources up in the current world.
You could have a GPU.
I could have a model.
I could push my model to you,
and you and I could sign a contract together,
which says that if you don't run my model,
I'll sue you or we'll go through the courts,
and then I'll recover the money.
That is too slow.
It's too inefficient.
It's too expensive.
For the system that we described, you have to have that dispute settle out instantaneously through technology.
And that's what smart contracts give us.
They give us the way to define a very specific kind of exchange that's happening and then execute the verification and arbitration of that exchange essentially instantaneously.
There are other benefits to it as well.
There's programmatic payments, et cetera, micro payments.
We need all of these things to happen if we're going to scale this technology to the point where it can do what machine learning,
needs it to do, which is operate far faster than any human could possibly operate.
Bottlenecking machine learning on humans doesn't make any sense.
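To make the "programmatic trust" idea concrete, here is a minimal, purely illustrative escrow state machine: payment and stake are locked up front, and release or slashing follows automatically from a verification check rather than from a contract enforced through the courts. The class and names are hypothetical, not Gensyn's contracts.

```python
# Illustrative only: the shape of programmatic trust as a tiny escrow state machine.
# A real version would live in a smart contract; these names are made up.
from dataclasses import dataclass

@dataclass
class ComputeEscrow:
    payment: float          # buyer's locked payment for the job
    provider_stake: float   # provider's collateral, slashable on a failed check

    def settle(self, claimed_output, verify) -> str:
        """Verification and arbitration happen in code, not in a courtroom."""
        if verify(claimed_output):
            return f"release {self.payment} to provider, return stake {self.provider_stake}"
        return f"refund {self.payment} to buyer, slash stake {self.provider_stake}"

# Toy usage: the verifier just checks a known answer; in practice it would be the
# reproducible-execution check described later in the conversation.
escrow = ComputeEscrow(payment=10.0, provider_stake=5.0)
print(escrow.settle(claimed_output=42, verify=lambda out: out == 42))
print(escrow.settle(claimed_output=41, verify=lambda out: out == 42))
```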
Exactly.
So, I mean, you point out in some of your documentation, and these aren't just unique to you,
but there's a few key primitives that AI needs.
You sort of need a way for machines or agents to sort of identify themselves and maintain,
I guess, a persistent identity.
They have to have the means to engage in peer-to-peer communication and, I guess, even
transactions, both messages and value, and then there needs to be that computational trust layer.
Which of those do you think is the hardest to solve?
Or do you feel like all three of those components at this point are mature enough that you can
really start to grow?
Sure.
Yeah.
So I think the way we describe it is that you need three specific areas of primitives, as you
mentioned, you need identification, you need communication, and you need verification.
Identification and communication exist in some form already. So you can use an existing blockchain,
you can use existing standards like ERC-8004 to have identification of machine learning models.
You could also just implement your own. It's not too difficult. There's nothing the world has
rallied around as the standard, but ultimately a wallet address is an identity within
the crypto world.
And if you can allow a machine learning model
to control that address, then it has an identity.
There's various intricacies as to how you do that.
How do you make sure that only the model controls it,
et cetera, but those are all solvable with technology.
And so that part exists already.
It's not kind of heavily used yet, but it exists.
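A minimal sketch of the "wallet address as identity" point, assuming the common eth-account Python package rather than anything Gensyn-specific: an agent holds a keypair, signs its claims, and any counterparty can recover and check the signing address.

```python
# Sketch only (pip install eth-account): a model-controlled keypair acting as an identity.
from eth_account import Account
from eth_account.messages import encode_defunct

agent_key = Account.create()                 # in practice the key would live with the model
print("agent identity (address):", agent_key.address)

claim = "model-v1 predicts: rain in California this month with p=0.72"
signed = agent_key.sign_message(encode_defunct(text=claim))

# Anyone can verify which identity made the claim.
recovered = Account.recover_message(encode_defunct(text=claim), signature=signed.signature)
print("claim signed by agent:", recovered == agent_key.address)
```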
On the communication side, you need ways for models
to communicate peer to peer.
So join a network without having to have a central server,
communicate with the other models, et cetera.
There are some things that exist that allow you to do this,
The big high profile one is something called HiveMind, which we've used heavily.
We highly respect HiveMind as a technology.
We know the creators pretty well.
That technology we used in RL Swarm in the past, which is one of our applications,
but there are various aspects to that which can be improved, which we've improved upon,
but ultimately the ability to communicate peer to peer already exists.
The final piece of verification is the bit that just did not exist.
And so if you allow models to interact with each other, communicate with each other
identify themselves to each other, they can start trying to train
together, they can start transacting things, but they cannot trust each other. And that's the big problem.
Imagine if crypto existed but it didn't have trust between the addresses in the system. The whole thing would just fall apart. It wouldn't work. Trust is the absolute key.
So that's what we've been focused on building for the past few years: an ability to take a machine learning model
execution and verify it at the consensus of the nodes themselves. That verification takes two forms. One is verifying that this computation did
actually happen in this way, up to the other individuals.
And then the final piece is arbitrating a dispute.
So if somebody says, no, it didn't happen in this way, how do we get final, final, final
ground truth on that?
And that's the bit that was really, really hard.
And we've solved it by building a reproducible execution environment
that allows machine learning to be done on any device and compared in a bitwise identical way.
So you can narrow any computation down to exactly what is different and settle that at
the consensus of the blockchain nodes themselves.
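A hedged sketch of the dispute-narrowing idea being described: if both parties commit to a hash for every step of a deterministic computation, a binary search finds the first step where their claims diverge, and only that one step has to be re-checked at consensus. The step function and numbers below are invented for illustration; this shows the general technique, not Gensyn's actual protocol.

```python
# Toy bisection over bitwise-comparable execution traces.
import hashlib

def run_step(state: int, step: int, buggy: bool) -> int:
    """Deterministic toy 'training step'; the dishonest party diverges from step 600 on."""
    out = (state * 31 + step) % 1_000_003
    return out + 1 if (buggy and step >= 600) else out

def trace_hashes(steps: int, buggy: bool) -> list:
    state, chained, hashes = 0, b"", []
    for step in range(steps):
        state = run_step(state, step, buggy)
        chained = hashlib.sha256(chained + str(state).encode()).digest()  # chained, so divergence persists
        hashes.append(chained.hex())
    return hashes

honest = trace_hashes(1000, buggy=False)
cheater = trace_hashes(1000, buggy=True)

lo, hi = 0, 1000
while lo < hi:                       # binary search for the first differing step
    mid = (lo + hi) // 2
    if honest[mid] == cheater[mid]:
        lo = mid + 1
    else:
        hi = mid
print("first diverging step:", lo)   # only this single step needs to be re-executed and judged
```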
Yeah, and we're going to get into that much more in a little bit
when we start discussing your prediction market
and how it might adjudicate disputes, especially
if the outcomes could be a little gray.
But two more just, I guess, quick ones about just AI and blockchain
before we get into all of that.
Question one, I'm sure you get it all the time.
You're building this.
Meanwhile, the OpenAIs, the Microsofts, the Googles, the Anthropics,
I mean, they're raising money at trillion-dollar valuation, spending hundreds of billions of dollars to build data centers all over the world.
I mean, data centers is probably not even the right word.
It should be like cities or continents or something like that.
How do you see yourselves competing when it really does seem like you're in the middle of a fundraising arms race?
Sure.
Really good question.
Ultimately, it comes down to scale.
So what we view those companies as doing is equivalent to what
you see, for example, AOL doing in the early internet era.
It's building up a walled garden as big as you can possibly make it
with as strong a kind of technology as you can possibly make it to convince users to join
so that you can then continue to make money from those users once you've got them into your
walled garden.
And that's really effective in the early days of the technology.
It's a very good way of proving this technology works and scaling it to do value extraction
essentially.
We saw it happen in the early internet era.
We saw it happen in the social media era where exactly the same thing happened.
Realistically, between those companies, like if you look at the social media companies now,
there is very, very little difference between what those products are going to actually do.
The thing is entirely commoditized.
It's just the users that they manage to kind of bring into their ecosystem and hold that,
and they are able to make money from that.
In terms of innovation and technology, though,
once those companies get up to that monopoly position,
they no longer are incentivized to continue innovating on the technology.
They're incentivized to maintain their hold on their users.
We don't want that to happen for machine learning.
We think that is a kind of, if you think of machine learning as trying to achieve the greatest
scale, the most effect it can possibly have as a technology, we think that walled garden
approach falls short.
It ultimately stops progress at a certain point while that company tries to extract value
from those users.
We just don't think that should happen.
There's an example. In social media, we didn't manage to do this. We didn't manage to make an open alternative. Many people tried, but we're beholden to these companies. In the early internet infrastructure, we did manage to do this. And I think if you look at the internet and what happened, how it allowed so many things to flourish across the world, so many completely open things, we want that for machine learning as a technology. We can achieve that by building this as an open system where anyone can participate. And in doing that, we get internet-level scale, not like Meta or Facebook
level scale. That's not as open and accessible as it possibly could be compared to the internet
running on like every device in the world. And so from a machine learning perspective, we think
those companies will continue to make enormous amounts of money. They'll create walled gardens. They'll
make money off their users. They'll make great products with good UXs, but they won't scale as far as a truly
open technology can scale. And that's what we build. I could go into a little bit more on the kind of
machine learning side with the concept of a world model, but I'll maybe hold
that in case we go down that path later.
Yeah, let's get into the prediction market because, I mean, that's obviously a very hot topic
these days, especially with Polymarket looking to get back into the U.S.
I guess it was, what, last week, either last week or the week before, where I think a U.S.
soldier was arrested because he apparently bet on the successful capture of Venezuela
and President Nicolás Maduro.
And we're coming up on election season, which is going to drive a lot of attention.
So obviously, from that perspective, launching a prediction market makes a lot of sense.
That said, you could have done anything.
Why did you choose to launch a prediction market as your first sort of mainstream application?
Sure.
So what I would say is what we have launched with Delphi is something called an information market.
And there's a crucial difference with the information market concept compared to prediction markets.
So the reason we've launched an information market is because it trades a resource which is required for machine learning.
So underneath everything within machine learning, you need three resources.
You need compute, so you need access to GPUs to train a machine learning model.
You need data, which is just raw data from the world.
You just deploy a sensor in the world and it kind of takes analog data,
like a camera starts converting analog data into digital data, the image itself,
or a microphone converts audio in the world into
digital audio so that you can then process it technologically.
And then finally, you need information.
And information is this, it's kind of fuzzy what it is.
Our definition is, information is what you get if you apply intelligence to raw data.
And so typically intelligence means human intelligence.
More and more now it doesn't.
It means machine intelligence as well.
But ultimately it means taking some intelligence, so a human, for example, looking at an
image and categorizing that image or labeling something in that image or segmenting
out an area of that image. That is creating information. Machine learning models need all three
of those things to exist, to improve. And crucially, machine learning models only generate information.
That's the only thing they can do. And so they take raw data and using the value that they have
in their parameters and weights, they generate information from them. And so what Delphi information
markets do is allow machine learning models to trade the one thing that they create. They can now,
through the information markets, start to take the information that they have.
And if they see a price delta that makes sense for them,
they can trade that information within those markets.
So they could do that within prediction markets,
if the markets existed. That's fine.
But the second piece of it is models and anyone in the world can now create a market.
That's the difference with these information markets.
In creating a market, you're essentially posing a question to the world
and you're paying for somebody to answer that question.
So you're buying information.
So the information market is crucially a bi-directional.
That's a difference to prediction markets.
Prediction markets are just anyone's centralized company can say,
I made a bunch of prediction markets.
Come and predict in them.
Sure, the prediction is the main thing there.
In information markets, it's actually bi-directional.
Posing a question says, I'm buying information on this question.
I'm incentivizing people who have information to trade
and therefore give me the information.
And I could go to an existing market and I can trade the information I have
if it's more valuable than the information other people have.
In that way, we've created this bidirectional market for information itself.
That becomes the flywheel for machine learning improvement.
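A deliberately simplified sketch of that bidirectional structure: posing a question seeds a market with liquidity (buying information), and trades move the implied answer (selling information). The class, method names, and pricing rule below are illustrative assumptions, not Delphi's actual interfaces.

```python
# Toy information market: a question plus seed liquidity, and size-weighted trades
# that nudge a running probability estimate toward each trader's belief.
from dataclasses import dataclass, field

@dataclass
class InformationMarket:
    question: str
    creator: str
    liquidity: float                       # creator's seed: payment for an answer
    trades: list = field(default_factory=list)
    price_yes: float = 0.5                 # crude running estimate of the answer

    def trade(self, trader: str, believes_yes: bool, size: float):
        """A trade moves the price toward the trader's belief, weighted by its size."""
        target = 1.0 if believes_yes else 0.0
        weight = size / (size + self.liquidity + sum(t[2] for t in self.trades))
        self.price_yes += weight * (target - self.price_yes)
        self.trades.append((trader, believes_yes, size))

market = InformationMarket(question="Will the Main St intersection reopen this month?",
                           creator="shop_owner", liquidity=100.0)
market.trade("local_resident", believes_yes=True, size=20.0)
market.trade("weather_model_7", believes_yes=True, size=80.0)
print(f"{market.question}  ->  implied P(yes) = {market.price_yes:.2f}")
```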
Gotcha.
Okay.
So, I mean, I just want to get a little more, bring this, I guess, a little more down to the ground level.
Would it be accurate to kind of characterize this information market as sort of a hybrid between a prediction market and, I guess, some sort of bidirectional, peer-to-peer Google, in a sense, where people
can source information? And again, the flywheel effect is
that people learn from each other.
The model learns and becomes more efficient on where to route the right types of
queries and pricing and all sorts of things like that.
And I imagine there'd be some sort of reputational system for which information,
which actors are the most accurate or the most responsive, to incentivize use and that type
of stuff.
Am I interpreting this correctly?
Exactly, yeah. And you've actually, you've touched on the piece in that that we feel really strongly about, which is, as you mentioned, these markets, A, they just allow the trade of information. That's the kind of purely financial use case. But B, they also create this queryable model of the world through the markets themselves. And so this is the combination of two concepts of a world model, essentially. The free market concept from like Hayek and the Austrian School of Economics, that if you have free markets over as many things as possible,
those markets start to represent the state of the world because all beliefs are traded in those markets.
And so you get this like economic model of the world within the markets, which I think a lot of people listening are probably familiar with that kind of like theory of economics.
The same kind of world model concepts exists within machine learning, which is this idea that if you make a model big enough, it can represent the world within its parameters.
And then you can ask it questions.
The crucial thing behind Gensyn as a kind of network and as a set of technology is
that we combine those two things.
We say that through the information markets,
we create the economic world model
and through the ability for machine learning models
to access all the resources they need
to train and improve on our network.
We also create the machine learning world model,
but both of those things work in concert
to incentivize the creation of the model.
And so that's where you say,
well, OpenAI could maybe try and create
the machine learning world model themselves.
What if they could make a better one?
Well, we say they can make that up to a certain scale,
but they'll never make it to the scale
that true open free market incentivization can make it.
Because we can go into every single corner of the world
through completely open creation of information markets
to the point where the incentive exists for somebody in any corner of the world
to trade any information, no matter how small it is.
OpenAI isn't incentivized to gather that information.
They're only incentivized to gather the big information.
So, ultimately, this spiders out and becomes larger
than anything they could possibly build.
Yeah, I mean, it seems like OpenAI or, I mean,
it's not just them,
I mean, Perplexity, Anthropic, all of them.
I mean, they cut deals with news organizations and databases so they can just ingest all of it.
I mean, it's the same thing that Meta is doing by having their employees use their, whatever their LLM is called, so it can learn too.
And just ingest all of it to spit it out, whereas you're basically doing or it seems like you're trying to do a full court press on anything everywhere and it may not have the same depth as some of the stuff that Open AI and those guys are doing, but it's much broader.
Totally.
I would say as well, yeah, if you think about that, so those large companies, they have to go and they have to make deals with everybody with the information to make that happen.
We take the opposite approach.
We say we will build an economic system where those companies are incentivized or even individuals in those companies.
To trade what they already have or what they already know.
Exactly.
Okay.
So here, what was my question?
Let me see.
I'd like to, maybe, could you give one or two
representative examples of what this looks like, perhaps one for a prediction market and one for
someone trying to query a piece of data that would appear on your network?
Sure. So I guess an example that I use relatively commonly, and this is, it's not the cleanest
example, but it works reasonably well. I'll give an example right now.
So I'm sat in front of a window, and in front of me is an intersection,
and cars are going through it, et cetera.
If that intersection had construction on it, then the intersection would be shut down and
cars would route around it, etc. It would affect local businesses. There would be various issues
from that. And a lot of people would be very interested in knowing when that construction is
going to end and that intersection opens up again. And the current way to do that would be a human,
if you owned a shop like next to the intersection or something, you'd have to go and try and find
that information. So you'd start Googling. Maybe you'd try and find like the local
authorities website and see if they say where it is, when it's ending. Maybe you would go on the
street and you would ask someone, hey, when is this finishing? Do you know? And like, you'd try and go
and get that information. Wikipedia could have that information theoretically because that's where
most of the time humans put information about the world. But it doesn't; it's not that current.
It can't have all of that information because it's meant to be a more canonical source and it's
curated by humans, etc. In this world of completely open information markets,
that shop owner could actually just create a market.
They could say, when will the construction on this intersection finish?
That then incentivizes everybody else in the world if they have information that they believe is accurate about that to trade that information.
That information becomes public knowledge when it's traded.
And so that shop owner now has access to all of the public information about this thing traded by the people in the world.
We know that information markets or like the prediction markets in the form that many of them exist right now,
they incentivize information aggregation in that way and are very effective at it from economic research.
Information markets allowing people to pose the questions themselves extend that further and allow
anyone to query it. So that's fine. You could do that just with markets, but that's just getting
information from humans. And so you're relying on a human knowing what's going on. Machine learning
models now are obviously very, very powerful. They're able to ingest enormous amounts of data,
translate it into information, but it has to be done on the instruction of a human right now.
So in that kind of purely human information markets world, maybe a human would go and use a
machine learning model, they'd get the outputs of the model and then they go and trade that.
We say, you don't need that intermediate step.
Actually, with programmatic access to these markets from machine learning models, the models
themselves can see the market, they can see that there are data sources available to inform
themselves about the market, they can go to those data sources, they can gather the data,
they can process it into information, and then they can trade that information immediately.
What that actually creates is an immediate, live answer to any question, incentivized by machine learning
models, but not one model. In the current world, you would just ask ChatGPT and see what
ChatGPT says. In this world, you're asking every single model that exists in the world to
answer it. And they answer it, staking their belief by how much they trade in that market.
So if a model is completely certain, it's incentivized to trade a huge amount of information,
which gives you a really strong answer in the market. And so you're actually eliciting information
from humans and machines via information markets live,
and everyone is incentivized to trade the most current and up-to-date information
and collect that information.
Okay, so let's step a little deeper into the prediction market
because that is what has been launched.
I just have a bunch of questions that I'm sure a lot of people are asking themselves.
For one, what do you see as the real market opportunity,
or what's out there in a world where Polymarket and Kalshi are the two
dominant players, and I don't know if calling it a duopoly is accurate, but they're
the two big fish. And I mean, Robinhood's getting into the game. I know some of the big banks
might be getting involved. So where do you see the market opportunity for your platform?
Yeah, good question. Ultimately, I would say the end goal of all of those prediction markets is
probably very similar. It's to take the kind of trading value that people currently put through
like the financial markets and pose it in a much more targeted way onto these prediction markets themselves.
So actual individual events and things. The way I phrase that is the ultimate end state of
prediction markets or in my view actually information markets making them bidirectional is a new
UX, a new user experience over the financial markets themselves, which opens up access to
the financial markets to many, many, many more people than currently have access. It also allows
the financial markets to extend to many different areas that right now they can, but it's very
expensive for them to do that. So to explain what I mean, when you, for example, if you're running
a company and you need to hedge a certain risk that you have in the world, you right now can go
and ask a broker to construct you a basket of options to hedge that risk. And that's an expensive
process, but you can do that. The average person can't do that. They can't hedge
a risk in that way. It doesn't make sense for them to go and pay to have that be created.
But an information market actually gives them the ability to do that. I use the example sometimes
of like orange farmers in Los Angeles, but in California. An orange farmer in California sees that
might be a drought coming. If they're a very large company, they can hedge against that drought
by buying a set of options which represent the opposite of that drought. And therefore,
they've essentially insured themselves via the financial markets. You're a small scale orange
farmer. You're not doing that. You're not going to like JP Morgan and asking,
for this like basket of options to be constructed.
But if there was an information market over that,
you actually could hedge.
If it was just an information market that said,
is it going to rain this month in California?
You are able to buy a position in that market,
which actually hedges your current risk.
And that gives you access to the benefits of the financial markets
in a way that you couldn't otherwise have.
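Back-of-the-envelope arithmetic for that hedging idea, with made-up numbers: a farmer who loses money in a drought buys "no rain" shares so that the market payout offsets the loss either way.

```python
# Hypothetical numbers only: hedging a drought with $1 'no rain' shares.
crop_loss_if_drought = 10_000.0      # assumed loss if it does not rain this month
price_no_rain = 0.40                 # assumed market price of a $1 'no rain' share
shares = crop_loss_if_drought        # buy enough $1 shares to cover the loss
cost = shares * price_no_rain        # 4,000 paid up front

for it_rains in (True, False):
    payout = 0.0 if it_rains else shares          # shares pay $1 each if no rain
    farm_result = 0.0 if it_rains else -crop_loss_if_drought
    print(f"rains={it_rains}:  hedge P&L = {payout - cost:+,.0f},  "
          f"net outcome = {farm_result + payout - cost:+,.0f}")
# Either way the downside is capped at the 4,000 hedge cost instead of a 10,000 crop loss.
```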
And so I think information markets,
particularly with the creation of markets,
the prediction markets don't have,
allow you to get to the maximum scale you possibly can
on becoming a new UX
over the financial markets.
When I say UX rather than alternative,
what I mean is when information markets get to a certain scale,
the financial markets are incentivized to come in
and operate within the information markets.
So, for example, if you are sitting there constructing these baskets of options
and you see the liquidity in the market around,
is it going to rain in California this month, get to a certain level,
you're actually incentivized to put those options into that market
because they make sense as a counterparty trade within the market.
So instead of having to go as the orange farmer and ask for the options,
now you can actually just pose the market and the options will come to you.
And that's why it's a better UX over the financial system.
So I think ultimately most prediction markets, most people getting into prediction markets,
are trying to create that, but they're doing it in the wrong way in our opinion.
Because they're doing prediction markets, which means they have to make every market.
If you have information markets, you scale much further,
because you allow any individual to pose the market that they think is interesting,
and viable. And the markets are essentially selected by the volume that goes through them.
So if a market doesn't get any volume, it's not a very interesting question for the world.
If it gets a large amount of volume, it is an interesting question for the world.
Polymarket, Kalshi, etc., are currently deciding themselves what is an interesting market for the
world, which is why you see huge volume go through the sports markets, huge volume go through
the presidential election markets, etc. And if I was to predict what will happen,
those companies will fight over those flagship markets,
and they will try to be the person
that gets the most volume through the presidential election market, etc.
Gensyn, with Delphi, is not interested in that.
We don't care about these big flagship markets.
We care about the information market concept,
which is the long tail of markets that do not exist right now
on prediction market platforms,
because they can be created on Delphi.
Okay.
I just want to make sure I understand everything you just laid out there
because it is an interesting point.
So your contention is, like Polymarket, Kalshi, whoever else,
they're sort of, I mean, I think at least on Polymarket,
I don't know if Kalshi's the same way,
almost anyone can actually launch a market themselves.
But if nothing else, the liquidity gets concentrated in whatever is on the homepage,
be it sports, be it some sort of big political event, elections, whatever.
And then all the liquidity concentrates on those
markets, and then at some point there's an outcome,
disputed or not disputed, and it's the binary outcome.
People win, people lose.
What you're saying is with your market and when you,
I guess when you're trying to define a prediction market,
it's not like necessarily betting on what's going to happen,
but more using your rainfall example.
If I'm someone trying to perhaps put a bunch of options out there to let a farmer
hedge, I would query data from whoever has it, like different farmers or meteorologists around the
world to get a sense of what the likely rainfall is going to be. I get that information. I pay for
that information through your platform. And then I can choose like how I want to structure the options,
how I want to price them, et cetera. And then I guess the other bidirectional side of your market is that
the farmer or whoever, or even just a trader, would then find them and be able to purchase
them, et cetera. Am I capturing that accurately? It's actually much simpler than that. So in the current
system, like take Polymarket, for example, users cannot create markets. Only Polymarket can create markets.
Okay. And so users can suggest a market and Polymarket will decide whether or not they think they
should make that market on their platform, but users can't create them, which means no one can ask a question.
This is why prediction markets are very different from information markets. They are not
bidirectional. You cannot pose a question to them. You could only go in and answer questions.
So in the Jensen or in the Delphi information markets, that farmer can pose their question.
And so they can create the market that says, will it rain in California this month?
By doing that, they're essentially putting up a, in many ways, like a bounty for the rest of the world.
Because if there's somebody sat at home somewhere who sees that market and they say, actually, I know, I can go to all the meteorological sites, I can get the information together, I can be 99% certain about whether it's going to rain.
And if this market only has 80% certainty,
there's an edge for me there.
I can go and trade that information.
And that isn't unintended.
That's the entire intention of information markets
because that farmer might not have the skills and knowledge to do that.
They might not want to do that.
The opportunity cost of their time is higher; it's better spent going out
and doing the orange farming.
And so what they do is they put up the market,
they put initial liquidity into it,
and they're therefore paying somebody in the world
to do that information collation work,
come and trade in the market.
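The "edge" in that example is just expected value, sketched here with the 99% and 80% figures from the conversation: if you believe the probability is 0.99 and the market prices a $1 YES share at 0.80, each share is worth buying, and trading on it is exactly the bounty mechanism at work.

```python
# Quick expected-value arithmetic on the edge described above.
p_belief = 0.99          # the trader's confidence after reading the meteorological sites
market_price = 0.80      # what the market currently charges for a $1 YES share
ev_per_share = p_belief * 1.0 - market_price   # expected payout minus cost
print(f"expected profit per $1 YES share: {ev_per_share:.2f}")   # 0.19
```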
The bit where it involves machine learning is saying
it's not just humans who are going to do that.
Actually, in our view, machines are far better at doing that than humans are.
And so gradually, these information markets start with human participation,
but very, very quickly, it will predominantly be machine participation,
because the machines will get access to as much data as possible.
They'll scrape it faster than humans can.
They'll come to an intelligent decision faster,
and they'll trade it in the markets instantly.
And so you go from this world where the farmer puts up the market,
someone at home, like, reads all the meteorological
sites, maybe the next day they go and trade a position because they think they've got a good
answer. In the true information market future, the farmer will put up the market and within
milliseconds, machine learning models will have scraped all of the information they need to have
certainty and they'll have put it into that market and it will instantly resolve. And so it goes
from days, hours, minutes, seconds of humans getting this data to milliseconds. And eventually it becomes
high frequency trading, but it's trading of information, not trading of pure finances anymore.
That's the true scale here.
It can't happen in prediction markets because they're centralized.
They create the questions.
How do you prevent this from being misused?
I mean, someone putting out a query saying,
hey, where is this politician's child going to be?
Where are they going to college?
Or what is the itinerary of, I don't know, some dignitary?
If this is to be decentralized,
how do you make sure that doesn't happen?
Yeah, that's a really good question.
Ultimately, the regulation of things like that, in our view, is done at the application level. So if you create an application which allows people to trade the location of politicians' children, for example, we as a society agree that that is not a good thing that should exist, and so we regulate that and say you should not create an application that does that. And we have all of our social governance tools that exist in the world right now to do that. So we would use existing laws to say that shouldn't exist. The same things exist for media that we
don't think should be allowed to be distributed, for example. In those worlds, if somebody made a site
that just distributes that illegal material, we have laws right now that stop that from happening.
If you look at Delphi and the information markets underneath, they are raw technology itself.
It's like saying that you don't regulate, for example, encryption to stop certain downstream
applications from being created, you just go directly after the downstream applications.
And you say, encryption is highly valuable for banking, etc.
We don't want to undermine it itself because it's too useful for the world.
But when encryption is used for bad purposes, we will go after the applications that use it.
And the same thing applies to Delphi information markets.
And so right now with Delphi, there is one user interface to Delphi, which is the one that we built for people to use.
Any number of user interfaces can exist, though.
Delphi realistically is a set of contracts that exist on chain, which allow these markets to exist in a purely
technical sense. And then there's a front end that allows you to trade those markets.
When we run the front end as a company, we will censor what we need to. We will go in and we
already have abuse filters. Those kick in immediately. They delist markets from the front end that
shouldn't be there, etc. All of that should exist. We strongly believe what shouldn't exist is
censorship of the technology itself. An easy way of kind of comparing Delphi to Polymarket or
Kalshi in this respect is that Delphi looks like the Uniswap alternative, whereas Polymarket and
Kalshi are like Coinbase, which might kind of resonate with your listeners.
Like Coinbase decides what goes on the platform.
They are regulated to say what gets traded on their platform because they are the ones making
the determination.
If they list something that's illegal, it's on them because they've listed it.
In the uniswap example, anyone can create these pools.
Anyone can go in and put an asset on there because it's fundamental technology.
And we agree that that should exist.
The same thing exists for Delphi information markets.
So anyone can on chain create these markets,
but when it comes to front ends which give access to the world for those markets,
those front ends should be regulated by standard social governance mechanisms.
If somebody created a front end which only listed assassination markets,
that should be shut down.
That should not exist because that's what the social governance of the world says.
If we decided something different, then we should do it a different way.
We're not trying to kind of do anything against what our standard social governance
systems say; we just think they should do it through the standard way they do, which is at the application
level.
I'm sure a lot's going to still come from that, but I appreciate your answer on the question.
We're almost out of time, but I just want to hit on one or two other things.
This week, you also launched your token.
I think the ticker is AI.
And I'd love for you to just very briefly walk us through sort of the token economics of it and
like how you see it.
And yeah, I mean, how you plan
to use it to drive liquidity on your platform. It seems really an interesting challenge because,
again, like you're looking to service almost everyone, especially like the long tail. And that can be
particularly challenging. Absolutely. Yeah. So the AI token fundamentally is the core utility
token of the Gensyn network. And so it can be used for many things within the Gensyn network.
And I'll touch on what those will be in the future after I've kind of explained what it is right now.
Initially, right now, the AI token takes a fee from the Delphi information markets.
So there's a fee in those information markets.
The vast majority of that fee goes to the market creator.
So the person who posed the question, because this is totally decentralized market
creation, anyone can create one.
The incentive to create one is that you take fees on the volume.
In contrast to, like, Polymarket, for example, they create the markets, they take the fees
on the markets, they are the market creator and the platform itself.
In Delphi, the platform is decentralized and the market creator is anyone.
And so the market creator should get the vast majority of the fees.
A small amount of those fees goes back to the protocol.
Obviously, the protocol needs to continue to exist.
It's not free as a system.
Most of that fee that goes to the protocol is used to automatically purchase the AI token and then burn it,
which creates deflationary pressure on the token, and it puts that value back through into the token itself, which represents the protocol.
And so you see this from other platforms; Uniswap, for example, has a buy and burn.
It took them a very long time to implement it, but it's kind of seen as the way these
platforms become sustainable into the future.
We knew that that needed to be in there from day one.
You need to prove that this makes economic sense and this actually has a fee model that works
and we put it in from the very, very beginning of Delphi.
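An illustrative fee flow with assumed percentages (the actual splits and fee rates are not stated in this conversation): most of the trading fee goes to the market creator, a slice goes to the protocol, and the protocol's slice buys and burns the token.

```python
# Hypothetical numbers throughout; only the shape of the flow matches the description.
volume = 1_000_000.0        # assumed monthly volume through one market
fee_rate = 0.01             # assumed 1% fee on volume
creator_share = 0.80        # assumed split: 80% creator / 20% protocol
token_price = 0.50          # assumed AI token price at the time of the buyback

fees = volume * fee_rate
to_creator = fees * creator_share
to_protocol = fees - to_creator
tokens_burned = to_protocol / token_price       # buy-and-burn step

print(f"fees {fees:,.0f}: creator {to_creator:,.0f}, protocol {to_protocol:,.0f}, "
      f"tokens bought and burned {tokens_burned:,.0f}")
```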
What that means is in the information market concept that we talked about, where that becomes
the world model by incentivizing humans and machines to trade information, there is a small
fee on all of the trade of information, which goes back to the AI token and continues to build
the totally open system, which allows those information markets to exist. And so that becomes
all encompassing. It scales with the progress of machine learning. It scales with the progress of
machine intelligence, autonomy of machines, et cetera. There's no cap on where it can scale,
as long as it continues to incentivize scaling of machine learning. So it's a very kind of core
part of the AI token thesis. Separately to that, the Gensyn network doesn't allow just information
trading. It allows trading over lots of things like machine learning compute, machine learning data,
etc. And when the trading over those resources happens, it will happen purely by machines most of the
time. Those machines will use a native currency and that currency is the AI token. So for right now,
the utility is purely in the revenue generation from Delphi and that in that way, that revenue
generation scales with machine learning theoretically infinitely. And then when it goes in the future,
when we ship more of that kind of autonomous access to resources, the AI token will be used to transact
over the resources. Machines will need it to pay for compute, to pay for data, et cetera, in order
to trade in the information markets. Two quick ones. And then we need to wrap. One, how do you
prevent, like, wash trading and basically people manipulating the markets to farm fees? But also,
is it natural to anticipate some sort of airdrop in the future?
Sure.
So in terms of wash trading to generate fees, the system is economically balanced.
And so if somebody was trading in that way, they wouldn't be gaining any money.
They're not creating free money from anywhere.
So the markets within Delphi are AMMs that exist on chain.
None of the value in those markets comes from anywhere else.
There's no inflationary token rewards or anything like that.
We are very strong believers that if you're going,
to create a system like this, it needs to be economically balanced from day one. You can't have
this thing which is way too common in crypto, which is you create a black box. That black box
emits more money than goes into the black box because of some complex thing inside, which actually
is just inflationary rewards. That isn't a viable system. That cannot continue to exist. You can
potentially do that in the very short term to bootstrap the box, but far too many crypto systems
are just the black box dependent on the rewards.
There are no rewards in Delphi markets right now.
The value that goes in is the value that comes out minus the fees,
and that system needs to continue to be economically viable
with absolutely no rewards being inserted into it.
And so right now, wash trading is not going to gain you anything.
Maybe you get the fees off it,
but you're not creating money from anywhere, because you cannot.
You're not able to do that,
and you're not able to exploit another user, because the system is purely kind of economically designed.
You're just trading with other users that are also trading.
It's a pure free market system.
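A sketch of why wash trading a balanced AMM only bleeds fees: a round trip through a generic constant-product pool (not necessarily Delphi's actual market maker) returns less than was put in, with the difference being fees plus slippage rather than free money.

```python
# Generic x*y=k pool; illustrative, not Delphi's implementation.
def swap(pool_in, pool_out, amount_in, fee=0.003):
    amount_in_after_fee = amount_in * (1 - fee)
    amount_out = pool_out * amount_in_after_fee / (pool_in + amount_in_after_fee)
    return pool_in + amount_in, pool_out - amount_out, amount_out

usdc, yes_shares = 10_000.0, 10_000.0                           # pool reserves
usdc, yes_shares, got_shares = swap(usdc, yes_shares, 1_000.0)  # buy YES with 1,000
yes_shares, usdc, got_back = swap(yes_shares, usdc, got_shares) # sell it straight back
print(f"round trip: paid 1,000.00, got back {got_back:,.2f} (loss = fees + slippage)")
```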
In terms of the airdrop piece, that's another unfortunate part of the crypto world in that people now expect airdrops.
I think that's a really bad side of crypto because it means even if you don't do an airdrop,
you're annoyingly beholden to people using a system for the wrong reason.
We try to mitigate that by saying we do not do those airdrops.
There aren't reasons why you're like being incentivized to do this thing.
We haven't said anything like that.
We don't mean anything like that.
I think in reality, the first airdrop that ever existed was the only airdrop that was
effective.
Every single one after that was a flaw in the system.
And I think it's a really unfortunate part of crypto.
Last question.
And then we need to wrap.
We're recording this about a week, week and a half after the Kelpdal attack.
There was the Drift Protocol one.
I mean, how do you maintain the security of your network?
How do you make sure it's not being manipulated?
We don't want to see another French hair dryer manipulating the test head or whatever to win a few thousand dollars.
Sure.
So on the network side on the chain, we are very kind of very heavily secured.
We have a research team.
We have smart contract engineers who stay absolutely on top of this stuff.
We were also quite paranoid.
And so, like, the LayerZero, Kelpdal stuff happened because there were a lot of one-of-one DVN setups that existed in the world.
We never even started with a one of one.
We thought that was a ridiculous way to start.
So we were two of two from the very, very beginning.
We can also improve that security and we're doing that work internally.
In terms of all of our smart contracts, when we build the Delphi information market contracts, for example,
those are fully audited by Trail of Bits, in our view
the top auditing firm in crypto right now. We work very, very closely with them to make sure absolutely
everything is audited to the full degree it can possibly be. That doesn't mean you can't have
issues in the future when you do something completely new. You're always taking a risk like that.
We have all of the resources around us to do the maximum amount of security work we possibly can.
And we do that constantly. So on the chain itself, on the design of the contracts, etc.,
we are the strongest we could possibly be in this world. On
the idea of manipulation of markets, like the French hair dryer, within the information market
concept: from our perspective, that isn't a flaw in information markets. It's actually
a kind of information market in full operation. It shows where information can be created
and traded in the world. I think gradually, information markets have to heal themselves against
those types of manipulation because manipulation is kind of two way. You're trading information,
but you're also trading the ability to act in the physical world.
And so those sorts of things, I think, are somewhat inevitable
until we harden the Oracle systems that determine the outcome
of information and prediction markets.
And we build those Oracle systems at the machine learning level.
So we don't have time to go into it here,
but there's a flaw with the way that Polymarket, Kalshi, et cetera,
settle their markets.
There's never going to be a perfect system for this.
But we have research and philosophy going back to like the Greeks
on how you can establish truth.
within the world and what oracle systems look like.
And we do that through machine learning and commitments to digital intelligence that we believe
is the strongest commitment you could have to a truth, much stronger than the systems that
things like Polymarket and Kalshi use; they use stake-weighted voting, et cetera.
Those are manipulable.
We use a different system through machine learning oracles.
But like I said, we probably don't have time to go into that.
Yeah, that's something we may have to discuss in another interview.
Well, we're going to have to wrap it up right there.
So, Ben, thanks so much for joining.
Thanks for joining us on Bits and Bips, the interview.
But stay with us.
Laura's coming up next on Unchained.
She's going to be sitting down with Tom Dunlevy of Varys Capital and Adrian Vassel Hevick of
Steakhouse Financial to debate whether DeFi yields are actually compensating depositors for the tail risk they're taking.
You won't want to miss it.
