The Good Tech Companies - Behind the Startup: How OpenLedger is Building a Blockchain-Native AI Ecosystem
Episode Date: May 26, 2025. This story was originally published on HackerNoon at: https://hackernoon.com/behind-the-startup-how-openledger-is-building-a-blockchain-native-ai-ecosystem. OpenLedger is decentralizing AI with transparent data attribution, rewards, and agent-based economies. Learn how their team plans to make AI accountable. Check more stories related to web3 at: https://hackernoon.com/c/web3. You can also check exclusive content about #web3, #openledger, #cryptocurrency, #agents, #ai-economy, #openledger-news, #kamesh, #good-company, and more. This story was written by: @ishanpandey. Learn more about this writer by checking @ishanpandey's about page, and for more stories, please visit hackernoon.com.
Transcript
This audio is presented by Hacker Noon, where anyone can learn anything about any technology.
Behind the Startup: How OpenLedger Is Building a Blockchain-Native AI Ecosystem, by Ishan Pandey.
As AI and blockchain converge, one project stands at the intersection of this revolution,
OpenLedger. With promises to decentralize model training, reward data attribution,
and power agent economies, OpenLedger is
pushing toward a new era of transparent and community-owned AI.
In this interview, we sat down with core contributor Kamesh to understand the principles, innovations,
and roadmap that underpin OpenLedger's unique thesis.
Ishan Pandey: Hi Kamesh, it's a pleasure to welcome you to our "Behind the Startup" series.
Could you start by telling us about yourself and how you became involved with OpenLedger?
Kamesh:
Hey Ishan, I'm a core contributor at OpenLedger.
Before OpenLedger, I was working at an AI/ML research and development company, where I worked with enterprise clients like Walmart, Cadbury, and more.
There were a few main issues that we noticed in the AI segment: black-box models, a lack of transparency, and no way of knowing which data led to a specific inference.
The bigger issue is that centralized companies have not fairly compensated their data contributors,
and this is what we're trying to address with OpenLedger.
Ishan Pandey: OpenLedger is being positioned as the blockchain built for AI. Can you walk us through the gap in the current infrastructure that OpenLedger is trying to solve, and why this moment in time is critical?

Kamesh: As mentioned previously, centralized companies
have trained their AI models on user data without permission and have made billions of dollars without paying anyone fairly. On OpenLedger, every training step, data source,
and model upgrade leaves a trace that anyone can inspect. This matters right now because
people ask AI for financial advice, health suggestions, and even election coverage. While
dealing with such sensitive topics, it is important to ensure that the model is using
accurate data and not hallucinating. By using proof of attribution, we can eliminate the data
that led to a specific harmful inference, ensuring safety in sensitive use cases.
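To make that concrete, here is a minimal Python sketch of how an attribution trace could be used to quarantine the data behind a flagged inference before retraining. The record structure, IDs, and function names are illustrative assumptions, not OpenLedger's actual Proof of Attribution implementation.

```python
# Hypothetical sketch: use an attribution trace to exclude the data
# sources behind a harmful inference. All IDs are made up.

# Each attribution record links an inference to the datasets it drew on.
attribution_log = [
    {"inference_id": "inf-001", "datasets": ["ds-health-1", "ds-forum-9"]},
    {"inference_id": "inf-002", "datasets": ["ds-health-1"]},
]

def quarantine_sources(flagged_inference_id: str, training_set: set[str]) -> set[str]:
    """Remove every dataset implicated in a flagged inference from the training set."""
    implicated = set()
    for record in attribution_log:
        if record["inference_id"] == flagged_inference_id:
            implicated.update(record["datasets"])
    return training_set - implicated

training_set = {"ds-health-1", "ds-forum-9", "ds-finance-3"}
# A reviewer flags inf-001 as harmful; its source datasets are excluded.
print(quarantine_sources("inf-001", training_set))  # {'ds-finance-3'}
```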
Ishan Pandey: One of the more ambitious ideas behind
OpenLedger is to create attribution-based rewards for data and model contributions.
In practical terms, how do you measure contribution in a decentralized environment?
Kamesh: Think of a shared on-chain history that records every dataset and model tweak along with the
wallet that submitted it.
Whenever the network trains a new version or responds to a user query, it looks back
through that history to see which contributions were involved.
Each time your entry shows up, you automatically receive a portion of the fee tied to that
action.
This information is public, so anyone can open the explorer and trace exactly how their work has been used and what it has earned.
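As a rough illustration of that mechanism, here is a minimal Python sketch of fee splitting over such a history. The equal-split rule, wallet names, and contribution IDs are assumptions for the example, not OpenLedger's actual protocol.

```python
# Hypothetical sketch of attribution-based reward splitting.

# On-chain history: contribution id -> contributor wallet.
contributions = {
    "data-42": "0xAlice",
    "tweak-7": "0xBob",
}

def distribute_fee(action_fee: float, used_contributions: list[str]) -> dict[str, float]:
    """Split an action's fee equally across the wallets whose entries were used."""
    share = action_fee / len(used_contributions)
    payouts: dict[str, float] = {}
    for cid in used_contributions:
        wallet = contributions[cid]
        payouts[wallet] = payouts.get(wallet, 0.0) + share
    return payouts

# A user query pays a fee of 1.0, and the lookup finds both entries were involved:
print(distribute_fee(1.0, ["data-42", "tweak-7"]))  # {'0xAlice': 0.5, '0xBob': 0.5}
```

A production system would weight shares by each contribution's measured influence rather than splitting equally, but the lookup-then-route flow is the same.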
Ishan Pandey: Let's talk about the Model Factory and OpenLoRA. From a technical perspective, how are these tools built to handle resource sharing, GPU bottlenecks, and the demands of model iteration at scale?

Kamesh: Think of Model Factory as a no-code platform where anyone can fine-tune a specialized
language model without renting an entire data center. You pick a base model and select the
parameters. When your fine-tune finishes, it's saved as a lightweight LoRA adapter,
so many versions can live side by side without eating huge amounts of memory or bandwidth.
OpenLoRA then lets you plug those adapters into a shared base model during inference,
so a single GPU can switch between dozens of specializations, allowing for iteration at scale.
Model Factory and OpenLoRA are very important pillars of the ecosystem, since they allow everyone to participate in AI development at no significant cost.
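For readers who want to see what adapter hot-swapping looks like in practice, here is a minimal sketch using the Hugging Face peft library, which supports the same LoRA-adapter pattern Kamesh describes. The model and adapter IDs are hypothetical placeholders, and this is a generic PEFT workflow, not OpenLoRA's actual implementation.

```python
# Hypothetical sketch of serving many LoRA adapters on one shared base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Llama-2-7b-hf"  # placeholder shared base model

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(BASE)

# Attach a first fine-tuned LoRA adapter on top of the shared base.
model = PeftModel.from_pretrained(base_model, "org/medical-lora", adapter_name="medical")

# Additional adapters attach to the same base weights, so many
# specializations share one copy of the base parameters on a single GPU.
model.load_adapter("org/legal-lora", adapter_name="legal")

# Switching specializations is a cheap adapter swap, not a model reload.
model.set_adapter("legal")
inputs = tokenizer("Summarize this contract clause:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```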
Ishan Pandey: You're also introducing a concept called Proof of Attribution (PoA). What exactly is being measured here, and how do you ensure it's a reliable metric in assessing agent activity?

Kamesh: Proof of Attribution is how we track which data was
used by the model to arrive at a specific inference and reward every meaningful contribution.
When your data is used by a model to create an inference, it is recorded on chain. Each
time users rely on the model, a portion of the revenue is automatically routed back to
you, and the entire trail is open for anyone to verify. It allows contributors to see proof
of their work on chain and allows them to get rewarded for it fairly.
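To illustrate what an openly verifiable trail could look like, here is a minimal Python sketch of an append-only log in which each inference record is hash-chained to the one before it, so any tampering breaks verification. The record fields are assumptions for the example, not OpenLedger's on-chain format.

```python
# Hypothetical sketch of a hash-chained, verifiable attribution trail.
import hashlib
import json

def record_inference(trail: list[dict], inference_id: str,
                     datasets: list[str], revenue_share: float) -> None:
    """Append an inference record whose hash commits to the previous record."""
    prev_hash = trail[-1]["hash"] if trail else "genesis"
    body = {"inference_id": inference_id, "datasets": datasets,
            "revenue_share": revenue_share, "prev": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    trail.append(body)

def verify(trail: list[dict]) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    prev_hash = "genesis"
    for rec in trail:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body["prev"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
            return False
        prev_hash = rec["hash"]
    return True

trail: list[dict] = []
record_inference(trail, "inf-001", ["ds-health-1"], 0.02)
record_inference(trail, "inf-002", ["ds-health-1", "ds-forum-9"], 0.05)
print(verify(trail))  # True
```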
Ishan Pandey: AI royalty as a concept hinges on long-term tracking and trust.
How do you plan to handle issues of model forking, proxy usage, and downstream value
attribution across AI agents?
Kamesh: Our priority is to ensure that contributors always get compensated fairly.
To avoid issues such as model forking and proxy usage, we will be hosting all the models
ourselves and external access will be via API only.
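As a sketch of what API-only access with usage metering might look like, here is a minimal hypothetical gateway using FastAPI. The endpoint, key store, and metering logic are illustrative assumptions, not OpenLedger's actual service.

```python
# Hypothetical sketch: hosted models behind an API with per-key metering.
from collections import defaultdict
from fastapi import FastAPI, Header, HTTPException

app = FastAPI()
API_KEYS = {"key-abc": "0xAlice"}  # api key -> billing wallet (illustrative)
usage = defaultdict(int)           # wallet -> metered call count

@app.post("/v1/infer")
def infer(prompt: str, x_api_key: str = Header(...)) -> dict:
    wallet = API_KEYS.get(x_api_key)
    if wallet is None:
        raise HTTPException(status_code=401, detail="unknown API key")
    usage[wallet] += 1  # every call is metered, so downstream usage stays attributable
    # The model weights never leave the host; only the completion crosses the API.
    completion = f"(model output for: {prompt})"  # placeholder for the hosted model call
    return {"completion": completion, "metered_calls": usage[wallet]}
```

Because the weights never leave the host, there is nothing to fork, and every downstream call passes through the metered endpoint.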
Ishan Pandey: You've hinted at a testnet rollout.
Could you share the next set of milestones you're working toward, and what developers
can expect when engaging with OpenLedger in its current phase?
Kamesh:
Contributors are already spinning up nodes and streaming real data into the
network. We currently have over 4 million active nodes running on our testnet, and we just wrapped up Epoch 2. We have over 10 projects building on us already, including a former Google DeepMind researcher. We are also very excited to share that we will soon be going for our TGE and Mainnet launch. We will share the full details
soon, so keep an eye out.

Ishan Pandey: Finally, as someone who's building deep tech infrastructure,
what advice would you give to developers or researchers looking to enter this intersection
of decentralized AI?

Kamesh: The best advice I can give people is to keep your product simple.
People should be able to understand what you do in the first few minutes. Users care more about whether a product works smoothly and solves their issue than about fancy keywords.

Don't forget to like and share the story!

Vested Interest Disclosure:
This author is an independent contributor publishing via our business blogging program.
Hacker Noon has reviewed the report for quality, but the claims herein belong to the author.
#DYOR. Thank you for listening to this Hacker Noon story, read by Artificial Intelligence.
Visit hackernoon.com to read, write, learn and publish.