The Good Tech Companies - How Douwe Faasen Is Reinventing Data Infrastructure: Unveiling the Future of Data Availability
Episode Date: February 3, 2025. This story was originally published on HackerNoon at: https://hackernoon.com/how-douwe-faasen-is-reinventing-data-infrastructure-unveiling-the-future-of-data-availability. ... Discover groundbreaking insights from blockchain innovator Douwe Faasen in this exclusive interview. Check more stories related to web3 at: https://hackernoon.com/c/web3. You can also check exclusive content about #web3, #blockchain, #dlt, #cryptocurrency, #douwe-faasen, #hyveda, #hyveda-news, #good-company, and more. This story was written by: @ishanpandey. Learn more about this writer by checking @ishanpandey's about page, and for more stories, please visit hackernoon.com. Learn how his journey—from early Bitcoin faucets to launching HyveDA—is reshaping data availability, scalability, and decentralization on Ethereum and beyond. Explore the future of rollups, ZK technology, and decentralized infrastructure in this must-read conversation.
Transcript
This audio is presented by Hacker Noon, where anyone can learn anything about any technology.
How Douwe Faasen Is Reinventing Data Infrastructure: Unveiling the Future of Data Availability
by Ishan Pandey. Join us as we dive deep into the evolution of blockchain infrastructure,
data availability, and decentralization with Douwe Faasen, co-founder of HyveDA.
From early coding adventures to pioneering high-throughput solutions,
this interview unveils the visionary insights behind the next generation of decentralized
technology. Ishan Pandey. Hi Douwe, welcome to our Behind the Startup series.
Your journey from coding at 11 to founding HyveDA is fascinating. How did your early
experiences with Bitcoin faucets and smart contracts shape your
vision for data availability solutions? Douwe Faasen. Hey Ishan, thank you. It is great to have the
opportunity to be a part of this series, and thank you for that question. I always do like reminiscing
about the past. It was quite a journey for me to go from simplistic websites to building a
high-throughput data availability solution. In that journey, however,
I indeed experimented with Bitcoin faucets and started learning how to develop smart contracts
in 2017. That taught me a lot about the atomicity of blockchains, but also exposed me to the problem
of data infrastructure limitations in blockchain development. Often, Ethereum was too restrictive
to build something that scales, so teams would spin up their own chains, where they could issue the gas token themselves and build something of
much bigger scale.
Ultimately, people stuck to Ethereum and these chains became ghost chains.
This observation, coupled with my passion for data, gave me the drive to build the tools
for decentralized technologies to build something of scale without leaving behind the network
effect of existing blockchains. HyveDA is the result of this, and we will be building data infrastructure for rollups,
validiums, appchains, and verifiable services to build upon, with limitless scale and capacity.
Ishan Pandey. We're seeing a significant trend in the reutilization of Ethereum infrastructure,
particularly with based rollups. How do you see this evolution
impacting the future of blockchain scalability? Douwe Faasen. I think reutilization is the perfect
framing for this. Based rollups have quite a few improvements over current rollup designs
in the way they handle security and decentralization. One notable element is that
they leverage Ethereum validators for sequencing while outsourcing execution to the rollup's execution nodes, which is a great modular design and can be very scalable.
In my opinion, based rollups are a great testament to Ethereum's ability to operate as a settlement layer, but it also highlights the importance of data availability.
When scaling horizontally and vertically on execution is not a problem anymore for rollups, the bottleneck
becomes data availability. Rollups need to ensure that the data required to continue the chain and to verify its
results is available to all participants, including other rollups that need interoperability
with each other. With all these puzzle pieces aligned, blockchain scalability becomes limitless.
Ishan Pandey. You've been vocal about the importance of home stakers in maintaining true
decentralization. Could you elaborate on why this matters in an era where high TPS solutions often
favor large data centers? Douwe Faasen. Solo stakers are a crucial part of the puzzle in mitigating the
risks posed by centralized entities and protecting networks against threats like censorship or 51%
attacks. I strongly believe that our focus should
be on building truly decentralized networks that are open to everyone and resilient to attacks from
large, centralized actors. Security should never be compromised for the sake of performance.
Even in high TPS environments, there are several ways to protect decentralization.
For instance, you can design the system so that each node processes only a
portion of the data, rather than requiring all nodes to handle the entire dataset.
This portion can be so small that even a basic home computer can handle it at scale,
ultimately contributing to greater throughput for the network as a whole.
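A minimal sketch of that idea, assuming simple XOR parity in place of the Reed-Solomon erasure coding a production DA layer would use (the node count and chunk sizes here are purely illustrative):

```python
# Sketch: split a data blob so each node stores only a small chunk,
# plus one XOR parity chunk so any single missing chunk is recoverable.
# Real DA layers use Reed-Solomon erasure coding, which tolerates the
# loss of many chunks; XOR parity keeps this example dependency-free.

from functools import reduce

def split_with_parity(blob: bytes, num_nodes: int) -> list[bytes]:
    """Split blob into num_nodes data chunks and append one parity chunk."""
    chunk_size = -(-len(blob) // num_nodes)  # ceiling division
    padded = blob.ljust(chunk_size * num_nodes, b"\x00")
    chunks = [padded[i * chunk_size:(i + 1) * chunk_size] for i in range(num_nodes)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)
    return chunks + [parity]

def recover_missing(chunks: list) -> list:
    """Reconstruct a single missing chunk by XOR-ing all the others."""
    missing = chunks.index(None)
    present = [c for c in chunks if c is not None]
    chunks[missing] = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), present)
    return chunks

blob = b"rollup batch data" * 100
stored = split_with_parity(blob, num_nodes=8)   # each node holds ~1/8 of the data
stored[3] = None                                # one node disappears
restored = recover_missing(stored)
assert b"".join(restored[:8]).rstrip(b"\x00") == blob
```

In a real k-of-n erasure-coded design, any k of the n coded chunks reconstruct the blob, so a home machine only ever touches a small, fixed fraction of the data.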
Ishan Pandey. With your background in building indexers before The Graph,
what unique insights have you gained about data availability challenges in the blockchain space? Douwe Faasen. Indexer networks
and data availability are completely different concepts, but they do share a few underlying
patterns. You could even argue that the graph helps maintain data availability for indexed data.
Of course, the security assumptions and cryptographic properties are fundamentally
different.
Still, one key thing I've learned from building my own indexers, as well as working with The Graph and StreamingFast, is just how much data redundancy can cost you.
Nodes can disappear at any moment, and as a user, you never want that to affect your
application.
That means you've got to replicate your data across multiple nodes.
Naturally, this gets expensive, and that concept of redundancy is one of the reasons data availability,
DA, became such an important topic in the Ethereum community.
Sure, having all nodes process the same transaction data is great,
because you only need one honest node to ensure data integrity. But it's also a bottleneck,
because to scale, every node needs to scale up too,
and that just cranks up the network's costs. A high-throughput network really needs to split
data responsibilities across several nodes to ensure redundancy. However, you can't just
replicate everything everywhere, or you end up with a massive bottleneck. Striking that balance
between redundancy and efficiency is one of the things that makes data availability such a challenging problem.
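A back-of-the-envelope comparison makes that balance concrete; the figures below are hypothetical, not any network's real parameters:

```python
# Rough storage-cost comparison: full replication vs. k-of-n erasure coding.
# Figures are illustrative only; real networks tune k, n, and chunk sizes.

data_gb = 100            # size of the data to keep available
num_nodes = 50           # nodes in the network

# Full replication: every node stores everything.
replication_total = data_gb * num_nodes            # 5,000 GB network-wide

# Erasure coding: data split into k chunks, expanded to n coded chunks;
# any k of the n chunks suffice to reconstruct the original data.
k, n = 20, 50
erasure_total = data_gb * (n / k)                  # 250 GB network-wide
per_node = erasure_total / num_nodes               # 5 GB per node

print(f"replication: {replication_total} GB total, {data_gb} GB per node")
print(f"erasure coded: {erasure_total:.0f} GB total, {per_node:.0f} GB per node")
# The coded scheme still tolerates n - k = 30 missing nodes while cutting
# storage 20x, which is exactly the redundancy/efficiency trade-off above.
```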
Ishan Pandey.
There's ongoing debate about the balance between decentralization and verifiability,
especially with ZK technology.
How does HyveDA approach this balance in its solutions?
Douwe Faasen.
That's a great question.
I personally believe that ZK is a fantastic tool to prove that execution
was done right. You would only need consensus for the ordering of the transactions and to ensure
the state is correct after applying the proofs. We're actively exploring ZK for DA and it's a
great way to aid decentralization. ZK removes the requirement for any kind of trust between nodes,
which means that more nodes can join the network and consensus
can be reached in a more simplified way. Ultimately, this increases verifiability and
decentralization at the same time. In the context of DA, we're particularly interested in how ZK
can ensure correctness without requiring extensive overhead. For example, ZK proofs could be used to
guarantee that nodes in the DA network are storing and serving data correctly, while Ethereum's consensus ensures the global integrity and sequencing
of these proofs.
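As a simplified illustration of the commitment side of such a scheme, the sketch below uses a plain Merkle proof where a real design would wrap the check in a succinct ZK proof; the structure and names are assumptions for illustration only:

```python
# Sketch: commit to data chunks with a Merkle root, then let a node prove
# it stores chunk i by revealing the chunk plus its Merkle branch.
# A ZK version would prove the same statement succinctly without revealing
# the chunk; this plain-hash version just shows the commitment structure.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])            # duplicate last node if odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_branch(leaves: list, index: int) -> list:
    level = [h(leaf) for leaf in leaves]
    branch = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        branch.append(level[index ^ 1])        # sibling differs in the last bit
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return branch

def verify(root: bytes, chunk: bytes, index: int, branch: list) -> bool:
    node = h(chunk)
    for sibling in branch:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

chunks = [f"chunk-{i}".encode() for i in range(8)]
root = merkle_root(chunks)                     # posted on-chain as the commitment
proof = merkle_branch(chunks, index=5)         # produced by the storing node
assert verify(root, chunks[5], 5, proof)       # checked by anyone with the root
```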
Ishan Pandey.
You've mentioned that real use cases will be enabled by verifiability through Ethereum
rather than complete decentralization.
Can you share some concrete examples of how this might play out? Douwe Faasen.
Absolutely. Let me start by saying that
decentralization remains extremely important and trustless verifiability wouldn't exist without it.
Verifiability through decentralized consensus-based systems like Ethereum means you can create
trustless guarantees that operations or computations were correctly executed and
applied, without every step having to run in a decentralized setup. Let me give you some examples of that. Game engines could run their infrastructure off-chain,
enabling the usual high throughput that they are dealing with, but periodically prove game
transitions and submit them on-chain. The chain's consensus guarantees correct game outcomes,
but none of the logic or assets are hosted on-chain. Players, especially in games that
have expensive in-game
assets, can remain confident they're not being cheated. Another use case would be decentralized
order books. Trade matching and execution would happen off-chain, but its proofs would be verified
on-chain. A decentralized exchange like this could be run by anyone, but without
needing expensive and lengthy consensus for order matching.
Another fun thing about this is that you could design it in such a way that deposits and withdrawals simply happen on-chain, also minimizing any trust assumptions in a centralized entity.
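A highly simplified sketch of that off-chain matching, on-chain commitment flow, with a hash commitment standing in for the validity proof a real exchange would submit (all names and data are hypothetical):

```python
# Sketch: match orders off-chain, then commit the resulting batch on-chain.
# A real system would submit a ZK validity proof of the matching; here a
# hash commitment over the batch stands in to show the data flow.

import hashlib
import json

def match_orders(bids: list, asks: list) -> list:
    """Naive off-chain matcher: fill each bid against the cheapest ask."""
    trades = []
    asks = sorted(asks, key=lambda a: a["price"])
    for bid in sorted(bids, key=lambda b: -b["price"]):
        for ask in asks:
            if ask["qty"] > 0 and bid["price"] >= ask["price"]:
                qty = min(bid["qty"], ask["qty"])
                trades.append({"price": ask["price"], "qty": qty})
                ask["qty"] -= qty
                bid["qty"] -= qty
                if bid["qty"] == 0:
                    break
    return trades

def commit(trades: list) -> str:
    """Deterministic commitment to the trade batch (stand-in for a proof)."""
    return hashlib.sha256(json.dumps(trades, sort_keys=True).encode()).hexdigest()

bids = [{"price": 101, "qty": 5}, {"price": 99, "qty": 3}]
asks = [{"price": 100, "qty": 4}]
trades = match_orders(bids, asks)          # runs off-chain, no consensus needed
batch_root = commit(trades)                # only this commitment goes on-chain
print(trades, batch_root[:16])
```

Deposits and withdrawals would settle directly on-chain, as described above, so the only trusted component left is the matcher, and its output is verifiable.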
Payment systems can also profit from this. Consensus creates several limitations,
such as latency and bandwidth limitations. For a payment processor, it would be hard to
handle millions of transactions per second with a consensus algorithm implemented.
What is more important to the end users is transparency on fees and accounting.
Decentralized verifiability can enable this, without relying on a consensus algorithm in
the payment processor itself. Ishan Pandey.
Looking ahead, what developments in data availability and
blockchain infrastructure are you most excited about, and how is HyveDA positioning itself
for these changes? Douwe Faasen. There is a lot happening in infrastructure right now and
it's all pretty exciting. The most exciting thing for the industry as a whole, in my opinion,
is how far ZK has come in such a short period of time. It is a technology that can advance
decentralization and distributed systems in terms of security, speed, and reliability,
but previously required a team of PhDs to actually implement it. A big shout-out is due here to
Succinct, who have eliminated this requirement and now developers can build with ZK without
extensive prior academic knowledge on the topic. At Hyve, we're actively researching
and developing to bring ZK into our data availability layer for faster finality and
simpler data availability security assumptions. This will ultimately make us even more reliable
for DA integrators while maintaining the same throughput. Ishan Pandey. Thank you for your time
and insights, Douwe. Don't forget to like and share the story.
Vested Interest Disclosure. This author is an independent contributor publishing via our
business blogging program. Hacker Noon has reviewed the report for quality, but the claims herein
belong to the author. #DYOR. Thank you for listening to this Hacker Noon story,
read by Artificial Intelligence. Visit HackerNoon.com to read, write, learn and publish.