The Good Tech Companies - How Coral Protocol Is Building the Internet of Agents

Episode Date: June 3, 2025

This story was originally published on HackerNoon at: https://hackernoon.com/how-coral-protocol-is-building-the-internet-of-agents. Coral Protocol co-founders dive into agent composability, interoperability, and the Internet of Agents. A must-read for AI and Web3 builders. Check more stories related to web3 at: https://hackernoon.com/c/web3. You can also check exclusive content about #web3, #blockchain, #coral, #coral-protocol-news, #cryptocurrency, #good-company, #dlt, #ai, and more. This story was written by: @ishanpandey. Learn more about this writer by checking @ishanpandey's about page, and for more stories, please visit hackernoon.com.

Transcript
Starting point is 00:00:00 This audio is presented by Hacker Noon, where anyone can learn anything about any technology. How Coral Protocol is Building the Internet of Agents, by Ishan Pandey. Coral Protocol on Building the Internet of Agents for a Collaborative AI Economy. As the age of siloed AI agents fades, a new paradigm is emerging, one where intelligent agents don't just execute, but collaborate. Coral Protocol is pioneering this vision with infrastructure for decentralized agent communication, orchestration, and trust. We sit down with Roman Georgio and Caelum Forder, co-founders of Coral Protocol, to dive deep
Starting point is 00:00:36 into the architecture powering the internet of agents, and why tomorrow's AI economy will need more than just better models, it'll need better cooperation. Ishan Pandey: Hi Roman, hi Caelum, great to have you both here. Let's start with your backgrounds. You've both worked at the edge of AGI research and commercial AI infrastructure. What led you to start Coral Protocol, and how did your past experiences shape this vision? Roman Georgio: Hey, thanks for having us. Yeah, so we met working at CAMEL-AI, an AI research lab
Starting point is 00:01:06 finding the scaling laws of agents. We were working on multi-agent projects through this time and even before then; Coral really came about out of necessity. Caelum Forder: We actually started building Coral as a means to an end for another project we wanted to make. It was a kind of automated reporter that was meant to find trends or events in trading data and connect them with news articles and what people were saying to create and share relevant narratives. We'd worked on a few applications before with similar needs, so we figured there was a gap there for us to actually make this our main thing. Ishan Pandey: The term "Internet of Agents" is increasingly gaining traction. But in practical terms, what does it mean, and what fundamental problems does Coral aim
Starting point is 00:01:48 to solve in this context? Roman Georgio: Great question. In short, Cisco defines it as "a system where various AI agents, developed by different vendors or organizations, can communicate and collaborate seamlessly." At first glance, this might sound underwhelming, but if you actually think about it, it's powerful. Any business or developer can apply their expertise to build the best agents for their domain.
Starting point is 00:02:13 And if every company across the world does this, we enter a connected web of highly skilled digital workers. The bottleneck right now is that there are thousands of agent frameworks, so all the really cool agents being built can't easily be reused or collaborate with each other. Coral aims to unlock this blocker by building the infrastructure for agents to join the Internet of Agents. We make it possible for any agent, regardless of framework, to collaborate. We also provide a secure way for agent creators and application developers to handle payments, so people are actually incentivized to maintain and improve their agents.
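To make the framework-agnostic part concrete, here is a minimal sketch of the kind of shared message envelope such collaboration implies; the field names and the per-message price field are illustrative assumptions, not Coral's actual wire format.

```python
# Illustrative sketch only: field names and the price field are assumptions,
# not Coral's actual wire format.
import json
import uuid
from dataclasses import dataclass, asdict

@dataclass
class AgentMessage:
    thread_id: str             # conversation this message belongs to
    sender: str                # agent identifier, independent of its framework
    recipient: str
    content: str
    price_microcents: int = 0  # optional charge for the work performed

def to_wire(msg: AgentMessage) -> str:
    """Serialize to plain JSON so agents built on any framework can parse it."""
    return json.dumps(asdict(msg))

# An agent built in one framework asks for work; one built in another replies and bills.
request = AgentMessage(str(uuid.uuid4()), "app/planner", "vendor/summarizer",
                       "Summarize today's BTC news.")
reply = AgentMessage(request.thread_id, "vendor/summarizer", "app/planner",
                     "BTC rallied on ETF inflow headlines.", price_microcents=250)
print(to_wire(request))
print(to_wire(reply))
```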
Starting point is 00:02:45 Ishan Pandey: Coral's graph-structured coordination and scope memory system stand out as novel primitives. Can you explain how these technical design choices support scalable, secure multi-agent collaboration? Caelum Forder: I've found the most useful way of thinking about agents is in terms of responsibility rather than by task or capability. What can it be responsible for? It then becomes clear how to break them down and when an agent overwhelmed with responsibility will fail, whether that's too much in its context window or just too many responsibilities to attend to.
Starting point is 00:03:18 LLM-based agents are far more easily overwhelmed by responsibility than humans are currently, and hopefully this doesn't change too soon, so this graph approach seems obvious: a strictly hierarchical approach will impose overwhelming responsibilities on the agents nearer the top, while having them operate independently in a graph allows developers to manage the responsibility of agents, prevent overwhelm, and scale the system without limits. It also helps you mitigate individual agent failures or misalignment, and depend less on the larger models, which have recently been shown to be scheming and deceptive.
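As a rough illustration of that responsibility framing, here is a small sketch; the structure and names are hypothetical, not Coral's actual API. Each agent owns one narrow responsibility, and the graph's edges decide who may hand work to whom, so no single agent has to hold the entire task.

```python
# Hypothetical sketch of responsibility-scoped agents arranged as a graph;
# names and structure are illustrative, not Coral's actual API.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    responsibility: str                              # the one thing this agent is accountable for
    peers: list[str] = field(default_factory=list)   # agents it is allowed to message

# No "top" agent carries every responsibility; work is spread across the graph.
agents = {
    "trend_scanner": Agent("trend_scanner", "detect anomalies in trading data",
                           peers=["narrative_writer"]),
    "news_reader": Agent("news_reader", "summarize articles relevant to a ticker",
                         peers=["narrative_writer"]),
    "narrative_writer": Agent("narrative_writer", "combine findings into a short report",
                              peers=["trend_scanner", "news_reader"]),
}

def can_message(sender: str, receiver: str) -> bool:
    """Edges in the graph bound who can hand work to whom."""
    return receiver in agents[sender].peers

assert can_message("trend_scanner", "narrative_writer")
assert not can_message("trend_scanner", "news_reader")
```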
Starting point is 00:03:52 Ishan Pandey: Let's talk about MCP, the Model Context Protocol. What makes MCP a critical enabler of interoperability between agents? And how does it prevent the kind of vendor lock-in we're seeing with closed AI frameworks? Caelum Forder: Before MCP, the only practical way of defining tools was through model providers' own SDKs, like OpenAI's or Anthropic's Python SDKs, or frameworks built to use them. These are technically open source, but mostly developed by the model providers themselves, who control the back-end APIs they connect to.
Starting point is 00:04:27 As specific functionality like prompt caching becomes available, it becomes very impractical not to use one of these SDKs when making LLM applications, so if you wanted your tool to be widely used, you'd have to make it available separately for each library your users work with. That would be something like 25 separate implementations across a few different languages to capture 90 percent plus of users, and maintaining each of them would be a nightmare. Thankfully, MCP came along and made it much more practical to build reusable software and services for the intersection of application and LLM. You don't need to think about programming languages anymore, since it is I/O-boundaried. That's very good for preventing any single model provider from becoming the default, and it allows us to start writing more reusable application logic in our LLM-powered applications.
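For instance, a tool defined once behind MCP's I/O boundary can be called from any MCP-capable client, regardless of language or model vendor. This is a hedged sketch using the FastMCP helper from the reference Python SDK, so treat the exact import path and decorator as assumptions to verify against the SDK documentation.

```python
# Sketch of exposing one tool over MCP instead of maintaining per-SDK adapters.
# Assumes the reference Python SDK (pip install mcp); verify names against its docs.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("prices")

@mcp.tool()
def latest_price(ticker: str) -> float:
    """Return the most recent traded price for a ticker (stubbed for illustration)."""
    return 101.25  # a real server would query a data source here

if __name__ == "__main__":
    # Served over an I/O boundary (stdio by default), so any MCP-capable client
    # can call it, whatever language or model vendor it is built on.
    mcp.run()
```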
Starting point is 00:05:07 Ishan Pandey: Many projects focus on agent intelligence or model performance. You're tackling agent composability. Why is this the real bottleneck for unlocking collective intelligence across agents? Roman Georgio: The aim of those focuses is really about unlocking capabilities, and capabilities unlocked by model performance are
Starting point is 00:05:33 closer to growing capabilities than purposefully building them. This growing approach has proven popular and easy, but we identified that a lack of focus on robust, predictable elements that can be connected with each other was limiting the scale of composition that makes the internet so great. There seems to be demand for a capability that forms a sort of bridge between the grown capabilities
Starting point is 00:05:55 and the built ones. Composability is really essential for building things; there are properties that the most reused software and services have in common that are crucially missing in AI services. And actually there is a huge safety side too, an even bigger reason really. We want software that has all the benefits of AI without all the risks. You see Anthropic's latest post about Claude 4 blackmailing its creator when it knows it's going to be shut down, and you have to think: growing systems like this makes them really hard to trust; you can't know how they'll behave in new situations or with new models. Even before they get powerful enough to be an existential concern, from a business standpoint, do you want to use models in production that you can't trust? Agent composability, on the other hand, allows for a much more predictable way of scaling capabilities. It's also a more
Starting point is 00:06:45 decentralized approach, creating more entry points for businesses to make money and contribute, versus one AI lab moving toward a monopoly. Ishan Pandey: From a systems design perspective, what were the hardest technical tradeoffs you faced while building Coral's architecture for open coordination and memory management? Caelum Forder: So we were building a kind of automated reporter that was meant to find trends or events in trading data and connect them with
Starting point is 00:07:10 news articles and what people were saying to create and share relevant narratives. This was before A2A, but actually A2A would have worked well for this, even if it would have taken some effort to get all the agents connected. I'd worked on a few applications before with similar needs, so we figured there was a gap there for us to actually make this our main thing. That original use case sounds relatively intricate, but it shares an incredibly fortunate property with some other applications like research and OS software testing, where the confidentiality of the information is way less essential than in the majority of use cases where you want
Starting point is 00:07:43 agency in software services. The problem of user data isolation was looming over us heavily while we worked on getting the communication working well. The isolation problem was almost cryptid-like, a creature in solution space. I'd joke with Roman that there'd be more sightings than usual sometimes, or that it must be getting hungry, or that it wouldn't like certain options while we worked out potentially connected features. I had about five rather deep solutions designed, each with significant tradeoffs and their own sets of obstacles. I'd been tracking its activity, and I could tell it wouldn't have liked them though. I think as developers we often miss the opportunity to take intangible development
Starting point is 00:08:20 paths, and end up having to take longer paths anchored to things that are easy to explain. These paths can be much longer, but they attract less blame. An example of a tangible path is making an interface in React to match designs that have been handed over. The implementation and design practically form a progress bar, and you can relax. A less tangible development path could be where you are given a specific need or intention, and you might go find OSS solutions with Helm charts, or develop something in-house, or some combination.
Starting point is 00:08:50 Even to communicate a high degree of intangibility in a development path takes more words the higher the intangibility, so I can't practically describe a universally coherent, highly intangible path. But I think you get the picture. There are of course times where tangible paths are way better too, like when work needs to be preemptible, its value easily communicated before being done, or done by an unfamiliar and scaling team. So intangible paths are hard and dangerous: they could be rabbit holes that get you lost, or they could save massive
Starting point is 00:09:19 amounts of attention. But common incentives and trust dynamics really bias people towards tangible development paths, even when they are the worst paths; this is the issue. The worst codebases you've ever worked on were probably formed in environments where there was a large bias away from intangible work. Particularly ambitious developers might respond to these conditions and try to get around them by secretly taking an intangible path, or doing it in spare time and asking for forgiveness later. This is really problematic, though, because you're cutting yourself off from the limited remaining connections to tangibility by hiding, and it forces you to take deeper and more dangerous paths, which might even be longer, just to avoid being spotted and pulled out of the dark forest that you've invested so
Starting point is 00:10:01 much into. You can't just stay in the forest petting cryptids; the not-deers can't feed you or pay your rent, and you still need to frequently come up for air and maintain alignment and contact with reality. Anyway, eventually I felt ready, and I was in a very fortunate position where I could spend a bunch of time in which all the progress I made was intangible, and I didn't need to hide. I can't emphasize enough how rare and fortunate these conditions are. It doesn't just take trust from who you answer to, but from who they answer to as well. The outcome we called Sessions, though it was hardly a standalone
Starting point is 00:10:34 feature as much as an update title. It shifted the protocol's role 20% of the way to that of a framework or platform. Coral with Sessions imposes deployment constraints, namely that you run a separate process on a private network alongside your application. It means every implementation of our specification requires a component that is expensive to implement and get right, which means it is subtly imposing prescriptions on applications that use it. These things are very uncomfortable to protocol developers in theory. In practice, though, the private network requirement is almost universally supported after the microservice trend.
Starting point is 00:11:08 Yes, it is hard to build the Coral server, but people can just use the reference one we made, since it has I/O boundaries and doesn't need to meet binary linking requirements that would usually call for flexibility there. With Sessions, agent developers define their agents like Kubernetes or Docker Compose resources, and they get instantiated in a way where it would be impossible to accidentally mix up user data. On top of that, the Coral server can optionally deploy and operate agents on platforms like Phala, where verified claims about what gets persisted and where information can be
Starting point is 00:11:39 sent are made. This way we actually have all of the pieces in place to make agents composable. It feels unintuitive from the solution-designing perspective, but it fits well from the perspective of someone who wants to add agency to their applications. It does also limit agents that are already in a fixed one-process deployment, perhaps from a no-code solution, but it seems incredibly worth it to me. It does feel like I emerged from the dark forest much better off and with new friends.
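A loose sketch of that instantiation model follows; every identifier is hypothetical rather than the Coral server's real resource schema. Agents are declared once, and each session gets its own fresh instances, so data from different users never shares an agent or its memory.

```python
# Hypothetical sketch of declarative agents instantiated per session;
# not the Coral server's real resource schema.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentSpec:
    name: str
    image: str                               # how to launch the agent (image, command, ...)
    env: dict = field(default_factory=dict)  # per-session configuration injected at launch

@dataclass
class Session:
    session_id: str
    agents: dict = field(default_factory=dict)  # live instances, scoped to this session only

def create_session(session_id: str, specs: list) -> Session:
    """Instantiate every declared agent fresh for this session."""
    instances = {s.name: object() for s in specs}  # stand-in for real deployment
    return Session(session_id, instances)

specs = [
    AgentSpec("trend_scanner", "registry.example/trend-scanner:1", {"MARKET": "BTC-USD"}),
    AgentSpec("narrative_writer", "registry.example/writer:1"),
]
# Two users, two sessions, two disjoint sets of agent instances and memory.
alice = create_session("alice-1", specs)
bob = create_session("bob-1", specs)
assert alice.agents["trend_scanner"] is not bob.agents["trend_scanner"]
```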
Starting point is 00:12:22 Ishan Pandey: Coral introduces concepts like agent advertisements, scope memory, and session-based payments. Can you walk us through how an actual real-world use case, say in decentralized trading or enterprise operations, would function using Coral's protocol? Roman Georgio: Sure. Coral aims to be the most practical way to add agency to software. All the features, such as agent advertisements, scope memory, and session-based payments, are designed with that goal in mind. For example, agent developers earn incentives when their agents are used, and application developers can mix and match agents
Starting point is 00:12:42 from Coral's growing library to assemble advanced systems more quickly, without vendor lock-in. That means if you were an application developer building a decentralized, multi-agent trading system, you'd simply select agents that research trends, track key opinion leaders (KOLs), monitor mindshare, and so on, and combine them as needed. The same concept applies to enterprise use cases as well.
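In rough terms, the developer experience being described might look something like the sketch below; every name in it (the coral client object, select_agents, open_session, run) is a hypothetical stand-in rather than Coral's published API.

```python
# Hypothetical sketch of composing library agents into a trading application;
# the "coral" client and all method names are stand-ins, not Coral's published API.

def build_trading_report(coral) -> str:
    # Pick already-published agents from the library instead of writing them yourself.
    agents = coral.select_agents([
        "trend-researcher",     # researches market trends
        "kol-tracker",          # tracks key opinion leaders
        "mindshare-monitor",    # monitors social mindshare
    ])
    # Combine them in one session; each agent's creator is paid for the session's usage.
    session = coral.open_session(agents)
    return session.run("Summarize today's BTC narrative and flag unusual activity.")
```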
Starting point is 00:13:11 Ishan Pandey: Lastly, what advice do you both have for technical founders building at the intersection of AI and Web3? What mindset or frameworks helped you execute Coral from idea to protocol? Caelum Forder and Roman Georgio: I'd say for Web3 founders, less marketing, more development. And for Web2 founders, more marketing, less development. But both need to focus more on customers, which I know sounds like a bit of a cliché. We're quite early in this journey, so I can't say much about the customers yet.
Starting point is 00:13:36 But I can talk about the mindset we have in comparison to other founders I see from these spaces. This just comes from being in the AI world. I see a lot of highly technical, brilliant researchers or AI talent building really cool things, but not putting much thought into how to market it, or even who to market it to. Even if you
Starting point is 00:13:54 build it, they might not come. On the flip side, in Web3, you often see a lot of marketing-heavy projects with little actual development. Even when they're good at marketing, it's often not sustainable, because they're spending all their efforts targeting people who won't actually use the product. We have a general rule for this internally: if they're a technical project and you can't
Starting point is 00:14:14 find their GitHub within the first 5 seconds on their homepage, they are most likely a marketing project. Both types of founders often fail for the same reason: no one uses their product. Something we have kept in mind every step of the way is what the end experience looks like for the user; we want to build something that is actually used and useful, as well as cool. Don't forget to like and share the story! Thank you for listening to this Hacker Noon story, read by Artificial Intelligence.
Starting point is 00:14:58 Visit HackerNoon.com to read, write, learn and publish.
