The Good Tech Companies - AI Agents Are Great at One Thing at a Time. Life Isn't Built That Way.

Episode Date: March 17, 2026

This story was originally published on HackerNoon at: https://hackernoon.com/ai-agents-are-great-at-one-thing-at-a-time-life-isnt-built-that-way. AI agents can handle single tasks, but real-world automation requires coordination across devices and protocols. Tethral aims to solve this. Check more stories related to tech-stories at: https://hackernoon.com/c/tech-stories. You can also check exclusive content about #tethral-ai-orchestration, #ai-agent-coordination, #ai-smart-home-automation, #ai-systems-runtime, #iot-protocol-orchestration, #multi-agent-architecture, #tethral-ai-research, #good-company, and more. This story was written by: @jonstojanjournalist. Learn more about this writer by checking @jonstojanjournalist's about page, and for more stories, please visit hackernoon.com. AI agents excel at individual tasks but struggle to coordinate multiple devices and systems at once. Tethral is building a local-first orchestration layer that interprets natural language intent and dynamically composes coordinated actions across smart home protocols like Zigbee, Matter, and web APIs. Instead of static automations, the system generates real-time coordination graphs based on devices, context, and user behavior.

Transcript
Starting point is 00:00:00 This audio is presented by Hacker Noon, where anyone can learn anything about any technology. AI agents are great at one thing at a time. Life isn't built that way. By Jon Stojan, journalist. Photo by Meredith Bradford, courtesy of Tethral. Your AI agent can turn on a light. It can check your calendar. It can set a timer. Each of those is a solved problem. But when you say, "leaving in five minutes," and the house needs to arm security, step down lighting, adjust climate to away mode, kill the media, and cue a morning state for when you get back, no single agent call handles that. You need coordinated execution across five or six protocols simultaneously, and that is where the current stack falls short. Where the protocols run out: MCP gives agents structured access to APIs.
Starting point is 00:00:47 A2A lets agents delegate to each other. OpenClaw is building open standards for translating agent skills into real-world activity. Each of these solves a real slice of the problem. But when the target is a Zigbee light, a Matter lock, a proprietary HVAC controller, and a calendar API that all need to respond as one coherent action from a single natural language intent, you need something that sits across all of them. The individual connections exist. The orchestration layer does not, or did not. Tethral is building one. It sits between user intent and a device landscape that includes major smart home ecosystems, common IoT radios, and web-based services, interpreting natural language and decomposing it into coordinated actions
Starting point is 00:01:30 across whatever protocols exist in a given environment. How it actually works. Execution is local-first. Automation logic runs on the home network rather than round-tripping through the cloud, which matters when coordination needs to happen in milliseconds across protocols with different latency profiles. Cloud round trips add enough variability that tightly timed sequences break down. Running locally keeps the timing deterministic. Intent decomposition is the interesting architectural choice. A command like "hosting tonight" is not a single API call. It is a coordination graph: lighting scenes across rooms, climate adjustments, audio routing,
Starting point is 00:02:08 notification suppression, possibly doorbell behavior changes. That graph is not predefined. It gets generated at runtime based on available devices, user history, and current environment state. Tethral does not run static automations. It composes them on the fly, which is a different architecture from scene-based systems like Home Assistant routines or IFTTT chains. Those are pre-configured sequences. This is runtime composition. Who built this? John Lunsford, Tethral's founder, came at this from the operations side before the research side. He worked as a security engineer with the Department of Justice, then moved into a senior AI and safety research role at a major
Starting point is 00:02:48 technology company, where he shipped consumer products and co-led the enterprise design partnership with OpenAI. He holds a PhD from Cornell with fellowships at MIT and Oxford, focused on autonomous system-to-society adoption. Beyond the platform itself, he designed Tethral's own transformer architecture and coordination protocol built specifically for multi-agent, multi-device orchestration, a new control plane rather than an adaptation of existing ones. He writes about the orchestration problem in more depth on Tethral's blog. His take is that the home is a convenient test environment, but the architecture is not home-specific. Orchestrating intent across five incompatible IoT protocols with local-first execution and runtime graph composition is a general coordination problem.
Starting point is 00:03:34 The home is just where you can build it with real users and real devices. Tethral has a working product and a partnership with the Connectivity Standards Alliance. Early stage, actively building. If you work on the boundary between AI reasoning and physical execution, it is worth a look. This story was distributed as a release by Jon Stojan under HackerNoon's Business Blogging Program. Thank you for listening to this HackerNoon story, read by artificial intelligence. Visit hackernoon.com to read, write, learn and publish.
