The Good Tech Companies - Engineering at Scale: From Search Systems to AI-Native Platforms and Data Products
Episode Date: December 31, 2025. This story was originally published on HackerNoon at: https://hackernoon.com/engineering-at-scale-from-search-systems-to-ai-native-platforms-and-data-products. Sai's early industry roles involved building and leading search and recommendation systems at large Indian e-commerce platforms, including Myntra and Zomato. Check more stories related to machine-learning at: https://hackernoon.com/c/machine-learning. You can also check exclusive content about #ai, #ai-native-platforms, #data-products, #sai-sreenivas-kodur, #machine-learning, #iit-madras-alumni, #myntra, #good-company, and more. This story was written by: @daniel-mercer. Learn more about this writer by checking @daniel-mercer's about page, and for more stories, please visit hackernoon.com.
Transcript
This audio is presented by Hacker Noon, where anyone can learn anything about any technology.
Engineering at scale from search systems to AI native platforms and data products by Daniel Mercer.
Every system changes once it reaches a particular scale. Traffic grows unevenly,
assumptions stop holding, and design decisions that once felt minor begin to shape everything that
follows. This article traces the engineering career of Sai Sreenivas Kodur, from building
large-scale search and recommendation systems in e-commerce to leading enterprise AI platforms and domain-specific
data products. Along the way, it looks at how working at scale shifts an engineer's focus from
individual components to platform foundations, data workflows, and team structures, especially as AI
changes how software is built. Early foundations in systems and machine learning.
Sai Sreenivas Kodur completed both his bachelor's and master's degrees in computer science and engineering at the Indian Institute of Technology, Madras.
During his undergraduate and graduate studies, he focused on compilers and machine learning.
His research explored how machine learning techniques could be applied to improve software performance across heterogeneous hardware environments.
This work required thinking across layers.
Performance was treated as a system-level outcome shaped by algorithms, execution models, and hardware constraints
working together. Small implementation choices often produced large downstream effects.
The academic environment emphasized rigorous reasoning and first principles thinking.
By the end of graduate school, the most durable outcome of this training was not familiarity
with specific tools, but the ability to learn new systems deeply and adapt to changing technical
contexts. Search and recommendation systems at scale.
Sai's early industry roles involved building and leading search and recommendation systems
at large Indian e-commerce platforms, including Myntra and Zomato.
These systems supported indexing, retrieval, and ranking across catalogs of more than 1 million
frequently changing items. They handled approximately 300,000 requests per minute. At this scale,
system behavior reflected multiple competing constraints. Index freshness had to be balanced
against latency requirements. Ranking quality depended on data pipelines, infrastructure reliability,
and model behavior operating together.
Many issues surfaced only after deployment:
design decisions that appeared correct in isolation
behaved differently once exposed to real traffic patterns,
delayed signals, and uneven load distribution.
This work reinforced the importance of aligning technical design
with product usage patterns.
Improvements in relevance or performance required coordination
across distributed systems, data ingestion,
and application behavior rather than isolated changes
to individual components.
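The 300,000 requests-per-minute figure invites a quick capacity sanity check. The sketch below converts it to per-second load and sizes a hypothetical replica count; the per-replica throughput and headroom fraction are illustrative assumptions, not details from the article.

```python
# Back-of-envelope capacity math for a search cluster at the scale
# described above. Per-replica throughput and headroom are assumed
# values chosen for illustration only.
import math

def replicas_needed(requests_per_minute: int,
                    per_replica_rps: float,
                    headroom: float = 0.5) -> int:
    """Replicas required to serve the load at (1 - headroom) utilization."""
    rps = requests_per_minute / 60           # convert to requests per second
    usable = per_replica_rps * (1 - headroom)
    return math.ceil(rps / usable)

rps = 300_000 / 60
print(f"steady-state load: {rps:.0f} req/s")          # 5000 req/s
print(replicas_needed(300_000, per_replica_rps=500))  # 20 replicas at 50% headroom
```

The headroom term is the interesting part: uneven load distribution, as described above, means a cluster sized for the average rate alone will fall over during traffic spikes.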
Startup environments and broader engineering exposure.
Early in his career, Sai chose to work primarily in startup environments.
These roles offered exposure to a wide range of engineering responsibilities,
including system design, production operations,
and close collaboration with product and business teams.
Technical decisions were closely tied to customer requirements and operational constraints.
In these settings, the effects of architectural choices surfaced quickly.
Systems with weak foundations required frequent rework as usage increased.
Systems built with precise abstractions and reliable pipelines were easier to
extend over time. This experience broadened his perspective on engineering. Systems were defined
not only by code and infrastructure, but also by how teams worked, how decisions were made, and how
platforms were maintained as they grew. Building Food Intelligence Systems at Spoonshot.
Sai later co-founded Spoonshot and served as its chief technology officer. Spoonshot focused on building a data
intelligence platform for the food and beverage industry. The core system, Food Brain,
combined more than 100 terabytes of alternative data from over 30,000 sources with AI
models and domain-specific food knowledge. This foundation powered Genesis, a product used by
global food brands such as PepsiCo, Coca-Cola, and Heinz to support innovation and product
development decisions. Building Food Brain involved working with noisy data sources, evolving domain
requirements and enterprise reliability expectations. The system needed to accommodate changing inputs
without frequent architectural changes. Under Sai's technical leadership, Spoonshot raised over
$4 million in venture funding and scaled to a team of more than 50 across the U.S. and India.
During this period, he introduced data-centric AI practices by creating a dedicated data
operations function alongside the data science team. This reduced the turnaround time for new model
development by 60% while maintaining accuracy above 90%. Enterprise AI platforms and reliability.
Sai later served as director of engineering at Observe AI, where he led platform engineering,
analytics, and enterprise product teams. The platform supported enterprise customers such
as DoorDash, Uber, Swiggy, and Asurion. These customers had strict expectations around
reliability, performance, and operational visibility. Scaling the platform to support a tenfold
increase in usage required changes across infrastructure, data ingestion pipelines, and
observability practices. These efforts contributed to more than $15 million in additional
annual recurring revenue. Alongside technical scaling, Sai focused on building engineering
leadership capacity. He helped define hiring frameworks, conducted over 130 interviews,
and hired senior engineering leaders to support long-term platform development. This phase highlighted
how organizational structure influences system outcomes, as platforms grow more complex,
coordination, ownership, and decision-making processes become part of the technical system.
From systems engineering to AI-native teams. Across roles, Sai maintained hands-on involvement
while gradually expanding into broader technical leadership responsibilities. His focus increasingly
shifted toward platform foundations and workflows that allow teams to work effectively with complex data
and AI systems. Mentorship of senior engineers and investment in precise abstractions became essential
parts of this work. His research publications reflect this practical focus. Papers such as
Genesis, Food Innovation Intelligence, and DebugMate, an AI agent for efficient on-call debugging
in complex production systems, examined how AI can support product and engineering workflows.
DebugMate demonstrated a 77% reduction in on-call load by assisting engineers with incident
triage using observability data and system context.
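The article does not describe DebugMate's internals, but the general pattern it names, ranking incidents by severity signals drawn from observability data, can be sketched. Everything below (the Incident fields, scoring weights, and example services) is a hypothetical illustration, not the paper's actual method.

```python
# Illustrative sketch of automated incident triage in the spirit of the
# DebugMate description above. Fields and weights are invented for the
# example; a real agent would also consult system context and logs.
from dataclasses import dataclass

@dataclass
class Incident:
    service: str
    error_rate: float       # fraction of failing requests, 0..1
    latency_ms_p99: float   # tail latency from observability data
    customer_facing: bool

def triage_score(inc: Incident) -> float:
    """Higher score means the incident should be looked at first."""
    score = inc.error_rate * 100 + inc.latency_ms_p99 / 100
    if inc.customer_facing:
        score *= 2  # user-visible failures jump the queue
    return score

incidents = [
    Incident("billing", 0.02, 1200, True),
    Incident("batch-etl", 0.30, 400, False),
]
ordered = sorted(incidents, key=triage_score, reverse=True)
print([i.service for i in ordered])  # ['batch-etl', 'billing']
```

Even this toy version shows why such a tool reduces on-call load: the ranking is computed from signals engineers would otherwise gather by hand before deciding where to look first.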
Long-term engineering foundations. Looking across Sai Sreenivas Kodur's career,
a consistent theme is an emphasis on building systems that remain reliable as complexity increases.
As AI accelerates software development, this focus becomes more critical,
especially when teams begin building truly AI-native software rather than layering AI onto
existing architectures.
AI agents introduce new workloads and different patterns of system usage.
Data and infrastructure platforms originally designed for human users must adapt to support these
changes. Rather than focusing on individual productivity gains, this work centers on platform
foundations, data workflows, and team structures that can scale over time. The career reflects
an engineering approach grounded in clarity, durability, and long-term impact.
Sai Sreenivas Kodur. Image: LinkedIn. Thank you for listening to this HackerNoon story, read by
artificial intelligence. Visit hackernoon.com to read, write, learn and publish.
