The Good Tech Companies - Why Teams Are Ditching DynamoDB
Episode Date: July 16, 2025. This story was originally published on HackerNoon at: https://hackernoon.com/why-teams-are-ditching-dynamodb. Discover why teams are moving from DynamoDB to ScyllaDB—for lower latency, reduced costs, multi-cloud flexibility, and better performance at scale. Check more stories related to cloud at: https://hackernoon.com/c/cloud. You can also check exclusive content about #scylladb-migration, #dynamodb-alternatives, #database-performance-at-scale, #reduce-cloud-database-costs, #multi-cloud-database-solution, #low-latency-database, #high-throughput-database, #good-company, and more. This story was written by: @scylladb. Learn more about this writer by checking @scylladb's about page, and for more stories, please visit hackernoon.com. Teams are leaving DynamoDB due to rising costs, latency issues, and AWS lock-in. ScyllaDB offers a DynamoDB-compatible API, lower latency, reduced costs, and multi-cloud flexibility. Case studies from Yieldmo, Digital Turbine, and a global streaming service show how ScyllaDB enables massive scale with fewer trade-offs and minimal migration effort.
Transcript
This audio is presented by Hacker Noon, where anyone can learn anything about any technology.
Why Teams Are Ditching DynamoDB, by ScyllaDB. Teams sometimes need lower latency, lower costs,
especially as they scale, or the ability to run their applications somewhere other than AWS.
It's easy to understand why so many teams have turned to Amazon DynamoDB since its introduction
in 2012. It's simple to get started, especially if your
organization is already entrenched in the AWS ecosystem. It's relatively fast and scalable,
with a low learning curve, and since it's fully managed, it abstracts away the operational effort
and know-how traditionally required to keep a database up and running in a healthy state.
But as time goes on, drawbacks emerge, especially as workloads scale and business requirements
evolve.
Teams sometimes need lower latency, lower costs, especially as they scale, or the ability
to run their applications somewhere other than AWS.
In those cases, ScyllaDB, which offers a DynamoDB-compatible API, is often selected as an alternative.
Let's explore the challenges that drove three teams to leave DynamoDB.
Multi-cloud flexibility and cost savings.
Yieldmo is an online advertising platform that connects publishers and advertisers
in real time using an auction-based system optimized with ML.
Their business relies on delivering ads quickly, within 200-300 milliseconds, and efficiently,
which requires ultra-fast, high-throughput database lookups at scale.
Database delays directly translate to lost business.
They initially built the platform on DynamoDB.
However, while DynamoDB had been reliable, significant limitations emerged as they grew.
As Todd Coleman, technical co-founder and chief architect, explained, their primary concerns were twofold: escalating costs and geographic restrictions.
The database was becoming increasingly expensive as they scaled, and it locked them into AWS, preventing true multi-cloud flexibility.
While exploring DynamoDB alternatives, they were hoping to find an option that would maintain speed, scalability, and reliability while reducing costs and providing cloud vendor
independence.
Yieldmo first considered staying with DynamoDB and adding a caching layer.
However, caching couldn't fix the geographic latency issue.
Cache misses would be too slow, making this approach impractical.
They also explored Aerospike, which offered speed and cross-cloud support.
However, Aerospike's in-memory indexing
would have required a prohibitively large
and expensive cluster to handle
Yieldmo's large number of small data objects.
Additionally, migrating to Aerospike
would have required extensive and time-consuming code changes.
Then they discovered ScyllaDB,
and ScyllaDB's DynamoDB-compatible API, Alternator,
was a game changer. Todd explained, ScyllaDB supported cross-cloud deployments,
required a manageable number of servers, and offered competitive costs. Best of all,
its API was DynamoDB-compatible, meaning we could migrate with minimal code changes.
In fact, a single engineer implemented the necessary modifications in just a few days.
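To make the "minimal code changes" point concrete, here is a minimal sketch of the kind of change involved, assuming a hypothetical Alternator endpoint and table name (not Yieldmo's actual setup): an application already using the AWS SDK for DynamoDB keeps the same calls and simply points the client at the ScyllaDB Alternator endpoint.

```python
import boto3

# Hypothetical endpoint and table names, shown only to illustrate the idea:
# the application keeps its existing DynamoDB SDK calls and just points the
# client at a ScyllaDB Alternator endpoint instead of AWS DynamoDB.
dynamodb = boto3.resource(
    "dynamodb",
    endpoint_url="http://scylla-alternator.example.internal:8000",  # Alternator's default port
    region_name="us-east-1",             # still required by the SDK
    aws_access_key_id="alternator",      # placeholder credentials; real values depend on
    aws_secret_access_key="alternator",  # how authorization is configured on the cluster
)

table = dynamodb.Table("ad_objects")  # hypothetical table name
item = table.get_item(Key={"object_id": "abc123"}).get("Item")
print(item)
```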
The migration process was carefully planned,
leveraging their existing Kafka message queue architecture to ensure data integrity.
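The article doesn't detail the mechanics, but one common way to use an existing Kafka pipeline for this kind of cutover is to have consumers dual-write each event to both the old and new stores while the new cluster is validated. A rough sketch under that assumption, with hypothetical topic, endpoint, and table names (this is a generic pattern, not a description of Yieldmo's actual pipeline):

```python
import json
import boto3
from kafka import KafkaConsumer  # kafka-python

# Hypothetical names throughout; this shows a generic dual-write pattern.
consumer = KafkaConsumer(
    "ad-events",
    bootstrap_servers=["kafka.example.internal:9092"],
    value_deserializer=lambda v: json.loads(v),
)

dynamo_table = boto3.resource("dynamodb", region_name="us-east-1").Table("ad_objects")
scylla_table = boto3.resource(
    "dynamodb",
    endpoint_url="http://scylla-alternator.example.internal:8000",
    region_name="us-east-1",
).Table("ad_objects")

for message in consumer:
    item = message.value
    # Write every event to both stores during the migration window so the
    # new cluster can be verified against the old one before cutover.
    dynamo_table.put_item(Item=item)
    scylla_table.put_item(Item=item)
```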
They conducted two proof-of-concept (POC) tests,
first with a single table of 28 billion objects, and then across all five AWS regions.
The results were impressive, Todd shared, our database costs were cut in half, even
with DynamoDB reserved capacity pricing, and beyond cost savings, Yieldmo gained the flexibility
to potentially deploy across different cloud providers.
Their latency improved, and ScyllaDB was as simple to operate as DynamoDB.
Wrapping up, Todd concluded, one of our initial concerns was moving away from DynamoDB's
proven reliability.
However, ScyllaDB has been an excellent partner.
Their team provides monitoring of our clusters, alerts us to potential issues, and advises
us when scaling is needed. In terms of ongoing maintenance overhead,
the experience has been comparable to DynamoDB,
but with greater independence and substantial cost savings.
Hear from Yieldmo. Migrating to GCP with better performance and lower costs.
Digital Turbine, a major player in mobile ad tech with $500 million in annual revenue,
faced growing challenges with its DynamoDB implementation.
While its primary motivation for migration was standardizing on Google Cloud Platform
following acquisitions, the existing DynamoDB solution had been causing both performance
and cost concerns at scale.
It can be a little expensive as you scale, to be honest, explained Joseph Shorter, vice
president of platform architecture at Digital Turbine.
We were finding some performance issues.
We were doing a ton of reads,
90% of all interactions with DynamoDB were read operations.
With all those operations,
we found that the performance hits required us to scale up
more than we wanted, which increased costs.
Digital Turbine needed the migration to be as fast
and low risk as possible,
which meant keeping application refactoring to a minimum. The main concern, according to Shorter, was, how can we migrate
without radically refactoring our platform, while maintaining at least the same performance and
value, and avoiding a crash and burn situation? Because if it failed, it would take down our whole
company. After evaluating several options, Digital Turbine moved to ScyllaDB and
achieved immediate improvements. The migration took less than a sprint to implement, and the
results exceeded expectations. A 20% cost difference, that's a big number no matter
what you're talking about, Shorter noted. And when you consider our plans to scale even
further, it becomes even more significant. Beyond the cost savings, they found themselves barely tapping the ScyllaDB clusters, suggesting
room for even more growth without proportional cost increases.
Hear from Digital Turbine. High write throughput with low latency and lower costs.
The user-state and customizations team for one of the world's largest media streaming
services had been using DynamoDB for several years. As they were rearchitecting two existing use cases, they wondered if it was time for a database change. The two use cases were:
Pause/resume: if a user is watching a show and pauses it, they can pick up where they left off, on any device, from any location.
Watch state: using that same data, determine whether the user has watched the show.
Here's a simple architecture diagram.
Every 30 seconds,
the client sends heartbeats
with the updated playhead position of the show
and then sends those events to the database.
The edge pipeline loads events
in the same region as the user,
while the authority (auth) pipeline combines events
for all five regions that the company serves.
Finally, the data has to be fetched and served back to the client to support playback.
Note that the team wanted to preserve separation between the auth and edge regions,
so they weren't looking for any database-specific replication between them.
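To make this write path concrete: the requirements below include conditional updates based on event timestamps, so a heartbeat write through the DynamoDB-style API could look roughly like the following minimal sketch. The table and attribute names are hypothetical, not the team's actual schema.

```python
import time
import boto3
from botocore.exceptions import ClientError

# Hypothetical table and attribute names; the same call works against
# DynamoDB or a DynamoDB-compatible endpoint such as ScyllaDB Alternator.
table = boto3.resource("dynamodb", region_name="us-east-1").Table("playback_state")

def record_heartbeat(user_id: str, content_id: str, position_ms: int) -> None:
    """Upsert the playhead position, but only if this event is newer than
    what is already stored (a conditional update keyed on the event timestamp)."""
    event_ts = int(time.time() * 1000)
    try:
        table.update_item(
            Key={"user_id": user_id, "content_id": content_id},
            UpdateExpression="SET position_ms = :p, updated_at = :t",
            # Accept the write only when no row exists yet or the stored event is older.
            ConditionExpression="attribute_not_exists(updated_at) OR updated_at < :t",
            ExpressionAttributeValues={":p": position_ms, ":t": event_ts},
        )
    except ClientError as e:
        # A failed condition means a newer heartbeat already landed; that is
        # expected for out-of-order events and safe to ignore.
        if e.response["Error"]["Code"] != "ConditionalCheckFailedException":
            raise
```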
The two main technical requirements for supporting this architecture were:
to ensure a great user experience, the system had to remain highly available,
with low latency reads and the ability
to scale based on traffic surges.
To avoid extensive infrastructure setup or DBA work,
they needed easy integration with their AWS services.
Once those boxes were checked,
the team also hoped to reduce overall cost.
Our existing infrastructure had data spread across various clusters of DynamoDB and ElastiCache,
so we really wanted something simple that could combine these into a much lower-cost
system, explained their backend engineer.
Specifically, they needed a database with:
Multi-region support, since the service was popular across five major geographic regions.
The ability to handle over 170K writes per second. Updates didn't have a strict service level agreement (SLA), but the system needed to perform conditional updates based on event timestamps.
The ability to handle over 78K reads per second with a P99 latency of 10 to 20 milliseconds. The use case involved only simple point queries, so things like indexes, partitioning, and complicated query patterns weren't a primary concern.
Around 10 terabytes of data, with room for growth.
Why move from DynamoDB?
According to their back-end engineer, DynamoDB could support our technical requirements perfectly.
But given our data size and high, write-heavy throughput, continuing with DynamoDB would
have been like shoveling money into the fire.
Based on their requirements for write performance and cost, they decided to explore ScyllaDB.
For a proof of concept, they set up a ScyllaDB Cloud test cluster with 6 AWS i4i.4xlarge
nodes and preloaded the cluster with 3 billion records.
They ran combined loads of 170k writes per second and 78k reads per second.
And the results?
We hit the combined load with zero errors.
Our P99 read latency was 9 milliseconds, and the write latency was less than 1 millisecond.
These low latencies, paired with significant cost savings, over 50%, convinced them to
leave DynamoDB.
Beyond lower latencies at lower cost, the team also appreciated the following aspects
of ScyllaDB.
ScyllaDB's performance-focused design, being built on the Seastar framework, using C++, being NUMA-aware, offering shard-aware drivers, etc., helps the team reduce maintenance time and costs.
Incremental Compaction Strategy helps them significantly reduce write amplification.
Flexible consistency levels and replication factors help them support separate auth and edge pipelines. For example, auth uses quorum consistency while edge uses a consistency level of 1, due to the data duplication and high throughput.
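As a rough illustration of how per-pipeline consistency levels can be expressed, here is a minimal sketch using the Python driver for ScyllaDB/Cassandra (CQL), with hypothetical contact points, keyspace, and table names; the team's actual code and data model aren't described here.

```python
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

# Hypothetical contact point, keyspace, and table names.
cluster = Cluster(["scylla.example.internal"])
session = cluster.connect("user_state")

# updated_at is assumed to be stored as epoch milliseconds (bigint).
INSERT_CQL = (
    "INSERT INTO playheads (user_id, content_id, position_ms, updated_at) "
    "VALUES (%s, %s, %s, %s)"
)

# Auth pipeline: QUORUM trades some latency for stronger consistency
# across replicas when combining events from all regions.
auth_write = SimpleStatement(INSERT_CQL, consistency_level=ConsistencyLevel.QUORUM)

# Edge pipeline: CL=ONE favors throughput and low latency; the duplicated
# data makes the weaker consistency acceptable.
edge_write = SimpleStatement(INSERT_CQL, consistency_level=ConsistencyLevel.ONE)

session.execute(auth_write, ("user-123", "show-456", 1830000, 1752672000000))
session.execute(edge_write, ("user-123", "show-456", 1830000, 1752672000000))
```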
Their back-end engineer concluded, choosing a database is hard.
You need to consider not only features, but also costs.
Serverless is not a silver bullet, especially in the database domain.
In our case, due to the high throughput and latency requirements, DynamoDB Serverless
was not a great option.
Also, don't underestimate the role of hardware.
Better utilizing the hardware is key to reducing costs while improving performance.
Learn more. Is your team next?
If your team is considering a move from DynamoDB, ScyllaDB might be an option to explore.
Sign up for a technical consultation to talk more about your use case, SLAs, technical
requirements and what you're hoping to optimize.
We'll let you know if ScyllaDB is a good fit and, if so, what a migration might involve in terms of application changes, data modeling, infrastructure and so on. Bonus: here's a quick look at how
ScyllaDB compares to DynamoDB. Written by Guilherme da Silva Nogueira and Felipe Cardeneti Mendes.
Thank you for listening to this Hacker Noon story, read by Artificial Intelligence.
Visit HackerNoon.com to read, write, learn and publish.
