The Good Tech Companies - Make it Rain: How Repatriating Your Public Cloud Workload Can Save You Millions
Episode Date: May 27, 2024. This story was originally published on HackerNoon at: https://hackernoon.com/make-it-rain-how-repatriating-your-public-cloud-workload-can-save-you-millions. A high performance, cloud-native object store offers you economic benefits, performance benefits, control benefits - and they compound with scale. Check more stories related to cloud at: https://hackernoon.com/c/cloud. You can also check exclusive content about #minio, #minio-blog, #cloud, #data, #data-storage, #cloud-native-object-store, #public-cloud-workload, #good-company, and more. This story was written by: @minio. Learn more about this writer by checking @minio's about page, and for more stories, please visit hackernoon.com.
Transcript
This audio is presented by Hacker Noon, where anyone can learn anything about any technology.
Make it rain. How repatriating your public cloud workload can save you millions,
by MinIO. The phenomenon of the public cloud is difficult to get your arms around.
Since AWS kicked it off early in the century, it has grown and evolved into a modern computing
platform, creating the cloud operating model as we know it. Ironically, this standardization
around the cloud as an operating model is one of the reasons that cloud growth has stagnated.
The things that were unique to the platform (the elasticity, tools like Kubernetes, software,
SaaS, application ecosystems and modern, high-performance object storage) are now
available everywhere, from the edge to the core. Another reason that growth
is stagnating is that CFOs have stepped in. Early in the cycle, the CTO/CIO won the argument for
productivity gains. Indeed, going to the cloud to learn the skill of the cloud is something we have
long recommended. But over time, it became clear to the organization that the cloud was very,
very expensive.
This led to the rise of cloud finops. When the ZIRP era ended, the scrutiny on cloud spend increased significantly and the CFO inserted themselves into the conversation. This depressed
growth, even in the face of a generative AI boom that ostensibly drove usage up, because the hyperscalers
had all of the GPU inventory. Here is AWS's growth rate (sad face). Edit:
Following the publishing of this post Amazon posted earnings that returned AWS growth to 17%.
What doesn't get a lot of press, but should, is the emerging nature of the CFO's decision making
and the partnership of CTO, CIO and CFO. Because the cloud operating model runs anywhere and is a proven concept, the CTO, CIO,
and CFO are free to look at colos and on-prem alternatives to the cloud. What they have
realized is that the more you have on the cloud, the bigger the impact on the organization when
you repatriate that workload to a colo or your own private cloud. Here it is visually. These
are massive savings and they get bigger the more you repatriate.
We have one customer, a leader in the cloud workload and endpoint security,
threat intelligence, and cyberattack response space.
They repatriated 500 petabytes of a threat intelligence workload to an Equinix colo.
The savings generated by that effort improved the gross margin of the entire business by between 2 and 3 percent. For a company
with tens of billions of dollars in market cap, this is a massive improvement in the valuation
of the business. Needless to say, they doubled the repatriation goals and are now north of an exabyte.
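To see why a couple of points of gross margin translate into real valuation, here is a purely hypothetical back-of-the-envelope sketch in Python. The revenue figure and the multiple are made-up placeholders for illustration, not figures for the customer described above.

```python
# Hypothetical illustration only: revenue and multiple are invented
# placeholders, not figures for the customer described above.
revenue = 3_000_000_000        # assume $3B in annual revenue
margin_uplift = 0.025          # mid-point of the 2-3 point gross margin improvement

extra_gross_profit = revenue * margin_uplift   # recurring, every year
multiple = 15                  # assumed enterprise-value multiple on gross profit
implied_uplift = extra_gross_profit * multiple

print(f"Extra gross profit per year: ${extra_gross_profit / 1e6:,.0f}M")
print(f"Implied valuation uplift:    ${implied_uplift / 1e9:,.1f}B")
```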
This is just one example. One of the biggest streaming companies took a similar approach,
moving their workload back from AWS. They reduced their costs by
more than 50%. Let's explore what the core drivers of the savings are. Reduction in data transfer and
egress costs: this ranges from nearly 70% of overall costs on smaller workloads to 20%
for 100 petabytes. Even though the percentage drops at the 100 petabyte level, the number
is still material at nearly $3 million per year.
You can't negotiate that number much.
With MinIO, that number is zero.
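As a rough illustration of how that egress line item scales, here is a minimal Python sketch. The per-GB rate and the assumption that about 5% of the stored footprint leaves the cloud each month are illustrative placeholders, not quoted AWS pricing.

```python
# Back-of-the-envelope egress estimate. The rate and the 5%-per-month egress
# ratio are assumptions for illustration, not quoted AWS pricing.
def annual_egress_cost(stored_pb, monthly_egress_ratio=0.05, dollars_per_gb=0.05):
    gb_stored = stored_pb * 1_000_000              # decimal PB -> GB
    monthly_egress_gb = gb_stored * monthly_egress_ratio
    return monthly_egress_gb * dollars_per_gb * 12

for pb in (1, 10, 100):
    print(f"{pb:>4} PB stored -> ~${annual_egress_cost(pb):,.0f}/yr in egress")
# Under these assumptions, 100 PB lands near the ~$3M/yr figure cited above.
```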
The cost of S3 isn't small either.
Yes, you can tier to lower cost options and AWS continues to make that more attractive,
but if you have to pull it out, the penalties are significant. At the 100 petabyte
level, one would expect to be paying around $33 million a year. With MinIO, that number is going
to be $4.3M. You can check our math here. Sure, you have to buy your own HW. We think you can get
top-of-the-line NVMe kit for $5 million, but it is a drop in the bucket: we are at less than $9.4M for HW and SW.
Add in some colo charges of $187,000 and your total is around $9.5M. So, would you rather
spend $33 million a year, or spend $9.5M in year one and $5.1M in each of the remaining four years ($4.3M plus 20% HW replacement at $860,000
a year)? Your 5-year cost is $166 million with AWS. Your MinIO plus COTS plus colo cost is
$30 million. Those are real savings. More importantly, they are proven savings as evidenced
by the teams at 37signals, X, and Ahrefs. For clarity, these savings are not powered by MinIO.
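If you want to rerun the 5-year arithmetic yourself, here is a minimal Python sketch using the figures quoted above. The dollar amounts are this post's own estimates rather than fresh quotes, so treat it as a template for your own numbers.

```python
# 5-year comparison using the post's own estimates (all figures in $M).
AWS_ANNUAL = 33.0     # ~100 PB on S3, per year
MINIO_SW   = 4.3      # MinIO software, per year
HW_CAPEX   = 5.0      # top-of-the-line NVMe kit, purchased in year one
COLO       = 0.187    # colo charges, year one
HW_REFRESH = 0.86     # ~20% annual hardware replacement, per the post

year_one    = MINIO_SW + HW_CAPEX + COLO     # ~9.5
later_years = MINIO_SW + HW_REFRESH          # ~5.1 per year

aws_5yr   = 5 * AWS_ANNUAL                   # ~165 (the post rounds to ~166)
minio_5yr = year_one + 4 * later_years       # ~30

print(f"AWS, 5 years:         ${aws_5yr:.0f}M")
print(f"MinIO + COTS + colo:  ${minio_5yr:.0f}M")
print(f"Savings:              {1 - minio_5yr / aws_5yr:.0%}")
```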
When pulling out of the cloud saves you 60% or more, it's time to get smart about what you're
building and where you're running it. More importantly, 100 petabytes isn't that much
data in the age of AI. It is more like the unit you should be thinking about.
10 units is what some of our most sophisticated customers buy. We have others with 20 units. They are posting on LinkedIn about the teams they are building around MinIO. It is truly exciting stuff. It is not just about straight economics, either. There are other arguments that drive these massive private clouds. TCO is one.
MinIO's legendary simplicity means that enterprises
can manage these exascale deployments with just a handful of resources. Our enterprise object store
features were built expressly for them. Things like observability for tens of thousands of drives,
or key management for billions of objects, or caching for hundreds of servers. These are all
designed to simplify what can become very complicated.
Performance is another, and specifically performance at scale. It is easy to be fast
at 200 terabytes. It is hard to be fast at 2 EBs. That is, unless you are architected to do so.
Furthermore, enterprises want to be multi-modal on the application side at that scale.
That means AI, advanced analytics, application workloads and,
yes, the tried and true archival workloads. Control is a third. Enterprises
that want full control over their stack run it in a colo or on their private cloud.
Don't want your cloud provider looking in your buckets? Don't run on the public cloud.
Don't want your cloud provider's security complexity but still desire
that level of comfort? Run privately. MinIO and Equinix, for example, are the cloud you control.
Data sovereignty is another bullet here. The overall point is that enterprises do not
make decisions solely on costs. There are other considerations and if those considerations are
not met, they will not make the move. A high-performance, cloud-native object store offers you economic benefits,
performance benefits, control benefits, and they compound with scale.
If you want to learn more and take advantage of our value engineering function to run your own
models, reach out to us at hello@min.io and we can start the conversation.
Thank you for listening to this HackerNoon story, read by Artificial Intelligence. Visit hackernoon.com to read, write, learn and publish.