Screaming in the Cloud - The Rapid Rise of Vector Databases with Ram Sriharsha
Episode Date: December 2, 2022
About Ram: Dr. Ram Sriharsha has held engineering, product management, and VP roles at the likes of Yahoo, Databricks, and Splunk. At Yahoo, he was both a principal software engineer and then research scientist; at Databricks, he was the product and engineering lead for the unified analytics platform for genomics; and in his three years at Splunk, he played multiple roles including Sr. Principal Scientist, VP Engineering, and Distinguished Engineer.
Links Referenced:
Pinecone: https://www.pinecone.io/
XKCD comic: https://www.explainxkcd.com/wiki/index.php/1425:_Tasks
Transcript
Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at the
Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
This episode is sponsored in part by our friends at Chronosphere.
Tired of observability costs going up every year without getting additional value?
Or being locked into a vendor due to proprietary data collection, querying, and visualization?
Modern-day containerized environments require
a new kind of observability technology that accounts for the massive increase in scale
and attendant cost of data. With Chronosphere, choose where and how your data is routed and
stored, query it easily, and get better context and control. 100% open-source compatibility means
that no matter what your setup is, they can help.
Learn how Chronosphere provides complete and real-time insight to ECS, EKS, and your microservices,
wherever they may be, at snark.cloud slash chronosphere. That's snark.cloud slash chronosphere.
This episode is brought to you in part by our friends at Veeam. Do you care about backups?
Of course you don't.
Nobody cares about backups.
Stop lying to yourselves.
You care about restores, usually right after you didn't care enough about backups.
If you're tired of the vulnerabilities, costs, and slow recoveries when using snapshots to
restore your data, assuming that you even have them at all, living in AWS land, there's
an alternative for you. Check out Veeam. That's V-E-E-A-M for secure, zero-fuss AWS backup
that won't leave you high and dry when it's time to restore. Stop taking chances with
your data. Talk to Veeam. My thanks to them for sponsoring this ridiculous podcast.
Welcome to Screaming in the Cloud. I'm Corey Quinn. Today's promoted guest episode is brought
to us by our friends at Pinecone, and they've given their VP of Engineering and R&D over to
suffer my various slings and arrows, Ram Sriharsha. Ram, thank you for joining me.
Corey, great to be here. Thanks for having me.
So I was immediately intrigued when I wound up seeing your website, pinecone.io, because it says
right at the top, at least as of this recording, in bold text, the vector database. And if there's
one thing that I love, it is using things that are not designed to be databases as databases, or inappropriately referring to things, be they JSON files or senior engineers, as databases as well.
What is a vector database?
That's a great question.
And we do use this term correctly, I think.
You can think of customers of Pinecone as having all the data management problems that they have with traditional databases.
The main difference is twofold.
One is there is a new data type, which is vectors.
Vectors, you can think of them as arrays of floats, floating point numbers.
And there is a new pattern of use cases, which is search.
And what you're trying to do in vector search is you're looking for the nearest, the closest vectors to a given query.
So these two things fundamentally put a lot of stress on traditional databases.
So it's not like you can take a traditional database and make it into a vector database.
That is why we coined this term vector database and we're building a new type of vector database.
But fundamentally, it has all the database challenges on a new type of data and a new query pattern.
Can you give me an example of what, I guess, an idealized use case would be, of what the data set might look like and what sort of problem you would have that a vector database would solve?
Very great question.
So one interesting thing is there's many, many use cases.
I'll just pick the most natural one, which is text search.
So if you're familiar with Elastic or any of the traditional
text search engines, you have pieces of text, you index them, and the indexing that you do is
traditionally an inverted index, and then you search over this text. And what this sort of a
search engine does is it matches for keywords. So if it finds a keyword match between your query and
your corpus,
it's going to retrieve the relevant documents. And this is what we call a text search, right?
Or keyword search. You can do something similar with technologies like Pinecone.
But what you do here is instead of searching over text, you're searching over vectors. Now,
where do these vectors come from? They come from taking deep learning models,
running your text through them, and these generate these things called vector embeddings.
And now you're taking your query as well,
running them through deep learning models,
generating these query embeddings,
and looking for the closest vector embeddings
in your corpus that are similar to the query embeddings.
This notion of proximity in this space of vectors tells you something about semantic similarity between the query and the text.
So suddenly you're going
beyond keyword search
into semantic similarity.
An example is
if you had a whole lot of text data
and maybe you were looking for soda
and you were doing keyword search,
keyword search will only match
on variations of soda.
It'll never match Coca-Cola
because Coca-Cola and soda have nothing to do with each other. Or Pepsi or pop, as they say in the
American Midwest. Exactly. However, semantic search engines can actually match the two because they're
matching for intent, right? If they find in this piece of text enough intent to suggest that soda
and Coca-Cola or Pepsi or pop are related to each other, they will actually match those and score them higher.
And you're very likely to retrieve those sort of candidates
that traditional search engines simply cannot.
So this is like a canonical example,
what's called semantic search,
and it's known to be done better
by these sort of vector search engines.
There are also other examples in say image search.
If you're just looking for near-duplicate images, you can't even do this today without a technology like vector search.
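To make the soda example concrete, here's a minimal sketch of semantic search, assuming the sentence-transformers library and the all-MiniLM-L6-v2 model (both are illustrative choices, not anything named in the episode):

```python
# A minimal semantic-search sketch: embed a corpus and a query with a
# pre-trained model, then rank by cosine similarity. The model choice
# is an assumption for illustration.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

corpus = [
    "Coca-Cola and Pepsi dominate the soft drink market.",
    "The hiking trail closes at sunset.",
    "Grab a cold pop from the cooler.",
]
query = "soda"

# Normalized embeddings make the dot product equal to cosine similarity.
corpus_vecs = model.encode(corpus, normalize_embeddings=True)
query_vec = model.encode([query], normalize_embeddings=True)[0]

# Higher score means semantically closer to the query.
for text, score in sorted(zip(corpus, corpus_vecs @ query_vec),
                          key=lambda pair: -pair[1]):
    print(f"{score:.3f}  {text}")
```

A keyword engine would score all three sentences at zero, since the literal string "soda" appears in none of them; the embedding model can still rank the soft-drink sentences on top.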
What is the, I guess, translation or conversion process of an existing data set into something that a vector database could use?
Because you mentioned that an array of floats was the natural vector data type. I don't think I've ever seen even the most arcane markdown implementation that expected people to wind up writing in arrays of floats.
What does that look like? How do you wind up, I guess, internalizing or ingesting existing
bodies of text for your example use case? Yeah, this is a very great question. This used to be
a very hard problem. And what has happened over the last several years in deep learning literature, as well as in deep learning as a field itself, is that there have been these large publicly trained models. Examples would be OpenAI, examples would be the models that are available in Hugging Face or Cohere. A large number of these companies have come forward with very well trained models through which you can pass pieces of text and get these vectors. So you no longer have to actually train these
sort of models. You don't have to really have the expertise to deeply figure out how to take pieces
of text and build these embedding models. What you can do is just take a stock model. If you're
familiar with OpenAI, you can just go to OpenAI's homepage and pick a model that works for you, or to Hugging Face's models, and so on. There's a lot of literature to help you do this.
Sophisticated customers can also do something called fine-tuning, which is build on top of
these models to fine-tune for their use cases. The technology is out there already. There's a
lot of documentation available. Even Pinecone's website has plenty of documentation to do this.
Customers of Pinecone
do this today, which is they take pieces of text, run them through either these pre-trained models
or through fine-tuned models, get these arrays of floats which represent them, vector embeddings,
and then send it to us. So that's the workflow. The workflow is basically a machine learning
pipeline that either takes a pre-trained model and passes these pieces of text or images or what have you through it, or actually has a fine-tuning step.
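As a rough sketch of that workflow, the code below embeds a couple of documents with a pre-trained model and upserts the resulting vectors to Pinecone. The index name and credentials are placeholders, and the Pinecone client API has changed across versions, so treat the exact calls as illustrative rather than definitive:

```python
# Sketch of the embed-then-upsert pipeline described here. Index name,
# API key, environment, and model are placeholders/assumptions.
import pinecone
from sentence_transformers import SentenceTransformer

pinecone.init(api_key="YOUR_API_KEY", environment="us-west1-gcp")
index = pinecone.Index("semantic-search-demo")  # hypothetical index

model = SentenceTransformer("all-MiniLM-L6-v2")
documents = {
    "doc-1": "Coca-Cola reported strong quarterly sales.",
    "doc-2": "The new trail map is available online.",
}

# Each record is (id, vector, metadata); the vector is the embedding.
index.upsert(vectors=[
    (doc_id, model.encode(text).tolist(), {"text": text})
    for doc_id, text in documents.items()
])

# Queries follow the same pattern: embed the query text, then ask the
# index for the nearest stored vectors.
results = index.query(vector=model.encode("soft drinks").tolist(),
                      top_k=2, include_metadata=True)
```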
Is that ingest process something that not only benefits from, but also requires the use of a GPU or something similar to that to wind up doing the in-depth, very specific type of expensive math for data ingestion?
Yes, very often these run on GPUs. Sometimes, depending on budget, you may have compressed models or smaller models that run on CPUs, but most often they do run on GPUs. Most often we actually find people make just API calls to services that do this for them.
So very often people are actually not deploying these GPU models themselves. They are maybe
making a call to Hugging Face's service or to OpenAI's service and so on.
And by the way, these companies have also democratized this quite a bit.
It was much, much harder to do this before they came around.
Oh, yeah. I'm reminded of the old XKCD comic from years ago, which was, okay, I want to give you a picture, and I want you to tell me whether it was taken within the boundaries of a national park.
Like, sure, easy enough.
Geolocation information's attached.
It'll take me two hours.
Cool.
And I also want you to tell me if it's a picture of a bird.
Okay, that'll take five years and a research team.
And sure enough, now we can basically do that.
The future is now. It's kind of wild to see that unfolding in a human perceivable time span on these things.
But I guess my question now is,
so that is what a vector database does. What does Pinecone specifically do? It turns out that
as much as I wish it were otherwise, not a lot of companies are founded on, well, we have this
really neat technology, so we're just going to be here more in a foundational sense to wind up
assuring the uptake of that technology. No,
no, there's usually a monetization model in there somewhere. Where does Pinecone start? Where does
it stop? And how does it differentiate itself from typical vector databases, if such a thing
could be said to exist yet? Such a thing doesn't exist yet. We were the first vector database.
So in a sense, building this infrastructure, scaling it and making it easy for people to
operate it in a SaaS fashion is our primary core product offering. On top of that, we very recently started also
enabling people who actually have raw text to not just be able to get value from these vector
search engines and so on, but also be able to take advantage of what we call traditional keyword search, or sparse retrieval, and do a combined search better in Pinecone.
So there's value add on top of this that we do,
but I would say the core of it
is building a SaaS managed platform
that allows people to actually easily store this data,
scale it, query it in a way that's very hands-off
and doesn't require a lot of tuning
or operational burden on their side.
This is like our core value proposition.
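One way to picture the combined search mentioned above: score each document both ways and blend. The blending below is a generic convex combination, an assumption for illustration only, not a claim about Pinecone's actual ranking internals:

```python
# Toy illustration of blending dense (semantic) and sparse (keyword)
# relevance scores. The weighting scheme is an assumption, not
# Pinecone's implementation.
def hybrid_score(dense_sim: float, sparse_sim: float, alpha: float = 0.7) -> float:
    """alpha=1.0 is pure semantic search; alpha=0.0 is pure keyword search."""
    return alpha * dense_sim + (1 - alpha) * sparse_sim

# A document with a strong semantic match but weak keyword overlap
# still ranks well under the blended score.
print(hybrid_score(dense_sim=0.92, sparse_sim=0.10))  # 0.674
```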
Got it.
There's something to be said for making something accessible when previously it had only really
been available to people who'd completed the Hello World tutorial, which generally resembled
a doctorate at Berkeley or Waterloo or somewhere else, and turn it into something that's fundamentally
click the button.
Where on that spectrum of evolution do you find that Pinecone is today?
Yeah. So, you know, prior to Pinecone, we didn't really have this notion of a vector database.
For several years, we've had libraries that are really good, that you can pre-train on your embeddings,
generate this thing called an index, and then you can search over that index.
There is still a lot of work to be done, even to deploy that and scale it and operate it in production and so on. Even that was not being
kind of offered as a managed service before. What Pinecone does, which is novel, is you no longer
have to have this pre-training be done by somebody. You no longer have to worry about when to retrain
your indexes, what to do when you have new data, what to do when there is deletions, updates, and
the usual data management operations.
You can just think of this as like a database that you just throw your data in.
It does all the right things for you.
You just worry about querying it.
This has never existed before, right?
It's not even like we are trying to make the operational part of something easier.
It is that we are offering something that hasn't existed
before. At the same time, making it operationally simple. So we're solving two problems, which is
we're building a vector database that hasn't existed before. So if you really had this sort
of data management problems and you wanted to build an index that was fresh, that you didn't
have to super manually tune for your own use cases, that simply couldn't have been done before.
But at the same time,
we are doing all of this in a cloud-native fashion
that's easy for you to just operate and not worry about.
You've said that this hasn't really been done before,
but this does sound like it is more than passingly familiar,
specifically to the idea of nearest neighbor search,
which has been around since the 70s
in a bunch of different ways.
So how is it different? And let me, of course, ask my follow-up to that right now.
Why is this even an interesting problem to start exploring?
This is a great question. First of all, nearest neighbor search is one of the oldest forms of machine learning. It's been known for decades. There's a lot of literature out there. There are
a lot of great libraries, as I mentioned in passing before. All of this work has primarily focused on static corpuses. So basically,
you have a set of some amount of data, you want to create an index out of it, and you want to query
it. A lot of literature is focused on this problem. Even there, once you go from a small number of dimensions to a large number of dimensions, things become computationally far more challenging.
So traditional nearest neighbor search actually doesn't scale very well.
What do I mean by a large number of dimensions?
Today, deep learning models that produce image representations typically operate in 2048
dimensions or 4096 dimensions.
Some of the OpenAI models are even 10,000-dimensional and above.
These are very, very large dimensions.
Most of the literature prior to maybe even less than 10 years back has focused on less than 10 dimensions.
So it's like a scale apart in dealing with small dimensional data versus large dimensional data.
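To see why scale and dimensionality bite, here is what exact (brute-force) nearest-neighbor search looks like: every query has to touch all N vectors in all d dimensions, which is exactly the cost that approximate indexes, and the vector databases built on them, are designed to avoid:

```python
# Brute-force nearest neighbor: O(N * d) work per query. At N = 100k
# and d = 2048, that's ~200M float operations for a single lookup.
import numpy as np

rng = np.random.default_rng(0)
N, d = 100_000, 2048                    # corpus size, embedding dimension
corpus = rng.standard_normal((N, d)).astype(np.float32)
query = rng.standard_normal(d).astype(np.float32)

dists = np.linalg.norm(corpus - query, axis=1)   # distance to every vector
nearest = int(np.argmin(dists))
print(nearest, float(dists[nearest]))
```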
But even as of a couple of years back, there hasn't been enough, if any, focus on what happens when your data rapidly evolves.
For example, what happens when people add new data?
What happens if people delete some data?
What happens if your vectors get updated?
These aren't just theoretical problems,
they happen all the time.
Customers of ours face this all the time.
In fact, a classic example is in recommendation systems
where user preferences change all the time, right?
And you want to adapt to that,
which means your user vectors change constantly.
When these sort of things change constantly, you want your index to reflect it because you want your queries to catch on to the most recent data, right? The queries have to reflect the recency of
your data. This is a solved problem for traditional databases. Relational databases are great at
solving this problem. A lot of work has been done for decades to solve this problem really well.
This is a fundamentally hard problem for vector databases, and that's one of the core focus areas
of Pinecone. Another problem that is hard for these sort of databases is simple things like
filtering. For example, you have a corpus of, say, product images, and you want to only look at
images that maybe are for the fall shopping line, right? Seems like a very natural
query. Again, databases have known and solved this problem for many, many years. The moment you do
nearest neighbor search with these sort of constraints, it's a hard problem. So it's just
the fact that nearest neighbor search and a lot of research in this area has simply not focused on
what happens to those sorts of techniques when combined
with data management challenges, filtering, and all the traditional challenges of a database.
So when you start doing that, you enter a very novel area to begin with.
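For the fall-shopping-line example, a filtered query looks roughly like the following. It reuses the index and model from the ingestion sketch earlier; the metadata field name is hypothetical, though the $eq operator follows Pinecone's documented MongoDB-style filter syntax:

```python
# Filtered vector search: nearest neighbors restricted to vectors whose
# metadata matches a predicate. "season" is a hypothetical metadata
# field; `index` and `model` come from the earlier ingestion sketch.
results = index.query(
    vector=model.encode("red knit sweater").tolist(),
    top_k=10,
    filter={"season": {"$eq": "fall"}},   # only the fall shopping line
    include_metadata=True,
)
```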
This episode is sponsored in part by our friends at Redis, the company behind the incredibly
popular open source database.
If you're tired of managing open source Redis on your own, or if you're looking to go beyond just caching and unlocking your data's full potential, these folks have you covered.
Redis Enterprise is the go-to managed Redis service that allows you to reimagine how your geo-distributed applications process, deliver, and store data.
To learn more from the experts in Redis how to be real-time, right now, from anywhere, visit snark.cloud slash redis. That's snark.cloud slash r-e-d-i-s.
So where is this space going, I guess, is sort of
the dangerous but inevitable question I have to ask. Because whenever you talk to someone who is
involved in a very early stage of what is potentially a transformative idea, it's almost indistinguishable from someone who is, whatever the polite term for being wrapped around their own axle is, in a technological sense.
It's almost a form of Bruce Schneier's law of anyone can create an encryption algorithm
that they themselves cannot break.
So the possibility that this may come back to bite us in the future, if it turns out
that this is not potentially the revelation that you see it as, where do you see the future
of this going?
Really great question.
The way I think about it is, and the reason why I keep going back to databases and these
sort of ideas is, we have a really great way to deal with structured data and structured queries, right?
This is the revolution of the last maybe 40, 50 years
is to come up with relational databases,
come up with SQL engines,
come up with scalable ways of running structured queries
on large amounts of data.
What I feel like this sort of technology does
is it takes it to the next level,
which is you can actually ask unstructured questions
on unstructured data, right?
So even the couple of examples we just talked about, doing near duplicate detection of images,
that's a very unstructured question.
What does it even mean to say that two images are near duplicates of each other?
I couldn't even phrase it as a kind of a concrete thing.
Certainly, I cannot write a SQL statement for it; I cannot even phrase it properly.
With these sort of technologies,
with vector embeddings, with deep learning and so on,
you can actually mathematically phrase it.
The mathematical phrasing is very simple.
Once you have the right representation
that understands your image as a vector,
two images are nearly duplicate
if they're close enough in the space of vectors.
Suddenly, you've taken a problem that was even hard to express, let alone compute,
made it precise to express, precise to compute.
This is going to happen not just for images, not just for semantic search.
It's going to happen for all sorts of unstructured data,
whether it's time series, whether it's anomaly detection,
whether it's security analytics, and so on.
I actually think that fundamentally a lot of fields are going to get disrupted
by this sort of way of thinking about things.
We are just scratching the surface here with semantic search, in my opinion.
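The near-duplicate-image idea reduces to a one-liner once the embeddings exist. The sketch below assumes some pre-trained vision model has already produced the vectors (CLIP or a ResNet would be typical choices); the threshold value is task-specific and made up here:

```python
# Near-duplicate detection as a cosine-similarity threshold over image
# embeddings. The embedding step itself is out of frame; vectors would
# come from a pre-trained vision model.
import numpy as np

def is_near_duplicate(vec_a: np.ndarray, vec_b: np.ndarray,
                      threshold: float = 0.95) -> bool:
    """Two images count as near-duplicates if their embedding vectors
    point in almost the same direction. Threshold is illustrative."""
    cos = vec_a @ vec_b / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b))
    return cos >= threshold

# Stand-in vectors; real ones would be 512-4096 dimensional embeddings.
a = np.array([0.90, 0.10, 0.40])
b = np.array([0.88, 0.12, 0.41])
print(is_near_duplicate(a, b))  # True: the vectors are nearly parallel
```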
What is, I guess, your barometer for success?
I mean, if I could take a very cynical point of view on this, it's,
oh, well, whenever there's a managed vector database offering from AWS,
they'll probably call it Amazon Basics Vector or something like that. Well, that is, it used to be
a snarky observation that, oh, we're not competing, we're just validating their market.
Lately, with some of their competitive database offerings, there's a lot more truth to that than
I suspect AWS would like. Their offerings are nowhere near as robust as what they pretend to be competing
against. How far away do you think we are from the larger cloud providers starting to say,
ah, we got the sense there was money in here, so we're launching an entire service around this?
Yeah, I mean, first of all, this is a great question. These are things that any innovator or disruptor constantly has to be thinking about, especially these days.
I would say that having a multi-year head start in the use cases,
in thinking about how this system should even look,
what sort of use cases should it go about,
what the operating points for this sort of database even look like,
and how to build something that's cloud native and scalable
is very hard to replicate.
Meaning if you look at what we have already done
and kind of try to base an architecture on that,
you're probably already a couple of years behind us
in terms of just where we are at, right?
Not just in the architecture, but also in the use cases
and how and where this is evolving forward.
That said, I think it is for all of these companies,
and I would put, for example,
Snowflake is a great example of this,
which is, Snowflake wouldn't have existed
if Redshift had done a phenomenal job
of being cloud native, right?
And kind of done that before Snowflake did it.
In hindsight, it seems like it's obvious,
but when Snowflake did this, it wasn't obvious
that that's where everything was headed.
And Snowflake built something
that's very technologically innovative
in a sense that's even now hard to replicate.
It takes a long time to replicate something like that.
I think that's where we are at.
If Pinecone does its job really well
and if we simply execute efficiently,
it's very hard to replicate that.
So I'm not super worried about cloud providers,
to be honest, in this space.
I'm more worried about our execution.
If it helps anything, I'm not very deep into your specific area of the world, obviously,
but I am optimistic when I hear people say things like that. Whenever I find folks who are
relatively early along in their technological journey being very concerned about, oh,
the large cloud providers are going to come crashing in, it feels on some level like their perspective is that they have one weird trick
and they were able to crack that, but they have no defensive moat
because once someone else figures out the trick, well, okay, now we're done.
The idea of sustained and lasting innovation in a space, I think,
is the more defensible position to take.
With the counter argument, of course, that that's a lot harder to find.
Absolutely. And I think for technologies like this, that's the only solution,
which is if you really want to avoid being disrupted by cloud providers, I think that's
the way to go. I want to talk a little bit about your own background. Before you wound up as the
VP of R&D over at Pinecone, you were in a bunch of, I guess, similarly styled roles,
if we'll call them that, at Yahoo, Databricks, and Splunk. I'm curious as to what your experience
in those companies wound up impressing on you that made you say, ah, that's great and all,
but you know what's next? That's right, vector databases, and off you went to Pinecone.
What did you see? So first of all, in some way or the other,
I have been involved in machine learning and systems
and the intersection of these two
for maybe the last decade and a half.
So it's always been something like in between the two,
and that's been personally exciting to me.
So I'm kind of very excited by trying to think about new types of databases or new types of data platforms that really leverage machine learning and data. This has been personally exciting to me.
I'd obviously learned very different things from different companies. I would say that
Yahoo was just a learning experience in cloud to begin with, because prior to joining Yahoo, I wasn't familiar with Silicon Valley cloud companies at that scale. And Yahoo is a great company.
And there's a lot to learn from there.
It was also my first introduction to Hadoop, Spark, and even machine learning,
where I really got into machine learning at scale in online advertising and areas like that,
which was a massive scale.
And I got into that in Yahoo and it was personally exciting to me
because there's very few opportunities
where you can work on machine learning at that scale.
Databricks was very exciting to me because it was an earlier stage company than I had
been at before.
Extremely well run and I learned a lot from Databricks, just the team, the culture, the
focus on innovation and the focus on product thinking.
I joined Databricks as a product manager.
I hadn't played the product manager hat before that.
So it was very much a learning experience for me.
And I think I learned from some of the best in that area.
And even at Pinecone, I carry that forward,
which is think about how my learnings at Databricks
informs how we should be thinking about
products at Pinecone and so on.
So if I had to pick one company I learned the most from, I would say it's Databricks.
I would also like to point out, normally when people say,
oh, the one company I've learned the most from,
and they pick one of them out of their history,
it's invariably the most recent one.
But you left there in 2018 and went to go spend the next three years
over at Splunk, where you were a senior principal scientist, senior director and head of machine learning. And then you decided, okay, that's
enough hard work. You're going to do something easier and be the VP of engineering, which is just
wild at a company of that scale. Yeah. At Splunk, I learned a lot about management. I think
managing large teams, managing multiple different teams, working on very different areas is
something I learned at Splunk. You know, I was at this point in my career where I was right around trying to
start my own company. Basically, I was at a point where I'd taken enough learnings and I really
wanted to do something myself. That's when Edo and I started talking, Edo being the CEO of Pinecone. We had worked together for many years; we started working together at Yahoo.
We kept in touch with each other.
And we started talking about the sort of problems that I was excited about working on.
And then I came to realize what he was working on and what Pinecone was doing.
And we thought it was a very good fit for the two of us to work together.
So that's kind of how it happened.
It sort of happened by chance, as many things do in Silicon Valley, where a lot of things
just happen by network and chance.
That's what happened in my case.
I was just thinking of starting my own company at the time, and just a chance encounter with
Edo led me to Pinecone.
It feels, from my admittedly uninformed perspective, that a lot of what you're doing right now
in the vector database area, it feels on some level like it follows the trajectory of machine
learning in that for a long time, the only people really excited about it were either sci-fi authors
or folks who had trouble explaining it to someone without a degree in higher math.
And a couple of big stories from the mid-2010s stick out at me, from when people were trying to sell this to me in a variety of different ways.
One of them was, oh yeah, if you're a giant credit card processing company and trying to detect fraud with this kind of transaction volume, it's, yeah, there are maybe three companies in the world that fall into that exact category. The other was WeWork, where they did a lot of
computer vision work, and they use this to determine that at certain times of day, there
was congestion in certain parts of the buildings, and that this was best addressed by hiring a
second barista, which distilled down to, wait a minute, you're telling me that you spent how much
money on machine learning and advanced analyses and data scientists and the rest to
figure out that people like to drink coffee in the morning? That is a little on the ridiculous side.
Now, I think that it is past the time for skepticism around machine learning when you can
go to a website and type in a description of something and it paints a picture of the thing
you just described. Or you can show it a picture and it describes what is in that picture fairly accurately.
At this point, the only people who are skeptics from my position on this seem to be holding
it out for some sort of either next generation miracle or are just being bloody minded.
Do you think that there's a tipping point for vector search where it's going to become blindingly
obvious to, if not the mass market, at least a more run-of-the-mill, more prosaic level of engineer who hasn't specialized in this? Yeah, it's already, frankly, started happening.
So two years back, I wouldn't have suspected this fast an uptake and adoption of this new technology
from this varied number of use cases. I just wouldn't have suspected it because I still thought it's going to take some time for this field to mature and everybody to really start taking advantage of this.
This happened much faster than even I assumed.
So to some extent, it's already happening.
A lot of it is because the barrier to entry is quite low right now.
So it's very easy and cost-effective for people to create these embeddings.
There is a lot of documentation out there.
Things are getting easier and easier day by day.
Some of it is by Pinecone itself, by a lot of the work we do.
Some of it is by companies that I mentioned before who are building better and better models,
making it easier and easier for people
to take these machine learning models and use them
without having to even fine-tune anything.
And as technologies like Pinecone really mature
and dramatically become cost-effective,
the barrier to entry is very low.
So what we tend to see people do is, it's not so much about confidence in this new technology. It is: can I take something simple that I need this sort of value out of, and find the least critical path, or the simplest way, to get going on this sort of technology? And as long as you can make that barrier to entry very small, and make this cost-effective and easy for people to explore, this is going to start exploding. And that's what we are seeing.
And a lot of Pinecone's focus has been on ease of use
and simplicity, in connecting that zero-to-one journey,
for precisely this reason,
because not only do we strongly believe
in the value of this technology,
it's becoming more and more obvious
to the broader community as well.
The remaining work to be done is just the ease of use
and making things cost-effective.
And cost-effectiveness is also what we focus on a lot.
Like this technology can be even more cost-effective than it is today.
I think that it is one of those never-mistaken ideas to wind up making something more accessible to folks than keeping it in a relatively rarefied environment. We take a look throughout the history of computing in general,
and cloud in particular,
where formerly very hard things have largely been reduced down to click the button.
Yes, yes.
And then get yelled at because you haven't done infrastructure as code,
but click the button is still possible.
I feel like this is on that trend line based upon what you're saying.
Absolutely.
And the more we can do here,
both by Pinecone and the broader community, I think the better,
the faster the adoption of this sort of technology is going to be.
I really want to thank you for spending so much time talking me through what it is you folks are working on.
If people want to learn more, where's the best place for them to go to find you?
Pinecone.io.
Our website has a ton of information about Pinecone, as well as a lot of starter documentation.
We have a free tier as well, where you can play around with small data sets, really get a feel for vector search.
It's completely free.
And you can reach me at Ram at Pinecone.
I'm always happy to answer any questions.
Once again, thanks so much for having me.
Of course, and we'll put links to all of that in the show notes.
This promoted guest episode has been brought to us by our friends at Pinecone. Ram Sriharsha is their VP of Engineering and R&D. And I'm cloud economist
Corey Quinn. If you've enjoyed this podcast, please leave a five-star review on your podcast
platform of choice. Whereas if you've hated this podcast, please leave a five-star review on your
podcast platform of choice, along with an angry, insulting comment that I will never read because the search on your podcast platform is broken because it's not using a vector database.
If your AWS bill keeps rising and your blood pressure is doing the same, then you need the Duckbill Group.
We help companies fix their AWS bill by making it smaller and less horrifying.
The Duckbill Group works for you, not AWS.
We tailor recommendations to your business, and we get to the point.
Visit duckbillgroup.com to get started.
This has been a HumblePod production. Stay humble.