Screaming in the Cloud - SEARCHing in the CHAOS with Thomas Hazel
Episode Date: January 29, 2020
About Thomas Hazel
Thomas Hazel is Founder, CTO, and Chief Scientist of CHAOSSEARCH. He is a serial entrepreneur at the forefront of communication, virtualization, and database technology, and the inventor of CHAOSSEARCH's patent-pending IP. Thomas has also patented several other technologies in the areas of distributed algorithms, virtualization, and database science. He holds a Bachelor of Science in Computer Science from the University of New Hampshire, and founded both student and professional chapters of the Association for Computing Machinery (ACM).
Links Referenced
Company site: http://chaossearch.io
Twitter: @ThomasHazel
LinkedIn: https://www.linkedin.com/in/thomashazel/
Transcript
Hello and welcome to Screaming in the Cloud with your host, cloud economist Corey Quinn.
This weekly show features conversations with people doing interesting work in the world of cloud,
thoughtful commentary on the state of the technical world,
and ridiculous titles for which Corey refuses to apologize.
This is Screaming in the Cloud.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
I'm joined this week by Thomas Hazel, founder and CTO of everyone's favorite company, Chaos Search.
Thomas, welcome to the show.
Thank you for having me.
So let's start at the very beginning.
Most people don't spring into existence having founded a company.
There's usually an origin story.
What were you doing before Chaos Search?
So I had a background in big distributed systems, made a career out of building God boxes back in the day in telecom. And then for the last 15 years, I've been working on new computer science with respect to database and data analytics, really trying to be inventive in the field of computer science.
Unless someone's been living under a rock and not been paying attention to any episode of anything I've ever done until this one, they'll know you folks have sponsored a number of different things that I've been involved in,
including this episode. But for those who have not been paying attention, what does Chaos Search do?
So at a high level, we created some innovative technology that allows customers to do analytics, both text search and relational, directly on their object storage, particularly Amazon S3. And we do it at a scale and cost and price point
that is quite unique, quite disruptive in the market. And so for the last four years,
we've been building out a new indexing technology, as well as associated architecture
as a service; customers log on to our service with a five-minute registration.
They're up and running doing terabytes of analysis, say for log analytics, within
minutes, using their favorite API, the Elastic API, or, coming out in 2020, a SQL API.
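As a rough illustration, and not ChaosSearch's actual API surface, a log-analytics request against an Elastic-compatible endpoint is just the familiar query DSL. The index, field names, and values below are invented for the example:

```python
import json

# Count error-level log lines per hour over the last day.
# "level" and "@timestamp" are hypothetical field names.
query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"level": "error"}},
                {"range": {"@timestamp": {"gte": "now-24h"}}},
            ]
        }
    },
    "aggs": {
        "errors_per_hour": {
            "date_histogram": {"field": "@timestamp", "fixed_interval": "1h"}
        }
    },
    "size": 0,  # aggregations only, no raw hits returned
}

body = json.dumps(query)
```

With a real Elastic-compatible service, this body would be POSTed to the index's `_search` endpoint, exactly as it would against Elasticsearch itself; that API compatibility is the point.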
Before you folks had ever sponsored anything
that I was involved with,
I was aware of you because you employed Pete Cheslock,
everyone's favorite DevOps thought lord,
as the VP of product.
And when he started talking about what you folks did,
I said, it seems implausible
that it's as amazing as you say,
but all right, I'll suspend disbelief, tell me about it. And it turned out that everything that I was told was
actually, in fact, correct. I recommended you folks to clients of mine not because there was any
business relationship, and that is still something people can't buy, as it turns out, credibility
matters, but because for a certain subset of use cases, it is an incredibly cost-effective
approach to handling things. So I guess this is probably an idiotic question, but I'm going to
roll with it anyway. What is it that makes this such a unique thing? There's nothing else in the
market like this, but there should be. If we take a look at cloud, all the different providers took a vote
and the storage technology that won by orders of magnitude
in terms of price has always been object store.
Why is there nothing that provides a simple searchable API
for a rapid response for data that lives in S3?
Great, great question.
So, you know, my background, as I mentioned,
is in distributed computing and computer science, and, you know, there's existing technology out there. Lucene, an inverted index, is really the driving force behind log analytics. There's column storage for warehousing, B-trees, LSM trees, all these classic computer science data structures and algorithms that have been used for the last 30 years. And the real issue is that they are at the breaking point of the scale that we're seeing today.
Moore's law says one thing, but machine-generated data is outpacing it.
And about five to seven years ago, I wanted to crack that code of creating a new data structure
and algorithm that could really provide the next level of scale,
you know, 100,000 times the scale,
without having to use massive amounts of compute
and a team of engineers trying to erect a system at terabyte and petabyte scale.
So that was what I wanted to set out and go do.
And so I reached out to people like Pete to say,
hey, I have this idea.
What do you think? If I solve this problem, would people care? He kind of laughed a little bit,
said everybody wants this problem solved. And, you know, people want unicorns and a pot of
gold; call me when you figure it out. And a couple of years ago, I reached out
to him. I said, hey, Pete, I think I figured it out. And we went to market
with that solution. So really, the essence of it is I created a new representation,
really a compressed form, that's both a database index that can uniquely support text
search, like the classic hunting in log analytics that a Lucene index would support,
as well as relational analytics like you'd expect from warehousing technology,
with the same representation, without having to have siloed databases where you would say,
I'm going to store data necessary for archival, but then move it out via ETL into, say, a relational
system or, say, an Elasticsearch cluster.
You can imagine that is of great complexity.
And at scale, these systems start to
break. And so I said, what if you created a service where you leave the data in your storage?
Really the idea of storage analytics convergence and the power of object storage, Amazon S3 was a
great place to start where it's infinitely scalable, wonderfully inexpensive,
secure and durable. But the problem is technology that was built 30 years ago cannot access it in
a high performance way. So people move it out of S3 and then into one of these classic databases.
So I thought, with this innovative technology that we have patent-pending papers on, I could take our index and
create a new architecture that leverages distributed compute to, in essence, unlock that
storage without having to move it out of the system. And that is what we did. Chaos Search,
as you can imagine, searching the chaos is why we came up with the name because there's so much data being stored in S3.
It's like all data roads lead to S3 and object storage, and now every cloud provider has one.
And so that's what we did over the last four years is build out a service that takes the customer's S3 account.
They provide read-only access.
We provide the compute.
We index the data and write these indices back into their account.
And then they can do text search for, say, log analytics, as well as relational for, say,
business intelligence analytics. And so for the last several years, we've been building that
platform and we're super excited. So Pete said, hey, you know, I've got to get on board with this
thing because it seems like you guys cracked the code
and this is what I see every day as a problem in the market.
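The access model described above, read-only on the customer's raw data, with indices written back to their own account, implies bucket permissions along these lines. This is a hypothetical sketch, not ChaosSearch's actual policy; the bucket and prefix names are made up:

```python
import json

# Read-only access to the raw log prefix, plus write access
# limited to a dedicated prefix the service writes indices into.
# Everything here is illustrative, including the bucket name.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadRawLogs",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-logs",
                "arn:aws:s3:::example-logs/raw/*",
            ],
        },
        {
            "Sid": "WriteIndicesBack",
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": ["arn:aws:s3:::example-logs/index/*"],
        },
    ],
}

print(json.dumps(policy, indent=2))
```

The design point is that the data never leaves the customer's account; the service only needs scoped permissions, not ownership.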
It's a great approach, if for no other reason
than it finally does what I think everything should be doing:
separating out the compute and the storage layers.
With anything that's legacy you're playing with in this space,
you're running clusters of these things.
Oh, you need to store more data in it? Add more nodes.
You're adding compute when you may or may not need that,
and your storage is going in lockstep with it.
Conversely, you need better performance.
Well, add more nodes that add to the storage burden.
What I like about your entire approach
is that there is no management overhead.
The data lives in S3 and you don't have to touch it again
once that index is created.
It's just there, available for querying. Absolutely. And the separation was obviously
extremely important, where if you go from one terabyte a day of data to 100 terabytes,
we just spin up additional compute. And that nightmare of trying to create shards of
attached storage and compute is the nightmare that people deal with today,
particularly with, say, Elasticsearch
and its clustering technology.
And so the ability just to have infinite storage
with dynamic compute,
and so you can have one node represent the entire dataset
or 100 nodes.
The ability for us to dynamically allocate
to make indexing faster or query searches faster is all on the fly and all dynamic.
But this technology, this index, does it at a price point that is very, very unique and very disruptive.
Our cost to do indexing and on-the-fly compute allocation for your text search, for your
aggregations across terabytes of data is extremely cost-effective. And you can see that in our
pricing today. One of the things that I find so interesting about this entire approach is that
people generally already have something that's using Elasticsearch out there. And I ran the
numbers myself. I didn't need you to tell them to me. It's one of these
things of working from an economic point of view, nothing personal: I generally try not to take too
much of what vendors tell me on faith. I like to test these things out myself. And it was right.
It was knocking a majority of the spend off in virtually every case. And that was incredible,
especially when you apply it to something like log analytics.
You take a look at any of the log analytics companies, and people can think through a wide
variety of log analytics companies. It doesn't really matter which one. And at some point at
scale, you have to begin kidnapping princesses for ransom in order to pay for the bill.
So it's absolutely one of those challenging problems. And then when the bill gets too high,
you talk to that vendor, and the response is, oh, log less stuff, which sort of cuts against the entire premise of a
log analytics company, because you can start noticing relationships in data if you have the
data. But if you don't, that door is shut for you. It really has seemed like a disjointed,
fractured industry for an awfully long time. I'm still trying and failing to find examples of other
approaches that solve this problem the way that you have. There's nothing else like it.
You're exactly right. And what's interesting to me is every single time we hear a customer
want to increase their retention, they double their cost. So every time you double it,
your cost doubles and your complexity doubles. And so, you know, the idea that we've created, where
the storage is infinitely scalable, S3 has been quite proven on that, and our ability to
elastically scale up the compute and deliver the compute where the storage is for those queries,
it seems so obvious, so natural. The problem is that the separation of storage and compute is not the rocket science. It's the technology, this index, that has allowed us to do it at a price point, all pure on S3, or object storage in general.
We take a look at re:Invent last year, and they announced enhancements to the Amazon Elasticsearch service that they run.
And I thought, OK, this is it.
They're finally going to do the same thing. And they launched their badly named UltraWarm tier that had an accelerated performance approach, or
a lower cost of being able to stage old data out. And they went in the exact wrong
direction for this problem, as I understand it. It seems bizarre. No, so the funny
thing is they solved a problem like everyone else has been solving.
They solved the Lucene problem via Band-Aids, if you will, to be so bold.
You know, the system was never designed to do this type of scale, right?
Elastic was surprised that the log analytics community adopted this technology.
And, you know, UltraWarm is just another way to provide a caching layer to make it a little
bit better, a little bit more cost effective. And so what we wanted to do is, in essence, make S3
ultra hot. And the idea is that with this technology, we don't have to play those
Band-Aid caching games. It's pure access on S3, and it feels like it's a hot cluster, but it's on S3 with our
compute. And that's where our technology, our index technology has cracked that code, how to make S3
high performance, make S3 the data lake that Amazon's pushing, but not make it swampy.
We actually provide data discovery and
cataloging of what's in there, and ultimately index the data to provide log analytics via
the Elastic API and Kibana or SQL, say through a looker visualization for BI workloads or Athena
workloads. And again, the key thing is to make it performant, make it extremely scalable without having to worry about sharding, as well as high performance with a very low cost.
And I know that those were a lot of what-ifs that we started out this company on, but we cracked that code, as I mentioned.
And we're super excited about what we've built, because we're seeing with our customers that we take their bill and literally cut it in half, if not to a third.
So one of the, I guess, constraints I have is that when you first learn about something,
it's difficult to go back and relearn it as something else.
I was introduced to Chaos Search as, effectively, whenever you're using Elasticsearch as a part of something, maybe an ELK stack, maybe something else, the API is equivalent enough to do
a drop-in replacement with Chaos Search.
And that's how I've always conceptualized it. Anytime I see a big Elasticsearch bill,
this is one of the things that I tend to think about. The challenge, of course, is that it
sounds like you're going beyond being effectively just that. What do I misunderstand in that
oversimplified description of what you folks do? Yeah. I mean, I hate to say it, but we are
building the next generation database that really rethinks how storage and analytics converge,
fuse together. We create a distributed fabric and we have this ability to export compatible
APIs. We don't run any Elasticsearch underneath the hood or the
Lucene index, but we support an open standard Elastic API that people know and love with the
Kibana integration for all that great visualization that people do in log analytics. And the idea that
you can do the Elastic API with, say, an index pattern, and the same index is a table in, say, a Presto-dialect SQL interface with Looker,
without the cost and complexity of standing up a relational system like a warehouse
or standing up an Elastic cluster.
It's almost hard to believe, because it hasn't been done before.
You know, there are some companies on the fringes of trying to solve this. Snowflake has done some separation of storage and compute,
although they still have the cache out on a physical disk. We are 100% pure on object storage.
And the difference, it's your storage. You own the data. We are just the distributed compute
that manages the index and the query execution. So, you know, it's another
way of delivering the idea that you dump data. And then what our service does is we support what we
call this refinery within our service. So you index it once, you index everything. And then
through our refinery, you can create virtual transformation or views that look like index
patterns in Elasticsearch or tables in, say,
SQL, all in that same representation on the fly. So imagine if you had 100 terabytes of data and
you had to physically ETL it; typically folks use Hadoop or EMR to take it out of S3, transform it
and put it into, say, a warehousing solution or Elasticsearch. What if you just brought up the
wizard, created a view, created your transform,
and it's available immediately
without having to do anything physical?
This is the power of this index technology created.
It is distributed.
It supports text search and relational.
It's uniquely compressed to save on cost,
but allows for these virtual transformations,
late materialized, that allow you to do all
the variations that each department in a company might want when reading and analyzing that data.
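The "index once, view many ways" idea can be sketched in a few lines. This toy is not the real implementation; it only illustrates late-materialized views, where registering a transformation is instant and the work happens at query time:

```python
# Records are stored (indexed) once; each "view" is a stored
# transformation applied only when a query runs, so creating one
# involves no ETL and no copy of the data.
class Refinery:
    def __init__(self, records):
        self.records = records  # indexed once
        self.views = {}         # view name -> transform function

    def create_view(self, name, transform):
        # Registering a view is instant: just remember the transform.
        self.views[name] = transform

    def query(self, name):
        # The transform is applied lazily, at query time.
        return [self.views[name](r) for r in self.records]


logs = [
    {"ts": "2020-01-01T00:00:00Z", "level": "error", "msg": "disk full"},
    {"ts": "2020-01-01T00:01:00Z", "level": "info", "msg": "ok"},
]

refinery = Refinery(logs)
refinery.create_view("levels_only", lambda r: r["level"])
refinery.create_view("as_row", lambda r: (r["ts"], r["msg"]))

print(refinery.query("levels_only"))  # ['error', 'info']
```

Each department gets its own view over the same underlying index, which is the contrast being drawn with physically copying data into separate systems.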
So putting the shoe on the other foot a bit, in what use case is someone going to be using
Elasticsearch where Chaos Search is not going to be a fit?
Yeah, so a classic case is what I call the ELK use case, where let's say you have
logging for a denial-of-service attack and you want to know what's going on. So often people stand up
CDN logs in, say, Elasticsearch or ELK, and this can get pretty big, you know, one terabyte up to 10 terabytes a day, maybe even more of log data.
And they typically see that there's a denial of service attack.
They want to figure out what's going on.
The problem with the Elasticsearch is more denial of service attacks come in, more logs come in, and now you're querying more often.
This is classically what makes Elasticsearch sad, where the cluster comes down because you've only provisioned so much capacity.
With our system, you dump your data into S3 and we index it.
We allow for dynamic scale to do that exact same security ops type use case, the denial-of-service attack, maybe Cloudflare logs, app logs.
Constantly, you know, people come to us and they have really messy data for
their app logs. They're dumping data. It's in S3 and they want to know what's going on. Again,
Chaos Search is a great way to do app log analytics. Hey, what is my application doing?
Is it running slow? Were there problems? So app logs, security ops, DevOps, those types of use cases are really a sweet spot, because almost everyone we've talked to as we built out this idea, it's like, do you use S3?
And they almost would say, duh, of course we do.
And do you use Elasticsearch?
Of course we do.
And they typically store in S3 first and then move it to the Elastic cluster. And we just say, keep it in S3, we'll index it,
and you can have that exact same ELK functionality
that you had without the cost and complexity.
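The hunt described here, figuring out who is hammering you from the access logs, boils down to a top-N aggregation. A toy version in plain Python, with an invented log line format standing in for real CDN logs:

```python
from collections import Counter

# Each invented line: timestamp, client IP, method, path, status.
log_lines = [
    "2020-01-15T10:00:01Z 203.0.113.7 GET /index.html 200",
    "2020-01-15T10:00:01Z 203.0.113.7 GET /index.html 200",
    "2020-01-15T10:00:02Z 203.0.113.7 GET /index.html 200",
    "2020-01-15T10:00:02Z 198.51.100.4 GET /about.html 200",
]

# Count requests per client IP and surface the heaviest hitter.
hits = Counter(line.split()[1] for line in log_lines)
print(hits.most_common(1))  # [('203.0.113.7', 3)]
```

At one to ten terabytes of logs a day, this same aggregation is what a cluster buckles under, which is the scaling problem being discussed.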
Now, what we're not good for,
there was a time where we were staying away
from real-time functionality
where you wanted a sub-second type performance
for a short window of data.
And we were going after the big, big data where customers that had 10 terabytes a day, up to 100 terabytes a day of analytics that was just breaking the bank at that scale.
Actually, in Q1 of this year, 2020, we're coming out with real-time.
So just as you would put data into Elasticsearch, for instance, you can put data to us and it's available real time.
We'll write this data into your S3 account.
And as we come out with our SQL interface, we'll support creates, updates, and upserts as well.
So it's really turning into a full-fledged data platform that can handle the real-time and obviously that real scale.
So one of our limitations
was in the real-time because we were not focused on that. But as we go into more of a BI real-time
use case, we're building that feature out. Other limitations, I would say, you know, parity with
the Elastic API. We're really focused on log analytics and, coming out, BI analytics.
You know, the Elastic API is quite broad and we don't support all the classic, you know,
low-level text search type capabilities, like fuzzy searching.
Not that we won't, but that's not really where our wheelhouse is.
But we do support all the classic log analytic, text search, you know, wildcarding, etc.
You know, limitations. I don't know.
We have some big ideas
and some pretty powerful technology and architecture.
So if there are limitations today,
there won't be limitations tomorrow.
One of my personal guilty pleasures
is pointing out the terrible business practices of others.
And I've been vocal about this for a little while,
where there's a giant
slap fight between Elastic and pretty much anyone who is selling anything that looks like Elastic.
It seems like their approach to open source, by and large, is use our open source software. No,
wait, not like that. And so now there's a trademark lawsuit, among other things. There's a slap fight that's going on between AWS and Elastic, where AWS also launched their
open distro for Elasticsearch.
Elastic was doing their whole, only some of the code in the same repository and some of
the same commits is free and open source.
The rest comes with a commercial license.
Chaos Search bypasses all of that, correct?
Effectively, it's sit on the sidelines and watch popcorn.
There's no Elasticsearch under the hood here.
Yeah, yeah, yeah.
No, we're Switzerland in those battles.
I mean, you know, the open source community
is so important to all of us.
And, you know, Elastic has done some great work
and Amazon has done some great work.
And I know Amazon gets a bad rap
about, you know, taking open source and
making it a service and making a business. Open source companies, once they have to start making
money, may make some software proprietary. We were using the open source Kibana out of Elastic.
To be frank, when Elastic started closed sourcing, or, better termed, making the license different than Apache 2, for some of their more advanced features out of Kibana,
this past summer, we adopted Amazon's Open Distro for the alerting, the timeline, the role-based access control, because it was open.
And it was keeping with that philosophy where a whole community was
built based on the idea that this would be free and open.
Now, I understand business and I understand that we have a business.
We're creating a service to make money.
But the idea of having open APIs is really something that's hard to fight.
And I do believe that open APIs is the way to go.
I understand why Elastic
is doing what they're doing.
I understand what Amazon
is doing as a business.
We're in that service
of solving customer problems
and you get paid for it as a service.
So we're kind of in that mode
where let's keep the APIs open.
Let's make a business
offering a service or support
like open source used to always do.
Well, I wish you well, but I kind of think you're going to struggle until you learn to
do what real companies do and threaten to sue your customers.
Yeah. Yeah. Like I said, I'll play Switzerland on that one. But I mean, listen,
you know, it's amazing what the open source community has done. And, you know, we leverage open source.
The idea that you open source something and then essentially close it in the future, that's a tough scenario for people who bought into this API that now everyone uses today.
So, you know, how it all plays out, I'm not sure.
There's a lot of money involved with that. And our viewpoint is if there's tooling and APIs
that we can leverage from the community,
we'll use it and bring those to our customers.
I love making fun of companies doing different things.
That is my part and parcel.
And I've got to say, I'm sorry, you're not immune either.
I have to ask the burning question
that's been on my mind since I first heard of you folks
and was corrected on this. Why is Chaos Search all in caps? You know, it is not as well thought out
as you might think. You know, I had the idea of naming a company Chaos Something. And when
originally there was, before we were Chaos Search, we actually were called
Chaos Sumo, the wrestling, the chaos. And as we're going after log analytics and we knew
Sumo Logic was out there, we didn't want to, you know, be confused with them. So I thought,
you know, Chaos Search, because we're searching the chaos is really where the value is, you know,
the the searching analytics. So I said, ah, let's rename the company Chaos Search.
And, you know, do we call it Chaos Space Search?
Do we call it Chaos Search?
Do we do all lowercase?
Do we do CamelCase?
Really, we did all these variants really quickly on a piece of paper.
We looked at all three or four variants and we're like, oh, the capital looks pretty good.
Let's go with that.
And, you know, it was nothing more than that.
And so what we've been doing to get around
the all caps concept is making the first part
of chaos bold.
It seems to be working for us, but yeah,
we get teased about that all the time.
And, you know, it was more just it looked good in the font we used,
and that's why we chose it.
And sometimes that's all it takes is going down that path.
But it, of course, opens you up to all kinds of criticism from the peanut gallery,
by which, of course, I mean me.
I mean, you have search in the name.
Search a little harder.
You can find the caps lock key to turn it off.
And the counter response, of course, is that, oh, wow,
there's a caps lock key. That makes it way easier to type the name of the company. Great. Just great.
It's cruise control for cool. I know that asking for what's coming next is always perilous because
the best laid plans, et cetera, et cetera. What's next for you folks? What are you focusing on in this year of our Lord 2020?
Our big vision is to build out a new type of data platform.
And as I mentioned, we went to market last year going after the big scale of log analytics,
supporting the Elastic API and Kibana interface and solving those types of problems. But the vision is to have a true multi-model, real-time database,
the idea being that we deliver on that data lake philosophy, and you have database tooling that
is natural, that you're used to. So once we come out with this multi-model capability,
we're going to go after the Athena use case. We hear a lot of complaints about the
costs and the scale and the idea that you have one data source or multiple data sources via the
Elastic API or SQL API. 2020 is going to bring out really the first true multi-model functionality in our database that we're really
excited about. We had customers asking us all the time, can you support the Athena use case or the
SQL Presto dialect? And that's what we're going to do. We're going to first offer it to our log
customers and then start growing the business into BI and ad hoc analytics, all on S3.
You know, we do have a plan for later this year to go multi-cloud.
We've been pulled both from Microsoft and Google on which one we do next.
I'll keep that a secret, but we will be coming out with a multi-cloud play later in 2020.
So one problem I see in the world of cloud billing, historically, has been that
when you switch to a consumption-based model,
people don't know what something's going to cost.
And sure, they'd like to whine and cry and complain about it,
but with a lot of systems with significant storage volume,
you can run a query that costs tens of thousands of dollars
without knowing it in advance.
Driving down the overall cost of querying these things is of course incredibly valuable
and helpful, but what are you seeing in the space as far as addressing that from a larger
perspective?
When you have an internal application that runs a query here and it costs $20,000 when
someone hits submit, first, even attributing that back to that one query is a super hard problem.
But the gold standard people are going for is a pop-up of, hey, if you run that query,
it's going to cost you a giant pile of money.
Continue yes or no.
So there is that problem of doing the cost attribution of querying interestingly large
data sets.
Is that something that's on your roadmap at all?
Is that not something you're seeing in your customers?
So I was holding that one back.
So yes, we actually have something
that we're coming out with
to make our system offer both upfront,
storage-type pricing,
as well as consumption-based pricing.
And that's very novel and unique in the logging space.
It's not so much in more of the BI. And part of that offering, when you start supporting a consumption-based model,
is we actually know the cost of a query. And so we're going to have tooling in our user interface
that you can say, oh, this is going to cost X amount of dollars over these hundreds of terabytes. Do I want to do it? Or maybe, you know, Susie the user can do that type of query,
but maybe Bobby can only do short queries for this amount of cost, and have a whole
billing construct within our user experience to keep those costs down. Now, to your point earlier,
we've cracked the code on reducing that cost dramatically low. But imagine if your cost of indexing was virtually free and it's all based on queries, but your queries are cost effective as well as intelligent.
And I'm coming out and saying we are coming out with a feature that will be consumption-based, and the user will know and control how much data is used and what it's going to cost.
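The gating behavior described here can be sketched simply: estimate a query's cost from the data it would scan, then check the requesting user's policy before running it. The rate and the limits below are invented numbers, not ChaosSearch pricing:

```python
# Illustrative only: a per-terabyte-scanned rate and per-user
# spending limits, echoing the Susie/Bobby example above.
PRICE_PER_TB_SCANNED = 1.00  # dollars, made up for the sketch

USER_LIMITS = {
    "susie": 500.00,  # may run large analytical queries
    "bobby": 5.00,    # restricted to short, cheap queries
}

def estimate_cost(bytes_scanned):
    # Cost grows linearly with the data a query would scan.
    return bytes_scanned / 1e12 * PRICE_PER_TB_SCANNED

def authorize(user, bytes_scanned):
    # The "continue: yes or no?" gate: compare estimated cost
    # to the user's policy limit before the query runs.
    cost = estimate_cost(bytes_scanned)
    allowed = cost <= USER_LIMITS.get(user, 0.0)
    return allowed, cost

# A 100 TB scan: fine for Susie, rejected for Bobby.
print(authorize("susie", 100e12))  # (True, 100.0)
print(authorize("bobby", 100e12))  # (False, 100.0)
```

The interesting part is that this is only possible when the engine can predict a query's cost up front, which is exactly the capability being claimed.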
To be clear, when you say that it predicts the cost and tells you what it runs in advance,
is that the cost for chaos search?
Is that the infrastructure cost underlying what's going on, or is it both?
So clearly we'll have a margin within...
Well, of course, I'm not suggesting you shouldn't.
Yeah, it'll be cost of the query.
So we want to not only be disruptive in the classic log pricing model where everything
from $100 per gigabyte and up, where we're currently $10 to $15 per gigabyte, which is
very disruptive in this market.
We're going to be, instead of $5 per query per terabyte or $1, I'm not going to say what we're going to be.
We're going to be dramatically cheaper than that.
And the idea that, oh, these 10 terabytes of one query is going to cost me X, you can control that.
You can set policies so that makes sure that you only use what you want to pay for.
And it's not necessarily credit-based, because that can get complicated. I know other
vendors do a credit-based model where you put up a cost up front and then you eat into that. That's hard
to deal with. This is going to be a lot more driven by your controls and what you want to do.
So it's going to be significant. It's going to be disruptive. And hopefully, you know,
the customer is going to like it.
And it's kind of à la carte.
Maybe you want to choose full upfront by storage,
or maybe a usage basis, whichever way you want to go.
Having a variety of different options is always a good direction to go in
as far as meeting customer, I guess, requirements.
Everyone has a different use case
and everyone wants to express that in different ways.
And there are a few things more frustrating than when a vendor's pricing model doesn't align with how you are intending to use the service.
Yeah. I mean, here's a good example.
We have people that come to us say, I have 100 terabytes.
I only have to query it maybe once a week.
It doesn't make any sense for me to stand up a huge system to do that, because consumption-based makes a lot of sense.
Right. And so or, you know, when there's a denial of service hack, then you want to really hunt and figure out what's going on.
But the rest of the time, you know, the system's pretty idle.
Now, if you're doing some more real time where you're doing dashboarding or it's built into your application, okay, maybe the consumption base is not the right pricing.
But when you're doing a lot of ad hoc or investigation or you need it when you need it, but you don't need it when you don't, there's really no good solution out there in the market, particularly in log analytics.
So if people want to learn more about what you folks are up to, continue to follow your
exploits, get annoyed at your unnecessary capitalization, where can they do that?
Come look at chaossearch.io. We have a whole bunch of material that talks about the platform.
We're actually updating our content later this month on a whole bunch of detailed use cases and an e-book coming
out. So, you know, come to our website, ask for a free trial. It's fully automated. You're up and
running within five minutes on your S3. You can also set up a larger POC where we allow you to test out, you know, 250 gigabytes of data,
which is actually pretty big for the pre-trial.
And if you're a really big account, we have what we call a big POC where you can call us up.
And if you're looking to test out terabytes of data per day, we can work with that with you.
And, you know, we have a lot of good blogs and a lot of good documentation,
but sometimes just kicking the tires is the best way to learn. And our free trial is probably
the quickest way to learn what we do. When you first log in, it's quite unique. We're the first
company that starts with your storage, and not just the idea that you dump data into us and then you
start playing with the product. The product starts when you first log into your storage.
We have a lens into your S3.
We have a refinery to create different viewpoints.
And then we have Kibana,
your favorite visualization tool in ELK,
to do your analytics.
And we've automated the process from raw data
to insights, really best of breed.
Well, thank you so much for taking the time
to speak with me today.
I appreciate it.
Thank you.
Thomas Hazel, founder and CTO of Chaos Search.
To learn more, visit chaossearch.io.
I am Corey Quinn.
This is Screaming in the Cloud.
If you've enjoyed this podcast,
please leave it a five-star review in Apple Podcasts.
If you've hated this podcast, please leave it a five-star review in Apple Podcasts.
And tell me what my problem is.
This has been this week's episode of Screaming in the Cloud.
You can also find more Corey at ScreamingInTheCloud.com or wherever Fine Snark is sold.
This has been a HumblePod production.
Stay humble.