Screaming in the Cloud - Cribl Sharpens the Security Edge with Clint Sharp
Episode Date: March 22, 2022

About Clint

Clint is the CEO and a co-founder at Cribl, a company focused on making observability viable for any organization, giving customers visibility and control over their data while maximizing value from existing tools. Prior to co-founding Cribl, Clint spent two decades leading product management and IT operations at technology and software companies, including Splunk and Cricket Communications. As a former practitioner, he has deep expertise in network issues, database administration, and security operations.

Links:
Cribl: https://cribl.io/
Docs: https://docs.cribl.io
Sandbox: https://sandbox.cribl.io
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the
Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
Today's episode is brought to you in part by our friends at MinIO,
the high-performance Kubernetes native object store that's built for the multi-cloud,
creating a consistent data storage layer for your public cloud instances,
your private cloud instances,
and even your edge instances, depending upon what the heck you're defining those as, which depends probably on where you work.
Getting all of that unified is one of the greatest challenges facing developers and architects today.
It requires S3 compatibility, enterprise-grade security and resiliency, the speed to run
any workload, and the footprint to run anywhere. And that's exactly what MinIO offers. With superb read
speeds in excess of 360 gigs and a 100 megabyte binary that doesn't eat all the data you've got
on the system, it's exactly what you've been looking for. Check it out today at
min.io slash download and see for yourself. That's min.io
slash download and be sure to tell them that I sent you. This episode is sponsored in part by
our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went
up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment.
They've also gone in-depth with a bunch of other approaches to how DevOps and security are inextricably linked.
To learn more, visit sysdig.com
and tell them I sent you.
That's S-Y-S-D-I-G dot com.
My thanks to them for their continued support
of this ridiculous nonsense.
Welcome to Screaming in the Cloud. I'm Corey Quinn. I have a repeat guest joining me on this
promoted episode. Clint Sharp is the CEO and co-founder of Cribl. Clint, thanks for joining
me. Hey, Corey. Nice to be back. I was super excited when you gave me the
premise for this recording because you said you had some news to talk about, and I was really
excited that, oh, great, they're finally going to buy a vowel so that people look at their name and
understand how to pronounce it. And no, that's nowhere near forward-looking enough. Instead,
it's some, I guess, I don't know, some product announcement or something.
But, you know, hope springs eternal.
What have you got for us today?
Well, one of the reasons I love talking to your audience is because product announcements
actually matter to this audience.
It's super interesting as you get into starting a company, you're such a product person.
You're like, I have this new set of things that's really going to make your life better.
And then you go out to the general media and you're like, hey, I have this product. And they're like, I don't care. What product? Do you have a funding announcement? Do you have something big in the market? Do you have a new executive? It's like, no, but these features, these things we built, the way we make our customers' lives better, isn't that interesting? No. The really depressing one is: do you have a security breach to announce? No, God, no. Why would I be excited about that?
Well, I don't know who would be. And yeah, the stuff that mainstream
media wants to write about in the context of tech companies is exactly the sort of thing that tech
companies absolutely do not want to be written about for. But fortunately, that is neither here
nor there. Yeah, they want the thing that gets the clicks. Exactly. You've built a product that
absolutely resonates in its target market and outside of that market.
It's one of those, what is that thing again?
If you could give us a light refresher on what Cribl is and does, you'll probably do a better job of it than I will.
We'd love to, yeah.
So we are an observability company, fundamentally.
I think one of the interesting things to talk about when it comes to observability is that observability and security are converging. So I like to say observability and
include security people. If you're a security person and you don't feel included by the word
observability, sorry, we also include you. You're under our tent here. So we sell to technology
professionals. We help make their lives better. And we do that today through a flagship product
called Logstream, which is part of this announcement we're actually renaming to Stream.
In some ways, we're dropping logs.
And we are a pipeline company.
So we help you take all of your existing agents, all of your existing data that's moving, and
we help you process that data in the stream to control cost and to send it multiple places.
And it sounds kind of silly, but one of the biggest problems that we end up solving for
a lot of our enterprises is, hey, I've got this old syslog feed coming off of my firewalls.
Like you remember those things, right?
Palo Alto firewalls, ASA firewalls.
I just need to get that thing to multiple places because, hey, I want to get that data into another security solution.
I want to get that data into a data lake.
How do I do that? Well, in today's world, that actually turns out as sort of a neglected set of features, like the vendors who provide you logging solutions, being able to reshape that data, filter that
data, control cost wasn't necessarily at the top of their priority list.
It wasn't nefarious.
It wasn't like people are like, oh, I'm going to make sure that they can't process this
data before it comes into my solution.
It's more just like, I'll get around to it eventually, and the eventually never actually
comes.
And so our streaming product helps people do that today.
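The pattern Clint is describing, take one incoming feed, filter and reshape each event, and fan it out to multiple destinations, can be sketched in a few lines. This is purely an illustration of the idea, not Cribl's actual implementation or configuration model; the simplified syslog format, field names, and severity cutoff here are all assumptions.

```python
# Sketch of the stream-processing pattern: parse a syslog-ish feed,
# drop low-value events, and fan the rest out to multiple destinations.

def parse_syslog(line):
    """Split a simplified '<pri>host message' syslog line into fields."""
    pri_end = line.index(">")
    pri = int(line[1:pri_end])
    host, rest = line[pri_end + 1:].split(" ", 1)
    return {"severity": pri & 0x07, "host": host, "message": rest}

def route(events, destinations, min_severity=6):
    """Filter out anything noisier than min_severity, send the rest everywhere."""
    for line in events:
        event = parse_syslog(line)
        if event["severity"] > min_severity:   # e.g. drop debug chatter
            continue
        for dest in destinations:              # fan out: SIEM, data lake, ...
            dest.append(event)

siem, data_lake = [], []
feed = [
    "<134>fw01 kernel: accepted tcp 10.0.0.5:443",   # severity 6 (info)
    "<135>fw01 kernel: debug: conntrack table at 40%",  # severity 7 (debug)
]
route(feed, [siem, data_lake])
# The info event reaches both destinations; the debug event is dropped,
# which is also where the egress savings mentioned earlier come from.
```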
And the big announcement that we're making this week
is that we're extending that same processing technology
down to the endpoint with a new product
we're calling Cribl Edge.
And so we're taking our existing
best-in-class management technology
and we're turning it into an agent.
And that seems kind of interesting
because I think everybody sort of assumed
that the agent is dead.
Okay, well, we've been building agents for a decade or two decades.
Isn't everything exactly the same as it was before?
But we really saw kind of a dearth of innovation in that area in terms of being able to manage your agents, being able to understand what data is available to be collected, being able to auto-discover the data that needs to be able to be collected, turning those agents into interactive troubleshooting experiences
so that we can kind of replicate the ability to zoom into a remote endpoint
and replicate that Linux command line experience
that we're not supposed to be getting anymore
because we're not supposed to SSH into boxes anymore.
Well, how do I replicate that?
How do I see how much disk is on this given endpoint
if I can't SSH into that box?
And so Cribl Edge is a rethink about making this rich interactive experience
on top of all of these agents
that become this really massive distributed system
that we can process data all the way out
at where the data is being emitted.
And so that means that now we don't necessarily,
if you want to process that data in the stream, okay, great.
But if you want to process that data
at its origination point,
we can actually provide you cheaper costs
because now you're using a lot of that capacity that's sitting out there on your endpoints that isn't really being
used today anyway. The average utilization of a Kubernetes cluster is like 30%.
It's that high. I'm sort of surprised.
Right? I know. And so Datadog puts out this survey every year, which I think is really
interesting. And that's a number that always surprised me is just that people are already
paying for this capacity, right? It's sitting there, it's on their AWS bill already. And with
that average utilization, a lot of the stuff that we're doing in other clusters or while we're
moving that data can actually just be done right there where the data is being emitted. And also,
if we're doing things like filtering, we can lower egress charges. There's lots of really,
really good goodness that we can do by pushing that processing further closer to its origination
point. You know, the timing of this episode is somewhat apt,
because as of the time that we're recording this,
I spent most of yesterday troubleshooting and fixing my home wireless network,
which is a whole Ubiquiti-managed thing.
And the controller was one of their all-in-one box things
that kept more or less power cycling for no apparent reason.
How do I figure out why it's doing that?
Well, I'm used to these days doing
everything in a cloud environment where you can instrument things pretty easily, where things
start and where things stop is well understood. Finally, I just gave up and used a controller
that's sitting on an EC2 instance somewhere, and now, great, now I can get useful telemetry out of
it because now it's stuff I know how to deal with. It also turns out that, surprise, my EC2 instance
is not magically restarting itself due to heat issues. What a concept. So I have a newfound appreciation for the fact that, oh yeah, the on-prem world still exists. But there's a lot of
data centers out there. There are a lot of agents living all kinds of different places. And workloads
continue to surprise me even now, just looking at my own client base. It's a very diverse world
when we're talking about whether things are on-prem or whether they're in cloud environments.
Well, also, there's a lot of agents on every endpoint,
period, just due to the fact that security guys want an agent, the observability guys want an
agent, the logging people want an agent. And then suddenly, I'm looking at every endpoint,
cloud, on-prem, whatever, and there's 8, 10 agents sitting there. And so I think a lot of the
opportunity that we saw was we can unify the data collection for metric type
of data. So we have some really cool defaults. This is a lot of the things where I think people
don't focus much on kind of the end user experience. Like let's have reasonable defaults.
Let's have the thing turn on and have most people's needs met without tweaking any knobs or buttons, without diving into YAML files and poring over documentation trying to figure out exactly how I need to configure this thing. Let's collect metric data. Let's collect log data.
Let's do it all from one central place with one agent that can send that data to multiple places.
And I can send it to Grafana Cloud if I want to. I can send it to Logz.io. I can send it to Splunk.
I can send it to Elasticsearch. I can send it to AWS's new Elasticsearch-y thingy that we don't know what they're going to call yet after the lawsuit.
Any of those can be done right from the endpoint from like a rich graphical experience
where I think there isn't really a desire for people to have to jump into these configuration files, because for a lot of these users, this is a part-time job.
And so, hey, if I need to go set up data collection,
do I want to have to learn
about this detailed YAML file configuration
that I'm only going to do once or twice? Or should I be able to do it in an easy, intuitive way
where I can just sit down in front of the product, get my job done, and move on without having to go
learn some sort of new configuration language? Once upon a time, I saw an early circa 2012,
2013 talk from Jordan Sissel, who is the creator of Logstash. And he talked a lot about
how challenging it was to wind up parsing all of the variety of log files out there.
Even something as relatively straightforward, wink, wink, nudge, nudge, as timestamps was an
absolute monstrosity. And a lot of people have been talking in recent years about OpenTelemetry
being the lingua franca that everything speaks.
So that is the wave of the future.
But I've got to level with you.
Looking around, it feels like these people are living in a very different reality than
the one that I appear to have stumbled into.
Because the conversations people are having about how great it is sound amazing, but nothing
that I'm looking at, granted from a very particular point of view,
seems to be embracing it or supporting it. Is that just because I'm hanging out in the wrong places, or is it still a great idea whose time is yet to come, or something else?
So I think a couple things. One is, every conversation I have about open telemetry
is always about what it will be. It's always in the future. And there's certainly a lot of interest. We see
this from customer after customer.
They're very interested in OpenTelemetry and what the OpenTelemetry strategy is.
But as an example, OpenTelemetry logging is not yet a finalized specification.
They believe that they're still six months to a year out.
It seems to be perpetually six months to a year out there.
They are finalized for metrics and they are finalized for tracing.
Where we see OpenTelemetry tends to be with companies like Honeycomb, companies like Datadog with their tracing product, or Lightstep.
So for tracing, we see OpenTelemetry adoption. But tracing adoption is also not that high either relative to just general metrics and logs.
Yeah, the tracing implementations that I've seen, for example, Epsagon did this super well, where it would take a look at your Lambda functions built into an application and say, ah, we're going to go ahead and instrument this automatically using layers or extensions for you. And life was good, because suddenly you got very detailed breakdowns of exactly how data was flowing in the course of a transaction through 15 Lambda functions. Great. With everything else I've seen, it's, oh, you have to instrument all these things by hand. Let me shortcut that for you: that means no one's going to do it. They
never are. It's anytime you have to do that undifferentiated heavy lifting of making sure
that you put the finicky code just so into your application's logic, it's a shorthand for it's
only going to happen when you have no other choice. And I think that trying to surface that burden to the developer instead of building it into the platform so they don't have to think about it
is inherently the wrong move. I think there's a strong belief in Silicon Valley that,
similar to Hollywood, that the biggest export Silicon Valley is going to have is culture.
And so that's going to be this culture of developers supporting their stuff in production.
I'm telling you I sell to banks and governments and telcos, and I don't see that culture prevailing. I see an application developed by Accenture that's
operated by Tata. That's a lot of inertia to overcome and a lot of regulation to overcome
as well. And so we can say that, hey, separation of duties isn't really a thing, and developers
should be able to support all their own stuff in production. I don't see that happening.
It may happen.
It'll certainly happen more than zero.
And tracing is predicated on the whole idea that the developer is scratching their own
itch, like that I'm in production troubleshooting this.
And so I need this high fidelity trace level information to understand what's going on
with this one user's experience.
But that doesn't tend to be in the enterprise how things are actually troubleshot.
And so I think that that more than anything
is the headwind that's slowing down
distributed tracing adoption.
It's because you're putting the onus
on solving the problem on a developer
who never ends up using
the distributed tracing solution to begin with,
because there's another operations department over there
that's actually operating the thing on a day-to-day basis.
Having come from one of those operations departments myself, the way that I would
always fix things was, you know, in the era that I was operating in, it made sense. You'd SSH into
a box and kick the tires, poke around, see what's going on, look at the logs locally,
look at the behaviors the way you'd expect it to. These days, that is considered a screamingly bad
anti-pattern, and it's something that companies
try their damnedest to avoid doing at all. When did that change, and what is the replacement for
that? Because every time I ask people for the sorts of data that I would get from that sort
of exploration when they're trying to track something down, I'm more or less met with blank
stares. Yeah, well, I think that's a huge hole, and it's one of the things that we're actually trying to do with our new product: how do I replicate that Linux command line experience? So, for example, something simple: we'd like to think that these nodes are all ephemeral, but there's still a disk, whether it's virtual or not, and that thing sometimes fills up. So how do I even do the simple thing, like df -kh, and see how much disk is there? If I don't already have all the metrics collected that I needed,
or I need to go dive deep into an application
and understand what that application is doing,
or seeing what files it's opening,
or what log files it's writing even.
Let's give some good examples.
How do I even know what files an application is writing?
Actually, all that information is all there.
We can go discover that.
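Everything in that list is, in fact, discoverable from the node itself with nothing beyond the standard library. A rough sketch of the idea (the /proc inspection is Linux-specific, and this is an illustration of the kind of local interrogation an agent can do, not how Cribl Edge actually does it):

```python
# What an agent can answer locally instead of a human SSH-ing in to
# run `df -kh` or `lsof`: disk usage and a process's open files.

import os
import shutil

def disk_report(path="/"):
    """Rough equivalent of one `df -kh` row for the given mount point."""
    usage = shutil.disk_usage(path)
    return {
        "total_gib": usage.total / 2**30,
        "used_pct": 100 * usage.used / usage.total,
    }

def open_files(pid="self"):
    """Files a process currently has open, read from /proc/<pid>/fd (Linux)."""
    fd_dir = f"/proc/{pid}/fd"
    if not os.path.isdir(fd_dir):        # not Linux, or no such process
        return []
    files = []
    for fd in os.listdir(fd_dir):
        try:
            files.append(os.readlink(os.path.join(fd_dir, fd)))
        except OSError:                   # fd closed while we were looking
            pass
    return files

print(disk_report())
```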
So some of the things that we're doing with Edge
is trying to make this rich interactive experience where you can actually teleport into the end node, see all the processes that are running, and get a view, without having to SSH in or exploit holes in various pieces of software, really trying to replicate getting you that high-fidelity information without collecting every log and moving all of that data. The data is worthless until it isn't worthless anymore. And so why do we even move it? Why don't
we provide a better experience for getting at the data at the time that we need to be able to get
at the data? Or the other thing that we get to change fundamentally is if we have the edge
available to us, we have way more capacity. I can store a lot of information in a few kilobytes of RAM on every node. But if I bring thousands of nodes into one central place, now I need a massive amount
of RAM and a massive amount of cardinality when really what I need is the ability to
actually go interrogate what's running out there.
The thing that frustrates me the most is the way that I go back and find my old debug statements,
which is, you know, I print out whatever it is that the current status is.
And so I can figure out where something's breaking.
Got here.
Yeah.
I do it within AWS Lambda functions, and that's great.
And I go back and I remove them later when I notice how expensive CloudWatch logs are getting.
Because at 50 cents per gigabyte of ingest on those things, and you have that Lambda function firing off a fair bit, that starts to add up when you've been excessively wordy with your print statements.
It sounds ridiculous, but okay, then you're storing it somewhere. If I want to take that
log data and have something else consume it, that's nine cents a gigabyte to get it out of AWS,
and then I'm going to want to move it again from wherever it is over there, potentially to a third
system, because why not? And it seems like the entire purpose of this log data is to sit there
and be moved around because every
time it gets moved, it winds up somehow costing me yet more money. Why do we do this?
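The arithmetic Corey is gesturing at is easy to make concrete. Using the rates quoted in the conversation (roughly $0.50/GB for CloudWatch Logs ingestion and $0.09/GB for data transfer out of AWS; actual pricing varies by region, tier, and service):

```python
# Back-of-the-envelope log cost math using the rates quoted above.

INGEST_PER_GB = 0.50   # CloudWatch Logs ingestion, per the conversation
EGRESS_PER_GB = 0.09   # data transfer out of AWS, per the conversation

def monthly_log_cost(gb_per_day, hops_out_of_aws=1):
    """Ingest once, then pay egress each time the data leaves AWS."""
    gb = gb_per_day * 30
    return gb * (INGEST_PER_GB + hops_out_of_aws * EGRESS_PER_GB)

# A chatty Lambda emitting 5 GB/day, copied out to one external system:
print(f"${monthly_log_cost(5):,.2f}/month")   # prints $88.50/month
```

Filtering the wordy print statements at the source cuts every one of those line items at once, which is the whole argument for processing at the origination point.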
I mean, it's a great question because one of the things that I think we decided 15 years ago
was that the reason to move this data was because that data may go poof. So it was on a, you know, back in my day, it was an HP DL360 1U rack mount server that I threw in there and it had RAID 0 disks.
And so if that thing went dead, well, we didn't care.
We'd replace it with another one.
But if we wanted to find out why it went dead, we wanted to make sure that the data had moved before the thing went dead.
But now that DL360 is a VM.
Yeah, or a container that is going to be gone in 20 minutes.
So yeah, you don't want to store it locally on that container.
But disks are also a fair bit more durable than they once were as well.
And S3 talks about its 11 nines of durability.
That's great and all, but most of my application logs don't need that.
So I'm still trying to figure out where we went wrong.
Well, I think it was right for the time.
And I think now that we have durable
storage at the edge where that block storage is already replicated three times and we can
reattach, if that box crashes, we can reattach a new compute to that same block storage.
Actually, AWS has some cool features now. You can actually attach multiple VMs to the same block
store. So we could actually even have logs being written by one VM, but processed by another VM.
And so there are new primitives available to us in the cloud, which we should be going back and re-questioning
all of the things that we did 10 to 15 years ago and all the practices that we had,
because they may not be relevant anymore, but we just never stop to ask why.
Yeah, multi-attach was rolled out with their io2 volumes, which are spendy but great,
and they do warn you that you need a file system that actively supports that and applications that are aware of it.
But cool, they have specific use cases that they're clearly imagining this for.
But 10 years ago, we were building things out.
Ooh, EBS, how do I wind up attaching that from multiple instances?
The answer was, ooh, don't do that.
And that shaped all of our perspectives on these things. Now, suddenly, you can. Is that "ooh, don't do that" gut-visceral reaction still valid? People don't tend to go back and
re-examine the why behind certain best practices until long after those best practices are now
actively harmful. And that's really what we're trying to do is to say, hey, should we move log data anymore if it's at a durable place at the edge? Should we move
metric data at all? Like, hey, we have these big TSDBs that have huge cardinality challenges, but
if I just had all that information sitting in RAM at the original endpoint, I could store a lot of
information and barely even touch the free RAM that's already sitting out there at that endpoint.
So how to get at that data, like how to make that a rich user experience so that we can query it?
Now, we have to build some software to do this, but we can start to question from first principles,
hey, things are different now.
Maybe we can actually revisit a lot of these architectural assumptions,
drive costs down, give more capability than we actually had before for fundamentally cheaper.
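The "keep recent telemetry in RAM at the endpoint and query it on demand" idea can be sketched as a small fixed-size ring buffer per metric. This is an illustration of the architecture being described, not an actual Cribl Edge API; the class and field names are invented for the example.

```python
# Sketch: each node keeps a bounded in-memory window of recent samples
# and answers queries on demand, instead of shipping every data point
# to a central TSDB with its cardinality and storage costs.

from collections import deque

class EdgeMetrics:
    def __init__(self, max_samples=1024):
        self.series = {}                 # metric name -> ring buffer
        self.max_samples = max_samples

    def record(self, name, value):
        buf = self.series.setdefault(name, deque(maxlen=self.max_samples))
        buf.append(value)                # old samples age out automatically

    def query(self, name):
        buf = self.series.get(name, ())
        if not buf:
            return None
        return {"last": buf[-1], "avg": sum(buf) / len(buf), "n": len(buf)}

node = EdgeMetrics(max_samples=3)
for pct in (41, 44, 97, 99):             # only the newest 3 samples are kept
    node.record("disk.used_pct", pct)
print(node.query("disk.used_pct"))
```

Memory use is bounded by max_samples regardless of how long the node runs, which is why this barely touches the free RAM already sitting on the endpoint.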
And that's kind of what Cribl does as we're looking at software, is to say,
man, like let's question everything and let's go back to first principles. Why do we want this
information? Well, I need to troubleshoot stuff. Okay. Well, if I need to troubleshoot stuff,
well, how do I do that? Well, today we move it, but do we have to, do we have to move that data?
No, we could probably give you an experience where you could dive right into that endpoint
and get really, really high fidelity data without having to pay to move that and store it
forever. Because also like telemetry information, it's basically worthless after 24 hours. Like if
I'm moving that and paying to store it, then now I'm paying for something I'm never going to read
back. This episode is sponsored in part by our friends at Vulture, spelled V-U-L-T-R, because they're all about helping save money,
including on things like, you know, vowels.
So what they do is they are a cloud provider
that provides surprisingly high-performance cloud compute
at a price that, well, sure,
they claim it is better than AWS's pricing,
and when they say that, they mean that it's less money.
Sure, I don't dispute that.
But what I find interesting is that it's predictable.
They tell you in advance on a monthly basis what it's going to cost.
They have a bunch of advanced networking features.
They have 19 global locations and scale things elastically,
not to be confused with openly, which is apparently elastic and open.
They can mean the same thing sometimes. They have had over a million users. Deployments take less
than 60 seconds across 12 pre-selected operating systems. Or if you're one of those nutters like me,
you can bring your own ISO and install basically any operating system you want.
Starting with pricing as low as $2.50 a month
for Vulture Cloud Compute,
they have plans for developers and businesses of all sizes,
except maybe Amazon,
who stubbornly insists on having something of the scale
all on their own.
Try Vulture today for free
by visiting vulture.com slash screaming
and you'll receive $100 in credit.
That's v-u-l-t-r dot com slash
screaming. At worst, you wind up figuring out, okay, I'm going to store all that data going back
to 2012, and it's petabytes upon petabytes. And great, how do I actually search for a thing? Well,
I have to use some other expensive kind of compute that's going to start diving through all of that, because the way I set up my partitioning isn't aligned with anything like recency or time period.
So great. Every time I want to look at what happened 20 minutes ago, I'm looking at what
happened 20 years ago. And that just gets incredibly expensive, not just to maintain,
but to query and the rest. Now, to be clear, yes, this is an anti-pattern. This is not how things should be set up. But how should they be
set up? And is the collective answer to that right now actually what's best? Or is it still
hearkening back to old patterns that no longer apply? Well, the future here is just unevenly
distributed. So there's, you know, I think an important point about us or how we think about
building software is with this customer's first attitude and fundamentally bringing them choice.
Because the reality is that doing things the old way may be the right decision for you. You may have
compliance requirements that say, there's a lot of financial services institutions, for example,
like they have to keep every byte of data written on any endpoint for seven years. And so we have to
accommodate their requirements. Like, is that the right requirement? Well, I don't know. The
regulator wrote it that way. So therefore I have to do it, whether it's the right thing or the
wrong thing for the business, I have no choice.
And their decisions are just as right as the person who says this data is worthless and it
should all just be thrown away. We really want to be able to go and say like, hey,
what decision is right? We're going to give you the option to do it this way. We're going to give
you the option to do it this way. Now, the hard part, and when it comes down to like marketing,
it's like you want to have this really simple message, like this is the one true path.
And a lot of vendors are this way.
There's this new, wonderful, right, true path
that we are going to take you on
and follow along behind me.
But the reality is enterprise worlds are gritty and ugly
and they're full of old technology and new technology.
And they need to be able to support
getting data off the mainframe
the same way as they're doing
a brand new containerized microservices application.
In fact, that brand new containerized
microservices application is probably talking to the mainframe
through some API.
And so all of that has to work at once.
Oh, yeah.
And it's all of our payment data is in our PCI environment.
That PCI environment needs to have every byte logged.
Great.
Why is three quarters of your infrastructure considered the PCI environment?
Maybe you can constrain that at some point and suddenly save a whole
bunch of effort, time, money, and regulatory drag on this. But as you go through that journey,
you need to not only have a tool that will work when you get there, but a tool that will work
where you are today. And a lot of companies miss that mark too. It's, oh, once you modernize and
become the serverless success story of the decade, then our product is going to be right for you.
Great. We'll send you
a postcard if we ever get there, and then you can follow up with us. Alternately, it's, well, yeah,
this is how we are today, but we have visions of a brighter tomorrow. You've got to be able to meet
people where they are at any point of that journey. One of the things I've always respected about
Cribl has been the way that you very fluently tell both sides of that story.
And it's not their fault.
Yeah.
Most of the people who pick a job, they pick the job because like, look, I live in Kansas
City, Missouri, and there's this data processing company that works primarily on mainframes.
That's right down the road.
And they gave me a job and it pays me $150,000 a year.
And I got a big house and things are great.
And I'm a sysadmin sitting there.
I don't get to play with the new technology.
Like that customer is just as an applicable customer. We want to help them exactly the same as the new Silicon Valley hip
kid who's working at a venture-backed startup that's doing everything natively in the cloud.
Those are all right decisions depending on where you happen to find yourself. And we want to support
you with our products no matter where you find yourself on the technology spectrum.
Speaking of old and new and the trends of the
industry, when you first set up this recording, you mentioned, oh yeah, we should make it a point
to maybe talk about the acquisition, at which point I sprayed coffee across my iMac.
Thanks for that. Turns out it wasn't your acquisition we were talking about so much as it
is the, at the time we recorded this, the yet-to-close rumored acquisition of Splunk by Cisco.
I think it's both interesting and positive for some people and sad for others. I think
Cisco is obviously a phenomenal company. They run the networking world. The fact that they've
been moving into observability, they bought companies like AppDynamics, and we were talking
about Epsagon before the show. ServiceNow bought Lightstep. There's a lot of acquisitions in this space. I think when it comes to something like Splunk, Splunk is a fast-growing company compared to Cisco.
And so for them, this is something that they think that they can put into their distribution
channel.
And what Cisco knows how to do is to sell things.
They're very good at putting things through their existing sales force and really amplifying
the sales of that particular thing that they have
just acquired. That being said, I think for a company that was as innovative as Splunk,
I do find it a bit sad with the idea that it's going to become part of this much larger behemoth
and not really probably driving the observability and security industry forward anymore because I
don't think anybody really looks at Cisco as a company that's
driving things, not to slam them or anything, but I don't really see them as driving the industry
forward. Somewhere along the way, they got stuck. And I don't know how to reconcile that because
they were a phenomenally fast-paced, innovative company, briefly the most valuable company in the
world during the dot-com bubble. And then they just sort of stalled out somewhere. And on some level, not to talk smack about it, but it feels like the level of innovation we've
seen from Splunk has tapered off over the past half decade or so. And selling to Cisco feels almost
like a tacit admission that they are effectively out of ideas. And maybe that's unfair.
I mean, we can look at the track record of what's been shipped over the last five years from Splunk. And again, they're a partner, their customers are great. I think they still have
the best log indexing engine on the market. That was their core product and what has made them the
majority of their money. But there's not been a lot new. And I think objectively, we can look at
that without throwing stones and say like, well, what's net new? You bought SignalFx, like good for
you guys, like that seems to be going well. You've launched your observability suite based off of these acquisitions.
But organic product-wise, there's not a lot coming out of the factory.
I'll take it a bit further slash sadder. We take a look at some
great companies that were acquired. OpenDNS,
Duo Security, SignalFx, as you mentioned, Epsagon,
ThousandEyes. And once they've gotten acquired
by Cisco, they all more or less seem to be frozen in time, like they're trapped in amber,
which lends itself to the natural dinosaur analogy that I'll probably make in a less
formal setting. It just feels like once a company is bought by Cisco, their velocity peters out,
a lot of their staff leaves, and what you see
is what you get. And I don't know if that's accurate, and I'm just not looking in the right
places, but every time I talk to folks in the industry about this, I get a lot of knowing nods
that are tied to it. So whether that's true or not, that is very clearly, at least in some
corners of the market, the active perception.
It's a very real fact that, even in very large companies,
innovation is driven by a core handful of people. And when those people start to leave,
the innovation really stops. It's those people who think about things from first principles, like, why are we doing this? What can we do differently? And they're the type of drivers that drive change.
So Frank Slootman wrote a book recently called Amp It Up that I've been reading over the
last weekend.
And he has this article that was on LinkedIn a while back called Drivers vs. Passengers.
And he's always looking for drivers.
And those drivers tend to not find themselves as happy in bigger companies.
And they tend to head for the exits.
And so then you end up with a lot of the passenger type of people, the people who are like, they'll carry it forward. They'll continue to scale it. The business will continue to grow at whatever rate it's going to grow. But you're probably not going to see a lot of the net new stuff. And I'll put it in comparison to a company like Datadog, who I have a vast amount of respect for. I think they're an incredibly innovative company. And I think they continue to innovate, still driven by the founders. The people who created the original product are still there driving the vision, driving forward innovation.
And that's what tends to move the envelope is the people who have the moral authority inside of an
even larger organization to say, get behind me. We're going in this direction. We're going to go
take that hill. We're going to go make things better for our customers. And when you start to
lose those handful of really critical contributors, that's where
you start to see the innovation dry up.
Where do you see the acquisitions coming from?
Is it just at some point people shove money at these companies that get acquired that
is beyond the wildest dreams of avarice?
Is it that they believe that they'll be able to execute better on their mission than they
were independently?
These are still smart,
driven people who have built something. And I don't know that they necessarily see an acquisition as, well, time to give up and coast for a while and then I'll leave. But maybe it is. I've never
found myself in that situation. So I can't speak for sure. You kind of, I think, have to look at
the business and then whoever's running the business at that time, and I sit in the CEO
chair, I think you have to look at the business and say, what do we have inside the house here? What more can we do? If we think that there's
the next billion-dollar or multi-billion-dollar product sitting here, even just in our heads,
but maybe in the factory and being worked on, then we should absolutely not sell because the
value is still there. We're going to grow the company much faster as an independent entity
than we would inside of a larger organization.
But if you're the board of directors and you're looking around and saying like,
hey, look, like I don't see another billion dollar line of business at this scale, right?
If you're Splunk scale, right?
I don't see another billion dollar line of business sitting here.
We could probably go acquire it.
We could try to add it in.
But, you know, in the case of something like a Splunk, I think, you know, they're looking for a new CEO right now.
So now they have to go find a new leader who's going to come in, re-energize and kind of reboot that. But that's
the options that they're considering, right? They're like, do I find a new CEO who's going
to reinvigorate things and be able to attract the type of talent that's going to lead us to the next
billion dollar line of business that we can either build inside or we can acquire and bring in-house?
Or is the right path for me just to say, okay, well, somebody like Cisco is interested. Or the
other path that you may see them go down is something like Silver Lake. So Silver Lake put a billion dollars into
the company last year. And so they may be looking at it and saying, OK, well, we really need to do
some restructuring here. And we want to do it outside the eyes of the public market. We want
to be able to change pricing model. We want to be able to really do this without having to worry
about the stock price's massive volatility because we're making big changes. And so I would say that
there's probably a few big options they're considering. Like, do we sell to Cisco? Do we sell to Silver Lake? Or do we
really take another run at this? And those are difficult decisions for the stewards of the
business. And I think it's a different decision if you're the steward of the business that created
the business versus the steward of the business for whom this is, I've been here for five years
and I may be here for five years more. For somebody like me, a company like Cribl is
literally the thing I plan to leave on this earth.
Yeah.
Do you have that sense of personal attachment to it?
On some level with the Duckbill Group, that's exactly what I'm staring at, where it's: great,
someone wants to buy the Last Week in AWS media side of the house.
Great.
Okay.
What is that, really, beyond me?
Because so much of it's been shaped by my personality.
There's an audience, sure, but it's a skeptical audience, one that doesn't generally tend to respond well
to mass market generic advertisements.
So monetizing that is not going to go super well.
All right, we're going to start doing data mining on people.
Well, that's explicitly against the terms of service
people signed up for, so good luck with that.
So much starts becoming bizarre and strange
when you start looking at building something
with the idea of, oh, in three years,
I'm going to unload this puppy and make it someone else's problem.
The argument is that by building something with an eye towards selling it, you build a better
structured business. But it also means you potentially make trade-offs that are best not
made. I'm not sure there's a right answer here. In my spare time, I do some investments,
angel investments, and that sort of thing. And that's always a red flag for me when I meet a
founder who's like, in three to five years, I plan to sell it to these people.
If you don't have a vision for how you're fundamentally going to alter the marketplace
and our perception of everything else, you're not dreaming big enough. And that to me doesn't look
like a great investment. It doesn't look like the, how do you attract employees in that way?
Like, okay, our goal is to work really hard for the next three years so that we will be
attractive to this other bigger thing.
They may be thinking it on the inside as an available option, but if you think that's
your default option when starting a company, I don't think you're going to end up with
the outcome that is truly what you're hoping for.
Oh, yeah.
In my case, the only acquisition story I see is some large company buying us largely just to
shut me up.
But that turns out to be
kind of expensive, so all right. I also don't think it'd serve any of them nearly as well as
they think it would. Well, you'll just become somebody else on Twitter. Yeah, time to change
my name again. Here we go. So if people want to go and learn more about Cribl Edge,
where can they do that? Yeah, Cribl.io. And then if you're more of a
technical person and you'd like to understand the specifics, docs.cribl.io. That's where I always
go when I'm checking out a vendor, just skip past the main page and go straight to the docs. So
check that out. And then also, if you're wanting to play with the product, we make
available online education called Sandboxes at sandbox.cribl.io, where you can go spin up your
own version of the product, walk through some interactive tutorials, and get a view on how it might work for you.
Such a great pattern, at least for the way that I think about these things. You can have flashy
videos. You can have great screenshots. You can have documentation that is the finest thing on
this earth. But let me play with it. Let me kick the tires on it, even with a sample data set.
Because until I can do that, I'm not really going to understand where the product starts
and where it stops.
That is the right answer from where I sit.
Again, I understand that everyone's different.
Not everyone thinks like I do, thankfully.
But for me, that's the best way I've ever learned something.
I love to get my hands on the product.
And in fact, I'm always a little bit suspicious of any company when I go to their web page
and I can't either sign up for the product or I can't get to the documentation
and I have to talk to somebody in order to learn. That's pretty much, I'm immediately
going to the next person in that market to go look for somebody who will let me.
Thank you again for taking so much time to speak with me. I appreciate it. As always,
it's a pleasure. Thanks, Corey. Always enjoy talking to you.
Clint Sharp, CEO and co-founder of Cribl. I'm cloud economist Corey Quinn, and this is Screaming
in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast
platform of choice, whereas if you've hated this podcast, please leave a five-star review on your
podcast platform of choice, along with an angry comment, and when you hit submit, be sure to
follow it up with exactly how many distinct and disparate logging systems that obnoxious comment
had to pass through on your end of things.
If your AWS bill keeps rising and your blood pressure is doing the same,
then you need the Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying.
The Duckbill Group works for you, not AWS.
We tailor recommendations to your business, and we get to the point.
Visit duckbillgroup.com to get started.
This has been a HumblePod production. Stay humble.