Screaming in the Cloud - The Joy of Building Enterprise Software with Ben Sigelman
Episode Date: March 25, 2020
About Ben Sigelman: Ben Sigelman is a co-founder and the CEO at LightStep, a co-creator of Dapper (Google's distributed tracing system), and co-creator of the OpenTracing and OpenTelemetry projects (both part of the CNCF). Ben's work and interests gravitate towards observability, especially where microservices, high transaction volumes, and large engineering organizations are involved.
Links Referenced:
OpenTracing: https://opentracing.io/
OpenTelemetry: https://opentelemetry.io/
Twitter: https://twitter.com/el_bhs
Email: bhs@gmail.com
This podcast: http://ScreamingintheCloud.com
Transcript
Hello and welcome to Screaming in the Cloud with your host, cloud economist Corey Quinn.
This weekly show features conversations with people doing interesting work in the world of cloud,
thoughtful commentary on the state of the technical world,
and ridiculous titles for which Corey refuses to apologize.
This is Screaming in the Cloud.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
This promoted episode is brought to you by LightStep.
And as a result, I am speaking with Ben Sigelman,
founder and CEO at LightStep.
Ben, welcome to the show.
Hi, Corey.
Thanks for having me.
You have an interesting backstory. We'll get to the whole modern LightStep story,
but originally, some folks are born in the cloud. You instead were born at Google.
You were a co-creator of Dapper, which is, to my understanding, their internal distributed tracing system. And you've done a lot of open source work too, OpenTracing, then OpenTelemetry,
both part of the CNCF.
So you've been focusing on the monitoring slash observability slash don't ever get those two
words confused movement for a while now. What's your backstory? Where'd you come from?
Well, my mother and father lived in, no, let's see. Where did I come from? I was in college
and I started off freshman year with all of the seniors getting like a thousand job offers in 1999.
And then I graduated in a very different environment, and the Internet bust had happened and things were looking a little grim.
And I just barely was able to get an offer anywhere, but I was lucky to get it from Google.
And I went there and worked on ads originally, which I frankly didn't enjoy at all. My first couple of months there, I was pretty
unhappy, actually. I didn't like the work I was doing. I didn't like the product. And then I had
a meeting with this woman named Sharon Perl, who at the time was working on five or six
different kind of computer science research projects. She had come over from Digital Equipment's
research lab, along with a bunch of the other old guard people at Google, like the first 100
employees. And she was super, super, super, well, she is super, super, super smart. And I just,
we had this literally serendipitous, completely random conversation. And she asked me what I was
doing. And I said, I didn't like it that much and asked her what she was doing, and she rattled off a list of several projects. One of them, I remember,
was this distributed blob store, kind of like an S3 or GCS type of thing. There was another one that
was like a global identity management system for all Google end users, etc. But there's this one
called Dapper that she'd prototyped with Mike Burrows and Luiz Barroso, who also came from these research labs in the late 90s, early 2000s.
And it just sounded fascinating.
Unfortunately, it wasn't done.
So, you know, no one could really use it.
But they realized that it was possible to trace requests across what we would now call microservices at Google. They didn't call it that, but you could watch a single request go from a web user all the way down through thousands of services and back to the user in 100 milliseconds
or whatever it was. And I just thought it was fascinating. And at the time, my direct manager
had 120 direct reports. I don't mean that his org was 120 people, but he had 120 direct reports,
one of which was me, which is to say he had no idea what I was doing, because how could you? And I just started working on Dapper instead. And I thought it was awesome. And it started to
work, actually, I got it to work in a pre-production environment and built a team around it and then
put that into production. And that was 2005. And I've just been pretty mesmerized by this
overall problem space. And I don't think that's ever really going to let
up. So I just keep on working on it. It's strange in that I had the privilege of working with a
Google VP many years ago who had left Google and was talking about some of the same principles of
tracing. Specifically, every system should expect an event identifier in it. And if it doesn't wind
up getting one of those, first, it should
add one, and secondly, it should raise an exception so that that can get caught as to
the fact there's something that is not participating in this event tracing system.
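That propagation rule can be sketched in a few lines. To be clear, this is an illustrative sketch, not code from Dapper or any real tracing system; the header name `X-Trace-Id` and the `ensure_trace_id` function are hypothetical names invented for the example.

```python
import uuid
import warnings

TRACE_HEADER = "X-Trace-Id"  # hypothetical header name, for illustration only

def ensure_trace_id(headers):
    """Return request headers guaranteed to carry an event identifier.

    If the incoming request has no trace ID, mint one so the request can
    still be followed downstream, and flag the gap so someone can find
    the service that isn't participating in tracing.
    """
    if TRACE_HEADER not in headers:
        headers = dict(headers, **{TRACE_HEADER: uuid.uuid4().hex})
        # Raising an exception outright would break the request; recording
        # a warning surfaces the violation without taking the service down.
        warnings.warn(f"request arrived without {TRACE_HEADER}; one was generated")
    return headers
```

An instrumented caller's identifier passes through untouched; an uninstrumented caller gets a fresh one plus a recorded complaint, which is how the non-participating service gets found.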
Now, what made this unique at the time was this was circa 2011 or so, and we all looked
at him like he had just grown a second head, because how big do you think this website
is, buddy?
Maybe that's fine for Google, but here in reality, that's not how the rest of us tend to conceptualize these things. Well, then we went into a microservices direction, which turns every outage
into its own version of a murder mystery. And now having something like that is no longer optional
for reasonable troubleshooting purposes. It sort of suffered on some level from the curse of being
too early to the market. It seems like you folks are right on time.
Yeah, at the moment, it does seem that way. When we started the company,
that was my biggest concern, actually. I wasn't worried about whether this would be necessary,
but I didn't know when. And I think in retrospect, our timing was right on target.
There are other products that came before LightStep that were in a similar vein that I
think were actually too early, you know, that started four or five years earlier. And they
built a great product, but all that you could install it on was like a PHP website or something.
And it just, you know, not like a Facebook kind of thing, but just like a blog. And it's just,
you don't need distributed tracing to manage a personal blog. So I think we did get the timing right on the nose, but frankly, that was an accident.
Just good luck. One of the things that I've always found to be a challenge for the distributed
tracing set has been in trying to articulate the value of what you do. For example,
I've gone round and round on this with the Honeycomb folks in previous venues and different
folks. And I know, for example, that you are legitimately in this space because whenever I
refer to you as being observability focused, Charity Majors doesn't punch me in the face.
So first, you have the charity not screaming at people seal of approval. So good job on that.
This is legitimate, not a branding exercise. I'll put it on my tombstone. Yeah, Charity is
my best frenemy in the
industry. We obviously compete at some level, but I think Honeycomb does great work and she doesn't
suffer fools. So I'm glad that so far she hasn't called me out or something like that.
Yeah, I mean, to be honest, I don't really like positioning LightStep as a distributed
tracing company. That's not really how I think of our mission or even really exactly our product.
I think our technology under the cover is certainly all about distributed tracing.
But that's, in my mind, an implementation detail.
We do see a lot of people in the market that have heard about distributed tracing,
can recognize at some instinctual level that being able to follow requests across services is going
to be part of the solution. And then they start looking for distributed tracing. And, you know,
frankly, if someone comes to our door and says, I want to buy tracing, and we have a tracing-based
solution, we can sell them that product. And I think there's a lot to be said for that. Like,
both parties benefit from that. But it's not really the way I think about the space. And I do
think that for distributed tracing going forward,
it's important that we talk about what it does
and not what it is, if that makes sense.
I mean, the point of distributed tracing for me
is just to satisfy the same old requirements
we've always had for observability or monitoring before.
We need to deploy our software faster.
We need to understand why it's
performing slowly and where. And then we need to reestablish regular performance if there's been
some kind of emergency. I mean, really, those three things are the driving forces behind every
observability product. And tracing just happens to be necessary if you want to do any of that in a
distributed environment like microservices
or serverless. One of the big challenges you have is that historically, describing what an
offering like this does presupposes, first, that someone has an extensive background in running large-scale
applications, and doing that in a very public, very global fashion. And secondly, that they have a spare 45 minutes
to sit there and listen to the in-depth exposition
that describes what the heck your thing does.
So has that problem gotten better?
In other words, is it easier to describe to people today
what you folks do than it was a few years back?
It has gotten a bit easier.
I think you hit the nail on the head though.
We were chatting before we started recording and I was explaining that I have no interest
in turning this into a product pitch.
And this question, it risks us going in that direction, which I really don't want.
But part of the reason it's gotten easier is that products like LightStep's product,
they solve
problems, right?
And I think it's much easier to explain these problems to people when they're actively feeling
like a lot of pain around them, as opposed to it being a theoretical exercise.
I think before people moved to microservices, we could draw diagrams of, you know, this
is what it's going to look like in a year or two years when you make all these transitions and when your system is distributed
and ordinary transactions no longer exist in only a single host.
It was a theoretical exercise at that point.
Now it's a much more visceral thing where we can say,
have you ever had an experience where you have two teams shouting at each other
because they can't decide which one is the root cause of the problem
and they both have dashboards saying they're healthy, yet it's clear that one or the other is actually responsible.
Or have you ever had Kafka just totally on fire because one of the 10 tenants is suddenly sending more traffic and you can't figure out which?
Or have you ever had a situation where you're dealing with a P0 emergency and the one person who actually understands how to debug it is on vacation?
Like these sorts of things are symptoms of microservices and deep multilayered systems.
And once people can identify those problems, it's much easier to say, well, let me explain how the sort of technology we're bringing to bear is relevant to those problems.
So it has gotten easier over the last couple of years, frankly, because the level of active pain has gone way, way, way up with, I think, the credible migration towards more distributed architectures in the last couple of years in ordinary mid-market enterprise companies. And well, let's go back in time a little bit, if we may, to originally, I don't believe
LightStep was aimed at the monitoring observability space at all.
To my understanding, you were something of a social media company, and then you had one
heck of a hard pivot.
Did it turn out that you just had, you sucked at telling a social media story, and then,
well, we've raised this money, may as well do something fun because we've made ourselves
unemployable along the way?
Or is there something more nuanced to it?
Yeah, it's not a well-known fact. It's not a secret. It just feels, I don't know,
it's not something that I expect people to ask about and I forget to tell them.
So LightStep per se, when I left Google, I really had a bee in my bonnet about Facebook,
actually, as a product. I thought it made people miserable.
I actually still think it makes people miserable.
And the observation was that most people, certainly including myself, are complicated.
And if you compare your inner experience as a human being to other people's carefully curated vacation photos, it doesn't feel very good.
And at this point, it's almost a trope. But when I left Google in 2012, that wasn't as well known. I wanted to create a social media
product where people were encouraged to be more candid and to be themselves. And then we would
connect each other, they could, I don't know, find some common ground. The funny thing about
the product, so I did, I managed to raise a seed round around that idea, which I'm
forever grateful for. I mean, it was a really fun thing to build. And I had a very small but very
high quality team. And we built a prototype of this and got it out in the app store and so on.
And the surprising thing was, one, it actually kind of worked. Like we had a bunch of people that
loved this product. They really loved it. And you'd say, what do you think of this product?
And they'd say, oh, this is the most important app on my phone. This has gotten me through really
hard times, that sort of stuff, which is great if you're building a social product to have that kind
of zeal. And then you'd ask the same people, okay, well, you know, who would you tell about this
product? And they would say, oh, I would never tell anybody about this. It's way too personal,
way too private. And so after about a year, after having the product in market, I decided that we had built
almost exactly what I wanted to build.
Like the vision had been achieved and it was a total failure.
I mean, the people that we retained were, I would say 90 something percent of them were
depressed introverts.
And I love depressed introverts.
A lot of my friends are depressed introverts, but they're a terrible, terrible audience
if you're trying to build a viral social media product.
They just won't talk about anything.
It's impossible to get them to talk about it.
So at that point, I told the investors, I wrote a board deck that one of them actually
anonymized and used with her other portfolio companies that won't admit that they're failing.
And I basically explained why this is never going to work.
And I said,
you can have your money back if you want it, but I'm done because I'm not interested in running
out the clock. Or I'm going to do something totally different. And I do think it was relevant
because prior to that, I had been working on this observability type stuff at Google. And I actually
really enjoyed it intellectually, but had this idea that I wanted to work on something that was more meaningful to society in a direct kind of human-to-human way.
And that experience building that social product was quite sobering for me.
First of all, I'm really bad at it.
I mean, really, really bad at it.
I think my intuition around what's going to work and what's not going to work is not that
good compared to how it is in other areas, like in the observability realm. Second of all, I think to win in those games, you have to
play dirty. And I don't like doing that. And the funny thing about enterprise software
is that when people are paying significant amounts of money for a product, they don't
just take your word for it. You actually have to deliver value. And it kind of goes back to high school economics
where it's a really,
it's a mutualistic thing for all parties.
Like a vendor can exist by amortizing the cost
of developing something really powerful
across many customers.
And the customers win
because they could never build something like this.
And, or if they did,
it would cost them way too much money.
So it's this thing where you have this really clean,
honest sales
process. And at the end of it, both parties feel like they're winning because they are. After
working in consumer, which I thought was frankly kind of depressing, I found it to be
a huge relief. And the reason that we're working on this is just that it's an area where we think
we're contributing actual value in a way that's tangible.
Like you can tell that you're doing something valuable because people pay for it and they
want to renew year after year.
And to me, that's a more validating feeling than trying to sneak a couple seconds of people's
time while they're in the bathroom or something like that, which is like how it felt on social
media, frankly.
Like it just wasn't that gratifying when it was all said and done.
So a common problem that you're going to see with a lot of companies that trend into the
monitoring slash observability slash yelled-at-by-Charity-Majors space is the propensity
to wind up going broad where, yeah, today you do, for example, distributed tracing.
Tomorrow you'll do log analytics.
The next week
you'll do alerting. And suddenly you're trying to be Datadog Junior. But we already have a Datadog.
And as you look at these companies continuing to expand to all of these different coverage areas,
it becomes very challenging to differentiate any of them other than that one area that they excel
at. First, you haven't done that. So how have you avoided it?
And secondly, what do you think drives that?
Well, I mean, we're not as old as Datadog.
So I mean, one way to not to do it
is just not to be around long enough, right?
But there's also why we wouldn't do that.
I don't want to throw too many stones at Datadog either.
I mean, they've obviously-
No, and to be clear,
I'm not trying to insult Datadog with that comparison. They're fantastic,
but they are the best of breed in this space. So for everyone in the newer generation trying
to become the next coming of Datadog, well, why? I can see the story around individual
components being awesome. What I'm not loving is this idea that everyone needs to be a broad
platform for all use cases. I think the problem in my mind,
it really comes back to what I was saying earlier about whether LightStep is a distributed tracing company.
Again, we use distributed tracing and it's the core of what we do. I do not think of us as a
distributed tracing company. The problem we solve is not distributed tracing, nor should it be.
I don't consider that positioning to be dishonest, but I do think it's confusing for the
market to have large vendors, whether it's Datadog or Splunk is also acquiring their way into a
similar position, right? I don't think it's helpful for the market to position the problem space in
terms of these technologies. Not just because it's confusing, because tracing is not a problem,
it's not even a solution. It's just a technology,
right? Like it solves nothing on its own. It's partly that. And I think, more importantly, you don't want to solve problems by having three or four different tabs open,
like having a tab open to the logging and tracing and metrics portions of some product suite is not
a useful workflow. In my mind, the things that
people are trying to do on a day-to-day basis are to deploy software with more confidence,
to improve performance, and or to recover from errors with haste, right? Like some kind of on-call
firefighting workflow. We can drill down into the ontology below that,
but those are the main things you're trying to accomplish
if you're actually an end user of, say, Datadog.
And the thing that my issue with Datadog's product strategy
is not so much the accumulation
of all these different data sources,
which I think actually is totally reasonable,
but the fact that they're positioned...
Oh, that's what you want if you're a Datadog customer, absolutely. But they're positioned almost as separate SKUs. In some
cases, they literally are separate SKUs you pay for separately. But they're actually
not integrating that data from a workflow standpoint in a way that I feel
would be very beneficial for their end users. I think it's because it's hard to do that, like, it's not that
they don't want to, I think if you watch their keynotes and stuff,
I think that's what they'd like to do as well,
but they haven't been able to do it
because there's too much gravity and too much velocity
around the products they do have for them to execute on that.
So I think the way you become...
Let me say one more thing.
I definitely hear vendors talking all the time
about building this platform or that platform. When you go and
talk to buyers, especially at larger organizations, nobody is saying that we want to have one vendor
for everything. I talked to a buyer at one of the major investment banks once and he was saying,
you know, we already buy 57 different monitoring products. So don't tell me you're going to sell me one tool to rule them all.
Of course, I asked him why he couldn't buy 58.
That's the question you've got to follow up with.
Exactly.
Seriously, maybe at some fantasy level they would like that,
but it's completely unrealistic because you're dealing with
maybe four or five different generations of application technology.
And so if nothing else, I mean, Datadog is great, but it doesn't really do a lot for
your mainframe, right?
It's like, I think you're going to, at the very least, have to integrate generationally.
And then I think you also have to do some integration across, you know, different business
units and pieces of the org that buy different tools for whatever reason.
And so the one platform thing isn't something I really hear from the market
as much as I hear it from vendors for what it's worth.
Now, going back to the heart of the question, though,
I think that the only answer in terms of the company that wins any of this stuff,
whether it's LightStep or someone else, is to actually focus on specific jobs to be done
and to try to solve them end-to-end within a single tool.
I do think that it's necessary to bring other forms of data to bear on that problem, which is
why LightStep, frankly, by the end of the year, I don't think will be thought of as a tracing
company per se as much as we are right now. I do think other forms of data are necessary.
What I think is a mistake is to position the product as a series of modules that are tightly coupled to specific types of data.
Like, for instance, metric data is mostly totally unused and totally useless in any given investigation.
There's a very small subset of metric data that's actually relevant.
In order to figure out what subset that is, you have to, I don't mean you should, but you have to be able to understand the relationships between different services on a per transaction basis.
The only way you can do that is to look at traces in the aggregate.
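What "traces in the aggregate" buys you can be shown with a toy example. The span fields and numbers below are invented for illustration and have nothing to do with LightStep's actual data model; the point is only that per-service rollups, not individual traces, are what tell you which telemetry is worth looking at.

```python
from collections import defaultdict
from statistics import median

# Each span records which service handled it and how long it took (ms).
# These fields and values are invented for this sketch.
spans = [
    {"service": "checkout", "duration_ms": 12},
    {"service": "checkout", "duration_ms": 480},
    {"service": "payments", "duration_ms": 35},
    {"service": "payments", "duration_ms": 41},
    {"service": "inventory", "duration_ms": 8},
]

def latency_by_service(spans):
    """Roll span durations up per service: count, median, and worst case."""
    grouped = defaultdict(list)
    for span in spans:
        grouped[span["service"]].append(span["duration_ms"])
    return {
        svc: {"count": len(d), "median_ms": median(d), "max_ms": max(d)}
        for svc, d in grouped.items()
    }
```

Here the spread between checkout's two samples (12 ms vs. 480 ms) points at intermittent slowness in that one service; for this investigation, the other services' metrics are the "totally unused and totally useless" subset.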
So in my mind, the tracing data, it's not the product.
In LightStep's product, you spend a very small amount of time actually looking at traces.
You spend a much larger amount of time looking at statistical aggregates drawn from those
traces, either directly or used to inform the interpretation
of other data, whether it's metrics or logs or whatever. So in my mind, the user needs to tell
you what they're trying to do. Are you trying to deploy software? Are you trying to solve a
performance problem? Are you trying to resolve some kind of page? That context is enough to
take all of this telemetry data, which is what we're talking about here, like metrics, logs,
traces, et cetera, our telemetry data. That context is enough to interpret the telemetry data
in a way that doesn't require some kind of advanced degree or dozens of years of experience
with tracing systems. And I don't think that Datadog's strategy is a bad one from a data
acquisition standpoint, but from a product standpoint, I think it ultimately leads to a really fragmented end user experience.
And I find that to be kind of problematic.
So that's how I see it.
And, you know, LightStep's overall strategy
is just to focus on specific workflows
and to be the best at that.
I mean, that's what we're trying to do.
And not to be terribly distracted
by the portfolio of telemetry data types that
various other companies are integrating or acquiring through partnership or otherwise.
And I think that that's a very fair assessment.
To be clear, I have no problem at all with Datadog providing this.
It's something that is, I think, the right move.
The problem I have is that you see so many companies that specialize in one thing when they're founded, and they do that one thing so well, that then it feels like they're suddenly
veering into the we've-got-to-do-everything story. And for example, LightStep does a phenomenal job
of effectively instrumenting observability into microservices applications. I don't know that you
would necessarily do nearly as well with log analytics, for example. The worry there
is the loss of focus on the one differentiating factor that makes everything
work. It's the same story as why no one has ever bought a multifunction printer that they liked.
It's do one thing, do it well, and leave the rest for other folks.
Yeah, I think I was talking to, this is early on in LightStep when we were just in customer
discovery mode. We didn't really have a product.
But I talked to someone who bought, well, I'll just say it.
They bought New Relic.
This is like in 2016 or something.
And I asked them if they liked it.
And they're like, no, not really.
And I was like, well, why do you buy it?
And they said, well, it's a B- at everything.
And I think that was supposed to be sort of a good thing, right? It's like they
didn't need to buy, they wanted to have fewer vendors, not more vendors, and that helped them
in that goal, but they weren't particularly enthusiastic about it. And to your point about
LightStep, if someone has a three-tier app, like if they've got some Java app sitting on top of
Oracle and that's the whole application, we would immediately walk away from that. I mean, you said log analytics.
To me, it's less about that particular data type
and more about the architecture.
LightStep is very focused on organizations
that have incorporated some microservices.
I mean, 100% of our customers still have a monolith as well,
but the point is that they're actually doing microservices
in some capacity, and that opens up a whole set of problems
that we're designed to
address. So I think it's funny when vendors claim that they're the right thing for everybody.
Everyone should be focused on a particular part of the market, and that's the part that we're
focused on. And I think that that's very fair. Now, what makes this interesting is that you're
also involved heavily with the OpenTelemetry project, which is a CNCF open source project. Do you find that that
is a, I guess, either a diversion of focus or a conflict of interest, given that you have a
private company that's aimed at something that is very similar, if not identical, from a naive
third-person point of view? I really don't think that LightStep and OpenTelemetry have that much
overlap, actually.
I mean, any of these companies or open source projects, if you were to look at things like
Jaeger, Prometheus, that sort of stuff, the problem can be segmented pretty neatly between
the acquisition of data, which in this case is telemetry data for observability, and the
interpretation and analysis of that data.
LightStep has long believed that the
acquisition of data really should be something that's done in the commons. This has a lot of
benefits for customers in that integrating with LightStep or anything else that supports open
telemetry is a matter of binding yourself to a portable, non-vendor-specific,
I don't want to use the S word, standard,
because it's not like an IEEE thing, right?
But the point is that it's a portable framework that you can use to integrate any number
of potential downstream analytical tools.
That decision is completely decoupled
from which of those analytical tools you want to use.
I think the fact of the
matter is that OpenTelemetry doesn't actually do anything. It doesn't present you with a UI.
It just gathers data in a way that is vendor neutral. And that's the point of that project.
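That split, neutral data acquisition in front of swappable analysis, can be sketched in miniature. To be clear, this is a toy and not the actual OpenTelemetry API; every class and method name below is invented for the example.

```python
import time

class Span:
    """Minimal span: what happened, under which trace, and for how long."""
    def __init__(self, name, trace_id):
        self.name = name
        self.trace_id = trace_id
        self.start = time.monotonic()
        self.duration = None

class Tracer:
    """Gathers spans; knows nothing about any particular vendor's backend."""
    def __init__(self, exporter):
        self.exporter = exporter  # any callable that accepts a finished Span

    def record(self, name, trace_id, work):
        span = Span(name, trace_id)
        result = work()  # run the instrumented operation
        span.duration = time.monotonic() - span.start
        self.exporter(span)  # hand off; analysis is someone else's job
        return result

# Swapping vendors means swapping the exporter, not re-instrumenting:
collected = []
tracer = Tracer(exporter=collected.append)
tracer.record("db.query", "trace-1", lambda: 42)
```

The instrumentation call sites never change; only the exporter does, which is the portability that decoupling acquisition from analysis is after.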
The reason that LightStep pursued that is partly just, frankly, trying to be customer focused.
That's what we think is best for our market. And so we want to bet on that technology. And then the other piece of it was that, you know,
if you go and talk to people who worked at New Relic and AppDynamics and their kind of glory
days when they were ascending very quickly, they were spending like 80 or 90 percent of their
engineering resources on agents, which are actually not even differentiated anymore. I mean, for a
while there, APM agents were the thing you really were buying,
and then the analytical tool was pretty basic.
That's kind of flipped over at this point,
where the analytics are getting much more elaborate,
mainly because of the rise of deep multilayered systems
and microservices and so on.
But the collection of telemetry data,
people expect it to be automatic at this point.
No one has patience for manual instrumentation.
And the idea with things like OpenTelemetry, and auto-instrumentation in OpenTelemetry, is to make that a shared responsibility of everyone who's trying to do observability,
rather than having every vendor repeat the cost of building all that in a proprietary way. Because that's how things were
several years ago. And the irony of all this is that the vendors that did that work, I mean,
they've spent a lot of muscle marketing those agents, but privately, they're very excited about
getting out of that business. I mean, it's not differentiating for them. And it's still a big
cost center. It's not something that their customers really benefit from anymore. And it
takes up a lot of the resources. So there's pretty wide alignment around the value of something like
OpenTelemetry. And with so many different competing vendors involved in the project,
at this point, I think there are like eight or nine of us. It's difficult for anyone to kind
of run away with the ball. I mean, there's a governance structure and so on. So I don't think
there's much of a risk of it turning into a mechanism for any one vendor to win or lose. The main thing I see is a potential
for us to have some kind of rising tide that our customers benefit from as well. And that sounds
like BS, frankly, but I mean every word of it. You're welcome to call me on it.
No, no, I accept that. One thing as well that I think has been extremely valuable from the perspective of looking at LightStep, understanding what it is, is the interactive sandbox that really takes you by the hand through using it in a production style environment.
It's handy to help folks get to the same level of experience. And it is useful and, credit where due, it only demands an
email address and not a 15 field form to start playing around with it. So if people are curious
about what LightStep does, I would encourage them to go and take a look at the interactive
sandbox at lightstep.com. Yeah, I think that's a good idea, mainly because people often ask us to describe what we do and how we're different.
And we can describe it in words, but we realized what people want to understand is how it's useful,
not how it's different.
And the sandbox environment allows you to walk through scenarios like deploying software or finding the root cause for an error or performance anomaly in a way that lets you do all the clicking.
And you can explore the whole product from there if you want, but it gives you some guardrails just
to actually solve the scenario. And people have found it to be quite educational, I think, just
in terms of how we built it. And we've actually had folks from much larger organizations, like the
Googles and Facebooks of the world, using it as well to help develop their own
internal approach to observability. So even if you have no interest in LightStep, I think it's still a worthwhile thing to check out
because a lot of the stuff that we're showing in there,
I think is actually somewhat novel and just kind of fun.
Many people have told us that's been a really helpful thing for them
just to understand the space better independent of LightStep.
Excellent.
Well, Ben, thank you so much for taking the time out of your day
to speak to me today.
If people want to hear more
about what you have to say,
where can they find you?
Yeah, you can find me on Twitter,
el underscore bhs,
like the Spanish article "el", then my initials, BHS.
And you're welcome to look me up
on the internet and send me email
or whatever.
I'm actually pretty good
about responding to that.
And of course, you know, LightStep is at lightstep.com.
The sandbox really is the best way to understand what we do if you're an engineer, but you're always
welcome to reach out to me through any channel if you want
to provide feedback or ask any questions about any of that stuff. I love talking with folks.
Thank you so much for taking the time to speak with me today. I really do appreciate it.
Thank you, Corey. It's been really fun.
Ben Sigelman, founder and CEO of LightStep.
I'm cloud economist Corey Quinn, and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review in Apple Podcasts.
If you've hated this podcast, please leave a five-star review in Apple Podcasts and make sure to instrument it appropriately so that we can trace where it entered and exited.
This has been this week's episode of Screaming in the Cloud.
You can also find more Corey at ScreamingInTheCloud.com or wherever fine snark is sold. This has been a HumblePod production.
Stay humble.