Screaming in the Cloud - The Need for Speed in Time-Series Data with Brian Mullen
Episode Date: November 29, 2022
About Brian
Brian is an accomplished dealmaker with experience ranging from developer platforms to mobile services. Before InfluxData, Brian led business development at Twilio. Joining at just thirty-five employees, he built over 150 partnerships globally from the company’s infancy through its IPO in 2016. He led the company’s international expansion, hiring its first teams in Europe, Asia, and Latin America. Prior to Twilio, Brian was VP of Business Development at Clearwire and held management roles at Amp’d Mobile, Kivera, and PlaceWare.
Links Referenced:
InfluxData: https://www.influxdata.com/
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the
Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
This episode is brought to you in part by our friends at Veeam.
Do you care about backups?
Of course you don't.
Nobody cares about backups.
Stop lying to yourselves.
You care about restores, usually right after you didn't care enough about backups. If you're tired
of the vulnerabilities, costs, and slow recoveries when using snapshots to restore your data,
assuming that you even have them at all, living in AWS land, there's an alternative for you.
Check out Veeam, that's V-E-E-A-M, for secure, zero-fuss AWS backup that won't leave you high and dry when it's time to restore.
Stop taking chances with your data.
Talk to Veeam.
My thanks to them for sponsoring this ridiculous podcast.
This episode is brought to you in part by our friends at Pinecone.
They believe that all anyone really wants is to be understood.
And that includes your users.
AI models combined with the Pinecone Vector database let your applications understand and act on what your users want without making them spell it out.
Make your search application find results on relevance instead of just tags,
and your security applications match threats by resemblance instead of just regular expressions.
Pinecone provides the cloud infrastructure that makes this easy, fast, and scalable.
Thanks to my friends at Pinecone for sponsoring this episode.
Visit pinecone.io to understand more.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
It's been a year, which means it's once again time to have a promoted guest episode
brought to us by our friends at InfluxData.
Joining me for a second time is Brian Mullen,
CMO over at InfluxData.
Brian, thank you for agreeing to do this a second time.
You're braver than most.
Thanks, Corey. I'm happy to be here.
Second time is the charm.
So it's been an interesting year, to put it mildly,
and I tend to have the attention span of a goldfish most days.
So for those who are similarly flighty,
let's start at the very top.
What is an InfluxDB slash InfluxData slash Influx?
When you're not sure which one to use,
just shorten it and call it good.
And why might someone need it?
Sure, so InfluxDB is what most people understand
our product as, pretty popular open source product,
been out for quite a while.
And then our company, InfluxData,
is the company behind InfluxDB.
And InfluxDB is where developers build IoT,
real-time analytics, and cloud applications,
typically all based on time series.
It's a time series data platform,
specifically built to handle time series data,
which we think about as any type of data
that is stamped in time in some way.
It could be metrics taken every one second,
every two seconds, every three seconds,
or some kind of event that occurs
and is stamped in time in some way.
So our product and platform is really specialized
to handle that technical problem.
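A point of the kind Brian describes is commonly written to InfluxDB as line protocol, a text format of the shape "measurement,tags fields timestamp". A minimal Python sketch of that format follows; the measurement, tag, and field names are hypothetical, and real client libraries handle escaping and type suffixes more carefully:

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Format one time-stamped point as InfluxDB line protocol:
    measurement,tag=val,... field=val,... timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

# A hypothetical temperature sensor reading, stamped in nanoseconds.
line = to_line_protocol(
    "temperature",
    {"device_id": "sensor-42", "site": "factory-1"},
    {"celsius": 21.5},
    1669680000000000000,
)
print(line)
```

The trailing nanosecond timestamp is the "stamped in time" part; everything else identifies which series the value belongs to.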
When last we spoke, I contextualized that
in the realm of an IoT sensor
that winds up reporting its device ID
and its temperature at a given timestamp.
That is sort of baseline stuff that I think aligns with what we're talking about.
But over the past year, I started to see it in a bit of a different light,
specifically viewing logs as time series data,
which hadn't occurred to me until relatively recently.
And it makes perfect sense on some level.
It's weird to contextualize what
Influx does as being a logging database, but there's absolutely no reason it couldn't be.
Yeah, it certainly could. So typically, we see the world of time series data in kind of two big
realms. One is, as you mentioned, think of it as the hardware or physical realm, devices and sensors.
These are things that are going to show up in a connected car and in a factory deployment,
in a renewable energy wind farm.
And those are real devices and pieces of hardware that are out in the physical world, collecting
data and emitting time series every one second or five seconds or 10 minutes or whatever
it might be.
But it also, as you mentioned, applies to call it the virtual world, which is really
all of the software and infrastructure that is being stood up to run applications and
services.
And so in that world, it could be the same, you know, it's just a different type of source,
but it's really kind of the same technical problem.
It's still time series data being stamped or data being stamped every, you know, one
second, every five seconds, in some cases every millisecond.
But it is coming from a source that is actually in the infrastructure.
It could be virtual machines.
It could be containers.
It could be microservices running within those containers.
And so all of those things together, both in the physical world and this infrastructure world, are all emitting time series data.
When you take a look at the broader ecosystem, what is it that you see that is being the most
misunderstood about time series data as a whole? For example, when I saw AWS talking about a lot
of things that they did in the realm of, for your data lake, I talked to clients of mine
about this and their response is, well, that'd be great, genius, if we had a data lake. It's,
what do you think those petabytes of nonsense in S3 are? Oh, those are the logs and the assets
and a bunch of other nonsense. Yeah, that's what other people are calling a data lake.
Do you see a similar lights-go-on moment when you talk to clients and prospective clients about what it is that they're doing that they just hadn't considered to be time series data previously?
Yeah.
In fact, that's exactly what we see with many of our customers is they didn't realize that all of a sudden they are now handling a pretty sizable time series workload.
And if you kind of take a step back and look at a couple of pretty obvious, but sometimes unrecognized trends in
technology. The first is cloud applications in general are expanding, both horizontally
and vertically. So that means the workloads that are being run in the Netflixes of the world, or
all the different infrastructure that's being spun up in the cloud to run these various,
you know, applications and services, those workloads are getting bigger and bigger.
Those companies and their subscriber bases and the amount of data they're generating
is getting bigger and bigger.
They're also expanding horizontally by region and geography.
So running Netflix, for example, running not just in the US, but in every continent and
probably every cloud region around the world.
So that's happening in the cloud world.
And then also in the IoT world, there's this massive growth of connected devices, both
net new devices that are being developed, kind of the next Peloton or the next climate control unit that goes in an apartment or house.
And also these longtime legacy devices that have been on a factory floor for a couple of decades, but now are being kind of modernized and coming online.
So if you look at all of that growth of the data sources now being built up in the cloud, and you look at all that growth of these connected devices,
both new and existing, that are kind of coming online,
there's a huge now exponential growth in the sources of data.
And all of these sources are emitting time series data.
You know, you can just think about a connected car,
not even a self-driving car, just a connected car,
your everyday kind of 2022 model.
And nearly every element of the car is emitting time series data.
It's engine components, you know, your tires, like what the climate inside of the car is, statuses of the engine itself.
And it's all doing that in real time.
So every one second,
every five seconds, whatever.
So I think in general,
people just don't realize
they're already dealing
with a substantial workload
of time series.
And in most cases,
unless they're, you know,
using something like Influx,
they're probably not, you know, especially tuned to handle it from a technology perspective.
So it's been a year. What has changed over on your side of the world since the last time we spoke?
It seems that, well, things continue and they're up and to the right. Well, sure,
generally speaking, you're clearly still in business. Good job. Always appreciative of
your custom, as well as the fact that, oh, good, even in a world where it seems like there's a macro recession in progress,
that there are still companies out there that continue to persist, and in some cases,
dare I say, even thrive. What have you folks been up to?
Yeah, it's been a big year. So first, we've seen quite a bit of expansion across use cases. So we've seen even
further expansion in IoT, kind of expanding into consumer, industrial, and now sustainability and
clean energy. And that pairs with what we've seen on fintech and cryptocurrency, gaming and
entertainment applications, network telemetry, including some of the biggest names in telecom,
and then a little bit more on the cloud side with cloud services, infrastructure, and dev tools and APIs.
So quite a bit more broad set of use cases we're now seeing across the platform.
And the second thing is, you might have seen it in the last month or so, is a pretty big announcement we had of our new storage engine.
So this was just announced earlier this month in November and was previously introduced to our community as what we called IOx, which is how it was known in the open source. And think of this really as a
rebuilt and reimagined storage engine, which is built on that open source project, InfluxDB IOx,
that allows us to deliver faster queries. And now pretty exciting for the first time,
unlimited time series, or cardinality, as it's known in the space.
And then also we introduced SQL for writing queries and BI tool support. For the first time, we're introducing SQL, which is the world's most popular data programming language, to our platform, enabling developers to query via the API in addition to our languages Flux and InfluxQL.
A long time ago, it really seems that the cloud took a vote, for lack of a better term,
and decided that when it comes to storage,
object store is the way forward.
It was a bit of a reimagining
from how we all considered using storage previously,
but the economics are at a minimum of 10 to 1
in favor of object store.
The latency is far better.
The durability is off the charts better.
You don't have to deal, at least in AWS land,
with the concept of availability zones and the rest.
Just from an economic and performance perspective,
provided the use case embraces it,
there's really no substitute.
Yeah, I mean, the way we think about storage is,
you know, obviously it varies quite a bit
from customer to customer with our use cases.
Especially in IoT, we see some use
cases where customers want to have data around for months and in some cases years. So it's a pretty
substantial data set you're often looking at. And sometimes those customers want to downsample
those. They don't necessarily need, in summary looking backward, every single piece of minutiae that they may have needed in real time. So really, we're in this kind of world where we're dealing with both high-fidelity, usually in-the-moment data and lower-fidelity data, when people can downsample and have a little bit more of a summarized view of what happened. So pretty
unique for us. And we have to kind of design the product in a way that is able to balance both of
those because that's what, you know, the customer use cases demand. It's a super hard problem to
solve. One of the reasons that you have a product like InfluxDB, which is specialized to handle this
kind of thing, is so that you can actually manage that balance in your application service and
setting your retention policy, etc.
That's always been something that seemed a little on the odd
side to me when I'm looking at a variety of different observability tools, where it seems that one of
the key dimensions that they all tend to, I guess, operate on and price on is retention period.
And I get it. You might not necessarily want to have your load balancer logs from 2012 readily
available and paying for the privilege, but it does seem that given the dramatic fall of archival storage
pricing, on some level, people do want to be able to retain that data just on the off chance it'll
be useful. Maybe that's my internal digital pack rat chiming in at this point, but I do believe
strongly that there is a correlation between how recent the data is and how useful it is
for a variety of different use cases.
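Downsampling, as Brian described it earlier, means replacing old high-fidelity points with summarized ones so retention stays affordable. A minimal sketch of window-averaged downsampling follows; the window size and readings are illustrative, and InfluxDB itself does this server-side rather than in client code like this:

```python
def downsample(points, window):
    """Average (timestamp, value) points into fixed windows.
    Each output entry is keyed by the window's start time."""
    buckets = {}
    for ts, value in points:
        start = ts - (ts % window)  # snap the timestamp down to its window
        buckets.setdefault(start, []).append(value)
    return {start: sum(vals) / len(vals) for start, vals in sorted(buckets.items())}

# One reading per second for two minutes, downsampled to 60-second averages:
raw = [(t, 20.0 + (t % 3)) for t in range(0, 120)]
print(downsample(raw, 60))  # 120 points collapse to 2 summary points
```

The trade-off Corey raises is exactly this: the two summary points are cheap to keep for years, but the per-second detail inside each window is gone.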
But that's also not a global truth. How do you view the divide and what do you actually see people saying they want versus what they're actually using? It's a really good question
and not a simple problem to solve. So first of all, I would say it probably really depends on
the use case and the extent to which that use case is touching real world applications
and services. So in a pure observability setting where you're looking at perhaps more of a kind of
operational view of infrastructure you're monitoring, you want to understand kind of
what happened and when. Those tend to be a little bit more focused on real time and recent. So for
example, you of course want to know exactly what's happening in the moment, zero
in on whatever anomaly and kind of surrounding data there is.
Perhaps that means you're digging into something that happened in, you know, fairly recent
time.
So those do tend to be, not all of them, but they do tend to be a little bit more real-time
and recent-oriented.
I think it's a little bit different when we look at IoT.
Those generally tend to be longer timeframes that people are dealing with. They're physical, out-in-the-field devices. Many times, those devices are kind of coming online and offline, depending on the connectivity, depending on the environment. You can imagine a connected smart agriculture setup. I mean, those are a pretty wide array of devices out in who knows what kind of climate and environment. So they tend to be a
little bit longer in retention policy, kind of being able to dig into the data, what's happening.
The time frame that people are dealing with is just in general much longer in some of those
situations. One story that I've heard a fair bit about observability data and event data is that they inevitably compose down into metrics rather than events
or traces or logs. And I have a hard time getting there because I can definitely see
a bunch of log entries showing the web server's return codes. Okay, here's the number of 500
errors and number of different types of successes that we wind up seeing in the app. All right,
how many per minute, per second, per hour,
whatever it is that makes sense,
and you can look at aberrations there.
But in the development process, at least,
I find that having detailed log messages
tell me about things I didn't see
and need to understand in order to continue building
the dumb thing that I'm in the process of putting out.
It feels like once something is productionalized and running,
that its behavior
is a lot more well-understood.
And at that point, metrics really seem to take over.
How do you see it, given that you fundamentally live
at that intersection where one can become the other?
Yeah, we are right at that intersection.
And our answer probably would be both.
Metrics are super important to understand
and have that regular cadence
and be kind of measuring that state over time.
But you can miss things depending on how frequent those metrics are coming in.
And increasingly, when you have the amount of data that you're dealing with coming from these various sources, the measurement is getting smaller and smaller.
So unless you have perfect metrics coming in every half second or some sub-partition of that in milliseconds,
you're likely to miss something. And so events are really key to understand those things that
pop up and then maybe come back down. And in a pure metric setting in your regular interval,
you would have just completely missed. So we see most of our use cases that are showing a balance
of the two as kind of the most effective. And from a product perspective, that's how we think about solving the problem, addressing both.
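The gap Brian describes, where regular-interval metrics can miss a short-lived burst that event data catches, can be illustrated with a small sketch; the timestamps, statuses, and intervals here are made up:

```python
# Events time-stamped in milliseconds: steady "ok" traffic,
# plus a brief error burst around t=1500ms.
events = [(t, "ok") for t in range(0, 3000, 100)]
events += [(1500 + i, "error") for i in range(5)]

def sample_last(events, interval_ms):
    """A naive interval gauge: keep only the last event seen per interval."""
    samples = {}
    for ts, status in sorted(events):
        samples[ts - ts % interval_ms] = status
    return samples

def count_by(events, interval_ms, status):
    """Event-based view: count matching events per interval."""
    counts = {}
    for ts, s in events:
        if s == status:
            bucket = ts - ts % interval_ms
            counts[bucket] = counts.get(bucket, 0) + 1
    return counts

# The 1-second gauge shows "ok" in every interval and misses the burst...
print(sample_last(events, 1000))
# ...while counting the events themselves surfaces 5 errors in that second.
print(count_by(events, 1000, "error"))
```

Which is the point: the interval metric is a sample of state, while the event count preserves what happened between samples.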
One of the things that I've struggled with is it seems that, again, my approach to this is
relatively outmoded. I was a systems administrator back when that title was not considered disparaging
by a good portion of the technical community the way that it is today. Even though the job is the
same, we call them something different now. Great. Okay. Whatever. Smile, nod, and accept the larger paycheck.
But my way of thinking about things are, okay, you have the logs. They live on the server itself,
and maybe if you want to be fancy, you wind up putting them into a centralized rsyslog cluster
or whatnot. Yes, you might send them as well to some other processing system for visibility or a third-party monitoring system, but the canonical truth slash source of logs tends to live locally.
That said, I got out of running production infrastructure before this idea of ephemeral containers or serverless functions really became a thing. Do you find that these days you are the source of truth slash custodian of record
for these log entries? Or do you find that you are more of a secondary source for better visibility
and analysis, but not what they're going to bust out when the auditor comes calling in three years?
I think, again, I feel like I'm answering the same way.
Oh, of course. Let's be clear. Use cases are going to vary wildly. This is not advice on
anyone's approach to compliance and the rest. I don't want to get myself in trouble here.
Exactly. Well, you know, we kind of think about it in terms of profiles and we see a couple of
different profiles of customers using InfluxDB. So the first is, and this was kind of what we saw
most often early on, still see quite a bit of them, is kind of more of that operator profile. And these are folks who are building some sort of monitoring kind of
source of truth that's internally facing, to monitor applications or services, perhaps that
other teams within their company built. And so that's kind of like a little bit more of your
kind of pure operator. Yes, they're building up in the stack themselves, but it's to pay attention
to essentially something
that another team built. And then what we've seen more recently, especially as we've moved
more prominently into the cloud and offered a usage-based service with APIs and endpoint people
can hit, is we've seen more people come into it from a builder's perspective. And similar in some
ways, except that they're still building kind of a source of truth for handling this kind of data.
But they're also building the applications and services that themselves are taken out to market
that are in the hands of customers. And so it's a little bit different mindset. Typically, there's
a little bit more comfort with using one of many services to kind of be part of the thing that
they're building. And so we've seen a little bit more comfort from that type of profile using our service running in the cloud, using the API and not worrying too much about the
kind of, you know, underlying setup of the implementation. Love how serverless helps you
scale big and ship fast, but hate debugging your serverless apps? With Lumigo's serverless
observability, it's fast and easy and maybe a little fun, too.
End-to-end distributed tracing gives developers full clarity into their most complex serverless and containerized applications,
connecting every service from Lambda and ECS to DynamoDB, API gateways, step functions, and more.
Try Lumigo for free and debug three times faster, reduce error rate, and speed up development.
Visit snark.cloud slash Lumigo.
That's snark.cloud slash L-U-M-I-G-O.
So I've been on record a lot saying that the best database is TXT records stuffed into Route 53,
which works super well as a gag.
Let's be clear, don't actually build something on top of this.
That's a disaster waiting to happen.
I don't want to destroy anyone's career as I do this. But you do have a much more
viable competitive threat on the landscape, and that is quite simply using the open source version
of InfluxDB. What is the tipping point where, huh, I can run this myself, turns into, but I shouldn't.
I should instead give money to other people to
run it for me. Because having been an engineer where I believe I'm the world's greatest everything
when it comes to my environment, a fact provably untrue, but that hubris never quite goes away
entirely. At what point am I basically being negligent not to start dealing with you in a
more formalized business context? First of all, let me say that we have many customers, many developers out there who are
running open source and it works perfectly for them. The workload is just right. The deployment
makes sense. And so there are many production workloads using open source, but typically
the kind of big turning point for people is on scale, scale and overall performance related to
that. And so that's typically when they come and look at one of the two commercial offers.
So to start, open source is a great place
to kind of begin the journey, check it out,
do that level of experimentation
and kind of proof of concept.
We also have 60,000 plus developers
using our introductory cloud service,
which is a free service.
You simply sign up and you can begin
immediately putting data into the platform
and building queries.
And you don't have to worry about any of the setup and running servers to deploy software.
So both of those, the open source and our cloud product, are excellent ways to get started.
And then when it comes time to really think about building and production and moving up in scale, we have our two commercial offers.
And the first of those is InfluxDB Cloud,
which is our cloud-native offering, fully managed by InfluxData. We run this not only in AWS,
but also in Google Cloud and Microsoft Azure. It's a usage-based service, which means you pay
exactly for what you use. And the three components that people pay for are data in, number of queries,
and the amount of data you store, storage. We also, for those who are data in, number of queries, and the amount of data you store, storage.
We also, for those who are interested in actually managing it themselves, we have InfluxDB Enterprise,
which is a software subscription-based model, and it is self-managed by the customer in
their environment.
Now, that environment could be their own private cloud.
It also could be on-premises in their own data center.
And so it's geared toward people who are a little bit more oriented to managing software themselves rather than using a service.
But both those commercial offers, InfluxDB Cloud and InfluxDB Enterprise, are really designed for massive scale. As I mentioned earlier, with the new storage engine, you can hit unlimited cardinality, which means you have no limit on the number of time series you can put into the platform, which is a pretty big
game-changing concept. And so that means however many time series sources you have and however
many series they're emitting, you can run that without a problem, without any sort of upper limit
in our cloud product. Over on the enterprise side with our self-managed product, that means you can
deploy a cluster of whatever size you want.
It could be a 2x4, it could be a 4x8 or something even larger.
And so it gives people that are managing in their own private cloud or in a data center environment, really their own options to kind of construct exactly what they need for their particular use case.
Does your IOx storage layer make it easier to dynamically change clusters on the fly?
Historically, running things in a pre-provisioned cluster
with EBS volumes or local disk was,
oh, great, you want to resize something.
Well, we're going to be either taking an outage
or we're going to be building up something,
migrating data live,
and there's going to be a knife switch cutover
at some point that makes things relatively unfortunate.
It seems that once you abstract the storage layer away from anything resembling an instance, that you would be able to get away from some of those architectural constraints.
Yeah, that's really the promise and what is delivered in our cloud product is that you no longer as a developer have to think about that if you're using that product.
You don't have to think about how big the cluster is going to be.
You don't have to think about these kind of disaster scenarios. It is all kind of pre-architected in the service.
And so, in addition to eliminating that concern for what the underlying infrastructure looks like and how it's operating, with infrastructure concerns kind of out of the way, what we want to deliver on are the things that people care most about.
Real-time query speed.
So now with this new storage engine, you can query data across any time series within milliseconds.
A hundred times faster queries against high cardinality data that was previously impossible.
And we also have unlimited time series volume. Again, any total number of time series you have,
which is known as cardinality, is now able to run without a problem in the platform.
And then we also have kind of opening up, we're opening up the aperture a bit for developers with
SQL language support. And so this is just a whole new world of flexibility for developers to begin
building on the platform. And again, this is all in the way that people are using the product without having to worry about the underlying infrastructure.
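InfluxDB's own SQL dialect and schema aren't reproduced here; as a stand-in, this sqlite3 sketch shows the general shape of the time-bucketed SQL aggregation being described, with made-up table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cpu (ts INTEGER, host TEXT, usage REAL)")

# Two minutes of hypothetical per-second CPU readings for one host.
rows = [(t, "host-a", 50.0 + (t % 10)) for t in range(0, 120)]
conn.executemany("INSERT INTO cpu VALUES (?, ?, ?)", rows)

# Average usage per 60-second bucket, per host -- the classic
# time series "group by time window" query, expressed in plain SQL.
query = """
SELECT (ts / 60) * 60 AS bucket, host, AVG(usage) AS avg_usage
FROM cpu
GROUP BY bucket, host
ORDER BY bucket
"""
for bucket, host, avg_usage in conn.execute(query):
    print(bucket, host, avg_usage)
```

The appeal of SQL support is exactly this familiarity: a developer who has never written Flux or InfluxQL can still express windowed aggregations like the one above.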
For most companies, and this does not apply to you, their core competency is not running
time series databases and the infrastructure attendant thereof. So it seems like it is
absolutely a great candidate for, you know, we really could make this someone else's problem
and let us instead focus on the differentiated thing that we are doing or building or complaining
about. Yeah, that's a true statement. Typically what happens with time series data is that people,
first of all, don't realize they have it. And then when they realize they have time series data,
you know, the first thing they'll do is look around and say, well, what do I have here? You know, I have this relational database over here, or this document database over here, maybe even this kind of search database over here. Maybe that thing can handle time series, and in a light manner, it probably does the job. But like I said, the sources of data and just the volume of time series is expanding really across all these different use cases exponentially. And so pretty quickly, people realize that thing that may be
able to handle time series in some minor manner is quickly no longer able to do it. They're just
not purpose-built for it. And so that's where really they come to a product like Influx to
really handle the specific problem. We're built specifically for this purpose.
And so as the time series workload expands, when it kind of hits that tipping point,
you really need a specialized tool. Last question before I turn you loose to prepare for reInvent,
of course. Well, I guess we'll ask you a little bit about that afterwards. But first, we can talk
a lot theoretically about what your product could or might theoretically do.
What are you actually seeing? What are the use cases that, other than the stereotypical ones
we've talked about, what have you seen people using it for that surprise you?
Yeah, some of it is, it's just really interesting how it connects to, you know, things you see every
day and or use every day. I mean, chances are many of the people listening have probably used InfluxDB and, you know, perhaps didn't know it. You know, if anyone has been
to a home that has Tesla power walls, Tesla is a customer of ours, then they've seen InfluxDB
in action. Tesla's pulling time series data from these connected power walls that are in solar
powered homes. And they monitor things like health and availability and performance of those solar
panels and the battery setup, et cetera.
And they're collecting this at the edge
and then sending that back into the hub
where InfluxDB is running on their backend.
So if you've ever seen this deployed,
like that's InfluxDB running behind the scenes.
Same goes, I'm sure many people
have a Nest thermostat in their house.
Nest monitors the infrastructure that actually powers that IoT data collection. So you think of this as InfluxDB running behind the scenes to monitor what
infrastructure is standing up that back-end Nest service. And this includes their use of Kubernetes
and other software infrastructure that's run in their platform for collection, managing,
transforming, and analyzing all of this aggregate device data that's out there.
Another one, especially for those of us that streamed our minds out during the pandemic,
Disney Plus.
Entertainment streaming and delivery of that to applications and to devices in the home.
And so this hugely popular Disney Plus streaming service is essentially a global content delivery
network for distributing all these movies and video series to all the
users worldwide. And they monitor the movement and performance of that video content through
this global CDN using InfluxDB. So those are a few where you probably walk by something like this
multiple times a week, or in our case of Disney+, probably watching it once a day.
And it's great to see InfluxDB kind of working behind the scenes there.
It's one of those things where it's, I guess we'll call it plumbing, for lack of a better term.
It's not the sort of thing that people are going to put front and center into any product or service that they wind up providing.
Yeah, except for you folks.
Instead, it's the thing that empowers a capability behind that product or service that is often taken for granted.
Just because until you understand the dizzying complexity,
particularly at scale, of what these things have to do under the hood,
it just, well, yeah, of course it works that way.
Why shouldn't it?
That's an expectation I have of the product because it's always had that,
yeah, this is how it gets there.
Our thesis really is that data is best understood through the lens of time.
And as this data is expanding exponentially, time becomes
increasingly the kind of common element, the common component that you're using to kind of
view what happened. That could be what's running through a telecom network, what's happening with
the devices that are connected to that network, the movement of data through that network,
and when. What's happening with subscribers and content
pushing through a CDN on a streaming service? What's happening with climate and home data in
hundreds of thousands, if not millions of homes through a common device like a Nest thermostat?
All of these things, they attach to some real world collection of data. And as long as that's
happening, there's going to be a place for time series data and tools that are optimized to handle it.
So, a last question, for real this time.
We are recording this the week before reInvent 2022.
What do you hope to see?
What do you expect to see?
What do you fear to see?
No fears, even though it's Vegas, no fears.
I do have the super spreader event fear, but that's a separate issue.
Neither one of us are deep into the epidemiology weeds
to my understanding.
But yeah, let's just bound this to tech.
Let's be clear.
Yeah, so first of all, we're really excited to go there.
We'll have a pretty big presence.
We have a few different locations where you can meet us.
We'll have a booth on the main show floor.
We'll be in the Marketplace Pavilion.
As I mentioned, InfluxDB Cloud is offered across the marketplaces of each of the clouds, AWS, obviously in this case,
but also in Azure and Google. But we'll be there in the AWS Marketplace Pavilion showcasing the
new engine and a lot of the pretty exciting new use cases that we've been seeing. And we'll have
our full team there. So if you're looking to kind of learn more about InfluxDB or you've checked it out recently and want to understand kind of what the new capability
is, we'll have many folks from our technical teams there, from our development team, some of our
field folks like the SEs and some of the product managers will be there as well. So we'll have a
pretty great collection of experts on InfluxDB to answer any questions and walk people through demonstrations and use cases.
I look forward to it. I will be doing my traditional Wednesday afternoon tour through
the expo halls and nature walk. So if you're listening to this and it's before Wednesday
afternoon, come and find me. I am kicking off and ending at the Fortinet booth, but I will make it
a point to come by the Influx booth and give you folks a hard time
because that's what I do. We love it. Please. You know, being on the tour is on the walking tour is
excellent. We'll be mentally prepared. We'll have some comebacks ready for you. Therapists are
standing by on both sides. Yes, exactly. Anyway, we're really looking forward to it. This will be
my third year on your walking tour. So the nature walk is one of my favorite parts of AWS reInvent.
Well, I appreciate that. Thank you. And thank you for your time today. I will let you get back to your no doubt frenzied preparations. At least they are on my side.
We will. Thanks so much for having me and really excited to do it.
Brian Mullen, CMO at InfluxData. I'm Cloud Economist Corey Quinn, and this is Screaming
in the Cloud. If you've enjoyed this podcast,
please leave a five-star review
on your podcast platform of choice. Whereas
if you've hated this podcast, please leave
a five-star review on your podcast platform
of choice, along with an insulting
comment that you naively believe
will be stored as a text record in
a DNS server somewhere, rather than
what is almost certainly a time series
database.
If your AWS bill keeps rising and your blood pressure is doing the same,
then you need the Duckbill Group. We help companies fix their AWS bill by making it
smaller and less horrifying. The Duckbill Group works for you, not AWS.
We tailor recommendations to your business,
and we get to the point.
Visit duckbillgroup.com to get started. This has been a HumblePod production.
Stay humble.