The Data Stack Show - 94: Notebooks Aren’t Just for Data Scientists With Barry McCardel of Hex Technologies
Episode Date: July 6, 2022

Highlights from this week's conversation include: Barry's background and Hex (3:05), reconciling two sides of data (9:16), collaboration at Hex (15:10), what it takes to build something like Hex (20:02), defining "commitment engineering" (26:01), how to begin working with Hex (30:56), Hex customers and uniqueness (40:31), the future in a world of data acquisition (45:30), crossover between analytics and ML (51:33), and advice for data engineers (57:19).

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.

RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack, visit rudderstack.com.
Transcript
Welcome to the Data Stack Show.
Each week we explore the world of data by talking to the people shaping its future.
You'll learn about new data technology and trends and how data teams and processes are run at top companies.
The Data Stack Show is brought to you by Rudderstack, the CDP for developers.
You can learn more at rudderstack.com.
Kostas, we are talking to Barry, who co-founded a company called Hex. They're in the analytics space.
And this is my burning question. So, you know, you have Looker acquired by Google, you have Tableau acquired by Salesforce, Periscope. Someone bought Periscope, right?
No, they merged with, yeah, I mean, acquired by Sisense, but yeah, you can.
Sure. Amplitude went public. So in some ways it's like, you sort of see industry movement like that and it can feel like, wow, okay, sort of the core analytics problems have been solved at a massive enterprise scale.
Great, check, let's sort of move on.
But my strong sense is that that is not how Barry feels.
And I think that he is going to help us understand what innovation is happening now and will happen in the future with analytics.
So I want to get his take on that.
What parts have actually been solved? And then where are we in early innings?
So that's what I'm going to ask.
How about you?
Yeah.
I mean, I don't know.
I think one of the things that we should learn from when it comes like to technology
and this industry is that everything happens in cycles and whatever has been
invented gets reinvented, right?
Probably that's one of the mistakes that IBM made.
Like, you know, they came up like with all these huge servers and they were
like, okay, like we solved the problem.
Right?
And now today you have AWS, like Azure and like cloud computing and suddenly
even infrastructure got reinvented, right?
And probably we are also like, even there, we are entering another cycle of innovation.
But anyway, I think it's a very interesting time because yeah, like, as you said, this
market hasn't, like the BI market and visualization market hasn't like produced any
innovation for a while now.
And when we say a while, we mean like two or three years, maybe more than that, right?
But that's a long time when it comes to technology.
So I'm very excited that we have Barry today because I think we are going to
see what's next in this space and we will learn more about the product they're building, which is a very good example of what's next in this space.
So let's go and talk with him.
Let's do it.
Barry, welcome to the Data Stack Show.
We are super excited to chat with you.
Thanks for having me.
Okay.
Give us your background and tell us a little bit about Hex.
Yeah.
So my background, I'm Barry, the co-founder and CEO of Hex.
I have been working around data basically my whole career.
In undergrad, I kind of stumbled into some really interesting research
around social networks and was doing stuff both in spreadsheets
and then R, and this was sort of before data science was really a thing.
I went into consulting and I was doing dark, unholy things in spreadsheets.
So I was building like whole data apps in Excel.
Tab-by-tab data transformations and drop-down interfaces.
And I was writing like VBA to build UI.
That is deep and dark.
Illicit Access databases on PC towers I pilfered at clients to run a database.
So a major US airline had a lot of their pricing and Wi-Fi maintenance infrastructure running
off of a spreadsheet I made, which is horrifying.
Every time I fly that airline, I wonder if they've migrated from it yet.
And then I went to Palantir and I was there for about five years.
And that was really an opportunity for me to sort of do that type of stuff, the data analysis and building different apps of data I was really enjoying, sort of in the big leagues. And I was there through a really interesting time, like 2013 through 2018, around the real emergence of a bunch of things that I think we almost take for granted around really working with
large-scale data.
Big data was the buzzword, data science as a discipline, and a lot of technologies that
were emerging that are now quite widely adopted, like Spark, HDFS, even just AWS in general
and the possibilities of the cloud.
So it was a great time to be there, and I got to build a lot of very interesting technologies.
And I also met a bunch of folks that I'm working with now, including both my co-founders.
After that, I went to a healthcare startup in New York.
That was really the acute moment for Hex where we had this problem.
We had a really kind of quote unquote modern data stack.
We had like Redshift and Stitch.
We were early adopters of dbt.
We had Looker for BI, but we were still doing a lot of our
exploratory analysis, modeling, storytelling work in one-off Jupyter notebooks or SQL scratch pads.
And we were sharing everything through spreadsheets and screenshots and Slack.
And I just started this journey of Hex as a buyer. I was looking for a piece of software that fit my picture of the type of platform that I wish we had. And it took a few months of looking and realizing that it wasn't out there to come around to this. So in a way, I think Hex is a culmination of all these experiences I've had. It's basically the tool that I wish I'd had at every phase in my career. I would have used it as a user at basically every job I'd had. So, you know,
that's sort of the backstory. So I started the company about two and a half years ago with
Caitlin and Glenn, who I'd both met and worked with at Palantir. Glenn was actually my intern
in 2014. We've been working together ever since. So I'm very, very fortunate to get to work with
some great folks on this. Yeah. So I'd love to hear about maybe the moment
or sort of the experience of realizing like,
am I going to start a company to solve this problem?
You know, because sort of going from like a vendor search
to embarking on an entrepreneurial journey is, you know,
is a pretty big leap.
Maybe it's a small step, but tell us about that experience and like maybe some of the circumstances
surrounding it.
Yeah, sure. I mean, I started this like looking for software. I was Googling a bunch of terms that I figured someone has to have built this thing, and I couldn't find it. And I wound up asking a lot of friends who are at other companies doing stuff, you know, on data teams or whatever, like, how do you guys solve this? And everyone kind of had the
same answers, which is like, we don't.
We've cobbled together
a JupyterHub thing
with a bunch of other open source stuff
and it's all really brittle, but
it kind of does the thing.
None of this felt really satisfactory
and I had just come off of
five years at Palantir where our whole schtick was building really great software.
And so I think I kind of felt this sense of dissonance.
Like, wait, someone should be building this.
And I just got off of five years of enjoying building those types of things.
And so I came to this pretty reluctantly.
I'm not someone who was set out like, I want to be a founder.
I just need an idea and a co-founder.
It's almost like I had to get dragged into it. The stars really had to align: a problem that felt glaring, that I understood, where there was a clear gap; and two co-founders who by circumstance were also getting bored in their current things and wanted to do the next thing.
And so, yeah, I mean, it's funny, you know, it was a pretty dramatic turn, but it was
like, it kind of just dawned on me slowly.
And when we decided to jump in and do it, it felt very natural.
It felt quite organic.
Yeah.
Okay.
And then give us just the sort of one click deeper of detail and like, what does Hex do
and like what problem does it solve?
Hex is a platform for
collaborative data science and analytics. We kind of do three things really well. We have a, you
know, online, collaborative-first notebook experience where it's very easy to come in,
connect to data, ask and answer questions, work together as a team. We make it very easy to share
your work with anyone as an interactive data app. So it's literally just one click, or I guess it's
two clicks to go from, you know, an interesting analysis or a model you've built
to something that anyone can use as sort of a published web app, whether that's a simple report
or something much more complex. And then we allow the outputs of the data work to contribute to an
overall base of knowledge within an organization. And we can get more into this, but we really think of
the art and the science of analytics as contributing to knowledge. And that sounds a
little abstract, but at the end of the day, what you're really trying to do is influence decisions
and help an organization understand the world better. And we think there's some big gaps in how
the great work that data people are doing every day sort of translates to knowledge. And so
we've built a series of features that we can sort of contribute to that mission. And we're really excited about that.
Awesome. Okay. I want to dig into that more, but before we get into some of those details,
I want to zoom out a little bit and I'm going to sort of paint a picture of, I would say,
two different sides of the same coin when it comes to analytics. And these are two perspectives that we've,
that we've heard on the show at some point.
Okay.
And I'm going to intentionally overdraw these a little bit, like exaggerate them just to make a point, but you know,
so forgive me and our listeners,
forgive me,
but so the first side of the coin is that the analytics game is kind of,
you know,
it kind of reached like an end state and like most of the
hard problems have been solved. Right. And whatever the reasons are for that. Right. It's because of,
you know, modern storage and separation of storage and compute and like flexible visualization layers
and, you know, all the stuff where it's like, OK, I mean, you can do advanced analytics like
way more easily than you could ever
do them before, right?
And so from that regard,
you know, some people
would say like, okay,
that sort of wave is over
and like the next big wave
in the data stack
is things related to ML
and, you know, ML workflows
and all that sort of stuff, right?
That's sort of the next,
you know, phase of the
modern data,
whatever you call it.
Yeah.
The other side of the coin
is that,
is the opposite, right?
It's like we're actually in early innings, right?
Like the advances that have been made
are actually just the foundation
on which like the really cool stuff with analytics
is now going to be possible.
And so we're like pretty early, right?
And so one good example of that
is things like the metrics layer, right? Where you're kind of now seeing this like, you know, agnostic stack-wide,
you know, sort of accessible layer, right? That solves a lot of different problems. So anyways,
I'm interested in your perspective because, you know, sort of as a, you know, long-time practitioner
and now someone that is really thinking through solving a
problem with a product in the space.
What do you think? Is there truth in both
of those?
There's some truth in that.
I think the narrative that the analytics world is solved is very far from true.
I think there are parts that have gotten way better. And for folks
who have been in the world for a little while, which
I've been very fortunate to be,
there's a very dramatic shift from 10 years ago to today
in terms of organizations' ability to bring data in,
even just to have their data, especially from SaaS tools or other places, in their possession and stored at scale.
Obviously, a platform like Snowflake, and cloud data warehouses generally, have unlocked a lot there.
And to be able to transform and model it.
And the revolution that dbt has propagated over the last few years
has been very, very powerful and very meaningful there.
And so I think we're just at the actual beginning of the situation
where a lot of organizations can claim to even have
a corpus of like clean
and reliable data that people can actually go tap into at their fingertips. So, you know, from that perspective, I think we're very clearly still in the early innings of that. And there's some solved problems in there, and there's some problems that are still very far from being solved in there.
When we think about a lot of the unsolved problems in the analytics world,
when we see our customers and potential customers come to us, it is very clear that there are a lot of people who still have a lot of friction around being able to ask and answer questions of data,
be able to work with data at cloud data scale. And then just a lot of the downstream workflows
of that, like how do you collaborate on these workflows? How do you share this work with others
in a way that's actually useful and usable?
And so just having the data in the data warehouse
is like, you know, it's a big part,
but it's not like, it doesn't mean
that all these downstream workflows are solved.
And there's a lot of innovation happening here.
I mean, you mentioned the metrics there.
I think that's extremely interesting.
And that sort of gets back to, you know,
great, you have all the data in your warehouse, but
like, is everyone looking at the same measures?
Are we looking at these things the same way?
Are we asking and answering questions in the same way?
Like, that's a very unsolved problem.
So I think it's a little naive to say that, like, well, you know, check the box in analytics
and it's all about ML.
That said, there's a lot of interesting stuff happening in ML too. I've been part of some really interesting projects that have leaned very heavily on ML.
I've also been part of projects where we tried to use a bunch of ML and found out that a simple
scatterplot was actually all we needed. And so I think we're in the very early innings of figuring
out where ML can best be applied. I'm personally a big believer in the idea of sort of this human-computer
symbiosis and this idea of, I think a lot of ML techniques can best be deployed in helping people
better ask and answer questions themselves, helping them better understand. I think I have
been around the block with the idea of, let's just feed all the data and the machine will tell us
everything. If that's ever going to work, I don't think that's particularly imminent.
And so at Hex, I think the way we look at a lot of this is like,
there's a lot of unsolved problems in analytics that we're focused on.
And there are a lot of opportunities to bring,
to make ML workflows easier and to make it easier to bring ML
into analytics workflows.
And that's what's exciting to us.
Yeah.
Okay.
I have another question on that, but I'm going to, I will,
I will use self-control. I want to dig in on one more thing and then hand the mic over to Kostas. Could you dig in a little bit on collaboration? Because that feels like a marketing term that's been used with analytics since the beginning of analytics, you know, it's like, finally collaborate, you know, whatever. And in reality, I think for anyone who's even using relatively modern BI tools, it really still is like a data producer and then someone downstream consuming it. It's still pretty hard to collaborate practically, at least in my experience. So could you tell us, what does collaboration actually mean for you at Hex? Does that take on sort of a different form?
Yeah, we have two
big collaborative sort of loops that we think about and focus on. So there's collaboration between two creators or two editors, and that might look like, you know, I'm going and doing an exploratory analysis, or I'm taking a first draft of a model.
I want to be able to work with you on it. And, you know, Hex is fully collaborative and multiplayer,
which means it works just like Google Docs or Figma
or Notion or other types of tools like that
where you and I can both be in there at the same time.
Now, the reality, the little secret about multiplayer is like,
it's very rare that you and I are both working on the same thing
at the same time.
In fact, it could be quite annoying.
It's nice when you need it.
What we really see on the editor side is that it's about enabling review workflows.
And what we'll see a lot of is like doing code reviews, like, hey, I'll tag you in.
Hey, can you go review this?
You can go comment on something, give feedback on it.
We can iterate on something together.
Maybe you and I are passing a baton back and forth working on something.
There's also a lot of things in there around versioning, which is a sort of famously difficult problem around analytics: like, you know, how are we managing version control on this? And the joke is always, you know, most people are still doing version control for their analytics by passing around a spreadsheet that's just incrementing, you know, V5 final in the title.
It's so true.
I mean, literally, it's so true.
Or, you know, if you're very modern
and you're a data scientist,
you're passing around the Jupyter notebook
with V5 final in the file name.
So, I mean, that was one of the first things
we really wanted to tackle at Hex.
So we have a great built-in version control system
that allows you to save versions.
You can see a full edit history.
It also supports full sync to GitHub.
So we have a cleanly diffable file format
that I can sync to GitHub
and we can manage the whole thing through pull requests.
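The value of a "cleanly diffable file format" that Barry mentions is that two versions of a project produce a readable line-level diff a reviewer can comment on. Here is a toy illustration in plain Python; the YAML-ish project file is made up for the example and is not Hex's actual format:

```python
import difflib

# Two hypothetical versions of a plain-text, diffable project file.
# (The YAML-ish format here is invented for illustration; it is not
# Hex's actual file format.)
v4 = """\
title: Revenue analysis
query: SELECT region, SUM(amount) FROM orders GROUP BY region
chart: bar
"""
v5 = """\
title: Revenue analysis
query: SELECT region, SUM(amount) FROM orders WHERE year = 2022 GROUP BY region
chart: bar
"""

# Because the file is plain text, two versions produce a readable
# line-level diff that a reviewer can comment on in a pull request.
diff = list(difflib.unified_diff(
    v4.splitlines(), v5.splitlines(),
    fromfile="v4_final", tofile="v5_final", lineterm=""))
print("\n".join(diff))
```

Only the changed `query:` line shows up as a `-`/`+` pair, which is exactly what makes review through pull requests practical compared to diffing an opaque binary file.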
So again, on that sort of creator-and-editor loop,
we think we've really sort of,
we're well on our way to nailing it
in terms of what that should really look like for individuals or teams to be able to work together on analytics and data
science projects together. The other loop, and this is actually really what was the initial focus
of Hex, is the creator-consumer feedback loop. And this is not necessarily new to analytics.
You mentioned like BI and the traditional BI world, you can create a dashboard and send it out.
What's really cool for us with Hex is we're enabling that sort of easy sharing for a much broader set of things.
And going back a little bit,
the acute pain point that brought us to Hex
was we were doing a lot of work in Jupyter notebooks.
And we were trying to share it.
And it was incredibly frustrating.
We were like screenshotting charts and putting them in Google Docs, or we were rendering things as PDFs and sending them around via email, and it was like, what year is it?
And so we had to make it very easy to go from the sort of notebook-type work you're doing to publishing it as an interactive data app.
It could be something simple and static,
like just charts and a narrative around it. And then your consumers, your stakeholders can comment
on it directly. They can see live data much, much better than throwing screenshots around.
Or it could be something much more interactive. With Hex, it's very easy to go through and add
parameters to your work. It's easy to have a lot of customization on how something's going to look. So we see people build dashboards.
We also see people build
very complex,
like workflow apps in Hex
because you've got the full power
of SQL and Python under the hood.
And so it's very easy
to sort of take that
and publish that.
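The parameterized-app idea Barry describes can be sketched in plain Python. This is a minimal, hypothetical stand-in using SQLite as the data source; none of these names are Hex's actual API. An input value plays the role of an app parameter, and changing it re-runs the query the published app is built on:

```python
import sqlite3

# In-memory stand-in for a warehouse table (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("us", 120.0), ("us", 80.0), ("eu", 50.0)])

def revenue_for(region: str) -> float:
    # The 'region' argument plays the role of an app input widget:
    # each new value re-runs the underlying parameterized query.
    row = conn.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM orders WHERE region = ?",
        (region,),
    ).fetchone()
    return row[0]

print(revenue_for("us"))  # 200.0
print(revenue_for("eu"))  # 50.0
```

The design point is that the consumer never touches the SQL: they only move the parameter, and the app re-computes, which is what separates an interactive data app from a static screenshot of a chart.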
So what you wind up seeing
and what's really exciting to us
is this change
in how people communicate
their work to their stakeholders
and to the rest of the organization, and the impact that work can have. And we fundamentally believe that by making it easier
to share things, by making those things more useful and usable, by making them discoverable
and easy to organize in a knowledge base, you can meaningfully increase the
impact that a data team can have. And that kind of is our maybe most fundamental mission.
Yeah.
Love it.
All right, Kostas, I've been holding the mic for too long, please.
Oh, that's fine. I mean, there were some amazing questions and answers there as well.
No worries.
So Barry, I have like a couple of questions about like the product itself, but before
that, I want to ask you something else. So you mentioned at the beginning that you've been working on Hex for like two and a half
years now. And two and a half years, like in startup life is like a long time. Can you take
us through like your journey and tell us a little bit more about the people and the things that happened from day one where you decided to, okay, we commit to that until today, right?
Just to get an idea of what it takes to build something like what Hex is today, right?
Yeah.
Yeah.
So it was about three years ago, exactly, that I was sort of in this like, wow, gosh, why has no one solved this, and what should we do? It took a few months to sort of go from that to actually quitting our jobs and deciding we were all in on this. And I mentioned Glenn: he and I were working together at the time, and for personal reasons we were both actually going to have to move to California and find new jobs anyway. We were both following our amazing significant others who had dream jobs out here. So it was easier for us.
Like, we were kind of jumping off the ledge. Caitlin was gainfully employed and it took a little more nudging to get her out the door. But the three of us were sort of full time on it in December 2019. So the three of us were heads-down working for the first few months.
And then kind of
in March 2020, which was a very eventful month for everyone, we made our first hire, which
was someone we had worked with at Palantir as a first engineer.
And we'd also raised the seed round.
And those first few months we had actually already built like a functional prototype
and actually already had a few users poking around and using it, and were kind of well on our way to getting one of them to commit to paying. So
that was enough evidence for the folks at Amplify to back us. And then the first year, I would say, really was a very small team, and we were just iterating and experimenting and throwing some things away.
You mentioned it feels like a long time. That first year was very heads down, just building and trying to figure out exactly
what this wanted to be.
And it's interesting when you're building in a space that's as crowded and with as much
going on as in the data space, because you're constantly seeing other things pop up and
other things happening.
And we felt through that whole time that we had a really clear thesis, a set of things that we were excited about that we just didn't see anyone
else doing. And so with a product this complex, you kind of need that and you need to stay focused
and you need to sort of have some belief in: we are going to go build this, and it will be the right bet. And so that first year was really about getting there. And then 2021 is really when we started bringing it to market.
So it was not much more than a year ago.
It was early 2021 that we launched the company, announced our seed, announced the product,
and started taking signups for it.
And so all of that to say: in that two-and-a-half-year span, it's really only the last year that we've really had the product to market, and really only in the last six to nine months that it's felt like a full, mature sort of expression of what we had hoped it would be.
And so maybe the message I would say
to anyone else who's thinking of starting something
is like, you know, that beginning journey,
it can feel like it's going slow
and it can feel like it's taking a while,
but if you're making the right investments,
when they start to pay off, they will really start to pay off.
And that's a really good feeling.
Absolutely.
How many people are at Hex right now?
We're about 40 people right now.
And we're distributed all over the US and Canada.
Oh, nice.
So how does it feel, from your position, going from the three of you at the beginning to being 40 folks today? How does it feel as a founder?
It feels great.
It's humbling.
Daily, I'm mindful of the trust that all the stakeholders, our employees, our investors, our customers, have put in our team and in me.
So there's, you know, you try not to feel that too acutely day to day,
but it's a good reminder
that there's some higher stakes now.
But I'm extremely proud of this team.
I'm kind of in disbelief every day
that I get to work with such a bizarrely
talented group of humans.
And the joy and pleasure as a founder
is you get to spend your time
hiring a bunch of people
who are better at things than you.
I can say objectively that for everything I do, there are people on the team that are
better at those things than me.
And now you get to have all these really smart, capable people working with you.
You get to watch them go and do their best work.
And I think of my job as really like, how do I set up and enable these fantastic people
to do their best work?
That's what I try to spend my time doing.
Absolutely.
I think you're making like a very important observation here.
And it's one of the things that we don't talk about that much, those of us who have been founders: the privilege that you have as a founder, when things go well, to work with all these amazing people.
That's very, very important.
Barry, how did you get from this initial experience and idea that you've had,
what you described with the frustration that you were going through, right?
As a buyer at that point, trying to find a solution that didn't exist,
to actually end up with a product that you can sell, right?
Because from the idea moment to building and selling, there's like a huge gap there, right?
Like, many things need to click.
So can you take us a little bit like through this journey and so we can
understand like what it takes to do that?
Yeah.
I think the core of it is a philosophy that I learned at a previous job that is called commitment engineering, which is the sort of art and science of going from
an idea to a product someone's actually willing to pay for.
And the first step is really finding a problem that's acute and that resonates with people.
And for me, I just felt this problem myself and started asking people, hey, do you also
have this problem?
We started writing blog posts about this problem, specifically the problem being around being able
to share and communicate work that we're doing as data scientists and analysts.
And you'll find people who are interested in that problem. And then the loop we went through and the
loop that I think a lot of successful companies will go through is this commitment engineering loop where you start by saying, hey, talk to someone about their problem.
And you basically offer, if I came back in a few weeks with the first version, a first prototype of this, would you take 30 minutes to run through it with me?
And if someone says yes, then you're now in a commitment engineering loop with them.
You have asked for a commitment of their time, and in exchange you're going to go do some engineering for them.
And you can kind of ride this loop
all the way up to getting them to pay.
Like the next step might be,
you know, hey, great.
Thanks for the feedback on the prototype.
You know, if I came back in a few weeks
with the next version,
I've addressed all of that.
Would you take 45 minutes
and click through it with me
and actually use it for real?
And you keep asking for commitments. The next one might be, would you invite your team to this
thing? Would you show this to your boss? Would you demo it to your whole team or whatever it is?
And then the last one is, would you pay for this? And so for us, going through that with our early
users and customers was extremely effective. It let us figure out where we were barking up the
right tree, where people were excited to spend more time with us
and spend more time with the product,
or where we were not, which is where people were like,
oh, I'm busy.
It's kind of how you know that you're not doing something
that's actually that exciting to them.
And so that first year especially was really about
just trying to find users to be in that commitment engineering loop with, and build for them
instead of just building in a vacuum.
Okay, that's something.
I don't know if that was that interesting.
No, no, no.
I mean, to me, at least, it's very interesting
because I have been in the position of like
starting something from scratch.
And I know how hard it is
and how difficult it is to get advice
on how to start doing these things.
I mean, it's even hard to find, let's say, you know, we have models and processes and playbooks for many different things,
but these are not like the stuff that you can easily find a playbook for.
So any kind of like this, I think it's super valuable.
I think it also applies internally at an organization, right?
Like even if you're not going to found something, you know, like start a new company and try to raise money or whatever.
If you think about like an initiative that you want to take on internally, like you have internal customers, right?
And I'm just thinking about, you know, projects that I've thought about trying to start, you know, whatever, even in my job now.
And I love that mindset of thinking about asking for those commitments as a way,
like as a litmus test, right?
Because-
Yeah, you're validating that you're on the right track,
that you're thinking about the right problem,
that you're product solving it,
that they're excited enough about it to pay for it.
Yeah.
And this solves for the problem
that you see a lot of people,
maybe most people get into,
which is, I'm going to build this in a little vacuum.
I'm going to build this thing for me.
And especially for a founder like me that had a lot of personal experience, that was the user, it's very tempting to just build the thing you want.
And of course, there's a degree of judgment and intuition and taste that you need to put in something.
There are some bets that you need to make. But I think a lot of people, whether they're a founder or a product manager or an engineer,
often can be too slow to get into that iteration loop.
Or the other mistake you'll see people make is they'll start a relationship,
especially very early when you're looking for more of early customers or design partners.
They'll start a relationship saying, hey, would you pay for this?
Well, often if you ask that at first, they're going to say no because the product sucks
because it's early and all early products suck.
It's not personal.
It's just your product sucks because you've only been working on it for a couple months.
So that's where I think getting into that iteration loop that's based on you really understanding their problem, and you asking for those commitments of their time, that's how you're going to build up to actually having a customer in those early days.
And I think your point, Eric, about that being applicable for internal stuff as well is I think really great.
A lot of data people are effectively building products.
I mean, we see this with Hex.
People build data apps that they're shipping, and whether it's for internal use or to clients or whatever,
that type of iterative process, I think,
can serve people really, really well.
So, Barry, let's go,
let's talk a little bit more about the product,
how it is today, okay?
So, let's say I'm just landing on your landing page
and I go through the signup process.
Like what do I need to start like working with Hex?
What should I bring with me to do that?
You should bring data.
So we, Hex, you know, is really useful if you can connect to your underlying data
sources, so whether you're using a cloud data warehouse like Snowflake or
BigQuery or Redshift or Databricks.
We have connectors for dozens of these now, different data sources.
And actually, from a getting-started perspective, if you can connect to your data, it's very quick to get going. We'll have people sort of be writing their first queries
within a few minutes of first logging in.
And yeah, it's really all you need to bring is your
data and a great attitude.
Okay.
That's, that's awesome.
And you mentioned notebooks.
Can you take us a little bit into this and, first of all, tell us a few things about notebooks in general? Because it's not like everyone, I mean, everyone has probably heard of, like, a Jupyter notebook, but it doesn't mean that everyone has used one, right?
So tell us a little bit about the history of notebooks, why they were created, and what they are.
And yeah, what does Hex bring that is new, and maybe fixes what notebooks were getting wrong so far?
Yeah, of course.
So notebooks, for the unfamiliar, are effectively a format for coding where the code is broken up into cells, so, like, chunks.
And you can evaluate those cells of code independently of each other.
And those cells will show you the output, the results of that.
This is a form of what's called literate programming, which is a term that basically
refers to a programming style where the code and the outputs and the narrative,
like the explanation of it, are all sort of tied together. Notebooks really excel for workflows
that are exploratory or iterative, where being able to run just that one little chunk of code instead of the whole script is really useful.
You know, I'm going to try a different technique for this.
I'm going to try a different binning.
I'm going to cut this a different way.
And you can sort of do that and immediately see that output, which is really great for
sort of, you know, iterating through things you're working on.
But notebooks also have a lot of problems.
And like, there's this sort of famous set of critiques about notebooks.
There was a talk that Joel Grus gave at JupyterCon 2018 that was called "I Don't Like Notebooks."
It was a bold thing to do at JupyterCon.
Effectively, he was sort of calling out, rightly, a lot of the issues notebooks have, specifically around state. And
because the cells are broken up into these different chunks of code, you can actually
run these out of order. And they're running through an in-memory kernel, which stores state.
So if I have a cell that says x equals one, and my next cell says x equals two, if I run them in
order, x in memory will be assigned to two. But if I run them out
of order, then all of a sudden, maybe it's one, maybe it's two, which cell did I run last?
So without going too deep in that, you can wind up in these really weird spots where you have
inconsistency. And this causes three big problems. There's a problem around reproducibility, which is
if I want to go rerun this, am I going to get the same results?
Notebooks are sort of notoriously difficult to make reproducible.
It has problems for interpretability, which is like notebooks can often, I think,
be very hard to know what's going on.
The code's broken up into a bunch of different places and maybe it was run out of order.
So like, how does this thing even work?
I've had this happen going back and looking at notebooks I've made myself, you know, months later, like, what is this thing even doing?
But it's also true, you know, if you're trying to collaborate on the notebook, being able to understand what's going on is really important. And then the last thing is performance. The way
that a lot of people wind up solving this is just constantly restarting and running all,
like just run all the code again, top to bottom, basically like a script,
which is very high overhead. So that's all the problems around state with notebooks.
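The x-equals-one example above can be sketched in a few lines of Python. This is just a toy stand-in for a kernel, not how any real notebook is implemented: the cell names are made up, and `exec` against one shared dict plays the role of the in-memory kernel state.

```python
# A toy model of a notebook: cells are code strings that all execute
# against one shared in-memory namespace, like a Jupyter kernel does.
cells = {
    "cell_1": "x = 1",
    "cell_2": "x = 2",
    "cell_3": "result = x * 10",
}

def run(cell_name, namespace):
    """Execute one cell in the shared namespace (order is up to the user)."""
    exec(cells[cell_name], namespace)

# Run top to bottom: the result matches what you'd expect from reading the code.
ns = {}
for name in ["cell_1", "cell_2", "cell_3"]:
    run(name, ns)
print(ns["result"])  # 20

# Re-run cell_1 out of order, then cell_3. The notebook *looks* unchanged,
# but the hidden kernel state now gives a different answer.
run("cell_1", ns)
run("cell_3", ns)
print(ns["result"])  # 10
```

The second run is exactly the reproducibility trap Barry describes: nothing on screen tells you which cell executed last.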
There's also a couple other big problems
with the notebooks that traditionally work,
which is one is scale,
which are traditionally run in in-memory kernels,
which is great if your data fits in memory.
It's very fast and snappy,
but it's awful if your data is bigger than that.
And in this cloud data era,
we are in a time when people are storing terabytes of data
in cloud data warehouses, and that in-memory model does not always scale very elegantly.
And then finally, I think there's a problem around accessibility.
Costas, you mentioned like there's a lot of people who have heard of notebooks,
but might not have used them.
A lot of times that's because they're very hard to use.
Like you have to like both understand these state and scale issues.
You also traditionally had to be able to install
a Jupyter or Python environment locally.
You're doing package management and environment management,
and you're trying to roll your own data connections
using SQL Alchemy, and the whole thing was
very messy and very difficult to use.
And you'll often see a new data scientist start at a company, and they've lost the first two weeks just trying to get all this working.
And so it's no wonder that there's millions of people working in data every day who haven't
traditionally been able to access these workflows. So that's sort of the background that we walked
into this with. We've been long-time Notebook users and I've used them for years and years.
And so we're very familiar with all these problems. And I think at least part of what we came at Hex with was: what if we fixed those things? I think a lot of people look at notebooks and they look at all these problems and they're like, well, we should get rid of notebooks, everyone should be writing things like scripts or whatever. We kind of came at it from a different angle. We were like, well, this is salvageable, and we can build a really good version of this, because this format actually rocks.
And so I don't call Hex, like, a notebook company, but I think at the core is that we built a really amazing experience around this notebook concept.
And there's a few parts to that.
So one, I mentioned this earlier, but we made it fully collaborative, online, hosted.
So no more setups, one click, create a new notebook.
We've managed the environment for you.
It's very, very easy to get started.
It's very easy to connect to data.
So we have built-in SQL cells.
We were really kind of the first to do this,
where it's very easy to set up a data connection.
You can write SQL right in your notebook,
immediately visualize your outputs.
In Hex, you can actually go back and forth
between SQL and Python or just work in one or the other.
So we actually sort of opened it up to this universe
of people who are SQL-first users or SQL literate
and maybe haven't necessarily learned Python.
And then around that state issue, we had a pretty big innovation here around what we
call a fully reactive execution model.
And that is where each cell in the notebook is treated as effectively as a node in a DAG.
So folks will be familiar with DAGs from a lot of tools at the ETL and orchestration layers, like Airflow and dbt, which are all sort of built around a DAG concept.
We bring that concept to notebooks, and we say each cell is really just a node, and variables that are referenced between cells, like x equals one, if I'm referencing x in another cell, that's a link, an edge between those cells.
Modeling a notebook this way, and by turning it into something reactive, which means if you modify one node, only the downstream nodes update, you actually get a lot of advantages, and it solves those three problems I mentioned.
It is much, much easier to reason
about the state. And so your state is always in a consistent, clean place and it's reproducible.
So it solves that problem. If you run something, it's always going to run the same way.
It's more interpretable. We have a nice DAG UI in Hex where you can actually see
your full flow of your logic and your project, which makes it very easy to literally visually
see what's going on. And it's way more performant. You don't have to restart and run all the kernel every time.
You can just change one cell and only the things that need to be updated will be.
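As a rough illustration of the reactive idea described here, and not Hex's actual implementation, this is a toy sketch: each cell is a node, a variable referenced across cells is an edge, and editing a cell re-runs only that cell and its downstream dependents. All the names are invented, and it assumes the cell graph is acyclic.

```python
import ast

class ReactiveNotebook:
    """Toy sketch of a reactive execution model: each cell is a DAG node,
    and a variable shared between cells is an edge between them."""

    def __init__(self):
        self.cells = {}      # cell name -> source code
        self.namespace = {}  # shared variable state
        self.run_log = []    # which cells actually executed

    @staticmethod
    def _defs_and_uses(code):
        """Find variables a cell assigns (defs) and reads from outside (uses)."""
        defs, uses = set(), set()
        for node in ast.walk(ast.parse(code)):
            if isinstance(node, ast.Name):
                (defs if isinstance(node.ctx, ast.Store) else uses).add(node.id)
        return defs, uses - defs

    def _downstream(self, name):
        """This cell plus all cells that (transitively) use its variables.
        Assumes no cycles between cells."""
        defs, _ = self._defs_and_uses(self.cells[name])
        hit = [name]
        for other, code in self.cells.items():
            if other != name and self._defs_and_uses(code)[1] & defs:
                hit.extend(self._downstream(other))
        return hit

    def set_cell(self, name, code):
        """Edit a cell: re-run only it and its downstream dependents."""
        self.cells[name] = code
        for cell in dict.fromkeys(self._downstream(name)):  # dedupe, keep order
            self.run_log.append(cell)
            exec(self.cells[cell], self.namespace)

nb = ReactiveNotebook()
nb.set_cell("a", "x = 1")
nb.set_cell("b", "y = x + 1")
nb.set_cell("c", "z = 40")          # independent of x and y
nb.run_log.clear()
nb.set_cell("a", "x = 10")          # re-runs only a and b, never c
print(nb.namespace["y"], nb.run_log)  # 11 ['a', 'b']
```

The point of the sketch is the last line: the edit to cell `a` cascades to `b` through the `x` edge, while the unrelated cell `c` is left alone, so the state can never drift out of sync with the code on screen.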
So this was a lot of hard engineering work on the back end and the front end that went into this.
But the net result is a product that really solves a lot of these problems around notebooks.
It's also just much more interpretable and accessible to a big population of people.
It's very, very interesting that at a lot of our customers, most of the users were not using Jupyter before.
These are people who were using SQL scratch pads, or they were trying to do their work in a BI tool or a spreadsheet.
They don't even have a baseline of understanding how Hex is better than Jupyter.
They just know, Hex is great and I like it. And so, you know, that was really always part of the mission for us: opening up access to these workflows
to a bigger group of people.
And it's very cool to see how
we've been able to bring the great parts about notebooks
to like a 10x bigger audience
than could have taken advantage of them before.
Yeah, absolutely.
So if I understand correctly, and that's also, like, the perception that I had about notebooks: they used to be a tool mainly for data scientists, let's say, not a tool for data analysts.
Okay.
Like, you had the data analyst who was mainly working in the BI tool.
And then you had more niche use cases with data scientists, or some other people that were using notebooks.
And you mentioned that you see a change there, like more and more people that didn't have access to that are using it now through Hex.
So who is like the typical user today?
That's one question that I have.
And the second question is, how does Hex fit in your organization compared to more traditional, let's say, data visualization tools, BI tools, you know, like Tableau and Looker, right?
Yeah, so it's really interesting what you mentioned, of notebooks being traditionally just for data scientists. It's kind of worth asking why. And I
think there's this sort of baseline problem that we see in a lot of places where like,
there's these tools that it's like, depending on what you're doing and what language you're in, you're jumping between different tools.
Like you got people who are just working in spreadsheets.
You got people who are in no code BI tools.
You got people who are in, like, SQL scratch pads or SQL IDEs.
And then you've got people over in notebooks.
And it's kind of these, like, artificial barriers just based on, do you know a certain language, or can you get a Python environment working locally? I don't think there's anything about the notebook format that makes it uniquely useful only for people who are doing modeling.
In fact, I think notebooks are probably even more useful for a lot of exploratory analytics workflows. They just weren't available to people, because they were super high overhead, hard to get started, and really only worked if you were working primarily in Python, which is very widely known, but not nearly as widely known as SQL in the analytics world.
So I think just in the first instance, we came at this and looked at this as like, wow,
notebooks should just be able to be used by a lot more people.
And so when you ask about the core users that we see at Hex, we have a ton, maybe
most, we don't have job titles for everyone, but like when we just empirically
talk to users or customers, like SQL first data analysts are a huge, huge part of
our user base, they get a ton of utility out of Hex.
In fact, it's awesome to be able to do SQL work in Hex, because one amazing thing is you can do SQL on SQL.
We call it chained SQL, or in Hex it's called dataframe SQL.
Really just means you can have one SQL query
and another SQL query that queries the results of it.
And for the SQL heads out there,
this is effectively what you might do
if you're using like CTEs,
like the with as statement.
In Hex, instead of writing like a three page long CTE,
you can break this up in the cells.
You can actually see the results of each step.
You can have them chained together.
So it's much more elegant, much more powerful.
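To make the chained-SQL idea concrete without Hex itself, here's a hypothetical sketch using Python's built-in sqlite3. A temp view stands in for the intermediate result each SQL cell would hand to the next; the table and column names are invented for the example.

```python
import sqlite3

# Toy data: a single orders table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer TEXT, amount INT)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("ann", 50), ("ann", 70), ("bob", 30)])

# The one-shot CTE version: every step buried in a single query.
cte = """
WITH totals AS (
    SELECT customer, SUM(amount) AS total FROM orders GROUP BY customer
)
SELECT customer FROM totals WHERE total > 60
"""

# The "chained SQL" idea: each step is its own cell, and the next cell
# queries the previous result, so each intermediate result can be
# inspected on its own.
con.execute("""CREATE TEMP VIEW totals AS
               SELECT customer, SUM(amount) AS total
               FROM orders GROUP BY customer""")
step1 = con.execute("SELECT * FROM totals").fetchall()  # inspectable midpoint
step2 = con.execute("SELECT customer FROM totals WHERE total > 60").fetchall()

assert con.execute(cte).fetchall() == step2  # same answer, two workflows
print(step2)  # [('ann',)]
```

Both paths compute the same thing; the difference is that in the chained version you can look at `step1` on its own, which is exactly what breaking a three-page CTE into cells buys you.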
And so there's all sorts of great things we've been able to bring to that sort
of like analytics workflow that has nothing to do with like ML modeling or
I think what traditionally got labeled as quote unquote data science.
And so that's very, very cool for us to see.
On the other hand, people who are Jupyter users,
people who are Pythonistas,
who spend their day building models in Jupyter,
Hex is familiar and powerful.
It has all the best things about notebooks.
We fixed a lot of the problems.
So at Hex, you really think of the product
as this low floor, high ceiling product
that should be accessible to a much
bigger population of people, but not artificially constrain you, which is what you've seen in the last generation of products, and why you have those five different tools and you're jumping between them.
It's either low floor, low ceiling or high floor, high ceiling.
With Hex, we really think that people should be able to collaborate between these personas
and these workloads in a much more seamless way.
And so to the second part of your question about where it fits in, at most of our customers,
we are deployed alongside or are very complementary to a traditional BI tool like Tableau or Looker.
These are products that have, I think, a pretty specific and well-understood mission around
like, I want to build some point-and and click dashboards that people can go and look at once a week.
I want to, you know, maybe there's a bigger population of, quote-unquote, non-technical people, non-data people, who want to be able to go point and click and look at some metrics.
Those workflows are important.
They have their place.
But if you go talk to most folks who are in a data analyst or data scientist, or even
just much broader population of people who are just data literate, they're not actually spending
their days in those BI tools. They're spending their days in notebooks or SQL IDEs, or in many cases winding up back in spreadsheets, or they're dumping data out, because these BI tools are really
not built for deep exploratory workflows. They're not built for the type of flexible off-roading analysis that a lot of people are looking to do.
And they're certainly not built for what you would traditionally think of as data science.
And so Hex really fits a big gap alongside these.
Now, what we see with most of our customers is, alongside these, there are a lot of workflows that used to wind up in BI, kind of shoehorned into BI, that now can live much more natively in Hex.
So I don't think of us as competing
with traditional BI,
but we do wind up having a bunch of workflows move over
and in some ways take some pressure off those tools
to be like the everything to everybody.
I mean, what you'll see a lot of people do with, like, Tableau or whatever, it's like, it's the only way I can build a chart that I can share and give to other people to visualize.
So they have these really contrived workflows, like getting data into a chart in Tableau just so they can have a UI on it.
Hex is a much better native way to do a lot of the things those people are trying to do.
So at least today, they're very complementary.
Okay.
That's super interesting.
I'd love to hear from you also,
what do you think about the future though?
Because the BI market, I mean, after the acquisition of Looker, we haven't seen that many things happening there. There was consolidation happening.
We had Sisense and Periscope Data getting together, which was interesting also because you had a tool, Periscope Data, primarily used by more, let's say, data science people, merging with more of a BI tool and trying to, like...
We have a lot of ex-Periscope customers and team members now at Hex.
So yeah, very familiar.
Yeah, yeah.
So how do you see the future?
And like, do you feel like Hex
is part of a new wave of innovation?
Yeah, absolutely.
We see ourselves as very,
very much part of that.
I think, I wonder sometimes
how useful the term BI even is.
It's like, it's either extremely broad
and it encompasses everything
that's, you know, a chart.
Or, you know, you can also describe it a little more narrowly and be like, it's referring to a class of reporting dashboard tools like Tableau and Looker.
Either way, I think we're coming into a very dynamic new phase with a lot of upheaval.
And there's a couple of really interesting trends here. One, I think you're just seeing a much, much broader set of
people who are what I would call analytically technical or data literate, where they can think
about data in more sophisticated ways. They can reason about tables and relationships between
things. They, in most cases, can actually write SQL. And we see this not just with quote-unquote
data people, but we have a lot of Hex users who are PMs or marketers or salespeople.
We see all sorts of different personas using Hex.
So this population is really growing quite fast.
And these are people for whom a lot of the work they want to do just does not fit well in that traditional BI paradigm.
Second, there are these bigger secular trends we've seen, one being the advent of the cloud data warehouses allowing data to be available at these bigger scales.
I think there's a new set of assumptions
around what analytics tools
that fully embrace that look like.
And then I think,
depending on who you're talking to,
it's called the metrics layer
or the semantic layer or something,
but there's almost a disaggregation, an unbundling of BI that seems to be happening, where BI that traditionally included both the visualization layer and the metrics and modeling layer is sort of coming apart.
And there's companies like Transform and dbt and others that are really looking to sort
of have that metrics layer either be standalone or integrated into the bigger data transformation
pipeline. I think that's going to actually have a lot of big downstream effects. And
we partner very closely with the folks at dbt and the folks at Transform, and Hex has integrations with both of them. I think as you see this part of the stack come to fruition,
I think we're going to see a lot of interesting things happen. And candidly, I think it has
implications for our product where
you can imagine being able to, in a Hex project, in a Hex notebook, effectively be able to have
the first cell instead of it needing to be a SQL query or pulling some data in via Python.
It can be a metric cell where you're able to pull in data from one of these sources and start
working with it downstream. At that point, you know, what is the line between that and BI? What does this mean for BI, and for how these different layers of the stack are configured?
I think there's a lot of interesting questions
and we have a lot of ideas on where this goes.
And so we're very excited about the next chapter of this.
And we're very focused on a bunch of the things
we want to go build around that over the next few months.
Awesome.
And I'm looking forward to seeing what you are going to be releasing in a couple of months.
All right.
I have monopolized the conversation, so I think I need to give some time and space also to Eric, because I see he has some questions that he wants to ask.
So Eric, all yours.
Well, I'm controlling the recording.
So, there we go.
Which is always super exciting when Brooks is away.
I'd like to cover one last subject.
So, Barry, we've been talking about analytics and ML as sort of two distinct,
you know, separate workflows,
separate teams, even in many ways,
like you were talking about Hex users, right?
And saying, well, you know,
a lot of our users, like,
hadn't ever even used notebooks before, right?
Maybe most of them at this point.
Which is super interesting.
And so I agree, like, If you think about the discipline of analytics and building ML models, they are distinct, right? There's overlap, right? But
even sort of the workflows and a number of other things like that, right? But how much crossover
are you seeing, right? So if you think about building an ML model,
like a huge part of that workflow
is getting the data right
in order to be able to actually build a model, right?
I mean, you sort of have to have that as a starting point, right?
And so if you think about traditional BI,
you're sort of prepping data
to show up in a dashboard and look at your tableau and great, right? People can click around and sort of learn what they want to learn. you it's like you're there's like it's bleeding over heavily into like okay well you're basically
you know halfway there to sort of doing the work the workflow that's required for model building
as well so it was really interesting for me to hear that like a lot of you just hadn't used
notebook so two questions do you think that that venn diagram will have increasing overlap over
time and then two do you think that's a good
thing? Is that something that you want or even are building into Hex? Yeah. So, wow. That's an
awesome question. I would say I've got a general philosophy of like, I do think there are two
distinct parts in what we talk about, like the data world at large,
of analytics and then ML engineering.
Like I think with analytics,
your deliverable, your end result,
your job to be done is like,
you want to influence a decision.
Like you're going to try to find,
ask and answer some question,
and you want the results,
the answer to that question to influence a decision.
And to take a bigger picture of that, we say you're trying to contribute to knowledge: how does your organization know what's true, and then be able to make decisions based on that?
That is kind of fundamentally the story of analytics. And it's important and it's big.
And there's a lot of people who do that every day. And I think that's just going to become
more part of the firmament of how everyone does their job. On the other side, you see ML engineering
and the deliverable there is like a trained model.
It's like a prediction.
It's like often it's like an endpoint.
You see a lot of ML models being developed
to be able to run online
where it's like a real-time fraud prediction score
or it's for something like that.
That is a very different world.
It's a very different tool chain.
And there's a lot of really interesting stuff
over in that side of the camp around model training,
hyperparameter tuning, and MLOps,
and deployment, and monitoring, and understanding drifts,
and scaling models up.
There's all sorts of really interesting stuff over there.
Now, you know, clearly these two camps share some firmament.
Like you're, you know, in both cases, you're using data
and then you're probably sharing data infrastructure.
Maybe the data in both cases is coming out of a warehouse.
Maybe you're using something like dbt to do data prep
and transformation for both of them.
But I really do feel like these are separate workflows.
And one interesting point here is like, you know,
this title data scientist, I think has often been conflated
or maybe spans both of these.
But if you talk to 10 data scientists, I think seven of them are really doing analytics.
And they might be doing very statistically rigorous analytics.
They might be doing pretty interesting predictive techniques in analytics, but fundamentally
they're there to help influence a decision.
And I think the people who are doing more of the ML engineering, I think they're actually just starting to call themselves ML engineers, actually.
And their workflows wind up looking a lot more like the software engineers' workflows.
So this is sort of a macro theme of where I see this going.
Now, it is true that in the analytics,
I think there are really interesting opportunities to bring ML techniques into the analytics.
But that doesn't mean that it's converging with ML engineering. And I think as you were alluding to in ML engineering workflows,
often the early parts of those workflows are doing some analytics to understand the data,
whether it's understanding maybe a problem that exists that you want to solve with your model or
to do data prep and understand it. So we do see a lot of ML engineers use Hex in that phase of their work,
but it has not been a focus of ours at Hex to like go into like a full stack
ML platform.
There are a bunch of really great tools, whether it's MLflow or Weights & Biases or all sorts of other things, that are really built around that.
And so these worlds have some connectivity, but I think it is important to understand
where you're focused and as a product where you want to succeed.
We are all in on analytics.
We think that it's a huge market.
We think it has a lot of unsolved problems.
And I'm very proud of having built a product that is just, I think, really accelerating
and improving the analytics workflows of thousands of people every day now.
Yeah, I think that's a super helpful distinction between sort of statistically rigorous analytics and ML engineering.
Yeah. And maybe another way to rephrase what you were saying is, it's actually delivering a model, or the results of the model, as an experience, right? Because in order to actually take that and deliver it as part of whatever's happening, like a recommendation on a website or whatever, it is software engineering, right? There's literally a development life cycle and a lot of software infrastructure required to actually take that and deliver it.
Yeah, these people are typically using IDEs, they're deploying their code through the SDLC, they're running CI/CD. There's a whole world in MLOps now around deployment and monitoring, and it's got its own similar and parallel world as DevOps. Now, I do
think there's some really interesting opportunities to bring software engineering
best practices to analytics. And I think we've seen this
in a lot of the "everything as code" movement. I think one of the best parts of dbt, in my estimation, is that you manage everything through pull requests and it's all code. And there's a lot of great things there. But then that's separate, to me, than the story of, you know, sort of making a lot of the analytics workflows easier.
For our listeners who work on analytics, you know, or work on sort of data engineering workflows or data teams that are part of analytics workflows:
If you could give them sort of one piece of advice, maybe especially the ones who are earlier
in their career, you know, as a practitioner who's now building a product and sort of serving people, you have a unique perspective on that.
And maybe you could give a couple pieces of advice because just one is kind of tough.
Yeah, well, I actually do have sort of one big piece of advice that I keep coming back to.
And it's, you know, at Hex, we think a lot about how data teams could be more impactful.
You know, companies are investing a lot of money in their data teams.
They want to be able to get some impact out of that.
They want to make sure it's moving the needle.
You'll see this show up a lot where people ask, you know, like, what's the ROI of a data
team?
What's the ROI?
As if that's something that can just go be like penciled out.
And, you know, you'll get into these weird exercises where you're like, well, we built this model, or we built these five dashboards, and we think they helped do this thing 10% better. And it's like, you need the data scientist to answer the question about the ROI of the data scientist.
Self-quantifying, yeah.
And really, I think, I understand why it happens, but it's kind of silly.
And so my big piece of advice
to people who are on data teams
or starting data teams
or running data teams
is the way that you're actually
going to feel that impact
is if your data team is embedded
and aligned with the actual
functional people in the business.
I think the last thing you want to do is set up an ivory tower, where they just sit together, they only talk to themselves, they're off doing sort of, like, R&D, and the results and the things they're building aren't necessarily influencing decisions.
And so it's, we built this model and it makes this prediction, and what was the impact of it? Well, it would be this. And you get that a lot with folks.
And I think personally,
this is kind of an org chart thing,
but I think like folks on data teams
should be really closely aligned.
I think they should be planning their work
and they should be embedded with teams.
And this even gets down to things like how you're setting OKRs and goals. I kind of don't believe that data teams should have big sets of their own OKRs. I think that individuals on data teams should be accountable for OKRs, and sharing OKRs, with folks on marketing or ops or product or sales. And I think if you do that, when you're asked about the ROI of your data team, you as the head of data or the data scientist or
when you're asked about the ROI of your data team, you as the head of data or the data scientist or
data practitioner don't have
to go and try to pencil it out, you can redirect that person to those stakeholders.
Because if you're doing your job well, the VP marketing or the product manager or the
head of ops should be the one standing up and saying, oh no, we couldn't have done this
without Amanda embedded with us.
This was a huge part of our ability to go and solve this.
In fact, you should have those stakeholders advocating for you to have more headcount.
And so I think that's really important.
I think that what we've lost is, when people are doing a lot of data work, asking: how is this actually going to get used? Is this actually going to move the needle? And is the work that I'm doing closely aligned with the needs of the business? It's such an important thing, and I would encourage people to really just stay focused on that. And we have, I think, some small part in that.
And I think we help make that data work more useful and usable and easier to share. I think
we can help influence that. But I think it really needs to start with how you're thinking about the
role of your data team and how it's organized within the company. Love it. Amazing advice. All right. Well, we are over time. Sorry, Brooks.
But Barry, this has been an incredible conversation. Thank you.
It was my pleasure. Thanks for having me on and really enjoy the show and I hope folks enjoy the
episode. Costas, my big takeaway, well, there's so many actually. I'm going to actually show because I've already broken so many rules with Brookscon, I'm going to actually have some self-control here and only have one takeaway.
So the commentary around the distinctions between work in analytics and work in ML was really helpful. We didn't talk about that for a super long time,
but I thought it was really helpful how he pointed out that in many ways,
you know,
I think he said seven out of 10 data scientists,
if you talk to them about what they do,
you could really roll a lot of that into actually like analytics work,
you know,
and it may even be predictive analytics, but it really
sort of falls on the analytics side of the house. And that was just very helpful as you sort of look
out at the landscape and job titles and all the gray areas and crossover. I just thought that was
a really, really helpful perspective. How about you? I think it's pretty hard for me to come up
with just one thing that I'm going to keep from this conversation.
Overall, it was a great conversation.
We talked about, first of all, the advice around building a product at an early stage.
That was great.
We talked a lot about notebooks.
Hopefully, more and more people out there will hear about them and give them a try.
It is a very interesting, let's say, computation model and companies like
Hex really innovate on them and make them more accessible, and we should be
able to consume data in a more exploratory and narrative way, right?
And that's great.
That's something that is missing from like the BI tools out there.
And yeah, like that was great.
And those are all the conversations
that we had around like BI
and the next wave of innovation there.
So yeah, I'm really looking forward
to having Barry again on another episode
in a couple of months
and get even deeper into these questions and more about visualization
and data platform, notebooks, and beyond. Absolutely. All right. Well, thanks for
joining us again on the Data Stack Show, and we will catch you on the next one.
We hope you enjoyed this episode of the Data Stack Show. Be sure to subscribe on your favorite
podcast app to get notified about new episodes every week.
We'd also love your feedback.
You can email me, Eric Dodds, at eric@datastackshow.com.
That's E-R-I-C at datastackshow.com.
The show is brought to you by Rudderstack, the CDP for developers.
Learn how to build a CDP on your data warehouse at rudderstack.com.