The Data Stack Show - 26: Democratizing the Insurance Market with Daniel Gremmell from Policygenius Inc.
Episode Date: February 24, 2021

On this week's episode of The Data Stack Show, Eric and Kostas are joined by Daniel Gremmell, head of data at Policygenius, Inc. Policygenius, an insurance marketplace, strives to make it easy for people to understand their options, compare quotes, and buy a policy all in one place with help from licensed experts.

Highlights from this week's episode include:

What brought Daniel to Policygenius and how his background in industrial engineering and statistics impacts what he does (1:49)
Policygenius consolidates carriers and pairs insurance customers with live experts to get the best prices and plans (6:29)
How data analysts and data scientists reshape the customer experience of selecting insurance (10:36)
How roles and titles like "head of data" are changing the industry (24:32)
Organizing a company with structured embedding (27:28)
Policygenius' data stack (31:31)

The Data Stack Show is a weekly podcast powered by RudderStack. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.

RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack, visit rudderstack.com.
Transcript
Welcome to the Data Stack Show, where we talk with data engineers, data teams, data scientists,
and the teams and people consuming data products.
I'm Eric Dodds.
And I'm Kostas Pardalis.
Join us each week as we explore the world of data and meet the people shaping it.
We have Daniel from Policy Genius on the show today.
My burning question for Daniel is really his opinion on titles around data in leadership roles. So we've increasingly seen even C-suite level data titles, which is really interesting.
It hasn't been that way for a super long time.
So I'm just interested in his opinion on that
because he has such a wide purview
over the different data functions.
Kostas, what interests you?
What's the one question that you want to get an answer to?
Yeah, I really want to learn more about how data
and data analysts, scientists, data engineers,
and this whole new organization interacts with
products. I think it's a very good case because they are a B2C marketplace, which means that
they have to deal with a lot of data. So I'm pretty sure that product and data analysts are
working very closely together, and I have quite a few questions around that that I'm really looking
forward to the answer from Daniel. All right, let's talk with Daniel and get our answers. All right, welcome back to the Data
Stack Show. Really excited to have Daniel from Policy Genius on the show. Daniel, thank you for
joining us. I appreciate it. Thanks for having me. Absolutely. Well, we're excited to chat. Before we
get going, we'd just love to hear a little bit about your background and what
led you to PolicyGenius and what you do there.
Yeah, absolutely.
So I'm going to go in reverse order of that question.
So I'm the head of data here at PolicyGenius.
I've been at PolicyGenius for the past year, and I came on to help us build out and expand
our data capability.
So I'm the first head of data that we had.
When I came on board, I started with two folks and we've since built a team. At Policy Genius, the data team
oversees data engineering, data analytics and analysis, and then data science and machine
learning. So we have various roles from data engineers, data analysts to data scientists.
So prior to that, I was at Plated and I was VP of data science. Plated was the old
meal kit company. And so it was food and supply chain problems, which was super fascinating.
And then even going further back before that, I worked in many different industries in data leadership roles, including publishing, aerospace, automotive, healthcare a little
bit.
And so it's been interesting to work in a lot of different fields and get a lot of industry
experience.
My training is in statistics. So I did a master's at Rochester Institute of Technology in statistics with a focus in machine learning. And before that,
I was an industrial engineer and have a background in industrial engineering as well.
Very cool. It's fascinating to hear about the various backgrounds of guests on the show who work in the data space. Well, quick question on that. What types of things did you do in
industrial engineering, just out of curiosity? Yeah. So just a little more context to that. I
mean, data science didn't always exist. Before there was data science, there was statistics.
And statistics is one of those things that's embedded across many different professions
and industries. Actually, my first degree, my bachelor's is in operations management.
And so early, early on in my career, I worked in operations management at both the manufacturing and distribution level.
I then moved into industrial engineering because I was fascinated with solving and crafting systems
to help people become more productive and efficient as opposed to just directly managing
folks. So naturally when I moved into engineering, if anybody has ever gone to school
for any type of engineering, mechanical, industrial, etc., there's a lot of math. And so that's where I
found my love for mathematics, statistics, and then eventually went back to school for my master's
in statistics, and hence how I ended up in analytics, data science, etc. So yeah, that's
kind of the pathway and how I got there.
I think all of those things kind of led into each other, but yeah.
That's very interesting, Daniel, what you said about engineering disciplines in general. That's also my experience. I remember when I started studying electrical and computer engineering,
and by the way, my dream was to do that because I only cared about computers, right? And I remember that at some point, I realized that the early semesters were almost half math and physics. And I was like, okay, when is the fun stuff going to start? But that was when I was still a teenager, right? After I finished and graduated,
I really appreciated all the exposure
to both disciplines, both physics and mathematics.
And it's one of the best things
that at the end you go through
because of following an engineering discipline.
So that's a great point, what you talked about.
I mean, that's super fascinating.
And yeah, I mean, when you go through
in engineering, et cetera, absolutely right.
Like when you take your first course
on differential equations and whatnot,
you're like, oh my God, like this is just insane
or calc four, calc three or something like that.
You're like, oh man, what is all this integration about?
But yeah, I think what attracted me to analytics
and data science in general, and especially more on the applied side, is just being able to take all that math and actually translate it into something you can see and touch that actually solves real problems.
That's where it becomes super fascinating.
And luckily, you know, when I did all my degrees, I was working full time.
So I was able to take directly what I was learning in school and apply it on the job, which was always, always fascinating to be able to do that. Yeah. That's super interesting. You know,
looking back not too long ago, I was thinking about, you know, my coursework in college and
I have a business degree in marketing, but I took a lot of statistics classes because I really
enjoyed sort of the idea of being able to answer questions with math, you know, and statistical
significance.
But I was thinking about my favorite classes in college. I think someone asked me that question.
And I said, it'd probably be a tie between statistics and consumer behavior.
And then I realized, oh, I guess I work in data. That makes a lot of sense. Like that's kind of interesting. So the application there of the math is really interesting. Well, getting back to Policy Genius, can you just give us a little overview of the business and the problem you solve?
Yeah, absolutely. A hundred percent. And definitely feel free to go to our site and
read more if you're listening. But yeah, so we are an insurance marketplace. That is definitely
the best way to describe us. And basically the way that our process and business works is we make
money as an insurance broker. So if I can describe the process to you, we have multiple products on
the insurance space, life insurance, home and auto insurance. And really the way the process works is
folks begin their journey online. And usually they come to us to compare options, compare prices.
There are people that are generally curious about the life
insurance or the home and auto insurance industry, and they're trying to get the best product they
can. So they'll come to our site, they'll explore, they'll come through our product funnel. And then
as they get through our product funnel, we're giving them education, information, and collecting
info about them. And then once they get to the end of that online funnel, we connect them with one of
our live agents. So we have a series of live agents that we work with.
They work directly for us.
They're licensed insurance brokers.
And so they can definitely partner with you to get you the best coverage possible as far
as life insurance goes and home and auto insurance.
And so they'll have those deep conversations with you.
They'll help you.
On the life insurance side, they'll collect some of your health information, et cetera.
And then basically they'll compare rates, help you select a carrier,
and then they manage the process of getting your information over to that carrier for underwriting,
et cetera. So we do still have to go through carriers and whatnot to get a policy actually in force. We don't actually put policies in force ourselves, meaning we don't insure you. The carriers do that;
we'll facilitate that process. So there are some benefits of actually coming through us versus going directly to a carrier. If you go directly to a carrier, they're going to give you a quote. And then when
you decide to move forward on that quote, they're going to put you through a process called
underwriting. So in underwriting, what naturally happens is you apply, and if something comes back, your price could actually change. And then that's the policy that you have to choose from, and that's what goes in force. What we're able to do is, if we put you through to an insurance company and your application comes back with your price adjusted, we're able to compare that with other carriers before we actually choose to put your policy in force with that carrier.
We do that in conjunction with you. So that's really the value that we're adding as a marketplace
is we're consolidating all the carriers that you can choose from. We're providing you the education
and the support to help you make the decisions that are best for you. We're not biased by
commissions, et cetera. And then we're also looking out for you to make sure that you get
that best price possible so that you're not going at it alone or having to work through an individual
carrier. So in general, that's the process and that's the
business model. And again, we do that on the life and home and auto side. We also look at ourselves as
more than an insurance company. So we also look at ourselves as a financial services company.
So we have products like wills and trusts that just rolled out last year on the mobile app.
And that allows you to just go on our app and go through that process and get a
will in place that covers you and your family in the event that something happens to you. That is
normally a very complicated and expensive process that folks have to work on with lawyers and legal
counsel. And we've taken all that and productized it into an experience and it's legally binding.
We've consulted with numerous lawyers and law firms around the country, based on state, et cetera, to provide that service. And so that's just one pillar of the financial services strategy that we've implemented beyond insurance. Got it. We have so many questions to ask you and so many things I'm
interested in, but let's start with, you know, one pattern we've seen on the show is that
a lot of times our guests work in a context where there's sort of a traditional process and they're using technology to help reshape that process.
You know, which it seems like is what Policy Genius is doing. With the data and the ways that it helps shape the experience, as head of data, how do you view the role of data in the process of reshaping that experience? And
how do you use that to create and sort of inform the customer experience? Yeah, that's a really
great question. So some of it comes back to how we structure the data team to kind of answer your
question. We're a product focused company. So what that means is that we are continuously looking to improve our customer experience
and translate that into gains for the company to help us grow and scale.
So we have a series of roles from data engineers, data analysts, data scientists.
They all kind of interact with the product, contribute to the product in different ways.
So starting with data analysts, data analysts are embedded directly into our product teams
working closely with product managers and engineers.
And they contribute to the product process and the customer experience in two ways.
Number one, they participate in discovery through contributing with research.
So if you've ever worked on a product team, you know, Kostas knows this as leading product
for your company.
You have designers, et cetera, who are helping with UI research,
customer experience, and designing what the customer sees.
A really good way that we think about data analysts is they're kind of like a designer,
but on the quantitative side.
So all of their research is quantitatively driven.
They're looking for trends and patterns that we've seen
through how our customers interact with our site,
through where we're losing them in product funnels.
And they're taking those insights and questions and they're developing and crafting research
around that that helps influence the product experience that we're going to show customers
as they come through, or even on the backend side, how agents kind of work through their
process when we get customers on the phone.
So that's on the discovery piece.
It's heavily driven by quantitative research.
The second way an analyst contributes is on the delivery side. So whenever we find a feature that
we want to develop, we develop that feature. And then most of the time, I would say 90% of the
time, we're going to A-B test that feature and look for impact and how it affects customers.
And we're trying to learn. So we have a strong test and learn culture and process in which we want to try new things, interact with our consumers and really understand if it's
working for them, if it's benefiting them in the experience. So we do controlled A-B tests.
Our analysts help lead that process by structuring the test, helping out with the sample size and power calculations, and helping ensure that it's properly scoped and we're measuring the right outcomes. So that's one way we contribute to the product experience and what the customers see: by leading that experiment process.
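(As a rough illustration of the sample-size and power calculation described above, here's a minimal sketch in Python using statsmodels. All of the rates and thresholds are hypothetical placeholders, not Policygenius numbers, and the library choice is an assumption rather than anything mentioned in the episode.)

```python
# Hypothetical example: solve for the per-variant sample size of a two-sided,
# two-proportion A-B test at 80% power and a 5% significance level.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.10  # assumed current conversion rate (placeholder)
target_rate = 0.12    # smallest lift worth detecting (placeholder)

# Cohen's h effect size for comparing two proportions
effect_size = proportion_effectsize(target_rate, baseline_rate)

n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    ratio=1.0,  # equal traffic split between control and variant
    alternative="two-sided",
)
print(f"Need roughly {int(round(n_per_variant))} users per variant")
```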
The second way that we contribute to the customer experience, and what folks end up seeing, is on the data science side. So admittedly, it's a little bit new, and we just built that team last year, so we're still working on use cases. But we use machine learning to help influence and drive the product experience and process. That includes use cases around personalization, propensity modeling, and routing, as well as anything we can do to augment agent efficiency and the customer experience in the physical process as well. So in a nutshell, those are some of the things and the ways that we contribute to the customer experience. Oh, this is great, Daniel. I have
a couple of questions actually. And I'd like to start from what you described around how the
analysts work together as part of the product team. And I think that's an amazing metaphor that
you used there, about them being like designers. Can you help us understand a little bit better
how and at which points the analyst works together
with the product team in evaluating the features?
I mean, you talked about the A-B testing,
but before you reach the A-B testing,
I'm more interested in,
let's say we create a new feature, okay?
We need to measure some things that are going to be used afterwards, through the A-B testing, to figure out what works best.
How does this work in your organization?
Like who is responsible for that?
Like who is responsible for what's going to be tracked and why?
And how is this communicated with the analysts?
And what's the process there?
Can you help us understand this a little bit better?
Yeah, 100%.
I know this quite well because when I got to Policy Genius,
this was one of the first things that I worked on
in collaboration with the product
was really standardizing this process.
And so in my experience, when it comes down to it,
if you think about accountability and whatnot,
the entire product team is accountable
for the results of that experiment.
So that's the way we think about experiments from the get-go.
That whole team is accountable for ensuring that we're developing experiences that are beneficial to our customers and are value-added.
Now, from there, through the experimentation process, our PMs lead the development and really define what it is we're trying to test.
And so they're just kind of the centralization point for that piece of the process, meaning that they take our company strategy.
They take the insights from analysts and designers, as well as the rest of the team to really define what it is we're building
and why we're building it and what hypothesis we're trying to validate or invalidate.
From there, there's a strong collaboration point that happens with our analysts.
And what our analyst does is they're going to go through the mathematical side of this,
as well as align with our PMs on the primary metric that we're trying to drive impact to.
And again, this happens much more naturally than maybe I'm describing it, just because these folks are embedded. They're working together
on a regular basis. They're going through and sharing context on a regular basis. So it's less
like check off the process box and more like these natural synergies occurring. So there's alignment on
that primary metric that happens usually between the PM and the analyst. And our analysts will come with recommendations on a regular basis for better
indicators of success of that experiment, depending on what we're driving. Yep. And then from there,
it's a joint process kind of between engineering, getting the change finished, the PM kind of
managing the rollout, and then the analyst doing the monitoring of the experiment when it's live.
And then to wrap up the process, the analyst and PM are keeping an eye on the new
feature as it launches to make sure we're not seeing massive drops or anything that would be
detrimental to the experience. And then once we hit our sample size, our analyst will do that
final analysis and provide the final result as to whether or not the test was successful.
And one thing I forgot to mention earlier on in the process: right before we launch anything, part of our process is PMs and analysts working together on the decision criteria. What's really important about any A-B testing process is, based on the outcome of the experiment,
what are you going to do and how are you going to action what happens? And so normally it's pretty simple and pretty standard: if you see
a significant increase, you roll out the experience. If you see no effect, you might iterate. And then
oftentimes if we see a detrimental effect to the customer experience, then it's kill the feature or potentially iterate. But we align on a lot of that upfront; that way we have a blueprint for when the test
actually finishes. So I think my answer was probably a little more nuanced and complicated
than maybe you thought.
But the answer to your question is
the whole product team is accountable
for the experience and the experiment itself.
But then there's definitely places of centralization
that occur within the process.
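(To make the "decision blueprint" idea concrete, here's a minimal sketch of what such pre-agreed criteria might look like in code. The thresholds and actions are hypothetical illustrations, not Policygenius's actual policy.)

```python
# Hypothetical example: encode the roll-out / iterate / kill decision criteria
# before the experiment launches, so each outcome maps directly to an action.
ALPHA = 0.05  # assumed significance level (placeholder)

def decide(p_value: float, lift: float) -> str:
    """Return the action agreed on up front for a finished experiment."""
    if p_value < ALPHA and lift > 0:
        return "roll out"         # significant improvement
    if p_value < ALPHA and lift < 0:
        return "kill or iterate"  # significant harm to the experience
    return "iterate"              # no detectable effect

print(decide(p_value=0.01, lift=0.03))  # -> roll out
```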
Oh, that was great.
That's exactly what I was looking for, to be honest.
I'm very interested.
I mean, I'm coming more from a B2B space.
So we will also discuss that
because I have some questions to ask about it
based on your experience.
So I'm super interested in how this looks
in the B2C environment
where you have actually a lot of data to work with.
But before we go there,
do these experiments ever fail?
I mean, can you reach like a point
where actually the output of the
experiment is we cannot decide between A and B, or you have to go back and see what went wrong
with the experiment itself and not with the feature? Is this ever the case? Yeah. So, I mean,
to answer that question, I think the best way to answer it is if you never have experiments that
fail, that means that you're not testing
aggressive enough changes, right?
The whole point of a test and learn process
is you're trying to learn and fail fast.
So, I mean, yeah.
I mean, there are often times
where tests are successful.
There are times when tests are non-significant.
And then there are times where tests just fail.
And that happens in any company
that I've worked at and experimented with.
And the idea is you're trying to push aggressive changes out there. You're trying to overhaul what the
customer sees and really try to find something that's better. Rather than testing really small
incremental things, you get there faster by taking bigger jumps. So that's the idea. And that's what
we try to do. So we do see tests that are not significant or tests that fail. And that's where
that decision-making criteria really comes in. Because if you don't align that up front,
you might get into this horse-trading, ambiguous world afterwards in which,
okay, well, this wasn't significant.
Should we roll it out or should we not roll it out?
Usually there's discussions that happen beforehand
and things that you want to align on based on what the feature is.
And honestly, with some features, we're kind of testing for parity.
Sometimes we are testing for a non-significant effect.
If we roll out something that's aggressive, that's beneficial to our process or something
like that, if it's not significant, sometimes that's a good outcome for us and we will roll
that out.
So it really depends on the nature of the test and what we're looking for.
This is great.
And the reason I ask you this question is because I have the feeling, and I've had the same feeling for a long time, to be honest, that A-B testing, for example, is a process where at the end we are going to say, okay, we are going with A or B, right? But what I think most people forget is that data cannot always give answers. And in my opinion at least, one of the responsibilities that the analyst has inside an organization and team is also to point out when we can trust and when we cannot trust this data. And based on that, iterate and fix the problem, or try again later or whatever, trying to figure out how to solve that.
So yeah, that was great.
So question about B2B now.
I mean, B2C has access to a lot of data, right?
If things go well, you will probably be interacting with thousands of users. You have like thousands
to millions of data points. And that's quite important when you do statistics. In B2B,
you have the opposite. You don't have that access to the same amount of data. Based on your experience, how
do you see the techniques that you are using right now in Policy Genius work and what does not work
in a B2B environment? What's your advice around that? How should someone in a B2B environment
apply statistics and analytics to drive the product? Yeah, 100%. That's a great question.
Before I answer that question, I'm going to jump back to your last question for a second. You pointed something out that I forgot
to hit on. So talking about failed tests, again, for one brief sec, what also happens for us,
like if we have an experiment that doesn't go as planned, our analysts do what we call like
secondary follow-up analysis. So, you know, normally when you test something in an A-B test,
you're testing like an aggregated metric. Well, if that metric doesn't turn out like we expect, our analysts will dive in and do
much deeper analysis and modeling around that to really understand how different consumers
were behaving when they experienced that feature.
And we use those insights to help us drive the product experience or iteration going
forward.
So that's just a little more nuance into some of what we do for failed experiments.
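(A minimal sketch of what such a secondary, segment-level follow-up might look like with pandas. The column names, segments, and data are hypothetical placeholders.)

```python
# Hypothetical example: break a flat experiment result down by segment, since
# an aggregate metric can hide a lift in one segment offset by a drop in another.
import pandas as pd

# one row per user exposed to the experiment (toy data)
df = pd.DataFrame({
    "variant":   ["control", "treatment", "control", "treatment", "treatment", "control"],
    "segment":   ["mobile", "mobile", "desktop", "desktop", "mobile", "desktop"],
    "converted": [0, 1, 1, 0, 1, 1],
})

# conversion rate and exposure count by variant within each segment
by_segment = (
    df.groupby(["segment", "variant"])["converted"]
      .agg(rate="mean", n="count")
      .reset_index()
)
print(by_segment)
```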
But to answer your question about B2B, it's a great question.
And really, when you think about B2B and lower traffic scenarios, you have to call back to
your statistical training.
So yes, in the B2C world, we're able to use tests that we're able to get sufficient power
on.
And even with our traffic, sometimes tests need a little bit of time to run. In the B2B world, you often have a different challenge where
you just have smaller sample sizes. And so the challenge becomes, okay, well, I can't use just
regular hypothesis tests. So you have to kind of adapt and use either tests that are more robust
to lower samples, something like a Fisher's exact test or something of that nature, or you have to use models that have a higher degree of power, right? And so that means that you're able to
build a little more rigor into some smaller sample assumptions, and then kind of use those to
extrapolate out and then implement those changes. I mean, if you think about, you know, back in the
day, the t-test itself was actually invented by Guinness. And the reason why they invented it is so they could test batches of beer. Well, they didn't have thousands of batches of beer to test; they really had a couple of batches they could do at a time. So the t-test was invented to help them with things like that. And the same principles apply with B2B, right? You don't always have thousands of reps, so you have to use tests that are applicable to the scenarios that you're in and that you're able to get high degrees of confidence in, or at least are directional enough for you to be able to make decisions and advance your business strategy.
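(Here's a minimal sketch of the kind of small-sample comparison mentioned above, using scipy's Fisher's exact test on a 2x2 table. The counts are hypothetical placeholders.)

```python
# Hypothetical example: Fisher's exact test for a small-sample A-B comparison,
# where a large-sample test would be underpowered or invalid.
from scipy.stats import fisher_exact

# 2x2 contingency table: rows are variants, columns are converted / not converted
table = [
    [8, 42],   # control: 8 of 50 converted
    [15, 35],  # treatment: 15 of 50 converted
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```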
I think one thing to remember from a business strategy perspective is, you know, we're not writing research papers here.
So we're trying to grow businesses and we're trying to advance that business culture. So being able to take some liberties with that, or at least have something
that's directional for you to react to is much better than just kind of flying blind or going
by your gut. Yeah, no, this is so, so interesting to learn from you. Daniel, I want to step back
from the details of Policy Genius a little bit and talk about, as funny as it sounds, your job
title. So head of data is a leadership role that sort of implies coverage of a lot of different
areas of data.
And this is a discussion we had earlier this week, actually, just around, you know, the concept of leadership in data, which hasn't been around that long.
I mean, to some extent, it's been around, like you think about IT leadership, you know,
sort of the history of technology and all of those sorts of things. But in roles like you have in terms of head of data that have a broad scope, we're seeing more and more of that. And so I'm just interested in your opinion holding that title. What does that look like for you, number one? And then because you work in the field, how do you think that that is changing in the industry?
And how do you think that will look moving forward? Do you think we'll continue to see
more and more of that? And what do those roles look like inside of companies?
Yeah, that's fantastic. Simply put, the role of head of data can sometimes be a little bit ambiguous, because you're like, oh, what does that actually mean? Because data is like a thing. So yeah, but in terms of Policy Genius,
and actually like most data roles I've held,
the definition has consistently spanned
those three core areas that I mentioned earlier
in this conversation, data engineering,
data analytics and analysis,
and then data science and machine learning.
And I make those distinctions for purposes of
scoping roles and helping define people that are specialized in those fields and functions.
So I think the way we define data at Policy Genius is very similar to other places I've been at,
even if the role was called something kind of different, right? When I was at Plated,
my title was VP of data science. So even though my role was data science, I still oversaw data engineering, data analytics,
and then the data science itself. So really, it's less around the title itself and more around the
scope and breadth of the role. And there's real power and synergy to owning those three components
because really what you're doing is you're providing a service back to the company.
And the idea is to make the most use of that service as possible.
And by having those roles kind of under the same function and leader, working together,
you're able to develop efficiencies internally because they all kind of rely
on each other.
Data analysts work heavily with data engineers and they often need that data
engineering support.
And anybody I've ever talked to out in the industry and peer benchmarked with, they always
tell you one of our biggest struggles is getting the data structured in the fashion that we
need.
And so having a data engineering team outside of data with different priorities isn't always
helpful.
Yep.
So those internal efficiencies are really what we strive for by kind of embedding these
functions together.
And going forward, you know, it's hard to tell.
I mean, like the way, the way that our team is structured is not the same across every
company.
Some companies that you look at, they have a more functional model where data science
kind of reports into like a marketing leader, somebody like that, or a finance leader.
And then data engineering maybe doesn't exist or is classified as engineering.
Data analytics might be BI or something of that nature.
So there is some decentralization that happens.
And really what I think it's a factor of
is the scope and size of your company,
and its presence globally.
Yeah, those are kind of the factors
and features to think about.
And so what is the trend going to look like going forward?
I think only time will tell.
I do see specialized roles coming out, like people that are simply focused on AI and data science. But again, I think it's a factor and feature of the type of business that you're running and the industry that you're in that kind of defines what the scope of the role should be. I think there's power to having data centralized under one leader. It just helps provide a more cohesive and consistent vision,
but there could be arguments for other cases as well.
Sure, sure.
Yeah, I mean, I was thinking about this,
just sort of working on some ideas for blog posts.
And it's interesting to see different models, right?
You kind of have disaggregated teams that you mentioned, right?
So data science is going to report up to,
you know, marketing or another role, and then maybe data engineering is separate.
Sometimes you have them combined. I would say that's probably less common, but becoming more
common. But I was thinking about this concept and would be interested in your feedback on it.
So there's this, you see kind of a model many times of almost like a shared service center,
you know, where data functions have
internal customers. But one thing that I think is really interesting to think about, and it sounds
like a dynamic that you've created at Policy Genius is data, the data function really being
a strategic partner, not necessarily just a function that has internal customers, right?
Well, we need this, or we need that, or we need analytics, or we need a model, but more of saying, okay, we're trying to solve
a problem. How can data help us do that? Right. And so data becoming a strategic partner,
as opposed to just sort of a business function that serves other functions.
Yeah, that's 100% right. Buckle your seatbelts, folks. Hot take coming. Yeah. Having a shared service center
and having data folks basically be button pushers is not really an efficient use of their skills,
nor does it motivate them. So I'm familiar with the model you talked about. I've seen that model
implemented and I've even worked in organizations where that's implemented. I don't think it's as
effective because what you're relying on, if you think about it,
you're relying on your business stakeholders whose responsibility is overseeing a department
or function where they have specialty in trying to come up and brainstorm a solution that they
then pass off to have somebody implement. So what you end up with is basically a suboptimal solution in the sense of you didn't
have people work on problems with their strengths at play. So by partnering with data folks and data
science strategically and allowing them to identify opportunities and help contribute to
solutions that provide paths forward, it's going to cause you to think differently and approach
problems in a way that you probably would not have thought of before, because you're not a data scientist,
you're not a data analyst, and hence vice versa. It also goes the reverse way. Data scientists are
not marketers often. Sometimes they are, but not always. And so they don't always think in terms
of the business or the marketing perspective or the end user perspective. And so having those
collaboration points ultimately creates a better outcome than simply passing something off for people to implement.
And so I kind of mentioned it a little bit at the beginning, you hit it right on the head.
We go with a model here. I mean, I've preached this model in conferences and places around the
world because it's a great model. I've used it many times. It's called structured embedding.
And so what I mean by that is, yes, we have a centralized data team, but we don't just take
requests and farm out resources. We actually embed our resources into
different product teams and business functions. And the idea is that they're on the ground,
they gain and learn the business context, and then they're able to contribute in ways that
are more beneficial to the company than simply taking orders and executing work.
Very interesting. Structured embedding. That's the first time I've heard that term.
But yeah, I totally agree.
That's something that we see.
We have the privilege of just talking to so many different companies.
And I think that's a trend that we'll see increasing significantly in coming years.
Yeah, thanks for your perspective on that.
Well, I know we're closing in on the hour here. So why don't we jump over to
the technologies that you use in your data stack? And I'm really interested in your perspective on
this because you have a wide purview over all of the different data functions. So we'd just love
to know, just kind of run us through the high level, you know, what do you use to sort of
collect and process data, store it, and then the various ways
you pipe it to all the different, you know, places and teams that need it.
Yeah, definitely.
Yep.
No trade secrets here. I mean, on every job post we have, we put the technology that we work with,
but at a high level, we're a GCP shop.
So we work in Google Cloud.
That's our main cloud provider. And then we have
various tools that we work with to help us capture, move, process, and transform data,
kind of events and whatnot that happen on the site. We have event streaming that happens with
providers like Segment. We then, on the backend, utilize Airflow to help us with our ETL between our databases.
So pretty common stack that a lot of people see across data architecture and kind of ETL and processing and moving data.
We also use Airflow to do our ELT processes internally in our data warehouse.
So everything kind of gets piped and centralized within our data warehouse.
Our data warehouse is like the hub and kind of like the center of consolidation
for everything that happens around the company.
And then we're going to connect that to various,
you know, reporting BI,
as well as modeling tools to help us do more advanced and sophisticated things with that data, as well as make decisions.
So yeah, I mean, in a nutshell, at a super high level,
that's really what the process is.
Super simple.
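(As an illustration of the Airflow-driven pattern Daniel describes, here's a minimal sketch of a daily ELT DAG. The DAG ID, task names, and scripts are hypothetical; this is not Policygenius's actual pipeline code.)

```python
# Hypothetical example: a daily Airflow DAG that lands raw event data in the
# warehouse, then runs transformations inside the warehouse (the ELT pattern).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_elt",  # placeholder name
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Extract/Load: pull raw events into the warehouse
    load_raw_events = BashOperator(
        task_id="load_raw_events",
        bash_command="python load_events.py --date {{ ds }}",  # placeholder script
    )

    # Transform: build modeled tables inside the warehouse
    build_models = BashOperator(
        task_id="build_warehouse_models",
        bash_command="python run_transforms.py --date {{ ds }}",  # placeholder script
    )

    load_raw_events >> build_models
```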
Yeah, yeah.
Question on the data side of things. Are there any sort of unique challenges you face with the type or
structure of data that you deal with at Policy Genius? You know, so is there some sort of data
format? And there may not be. We just like to ask because we find out interesting things. But
some sort of data format, you know, related to dealing with insurance information applications
or other components like that, that sort of presents a particular challenge or a unique
requirement around moving the data or processing it?
I would say, I don't think there's anything that jumps out at the top of my mind. I mean,
underneath the hood, every now and then we have some JSON strings and whatnot to parse.
So there are things like that we run into.
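(A minimal sketch of the kind of JSON-string parsing mentioned here: flattening a nested payload into warehouse-friendly columns. The payload and field names are hypothetical.)

```python
# Hypothetical example: parse a JSON string stored in a column and flatten the
# nested fields into a flat, warehouse-friendly record.
import json

raw = '{"policy_type": "term_life", "quote": {"carrier": "acme", "monthly": 32.5}}'

record = json.loads(raw)
flat = {
    "policy_type": record["policy_type"],
    "quote_carrier": record["quote"]["carrier"],
    "quote_monthly": record["quote"]["monthly"],
}
print(flat)  # {'policy_type': 'term_life', 'quote_carrier': 'acme', 'quote_monthly': 32.5}
```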
But as far as the industry itself and the types of things that happen, our data is relatively structured.
And so we don't really run into a scenario a lot where we have to work with a lot of unstructured data.
There are small pockets of use cases here and there.
But for the most part, everything that's collected on the site,
et cetera, is pretty well structured.
So nothing really unique to the insurance space, or to us, in terms of our data structures.
Yeah, very cool.
So Daniel, last question before we conclude our conversation, although I think we have many more questions to ask.
Maybe we should arrange
another episode with you. What's next? I mean, what fascinating projects do you have internally, and what are you really looking forward to in terms of either technologies out there that you're going to use, or, I don't know, even organizational changes? Share something that excites you about the future inside Policy Genius in terms of
the technologies and anything related to data. Well, I love doing shows like this, because it also gives me an opportunity to recruit. If you're out there listening, we're continuing to grow and scale the team, so we're looking for data engineers, data analysts, data scientists. So, you know, we grew aggressively last year and we're growing aggressively again
this year. And it's really a testament to the value that we brought the company and the value that people have seen in being able to use data to help drive strategy and help drive product and
help drive what the consumer sees. And so we want to do more and more of that and get better and
faster at it. So actually what's exciting is, you know, we're kind of year one into having the data team structured and having brought
many resources. And so year one is kind of like foundational: people are learning and trying to
understand the business and data, et cetera. And so now we're getting into that stage where
we have people here for a decent amount of time and they're
starting to think a little more creatively. They're starting to think a little more generatively.
And they're starting to take bigger roles on their embedded product teams to help drive product
strategy forward and provide those insights that are needed to help advance our business.
So that's what's really exciting about what's coming up. So yeah, again, if you're out there,
we're hiring. Go to our site, look at our roles, and please come through the process because there's a lot of exciting things on the horizon.
But we're also, you know,
so analytics is definitely a place we're always investing.
You know, the faster we can make decisions and do research and help drive product strategy,
the better off we're going to be.
Obviously, like we're trying to accelerate
our machine learning velocity.
It's something we dabbled in last year
and definitely is something that we're continuing
to dabble in this year and expand and accelerate. So yeah, those are some of the cool,
exciting things that we're looking at from a high-level perspective.
That's amazing. To be honest, Daniel, after our conversation, I would definitely consider
applying for a job there and working with you. Thank you so much.
Especially looking for data engineers.
They're a little hard to find.
So if you know any, send them through.
Thank you so much.
And I'm personally really looking forward to chatting again and learning more from you.
Appreciate it.
Thanks for being on the show, Daniel.
And we'll catch up with you again,
maybe in another three or six months
and have you back on the show to tell us
what new things you're working on.
Cool, thanks.
Well, that was a really interesting episode.
I think my big takeaway was the concept
of structured embedding.
I have not heard that term before.
I mean, I'm sure it's been around,
but really fascinating to hear about
sort of the strategic placement of people in the data function in various parts of
the organization. I love it. And I think we're going to see more and more of that and hopefully
hear more and more of that term. Kostas, what was your big takeaway? Yeah, I agree with you. I think
that as in the past we were hearing the motto that every company is a technology company,
in the future, we will say that every company is a data company.
And as this happens, I think we will see very interesting restructuring and structures around how data works inside the organization and how this affects the structure of the organization.
So that's super interesting. I really enjoyed chatting with Daniel mainly because he has a
very unique and amazing overview around all the different functions that are related to data because of his role as head
of data, right? So he has a very good understanding of data engineering, data science, data analytics,
and how all these different things around data work together to provide value to the company.
And of course, to the customers.
Super interesting for me that data analytics can also work in B2B,
where the typical problem is, we always say we don't have enough data,
so why do A-B testing?
But there are solutions there from what we heard from Daniel.
And to be honest, we have many, many more questions to ask him.
So hopefully we will have the opportunity again in the near future.
I agree.
All right.
Well, subscribe on your favorite network to get shows weekly.
And we will catch you next time.