Drill to Detail - Drill to Detail Ep.96 'Omni's Mission to Answer the First 100 Questions' with Special Guest Colin Zima
Episode Date: April 26, 2022
Mark Rittman is joined in this very special episode by Colin Zima, ex-Head of Analytics at Looker and now co-founder of Omni, a new analytics startup looking to become the fastest way to ask the first... 100 questions.
Introducing Omni: The fastest way to turn SQL into intelligence
Omni Analytics Homepage
Transcript
Hello and welcome to another episode of Drill to Detail, and I'm your host, Mark Rittman.
So today I'm very pleased to be joined by returning guest Colin Zima, who first came on the show back in 2019.
Colin, it's great to have you back on our podcast.
Thanks for having me.
So Colin, for anybody who doesn't know you, maybe just kind of introduce yourself and the role you're doing now, but also how I know you from before, really.
Yeah, sure. I'll do my data life story. So about 10 years ago, I started a company that didn't really go anywhere. We ended
up selling it to a company called Hotel Tonight. Hotel Tonight was the fourth customer of a company
called Looker. And I grew very close to that team as a customer, and then eventually joined about a
year later as the 40th employee or so, and spent about the last eight years at Looker.
And at Looker, I started managing customer support and customer success.
I took over product as we built the product org.
I managed analytics for a portion of the time.
And I also spent a lot of time as sort of a pseudo field CTO.
So spending time talking with customers, trying to understand how people were using the Looker product. And in the last month,
or I guess it's been two months at this point, I stepped away from Looker and I'm starting to
build a new data company. It's called Omni.
We're going to get into a lot of detail about what Omni is in this conversation, but maybe just as a starting point: what is Omni, and what is the problem you're trying to solve, really?
Sure. So obviously kind of as one of the caretakers of Looker over the last almost 10 years,
a lot of our point of view and my point of view is really founded in a lot of the core
Looker concepts around in-database analytics and software as code, a lot of the kind of modern data stack trends that we see today. I think at the same time,
there's this constant tension that exists in the analytics world around either doing things
quickly and directly or building big systems that support lots of individuals and really
complex deployments. And I think at Looker, we really spent a lot of time thinking about the latter problem.
So how do we build a BI environment for a giant organization
where hundreds of developers can collaborate and be really effective?
At Omni, what we're trying to do is really straddle this sort of uncomfortable tension
between what tools like Power BI and Tableau do really well.
So single player, do analytics really quickly, a lot of just sort of directness, and the things
that Looker and business objects and sort of the mega monolithic tools do, which is support
governance in a diverse environment. What we want to do is try to straddle both sides. So can we
provide lots of direct value to users, and over time build up that governance as well?
Okay, so what I'm hearing from you, or what I see, is that there's often oscillation between one extreme and the other, really, where in the old days we had tools like an OLAP server that was very much standalone, and then it went into kind of enterprise BI.
Maybe just talk about your understanding of the history
of some of these ideas and kind of, I suppose,
where Looker fit into this, really.
Yep. So at Looker, we always talked about sort of these waves of BI. And I'm sure, I think Gartner
has a different version of the waves. I'm pretty sure every vendor has some version of the waves
of BI. But essentially, I think to your point, the market tends to oscillate from these centralized products, and the sort of 10-year period that they own, into decentralized products.
So the first wave that I think of, I guess, would be something like Excel or even like desktop analytics.
But the first true analytics products were things like MicroStrategy, BusinessObjects,
Cognos, OBIEE.
They were built for relatively sort of expensive data warehouses, and they were built to do really tight analytics
for reporting use cases. And they were very good at creating a reliable environment. They were a
little less good at things like change management and sort of letting many users participate in the
development of analytics. The backlash to a tool like that is often the exact
inverse. So you saw the rise of things like Tableau and Power BI, which are effectively
desktop software that graduated into the cloud. So giving a user a full stack in the browser or
on their desktop where they can load in data, do an analytic end-to-end, it may or may not agree
with their neighbor, but that's not really important because they can get to an answer really quickly. Looker was founded as a backlash to that movement.
We've got Tableau sort of injected into every company. As that grows over time and that
deployment becomes sort of unmanageable, there's an appeal for something like governance that
Looker had, which is the centralized model where everything is going through this tightly managed layer. And we think that we did it much better
in terms of how the collaboration works and the speed with which you could build that data model.
But again, it was very much a swing back to what BusinessObjects and MicroStrategy are sort of founded in. And I think when you look at a typical company, and to your point,
you mentioned that Gartner has talked about this idea of bimodal analytics. There are absolutely
needs for both of these things. There are questions that you want to answer in isolation quickly.
And there are reasons to build a large, reliable, governed environment that other people can depend
on. And so I think we're going to
constantly see these waves back and forth where it either feels too rigid and I want to sort of
blow the whole thing up and have glorified desktop software, or I've got all this sort of isolated business logic everywhere in my organization and I want to centralize it.
Okay. And do you think also, I think sometimes there's a danger that we live in a bit of a bubble
where we think that every business
is like the businesses that we work with day to day.
They're sort of startups. They are, you know, staffed by very engineering-focused analysts and so on.
Is it the case that Looker
and this approach to doing things
with analytics engineers
is only really part of the story? There's a bigger market out there of people who aren't necessarily happy to work within, maybe, a code IDE and so on. I mean, are there different user personas that we often don't consider when we just think about tools like Looker?
At Looker, this was actually something that we had to unlearn as an organization, which is there was so much focus on the technical user building the environment.
It sometimes was forgotten that 95% of the users had no idea what LookML was or kind of what the governing layer was for the application.
So I really like to think of the market as serving all of these users in different layers.
I'm a huge believer in the idea of technical people creating a service environment for other people in the organization.
And that doesn't mean necessarily sort of like a ticketing system for the organization.
But rather, and I think this was a little bit more in vogue even in the last five years, this idea of kind of the citizen data analyst or something like that. And I think the more pragmatic way to view the world is that there are people with different levels of technical ability, and there are people that want to work in code-based environments.
And you need to serve the people that don't want to work in a code-based environment, that want to just answer their question and not care about the application that they're using.
You just want to get out of their way and give them what they need and have them go along with their day.
And so I think that a great tool can be technical, but to be technical for everyone would be a mistake.
And we'll get into this later on, but you're not the first startup founder, or the first person, to think about solving this problem. I mean, you mentioned Gartner and their sort of bimodal idea; I think one of the first episodes of the podcast was talking about that, and the idea that ETL should not be something that you have to explicitly do, that it emerges out of what you do. And you mentioned things like OBIEE and other tools where you have, I suppose, the sort of Frankenstein version of a suite, where you've got a data discovery tool linked to a kind of more enterprise tool. So you're not the first to do it, but maybe let's go into what your take is on this, given that it's not a new problem, but it's probably not been solved properly yet.
What's your philosophy around this and how is Omni going to try and solve this?
Yeah, I mean, I think I'd also throw Microsoft in there. Companies that are building products naturally gravitate very hard towards one direction or the other.
So while every tool tries to bootstrap these two products together, like Tableau started with workbooks, and then they built prep and all these tools underneath it to try to build, quote unquote, centralization or governance or whatever it's going to be,
effectively underneath that product is a workbook product. And I think if you try to solve these problems underneath an architecture that you establish early, it gets very difficult to do
it well. And I look at sort of the Looker example, where it's this very governed platform, and you want to find ways
to inject in CSVs or sort of islands of data. And it just breaks the core permission model of the
system because it's not built to do that. And so I think when people have tried to solve this,
to your point, they try to Frankenstein together, like Microsoft, again, is a great example.
Power BI was just built and sort of latched onto Analysis Services. And it works probably better than anything else
out there. But there was no grand vision on how the products fit together in terms of how it was
architected. It's just like, there's an isolated problem, there's a centralization problem,
I'll build a product for each. And I'll sort of figure out threads between them. I think the different
point of view that we have is first the power of the data model in general. And we're going to talk
about sort of data models and metrics layers and things like that. But I do think that this idea of
business logic as code and a more programmatic attachment to SQL gives us a lot of the tools
that we need to solve these problems better.
I think the second one is that when people generally build workbooks and workbook style
approaches, they're intentionally building them in a very disconnected way.
And I've seen this in some of the younger startups that are sort of coming to market
now, where ultimately, the way that you push data into the product is that you write some piece of SQL,
or that you do something that cannot become scalable over time. And I think when you
approach the problem, wanting to straddle both sides from day one, we can build the right things
underneath the quote unquote workbooks to thread into a data model in a much more sort of pre-thought out
architected way that can actually support the workflows kind of from day one better.
And I just think that this idea of trying to latch things onto a system, it often becomes
very hard to compromise sort of the core tenets of the problem in a way that actually lets you
solve both sides well. So I still think the most common way that you see workbook style analytics
and centralized analytics is often in just two completely different products. So a company that
has Tableau and Looker at the same time. And I think that by architecting the system from day one,
we're going to be able to do much better at
scale if we can actually think about how those two worlds interact with each other.
Okay. So how would the user experience work? I mean, I appreciate the product is fairly new and you're just starting out, but how do you envisage the user experience to go with this? I mean,
would it be the same as going in and building, say, a Tableau workbook and you extract some data
and so on, and you build a data model in there, or do you envisage it being a different type of experience, really, that bears
in mind what you've just been talking about? Yeah. And again, with the caveat that absolutely
everything about what we're doing could change since the company's only a couple of months old
at this point, I think going back to sort of the previous answer, I think it's really important
that when we're thinking about doing things in an iterative,
direct way, we're thinking about how does that become scalable over time. So I think there are
a few core tenets that we care really deeply about. So obviously, connecting to cloud data
warehouses is central to the thesis that we have about how people are building environments.
So this idea of operating in database is very core to everything that we're doing,
like with an asterisk, that we want to figure out acceleration, and we can talk about that later.
But I think the idea is, can we give someone the bones of a SQL runner?
So let people ask questions directly from their database.
But as they do that, compose the pieces of model that underlie the questions that are being asked. So a simple
example would be, I want to go query a table. I can go start writing queries to a table.
And effectively, that table is a view, which is the underlying component of a workbook.
So we can very easily put workbook concepts on top of that view. But we can make it much easier to add joins to that data set, to add subqueries and join them into that data set, to create fields. And the idea is
let users do that in a sandbox. So as if they're using a SQL runner or something like Tableau.
But the idea is that when that analysis wants to become production ready, rather than just saying, great, rework
everything that you've done and go write it into a data model, what we can do is we can say,
we've actually built a data model underneath the things that you're doing. And we can just hit a
button or go through a workflow to push that into a core data model that everyone else is using.
So the idea is really, can we give someone a workbook,
but in doing that analysis, do it with the types of tools and the types of underlying structures that a governed data model would use so that I can essentially flip a switch and go from a workbook
that's isolated to sitting inside the governed model very seamlessly. And again, a lot of hand-waving
because we need to figure out how to do this well. But that's the core concept of what we want to create.
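To make that workbook-to-model flow concrete, here is a minimal SQL sketch of the kind of promotion Colin is describing. The table names (orders, users) and the promotion step are purely illustrative; Omni's actual mechanics hadn't been settled at the time of recording.

```sql
-- Ad-hoc exploration, as if in a SQL runner: join two tables and aggregate.
-- (Hypothetical tables: orders, users.)
SELECT
    u.region,
    COUNT(*)      AS order_count,
    SUM(o.amount) AS total_revenue
FROM orders AS o
JOIN users  AS u ON u.id = o.user_id
WHERE o.created_at >= DATE '2022-01-01'
GROUP BY u.region;

-- "Promoting" that analysis could mean capturing the reusable join as a
-- shared view, so a governed model (and everyone else) can build on it:
CREATE VIEW orders_enriched AS
SELECT
    o.id,
    o.amount,
    o.created_at,
    u.region
FROM orders AS o
JOIN users  AS u ON u.id = o.user_id;
```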
Do you think people would be explicitly thinking about building a data model as they're
doing this? Is it something that just happens in the background? I'm trying to think how much might you use the data itself to infer some of these things. What are your thoughts on that?
Yeah. And I think it's going to vary widely by stage of company. So my envisioned user experience
when we get going is really a young company with maybe a much more dynamic underlying data schema.
So the business is changing underneath the data model. And really the goal is just answering
questions and working directly. So in that case, you're not really thinking about data modeling at all.
You're just saying, what do DAUs look like today?
How does the funnel of my user on my website look?
Like literally just answering questions
a lot like a SQL runner would.
I do think that as the company matures,
the rigidity of that business logic also matures.
So you buy Salesforce and you create a pipeline that has stages that are
well-defined that the business relies on. That's when you take the business logic out of that
funnel that you built on a one-off and you actually push it down into something that looks like a core
data model. And the idea is just when you have things that are known that you want to lock up and make governed,
you can go do that. And you can start using the data model. And we can even go persist that into the database as a materialized view, potentially. But when you're not, you're not thinking about
data modeling. So a lot of the core concept is sort of removing the burden of thinking about
where something goes, because you can make it less
risky to do both behaviors. If you want to operate in like quote unquote dev mode and in a sandbox
and be sloppy, we have an environment to do that, but you can do it in a way that it doesn't
immediately get thrown away.
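As a rough illustration of that point about persisting hardened business logic into the database, here is a hedged sketch; the table and view names are hypothetical, and materialized-view syntax varies by warehouse (this is Postgres-style).

```sql
-- Once a one-off funnel definition becomes well-defined and governed, it can
-- be pushed down and persisted in the warehouse as a materialized view.
-- (Hypothetical source table: salesforce_opportunity_history.)
CREATE MATERIALIZED VIEW pipeline_stages AS
SELECT
    opportunity_id,
    stage,
    MIN(changed_at) AS entered_stage_at
FROM salesforce_opportunity_history
GROUP BY opportunity_id, stage;
```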
Okay. So I suppose other product companies have tried to do this as well. And two examples come to mind for me; both of them have actually been guests on the show.
We had ThoughtSpot on a while ago.
And they had, again, maybe it was a bit hand-wavy, but it's certainly a product that's been successful, trying to use, I suppose, almost like search thinking over the data: how do we treat data as a sort of search problem, using AI, and that sounds pretty hand-wavy and so on. So there's that approach. There's another approach, which was, we had Future Model on a while ago, who were looking at actually using Looker to allow the users, maybe in a declarative way, to define some of the domains that they work with and so on.
So I don't know if you've had experience on those,
but taking search first of all,
how much do you think that BI can be a search problem? Certainly some of the thinking applies to what you're doing.
I certainly think that search offers a lot of power
over the type of analysis that people want to do.
So like the example I
love to cite for where search based BI really excels is if you're searching for a value in the
database, for example, so I know I want to filter over region, but I don't know that it's called
region. But I know that I have a region called the Northeast, so I can just go type Northeast in and reverse index a field. Those are the types of things where I think search really excels. I think the inverse,
and the example I always love to give is that if you're doing airline search, for example,
when you do free text airline search, so when people try to get you to book through like a
chat app or something like that, it's an incredibly frustrating experience because you want structure to be able to query. The idea of where are you
going to, where are you coming from, and what time do you want to leave are sort of well-known
field inputs that you just want to use. And so I think the idea of, like, how do we use search and discovery to find fields and to find values makes a lot of sense.
But this idea that you can sort of ignore creating business logic and that search will
magically solve your problems, I think is more of a hopeful concept than it is a reality.
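For concreteness, a small sketch of the value-search idea Colin describes, with hypothetical table and index names: the naive approach probes every candidate column, while a search-oriented tool can consult a pre-built reverse index of values.

```sql
-- Naive: probe each text column in turn to find where 'Northeast' lives.
-- (Hypothetical table: customers.)
(SELECT 'region' AS matched_column FROM customers WHERE region = 'Northeast' LIMIT 1)
UNION ALL
(SELECT 'territory' FROM customers WHERE territory = 'Northeast' LIMIT 1);

-- Search-based: a pre-built reverse index maps values to columns, so the
-- lookup is a single probe. (Hypothetical index table: value_index.)
SELECT table_name, column_name
FROM value_index
WHERE value = 'Northeast';
```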
Yeah.
Yeah.
I've seen search work in some... So you mentioned OBIEE a while ago, and that certainly had a component in there called the BI Server, which applied search thinking over the data, but actually, first of all, structured the data in the form of a dimensional model.
So that you had to clearly define concepts of measures and dimensions there and so on.
So that was one way I saw that working, really.
But another thing, I suppose, that's really the elephant in the room here is the metrics layer, and the role of the analytics engineer. I mean, to my mind, what you're describing is about as different as it could be to the way we currently do things and think about things, from the perspective of an analytics engineer and a very formally defined metrics layer.
Let's start with that, first of all. So what's your take on the idea
of a metrics layer at the moment?
I mean, I think the appeal of the metrics layer is one of sort of extreme governance. So if you think about sort of the original version of a metrics layer as an OLAP cube, which is, I literally define a physical data schema that can answer questions X, Y, Z. That is almost the most naive
version of a metrics layer. And I think Looker is viewed as a sort of a graduation. And obviously,
Looker was not the first product to do this. But any centralized product is sort of the next
abstraction, which is, rather than materializing it, I will write it, but I will virtualize that model layer. And I think there are very good reasons to have metrics layers. The challenge
with the metrics layer as sort of pitched, I think, today in today's environment with this
isolated metrics layer is it also sort of implies ubiquity of tools that it connects to in terms of ability to consume
the metric. So like the inverted example, going back to the materialization example,
if you're writing a table in dbt, the API in and out is SQL. So when I create a metric in dbt,
I know that every tool that sits above me can consume that metric, because I physically defined it in a database, and I know that things above me consume the database. I think the challenge for a metrics layer that's decoupled from the consumption layer is that you're now reliant on every consumption layer to properly take advantage of the metrics that you're creating. And so I would argue the reason that
Looker was able to be so successful with the metrics layer was not the metrics layer in
isolation. It was rather the workflow for publishing data into an organization extremely
quickly and having structure for how that gets defined behind the scenes. So the metrics created
advantage for us because we didn't need
to materialize, for example. And to that end, metrics can be really powerful. But the real
power was actually the coupling of the front end with the metric. So the ability for someone to ask
a question, for me to instantaneously pseudo-model that data and publish those metrics out to the
organization, without this back and
forth of like, can you materialize this table for me? Or can you redo the ETL cycle? And that's the
real power of the metrics layer is reducing the feedback loop between question and answer.
So I think metrics will always have an important place moving forward in the BI stack of every
company. I think the question is going to be how they couple to different products in the ecosystem, including ours.
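To ground the contrast Colin draws here: a metric physically defined through dbt is just a SQL model file, and any downstream tool can consume it because the interface in and out is plain SQL. The model and column names below are hypothetical.

```sql
-- Hypothetical dbt model file: models/revenue_by_month.sql
-- dbt materializes this as a table or view in the warehouse, so the "API"
-- in and out is just SQL that any BI tool above it can query.
SELECT
    DATE_TRUNC('month', created_at) AS revenue_month,
    SUM(amount)                     AS total_revenue
FROM {{ ref('orders') }}  -- dbt resolves this reference to the orders table
GROUP BY 1
```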
Yeah, and I suppose also the practical thing of, I mean, we had Lightdash on here before, and I'm quite a fan of the product. I think as a dbt developer, I love the fact that I can define my BI tool metadata layer as part of the development work for my dbt project. But it then puts even more reliance on me as the dbt developer to make everything happen in the BI tool. I mean, again, beyond the world of people on the dbt Slack, how's that going to fly, do you think? Where's the flaw in that plan?
I mean, I think you're citing exactly the challenge, which is, if the metrics live
entirely in the dbt layer, now to define a metric, definitionally, you need to be a dbt user,
and you need to sort of live in that environment. And I do think that there's an elegance in
combining the two together. To your point, like, now they're in the same Git repo, you can probably test and deploy them in a very governed way. It does make
sense in, like, a highly service-oriented deployment model, which is, I have a small group of people that are going to create an environment and they're the only ones that can touch that environment, which honestly looks a lot like a circa-2000 BusinessObjects-style deployment.
The challenge is that a lot of Looker's superpower, for example, and a lot of what the modern data
stack I think opens up is actually the ability to publish more raw data out to your users.
So the idea is I put data into a database at low latency, and I don't have to think about
modeling that data. I just publish it
and I can create metadata around it and my users can consume it. And I can think about the modeling
as almost an exhaust from how consumption works. And so like in the Looker version of the world,
it was really, let's put raw data in someone's hands. Let's see how it gets consumed. And then let's
optimize data model underneath it. And I worry about the metrics layer inside something like dbt
as being too tightly coupled away from consumption. It's almost a publish mode rather than a sort of match to how users consume. And I think it could be bridged if tools are adopting dbt as, for example,
just, like, the hosting environment for a metric. Like, we could happily host our metrics
in dbt. But I think unless users outside of that ecosystem can really interface well,
with that metrics layer and make it agile enough, you're going to have these natural,
very slow feedback loops that are going to challenge publishing data effectively to an
organization.
Okay. But given there's some legitimate interest in the metrics layer, and there are some good things in there, albeit we need to think about how it's operationalized,
how do you think Omni might play in this space? What role do
you think Omni might have in relation to some part of the process of building or consuming
the metrics layer? Yeah. I think that we think about ourselves first and foremost as almost
the environment for creating metrics. And we're almost ambivalent to where the metric lives.
So I think, again, going back to sort of what is this
ideal workflow, I think the ideal workflow is publish data at low latency, make it consumable,
as it's consumed, build data model to support that consumption. And as that data model is consumed,
go materialize it and optimize it upstream. So creating this very fluid sort of process from raw to highly optimized with
a data model that sits sort of in between and straddles both sides. I think the question then
is like, okay, what is the data model? And I think that's where we need to learn what the ecosystem
wants and try to be able to adopt and adapt to what the environment is sort of doing. So I think
we could happily write dbt metrics, if that metric environment can support us effectively.
We could create our own, potentially; we could even create metrics that publish to LookML, if LookML serves as sort of the governance layer for an organization. But again, like,
I think our superpower is going to be
the creation of the business logic versus thinking about the metrics layer as purely
the consumption vehicle for metrics. But we're going to have to see if the ecosystem wants that.
Yeah. Okay. So I appreciate you're only two months into the business and it's early days and so on. So what kind of roadmap can we expect, or what problems are you trying to solve first? And I guess also, for whom? Who would be your ideal customer, ideal user? Where is this journey going to start, and for whom?
Yeah.
So the first product that we're working on is essentially a pivot table meets SQL runner experience.
So the idea is connect to a database at low latency, be able to write and visualize SQL,
but in doing so, create models for what that user is querying.
And so I think the ideal use case is young
companies with dynamic schemas that maybe see the value in modeling data, but don't really want to
do it in advance of that need. So kind of the aspirational Looker customer, the young Looker customer that's not quite ready. I think similarly, there are types of data sets that we will excel on. So I love to cite event data or log data as a great example, where it can be very difficult to model an event stream effectively. And often what you're forced to do is either programmatically model that data, under-model it, or essentially model it to query it. And I think those are the types of data sets
where this more fluid sort of SQL meets modeling meets pivot table experience is going to be really
productive for a user. So instantaneously create and rejoin a fact table, sort of filter over a
subquery, all the types of sort of little micro SQL hooks that a classic BI tool struggles with but a SQL runner is great at. I think those will be the types of experiences
that the early product will excel at. And then kind of over time, and I think you've mentioned
this a lot, but the BI space is sort of crowded and mature. And there's a lot of stuff that just needs to exist around permissioning and governance and sort of dashboarding, scheduling, all of that kind of stuff.
We know that we're going to need to build in over time as well.
But really, we're focused on this sort of interactive query experience to start. And kind of the tagline that I've got on my LinkedIn right now is the fastest first 100
queries to your database. And that's really what we're thinking about is not the first query,
and not the 10,000th query by a non-technical user, but letting someone who knows SQL and has
data in a database be really, really productive and share the results of those things instantaneously
with the person sitting next to them.
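A hedged example of the event-data pattern he describes: deriving a fact table from a raw event stream, rejoining it, and filtering over the subquery, the kind of micro SQL hook a classic BI tool makes awkward but a SQL runner handles directly. The schema (an events table with user_id, event_name, occurred_at) is hypothetical.

```sql
-- Derive a signup fact table from the event stream, rejoin it, and count
-- each user's events in their first week. (Postgres-style interval syntax.)
WITH signups AS (
    SELECT user_id, MIN(occurred_at) AS signed_up_at
    FROM events
    WHERE event_name = 'signup'
    GROUP BY user_id
)
SELECT
    e.event_name,
    COUNT(*) AS events_in_first_week
FROM events  AS e
JOIN signups AS s
  ON s.user_id = e.user_id
 AND e.occurred_at < s.signed_up_at + INTERVAL '7 days'
GROUP BY e.event_name
ORDER BY events_in_first_week DESC;
```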
So how big do you see this going? If you think about tools like, say, Hex, that arguably are, you know, knowingly a niche product, for example, or you look at other tools like, I don't know, Superset, which in theory could be used by a lot of people for a lot of things. I mean, how kind of niche versus mass-market is your vision on this, really?
Yeah, I don't think that we're ready to declare that we want to hit any data outside of the data warehouse right now.
So sort of companies that are mature enough to have a data warehouse, I think is going to be our wheelhouse kind of always.
But any company that has data in a data warehouse and kind of wants to be moving more quickly, I think is going to be a type of customer that we want to address.
So I think it's going to be pretty broad in terms of customers. Hopefully younger; I hate to say, kind of like every other startup, that we're starting with young tech companies, but that's probably the type of user that's going to have the most appeal to start. But the Fortune 10 is ultimately going to be our target market for core analytics. It just might be 15 years from now.
Okay. And I suppose in terms of the business that you've built and the people, I mean, it's
quite an all-star team, I think, you've managed to pull together. Maybe talk about some of those people.
And what's the business model you've chosen with this?
I mean, have you gone for the kind of open source route? Open core is a big kind of thing at the moment.
What's your thinking about how you're going to make money out of this and build a business
out of it?
Yeah.
I mean, I think we're kind of a little bit boomer and we're going for just the classic
SaaS model, user-based pricing. It's just tried and true. And I think it most closely aligns with
how value is created for the product. So I'm a big believer in aligning it with value creation.
In terms of pricing, I think we want to figure out ways for customers to scale
into the product more easily than with something like Looker that tends to be a lot more expensive
now. But probably not on the open source side for us. I shouldn't say probably not. We have no plans
to do anything on the open source side for now. In terms of the team, right now, we're a team of 10, and nine of us are technical, and I'm probably the only one who's not technical.
So we've got a lot of product design engineering talent right now.
A lot of the folks came from Looker.
Our CTO came from a company called Stitch, which is an ETL services company that you're probably familiar with.
So we have broad experience up and down the tech stack.
And it's meant that we can move pretty quickly because we have people that have worked on the model and sort of the API layers, the caching layers, the front end sort of component layer, and really just sort of trying
to undo some of the mistakes that we think we made in terms of architecture. And again, it's like,
it's always hard to call something a mistake, because I think we were making the right decisions
at the right time. But we just know things differently now. So we can re-architect from scratch to sort of work around the problems that we experienced before.
This must have been your kind of, not dream, but certainly it feels like your life's mission, in a way. If you're like me, you've been thinking about this sort of thing for a long
time. I mean, is this something that's been on your mind for a while? This is a big thing for you really, isn't it?
I mean, like, yes. In working on Looker, we knew the BI landscape is just massive, and there are always so many problems that you can't quite get to. And, like, I cited the innovator's dilemma in our blog post, but it's just, I don't want to say we're victims of our own success. But success means that you need to support those customers and try to make their experience great. And I think that we did a pretty good job of that for most of the life of Looker.
And people generally appreciated the product. But it is nice to sort of not have any customers and be able to build in a very opinionated way.
Like at Looker, we couldn't just rip out the data model and say, like, great, here's a little side project that is entirely non-governed, because that was not our brand promise.
So we get to reset that and work on some of the problems that maybe were just outside
the wheelhouse before and now can be more directly focused on.
Fantastic.
Okay.
So again, appreciating this is early days,
but how do people get to find out more about the product?
How do they get involved?
How do they maybe sort of speak to the team
and find out whether this is a fit for them really?
So, I mean, certainly send me an email.
I'm colin@exploreomni.com.
We've got a little button on our website,
exploreomni.com, where you can just send us an email.
We're probably going to start
with some early design partners around June,
sort of using the product.
If there are folks that want to be involved in that
and sort of help us build
and get a lot of free analytical help from me,
please reach out and come talk to us.
But I feel like we're probably going to be open
in the market kind of tail end of the year when
we feel a little bit more comfortable around visualization and just sort of the surface area
of the product being sufficient. But I think people will start touching the product in probably June. We have a very rough internal version right now, but we're still working on it.
Fantastic. Well, Colin, it's been fantastic speaking to you. Really exciting what you're getting involved in, and I really wish the best for you. Thank you very much for coming on the show. If anyone's interested, I'll put the details in the show notes as well. But good luck to you, and I'm really excited to see the way this turns out. So thank you very much, Colin.
Can't wait to share it. Thanks for having me.