Drill to Detail - Drill to Detail Ep.95 'Mozart Data and solving the Modern Data Stack Skills Shortage' with Special Guest Peter Fishman

Episode Date: April 7, 2022

Mark Rittman is joined in this episode by Peter Fishman to talk about Mozart Data, the modern data stack and the need to automate the role of the analytics engineer.

Mozart Data website
Data Bites: A Virtual Lunch and Learn with Powered by Fivetran, Snowflake and Mozart Data
Launch HN: Mozart Data (YC S20) – One-stop shop for a modern data pipeline

Transcript
Starting point is 00:00:00 So welcome to the Drill to Detail podcast. I'm your host, Mark Rittman. So I'm very pleased to be joined today by analytics industry veteran and current co-founder and CEO of Mozart Data, Peter Fishman, or Fish, is that right? That's right. Excellent. Well, Fish, it's great to have you on the show. Why don't you just introduce yourself, first of all, and just tell us who you are and what you currently do, and then we can talk about your route into the industry after that. Well, great. So like you mentioned, I'm Pete Fishman. I go by Fish.
Starting point is 00:00:51 And I'm, like you said, a sort of an industry veteran. I've been setting up and building data stacks for the past few decades and been part of a really incredible run in the analytics space, again, over the last few decades. And then just watching that unfold and being able to be part of the analytics journey at a number of sort of mostly late stage startups. Today, I'm working as the co-founder of Mozart Data. So Mozart Data is a tool I'm sure we'll dive into, but we call it the easiest way to spin up the modern data stack. So another topic I'm sure your audience is very familiar with. And my role there is to largely lead many elements of the company,
Starting point is 00:01:44 but especially work with the go-to-market team as well as product engineering. Okay. So tell us a bit about your journey then into this role. I mean, you've got a very interesting CV and a very interesting sort of, I suppose, backstory really, but tell us about how you got into it and what led to the role you're doing currently at Mozart Data? Like a lot of data scientists, I'm effectively a failed academic. So I did a PhD in behavioral economics and mostly really the applied version of it. So taking what were considered large data sets and trying to find behavioral anomalies in them from an economics perspective. And after grad school, I really fell in love with the Bay Area and wanted to stay. And just sort of by accident, the way to work in the Bay Area is to get a job in tech.
Starting point is 00:02:37 At the time, there weren't that many companies, mostly larger companies, that would hire people to do business intelligence, take their statistics background, and apply it. Now, today, every company is desperately trying to hire great applied statisticians. But at the time, it was a little bit more of a niche role. I found my way into the business intelligence world via the gaming industry. So when social games on Facebook were really exploding in popularity and you started to see hundreds of millions of users on games, that's when companies really started hiring folks like myself. And I got a role at a gaming company called Playdom, where I learned a lot about sort of, you know, applying analytics in the gaming space, and ultimately the consumer space, really understanding kind of the real key concepts of, you know, measuring CAC and LTV. But then I had my big sort of career moment
Starting point is 00:03:47 when Playdom was acquired. I was at Disney for a brief while and ultimately jumped to a startup called Yammer. Yammer was in the B2B space, building out an enterprise social network, and they wanted to apply the same type of consumer development thinking in the B2B world, which was pretty novel at the time. I was pretty excited about that challenge and had the opportunity to build out a data team. And that meant data infrastructure as well, and since then it's really been about building teams and sort of having an impact at these sort of later stage startups, often in the B2B space, but also in B2C. Okay. So Yammer is an interesting place that you've worked at. And it seems like quite a lot happened there.
Starting point is 00:04:39 I mean, you mentioned the work you did there as well, but didn't Mode spin out of there at some point? Yeah. So at Yammer, we built a tool called Avocado. So the team really came together, um, well, what's now over 10 years ago, at a time when you did build, not buy. So, you know, a lot of sort of the great tools of today are descendants of tools that essentially companies in totally adjacent spaces, sorry, totally unrelated spaces, built for themselves. Think about all the tools that have come out of, say, Airbnb, which is certainly, you know, not on paper a data company.
Starting point is 00:05:26 But what you saw was the need and the value that these companies were getting out of data created a sort of demand for building data infrastructure, and specifically data infrastructure that followed the flow of how those teams worked. So you mentioned Mode, which is, yes, in fact, a descendant of this tool called Avocado. The three founders of Mode were on the analytics team at Yammer. If you think about sort of the pieces of data that come before a data tool like Mode, so before your visualization, before your sort of ad hoc querying, before essentially the queries that are going to inform upon your business,
Starting point is 00:06:13 all of those parts that came before it, we also worked on and built as part of this tool called Avocado. And today that tooling looks like Mozart Data. So, you know, there have been a few iterations since then, but sort of that inspiration comes from certainly the way that we worked at Yammer, which was how we were able to have an impact on that company. And then ultimately, Yammer was a company that got acquired by Microsoft, where I spent three years, you know, trying to apply the same thinking towards some of the Microsoft products. Okay. So if we sort of fast forward a bit to, I suppose, the kind of the market now and the ecosystem that you and I work in.
Starting point is 00:06:57 So the modern data stack. Okay. So just for the benefit of anybody who hasn't heard that phrase or maybe could benefit from a definition, what is the modern data stack, and what is the market like at the moment with these tools? Sure, so it's the sort of fragmented set of data tools and data infrastructure that makes sense given kind of where, you know, essentially the state of data tools is today. So, you know, a decade ago or even further back, it made a lot of sense to build out your data infrastructure, often by hiring your own engineers to sort of build to spec kind of the way your team was working. The modern data stack is about, there's enough commonality between the needs of so many of these data teams that I think, you know, what's become called the modern data stack is leveraging the power
Starting point is 00:08:05 of cloud data warehouses, and then essentially all of the tools that surround it. So explicitly, the modern data stack is EL tools, so tools that essentially, you know, bring data to that cloud data warehouse, and then a big amount of T, so a big amount of transformation. So taking the data in the data warehouse and making it consumable downstream in the BI tools and applications that consume data. So largely speaking, the modern data stack is a fancy way of saying ELT and data warehousing. Okay. Okay. So I suppose the other kind of distinguishing thing about the modern data stack, and I suppose the market we're in now, is I suppose the demand it's
Starting point is 00:08:58 putting on people who have the skills to work with these tools. So as a person running a consultancy, one of the challenges we have is that, you know, we just can't get good people, or we can get good people but we can't get enough of them, really. And I suppose the kind of ideal unicorn analytics engineer, someone who knows the E, the L and the T end to end, someone who can actually put it all together and make something happen for a customer, is pretty rare. And do you think, I mean, are you experiencing this as well? And are you finding that this skills gap
Starting point is 00:09:35 and this reliance on quite rare people is kind of holding back the modern data stack from actually going beyond maybe the early adopters? Well, I'd almost challenge you. I think the exact opposite is happening. The modern data stack is developing as a function of the shortage. Actually, you know, I think you hit the nail on the head, which is these companies are getting so much value and competitive advantage out of the ability to consume data.
Starting point is 00:10:06 And what you find is, you know, it's a little bit ambiguous about the causality, but I would say that all of the very, very strong companies are very good at consuming data. Now, again, that's probably not a causal inference, but I would say that what sort of aspirational companies see is that all these other companies have this data moat or data advantage, and then they want that. And in order to get that, you don't just need to hire folks that can, quote, do data. You have to hire, like you said, really skilled practitioners that have often a variety of skills. And the problem with that is that there's only a handful of those folks, those exact unicorns. So what typically happens is you figure out a way to divide and conquer.
Starting point is 00:10:59 You figure out a way to have skill specialization and to play catch up. And as a result, you know, what a data team typically looks like is, you know, a data engineer, maybe a data scientist, maybe a few data analysts or business analysts that are, you know, maybe sitting centrally or decentrally. And you have this sort of skill set specialization and then a collection of these specialized skill sets that make up a data or an analytics team. And, you know, I think ultimately the modern data stack is about sort of a technology solution to what happens when there just aren't enough people to do that thing. So as a sort of a bad comparison, if you really wanted to get transported around and have everybody move between places very easily at their whim, what you would want is, you know, effectively for Ubers or rideshares to be incredibly inexpensive.
Starting point is 00:12:13 And, you know, they're not. So, you know, sometimes I do consider, okay, do I want to spend the money to go cross town in this Uber? And the solution that people are driving to is to automate some of that. So, you know, about seven years ago, I worked at a company called Zenefits. And Zenefits was a very, very rapidly growing B2B company. And, you know, we had a lot of data. We had lots and lots of companies using the platform. And in order to sort of take advantage of that data and better understand the customer and understand our business motions... well, you know, we had a huge sales team. I mean, it was approaching a thousand people. And that sales team would record tons of data and events and, you know, relevant information in Salesforce. They were really comfortable working out of Salesforce. Now, one thing that I could have done was hire some engineers to, you know, essentially take the data from Salesforce to Redshift, which was, you know, our data warehouse at the time. Instead, what I did was I worked really closely with George and Taylor at Fivetran, who decided essentially to build this effectively as a service. You know, I felt like George and Taylor, who were incredible at engineering
Starting point is 00:13:42 this solution, could do this for us. And then they could do it for many companies. So hiring George would have been a very, very expensive proposition. Hiring Fivetran as a service was what we thought of as an incredibly efficient deal. It was a great opportunity for us to take advantage of our data in Salesforce. And then many companies have sort of subsequently taken advantage of Fivetran's Salesforce connector. I was actually the first consumer of Fivetran's Salesforce connector. Okay. Okay. So, I suppose, again, moving forward to now, you've got these companies like Fivetran and, you know, dbt Labs with dbt and so on there. You've got all these products out there. And you've got, I suppose, still the demand for kind of analytics engineers and so on. So where does Mozart Data fit into this? What's your take on this? And what problem are you solving in this market then, really? Yeah, so I think for a lot of your listeners, they're probably very familiar with what you might call the best in slice or the best in breed tooling of the stack. And when you actually kind
Starting point is 00:14:57 of lay out the picture of the modern data stack that exists in sort of your listeners' minds, it's like a NASCAR page that would be really daunting to look at if you were a novice. And beyond that, it's not so clear to me that we want to, you know, start by hooking up five to 12 tools in order just to get sort of the basics of what we need going. So if you think about what most small companies' analytics team is, it's, you know, a non-data-oriented person, somebody not in a data role using Excel. So that is, by probably a couple of orders of magnitude, the biggest version of an analytics team that we have. So what Mozart kind of sees is that there's this opportunity, you know, to improve upon that, which is to say there's sort of a real appetite
Starting point is 00:16:07 in our eyes for these data-savvy consumers of data that maybe don't go extremely deep into essentially the nuances of the modern data stack, the ability to spin those folks up really quickly without needing to specialize and hire data engineers to do it. So we think of the opportunity as sort of delaying the data engineering hire and getting companies to start to use better data practices earlier on by hiring data-savvy folks, as opposed to trying to hire a world-class data engineer, who is incredibly expensive and incredibly hard to find. Even if you had the appetite and the budget to spend what it costs to hire these folks, doing so is a very non-trivial endeavor.
Starting point is 00:16:59 Okay. So paint a picture then of what the product is and how it works. So it sounds like what you're saying there is it fills in the gaps between steps in the process and it substitutes for, kind of, a trained data engineer. But what does it look like in terms of the user experience, and I suppose what does it do when you kind of use it for the first time? Paint a picture of how it looks and what it does. Sure. So, I mean, it looks like a dev tool, so it doesn't look like magic. It looks like a tool that you would use to essentially organize your data. So in the same way that you think about understanding your data, consuming your data, it looks like a tool that has effectively many lists and snapshots and views into your data set. So that can be, you know, the first few rows of tables, that can be table organization, that can be a description, but ultimately what the tool looks like is effectively a few tabs: an extract and load tab, so we have hundreds of connectors. So most people are using a standard set of SaaS tools and databases.
Starting point is 00:18:14 We have hundreds of connectors under the hood. And then another tab that is basically a view into your warehouse. So it looks at your transformations and your tables. And then a tab that helps you build transformations. So if you think about the concept of write SQL, get tables, and then from there, being able to QC that work. And then last, just connect that data and start making it useful. That can mean querying it, that can mean hooking up a BI tool or a reverse ETL tool.
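(Editor's note: to make the workflow described above a little more concrete, the sketch below shows the generic "extract and load, then write SQL, get tables, then QC" pattern in code. It is not Mozart Data's actual API or schema; sqlite3 simply stands in for a cloud warehouse, and every table and column name is invented for illustration.)

```python
# Minimal sketch of the ELT / "write SQL, get tables" pattern (hypothetical names).
# sqlite3 stands in for a cloud warehouse; the raw_orders table stands in for data
# that an EL connector (a Fivetran-style sync) would already have landed there.
import sqlite3

conn = sqlite3.connect(":memory:")

# 1. Extract & load: pretend a connector has already synced raw source data.
conn.executescript("""
    CREATE TABLE raw_orders (
        order_id       INTEGER,
        customer_email TEXT,
        amount_cents   INTEGER,
        created_at     TEXT
    );
    INSERT INTO raw_orders VALUES
        (1, 'a@example.com', 2500, '2022-03-01'),
        (2, 'b@example.com',  990, '2022-03-02'),
        (3, 'a@example.com', 4100, '2022-03-05');
""")

# 2. Transform: write SQL, get a table, i.e. materialise a cleaned, query-ready model.
conn.executescript("""
    CREATE TABLE customer_revenue AS
    SELECT customer_email,
           COUNT(*)                  AS orders,
           SUM(amount_cents) / 100.0 AS revenue_usd
    FROM raw_orders
    GROUP BY customer_email;
""")

# 3. QC: spot-check the first few rows before pointing a BI or reverse ETL tool at it.
for row in conn.execute(
    "SELECT * FROM customer_revenue ORDER BY revenue_usd DESC LIMIT 5"
):
    print(row)
```

The point is simply that once the raw tables have landed in the warehouse, a transformation is nothing more than SQL that materialises another table, and that table is the unit a downstream BI or reverse ETL tool consumes.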
Starting point is 00:18:56 So the real magic here is getting this done all in under an hour. So if you sort of go through a Mozart workflow, what you do is you have multiple data sources that you can be joining, cleaning, and effectively using downstream all in what feels like no time. And again, the bar is largely the ability to, one, understand what the columns mean. That's actually a pretty high bar, but that is something that companies can easily hire on. And then secondly, just sort of the bar being write SQL and then clean and consume the tables
Starting point is 00:19:36 that you need downstream. Okay, okay. So I guess this isn't a new idea, is it? So there are other products that I've heard of doing the same sort of thing. You've got Panoply. You've got maybe sort of like more recently, you've got things like. So what's the kind of the unique thing you bring to the market with this?
Starting point is 00:19:55 What's the innovation really? Yeah, I mean, sometimes it's sort of a cop out to say good UIs and UXs. Which is to say, I think that Dan and I both have a gaming background, and one of the things that is really surprising is how important the new user experience is, your sort of onboarding experience. And, you know, on the "hey, this has been done before" point, I think that that's very true. But so much of things are, one, a new user experience and, two, timing. So there's been such an incredible wave in the last five years towards the best in slice. And, you know, you've named some of those tools. And now, as kind of the different slices
Starting point is 00:20:49 have become even more refined and the problems to be solved are real and big companies do experience them. You've found this world where the smaller company has been forgotten. The sort of seed and Series A and even Series B company has been forgotten in favor of real problems
Starting point is 00:21:10 that companies with big data teams face in terms of being able to keep that data being both a source of truth and very impactful within an organization. So as people race to solve those problems, what I've found is that the problems of sort of growing up past Excel or past G-sheets or whatever it is, I think are being sort of missed. I mean, the market opportunity tends to be upstream. So
Starting point is 00:21:38 bigger companies pay more for their data, pay more for their data tools, pay more for their data practitioners. So typically, these companies all feel this gravitational pull towards it. So I would say that, like, you know, sometimes what's new is what's old. So when I think about sort of the tools that existed when I started, they were these sort of horrendous monoliths that people sort of still deride today, right? They talk badly about those. And the idea of the modern data stack is, you know, it tends to be centered around a bunch of best in class slices. That's not to say that an all-in-one solution is not a good one. It just happens to be the case that so many of our legacy solutions to this problem, the all-in-one solutions, have historically been much
Starting point is 00:22:34 worse than the sort of better slices that have evolved. So now that there's been so much energy sort of going in that direction, I think that there's opportunity to get folks set up really quickly to essentially solve the core problems, the most important problems of that user group, the user group that really desperately wants to start joining multiple data sets together and doesn't want to sort of have to negotiate half a dozen contracts to really get started effectively. So how do you avoid then the problem that I suppose vendors and tools like Alteryx have, where it may well be a good solution to one part of the market, the small part, but as the customer scales their ambition and their needs and so on, they have to leave that tool behind? I mean, is that something that you've considered? And is it something you've got in the strategy for the product, how you handle that kind of need? Sure. So, you know, I mentioned that I had worked at Zenefits, and Zenefits, you know,
Starting point is 00:23:32 when I was there was, in my mind, sort of the best place to get insurance if you were a company of a hundred people or less. And then the second that you became, you know, more than a hundred people, it was no longer the most cost-effective place to get insurance. So by the time you turned 150 people, you'd almost surely have churned off. So I think that that's certainly a problem for a lot of tools that address the SMB part of the market. And actually, there's loads of material on these podcasts about the types of problems that larger data teams face. And so I think the answer is, one, you do have to make very savvy tradeoffs about what solves those problems. And you have to, you know, create enough escape hatches so that, if you want to, you can append other tools. And we work with so many folks
Starting point is 00:24:25 within the data space. I mean, all of these sort of best in class tools, you know, that either you've had on the show or that people sort of love using, we see our largest customers starting to adopt. So I think that's a sort of nuanced challenge and one that I would love to face more of, which is what happens when the teams become too successful and they're getting too much value out of the data and they scale their team and their team starts saying, hey, we need some of these upmarket solutions. I think the answer is, one, have close working relationships and design partners that can really inform what are the, you know, most pressing needs of the business that we can build to. Because what you don't want to do is overdevelop. You don't want to deliver the solution that
Starting point is 00:25:16 applies to just the far right tail of users. In fact, because then, you know, you're stuck in that sort of gravitational field that I said created the opportunity for a tool like Mozart. Okay, so how would the relationship with Fivetran work then? I mean, I think I've heard you say in the past that you can, um, make use of Fivetran's transformations as well as the ones that you provide. So how do your tool and Fivetran interoperate then? Yeah. So again, I think to be an all-in-one solution, you don't want to build a mediocre all-in-one. You want to be best in class or have a world-class experience for a
Starting point is 00:26:00 subset of users. And I think that's where people confuse and sort of almost bemoan all-in-one solutions because they're so used to it being mediocre across the board. So under the hood, we use what's called Powered by Fivetran or White Label Fivetran for over 100 of our connectors. So a vast majority of the rows that get ingested via Mozart are leveraging Fivetran technology. So we think of Fivetran as a best-in-class. So we want to be able to leverage that best in class and offer that best
Starting point is 00:26:47 in class experience to our customers. Now, how does that work in practice? Well, it's sort of making Fivetran essentially more consumable to a more novice user of Fivetran. So I think, like, you have to add value on top of Fivetran, right? So if you're offering sort of that as a solution, what you need to do is make elements of using Fivetran easier. And that can be essentially using that data downstream, that can be elements of sort of scheduling or visualizing. So there's a lot of opportunity in my mind to sort of serve the sort of smaller customer where, you know, a company like Fivetran faces a variety
Starting point is 00:27:46 of challenges, given that it's also serving, you know, very large companies. And, you know, I think what we like about Fivetran is the sort of reliability and the automagicness. And we think of that as a key part of the experience of like sort of a true modern data stack, which is really delivering on reliability in the EL. Okay. Okay. So what about content? What about data models and kind of industry models and so on? Because again, if you think about things that Fivetran or other kind of more specialized vendors won't do, it's kind of like putting together things like, an industry standard, I don't know,
Starting point is 00:28:32 sort of SaaS data model, for example, for a SaaS company. Is that something you consider at all? So, I mean, first of all, I think Fivetran actually does do that now. Well, they do it for individual sources, but not the downstream models, really, I suppose. Sure. This is my point on that. Yeah, so I do think that there is a huge opportunity. And many companies, not just ours, but many companies are trying to tackle the problem
Starting point is 00:28:58 of there being a bunch of very standard tools. So if you think of yourself as a DTC business, you're almost certainly using a platform like Shopify. If you're a B2B company, you're almost certainly using Salesforce. Many companies are using Stripe. If you're advertising, you're probably using Facebook and Google. So given what is a well-known stack, and what the sort of, we'll call it, raw transform looks like from a tool like Fivetran, how do you then make, whether it's some combination of joins or some combination of very standard tables, downstream from essentially multiple sources? And I think that that's a huge opportunity because, again, going back to the gaming background, the faster you can get somebody into the game, the more they're going to consume it and enjoy it. And the more likely they're going
Starting point is 00:30:00 to return and effectively be playing that game for a long, long time. And I think that there is this challenge, which is, you know, you're trying to race to get to an answer, to get executive buy-in, to get people sort of using data the right way, thinking about data the right way. And there's only so much sort of, we'll call it, rope that the executive team or the finance team is going to give you. They're going to say, okay, we want to see an answer from this. I mean, we put all this money and energy, and you, into this, and we want to do something different as a result. And the problem is, and all the practitioners sort of listening to this will know, that actually, you know, 90% of the time is in the setup. I mean, maybe it's even more. And if you're doing all this, you know, it's kind of like when I watch cooking shows. If you ever watch a cooking competition show, the whole time they're sweating because they're moving so incredibly fast, but, you know, they might have an hour to cook a meal and it'll be minute 55 and
Starting point is 00:31:09 they haven't even put it in the pan yet. And what happened? Well, they spent all this time essentially prepping all of their ingredients, which was actually perhaps a good strategy. But, you know, if you can come to a place where you've already got, I think it's called a mirepoix or something like that, where you've already got sort of everything all chopped up, you're at a huge advantage in terms of delivering on that. And then that's only going to spiral. There's this sort of virtuous cycle within data, which is when the executive team or the operations team or the business teams that are consuming it and using it believe in it and have gotten a win from it, they just sort of care so much more and they give you so much more latitude to actually find those insights. I tell a story that at the start of my career, and actually, you know, this relates to Mode as well. At the start of my career, I was working at Yammer, and it just so happened that we ran an A/B test almost by accident in the first sort of three months that I was at the company.
Starting point is 00:32:11 And that A/B test was one of the biggest reasons, you know, there would be such an appetite for investing in data and data tools. It happened to be that it not only was a winning experiment, it happened to reject the sort of product philosophies of the executive team. So very rarely does that actually happen. Most often you run an experiment and you get sort of a null result. And when you do get a win, it tends to be economically reasonably small, unless you're working at a very big scale with a very big company. This happened to be one of those times where that wasn't true. And that's really sort of a blue moon situation. But that effectively funded, you know, tools like Mode and Mozart today, because the executive team said, wow, we really need to release everything in an experimental fashion. And, you know, in some sense, that's buy-in. And I think of that as sort of the virtuous cycle. Okay. So is your end goal, is the kind of point of all this for you, the productization of the analytics engineer? Do you think that's kind of a goal you should aim for, and something that's possible? Yeah. I mean, if you think about how
Starting point is 00:33:37 expensive analytics engineers are, if you think about, again, if we sort of go back to the start of our conversation, what you find is that the analytics engineer today adds tons and tons of value to these companies. So they are setting up, you know, core tables in a way to make data incredibly impactful throughout the company. But what you find is that they are, you know, in some sense expensive. Now, a lot of that work is very nuanced and subtle. It's about consuming and understanding data in a way that, you know... it's kind of like how it felt like self-driving cars were just around the corner every single day, and then in reality, it's going to be probably like decades. I think
Starting point is 00:34:26 the analytics engineer is not going away. We don't want the analytics engineer to go away. We think the analytics engineer is an incredibly awesome role and title and description of what many data folks and analytics folks have been doing for years. I think the things that we want to go away are some of the rote parts of their job that they kind of come into new companies and just have to do over again. So part of it is setting up the stack. Part of it is creating basic transforms of tools that they're almost certainly going to see in a new context across companies. Okay. Okay. Okay.
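(Editor's note: as an illustration of the "rote parts" Fish mentions, and of the standard downstream models for common sources such as Shopify, Salesforce or Stripe discussed earlier, the sketch below shows how that kind of repeated cleanup could be captured once as a template. The function, schema and column names are hypothetical, not Mozart Data's or Fivetran's actual models.)

```python
# Hypothetical sketch: the same cleanup of a standard source (here, a Stripe-like
# charges table) gets rewritten at almost every company, so it can be expressed
# once as a reusable SQL template. All names are invented for illustration.

def daily_revenue_model(schema: str = "stripe") -> str:
    """Render a reusable SQL transform for a standard charges table."""
    return f"""
    CREATE TABLE {schema}_daily_revenue AS
    SELECT DATE(created)       AS day,
           currency,
           SUM(amount) / 100.0 AS gross_revenue
    FROM {schema}_charges
    WHERE status = 'succeeded'
    GROUP BY DATE(created), currency;
    """

# The same template could then be rendered per customer or per warehouse schema:
print(daily_revenue_model("stripe"))
```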
Starting point is 00:35:06 So we're almost out of time now, but just to kind of round things off, so you've described the product as it is now, what's your kind of roadmap and what are the goals you've got for the product over the next kind of six months or a year, really?
Starting point is 00:35:16 Sure. So there are three sort of product focuses. One is get people on the platform, so make it essentially more consumable. Sometimes that's more low-code options, more automated options, sort of automated transforms, low-code abilities to do joins. And then second, it's get people to use the product. So again, this has been a theme of the conversation today. But for too many people, data tooling, or tooling in general, is a lot like exercise apps for me. So my exercise app ends up, you know, on my phone, I care about it for a day, and then I never use it again. And what we think is that data is incredibly powerful,
Starting point is 00:36:14 incredibly addictive in a good way. So we really invest in tools that help to get teams going, to write their first transformation, to start to consume it downstream. And, you know, for us, that sort of second pillar of our product is getting people to use it. And the third is, kind of like you mentioned, to solve sort of, you know, graduation risk issues, which is to say, try to build very, very efficiently the key features that larger customers need and that eventually, you know, smaller customers will grow up into. So we have, you know, in our roadmap a number of features that relate to each of these three main themes or initiatives. But ultimately, the ones that I'm most interested in, and that sort of reflect the rate at which we're growing, are really the first and second. So it's how do we really onboard this giant number of data-savvy users into a data infrastructure that's not daunting, that enables it to happen incredibly quickly,
Starting point is 00:37:36 and that doesn't require you to develop a new language or a new skill set. So that's kind of where my focus is right now. Fantastic. Okay. And just to round things off then, Fish, how do people find out more about Mozart Data, and what online resources are available for people? Great. So, you know, the best place to start is, of course, www.mozartdata.com. As for the types of online resources that I would also suggest, one is obviously joining any number of sort of analytics and data communities. We try to obviously blog a lot. And we also try to, you know, post content in these places, or be part of podcasts and other places, to share more about what we're doing at Mozart. Ultimately, I think the data education of all these very data-savvy and data-interested
Starting point is 00:38:35 individuals is going to ultimately help almost everybody that's playing in the modern data stack. Fantastic. Well, it's been great having you on the show, Fish. Thank you very much for coming on and sharing your really interesting views on the market and where it's going. And good luck with the product. And thank you very much. Thanks, Mark. Thank you.
