Orchestrate all the Things - Faros AI raises $16M to shine a light on developer productivity and the value of software, launches free open source platform. Featuring CEO / Co-founder Vitaly Gordon

Episode Date: March 2, 2022

What if what you think you know about developer productivity and the value of software is off the mark, and that is hurting the quality of your software, the operation of your organization, as well as your bottom line? Article published on ZDNet

Transcript
Starting point is 00:00:00 Welcome to the Orchestrate All the Things podcast. I'm George Anadiotis and we'll be connecting the dots together. What if what you think you know about developer productivity and the value of software is off the mark, and that is hurting the quality of your software, the operation of your organization, as well as your bottom line? I hope you will enjoy the podcast. If you like my work, you can follow Linked Data Orchestration on Twitter, LinkedIn and Facebook. Let's start from the beginning, since you and I have never connected before. I thought I would just ask you to share a few words about yourself and your
Starting point is 00:00:36 background and sort of the founder story for Faros. Yeah, not going too much back, I've been living here in Silicon Valley for about the last 10 years now. Prior to starting this company, I worked at Salesforce, where I started an initiative that was called Salesforce Einstein. That was their machine learning platform that helped companies use data in order to improve their internal company operations. And throughout that journey, and we can expand more, this is where I found that we were actually not practicing what we preach, in the sense that engineering organizations are not at all as data-driven as they should be. And that's what led me eventually to try to fix this problem, first within Salesforce, and eventually to start my own company that operates in the same space.
Starting point is 00:01:48 Yeah, I'm familiar with Salesforce Einstein actually, and at some point I have to also tell you that, well, I wear many hats. One of those is that I organize an event, and so we had some people from Salesforce presenting research that's applied in Salesforce Einstein. So, judging as an external viewer, let's say, it seemed like a successful initiative. So I guess you were probably motivated by what you said to go find your own way and try and apply analytics and AI in other organizations as well. Yeah, it was definitely something that exceeded my expectations.
Starting point is 00:02:35 And I don't know how much you know the background story, but it literally started with five people in a basement, and then it became this behemoth of hundreds and hundreds of people that were working on it, with, like you said, partially cutting-edge research. There were a lot of very interesting engineering challenges that were solved as well. I think by the time I left, we were producing over 10 billion predictions every single day, and that number is much higher today. And I think about 10,000 customers were using it. So it definitely exceeded my expectations in terms of commercial traction.
Starting point is 00:03:14 But yes, I felt that there was something specific that I wanted to do: take the learning from that journey and solve this new problem that I became really passionate about. Okay, so yeah, one thing I wanted to ask you that was not entirely clear to me was when exactly did you start this new company that you're working on now, Faros AI? And by the way, what gave you the inspiration to name it like that? I happen to be Greek and that's a Greek name. So I guess you somehow stumbled upon it, or did you specifically look for a name? No, so we actually have a story behind the name, and, you know, I hope that as a Greek person you would appreciate it. So in the infrastructure space, there started to be these marine analogies, which started with Docker and then Kubernetes, which became the sea captain.
Starting point is 00:04:19 And we were saying, well, if Kubernetes is the helmsman that steers the ship, what tells them where to go? And that is the lighthouse. And we said, hey, we want to be the lighthouse. That's what inspired the name. Okay. Yeah. Interesting. Yeah, that's a nice metaphor or analogy or whatever it is you want to call it. Yeah, and then also it's very nice to tell it in retrospect, but a big factor in it was also having the domain available. So we went through multiple iterations, and I think that once we found this idea and that the domain was also available, we became very excited. So when did you start the company and who did you start it with? Yeah, so the company started in late 2019. So we were barely operating for like three months before the pandemic hit, at least in the United States. And I started it with
Starting point is 00:05:29 my two co-founders, who both worked with me at Salesforce, at Einstein. One of them was the chief architect of our platform and the other one was one of our chief scientists for the platform, and basically two of the finest individuals I got to work with. So Shubha, she's also now our current chief scientist, got her PhD in computer science at Stanford and was recognized by Forbes as one of the top 20 women in AI. And Matthew I've known even from before. So I've known both of them for about 10 years each, we worked at multiple companies together, and those are my co-founders at Faros. Okay, yeah, thanks. And so I had a look at, well, what it is that
Starting point is 00:06:23 Faros does, and conceptually at least it seems, I would say, straightforward enough. There is this, well, in your own words, or at least your marketers' words, this connect, analyze, customize flow. And that's also something that I wanted to discuss with you actually: who's your ideal audience, let's say? But assuming that's engineers or engineering leads, it seems, like I said, pretty straightforward. But I wanted to let you explain it in your own words, because, well, apparently you're much more familiar and you can do it much better than I could. Yeah, so we'll kind of see about that. The idea, going a little bit backward,
Starting point is 00:07:08 came from our realization within Salesforce that, like I mentioned, engineering, which is filled with these people with engineering degrees who are supposedly extremely analytical, is not using data. But in comparison, organizations like sales and marketing in a company, which are not necessarily filled with those types of individuals, are way more data-driven in their practices. And there are multiple explanations why, but the other thing that we also found is that in many of these organizations, there seems to be a centralized system of record that kind
Starting point is 00:07:51 of has all the data in one place, so that analysis is much simpler. And because, as part of Einstein, we were using Salesforce as this database to do all of our machine learning on top of, it was extremely evident just how convenient it is to apply these techniques when all the data is in one place. In comparison, when we tried to apply those techniques in the engineering realm, where all the data was spread across many, many different systems, that was much, much harder, because you have to gather it in. This is where the connect
Starting point is 00:08:22 slash collect comes from. And if you look at some of these leading SaaS companies like Workday or Salesforce or ServiceNow or SAP for that matter, they have this one giant database that describes everything that the organization cares about. But for engineering, as a VP of engineering, I didn't have that visibility. And in order to get it, we were using spreadsheets and scripts to couple all that data together. So that was the first thing: we had to get all the data, but get it in an intelligent way, so that all of the pieces are connected and we can trace back everything, from the idea, which might be like a ticket or a document, until that code is in production, and throughout the dozens of steps
Starting point is 00:09:05 in between that it passes through. And then we wanted to have all of these metrics available, because one of the things engineering teams also don't have is an agreed-upon set of metrics that people know are important to measure and analyze, the same way that manufacturing or sales or marketing or finance have these sets of metrics that they all agree on. So that was the second part, the analyze. And then the customize was also a realization, even within Salesforce, which was a very prolific acquirer of companies, that no two companies are alike in terms of their technical stack and the way they develop software. So in order for us to be successful, we have to adjust to the way the companies work, and we cannot just come in and say,
Starting point is 00:09:56 hey, work our way in order to get that visibility. So that is what shaped the way the platform was built. And these are the three key components of it. Okay. So apparently the first thing that people need to do in order to use Faros AI is to go through the connect phase. So they have to connect, I guess, well, first their code repositories, then their ticketing management systems, and probably also their project management systems, and then all their wikis and whatnot. And their CI/CD system and their production systems and testing environments.
Starting point is 00:10:35 So those are the typical systems that companies connect to Faros. And then what does the analyze phase entail? Apparently you have to, like you said, do the data integration part and get all these data points together in a format that's easy to use for the next stage. Is there anything else? Yeah. So the analyze is actually more of the thing that suddenly shows up: we have already built in, I think, over 100 metrics that just light up as soon as you connect the systems. They all come in and you start, for the first time, seeing that visibility that cuts across systems.
Starting point is 00:11:23 I'll give you a very typical metric that became very popular in our space that is called lead time to production. It means: from the moment that an engineer finished their work on writing the code for a specific feature, how long until a customer can use that feature in production, right? And that work item goes between many stages that are captured in different systems, like we just discussed, right? So just understanding that is really what
Starting point is 00:11:52 gives you the understanding of the bottlenecks in your system and where the time goes. So one thing that we do when we start projects with companies is we try to get them to guess, and in most cases they're off by an order of magnitude in terms of how long it really takes for work to become consumable by customers.
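The lead-time-to-production idea described here can be sketched in a few lines of Python. This is only an illustrative sketch, not Faros's implementation; the event names and timestamps are invented for the example.

```python
from datetime import datetime

# Hypothetical events for one work item, stitched together from different
# systems (ticketing, code review, CI/CD, deployments); all values invented.
events = {
    "code_done":   datetime(2022, 2, 1, 10, 0),   # engineer finished the code
    "review_done": datetime(2022, 2, 3, 15, 30),  # approved by reviewers
    "ci_passed":   datetime(2022, 2, 3, 16, 0),   # pipeline green
    "deployed":    datetime(2022, 2, 9, 9, 0),    # live in production
}

def lead_time_to_production(events):
    """Time from 'engineer finished the code' to 'customers can use it'."""
    return events["deployed"] - events["code_done"]

def stage_durations(events):
    """Where the time went between consecutive stages: the bottleneck view."""
    ordered = sorted(events.items(), key=lambda kv: kv[1])
    return {f"{a} -> {b}": tb - ta
            for (a, ta), (b, tb) in zip(ordered, ordered[1:])}

print(lead_time_to_production(events))  # 7 days, 23:00:00
print(stage_durations(events))
```

The per-stage breakdown is what makes the guess-versus-reality exercise concrete: in this invented example most of the week is spent after CI passes, waiting to be deployed, not writing or reviewing code.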
Starting point is 00:12:16 Okay. And then the customization phase, does it only refer to, actually does it refer at all to metrics? So are users free to tinker with metric definitions, and what else can they customize? Yeah, so it's really the metric definitions, because, like I said, no two companies are alike. And also the definitions, like for example what I said about when work is done or when work is in production, might mean different things for different companies. So I'll give you just an example. If a company is using feature flags as a solution, then for them, the state that they care about is not that the work item is in production,
Starting point is 00:12:57 they care that it's actually available and went through all the phases of the feature flag system until it's really available to, let's say, 100% of the customers to consume, right? As opposed to companies that might not be using those kinds of solutions. And all the stages in between: for example, we have customers whose code goes through two environments before it goes to production. There are companies that might go
Starting point is 00:13:21 through five different environments. And being able to customize so that these metrics make sense for your business, that is what the customization entails. Okay. Okay. Well, you already alluded to some examples of how people are using that. I was wondering if you'd be able to share something more concrete, like an end-to-end use case, let's say, from an existing client where they have applied Faros AI and what it was that they got out of it.
Starting point is 00:13:56 Because it sounds great in theory, like, fine, you have visibility into your engineering process, but how does that translate to gains in, I don't know, time to production, which eventually also translates into monetary gains? So I was wondering if you have any use cases to share there. Yeah, so I can tell you, but first I would say that I would not underestimate the importance of the visibility, because it's truly eye-opening the first time you see it. And we see it customer after customer: the second that they start seeing the metrics, they cannot take their eyes away from them, because it's like this eureka moment where they just realized for the first time really
Starting point is 00:14:45 how things look, because it's just so hard to couple these systems together. The way our customers describe it is, you know, it's basically flying blind versus flying with all the instrumentation that you have on a plane, right? But in terms of actual results after implementation: so we are working with mostly large companies, some of them public companies, companies like Box or Coursera or GoFundMe, which I guess is not public yet, but has basically hundreds of engineers, and some of our customers have even thousands, if not more, engineers. And for them it is also the realization about resource allocation. So one of the things that we keep
Starting point is 00:15:41 hearing from our customers, and it comes a lot from the high-level management or even sometimes the board, is: we seem to hire more engineers, but we don't seem to get more things done. So why is that? Especially in an environment where it is so hard to hire more engineers, why don't we see the results? And some of the things that we show them is, well, if your bottleneck is not the engineers writing code, but, let's say, your bottleneck is in quality assurance, and you don't have enough people there, then actually hiring more engineers to write more features will make things slower and not faster. And this is a very counterintuitive example. And for some of the customers, we actually showed them that just transitioning resources and changing hiring plans in order to address those bottlenecks made a huge difference in just code being shipped to customers. So some of the research we've done with customers shows that the effect of just instrumenting the data and then improving the overall pipeline and who works on what is
Starting point is 00:16:55 equivalent to hiring 20% more engineers. You can achieve the same result with just the existing engineering force, and you can imagine just how expensive it would be to increase your engineering workforce by 20%. So that's one number. Another thing that we've shown, because some of these metrics also deal with the quality of features and with downtime, and this was actually a study that was conducted by Google themselves, based on the same research that we base our platform on: it can be $16 million a year for a small team, up to about $250 million per year for larger teams that have thousands of engineers. And a lot of it comes down to three main elements. One is doing less rework, so that most of the work goes into new features as opposed to fixing old ones. Then there is the customer value that they get from those features: by delivering features faster and having
Starting point is 00:18:19 customers use them, that also carries a benefit. And there is also the cost of downtime, which can be tremendous. If you really instrument everything in your system and you know what is happening, then when your website goes down, being able to understand where the problem is and measure it and fix it as quickly as possible also has huge economic implications. So between all of that, and the Google research, which is based on academic research, splits companies into four groups, so low, medium, high, and elite performers, it depends on where your starting point is, but the gains can be very substantial. Okay, thanks. Well, hearing you describe how Faros is used in production, let's say,
Starting point is 00:19:15 you said a few words that sort of triggered me. You mentioned value and you also mentioned quality. So as someone who has an engineering background myself, and I've worked as a software engineer for years, there is something that I think every engineer has probably thought of at some point, since pretty much everything we do is, in one way or another, digital. Has it occurred to you, has anyone ever asked you, whether it's possible to come up with something like a value definition, let's say, for different features in software, or for different teams and how productive they are?
Starting point is 00:19:54 Something like, well, I don't know, this person or this team is the one who's constantly generating successful features, or this sort of thing? So it's a great question. I think actually trying to attribute the work of engineering teams to, let's say, high-level business metrics is kind of the holy grail of our space. Unfortunately, we're not there yet. But there is another thing that is far more achievable, and also important, that people can measure, and there is a very clear line between work that
Starting point is 00:20:34 drives value and work that doesn't drive value, and that is whether that work is in production, which means it's actually consumed by customers. Then we can debate the perception of the value, but there is no doubt to anyone that work that did not reach a customer has no value. And this is one of the areas that we focus on, where a lot of engineers think that their work is done when they finish writing the code. And now we know that this is absolutely not true. Their work is maybe done when the code reaches customers,
Starting point is 00:21:06 and this is the area that we focus on: this kind of factory assembly line that takes the work that the engineers did and moves it all the way to customers, and how to improve that assembly line to make sure that it is as efficient as possible. Okay. Another thing that I wanted to ask you about was, actually, who would you say this offering is aimed at? I could assume probably engineering managers, CTOs, and this type of persona, and you can confirm or disconfirm that. And the follow-up to that question is whether you have any feeling as to how it's typically received by the people who are in the production line, let's say. So the engineers who work on different features, the people who do the testing, and all of the people who are involved in
Starting point is 00:22:04 one way or another in the process of producing software. Because what this does is it kind of shines a light on how they fare, basically. I wonder how that's perceived. Yeah, so it's a great question. I'll just preface it, because that was also the way I got into this space: when I started to become data-driven and I was looking for solutions in that space, I found that a lot of the solutions were actually shining the light at the wrong place, at things that are not actionable. The prior state of the art was things like, oh, let's measure lines of code by engineers, or let's measure ticketing story points, or all of these things that were easy to measure. But when I did the empirical analysis and tried to measure these
Starting point is 00:22:56 metrics and compare them to my own perception of the value provided by my team, there was absolutely no correlation. If anything, there might have been a reverse correlation with those types of metrics. So we actually went looking for research that could suggest something that is much more predictive. And we found an organization that's called DORA, which stands for DevOps Research and Assessment, that studied over a thousand companies and measured over a hundred metrics. And their findings were that
Starting point is 00:23:31 the metrics that matter are metrics that focus on process and not people, and metrics that focus on outcomes rather than outputs. And that is the kind of thing that today we use in our own platform and make available. So they published a book that became a national bestseller called Accelerate. And now almost every engineering leader has it.
Starting point is 00:23:52 And that's what we base our approach on. So to your question: yes, I would say the main value is for heads of engineering organizations. However, especially in larger organizations, at some point you see that an organization starts to spin up, and they have different names. They have names like developer productivity, engineering efficiency, or engineering empowerment, those kinds of names. And those organizations, what we found, and this is what I studied after leaving Salesforce and before starting the company,
Starting point is 00:24:34 they do exactly the thing that we discussed earlier in our conversation, this connect, analyze, customize, and they do it again and again at every single company they work at. And 90% of that work is completely undifferentiated; it is not company-specific. But you have to build that infrastructure. So today our target audience is actually those internal teams
Starting point is 00:24:58 that then need to build these kinds of solutions. In the same way that today it doesn't make sense to build your own ERP or CRM software, you just want a team that will manage and administer it, that is what we are trying to become: this ERP for engineering, specifically, that comes out of the box with the entire data model, all the metrics, all the reports, all the extensions and automation on top of it, so those types of folks can focus on that. And the second part of your question was, how do engineers respond to it? And I think what we found from working with our customers is that there is actually
Starting point is 00:25:36 a very large increase in employee satisfaction, because the thing that is probably the most frustrating for engineers is to see their work get stuck in some internal bureaucracy and not get to the customer. Engineers love what they do, it's a very creative field, and what they love is to actually see customers enjoy the benefit. So once they start seeing their work, instead of waiting for two months to go to production, get there in like one week, satisfaction goes up, along with all of the investment in their quality of life and how easy it is to now ship their code. There is a very clear correlation, not just from our empirical data, but
Starting point is 00:26:25 the research also shows it very clearly. There's one more thing I wanted to ask about, now that I have a better picture, and hopefully people who are listening have a better picture as well, of what it is that you do and why and how. I wanted to ask you about the name, actually, the name that you use to describe it, which is EngineeringOps. In my humble opinion, it may not be super representative of what you do, in the sense that it kind of alludes to something like DevOps, let's say. Well, it seems to me like what you really do is analytics on top of DevOps. So you may have wanted to call that DevOps analytics or something along those lines. So it's a great question. We debated it
Starting point is 00:27:16 internally quite a lot. And the reason why we decided to go with this name is more to draw an analogy to two things. One is, if you look at the role of a COO at a company, it's all of the work that needs to be done on top of the work that the company usually does, which the CEO takes care of. But really the analogy comes more from sales or marketing or even recruiting: there are now roles called sales operations, marketing operations, and recruiting operations, which are basically people who are highly
Starting point is 00:27:57 analytical. Most of the people who occupy those positions used to work for consulting firms like McKinsey or Bain. And their job, for example if you take the sales operations role, is to get data from multiple sources, to analyze sales pipelines, find the bottlenecks, and then go and report that information to the VP of sales and work with them on improving things. So while DevOps has this ops at the end, DevOps is more about the administration of the systems, as opposed to an analytical role. But if you look at all the other organizations, the operations side actually refers to the analytical part. And our entire company is in a sense an attempt to evangelize that kind of role. We believe that every single company should have these types of people, who are highly analytical, who analyze data, and then advise the engineering leaders on how they should allocate resources, make decisions, and things like that. Okay. By the way, the other thing I wanted to ask: so far it's clear how you operate,
Starting point is 00:29:20 aggregating data, and then you have metrics and all of that. Where does the AI part come in? Do you also extend beyond showing people what works well and what works less well? Do you also do predictive analytics or recommendations on how to improve certain flows? Yeah, so there are two ways. One of them is more about the attribution of the data. Like we said, when you collect data from all of these different systems, those systems were not designed to integrate with the other systems. So they have completely different sets of identities. For example, people might use one username in Jira and then a different username on GitHub,
Starting point is 00:30:09 and then, if you go to the CI/CD system, it's actually no longer a user, it's a team. And so there are all of these different identities that, in order for us to be able to trace all the work from idea to production, you have to stitch together, and use some kind of intelligence there. So that is one aspect. The second aspect that we uncovered, and this was even a while back, when we were still at Salesforce: as we were adding more and more data, because we wanted to get a more and more complete picture, we realized that there are a lot of spurious correlations that might come in
Starting point is 00:30:44 and things that are not very indicative. So it became harder and harder to analyze, and there is this concept in machine learning called the curse of dimensionality, where if you have too many dimensions, then you start seeing wrong things. And so we realized that at some point, there will be too much data for humans to analyze.
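The identity-attribution problem described a moment ago, where one engineer shows up as different usernames in Jira and GitHub and only as part of a team in the CI/CD system, can be pictured as a small stitching pass. This is a toy sketch with invented records and a deliberately naive email-join heuristic, not Faros's actual algorithm:

```python
# Invented records for the same person across two systems.
jira_users   = [{"username": "jdoe",     "email": "Jane.Doe@example.com"}]
github_users = [{"login":    "jane-doe", "email": "jane.doe@example.com"}]

def stitch_identities(jira_users, github_users):
    """Naively join accounts that share a normalized email address."""
    identities = {}
    for rec in jira_users:
        identities.setdefault(rec["email"].lower(), {})["jira"] = rec["username"]
    for rec in github_users:
        identities.setdefault(rec["email"].lower(), {})["github"] = rec["login"]
    return identities

ids = stitch_identities(jira_users, github_users)
print(ids)
# {'jane.doe@example.com': {'jira': 'jdoe', 'github': 'jane-doe'}}
```

In practice a shared email is often missing, so a real system would have to fall back on fuzzier signals such as display names, commit authorship, or team membership, which is where the intelligence comes in.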
Starting point is 00:31:07 And we need to actually help those humans by first showing them what is a real change and what is just noise, right? And the other thing is also helping them understand what are the next insights they should be looking at. So I'll give you an example. One of the metrics that companies now love to track is the number of weekly deployments, just how many times you deploy in, let's say, a certain timeframe. And now let's say you woke up one day, or you opened your dashboard, and you see a significant drop, right? Then the next question is, is it a real drop? Let's say yes. And then the next question is, what can explain it? Why is there a drop? And now you have
Starting point is 00:31:56 multiple hypotheses that you can check that could explain it. So what we do is we make it easy for you to get those hypotheses surfaced to you, ranked by, let's say, some probability. An example could be, because we also have calendar information, and sometimes we also have recruiting information: maybe many engineers participated in interviews this last week, so you had a very high load of interviews, which therefore meant that engineers had less time. But this interviews hypothesis is just one of the hundreds of hypotheses that you might have.
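The two steps in this example, first deciding whether the drop is real and then surfacing ranked explanations, can be sketched like this. The threshold, the deployment counts, and the hypothesis scores are all invented for illustration, not how Faros actually scores hypotheses:

```python
from statistics import mean, stdev

weekly_deployments = [41, 38, 44, 40, 39, 42, 43, 19]  # latest week dropped

def significant_drop(series, z_threshold=2.0):
    """Flag the latest point if it sits well below the historical mean."""
    history, latest = series[:-1], series[-1]
    z = (latest - mean(history)) / stdev(history)
    return z < -z_threshold

# Candidate explanations, each paired with an invented plausibility score.
# In a real system these would come from correlated signals such as
# calendar load, holidays, or CI outages.
hypotheses = [
    ("heavy interview load last week", 0.7),
    ("public holiday mid-week", 0.2),
    ("CI outage on Tuesday", 0.5),
]

if significant_drop(weekly_deployments):
    for reason, score in sorted(hypotheses, key=lambda h: -h[1]):
        print(f"{score:.1f}  {reason}")
```

The z-score check separates a real change from week-to-week noise; the ranked loop is the "surface the likeliest hypotheses first" part, so nobody has to check hundreds of them one by one.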
Starting point is 00:32:43 And the fact that we show it to you, without you needing to go and search them one by one, that is the second part of the intelligence. Okay, okay. All right, I think we've covered enough of the underpinnings, let's say, as well as some guiding principles. Let's switch gears a bit and talk about the product itself and the funding, because this is what you're actually about to announce. You're getting a nice amount of funding and you're also making a product announcement, announcing a new version of your product. So if you'd like to talk a little bit about the product side of things and also cover the funding. Yeah, so the product that we are announcing is the community edition of Faros. Throughout our work with some of these leading enterprises, we found that there is also a way for us to make this product available to a much larger target audience
Starting point is 00:33:54 of companies, even smaller ones. Because, like I mentioned before, our goal is to evangelize this concept of engineering operations and make sure that more and more companies start taking a data-driven approach to the scaling of their engineering teams. And the best way to evangelize is to get as many people on your side as possible, and we thought that an open source version of the product might do this. We will still have our enterprise product, because quite honestly there are many things that large companies are interested in besides just the function of the product, mostly around security, compliance, and all of those kinds of features. So that will still be the monetizable part of the product, but we can actually make the product itself also available.
Starting point is 00:34:50 And our open source version is actually not, you know, it's not at all kind of just a mini version of the main product. It's actually a fully functioning version with like, where no limit you can do most of what you can do in the, in the main product with that one. Okay, so the main product I presume is probably offered as software as a service. Yes. And again, I'm kind of assuming that because it's a standard feature these days that organizations that want to have a self-hosted version can also do that. So, yeah, so the main product are, we have SaaS and we do have a multiple version of,
Starting point is 00:35:30 let's say, hybrid to fully kind of on-prem deployments that we offer our customers. This is one of the things that working at Salesforce, as was a pioneer in that space gave us the experience that we need in order to how to talk to customers about these security issues and privacy and all of that. It seems you went in a way
Starting point is 00:35:59 you went a bit backwards. Typically, more companies start with the open source version and then scale up to the enterprise version. You started with the enterprise version and then decided to roll out the open source one. But I think it will also work out, and I understand the logic behind it, as you explained.
Starting point is 00:36:21 And what about the funding? Would you like to say a few words about that? The typical things, like who is funding you, and then we can wrap up by connecting that to your future plans: how are you going to use the funding, and what's your roadmap going forward? Yeah, so thank you. And yes, we like to do things differently, I guess. One of them, like you pointed out, is our product strategy, and another is our funding strategy. One of the things is that we were very fortunate, based on our success with Salesforce Einstein,
Starting point is 00:36:58 that we, you know, could have conversation with, you know, a lot of partners about kind of what it is that we want to do and how do we want to go about it. So one of the reasons that we have this, let's say, unusually large seed investment is because we had the right partners that were willing to back us when we went to our partners, which are SignalFire, Selfers Ventures, and Global Founders Capital, and explain to them that we truly want to build something that is equivalent to the other companies that I mentioned, like SAP, or Workday, or ServiceNow, or Salesforce, but to do it for the engineering space. And that will require some investment capital.
Starting point is 00:37:42 And the way we structured it was that, you know, we thought it would be kind of a win-win for both sides, that we will, you know, take that capital in steps that is kind of based off kind of milestones. And so both like our founding team can avoid the, you know, large dilution that comes with like taking a lot of capital, you know, before, but also will give our investor the way to minimize the risk and give us the funds based on these models.
Starting point is 00:38:13 And now that we achieved and now we actually have all that capital, we already, since the last, from about a month ago, we already kind of doubled the team already with this kind of new capital. I think our investors were really impressed with just how quickly we were able to have people join our ranks. But we plan to kind of probably double again kind of by the end of the year. And most of the capital still to this company is going for product development, which is really how our goal is to be the only dashboard that the CTO and the VP engineering look at, which means that we will have to integrate
Starting point is 00:38:59 with many, many systems and make many, many, kind of these metrics available to them. So that breadth and just the number of systems we will need to cover will require investment in the R&D side. So we're fortunate to have a lot of demand from customers to work with us and we need to meet that demand with the product features that they need. I would say that probably the fact that you're open sourcing the base version of your product may help in connecting to more data sources as well.
Starting point is 00:39:42 So if people want to connect to something that you don't already have a connector for, they may as well dive in and start building it themselves. Exactly. And that's really the part of the strategy. Thank you for pointing it out. So all of our connectors are now open source. So basically we kind of offload and this is why our kind of customer like it because in pretty much every single case when we deal with a customer, they have a much larger engineering organization than we do. And they actually don't want to be, you know, kind of held hostage by our ability to provide them with the value that they need. So they say, hey, we have the resources. Can we do it ourselves?
Starting point is 00:40:25 Right. And the answer is yes, now you can do. So it's not just, it's the integration layer that is now completely open source. And it's also the entire business intelligence layer that is also completely open source that they can do everything. And it's a full-fledged BI tool and they can customize every single metric and they can do. And this is kind of why our customers choose us because, again, when we are missing something that they need, they can actually, they're empowered to fix that problem themselves as opposed to waiting for our product roadmap to catch up. Yeah, well, like you said, it's not a very conventional way of doing things. But in addition to hiring more people for engineering, you may also want to hire people like in roles like community development and developer relations and the like, because
Starting point is 00:41:20 well, it sounds like you may end up with an influx of people wanting to use the open source version and join the community and all of that. Yeah, so we definitely already have people in that and we're hiring more. like the funding was most about kind of, you know, things that are directly kind of, you know, I think customers as opposed to, let's say more, you know, increasing our marketing budget to advertise on social media or something like that. Okay, well, great. Thanks. It's been a very interesting conversation and well, good luck with everything. It seems like things are going well for you, but you still have a busy, long and winding road, let's say, ahead of you. Definitely. For what we're trying to achieve, there is a very, very long road ahead of us. But thank you, George, for the conversation. I really enjoyed it and looking forward to see what comes out of it. I hope you enjoyed the podcast. If you'd like my work, you can
Starting point is 00:42:31 follow Link Data Orchestration on Twitter, LinkedIn and Facebook.
