Drill to Detail - Drill to Detail Ep.42 'Evaluex, ML and Optimizing BigQuery & Athena' With Special Guest Avi Zloof

Episode Date: November 12, 2017

Mark is joined in this episode by Avi Zloof from Evaluex to talk about the new world of elastically-provisioned cloud-hosted analytic databases such as Google BigQuery and Amazon Athena, how their pricing model and vendor strategy differs from the traditional database vendors, and how machine learning can be used to automate performance tuning and optimize workloads in this new world of large-scale distributed query and storage.

Transcript
Starting point is 00:00:00 So welcome to another episode of the Drill to Detail podcast with me, Mark Rittman, as your host. And my special guest in this week's episode is Avi Zloof, CEO of a startup called Evaluex. So welcome to the show, Avi, it's great to have you with us. And yeah, introduce yourself and let's have a chat. Thanks, Mark. I'm Avi, CEO of Evaluex. And we are in the big data business, specifically in optimization and services for serverless big data, which is, I think, the most exciting technology for us, the analysts, the data analysts, today.
Starting point is 00:00:54 Okay, okay. So what we're going to do then is talk a bit about the products you're creating and how they work with BigQuery and Athena and the other kind of new generation of databases in the cloud. But let's start off, first of all, just with a little bit of background to yourself. So you're now currently CEO of Evaluex, but your history and route into that company is quite interesting. It didn't really strike me as the traditional route that I've seen in the past, of people working in other software vendors, other database vendors.
Starting point is 00:01:22 You've got quite an interesting route into your current job, so tell us about that and, I suppose, how that leads into what you're doing now. Okay, so it's quite interesting, I must say, because originally, 20 years ago, I started as an Oracle developer with all the PL/SQL and the classic databases. And during the years, my expertise came to be on the client side mainly. So HTML technology, when it wasn't as cool as it is today, and JavaScript was my base. And throughout the years, in some weird manner I found out that
Starting point is 00:02:09 client technology and databases, especially data warehouses, are joined together in a very strong manner where you have to visualize data. In every company that I joined, my expertise in client-side helped the data warehouse people and analysts visualize their data, and in some weird manner I was always pulled into this kind of activity. So it was in OVO, and it was in DSNR, and it was in Wi-Fi, and it was a trader which is a major fintech company today that is doing amazing work. It was databases 10 or 15 years ago, and a few years ago huge databases with tons of updates and very complex visualization for enterprises, specifically for banks. Okay, okay. So how I came to know about your company: I do a lot of work with BigQuery, which most listeners will be aware of, and there's, I suppose, an ecosystem of plugins you can have for BigQuery, and
Starting point is 00:03:27 there's BQmate which which Philippe in the last episode talked about which is a kind of standard I suppose DBA developer Chrome plugin that you can use with BigQuery that helps you with helps you with I suppose kind of putting queries together and doing admin tasks and so on but your company produces a BigQuery Chrome plugin that looks at the, I suppose, the cost of queries and generally analyzes that sort of thing. And I was kind of interested to understand the company behind it and why you're trying to do it and so on. So maybe, Avi, first of all, just talk about this plugin you produce, how it works and what it does, really, just at that sort of level first of all so so it's
Starting point is 00:04:06 fascinating, because when we started this startup a year ago, we looked at the various things that we could disrupt and impact in this serverless big data space: Google BigQuery, Amazon Athena and such. And we built this infrastructure. And then a few months ago, I think about three months ago, we thought, okay, let's see how we can distribute that, after our design partners really loved what we are doing and we got very positive traction from the market. And we thought about the plugin, because Google BigQuery is
Starting point is 00:04:46 in some way a very unique technology, because it's the first database that is actually serverless, and at a very big scale. So because it's serverless, it drives all the classic analytics guys to work without the classic tools. And they have to work with the Google BigQuery console, because this is the only place where they can get some idea of the cost, or how much data is going to be scanned, and then they can know how much they will be charged. So in some weird manner all the analytics guys are working with the Google BigQuery console, which is quite interesting, because most platforms have some kind of fragmentation, but not in this case. So the Google BigQuery console, I learned to admire it, because at the beginning I was always complaining about what it was missing, but now that we are building parallel tools, I really admire
Starting point is 00:05:54 the work they did. It's really complex and it has tons of features, but it lacks the enterprise element. It lacks how to translate infrastructure, amazing infrastructure, into a tool that a company, a data-driven company, can work with. And this is the gap we try to fill. And we started with the Chrome extension, which is currently distributing virally, with really amazing results, really more than anything that we had expected. And it's a nice, very helpful way for you to know how to work with Google BigQuery
Starting point is 00:06:46 in a manner where you have a very solid cost prediction when you run a query, a very strong query results view at the bottom that can do pivot tables and sorting and column changing, and enhanced autocomplete for queries. All of this stuff is really nice to have, not a must; you just need to install the extension, nothing else. But then we added to the extension a very interesting layer of cost analysis.
Starting point is 00:07:28 Now, this is like the candy of the extension, because the unique thing about our cost analysis is, first of all, it's the first ever that gives you a cost analysis that is context-aware, meaning it's analyzing your queries and helps you find the queries that are not efficient. And it's the first one that is multi-project; most companies today on BigQuery are working with more than one project. And it's the only one where integration is a single click. You don't have to have an administrator or somebody to help you integrate the system or whatever. You just get in, give us the minimum permission possible, and you get full-blown analytics, and I must say that we got really amazing feedback on that. Okay, okay.
Starting point is 00:08:33 there's a few is quite a lot in that in what you talked about there to kind of unpack ready so so let's let's kind of start off first of all so you talked about serverless platforms and you mentioned BigQuery and you mentioned other ones like Athena and and so on. It'd be quite good in a moment to talk about I suppose how they differ and how they're similar and some of the characteristics there and so on. But you also talked there about the BigQuery web UI and that's the, for anybody who is new to BigQuery, that's the web-based Google interface that you can use to run queries. You can run, I suppose, exports and imports and so on there. But what you've been talking about there is the cost of a query,
Starting point is 00:09:12 something that you don't get from that interface by itself, and there are plugins to do that, but yours takes that to the next stage. And so maybe just talk about, I mean, you talk about cost there, and people who might come from the Oracle world that you and I used to come from would think about, well, you know, you pay for your Oracle license and you pay for it on a named user or a per processor basis. But BigQuery is quite different about how it charges for its use, isn't it? Maybe tell us about how that works really and what are some of the influences on the cost that you might pay? So I find it mind-blowing because I think in every service, good and bad,
Starting point is 00:09:51 cost is the elementary entry point in the way that you evaluate whether it's good for you or not. And what Google did is too good to be true. It's so good that a lot of companies are not using Google BigQuery because they don't understand it; they don't know how to encapsulate it into the daily usage of the organization. So it works like this, it's quite straightforward. You pay two cents a gig for storage, meaning if you have one terabyte, you pay $20, something like that, a month.
Starting point is 00:10:36 And you pay for what you scan. So if you scan one terabyte, you pay $5. So it's quite straightforward. If you scan more than that, you would pay more. If you scan less, you pay less. And it's a column-based database, so a table can have as many columns as you wish. And if you select only a few columns,
Starting point is 00:11:01 depending on how big the data is, you will be charged accordingly. So how much your bill in Google BigQuery will be depends on how much you scan. It can be $5 a month and it can be $2,000 a month; it's up to you. And I suppose one of the characteristics of cloud is, certainly I found with the work I do at the moment, that you don't typically hit the day-to-day constraints of capacity that you used to have with on-premise software. But the problem is you can spend all
Starting point is 00:11:28 of your money on this because, you know, there is almost no constraint to how much money you can spend if you run queries that are inefficient or expensive. So, you know, even though these things do start off cheap, you've got to watch that cost because it can snowball over time can't it yeah this this is the main drawback because i find this kind of pricing an amazing thing for data driven so a company instead of having one data warehouse can have 10 teams that each is a data driven small company in the company and they don't need the $100,000 just to start to have the infrastructure. They can start running.
Starting point is 00:12:11 But you find a lot of companies that feel much better paying a significant amount of money, because they know that this is it, there is nothing else, they won't be overcharged. And it's hard for them to shift, even if they would know that it could cost them 50% less. It's still very hard for them to shift. I just would point out one thing
Starting point is 00:12:38 to the guys who are not that familiar with Google BigQuery, or some of them that are saying that Google BigQuery is expensive. Just a couple of days ago, Spotify put out their blog post: they had everything on-premise, and they talked about how much cheaper and better-performing it was to transfer everything to Google, and specifically Google BigQuery. So, you know, you and I, again, coming from the world of Oracle and those other types of databases, cost can mean two things. Cost can mean the money you pay to the vendor for the software, but cost can mean how efficient, or how much resource, the query uses as well. And in BigQuery to an extent,
Starting point is 00:13:27 the two things are kind of like, there's a linear relationship. But one of the things that I've found with using BigQuery is that you think at the start, you've got access to all the resources that Google has in the world, but each customer has a certain amount of resource they can use and how you run the query
Starting point is 00:13:43 and how efficient that query is can mean the query might actually execute or it might run out of resources at the end i mean what what again you know where where does that come into things and how can maybe talk to us about how the cost of a query in terms of the computing results can have it can have an impact on customers as well I always give a simple example that in Google BigQuery, if you will ask what is the weather, you might get an enormous bill because it might just scan all your data, give you the average of the weather from ever and give you a result that you are not interested even and it might have cost you let's say for example we see $1,000 queries in our database for nothing so you have to ask a very complex question to get a simple answer
Starting point is 00:14:39 This is the type of work usually being done by most companies that are working with Google BigQuery to get the maximum efficiency: they write really long queries to answer a very simple, direct question, which is not the most healthy long-term approach to this type of technology, let's say, or leverage of data. So your plugin, I mean, talk us through: how does it actually do the analysis of cost there? And is it something where you maybe in time are going to start to look at optimizing those queries to cost less? Or is it more of a kind of retrospective reporting of cost? I mean, tell us how that works. So I will tell you a glimpse of the mind-blowing technology that we are looking forward to releasing in the next year.
Starting point is 00:15:51 That, I think, could really make a change in the way companies work with big data today. So first of all, let's talk about what we have today. The key element is the one click: in one click, you get an integration. So it's very important that for companies it will be really easy, simple and quick to integrate into our system. So it's really one click.
Starting point is 00:16:19 And what we do at that moment: we crawl and we get all the metadata and all the history of queries, to the end of time, for a customer. Then we run a full analysis on two metrics. One is queries: what query cost them the most, which queries took the most resources, on what day, and for how long, and every type of perspective that can help a user drill down and understand how he worked and how he can change it. We had an interesting case where we had a customer that had a cost issue, like everybody else. And we looked at it and we helped him. Usually they do it by themselves,
Starting point is 00:17:18 but we really love this customer. So first-hand help. And we tell him, hey, you have a guy there that runs $30 queries every couple of seconds. He runs it again and again and again. So he said, yes, this is my son. So we said, okay, if it's yours, don't fire him. But get him into the room and tell him to stop doing it because he's really wasting tons of money. So this kind of serverless is very hard to pinpoint issues and efficiency.
Starting point is 00:17:55 And with our system, this is one side of the analysis, it's making the job much easier. The second part of the analysis is the storage, where we help you again. The leak of storage is very problematic, because you store tons of data. We have customers with petabytes of data; we have some of them with 20 or 30 petabytes of data. Understanding what is being accessed, what is not being accessed, and how you can delete some of it, or maybe download it to an archive, is crucial for cost optimization, so we are doing our best there to help companies.
Starting point is 00:18:42 Okay, so that's kind of retrospective: you're making that data more, I suppose, accessible, and you're making it so that you can kind of, I suppose, explore into it and so on. But yeah, that's a situation that we found ourselves in as well in the company I work at now. We take the logs, and we noticed a very similar thing. One customer was costing us a fortune in queries because they were refreshing a real-time dashboard each day. They were doing a SELECT star, and they were doing it every day, and so we were able to work with them to bring that query down to just the columns they need and just the time range they wanted.
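As a toy model of that saving (my own illustration, not from the episode): on a columnar, date-partitioned engine, bytes scanned, and therefore the bill, shrink roughly with the fraction of columns selected and the fraction of partitions the filter actually touches. The $5-per-terabyte rate is the illustrative figure quoted earlier.

```python
SCAN_USD_PER_TB = 5.0   # illustrative on-demand scan rate

def scan_cost_usd(table_tb, cols_selected, cols_total,
                  days_selected, days_total):
    """Approximate query cost on a columnar, date-partitioned table:
    bytes scanned scale with the fraction of columns read times the
    fraction of daily partitions hit (a simplifying assumption that
    all columns and days are the same size)."""
    fraction = (cols_selected / cols_total) * (days_selected / days_total)
    return round(table_tb * fraction * SCAN_USD_PER_TB, 2)

# 10 TB table, 50 columns, a year of daily partitions:
full = scan_cost_usd(10, 50, 50, 365, 365)  # SELECT * over all history
slim = scan_cost_usd(10, 5, 50, 7, 365)     # 5 columns, last 7 days only
print(full, slim)
```

Under these assumptions the full scan costs $50 every time the dashboard refreshes, while the narrowed query costs about ten cents, which is the scale of saving that column pruning plus partition filtering buys.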
Starting point is 00:19:14 We then partitioned their tables. But is your plan to be able to offer some sort of proactive advice with the tool as well, to say things like: were you to partition the tables, you would get this? You know, what's your plans in that sort of area? Then the next step, I must say, is the mind-blowing thing that I talked about before. So we just filed a patent on something we call SuperQuery. Now, SuperQuery is the holy grail. SuperQuery enables us to take your query, take it apart, and build it back again in the most efficient way. I usually describe SuperQuery as something like a virtual machine for Java. A virtual machine for Java means that you can write code in Java and it runs on Linux or on Windows; you really don't care, and it makes sure it will be the most efficient, it translates it down and stuff. SuperQuery is basically the same, meaning you will write your questions
Starting point is 00:20:21 in the most simple way in SQL, and we will make them the most efficient possible. And we are doing it with two elements. One: we are giving enhanced caching. One of the most problematic issues in serverless big data is that once you acquire your data, once you create data and you get the results, it's very easy to lose it, because if you didn't save it, you don't have access to it again and you have to ask your question again. So let's give an example. Let's suppose you are asking: what is the sales average of the last seven days? So you will get seven records of results, how much you did on
Starting point is 00:21:17 the first day second day etc and if you ask the same question a day later you have to pray all the day they again and get all the results again and have to pay everything, even though for six days you already acquired this data, but you lost it. It's not accessible for you anymore. Okay. So that sounds like a kind of results cache, but it's a bit like in Oracle terms, it would be results cache with fast refresh or something so you're actually taking i suppose you're taking the aggregate from the previous date and then just incrementally updating that as opposed to that's quite clever stuff but but it doesn't end there because we we are doing various of gains there with with the infrastructure to maximize your cache and to enhance your cache. But what we also do is understanding your organizational usage.
Starting point is 00:22:13 So I know what you asked yesterday, and I know what your teammates asked also. An analyst asked it again, so it's easy for me to prevent replication and to pinpoint your question. So if you ask 'what is the weather', you didn't ask 'what is the weather in London today', you asked 'what is the weather', and I will understand that you mean 'what is the weather today in London', because you are in London and this is what normally you would ask. And if you want something different, you will have to ask a complex question; but if you ask a simple question, you will get a simple answer. And join it with multi-provider usage, and then you have a very interesting breed of query technique that can evolve into really strong data
Starting point is 00:23:16 driven analytics. Okay, there are a few things in that I kind of recognize. So a moment ago you talked about, I suppose, a results cache, which is typically an aggregate at the end, and then incrementally updating that results cache with whatever today's data is. So that's interesting. I think there's also almost a kind of Google Assistant-style contextual query search thing in there as well, where you're saying: because we know that you ask questions about London, for example, we will kind of assume
Starting point is 00:23:52 that context in the query the next time as well there's quite a lot going on there really you've got and so and I suppose it also addresses that the issue we have with kind of BigQuery that there is no I suppose aggregate management in there I mean is that do you think that thing about aggregates is is an interesting thing in in BigQuery is that is is producing numbers small numbers from big numbers one of the BigQuery and doing it very fast is that one of BigQuery's weaknesses at the moment yes it's like it's like an amazing wife, Google BigQuery. When you work so much with it, although you love it, you can pinpoint a lot of disadvantages because you know every aspect. I would just say this.
Starting point is 00:24:40 You mentioned your customer that did very expensive queries, and you optimized so you won't get this cost leak and all those issues. In my world, or at least in my company today, and hopefully soon for everybody else that is using Google BigQuery, you won't have to do it. Continuous optimization will be an integral part of the daily work, because we will be able to do this work for you. And even though the customers or the users might not be focused, let's say, with their queries, we'll be able to pinpoint and make them more efficient. And when you combine it with multi-provider and AI, you get a very interesting sauce there. Okay, where does the AI come into it then? Again, I think I've read some of your material and you mentioned an AI algorithm. What's that then? Well, just before I describe that, I might say that in the current world, everybody is saying AI, and it's kind of overused.
Starting point is 00:26:01 So I would just suggest a different phrase for that: computer intuition. I think the major shift in the next 10 years will be computer intuition. That computers can do one plus one better than human beings has been kind of common knowledge for everybody since, I don't know, the 1970s, but up until now human intuition was stronger than any computer. This is going to change. When you have a big amount of data, a computer can assume much better than a human what needs to be done. And when we discussed serverless big data, what an amazing match: when I need to optimize your data and your queries for your benefit, what better way to do it than with AI? And
Starting point is 00:27:01 this is what we are working very hard on, and this is the secret sauce there. So when you ask a simple question, you get a simple answer, and it will know what you want, because there would be some kind of computer intuition behind it, and you will trust it to know better than you, and that the results are what you asked for, or what you need. Okay, okay. So you mentioned, I mean, we mentioned back at the start that we've been talking about BigQuery because it's the thing that I'm most familiar with. But is your objective to try and do this for Amazon Athena, for example,
Starting point is 00:27:43 or kind of Oracle 12c or you know or the autonomous state is this a concept that would apply across all different of these of these serverless databases or is it very BigQuery specific? I think it's one of the most exciting thing today is Amazon Athena, Amazon Spectrum, Oracle 18c. The solution for Microsoft that is really close to come out very soon. When we started a year ago, there was one player. And it was super clear that everybody is going to jump aboard. It was only a question of time. And now we are a year later and there are four solutions
Starting point is 00:28:27 for a customer to select from for serverless big data, and there are several more that are at least not as recognized, or not yet as famous, or haven't got that publicity. But this is the way that we're going to interact in the future, and whoever chooses to work with serverless big data needs help with optimization, and we plan to be this front end for them. So yes, the provider treating a query as a commodity is a basic part of the strategy that we are working very hard to achieve. Okay, okay.
Starting point is 00:29:12 So the problem that you're looking to solve, I've solved in the past by using Druid. And Druid struck me as a not entirely dissimilar approach in the way that it maybe stored data and so on. But it also had the kind of roll-up on load that helped me with the aggregation problem. I'm presuming you're familiar with Druid and what it's trying to do. Why would a customer not want to go, say, to Druid instead, and what's your view on Druid as a kind of alternative to BigQuery in this kind of area? Well, here I must say that the main factor is our client side. So I think, from my expertise,
Starting point is 00:29:58 I cannot see a strong infrastructure without a strong connection to visualization. And the main thing is here that we are not looking at infrastructure itself. We are giving a full-blown solution that goes up to, let's say, we call it the first stage analytics. So there are really minimal tools today for analysts to share their knowledge. And we are looking at enterprise services. There are a lot of aspects today
Starting point is 00:30:35 that are missing for enterprises to work with this kind of technology, and aggregation and utilization alone are not sufficient. We are looking at the full-blown solution, and I must say that it makes me super excited, because we are working on it internally.
Starting point is 00:30:57 It's like a dream come true to work in this kind of environment. Well, let's get on then. So you touched there on the kind of enterprise and the full service and so on. And again, looking into a little bit of what you guys are trying to do, again, this kind of query optimization and so on, that's just one part of, I guess,
Starting point is 00:31:16 where you're looking to go. And you described your company as the serverless big data management service, or what you're trying to do is that. Just paint a picture, really. I mean, you started to talk about it a minute ago, but what do you see as the bigger problem to be solved here and the bigger kind of, I suppose, footprint and influence
Starting point is 00:31:33 you'd have on organizations using this kind of software? To put things simply: you take your iPhone, you say 'hey, Siri', you ask a question, and you get an answer. You go to an organization, you ask a question, and you need infrastructure, you need data warehouses, you need analytics guys, you need IT guys, you need DevOps, you need schedulers, and a lot of things to connect together. And then, after two years, you can ask to add a layer of AI on top of that.
Starting point is 00:32:10 And it's quite crazy that the difference between 'hey, Siri' and 'what is my revenue' or 'who visited today' is really, really a big leap. And only, let's say, 5% of the companies today can reach big data with AI that is really scalable and really big. And it's not the technology today. And it's not that far.
Starting point is 00:32:47 You can really, today, do much with much less. But you need help there. Okay. And that's an interesting statement you made there, actually: do much with much less. So that was part of, I think, Oracle's picture with the autonomous database and so on there. I mean, is that just a kind of catchphrase? Is that just a trite statement?
Starting point is 00:33:08 Or do you think that's going to affect the way that we use and pay for databases in the future? Totally yes. So the simple answer is totally yes. And I think that most people don't really understand: tomorrow morning you will be able to go to Evaluex, set up a connection to Oracle 18c, Google BigQuery and Athena, and start querying your data, and the data will be placed automatically wherever the right place for your queries is, because we would know what your needs are, and you will get the results and you get the most attractive bill in the end. And all your queries will be short and understandable. This is the kind of future that we are going to, and at Evaluex
Starting point is 00:34:01 we just help to make it happen faster than others expected. That's interesting. I mean, I don't know if you have heard of the company Gluent. So Tanel Poder is the guy running that, and I actually worked there for a little while last year. And they had an interesting story, which had a few parallels to what you're talking about there. So their pitch was that they liberated enterprise data: they, I guess, produced the plumbing that would allow you to offload a workload from, say, Oracle, moving the storage of the data onto, say, Hadoop. And then you could choose to run your data stored on Hadoop, and potentially access it using Spark and stuff like that, and offload work from Teradata. But longer term, again, the story there, and the interest, was that you could move data to where it was most economically suitable to run.
Starting point is 00:34:51 And it was down to you where it ran, rather than the software vendor. It looks like what you're doing is almost providing the brains for that, to say that, you know, were there to be an infrastructure there, we could tell you where to put it, really. So it's an interesting parallel to what Gluent are doing, maybe. Yes, I think the ETL kind of world, the world that moves data from one place to another, is a full, complex world. By the way, this is not our specialization; there are a lot of good, amazing companies that are doing really good work there. But on the other side, when you need to decide what to create, where to create it, and to calculate your needs, you really get very little help. So, yes, we will connect with these types of services to help customers shift the data from one side to another. But we just will be the, let's say, we will be the top analysts. We
Starting point is 00:35:53 will be the, we call, Ido, my partner, we are taking a lot of offload from the analysts and the DBA and help them do what they do best and most of the technical things, the expertise in infrastructure will be taken out because really no need you to be an expert in Google BigQuery. You need to be an expert in AI, an expert in big data, an expert in analysis, not in Athena, not in an autonomous database. Really, these type of things are changing all the time, and it's very hard to keep track. Okay, okay. So let's just change track a little bit on this. So as a business then, so what you're saying sounds very interesting, and thein is free to use at the moment and so on I mean the business
Starting point is 00:36:48 model behind what you're doing and I guess people would tend to think that what you're looking to do is something that Google themselves should build into the product as just one of the features so you know query optimization and and so on is yeah how is that a business for you and how is that something and what's your what how do you intend to to grow and to take this forward and, I suppose, to convince customers to go with you rather than just wait for that to be delivered as a feature in Google, in BigQuery? Well, I think, first of all, convincing customers to work with us it's the it's mostly is very interesting because it's
Starting point is 00:37:27 it turns out that they thrive on new technologies, on help in optimization, and on helping them focus on their business and not on the technology. So we have quite easy work there when we bring out good technology. I suppose one better way of putting that from my side, really, is to say: who is your target customer? You know, who is the person you're selling this to, and what problem are you solving for them? I mean, that probably is a better way of asking that question.
Starting point is 00:38:01 Well, the thing is, we look at any data-driven company today that is using a serverless database as our target company. Now, what we say is: we will help you optimize, for free, in every aspect; but when we deploy the SuperQuery product, where you get seamless, continuous optimization, for that we will charge money, of course. And again, the beautiful thing with that is that you will know exactly your patterns today, how your patterns will look when you are using our system, and how much cost reduction you will get from using it. So it will be a very nice dance with the customers, to show them that we can make the
Starting point is 00:38:58 system much more efficient, cost them much less, and hopefully take our bite out of it. But I think it's pretty normal. It's quite easy to quantify, isn't it? I mean, certainly we found that when we started to optimize the queries we ran, even if we ended up doing more compute to create summaries, to pre-join tables, that was by far outweighed by the savings we made in queries. And I guess because cost is so visible in BigQuery,
Starting point is 00:39:28 I suppose what you're saying there is that it's very easy to attach a value to what you're doing, really, and then say: we've saved you X, we've cost you that much less than something else. So I suppose it's quite easy to make that case. If a product works, it's easy to justify, really, isn't it? Yes. I must say that we are lucky in this respect: it's pretty easy to understand
Starting point is 00:39:54 the value of what we're doing. And we are really looking at how the world is shifting to big data. I'm pretty convinced that a lot of the tools today are having a really hard time struggling with big data. So the basic strategy for most companies today is: take your big data, reduce it to small data, and then work with it. And this is a really problematic approach, because eventually it takes time, and time is crucial for success. So you really have to be able to work on your big data, and to work with your big data you need big data tools. And we are in the business of leveraging big data tools for enterprises. Okay,
Starting point is 00:40:44 so so beyond I suppose optimizing the query performance in time and so on what's there anyway you know for you what's the next problem to be solved in the industry really I mean you mentioned there about I supposed people dealing with it scale is an issue but what's the next unsolved problem do you think in this industry so I think the most attractive thing that we saw is the analytics collaboration. Up until now, analysis or analytics guy usually worked as a single person, usually. And if he wanted to collaborate his data usually use used an email
Starting point is 00:41:27 or text editor to do it with other analysts uses excel maybe and if they really and if we really had the time usually takes longer he used one of the bi tools that is commonly used. So what we are trying to do is to motivate data-driven companies to share data fast in a controlled environment. And it's very hard when you're talking about serverless when if you want to share a query and somebody might use it and it might cost you $100 per run. And in this field, we think we have a quite good edge and technology to help companies the moment you have this aha moment with your data which is quite common for us
Starting point is 00:42:16 the data persons you will be able to share it now and fast and impact your business. So this is quite a thing that we have a very interesting thing to offer. And I believe it will come soon next year. Okay. Well, I'm conscious it's late over in Israel at the moment. So just to kind of wrap up really, how will people get hold of the Chrome plugin and find out a bit more about your company and what you do?
Starting point is 00:42:45 So first of all, it's pretty simple. Simply go to our website. It's evaluex.io. And you have a link to the Chrome extension. If you just want to get the analysis without the Chrome extension, you can get it too. We're simply registering. Just put your email and we will we will take you from there it's pretty simple piece of straightforward nothing too fancy and that's about it yeah it's excellent I mean it's been
Starting point is 00:43:17 it's been great to speak to you and so thanks very much coming on the show and yeah I look forward to hearing about how you got in the future and yeah it's been it's been a speech you great thank you it was a great change and we'll be happy to talk to you with you again okay thank you Thank you.
