Drill to Detail - Drill to Detail Ep.53 'ThoughtSpot, Search and AI-Powered BI' With Special Guest Doug Bordonaro

Episode Date: April 23, 2018

Mark Rittman is joined by ThoughtSpot's Chief Data Evangelist Doug Bordonaro to talk about the value of data, issues around trust and consent raised by the EU's new GDPR regulations, and how ThoughtSpot are applying ideas from search engines combined with artificial intelligence smarts to surface insights and drive real value for business users from their analytics investment.

Show notes:
Value Becomes the 5th “V” in Big Data Factors
ThoughtSpot - Search and AI-Driven Analytics for Humans
The ThoughtSpot Blog
Doug Bordonaro on LinkedIn
“Will GDPR Make Machine Learning Illegal?”

Transcript
Starting point is 00:00:00 So welcome to another episode of Drill to Detail and I'm your host Mark Rittman. So I'm joined today by Doug Bordonaro, Chief Data Evangelist at ThoughtSpot. So welcome to the show Doug and why don't you introduce yourself to the audience and tell us who you are. Sure, it's great to be here, Mark, and thank you very much for having me. So as you said, I'm the Chief Data Evangelist for ThoughtSpot, typically talking less about ThoughtSpot and more spending my time with executives around the world who are managing data or data-centric organizations, CIOs, CTOs, CDOs, every kind of C acronym you can think of, really talking about how they use data, what best practices they're seeing in terms of structuring their organizations
Starting point is 00:00:56 and making data-driven decisions in their organizations, and then just sharing them across industries so that there's some kind of ongoing dialogue about how to be successful in this world that seems to be changing at a record pace. Excellent. So, Doug, I had a look through your LinkedIn profile before we did the call and noticed you actually worked on the Walt Disney Data Warehouse. So, I mean, what was that about then? What was your role there? And I suppose, what did the Walt Disney Data Warehouse do for Walt Disney? Disney's been a rapidly changing organization for many years.
Starting point is 00:01:33 And when I went in there, it was just after the Eisner years where there was a very decentralized organization and moving more towards the collaboration that Bob Iger's been fostering. So I managed the online data warehouse and BI platform for Disney. It wasn't the theme park information, which is really about hotel optimization. It was kind of everything else, right? So the ESPN data and ABC and all the online properties. So Disney was very good at – they've always been a very mature data organization,
Starting point is 00:02:06 and they're very good at getting a comprehensive view of a Disney customer, whether you go to ESPN or go to Disneyland.com, and there's a central ID that ties everything together. So it was a fascinating opportunity to work at a world-class company, really building out this modern-day data warehouse, moving away from the technologies and processes that I think worked well in the late 90s when they bought the company that all this was based on, and really bringing them into the modern age with platforms like Netezza and MicroStrategy and state-of-the-art technology 10, 15 years ago.
Starting point is 00:02:51 And so beyond that, you actually went to work for Netezza. Is that correct? I did after that. That's right. Disney was a great place to be. I would recommend it to anybody, but a little too big for me. So I had a lot of coworkers and friends who were at Netezza at the time. They were actually public at the time, but still with very much a startup mentality, trying to change the way people access data. And for me, it's always been obvious when I see something that just makes sense.
Starting point is 00:03:28 I've recognized those opportunities somehow in the past when I went to AOL. And my relatives asked me for free airline tickets because they thought I worked for American Airlines instead of America Online. And the Disney thing was similar. It was a chance to really reinvent the way that they manage data and then the way they got answers in front of people. And when I went to Netezza, it was to an organization that I saw as fundamentally changing the equation around how data is stored and how to scale large-scale data warehouses. And in its time, there was nothing better than Netezza, and I think it's still a world-class technology. So for people that are maybe new to, I suppose, old-school data warehousing, what did Netezza do differently then to the competition before that?
Starting point is 00:04:16 Why was it, as you say in your words, a game-changer, really? Old-school data warehousing. Way to make me feel old, Mark. So Netezza and technologies like it, and I'll put Teradata in that as well, but certainly since then, you know, there was Greenplum and then Vertica and all sorts of different options. Now we, of course, have Redshift and Snowflake. It was really the advent of this massively parallel processing technology. And when I look back at it from the vantage point of time, which is always a luxury you wish you had then, it was a great place to be. I learned a lot at Netezza. It was a great technology. It was a good solution. And the long-term value of Netezza, I think, is really
Starting point is 00:05:07 popularizing this MPP, massively parallel processing approach to managing data. I don't think anybody today would consider building out a five-terabyte data warehouse on anything that wasn't MPP, whether it's Redshift or Netezza or Oracle Exadata or one of many, many options. But back then, back in the early 2000s, it was a radical approach. And we would go in at Netezza with the tagline, you know, 800 times faster than Oracle, because Oracle dominated the market.
Starting point is 00:05:43 So that was largely who we targeted. And it was actually an understatement. We kept the number artificially low because if we said thousands of times faster, nobody would believe us. But we regularly were. And it wasn't necessarily that Netezza was better than Oracle. It was more that the MPP approach was far superior to just scaling up by adding more memory or more CPUs. And that has really led, I think, directly and indirectly, to the architectures we see today, where we have MPP ETL tools and MPP data warehouses, and even companies like ThoughtSpot, where at least part of what we do is leverage that type of architecture to scale seamlessly.
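To make the MPP idea concrete, here is a minimal, purely illustrative sketch of the scatter-gather pattern such systems rely on: rows are hash-partitioned across workers, each worker computes a partial aggregate over only its own slice, and a coordinator merges the partials into the final answer. The table, column names and worker count are invented for the example; this is not how Netezza or ThoughtSpot is actually implemented, just the general shape of the approach.

```python
from collections import defaultdict
from concurrent.futures import ProcessPoolExecutor

# Toy "fact table" of (region, revenue) rows: names and values are invented.
rows = [("EMEA", 120.0), ("AMER", 340.0), ("EMEA", 80.0), ("APAC", 210.0)] * 1000

NUM_WORKERS = 4

def partition(rows, n):
    """Hash-partition rows across n workers, the way an MPP system spreads data across nodes."""
    parts = [[] for _ in range(n)]
    for row in rows:
        parts[hash(row[0]) % n].append(row)
    return parts

def partial_aggregate(part):
    """Each 'node' computes SUM(revenue) GROUP BY region over only its own slice."""
    acc = defaultdict(float)
    for region, revenue in part:
        acc[region] += revenue
    return dict(acc)

def merge(partials):
    """The coordinator merges the per-node partial results into the final answer."""
    total = defaultdict(float)
    for partial in partials:
        for region, value in partial.items():
            total[region] += value
    return dict(total)

if __name__ == "__main__":
    parts = partition(rows, NUM_WORKERS)
    with ProcessPoolExecutor(max_workers=NUM_WORKERS) as pool:
        partials = list(pool.map(partial_aggregate, parts))
    # Same result as a single-node GROUP BY, but the scan and aggregation work is spread out.
    print(merge(partials))
```

The design point is simply that adding more workers adds more scan and aggregation capacity, which is the "scale out rather than scale up" argument made above.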
Starting point is 00:06:27 So that's a good lead-in to your role now. So you were the first sales hire at ThoughtSpot. So tell us what it was like when you first arrived. What interested you about ThoughtSpot? And just give us a very brief overview now of what they do, and we'll go into more detail later on. Sure.
Starting point is 00:06:48 Well, it's almost, you know, it sounds more glorious than it is to say I was the first sales hire. It was really the first business hire outside of an office admin. And at that point, there wasn't really much of a product. There were no customers at all. Nobody really ever used the technology. And so you might ask, well, why would you go to a company with no product and no customers? I knew I didn't want to stay at IBM.
Starting point is 00:07:21 IBM was a good place to be as well. But it's a big, big machine, and I need to be on the front lines, I think, to be happy. So I had been talking to a number of different people about what to do next and really just looking at what's in the market. And every conversation I had with people, I would sit down and say, well, explain what you do and explain what your differentiation is. And generally, they would take 20 minutes or so.
Starting point is 00:07:50 And at the end of that conversation, I would say, oh, that makes sense. I understand. It wasn't so much a matter of having to be convinced. It was just a matter of not understanding immediately and really wanting to dig into it. When I first saw ThoughtSpot, though, I was introduced through a mutual friend of the CEO's. When I first met Ajeet Singh, who founded ThoughtSpot, it was obvious to me.
Starting point is 00:08:14 As soon as I saw it, I immediately thought, this is just how it should have been for the last 20 years. It's so simple and obvious. When you really do see a better mousetrap, it doesn't need to be explained to you. You just see how it's revolutionary. And so immediately I just said, look, what role can I play in doing this? I mean, this is something that I think is really going to change how people access information. And the good news is, almost five years later, I'm an even stronger believer in that than I was then.
Starting point is 00:08:44 I think that's been well validated in the market, and it's been an exciting ride, mainly because we're solving problems that people have had for decades all over the world, and that's a great place to be. Okay, so we'll get into how ThoughtSpot do that a bit later on then. And the reference to IBM there is obviously because Netezza were bought by IBM, is that correct? So you had the choice then of kind of being absorbed by IBM or finding something new, really. And, I mean, the startup world is always interesting, really, but I would be interested to see, particularly with ThoughtSpot, the things you're saying there about it solving problems the way they should be. I mean, that's a
Starting point is 00:09:20 very generic kind of phrase. I think it would be interesting later on to see, particularly, the angle that ThoughtSpot have got in this area, really. And I suppose, as a way of doing that, it'd be worth having a chat about some things I've seen that I think you've written, or certainly ThoughtSpot have written, about data recently, and some of the challenges, I suppose, of making data actionable and getting meaning from it. And I think something you said recently was that data is in abundance, but insights are hard to find, which is, not an obvious statement exactly, but a statement that is true. But, you know, what do you do from that point onwards? What was your point there, really? What were you trying to say with that statement?
Starting point is 00:09:51 but it's, you know, what do you do from that point onwards? What was your point of that, really? What were you trying to say with that statement? Well, there are multiple points here, I think. You know, it's always struck me how fragmented this world of solving, getting insights from data is. I'm putting a lot of different technologies in this bucket when I talk about this, everything from operational databases to capture source data to data cleansing to ETL to data warehousing to business intelligence. But these are all very stack-oriented technologies.
Starting point is 00:10:26 If I'm selling Netezza, for example, I think it's a fantastic technology, but an end business user never sees it. It's a piece of the stack meant to solve a business problem. And I think one big problem we've had in this industry is not enough focus on real business value, almost an outcome-based focus as opposed to an infrastructure-based focus. And this advent of big data, a term I really don't like, but that I think I'll
Starting point is 00:10:53 define for this purpose as a bunch of data over there and I don't know what it is, it's growing every day. And this is really a forcing function for these technologies, because we've gone through this period over the past five, six years where it's really been about managing data and the rise of Hadoop, which gives you cheap storage without imposing business rules on data and other technologies like that. But I think that what's happened is if you look at the consumption of information, it hasn't really changed over the past decades, right? If I look at the products we were using at AOL back in the late 90s, and then you look at the products that we call modern today, the reality is that they look very similar. They're palettes of options and things to drag
Starting point is 00:11:47 and drop and buttons, which all are focused around publishing information to people. So this publication paradigm is, for me, I think the core thing holding us back now. It's not about great technologies underneath. It's the fact that if I'm a non-technical business user, the only way to get information today in the vast majority of organizations is to have an analyst or somebody build it for me, whether it's a dashboard or a report. We solve everything through this mechanism of publication. But that's not how we get information in our personal lives. And I think it's the only real way to access vast amounts of information. So I'll point to, for example, if you look at the consumer space, if you look at the advent of search engines,
Starting point is 00:12:35 they didn't really start as the search engines we know today. They started as curated directories of the Internet. And really, you know, really quickly, I think it became apparent that it just wasn't scalable to have somebody at Yahoo or Infospace or whatever, AltaVista or whatever it is, building a manual directory of the web. There had to be a better approach. And that approach that's proved itself over the last decade in the consumer space is search, right? We use Google to search for websites and its petabytes of information.
Starting point is 00:13:08 We use Amazon to search for things to buy and its terabytes of a retail data model. We use Yelp to find restaurants, or search websites like Kayak to book flights. And nobody's ever been to Amazon training. So I think that the real opportunity here in the industry, and really why we started ThoughtSpot, was to take what we know works in the consumer space for accessing a large amount of data and just apply it at work. So instead of searching all day at home
Starting point is 00:13:42 and then going to work and waiting 20 minutes to find out where the requirements form is on the K drive just to ask a simple question, it's the search engine that lets you get that analytical response. And I think that's what we'll see even pulling back from BI. A real trend over the next five to ten years, and maybe not that long, three to five years, is this trend toward taking all this value we've built in infrastructure and actually making it provide repeatable value to business users in a very easy way, just like we've seen in the consumer space. Yeah, I agree. I mean, I
Starting point is 00:14:19 think certainly my own experience is that, you know, you can give data to people, you can give BI tools to people, but the reaction I kind of hear from people in that case is, well, I've already got lots of BI tools, I've already got lots of data, and that's of no help to me, that's actually one more problem. What I need is insights, the kind of nuggets of information. And search is one way of doing that, but certainly, you know, data itself, just by itself, does not have inherent value, does it? It's not. In fact, arguably, it's a cost, really, to people.
Starting point is 00:14:56 Yeah, and I think this is one of the key ideas behind the value of the role we're seeing Hadoop play in this space: I don't have to make a decision about what's valuable in the data and what's not. Instead of imposing business rules and then probably winnowing down the data to store in a more expensive relational format, I just dump it all in Hadoop, right? And then if I find data, or information in that data, later, I can go back and find it. And I think that acknowledges implicitly the problem that we all see but don't really talk about very often,
Starting point is 00:15:30 which is that we're looking for nuggets of gold in tons of raw earth. And most of this stuff will never be valuable. Most of what we collect, we're collecting because it's there. An engineer puts a monitor on an airplane that produces seven terabytes of data every 30 minutes, not because it's needed by anybody, but just because it's so cheap to do it. And so we end up with all these flows of data coming off of everything, whether they're streams of Twitter data or electrical power grid information every 15 minutes or every 30 seconds from households in entire counties.
Starting point is 00:16:08 We're collecting all of this, and the challenge becomes finding the gold in it. And I think the key problem, and this is kind of my point about both accessibility and the fact that raw data doesn't have any intrinsic value. The key problem is that the people who today are best equipped to find things in that data, analysts and data scientists and technical people, are exactly the people who are getting further away from the business as organizations grow. The person who's a marketing manager doesn't have the skills to find the gold in that dirt, and the person looking for gold doesn't really understand what gold is to the organization because they're not in the line of business. So you get these formal processes where they communicate with each other via requirements instead of understanding it. And what we're trying to do and what I really think that the industry, not only in BI but in this entire kind of data management space, is moving toward is tools and processes not to support, you know, 1880s style of publication,
Starting point is 00:17:14 So, I mean, there's a couple of angles we can take on that, really. Actually, one really kind of topical angle, I suppose, is the whole question, really, of having data. So not only is there getting the value out of it, but it's understanding what's there and really making sure you get the absolute value from it. Anything you do have has to be there for a reason, and you need to make sure that when you do have it, you exploit it for the maximum value for the end user as well, really. Yeah, I think ultimately, if you're doing it right, those things coincide, and there is value to the end user. But of course, this is the Wild West in terms of data.
Starting point is 00:18:12 And the core problem, I think, is that it always takes time for legislation to catch up with reality and what's actually functionally possible. And so you see this with some of the recent challenges Facebook is having where I don't necessarily think that there's a big ethics gap. I don't see anybody there sitting around stroking white cats and lifting their pinky to the corner of their mouth. Instead, what I just see is naivete, right? People who don't quite understand how data can be used. And so ultimately, I think education is a big part of that. I think, like any other problem in this space, it comes down to tools, to processes, and to, I think, an understanding of what really is possible. Because we're still figuring this out.
Starting point is 00:19:07 Legislation hasn't caught up, but it's really incumbent on companies to, I think, be a step ahead of that because there's a responsibility, and we've all seen what happens to organizations that don't take that duty seriously, whether it's the tribulations Facebook's going through now or the very well-publicized data hacks we've seen with credit reporting agencies and retailers recently. BI tools are much more in the hands now of, say, end users. And you talked about kind of finding nuggets of information, you know, getting data in people's hands and the IT staff building reports being separate from the business. Has that not solved the problem, really? You know, putting BI tools in everybody's hands on their desktops, really? Or does that still just confuse things or not really help?
Starting point is 00:20:02 Well, I think that it's not as simple as desktop tools, because I think that's the way the industry's been going, right? To effectively make things simpler, make them easier. We used MicroStrategy at Disney, and it was, there's no question, an IT tool. And it was a very good platform, but no non-technical end user was ever a user of MicroStrategy. They just consumed what more technical people built for them. And I think desktop tools go to the other end of the spectrum. I think desktop tools make it much easier for somewhat technical analysts to get information.
Starting point is 00:20:43 It doesn't fix the problem. It's not as easy to use as Amazon, but three days of training is much better than a month. But they have the opposite problem. They are much easier to use, but they don't often have that enterprise completeness piece to it. So most of the companies that have specialized in desktop tools are very good at getting into an organization because they can sell one license by talking to a business user with a problem. But you can't back into the governance piece. You can't back into the scalability problem. And even with products where you can publish to a centralized server as a potential solution to that, you still have 40 or 100 or 500 desktop people potentially creating 400 different versions of revenue or other metrics. And so really the
Starting point is 00:21:33 long-term solution, I think, is a balanced approach, an approach which has a great end-user story, simple to use. I don't need training, I feel like I have the ultimate power, just like you would have with Amazon or Google, but a back end where you can align with the enterprise governance standards and processes where there's one version of a metric and you can see what people are doing. Again, very much like Google or Amazon, these very robust platforms where I can go to Google and search for anything and feel like I have total power and never have to call them and ask for help. But they really do know everything I've ever searched for.
Starting point is 00:22:16 And in fact, they make very intelligent suggestions as I type because they know all of this about me. And that's really kind of what we're going for is, you know, desktop tools solve one side of the problem. Traditional publication-based reporting tools solve another. But it's putting those two things together and solving the usability consumption problem as well as the scalability and enterprise management problem.
Starting point is 00:22:41 That's the only way we're going to actually find that value in these increasing mountains of information. Otherwise, we'll just become a technical exercise of analysts churning through data without any real firm understanding of tactical business problems they're trying to solve. Okay. So tell us about what ThoughtSpot do then. What is the product and what problem does it solve then really at a high level? So largely it's what we've talked about. It's that access to information.
Starting point is 00:23:12 We're really trying to solve this last mile problem. Companies may have invested in data warehouses and Hadoop and data movement tools and even publication type business intelligence products, but there are still thousands of end users out there who are in sales or finance or marketing or any non-technical role that just want answers. And one of the things that surprised me when I first came to ThoughtSpot, which makes sense in retrospect, was how many reports that we have to create every day in organizations, how many dashboards really aren't needed. It's just that because it takes three weeks to get a request fulfilled,
Starting point is 00:23:53 an end user will just load up their requirements so they don't have to get back to the end of the line if they have a follow-on question. And so what we're really doing is this: an analyst still has a role, and that role is in making sure that when a user looks at a metric or an attribute, it is the right number, and it is defined the right way. But instead of creating that metric or attribute every time we create a new reporting environment
Starting point is 00:24:18 or every time we create a new universe or framework or whatever the term is for the product or workbook, the analyst defines it once for the entire organization, and then it can be used in any combination by an end user just using a search box like they already know. So we're really trying to solve this accessibility and adoption problem, taking the data that customers have today and making it very quickly available in a governed, repeatable, scalable way to the entire organization, to try to deliver on this promise of data-driven decision-making and data
Starting point is 00:24:51 democratization that will never be solved by just creating more reports. So it sounds to me as if there's a process where you go through and you define these central definitions of metrics, and then there's a search interface over that, and that's what you use to query the database. Is that correct? Or is it more kind of conversational, or what, really? So think about it this way. A useful way to think about it
Starting point is 00:25:17 might actually be thinking about the lottery, right? So when you're doing any kind of country or state lottery, you might have to pick six or eight different numbers. Well, even with just those six to eight numbers, there are billions of possible combinations. And in any normal business, you have a lot more than six to eight factors. You might have hundreds. So think about the possible combinations that people might want to ask. Your chance of having a report that already answers that next question is probably worse than your odds of winning the lottery, because nobody's thought of that combination.
Starting point is 00:26:00 And it's not just the number of different things. Let's say date. Do I want it weekly? Do I want it monthly? Do I want it daily? And it's not easily divisible, especially at scale, because there aren't an even number of weeks in a month. So you have to predict what combinations of factors people might want to ask
Starting point is 00:26:18 when you produce a report, and you do that via requirements. And this is why the reporting cycle in a traditional BI shop is never-ending, because there are an almost infinite number of combinations. And instead of creating these environments that say, well, we'll take six of these 100 factors we have in our company, and we'll assume that you can only do it by week and that you're only going to want product at this granularity, making assumptions about these factors to try to minimize the size of this report, what we can do at scale is take each of those individual variables and just make it available at its most granular level. So I don't have to pre-calculate dates or aggregations
Starting point is 00:27:02 of those by week or month. I take the raw data and I can say, this is shipment date, and make it available to the company. And it comes from this column in the database. And effectively, I'm giving end users access to all of these individual things, and they can create their own combinations. So you're much more focused on building a governed, scalable architecture as an analyst with ThoughtSpot. And an end user is just asking the question they want to ask and getting any combination of these things.
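As a rough illustration of why pre-built reports can never keep up with that, here is a tiny back-of-the-envelope calculation of the combinatorial explosion being described; the attribute, metric and granularity counts below are invented purely for the sketch.

```python
from math import comb

# Hypothetical business model: these counts are invented for illustration.
attributes = 100          # region, product, channel, customer segment, ...
metrics = 30              # revenue, margin, units shipped, ...
date_granularities = 5    # daily, weekly, monthly, quarterly, yearly

# Views that slice one metric by any 3 attributes at one date granularity:
three_way_views = comb(attributes, 3) * metrics * date_granularities
print(f"{three_way_views:,}")   # 24,255,000

# Allowing anywhere from 1 to 5 grouping attributes:
all_views = sum(comb(attributes, k) for k in range(1, 6)) * metrics * date_granularities
print(f"{all_views:,}")         # 11,906,324,250; roughly 12 billion possible questions
```

Even with these modest, made-up numbers, no requirements process can enumerate every question in advance, which is the argument for exposing the granular variables directly.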
Starting point is 00:27:32 We can do that because we can leverage all of these technologies, which just weren't around six or seven years ago, like very responsive HTML design and very functional HTML5 specifications, like the MPP architectures we talked about earlier, or a better understanding of how search bars work to lead people to insights. So we can take all the machine learning that is starting to converge with BI and put it behind the search bar. If you think about how a consumer search engine works, you don't really type your whole question into the search bar. You type half of your question, and then the
Starting point is 00:28:08 product reads your mind, and you choose it and say, oh, that's what I was thinking. And that's what we're doing with BI. And now we're starting to take that machine learning behind the search bar and turn it around on the data itself so that instead of just making great suggestions that are relevant to something I might want to know, we can actually have ThoughtSpot go and just return a whole page full of things that are really interesting in the data and start to get to the point where we can do what's really valuable and not just answer your question simply and easily and correctly, but even give you the answer to a question you didn't ask but should have. And finding those undiscovered questions, I think, is really where we're going with analytics and BI, not just better reporting or even optimization, but really identifying opportunities for data to provide value that as an end user I never would have thought of, because my processes and systems and solutions are leveraging past questions and other people's searches to actually find that unasked question.
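The "product reads your mind" behaviour described above is, at its very simplest, prefix matching over known search tokens ranked by how often they have been used before. The sketch below is a deliberately naive illustration of that idea, not ThoughtSpot's actual suggestion or ranking logic; the token vocabulary and usage counts are invented.

```python
# Naive type-ahead: rank known search tokens by past usage for a typed prefix.
# The vocabulary and usage counts are invented for illustration.
usage_counts = {
    "revenue": 420,
    "revenue by region": 310,
    "revenue by product": 150,
    "returns": 95,
    "retention rate": 60,
    "record count": 12,
}

def suggest(prefix, limit=3):
    """Return the most frequently used tokens that start with the typed prefix."""
    matches = [t for t in usage_counts if t.startswith(prefix.lower())]
    return sorted(matches, key=lambda t: usage_counts[t], reverse=True)[:limit]

print(suggest("re"))   # ['revenue', 'revenue by region', 'revenue by product']
print(suggest("ret"))  # ['returns', 'retention rate']
```

A real system would also score suggestions against the data model and the individual user's history, but the principle of completing half-typed questions is the same.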
Starting point is 00:28:50 So who would be a typical user of this product then? Would it be anybody who wants data and reports and so on, or is it more of a kind of exploratory, analyst-type sort of person that would use it?
Starting point is 00:29:23 It's typically the former. It's typically somebody who is, who wants data to make decisions in their job, but is not technical. Somebody who traditionally would have to go to an analyst, whether that analyst sat in IT or three cubicles away from them in marketing, and say, hey, could you get this for me?
Starting point is 00:29:45 Could you build this report for me? There's no dashboard that answers this today. I would say non-technical. It's your user of Amazon or your user of Google, non-technical end users who just want to get something done and want to use data to do it. Okay, and how does it help them? I mean, typically what I find is that
Starting point is 00:30:03 if we do solve the problem of how non-technical users are going to access data, and we try and do away with analysts and the need to do SQL, the next thing is how do you then follow the train of thought with them? How do you answer the question that comes after that? Is that something that you've considered as part of the product? How you sort of follow that train of thought, really? Actually, you know, what's fascinating is I think that's the key thing that drives adoption. I think you've really hit on the most important piece, because we all have batch-mode thinking, I think, in terms of BI and analytics, where we just assume
Starting point is 00:30:37 that since the only tool we have today to get answers is a report, that I just have to ask the right question in the right way and get the right answer. And then I'll stamp it gold and look at it every Monday. But that's not really what happens. In fact, most of those things are never looked at a second time. Instead, I think that the key here is this dialogue. The fact that I look at an answer and I say, why is that bar high? Or why did the Southeast have fewer sales? Or why does this person have more incidents? And then I can ask that question. And I can ask it as fast as I can think.
Starting point is 00:31:16 And this is the inherent limiting factor in publication. If I look at this bar and I say, I'd love to see that by product. Well, it turns out that drill path wasn't in my initial requirements because I didn't think I'd want to look at it that way. And so therefore, I just can't do it in a typical BI tool. With something like ThoughtSpot, with an exploratory environment, I'm no more limited in what I can drill to than I am with the next word I can type at Amazon. I can go to any piece of information I want to,
Starting point is 00:31:50 and that's the fact that these combinations you can ask are unlimited, and the answer you get will typically return in under a second against almost any volume of data. That really leads to a different way of working. You never search Google, look at the results, and say, oh, I asked the wrong question, back to the drawing board. You just keep refining your question until you get what you want. And that workflow, that intuitive kind of dialogue with the data, that's what you have to have in order to really get to the value in your vast amount of data.
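One way to see the difference between a fixed drill path and this kind of free-form refinement: if the data is kept at its most granular level, every follow-up question is just another aggregation over the same rows rather than a new report request. The toy sketch below uses an invented sales table to show three successive refinements; the hard part a product has to solve, returning such answers in under a second at scale, is ignored here.

```python
from collections import defaultdict

# Invented granular fact rows: (region, product, week, sales).
rows = [
    ("Southeast", "Widgets", "2018-W14", 120),
    ("Southeast", "Gadgets", "2018-W14", 80),
    ("Northeast", "Widgets", "2018-W14", 200),
    ("Southeast", "Widgets", "2018-W15", 90),
    ("Northeast", "Gadgets", "2018-W15", 150),
]
COLUMNS = {"region": 0, "product": 1, "week": 2}

def ask(rows, group_by, where=None):
    """Aggregate sales by any combination of columns, optionally filtered."""
    result = defaultdict(int)
    for row in rows:
        if where and any(row[COLUMNS[col]] != val for col, val in where.items()):
            continue
        key = tuple(row[COLUMNS[col]] for col in group_by)
        result[key] += row[3]
    return dict(result)

# First question: sales by region.
print(ask(rows, ["region"]))
# Follow-up, as fast as you can think of it: why is the Southeast lower? Break it out by product.
print(ask(rows, ["region", "product"], where={"region": "Southeast"}))
# Another refinement nobody wrote a requirement for: the same view, week by week.
print(ask(rows, ["product", "week"], where={"region": "Southeast"}))
```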
Starting point is 00:32:32 So I suppose another potential kind of challenge with search interfaces, and you get this when using your Amazon Echo at home, is you don't know what you can ask it sometimes. So when you present people with a catalog of reports and metrics and so on, they can see and get some idea from that. How do you deal with that initial bit of, what do I do, really, at the moment when you first start using the product? It's interesting. The good news is that the problem we're trying to solve is actually a real, visceral problem, as opposed to an Amazon Echo.
Starting point is 00:33:01 I have three myself, so I understand. And to some extent, you get it to do a few different things. When I'm at work, I know what I'm trying to get done. I typically don't go to ThoughtSpot any more than you go to Google and say, I wonder what I should ask today. Typically, I go to ThoughtSpot when I'm trying to answer a question based on data, right? Which marketing campaign brought in the most, you know, attributed revenue over the past week? And I can just ask that question. We also have the luxury of not having to have a domain that is effectively everything, right? I think Alexa would like to get to the point where I could say, Alexa, could you pick up my daughter at school at 3 p.m. and she would just take care of it.
Starting point is 00:33:48 In practice, you're much more focused in your job and in the data you use. There's much more predictability than a lot of other domains. And we're accessing very specific data that has a structural component to it. We're not searching unstructured data. There are plenty of tools like Google to do that. What we're doing is we're using this against your relational data, and that bound use case really helps us focus users on what's valuable. On the other hand, Alexa has the opposite problem, right?
Starting point is 00:34:21 So I think it's a very different space. We're not focused on being just a recommendation engine, like maybe an IBM Watson would be, where I'm just looking for an answer to why my device isn't working, and as long as I'm 80% correct, that's, you know, good enough for 80% of the people. If I'm answering analytic questions and I'm 80% correct, that's horrible, right? You have to be 100% correct. And so you're much more focused on the use case. And it's not this open-ended, like, you know, real AI problem that Amazon's trying to solve. So looping back to the original kind of premise
Starting point is 00:35:03 of the conversation, when I found the documents, or the blogs, that you'd written, where you talked about how data does not have inherent value, you know, you have to do things with it to make it valuable. How does ThoughtSpot then, in summary, how does it make data valuable, and how does it surface these insights better than, say, competitor tools and other uses of machine learning, really? Well, the simplest way to answer that question is to say that it connects the people who understand the value, the non-technical business users,
Starting point is 00:35:31 directly with the data, which is where they're trying to get the value from. So shortening that pipeline, not just by making each piece easier, but by removing pieces, by saying we don't need to create these artifacts that answer pre-digested questions connecting those people who understand the value right to the data is the best way to get that value in the hands of
Starting point is 00:35:52 Okay, so what do you think is the next problem that needs to be solved, then? Let's imagine you guys have solved this problem. What's the next challenge in analytics and data, do you think, that needs to be addressed for it to get real value? Yeah, that's a great question. I think I'll refer you back to the thing that I mentioned briefly a few minutes ago, which is answering the unasked question. I think that the real value here is not just making it simpler and more accessible and easier. You know, that's something we've been focused on and I think we've been very successful at. But I think the next step is really when we're able to leverage this machine learning,
Starting point is 00:36:38 the patterns of what other people have asked, other people in your department, or what you've asked before, not even just to predict your next question, but really to find insights in the data that you would never have thought to ask. So that maybe we're sitting in a meeting and you're in Slack or some other product, Salesforce Chatter, and you get a notification right there where you already are that some metric, which you've never asked about, but which ThoughtSpot thinks you might be very interested in based upon your past behavior, all of a sudden this metric went up by 20%. Or we discovered that
Starting point is 00:37:18 it's highly correlated with this other metric you do care about and have expressed interest in. So those unasked questions, being able to identify them and say, yeah, I can answer that question for you quickly if you want me to, but what you really should be asking is this other question. That's where we're really going to get value from machine learning and AI, especially as it pertains to analytics and business intelligence. And we're not so far away from that world.
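Mechanically, the kind of notification described here can be thought of as two simple checks run continuously over metric histories: has a metric moved sharply against its recent baseline, and is it strongly correlated with something the user already follows. The sketch below is a deliberately simplified, hypothetical illustration of those two checks, with invented weekly series; it is not ThoughtSpot's actual algorithm.

```python
from statistics import correlation, mean  # statistics.correlation needs Python 3.10+

# Invented weekly metric histories; in practice these would come from the governed data model.
metrics = {
    "attributed_revenue": [100, 104, 101, 108, 112, 110, 118, 142],
    "support_tickets":    [40, 41, 43, 40, 39, 42, 41, 44],
    "email_open_rate":    [22, 23, 22, 25, 26, 25, 27, 33],
}
watched = "attributed_revenue"  # the metric this user has already expressed interest in

def pct_change(series):
    """Latest value versus the average of the preceding weeks."""
    baseline = mean(series[:-1])
    return (series[-1] - baseline) / baseline * 100

for name, series in metrics.items():
    change = pct_change(series)
    corr = correlation(series, metrics[watched])
    flags = []
    if abs(change) >= 20:
        flags.append(f"moved {change:+.0f}% against its recent average")
    if name != watched and corr >= 0.8:
        flags.append(f"strongly correlated with {watched} (r={corr:.2f})")
    if flags:
        print(f"{name}: " + "; ".join(flags))
```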
Starting point is 00:37:48 Excellent. So, Doug, how do people find out about ThoughtSpot, then, and how do they maybe access information online, or get a demo, or speak to someone like yourself, really? Yeah, so certainly I think that the easiest thing to do is just to send an email to info at thoughtspot.com or come to our website. There are a lot of videos on our website, a lot of interesting materials. And if you don't want anybody to contact you, then just say that, and we're happy not to. But tons of information there. And I would recommend that if this is something that does sound at all interesting to you, that you just do what I did and you look at it in action, whether it's one of the videos on our website
Starting point is 00:38:29 or just a demo. We're happy to do one for you one-on-one, because you'll understand as soon as you see it. You've already used something like this every day for the past 10 years. We're just doing it at work with your data and making it easy. Excellent. Okay. Well, Doug, thanks very much for coming on the show. What we'll do is we'll put links to your documents and videos and so on on the website and in the podcast notes. But it's been great to speak to you, and thank you very much, and have a good rest of the day. Yeah, likewise, Mark.
Starting point is 00:38:57 Great podcast, and I really appreciate you having me on. Thank you.
