Orchestrate all the Things - Viable secures $5M funding to go to market with AI-powered customer feedback analytics. Featuring CEO / Co-founder Dan Erickson

Episode Date: May 12, 2022

There is an implicit assumption in most analytics solutions: the data being analyzed, and the insights derived, are almost exclusively quantitative. That is, they refer to numerical data, such as number of customers, sales, and the like. But when it comes to customer feedback, perhaps the most important data is qualitative: text contained in sources such as feedback forms and surveys, tickets, chat and email messages. The problem with that data is that, while valuable, it requires domain experts and a lot of time to read through and classify. Or at least, that was the case up until now. This is the problem Viable is looking to address. Article published on VentureBeat

Transcript
Starting point is 00:00:00 Welcome to the Orchestrate All the Things podcast. I'm George Anadiotis and we'll be connecting the dots together. There is an implicit assumption in most analytics solutions. The data being analyzed and the insights derived are almost exclusively quantitative. That is, they refer to numerical data such as numbers of customers, sales and the like. But when it comes to customer feedback, perhaps the most important data is qualitative. Text contained in sources such as feedback forms and surveys, tickets, chats and email messages. The problem with that data is that, while valuable, it requires domain experts and a
Starting point is 00:00:37 lot of time to read through and classify. Or at least, that was the case up until now. That is the problem Viable is looking to address. I hope you will enjoy the podcast. If you like my work, you can follow Linked Data Orchestration on Twitter, LinkedIn, and Facebook. In order to tell my story, I've got to tell my co-founder's story a little bit as well. So, my co-founder and I are actually identical twin brothers. We have been in the tech industry now for about 15 years. We were those crazy kids that decided to skip college altogether and just go straight into business early on. I'm an engineer by trade; my co-founder and brother is a designer, and we kind of meet in the middle at product management. We started a consulting company right out of high school that was helping
Starting point is 00:01:31 early stage startups build their very first product. So kind of a prototype-y consulting agency. Ran that for a few years, then moved down to San Francisco from Portland, Oregon, where we grew up. And I was an early member of the Node.js community; I kind of helped get that off the ground by helping organize NodeConf and NodeCamp. And that landed me an early job as an engineer at Yammer. I joined Yammer when we were probably 30 people or so. We didn't have a product team yet, so we ended up needing somebody on the product design side. I happened to know a guy, so I brought Jeff in for an interview. Jeff is my co-founder and brother, and he ended up getting the job. So he was actually our first designer at Yammer as well, and actually became head of design over time.
Starting point is 00:02:34 Fast forward a couple of years, Microsoft acquired us. I moved on to being the CTO of a company called Getable. Getable was in the construction equipment rental space. Spent four years there trying to find product market fit. Never quite found it, so we ended up having to shut things down. But then I moved over to Eaze, the cannabis delivery company. I was the VP of engineering over there; I ran the engineering team. Did that for another almost four years, and then decided I was going to go start my own thing. Before I get into that real quick, let me cover Jeff. He stuck around at Microsoft for the full two years, waiting for the golden handcuffs to come off, and ended up actually starting a small company with David Sacks
Starting point is 00:03:28 called Sacks Labs. It was kind of a precursor to Craft Ventures. They were making some internal investments as well as some external investments. One of those external investments was Zenefits. And so they ended up shutting down Sacks Labs, joining Zenefits after they did the investment there. Jeff joined as the head of product and David joined as the COO. For the next two years, Jeff grew that team from zero people on the product side to 60 people on the product side, while I was over at Eaze growing the engineering team from five to 50. But then, after he was there for a couple of years, I convinced him to come over and join
Starting point is 00:04:15 me at Eaze. And he became our head of design at Eaze as well. So after that, we knew we wanted to start something together. I knew that I was going to have to go tackle that product market fit problem again, because I failed to find it at Getable. And so we actually started off with a focus on product market fit; we called ourselves Viable Fit. We built a whole product for basically helping people run the Superhuman product-market-fit process, which is a survey. Then you do some analysis, and it helps you figure out a roadmap. Had to do a bunch of NLP work
Starting point is 00:04:56 in order to make that work at scale. And we quickly found that the NLP work that we did was actually the most valuable part, and started to see companies use this that were much, much larger than the traditional finding-product-market-fit company. And so we decided we were going to pivot up market and kind of follow the market up in that direction. So we basically stopped doing the survey. We stopped measuring product market fit and started focusing on aggregating customer feedback across a bunch of different channels, and then layering onto that a full analysis layer that will actually give you written analysis on top of the feedback. And that's where we are today.
Starting point is 00:05:46 Okay, well, interesting story. A few twists and turns there, I would say, which is kind of to be expected if you're running a startup at the stage that your company is at. So good to know, thanks for sharing the background. Now, admittedly, I had no idea that Viable even existed. However, the moment I saw it, I was like, well, okay, the value proposition here seems to make sense. It was one of those things: well, yeah, so how come nobody else has
Starting point is 00:06:19 done that before? So if I were to summarize it, and you can correct me if I got it wrong, or add to my understanding if you want to, your value proposition seems to be that, basically, you enable organizations to connect all their sources of unstructured text data, analyze them, and get answers. And then you apply that sort of recipe, let's say, to different areas such as product management, customer experience, and marketing. Okay, so good to know I got it right. Yeah, you pretty much nailed it. There are two different ways that you can interact with the analysis. It's kind of push and pull, right? So one is push: we actually send you a report on a weekly basis
Starting point is 00:07:08 that covers what happened in your customer feedback in the last week. And so we can actually show you what are your top complaints? What are your top compliments? What are the top questions people are asking? What are the top requests that your customers have? And that is completely automated on that one-week cadence there for them.
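The weekly digest described here is, at its core, a group-and-rank step over already-classified feedback. A minimal sketch in Python (the data shapes and names are hypothetical illustrations, not Viable's actual code):

```python
from collections import Counter

# Hypothetical shape: each classified feedback item is a (kind, theme) pair,
# where kind is one of "complaint", "compliment", "question", "request".
def weekly_report(items, top_n=3):
    """Group one week of classified feedback and rank themes within each kind."""
    by_kind = {}
    for kind, theme in items:
        by_kind.setdefault(kind, Counter())[theme] += 1
    # Most frequent themes first, capped at top_n per kind.
    return {kind: counts.most_common(top_n) for kind, counts in by_kind.items()}

week = [
    ("complaint", "login fails"), ("complaint", "login fails"),
    ("complaint", "slow sync"), ("request", "dark mode"),
    ("question", "pricing"), ("compliment", "great support"),
]
report = weekly_report(week)
print(report["complaint"])  # [('login fails', 2), ('slow sync', 1)]
```

The counting is the easy part; the written paragraphs of analysis that the actual product layers on top are where the generative models come in.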
Starting point is 00:07:30 And then once you get that report, you inevitably kind of read through it. Our reports range anywhere from a dozen or so paragraphs of analysis, all the way up to our latest report, which had about 330 different paragraphs of actual analysis for different themes that we identified for them. So usually, reading through those, you end up having questions. And so what we actually built was a plain-English question-and-answer system where you can type in a question
Starting point is 00:08:04 that you have about the data, and we can actually give you an answer using the data. Okay, I see. So you mentioned previously, in your introduction about Viable, that this was all kickstarted in a way by you building your own NLP analysis. And I also saw in the draft of the press release for what you're about to announce, which will come eventually, that it seems like you're using GPT-3 as well.
Starting point is 00:08:37 So actually my question is, which one is it in the end? So are you still using your proprietary NLP or are you using GPT-3 or maybe a combination of both? Yeah, so it's actually a combination of both. We are using many, many different features of the OpenAI API, including embeddings, as well as the actual GPT-3 completion engine as well.
Starting point is 00:09:07 But I would say that while GPT-3 is the foundation of what we've built, we actually have our own models that we've built out on top as well, over a dozen of them that have been fine-tuned and trained over the last two years.
Starting point is 00:09:26 Yeah. That sort of makes sense. I was imagining that may be the case, even though I have to admit that I didn't know the fact that you started out by building your own NLP. Because in a way, if you're building, if you're basing your entire value proposition on top of somebody else's API, well, you're always going to be the downstream guy. And well, that's not necessarily a good place to be.
Starting point is 00:09:52 So I thought that you may have to add some differentiation there. And it sounds like this is exactly what you have done. Before we go into the differentiation and value-add part, however, I would like to get a better understanding of the sources that your product enables people to connect to and then analyze. I think you mentioned some of those on your website, but you may as well list them here. And maybe you also have things that you don't mention there.
Starting point is 00:10:21 Sure. Yeah, so we have a few that are sort of what we call our native integrations. And that is Zendesk, Intercom, Delighted, the Apple App Store, or rather the iOS App Store, the Play Store, and Front. Those are our main native integration sources. However, we also have a Zapier integration, which allows you to pipe in data from over 3,000 additional services. And then on top of that, we also allow CSV uploads. So if you've got the data, we've got a way to get it in.
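With data arriving from native integrations, Zapier, and CSV uploads alike, each source presumably gets normalized into one common record before analysis. A hedged illustration (the record and field names are made up for this sketch, not Viable's schema):

```python
from dataclasses import dataclass, field

# Hypothetical common record every source ("zendesk", "intercom", "csv", ...)
# would map into before the analysis stage.
@dataclass
class FeedbackItem:
    source: str                                    # which integration it came from
    text: str                                      # the unstructured feedback itself
    metadata: dict = field(default_factory=dict)   # traits: plan, industry, ...

def from_csv_row(row):
    """Map one uploaded CSV row into the common record. Assumes a 'text'
    column; every other column is treated as metadata."""
    meta = {k: v for k, v in row.items() if k != "text"}
    return FeedbackItem(source="csv", text=row["text"], metadata=meta)

item = from_csv_row({"text": "Love the new editor", "plan": "enterprise"})
print(item.source, item.metadata)  # csv {'plan': 'enterprise'}
```

Each native integration would get its own small mapping function like `from_csv_row`, which is what keeps the downstream pipeline source-agnostic.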
Starting point is 00:10:58 Okay, I see. Well, that's interesting. And I guess, again, all things considered, for a startup at the stage that you're at, that's quite a good list of sources to connect to. Obviously, the niche you seem to have carved out for yourselves is analyzing text data, but I think many of your customers may also be asking for the ability to integrate numerical or structured data from sources such as spreadsheets or databases and so on. Even in surveys, you don't always have unstructured text data as answers. In many cases, you have numbers or a structured predefined list and so on. So are you able to mix and match different sources and provide analysis based on that? We are, yes. So the way
Starting point is 00:12:00 that works is this: we don't currently hook up to databases or spreadsheets yet, unless you upload a CSV. However, all of these sources also keep track of metadata around the feedback, along with any traits of the person who gave you that data, right? So, for example, in Zendesk, companies often upload things like, you know, what is this customer's plan level? Are they enterprise? Are they growth? Are they a startup? Or they might put in an industry or a job title or a location. So all of those kinds of things, our system can actually pull in as well, and actually slice and dice the analysis based on those traits. So you can go in and say, I actually just want a report for my enterprise customers, or I just want a report for my product manager enterprise customers in the Bay Area.
Starting point is 00:12:59 Right. You can kind of add all of that together and figure out exactly the segment that you're targeting, and then we can give you the report based on that. So that's how we marry that sort of quant and qual data. In surveys, actually, I'm happy you mentioned that, because if you have a choose-one or choose-multiple type question on there, we'll actually pull that in as metadata for any of the unstructured text blobs that are also in that same row or in that same response. So if you've got one question that says, I don't know, how many times a week are you using the product, and it's got a multiple choice there, we'll just record that, and it will now be metadata on the answers to the other questions.
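The slice-and-dice by traits described here amounts to filtering feedback on its attached metadata before running the analysis. A toy sketch, with hypothetical record shapes:

```python
# Illustrative only: segmenting ingested feedback by metadata traits, the way
# "enterprise product managers in the Bay Area" would be carved out.
def segment(items, **traits):
    """Keep only feedback whose metadata matches every requested trait."""
    return [it for it in items
            if all(it["metadata"].get(k) == v for k, v in traits.items())]

feedback = [
    {"text": "Checkout is confusing",
     "metadata": {"plan": "enterprise", "title": "product manager",
                  "region": "Bay Area"}},
    {"text": "Love the app",
     "metadata": {"plan": "startup", "title": "founder", "region": "EU"}},
]

subset = segment(feedback, plan="enterprise", region="Bay Area")
print([it["text"] for it in subset])  # ['Checkout is confusing']
```

The report generation would then run over `subset` instead of the full corpus, which is how the same pipeline serves many different audiences.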
Starting point is 00:13:48 Okay. So it sounds like NLP is relevant at different parts of your product, and also your data pipeline, I would say. One part I can identify from what you described so far is the querying part. So instead of writing SQL queries or any other type of queries, or using pivot tables in Excel or what have you, you enable people to formulate their questions using natural language. So this is one part. And the other part is how you process data. So I guess you're going to confirm or correct me, but I take it you must also have some sort of repository that you use to keep some sort of historical data, let's say, from all the
Starting point is 00:14:39 sources that you ingest and so on and so forth. So instead of having, I don't know, some sort of database or file system or what have you, you just ingest text, and then you use GPT-3 or perhaps your own custom technology to somehow understand, let's say, or at least extract the essence of what's in there
Starting point is 00:15:02 and serve it to your customers as summaries and the answers to their questions. So am I right? And would you like to elaborate a little bit on your pipeline, and which part your proprietary technology fits into, and where you utilize GPT-3?
Starting point is 00:15:23 Yeah, sure. So we do ingest the data. So we're not just going directly out to the source and then analyzing it there. We pull it all into one place so that we can actually marry it together in the analysis. Once we've got it all in one place, we kind of think of that as a qualitative data warehouse, almost. So here's how the pipeline works: you hook up your integration, say it's Zendesk. Whenever a new ticket comes into Zendesk, we'll get pinged. And we will actually pull in that new ticket along with its, you know,
Starting point is 00:16:03 unstructured text, and then any sort of metadata that we can pull in as well. From there, it goes into a pipeline of a bunch of different models that we've developed, along with some GPT-3 stuff, that will classify that piece of text: is it mostly a complaint? Is it a compliment? Is it a request? Is it a question? And in fact, for the different topics within that piece of text, are those complaints, compliments, requests or questions? And then on top of that, we do some sentiment analysis,
Starting point is 00:16:42 emotion analysis, and then we're actually doing things like urgency as well, and noise detection; all of those are things that we've developed in house. And then at the other end of it, once we've aggregated all this stuff together, that's where most of the GPT-3 stuff kicks in. And we use a bunch of fine-tuned GPT-3 models that we've developed over the last couple of years to generate summaries, to answer questions, to translate that question into a query that can pull back relevant results, and a few more things. Okay.
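As a rough sketch of the classification stage of such a pipeline: the toy heuristics below merely stand in for Viable's proprietary models, which also score sentiment, emotion, urgency, and noise, and which hand their aggregated output to fine-tuned GPT-3 models for the written analysis.

```python
# Toy stand-in for a learned classifier: route each inbound piece of text
# into one of the four kinds discussed in the interview.
def classify_kind(text):
    lowered = text.lower()
    if text.rstrip().endswith("?"):
        return "question"
    if "please" in lowered or "could you" in lowered:
        return "request"
    if any(w in lowered for w in ("broken", "crash", "slow", "hate")):
        return "complaint"
    return "compliment"

def run_pipeline(texts):
    """Enrich each inbound piece of text as it arrives from an integration."""
    return [{"text": t, "kind": classify_kind(t)} for t in texts]

results = run_pipeline(["Sync is broken again", "Please add SSO",
                        "How do I export data?"])
print([r["kind"] for r in results])  # ['complaint', 'request', 'question']
```

In the real system, each stage would be a trained model rather than keyword rules, but the shape of the flow, classify per item on ingestion, then aggregate, is the same.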
Starting point is 00:17:24 Yeah, thanks for sharing more details. And actually, I just wrote a few notes while you were describing your pipeline, because I think there are a couple of points that are worth diving a bit deeper into. So the first one is the classification, and you said this is part of what you do with your proprietary
Starting point is 00:17:45 technology. I guess that in order to be able to do that, well, you must have some notion of, well, obviously what you need to classify into in terms of like general categories, let's say, such as the ones that you mentioned. Well, is it a complaint? Is it a compliment? And so on. But perhaps you also need to have some domain-specific concepts as well. So if you're analyzing, I don't know, hospitality, let's say, the hospitality domain, then you need to have a sense of, well, what's a hotel, what's a restaurant, and I don't know, what's on the menu and that kind of thing. So we actually don't. We have developed a completely unsupervised system for thematic analysis. We're actually, I think, pretty much at the state of the art on this side of the platform here. But basically the way this works is we actually use some very complex embeddings for the text.
Starting point is 00:18:42 It's actually OpenAI's embeddings under the hood, the same ones that GPT-3 uses. But then we have our own proprietary thematic analysis engine on top of that, that helps us cluster those things and find the actual themes that are in there. So we actually do a full-on cluster analysis on the text itself. We don't actually provide it with any sort of context as to what kind of things it's looking for, other than requests, questions, compliments and complaints. Okay, that's interesting. And I guess that helps you scale to new domains without the hassle of having to add custom development. And the other thing I wanted to ask about is, well, you said that you developed some
Starting point is 00:19:32 fine-tuned models around GPT-3. And that was also something I was wondering about, because despite being one of the most impressive feats in technology today, GPT-3 does have its weaknesses as well. Some of the best-known ones are toxic language that comes out in certain cases, and hallucination, which is a term people use to refer to the fact that GPT-3 sometimes produces answers that seem very authoritative but are actually not based on facts. Both of those would be problematic in your context, so I was wondering how you are able to circumvent them. Yeah, definitely. And fine-tuning is key there. So what we've done is actually, we've built out thousands and thousands
Starting point is 00:20:26 of training examples for things like, what does it mean to summarize a theme? What does it mean to name a theme? How does, you know, how does that all work? And we've then basically built out a fully fine-tuned version of GPT-3 that keeps it on the rails. So it's, you know, it's got sort of a more limited language set that it's using. So it's not going to do any of those curse words or, you know, anything like that. And then on the hallucination side, we have done a meticulous job of building out that training data set to make sure that every single example that we pipe in is only directly using facts from the feedback that is piped into it. And that way, it basically tells GPT-3, hey, I don't want you to be creative here. I want you to just report the facts.
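A hedged sketch of what such "facts only" training data might look like, using the prompt/completion JSONL format that OpenAI's fine-tuning accepted at the time. The example pair itself is invented; the point is that the completion restates only facts present in the prompt.

```python
import json

# Hypothetical "facts only" fine-tuning example: the completion is written
# to use nothing beyond the feedback quoted in the prompt, which is how
# training data can teach the model to report rather than invent.
examples = [
    {
        "prompt": ("Summarize this theme using only the feedback below.\n"
                   "- Login times out on mobile\n"
                   "- Can't log in from my phone\n\nSummary:"),
        "completion": (" Users report that login times out or fails "
                       "on mobile devices."),
    },
]

# One JSON object per line, the JSONL layout the fine-tuning jobs consumed.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

with open("train.jsonl") as f:
    restored = [json.loads(line) for line in f]
print(len(restored))  # 1
```

Thousands of such pairs, all grounded in their own prompts, are what keeps the tuned model "on the rails" in the sense Dan describes.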
Starting point is 00:21:20 And that's exactly how it works. Okay. Interesting. And well, that makes sense. To be honest with you, I hadn't delved into the specifics of the GPT-3 API, and I wasn't sure whether what you just described was even possible. Well, apparently it is. And it makes a lot of sense because, well,
Starting point is 00:21:40 otherwise it would be hard to use the API in the commercial setting the way that you are using it. Yeah, exactly. We were actually one of the first companies to use fine-tuning out of the gate with them. Okay, great. So since we're on the topic of GPT-3, I think that's a good segue to shift gears and go a bit more on the commercial and business side of things. Because, well, since you're heavily relying on GPT-3, I was wondering how much does its API pricing factor into your own price points and your subscription levels? And, well, obviously, that's a good opportunity for you
Starting point is 00:22:21 to mention what those subscription levels are. And from that, we'll get to the commercial side of things. Yeah, let's talk a little bit about the pricing model, and then I can kind of relate that out to the costs on the processing side. So we have a usage-based pricing model, where we charge based on the volume of data that you're sending us. We've got three different tiers for that. So we've got our startup tier, that is up to 1,000 pieces of feedback per month that we pull in, at $250 per month. Then we've got the growth tier. The growth tier is up to 5,000 data points or pieces of feedback per month.
Starting point is 00:23:09 And that is $1,000 a month. And then lastly, we've got our enterprise plan, which has no limit at the upper end, and it's sort of contact us, we'll figure out pricing based on your volume. And the way that we've priced this out is based both on where the market is pricing this and the value of the insights that we're providing, as well as making sure that we're baking in a healthy margin on top of the processing costs, both our own processing costs of running our own models, as well as the costs that are incurred by using GPT-3.
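The tiering just described maps cleanly onto a small function; a sketch of the logic, not Viable's actual billing code:

```python
# The three tiers as stated in the interview: startup up to 1,000 pieces of
# feedback per month at $250, growth up to 5,000 at $1,000, enterprise above
# that with custom, volume-based pricing.
def monthly_price(pieces_of_feedback):
    if pieces_of_feedback <= 1_000:
        return ("startup", 250)
    if pieces_of_feedback <= 5_000:
        return ("growth", 1_000)
    return ("enterprise", None)  # contact sales; priced on volume

print(monthly_price(800))     # ('startup', 250)
print(monthly_price(4_500))   # ('growth', 1000)
print(monthly_price(50_000))  # ('enterprise', None)
```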
Starting point is 00:23:51 Great. Excellent. I just have one clarification to ask for here. When you say, when you refer to pieces of feedback, what might that be? Like a single answer, for example, a single field in a survey or the entirety of the survey? It would be, yeah. So for a survey, it would be any unstructured text field. So if you've got only one question that has an unstructured text answer,
Starting point is 00:24:18 then that will be one data point per response. If you've got two, it'll be two. For support tickets, it's any inbound email that you get. So if you're resolving most of your support tickets in two, then you're going to get two per ticket, basically. With chat and transcripts, it depends on the length of the transcript. So this works for both chat and for voice transcripts that we pull in. And yeah. Oh, actually, I forgot.
Starting point is 00:24:54 One more native integration: Gong. Just released that one last week. That's why it wasn't at the top of the list there. But yeah, so we do pull those in. Those can be anywhere from, we've seen them go down to just one data point because they're short,
Starting point is 00:25:09 all the way up to dozens of data points because we've done like a two-hour conversation on the phone before and pulled that in and analyzed it. Okay. So that leads to the obvious question and well, the connection with the trigger, let's say, for having this conversation, which is the fact that you're about to announce that you're raising some money.
Starting point is 00:25:36 And so it's, again, the perfect opportunity to ask you, so how well is that value proposition working out? So what's your market traction? And also, if you'd like to share in the same context, let's say a little bit of the fundamentals around the company. So, you know, like how many employees you have and when you were founded and, you know, the basics. Yeah, sure. Let me get those basics in and then we can talk kind of where we're at in the market right now. So we started the company back in January of 2020. We raised a small pre-seed from, well, really it's more like
Starting point is 00:26:12 a friends and family round, but it ended up being from Craft Ventures. And so they put the first check in, they led that. We also got some funding from some angels during that time. In June of 2020, that's when we made the change from product market fit to moving up market. By September of 2020, we actually raised an additional $2 million. So it was $1 million back in January, and an additional $2 million from Javelin Venture Partners in September of 2020. And then in September of 2021, we actually had a bunch of our target market decide they wanted to invest. So we raised an additional million on a SAFE there, from people that are in our target market. So a couple of ex-heads of product from Uber, for example, and a couple more outside of that.
Starting point is 00:27:14 Then just earlier this year, actually just a couple of weeks ago, really, we closed our series seed round. And that was $5 million that we raised from Streamlined Ventures. They led the round, with participation again from Craft and Javelin, and a handful of angels as well that came into this one, and Maris actually. So that's kind of the founding side of things. We're up to about nine employees right now, mostly on the product and engineering side. We've got three people on the business side right now, mostly in sales. But we are definitely going to be using this money to ramp up on the
Starting point is 00:28:06 hiring side, on both the engineering side, the product side, and the sales and marketing side. Okay. Yeah. It sounds like you are on a healthy trajectory in terms of product development, if nothing else. I wonder if there are any stories or metrics or whatever that you're able to share at this point in terms of adoption. I understand, you know, you're an early stage startup, but I don't know if you have any client names or domains that you're working with that would be good. Yeah, definitely. So, you know, we've got almost a dozen paying customers right now, so we're kind of fairly early on on the go-to-market side. However, the companies that we are working
Starting point is 00:28:51 with are having a great time with it. The one that I'd like to mention is Nylas. Nylas is an API company for helping people hook up CRMs and email and calendar and contacts into their own products. They get a lot of feedback across some support channels, as well as sales calls in Gong. And they didn't have a great way of sorting through all of that to understand what their customers needed from them. So we piped all of that through Viable, came up with a report for them, and we actually identified some really high-leverage stuff for them. In fact, here's a quote from David, the SVP of engineering over there. He said, Viable's AI is able to extract trends in our feedback that we then validated by comparing to our own manually generated reports. Viable was spot on and identified
Starting point is 00:29:57 a growing issue with authentication that we immediately got to work fixing. Complaints about authentication are now down 65%. So basically we're helping them actually do that tracking. We've shown them a graph of what that authentication complaint cluster looks like, what the members of that cluster look like over time, is it getting better, is it getting worse? And they use that to sort of judge their progress towards fixing
Starting point is 00:30:25 these problems. So they're like a classic use case of us. We've also had another company, can't mention the name on this one, but they've, for example, used us to actually pipe in their employee engagement survey. So it's not even about customer feedback at that point; it's actually about analyzing the employee side. So we actually work for sort of any kind of experience, whether it's employee experience, partner experience, customer experience. It's really all about helping people analyze the qualitative nature of those experiences. Great.
Starting point is 00:31:00 So I know we're almost out of time, or even a bit over it. So let's wrap up with one last, but I think interesting, question. And I'm sure you've had to answer it for your investors as well, so hopefully it's not going to be that hard. My question is: okay, fine, that sounds like a good idea, but what happens when eventually the SurveyMonkeys and Zendesks of the world wake up to the fact that this is a good idea
Starting point is 00:31:25 and start offering it as their own integrated capability. If I were you, the way I would answer that question would be that, well, by that point, first, you're going to be a bit ahead of the curve. And then the advantage that a service like yours seems to offer is that, well, you can integrate, you can mix and match from different services. Yeah, you hit the nail on the head there. So it's both of those things. On the first point there, we've been working with this kind of data for this purpose for two years
Starting point is 00:32:00 straight now, very heads down, building out all of these custom models for it. I can attest it is quite a big undertaking to build out something like this. And so if any of them were going to get into it, we would definitely have a head start on them. But second, you're right. We do allow for multiple sources, and that allows us to do a more well-rounded exploration of your themes than you can get from any one source.
Starting point is 00:32:39 So for example, customer support usually ends up with mostly questions and complaints. Those are the two things that come in through, say, a Zendesk integration. Compliments and requests often will come in from things like App Store reviews. And so you actually want to mix and match different sources to get the most actionable feedback that you can. So there needs to be some third-party external system to store all of this in. I hope you enjoyed the podcast. If you like my work, you can follow Linked Data Orchestration on Twitter, LinkedIn, and Facebook.
