Orchestrate all the Things - Edge Impulse wants to bring machine learning at the edge to everyone, announces $34M Series B funding. Featuring co-founder / CTO Jan Jongboom

Episode Date: December 9, 2021

55,000 projects, 30,000 developers, $54M funding, and customers including the likes of NASA, in a bit over 2 years. Edge Impulse is riding the wave of machine learning at the edge. Article published on ZDNet.

Transcript
Starting point is 00:00:00 Welcome to the Orchestrate All the Things podcast. I'm George Anadiotis and we'll be connecting the dots together. 55,000 projects, 30,000 developers, $54 million funding and customers including the likes of NASA in a bit over two years. Edge Impulse is riding the wave of machine learning at the edge. I hope you will enjoy the podcast. If you like my work, you can follow Linked Data Orchestration on Twitter, LinkedIn, and Facebook. So let's start from the beginning.
Starting point is 00:00:32 Could you give a brief introduction about yourself and your background and Edge Impulse and, you know, founder story, why you did it, what it is you do, and all of that in under five minutes. You know how it goes. Absolutely. Hi, so I'm Jan Jongboom, the co-founder and chief technical officer at Edge Impulse. Yeah, so I'm an embedded engineer by trade. And kind of my dream, this goes back 10 years now, is that I want my devices really to understand what is happening
Starting point is 00:01:05 around them and iot kind of promised us to do that right um the downside that is that it's really really hard to process the all the kind of information both visually and audible and and any other signal we can kind of imagine on devices in real time. And that's what we have is we have lots of IoT devices, lots of connected devices, but they are relatively dumb. They know our temperature. That's great. Maybe whether a car is parked somewhere, which is really easy because you have a magnetic field.
Starting point is 00:01:37 It's either on or off. But really understanding the real world. Do we hear an elephant outside? Do we see a poacher in an area where that shouldn't be? How well am I sleeping? All of that is really hard to answer. So about three years ago, I was principal engineer at Arm. Me and my co-founder, Zach, who's also at Arm, started talking about, okay, what could we do to make these devices really, really understand the real world? And machine learning is, we figured machine learning is that way because rather than try to make sense
Starting point is 00:02:08 of all the signals that we have and write software to detect, oh, now I hear an elephant somewhere, which is really hard, right? There's an infinite amount of sounds all around us. We give lots of information. We say, this is an elephant, this is an elephant, this is an elephant, this is an elephant.
Starting point is 00:02:27 And the machine learning model will figure that out. And that is what a Jimples does. We're the largest community of embedded machine learning projects. We let embedded engineers use machine learning to make sense of the real world. And yeah, that's basically what we do we i think the cool part of what we do is that our community is now over 55 000 projects that have been created on the gym poles literally ranging from anything from from you know monitoring rockets that go to space and seeing if there's anomalies to underwater microphones to uh to covering human wildlife conflict in Africa, and literally anything in between. So it's super exciting to see all of that happen in front of our own eyes. Okay, great. Thanks. So yeah, I was kind of imagining that you probably started out in a
Starting point is 00:03:20 similar way to what you just described. So you were basically practitioners that wanted to follow their dreams, let's say, in a nutshell. Absolutely. And so I also saw that you have a very descriptive, let's say, graphic image on your website that kind of condenses the life cycle of pretty much all machine learning projects, actually. But I wanted to use that as a starting point to try and figure out how do you cover that specifically for what you do. So the lifecycle is collect data, then train a model, then deploy that model and validate and well, repeat. Usually that's how it goes. But let's take it from the start and see what kind of components you have in your solution
Starting point is 00:04:13 and how do you cater to those steps in the end, the lifecycle. So I was wondering if we start from the collecting data side of things, then what kind of devices or sensors do you support? Yeah, that's a good question. Basically, anything under the sun. So we have a super, super wild ecosystem of partners, ranging from microcontroller developers like Silicon Labs, Nordic Semi,
Starting point is 00:04:43 to super specialized silicon for machine learning, like Synthiant, which does like Alexa type chips, but that literally do just that, but really low power, or Synaptics, up to larger devices. So Renesas, MPUs, AdvanTech, large industrial machines. So for us, the device shouldn't matter we try to make everything run on on the smallest form factor all the way to big machines um as long as it's sensor
Starting point is 00:05:12 data so as long as there's a time series component in there um so it could be accelerometer data it could be even radar um audio naturally computer vision, but typically very specialized, right? So not these huge models that can detect a thousand different objects, but rather very specialized. Is a screw screwed in? Yes or no. Or even non-RGB cameras, like we're working with a customer now on these prophecy cams, and that's an event stream that comes from the camera rather than a shutter that actually flashes so yeah any type of sensor data um on preferably any type of any type of device and kind of our device ecosystem and our silicon partner ecosystem like helps tremendously i think that right now there's 20 different dev boards from uh from a wide variety of vendors that you can just pick up um and then naturally our customers run on a wide variety of other,
Starting point is 00:06:06 other targets and we'll help them make it, make our software run there. Okay. The ecosystem aspect is interesting. I would say in its own right. So I'm kind of, okay, so let's, let's pause a little bit and talk about this ecosystem that you've built. So it does seem quite extended indeed, and especially considering the lifespan of the company that you've built. I think it's only a couple of years. So how did you manage to build it that fast, basically? Yeah, we had a little bit of a head start, right so i i spent three and a half years at arm
Starting point is 00:06:46 uh my co-founder zach uh his previous he started one of the first iot companies which was acquired by arm in in 2013 um so we know the silicon ecosystem very very well and we know what embedded developers uh like um so that was kind of the story that we went out. Let's go build this thing together. Because, you know, silicon vendors want to sell chips. And to sell more chips, you want to have more capabilities. And I think that has been a really good kind of story where we can help silicon vendors go to market. And after we have silicon to run our thing on, and that means that we can help the silicon vendors go to market. And after that, we have silicon to run our thing on. And that means that we can jointly go to customers and have a much better story
Starting point is 00:07:29 because it's not, oh, we have this amazing ML platform. It's we have this amazing ML platform and we actually have support from the silicon vendors that will run on your brownfield devices already. And that's truly amazing. And that has come together much quicker than we anticipated as well. But it's
Starting point is 00:07:45 it's super awesome to see and for me to as an engineer right as an engineer it's really cool that now we're being pulled into you know discussions about okay what should new silicon look like from the silicon vendor like what what type of capabilities do we actually need what do we see in the market and what could we do if could we do if we put on a neural network accelerator, for example? That is just, it's wild. It's really wild. For people that are not embedded engineers,
Starting point is 00:08:11 this probably doesn't resonate. Well, I'm not, but I think by now I've gotten enough exposure, let's say, to that to understand why you think that this is wild. And it even impressed me that you managed to build it that fast at least.
Starting point is 00:08:28 So yeah. The other thing I wanted to ask about the devices and sensor side of things was I was wondering, I think I saw one of the examples that you cite on your website, something like use your mobile accelerometer and camera and things like that. So do you actually support that as well? Yeah, so we don't. Yeah, phones is because, well, everyone has super capable sensory device with amazing, you know, gyroscope, accelerometer, microphones, camera on it. Um, so why not use that as a center? So if people want, they can, yeah, you can literally build, build a model in five minutes
Starting point is 00:09:20 that distinguishes between, am I waving my phone or moving it up and down or you know drinking a beer or you know anything you can do an accelerometer or audible or or with computer vision so we don't see customers of us deploying on mobile phones that's not kind of space that we're in but it's really amazing um to do in workshops or for people to get kind of this amazing like getting started with that and yeah once you have for the first time, you say, you know, hey, ZDNet or something to your computer and it actually lights up and it recognizes that, that is so magical.
Starting point is 00:09:54 So then you get this bug and people will start thinking about what else can I do with this tech. Okay, interesting. Sure, I mean, I can understand that this works well as a sort of onboarding let's say uh method for you but i was also wondering whether you see applications being built on that but i guess you answered that well at the moment at least not so much it's really hard to do so the yeah we do see it like being deployed on like on a gateway for example but like on a mobile phone
Starting point is 00:10:23 it's really hard it's's a completely different demographic. So the choice that we made is let's not go very wide, but let's go really, really deep in this very specific problem of I have sensor data on an industrial machine or a consumer health wearable, and
Starting point is 00:10:39 let's go solve it for that problem. And there's a wide variety of other startups that try to do machine learning on apps and integrate it that way. Okay so then let's move on to the next part of this machine learning life cycle which is train your model. So what kind of frameworks do you support there? Yeah so we try to offer a fully integrated experience. So if, and that depends a little bit on kind of the background of the person. The biggest thing I think we try to do is to put machine learning capabilities into the hands of the person that really understands the sensor signal, right? You're working in an industrial customer and you are dealing with large industrial steamrollers or large industrial machines on a factory floor on a daily basis. We want to give you the tools to actually collect the data, analyze that for anomalies or to classify what's happening or predict the future.
Starting point is 00:11:39 So we have this integrated pipeline of not just actually machine learning things, but I think we have a pretty heavy emphasis on signal processing because signal processing is really nice and explainable. I look at a signal and if I put a denoise or a bandpass filter on it, I can see which frequencies are filtered out. It's really nice and explainable. And then we have, on top of that, we have stuff that you can learn. So we use neural networks to do classification. So what's happening? We use kind of classical ML
Starting point is 00:12:13 algorithms to do anomaly detection. Like is what is happening? Is it out of the ordinary? Yes or no. And then we do stuff like regression on top of that to predict scalar values. And yeah, we wrap around, you know, the ecosystem that is there. So a lot of our neural networks are built on TensorFlow. But if you don't want to, you're not exposed to that. But if you have a data scientist in there, just like one button, you get the raw Keras code. You can just start editing that and trading the model up.
Starting point is 00:12:39 So it can be anything from low code to full freedom. Okay. Yeah, I was kind of assuming that you probably use wrappers around things like TensorFlow and PyTorch and so on and so forth. Correct. Yeah, so underneath all of our neural networks, at least, are KRLs on top of TensorFlow. If anyone is in a gym,
Starting point is 00:13:00 it's literally just to show expert mode, and you can see the exact Keras models that we have. And then we do a wide variety of transfer learning models to load in. And then, yeah, some classical ML just based on the same Python library that everyone does. Okay. So you talked about how you want to empower
Starting point is 00:13:19 the domain experts, I guess. So not someone who's not necessarily a machine learning expert. And I guess this kind of brings us to the next step in the process and also the next component in your solution. So validating. Well, actually, the component probably plays a lot into building the model as well. So you have an integrated development environment, as far as I could tell.
Starting point is 00:13:46 And can you say a few words about it? And I think it seems like it's cloud-based. Yeah, correct. Yeah. So our complete tooling is available in the browser. Just go to agimpulse.com, you click the button, and then you're in the studio, we call it. And yeah, I think the validation part of this is really important because yeah, if you deploy something to a device or a machine or something, it's really hard to correct
Starting point is 00:14:16 mistakes, right? If you have a computer vision pipeline and at some point, like, oh, wait, it actually didn't recognize that it was a person waiting for in line somewhere. Okay, cool. Someone can look at it, annot, wait, it actually didn't recognize that it was a person waiting in line somewhere. Okay, cool. Someone can look at it, annotate it, and then retrain and done. On a machine, it's really hard, right? The machine didn't see a fill state. Why is that? I can't really go dive into that and understand it as a human. Plus, the feedback loop is really hard, right?
Starting point is 00:14:42 If I deploy this on a factory floor in a very harsh RF environment, how am I going to update this model quickly? So for us, the validation part is really important because that gives confidence in the model, right? If I have a model that can predict failure rates and it's 98% accurate, like what does that mean? That doesn't give me any confidence that when I deploy this on that machine
Starting point is 00:15:07 that it will actually detect the stuff that I care about. Some of the things we do is that we allow customers to either upload relevant data, let's say a full day of sensor data from that specific machine, or we can synthetically create that based on based on examples you give us and then we run over that full pipe full day and then we actually say okay this is the moment that we would activate this moment the model activates this moment that something happens um and will help you do the annotation and then we can tell you exactly okay
Starting point is 00:15:37 the model is sometimes the model misses events all of these events are of this in this class um and sometimes it spuriously activates when it shouldn't and that happens in these in these Sometimes the model misses events. All of these events are of this and this class. And sometimes it spuriously activates when it shouldn't. And that happens in these cases. And that allows you to tune that through and say, okay, well, actually for this device, let's say I have a fall detection for the elderly, it's really crucial that I never miss an event.
Starting point is 00:16:00 Because if someone falls and I don't detect it, then that is horrible. But it's fine if it activates sometimes. Like if someone really fast runs down the stairs and it activates, someone can just press the button and correct it. Or the other way around, if I have an, you know, hey Alexa type system, it's fine if it sometimes doesn't activate, right? Because I can just say, hey Alexa, another time. But it's really annoying if it activates all the time when I don't say it. And finding a balance and making it insightful when that happens, that is
Starting point is 00:16:30 really key. So that's kind of the modeling that we do. And then we let you deploy to the device very quickly and test it out on the real hardware and feed samples back the moment you say, hey, oh, this is actually a place where we don't, where we miss something. One button, collect more data, and we feed it back into the loop okay so it sounds like part of what you're trying to
Starting point is 00:16:52 achieve is well kind of fit the model to um to tend towards false positives or false negatives depending on the use case correct yeah and Yeah, and making that insightful, mostly. And even stuff like, what you see in a model is that we clump together things easily, kind of as humans, right? We have, let's say, a field state on a machine. Like, okay, all of these fields are together, and that's our field class. But it's really interesting to see if there's kind of a,
Starting point is 00:17:26 if you look at the data and it's something that's really hard to do automated, but it's relatively easy to do if you're the expert that actually knows everything about those machines. It's okay. So why is the reason that this model fails to detect these fields?
Starting point is 00:17:39 Is there actually something common between them and making that insightful and having someone just go over the data assisted by, by the machine learning model makes it, makes it much easier to kind of see the weak spots of, of where you are. And that, that gives confidence again.
Starting point is 00:17:56 And what about the final step in the process? So deployment, I guess that also happens through the integrated environment that you have, and you probably also leverage the partnerships in terms of the actual deployment on the board, let's say. Yeah, so what we output, for us candidates, the moment that it ends. So what we output is source code. So we give you the mathematical model, all the normalization code beforehand, all the signal processing code, all the machine learning code,
Starting point is 00:18:27 and then all the post-processing tuning of the model. And that's source code. And we just give that to the user as source code, no compiled binaries, no royalties on that either. And how the customer integrates that or how the person in the community integrates that into their device firmware, that's up to to them um however we do leverage our partnership to make sure that hey if you buy a development board or a product from one of our partners we have this all integrated
Starting point is 00:18:56 so you can get it you say okay i have a this dashboard from silicon labs one button and we'll build you a binary that has the model and everything and all the sensor bindings in it. And you can test it out really quickly on device. So two ways. But in all fairness, the people that we're catering for are people that understand how to deploy devices. They're embedded engineers. They've been doing this forever. So we try to make it as easy as possible to just integrate it into the firmware projects as it is.
Starting point is 00:19:23 So just drag and drop two lines of code to integrate with your code base and now your device can properly understand the world. Okay, well there's something very central actually in this whole discussion that we haven't addressed at all yet and I think now it's the. And that is, well, besides the mechanics of how you deploy to whatever device it is that you want to deploy or that you just described, there's also a very important parameter, which is what kind of models do you generate
Starting point is 00:19:58 in terms of size, basically, and requirements in power and compute and all of those things. So, so far, we have entirely ignored that and just, you know, assumed that whatever comes out of your, you know, TensorFlow or whatever it is that you're using under the hood will just work. And that assumption is actually not true in many, many circumstances. And I guess this is where the secret sauce, let's say, comes in, in a way. You have something called edge-optimized neural, and I'm assuming that this is what you use to
Starting point is 00:20:32 shrink models, basically. So could you tell a few more words about that? Yeah, sure. Yeah. And that is just like spot on what you say. Like it's easy for us to brush over sometimes, but that's 100% correct. You need to be, especially if you want to for us to brush over sometimes, but that's 100% correct. You need to be, especially if you want to deploy something to a constrained device, you need to be super aware of what this device can and can't do and what the budgets are that you have. So let's say you have something that needs to respond to your voice. Well, it can't use a lot of power because maybe it's a widgeted thing or something that runs off a battery. Preferably, it needs to be really, really small because it needs to fit in the form factor and kind of the silicon footprint that you have. And there's probably
Starting point is 00:21:14 a lot of other stuff running here. So it needs to be as small as possible. So for us, the very first step is determining what does this model need to run at? And we started all the way at the beginning of the process already. So, okay, this needs to run on a Cortex-M4 microcontroller running at 80 megahertz. And it needs to run within 60 kilobytes of RAM because that's all I have.
Starting point is 00:21:37 And it needs to have the latency to the maximum time that it takes for an inference needs to be 200 milliseconds because that's my power budget. So at every step of the way, like if you're doing your signal processing pipeline or your machine learning model will tell you exactly how much of that budget are you using with this so oh you can actually use a more complex filter but then actually you don't hit your latency constraints and that is a consideration that the developer needs to make. So what we do with Eon, so we have two parts in that.
Starting point is 00:22:07 So the EON Tuner is a model search that searches over all available signal processing input parameters, so can I downsample my data, yes or no, and lose precision, over the signal processing parameters, and then over the machine learning models, but always within the constraints of the device.
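As a rough illustration of that budget-first search, which Jan continues to describe below, here is a hypothetical sketch of what a tuner-style sweep over DSP and model options within device constraints could look like. The RAM, latency, and accuracy numbers are made-up stand-ins for what a real profiler would report; nothing here is actual EON Tuner output.

```python
# Hypothetical sketch of a constraint-aware model search: enumerate
# DSP / model options, keep only candidates that fit the device budget,
# then rank by (assumed) validation accuracy. All numbers are
# illustrative stand-ins, not real profiler output.
from itertools import product

TARGET = {"ram_kb": 60, "latency_ms": 200}   # e.g. Cortex-M4 @ 80 MHz

candidates = []
for downsample, filt, hidden in product([1, 2, 4], ["none", "bandpass"], [10, 20, 40]):
    est = {
        "downsample": downsample,
        "filter": filt,
        "hidden_units": hidden,
        # Made-up cost model: bigger windows and layers cost more.
        "ram_kb": 20 + (40 / downsample) + hidden * 0.5,
        "latency_ms": 50 + (120 / downsample) + hidden * 1.5,
        "accuracy": 0.80 + 0.02 * (hidden // 10) - 0.03 * (downsample - 1)
                    + (0.04 if filt == "bandpass" else 0.0),
    }
    if est["ram_kb"] <= TARGET["ram_kb"] and est["latency_ms"] <= TARGET["latency_ms"]:
        candidates.append(est)

best = max(candidates, key=lambda c: c["accuracy"])
print(f"{len(candidates)} candidates fit the budget; best: {best}")
```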
Starting point is 00:22:24 So we just have this super wide search phase of a thousand different models that we could try. We look at, okay, which ones will actually fit on your device and then which one will actually yield the highest accuracy in there. The second is EM compiler, and that's a tool that allows us to compile neural networks much more efficiently to brownfield devices so the way that it works it's it's not magic we don't charge for the even compiler every everyone signs up can just use it is the way that people typically deploy machine learning models on mobile but also on embedded systems is that you have particularly build stuff on top of tens for light or tens for Lite for microcontrollers. Really capable, really fast,
Starting point is 00:23:07 really good pool from the ecosystem. But the way that it works is that you have an interpreter and then you have your ML model. And the ML model feeds into the interpreter. It builds the graph and then runs the inference. But it's really wasteful embedded system because you need to have the interpreter plus all the potential states that the interpreter can kind of construct.
Starting point is 00:23:24 So what the EMPiler does is just compiles down that graph to source code and then compiles it in. And that saves between 30% and 50% of RAM, which is a lot, especially in brownfield devices. Because it allows you to target the same device that you have, but now with much more complex models, which is really cool. Okay. So I guess besides the compiler that you said
Starting point is 00:23:48 that is kind of available, I guess, the other part of what you described is, I guess, your IP. It's our IP, but yeah, people can use it for free. So almost everything that we have is available for people in the dev community. There's some stuff like collaboration with multiple people, single sign-on and SAML integration and some of our large-scale data pipeline work that we keep for enterprise customers, but everyone else can just go to the website, sign up and start building this, including all of these
Starting point is 00:24:25 tools to do parameter search, completion, et cetera. Okay. Which is cool. That's empowering. This is exactly what I was going to ask you next. So your business model, because you started by referring to a community and that sort of implies at least some open source components, I'd say some open source pieces in the puzzle, but I wasn't sure which parts those are
Starting point is 00:24:49 and what the business model actually is. But if you can just elaborate a bit. Yeah, so our business model, we're a software as a service platform. Our customers pay per project per month or per data pipeline per month. So if you need access to more compute, more collaboration features,
Starting point is 00:25:08 large-scale data transformations and integration with your cloud to pull the data automatically in and ingest that data, you'll pay per project per month. And that's it. We don't charge royalties on the final models that come out. Yeah, that's
Starting point is 00:25:24 basically it. Okay. And what about the, is there actually any open source component in all of this? Yeah. So all of the ingestion pipelines, like how do we actually get data from device,
Starting point is 00:25:41 all the firmware that we built for these devices together with the silicon vendors, all of that is open, very permissively licensed as well. You just go and take that. All the code that comes out of the platform is also open source. So our device SDK is open source.
Starting point is 00:25:57 The models that we output, we output as source codes, again, licensed in the Apache 2 license. So you can just integrate that into any device and not you know pay us royalties ever. Jan Bogaertsen- Which is really good because then you're not a line item on the bomb and you don't get optimized away so that has been a for us a really good decision. Jan Bogaertsen- So for us kind of the all those points are are there and that's where we see this large community like build upon And then in the platform, we do some other cool stuff. So we can, if you have a project on a Jimples, like one of these, I looked at it actually.
Starting point is 00:26:32 So we have over a thousand of these right now that are open to the public. So you can say, hey, I built something really cool with machine learning, cool, publish. And now everyone can just look through your project, look at all the data, look at what kind of Sigma post sync pipeline pipeline you've selected what neural network you've done and have this reproducibility of that so that's kind of the three trigger points where we build this community
Starting point is 00:26:55 okay yeah and yeah i would say it makes sense especially for the um uh for the ingestion let's say part so anyone who has like another type of device, sensor, whatever, can use your APIs and just build their own ingesting pipelines, I guess. Correct, correct. Yeah, that's really easy. Like if you have any embedded device with a UART, like for all the embedded developers watching this,
Starting point is 00:27:23 yeah, you can ingest basically 10 lines of code from any sensor that you already have. Okay. And then let's wrap up with what's actually the trigger, let's say, for having this conversation, which we didn't mention at all, but it's never too late. You're actually having a Series B funding round announced tomorrow.
Starting point is 00:27:44 So yeah, which I guess, you know, usually funding rounds, well, you get some money, which is always nice, but perhaps even nicer is the fact that you get validation. Like, okay, so I showed, you know, the business plan and the, you know, the trajectory and all of that to some people and they said, well, okay, so this looks good. Let's fund it. So who are the people who said, the trajectory and all of that to some people. And they said, well, okay, so this looks good. Let's fund it. So who are the people who said, this looks good, let's fund it.
Starting point is 00:28:09 And how much did they give you and all of that? Yeah, so we just announced or we're announcing tomorrow, yeah, our Series B. So that's 34 million led by Code2. And yeah, the validation part is, I think, really cool. Like, yeah, it's a bit of a story that i think more startups say but like we were in fundraising we just raised our series a uh back in i think we announced in may um from from knm partners and that was already awesome like
Starting point is 00:28:37 it's really amazing and that that set us up for this trajectory we've grown from 10 to 40 people over the last year already um tripled revenue and and yeah during our own our own uh event imagine um was the first kind of the first in-person event that we could actually held in the bay area we had all our customers all of our partners um our you know super wide ecosystem all come together um and yeah a couple of people including david from code 2 were also there and and when they saw that energy they're like okay this is cool we can probably go even faster so uh yeah that's the conversation that we had with code 2 and we figured okay well this is actually the right moment to accelerate even further um and and series b is is gonna let us do that and And I think that's
Starting point is 00:29:25 super exciting and a little bit scary. Right. So you mentioned, I think you have like 40 people in the team at the moment. Yeah, correct. So I guess one of the things that you're going to be doing is growing
Starting point is 00:29:41 the team. And I wonder which direction. Are you going to hire more engineers growing that uh the team and i wonder which which directions are you going to hire more engineers more i don't know sales people or more marketing people or um yeah so we're planning to grow to about yeah between 70 and 80 um a year from now uh so that's that's a lot um yeah i think all over the place so one of the things that has worked out really really well for us um is our solutions engineering team so these are people that work internally with us we can work with solution sprints with the customer to say okay you have a factory you know that there's probably insight insightful information there okay let's cool let's go sit
Starting point is 00:30:23 with your embedded people and let's work towards getting the data ingested and then getting this model and getting the data pipeline flowing. I think we're going to do a lot more of that. That has been truly cool. I think the customers like Aura and NASA,
Starting point is 00:30:39 that is how we can get them started and actually get the data flowing. Once the data is there, then people see, like, this is so cool. started and actually get the data flowing. And once the data is there, then, you know, people see like, Oh, Holy. Like, this is so cool. I can actually get all of that information from the signals that I already have. That is, that is super cool. So that that's one part. And then the other part, we just grow community and support and all of that in the same way as we,
Starting point is 00:30:59 as we've been doing now. Okay. Just final question, by the way, which didn't occur to me earlier. So you said that you have a kind of head start, let's say, because of the fact that you were both into embedding programming and all of that. But because you mentioned a few customers were like, you know, kind of big, let's say. So again, you got to that kind of customer quite early. So how did you manage? What was your go-to-market, let's say, strategy? Or was it organic in some way? Yeah, it's really organic. I think the good decision that we've made, and that's something that I can kind of recommend to every startup, is that we started selling from day one.
Starting point is 00:31:44 So even when the product was basically like five pages on my laptop, we used that to go sell this vision. And that led to us kind of landing our first large deals already end of 2019, basically within a half year of starting with our first enterprise customers and that's that's been really good and we see that our customers grow with us and that adds validation again in the market so yeah it's very hard to kind of make it through a recipe and i have no idea like if you know if we roll the clock back and and kind of play the whole thing out again, if it would happen this way as well. But yeah, that has done really good for us.
Starting point is 00:32:31 And all the use cases that we can build from that, like show validation to other customers. And yeah, I think that's in all fairness, for me, maybe the most proud part or the part that I'm most proud of. It's really cool. I'm an engineer. My co-founder
Starting point is 00:32:45 is an engineer by trade. Who knew we could actually sell as well? Yeah, well, you don't see it that often. That's why I was wondering as well. So congratulations. I think we've covered if not everything, at least
Starting point is 00:33:02 the basics of what you do and how you do it and all that good luck with everything going forward George thanks so much thanks also for meeting on such short notice and yeah really excited really excited about what's coming next thanks bye bye and good luck
Starting point is 00:33:18 I hope you enjoyed the podcast if you like my work you can follow Link Data Orchestration on Twitter, LinkedIn and Facebook.
