Orchestrate all the Things - Could machine learning and operations research lift each other up? Featuring Funartech CEO / Founder Nikolaj van Omme

Episode Date: May 25, 2022

Is deep learning really going to be able to do everything? Opinions on the potential of this opinion to prove true vary. Geoffrey Hinton, awarded for pioneering deep learning, is not entirely unbiased in so opining. However, others, including Hinton's deep learning collaborator Yoshua Bengio, are looking to infuse deep learning with elements of a domain still under the radar: operations research. Machine learning and its deep learning variety are practically household names by now. There is lots of hype around deep learning, as well as a growing number of applications. However, as applications of deep learning are proliferating, its limitations are also becoming better understood. Presumably that's the reason why Bengio turned his attention to operations research. In 2020, Bengio and his collaborators surveyed recent attempts, both from the machine learning and operations research communities, at leveraging machine learning to solve combinatorial optimization problems. They advocate for pushing further the integration of machine learning and combinatorial optimization and detail a methodology to do so. To this day, however, there is no publicly visible operations research renaissance to speak of, and commercial applications remain few compared to machine learning. Nikolaj van Omme and Funartech want to change that. Article published on VentureBeat

Transcript
Starting point is 00:00:00 Welcome to the Orchestrate All the Things podcast. I'm George Anadiotis and we'll be connecting the dots together. Is deep learning really going to be able to do everything? Opinions on the potential of this opinion to prove true vary. Geoffrey Hinton, awarded for pioneering deep learning, is not entirely unbiased in so opining. However, others, including Hinton's deep learning collaborator Yoshua Bengio, are looking to infuse deep learning with elements of a domain still under the radar, operations research.
Starting point is 00:00:30 In 2020, Bengio and his collaborators surveyed recent attempts, both from the machine learning and operations research communities, at leveraging machine learning to solve combinatorial optimization problems. They advocate for pushing further the integration of machine learning and combinatorial optimization, and detail a methodology to do so. To this day, however, there is no publicly visible operations research renaissance to speak of, and commercial applications remain few compared to machine learning. Nikolaj van Omme and Funartech want to change that. I hope you will enjoy the podcast.
Starting point is 00:01:05 If you like my work, you can follow Linked Data Orchestration on Twitter, LinkedIn, and Facebook. So my name is Nikolaj van Omme. I am a mathematician. I am what is considered a pure mathematician, although I hate this term. So I'm a pure mathematician, but I'm also an applied mathematician, and I'm also a computer scientist. So I graduated, I did a PhD and two masters, but I'm also a classical singer, a dancer, an actor, just to say that I like to do a lot of things.
Starting point is 00:01:40 And this is one of the reasons why I created with my co-founders Funartech, because we like to mix things. We like to mix several fields together. I'm also a father and husband. And we launched a company called FunartTech in 2017. I'm also the CEO of that company. And our story is quite common and simple. After I graduated, I started to notice that there were some similarities and complementarities between machine learning and operations research. And we first knocked on doors and we tried to get some interest and attention. But at the time no one was interested. So after a while, we decided to launch our own company to make it happen. That's basically our story very shortly.
Starting point is 00:02:37 Okay, great. Thank you for the introduction. And there's many things in what you just said that we can dive deeper into. But well, first things first. To give a little background as well from my side, well, we go back a little bit. So we came to know each other a few months ago when you presented a use case or a summary of what you do in one of the conferences that I organize, Connect the Data World. But the occasion to reconnect, let's say, was an article I recently published. It was based on a conversation with Andrew Wang.
Starting point is 00:03:20 And well, in that, Andrew shared his experiences in manufacturing, basically, and the shortcomings of machine learning in the way that it's currently applied. And that seems to have resonated with many people, including yourself. So in a nutshell, what Andrew Eng shared there was that what he has found is that throwing more data at the problem doesn't always work. This is a typical approach that machine learning takes these days. And so working in a domain such as manufacturing that in which you don't always have, you know, troves of data to throw at the problem, it means that you have to take some other path to approach the problem. And adding domain knowledge to that is one of the ways that you can work.
Starting point is 00:04:13 And I know that you also work in manufacturing. So I think a good way to progress in the conversation is, well, if you can just say, what exactly do you do? What kind of use cases do you have say, what exactly do you do? What kind of use cases do you have there? And how do you approach this problem? Okay, that's a lot of questions. Well, first of all, yes, if you throw more data,
Starting point is 00:04:36 it's not necessarily a good thing. First of all, you need to have the data, as you said, in some fields like manufacturing, you don't particularly have those data. They also need to be of very good quality. But also, I would say it's more about the physical philosophical approach. If you are only taking data, you're hoping with your algorithms to get some patterns out of the data, to find some constraints, some knowledge out of the data, but actually you're not sure you will be able to do that. While there are other approaches like
Starting point is 00:05:11 OR operations research, where you can model the knowledge, you can talk to the engineers and they can tell you what they do, what they think and how they proceed and you can transform this into mathematical equations. So you can have that knowledge and use it. And if you combine both, so you combine the data and historical knowledge with the knowledge of the people, the domain knowledge, actually you're able to go further. For instance, you can start without data because you can argue that what people know, the way they work, is data that you can use already. So this is what we do at FernandTech. We combine several approaches and we are really trying to be as inclusive as possible. I mean, when we have to solve a problem, we are not saying, okay, let's take that field.
Starting point is 00:06:06 No, no, we have a problem. Let's take whatever it takes to solve that problem. It's 100% ML or R, but very often we use other fields too. And basically what we do at Front Attack, first of all, we co-construct our solutions with our customers. So we are listening a lot. And basically we are constructing systems that are self-improving over time. And we answer four questions.
Starting point is 00:06:34 We construct a system that is constantly asking those four questions and trying to find answer to those four questions. And those four questions correspond to four types of analytics. So the first question is what has happened and this is the descriptive analytics and basically it's an assessment. You try to gather what exists, what is happening. Then you come to the second question which is why did it happen? This is the diagnostic analytics. And more importantly, do you like it or not? And what could you do about it? And then there is a third question, which is what will likely happen, which is the predictive analytics. And
Starting point is 00:07:20 this is where ML machine learning shines. And basically, you're trying to guess what's going to happen with a certain probability. And then you have the last question, which is, what is the optimal action? And it corresponds to the prescriptive analytics. But you have to see this as a paradigm shift in the sense that with the prescriptive analytics and the way we do it at Frenatec, and of course we are not the only ones, the way we see it is that you are not giving
Starting point is 00:07:54 an input and your system is giving you an output. The input you give is actually the desired output. So you are looking at a goal and your system will tell you what you need to do to reach that goal. That's what we do at Fanatec. Okay, thanks. And yeah, indeed, the model that you just described, so the evolutionary scale, some people call it of analytics. So from descriptive to diagnostics, diagnostic analytics, then predictive and finally prescriptive is a good model, I guess, to help people understand how, well, the evolutionary scale of how things progress in analytics. And in a way, I would say that this is what pretty much everyone in every organization that works with data and analytics aspires to, to sort of climb up that evolutionary
Starting point is 00:08:57 scale, if you will. You did mention previously the two pillars, let's say, on which you base what you do. Machine learning, which I think by now is a practice that's quite at least superficially, let's say, understood or known by many people. And operational research, and that is a much less widely known practice. So before we go any further I think it would make sense that you explain a little bit in more detail what operational research is and how it works exactly. So operations research is not that known but it's really a very mature field in mathematics, applied mathematics. It's basically one of the best science of optimization. Whenever you think to optimize something, I mean,
Starting point is 00:09:55 an objective function, you should first think, okay, what can I do with operations research? Because it's really the study of optimization. To give you an idea, ML is using operations research to optimize its predictions. So most of the algorithms that are used inside ML are algorithms coming from OR. If you look for instance at deep learning gradient descent is an OR algorithm but you have many algorithms that are used in ML. It's quite mature. It exists since basically the Second World War. And if you can optimize with OR compared to other approaches, well, most of the time you see a huge difference. OR is really made to optimize. You can optimize by 20, 40%.
Starting point is 00:10:50 This is what we see in practice better than other approaches. For instance, if you look at the TSP, the traveling salesman problem, so that's a problem where you have cities and you have someone that is traveling and going in the cities going out of the cities once and doing a whole tour so he's starting at a first city and coming back to that first city and he wants to do the whole tour for a minimal amount of time of cost of distance this is an NP-hard problem so the TSP if you approach it with OR, you can solve that problem with 100,000 cities. And I'm talking about exact solutions. This is more or less. If, for instance, you try to do this with
Starting point is 00:11:34 ML, which is something that is totally possible, as far as I know, if I'm not wrong, the best you can do for an exact solution is to solve the same problem with 100 cities. So you see the difference? OR is really done to optimize. Okay, so in broad strokes let's say it's well the art and science of mathematical optimization basically and just to pick on the example that you just gave, so yes indeed it sounds like there is a scale of magnitude difference in what you can achieve applying operational research as opposed to machine learning. So how come then operational research is, well okay for the one part much less known, but also most importantly I I would say, it seems like it's much less applied. So why is that?
Starting point is 00:12:30 So, well, first of all, ML was considered as a subfield of OR not so long ago, I mean, a few years ago. So I wouldn't say that OR is not applied, although now people tend to put ML on one side and OR on the other, there are some fields where OR is really used extensively. Transportation, for instance, or manufacturing. But what happened is that ML had so much success in some fields that it overshadowed all the other approaches, which is quite, I would say,
Starting point is 00:13:10 usual. When you dig a gold mine and you see something really shiny and that it works, I mean, ML is really nice. I mean, I talked about the TSP and the difference between OR and ML, but actually the good thing to do is to combine them because they both have strengths and weaknesses. But when you combine them, and this is what we tried to do at FernandTec, you go even further. So yes, OR is not that known, but it is used quite a lot. And we think that the future of AI will be the combination and OR coming back, the combination of ML and OR. Okay. I know that this is something that, well, you have developed some framework, let's say, around and we'll get to that. But actually, before we do, I think it will help if we go into a specific example. So out of the use cases that you have with Funratec, let's pick one. The one you previously shared with me,
Starting point is 00:14:14 actually the one that in which you work with the Aisin group. So let's see what was the problem there? What were you trying to accomplish? And how did you use your methodology to do that? And what was the outcome? So the ASIN group, I mean, it's top 500 company. It's a huge company. There are more, I mean, more or less 120,000 employees. So they came up with a contest in which they
Starting point is 00:14:46 tried to solve a huge and very complex logistic problem. They are constructing automotive parts and systems and their logistic problem was to reduce the costs for the delivery of the parts. So they are transporting parts between depots and warehouses in a very complex way and their instances are really huge. So you cannot approach this in the traditional way with one model that can solve the whole problem. So this contest, we won the contest and we were very lucky because they believed in our approach and combination of ML and OR and they trust us. And the project went very smoothly. After four months we were able to optimize by 53% the problem and funn enough, we didn't have the right data for some parts of the problem.
Starting point is 00:15:50 And so when they tried to figure out if our solution made sense or not, they quickly discovered that some of our estimations for the data that we didn't have were actually not very good. And so they gave us the right data and then our optimization dropped to 30%. But the thing is, our algorithms are so tailored to the instance that when they gave us the right data, they stopped working, they couldn't produce anything. So we had to backtrack, and we had to simplify a little bit our approach. And because it was the end of the project, we didn't want to invest as much time as we did and so we got only but i mean only 30 is still very good and it's the beginning because now we are convinced that we can go up to 60 theoretically and with the combination of ml
Starting point is 00:16:37 and or we think that practically in reality we could optimize this problem by 50%. Okay, so this is an ongoing project then? Is it going to be continued? Are you still working on that? We are in discussion. So for the moment we are trying to see how we can work together. It takes time. Okay, actually that's the one part I wanted to explore with you. So you mentioned initially that part of how operation research approaches an issue is to actually try and document domain knowledge, expert knowledge from people that work in the field
Starting point is 00:17:27 in the form of equations. So I presume that this is what you must have done in that case as well. So probably the first phase of engaging with a client involves, well, actually talking to people and figuring out, well, even before talking to people, figuring out who the experts are, who are the people that you need to talk to, and then actually going out and talking to them and trying to document what you learn in mathematical equations? Yes. So the first thing is to understand the problem. And most of the time, the customers, they know where their pinpoints are, but they are not really aware of what can be done or not.
Starting point is 00:18:09 For instance, that contest, they didn't propose the real problem. They proposed a very simplified problem because they thought that it was too complex to solve anything in six months. So we started on one problem, but soon we discovered that probably this wasn't the real thing. And we told them, yes, we can solve that problem that you proposed, but we think that that's not the real problem you want to solve. This is the real problem. But if that's the real problem, the solution we will have for the first one, well, it will not work for the real problem. So we discussed and basically we decided to stop solving the first problem, the one that was proposed for the contest, and to immediately try to solve the real problem. And this is something very general, very common. When you
Starting point is 00:18:57 discuss with your customer, they believe that this is exactly the problem that needs to be solved. And actually maybe there is another problem that is related that if you would solve that problem, it would help them tremendously for the problem they came up with. So the first step is to discuss and to listen. For the experts, they had a team of experts right away. We didn't have to discuss that or to choose people. They had the right team right away and we worked
Starting point is 00:19:27 with them. Okay, I see. All right, that sounds, actually, I would, you know, judging from this example, I would say that it sounds like a very, well, service-oriented, let's say, approach in the sense that, well, like you said, you do need to talk it out a lot, at least in the beginning. That's what it sounds like. So, okay, after deciding, let's say, figuring out what the exact problem that you needed to address is, and then actually talking to the experts, which, as you said, was not that hard in that case specifically because the group of experts was already decided. So what did you do next? And how come the lack of data that you mentioned previously, where exactly does it come into play here? So how do you use data in combination with what you can get from the experts?
Starting point is 00:20:28 So most of the time the data is not available right away, whether it's because it was not gathered, whether the data that they gathered is not in the right format or it has lots of problems, or it's secret. So in this case, it was more like it was secret. The parts of the data, they didn't want to give us right away. And this is all, again, common. So our way to start a project is just to start, and we do with what we have. If we don't have data, we know that we can rely on our operations research and talk to the people and ask them what they do.
Starting point is 00:21:09 And then we can try to start with models. If they are data, we take them. But most of the time, the way you need to gather the data is part of the problem in the project. And so every project is really different. You mentioned that it's service-oriented. Yes and no. We started as a service company,
Starting point is 00:21:31 but we are more and more trying to become a product company. So what we do now is we build the product with our customers, and they become our customers with the product we built together. Each project is different, but one thing is for sure, we need to listen a lot to the customer. And what is very, very important for us is that we want to construct something that is really valuable for the customer. So we are not constructing state-of-the-art solution. It doesn't make any sense for us.
Starting point is 00:22:04 I mean, sometimes we do. But what is really important is to construct a solution that is really efficient for the customer. And if we have two different customers with the same exact problem, we'll probably construct two different solutions. So in the case of Aisin, we noticed how they work because this is something we also take into account, is how the company is proceeding, how people are working. And this is also something you can model into your OR equations. This is something very few companies do.
Starting point is 00:22:37 They are focused on the problem, but actually the real problem is to bring a solution that can be used. Until now, we have 100% success rate, while in AI, there is a 80%, 85% failure rate. But the reason is that we are really interested in bringing something of value to the company. And we try to be agile, and we construct construct small models and we immediately test them. We immediately put them not in production because it's not possible, but a kind of production so that we can test that what we do is actually of value for the customer. And that is exactly what they need or not. And in the case of ASIN, well, very quickly we discovered that a front, frontal approach with one model wouldn't do it because their instances are so huge.
Starting point is 00:23:34 So we had to devise another way of doing optimization. the most promising avenue to tackle industrial problems that are huge is actually to do something what I call multi-scale modeling, which is basically using different models for the same problem but for the same way for a simplified version each time of the problem. And so you don't get a solution for the whole problem but you get parts of the solution and when you combine all those solutions together you get enough insights to solve the real problem. And this is what we did in this case, because the instances were really big. that has been identified as initially holding back, let's say the wide applicability of data science and machine learning approaches. And after a certain point enabling more widespread adoption
Starting point is 00:24:37 has been precisely this dependency, let's say on, well, as people would characteristically call it, someone with a PhD in data science or mathematics or whatnot. So the fact that at this point, there is like a wide array toolkit of specific models that people can use pretty much out of the box, whether it's linear regression or decision trees or transformers, the whole spectrum. People can basically just take those models out of existing frameworks and just slightly, let's say, customize them or perhaps even not at all, just use them to train them with their own data and sort of get something that works, maybe not perfectly. And yes, as you said, there's a high rate of failed projects using that approach.
Starting point is 00:25:36 But in many cases, people can get something that works relatively painlessly. What you seem to be applying seems to be in a way in contrast to that. Well, because you need to, it's very specific. Yes, it's very tailor-made. You said yourself that, well, if you have, for example, two clients with the exact same problem, you probably end up creating two different solutions. So I wonder how can that scale, basically? It seems like an approach which is very tailored and perhaps very successful, but hard to scale.
Starting point is 00:26:20 So how can you address that? Yes, indeed. hard to scale so how can you address that yes indeed but it it always depends on the need of the customer because some customers they don't need a very specific tailored solution so you need to provide one but in this case you cannot use existing tools and just put them together and get a solution this This is not possible. Actually, the problem in itself, there is very few articles in the literature on that specific problem. So this is not what we do.
Starting point is 00:26:55 We really tailor our algorithms and approaches. But at the same time, there are things that are coming back all the time. So we got some experience. There are things that, I mean, some approaches, for instance, some combination of ML and OR that we reuse because we know that they work and we can reuse them. But true, our approach is not really easily scalable. At the same time, our customers are really big customers.
Starting point is 00:27:23 We don't have thousands of them. We have very few of them, but their problems are really huge and difficult. The other thing about scalability and what we think is probably going to be the future of the way we construct algorithms is that now you have algorithms that learn by themselves by tweaking their parameters and hyperparameters, which is very nice. But I think that the future is going to be to have algorithms that will be able to tweak themselves in the sense that they will be able to morph into other algorithms. They will be able to assess what is happening and how they are used and to understand that, no, this is not the best way to do that.
Starting point is 00:28:10 And you need to change, you need to morph, you need to evaluate. And we have some ideas about how to do that. And that's probably gonna be the future. And when that exists, then all type of solution are totally scalable in the sense that they will be able to more of themselves and to get as precisely as we do by hand right now.
Starting point is 00:28:37 Okay, actually, that's, I would, I would say that that sounds like a research direction. And we can go into that in more detail and actually also discuss a few more, well, more ambitious, let's say, approaches in that respect. But before we do, because you already mentioned earlier, and I don't want to drop that, you already said that, well, there's a few ways, four in specific, that you use to combine operational research and machine learning. So could you briefly outline those? Yes. So there are at least four ways to combine operations research OR and machine learning ML.
Starting point is 00:29:21 The first one is becoming mainstream. It's very effective and it's the use of both, but as black boxes. So you use one and then you use the other. Most of the time you use first machine learning so that you get some estimates and then you use those estimates as inputs for your ORR algorithm to optimize. My favorite example is the following one. You are a train company and you have tracks and you need to replace, repair them. And you want to do this for the least cost. How can you do this? One way to do this is in two steps. First step, you put cameras under your wagons, you let your train go, you take pictures. And then with ML, probably deep learning, you stitch those pictures together and you discover in the pictures the defects, their types. So you discover that, oh, you have a defect of type A, you have a defect of type B.
Starting point is 00:30:14 And you, with probably deep learning, are able also to say, OK, probably that defect needs to be repaired in six months, this one in two years, or that track needs to be replaced in two weeks. So you get a map of your networks and what needs to be done. But you don't know how to do it because if you send a team to repair a track, maybe the track next to it shouldn't be repaired right away, maybe six months later, but it would cost you more to send another team six months later than to immediately with the same team repair it because it's already there. And that's what optimization or operations research can tell you. So that's the first combination. You use ML, you use OR as two separate black boxes. Then you have two technical ways to use them, which is simply using one to help the other. So
Starting point is 00:31:05 you can use ML to improve OR algorithms and you can use OR to improve ML algorithms. I would argue that ML is already using OR algorithms so this is basically what ML is. The other way works also perfectly you can use ML to improve OR. ML is wonderful to predict some outcome and OR is mainly rule-based but not only. And when the rules apply then it's hard to beat that. But most of the time the rules don't apply. You don't know exactly how to apply them. And there is some probability that if you take one direction or another, you will get completely different outcomes. This is where ML can really help because it can then help the algorithm to take some decision. And I would mention that very few teams are working on this.
Starting point is 00:32:00 But what I see most of the time is that they try to take away some pieces from the OR algorithms and to replace them completely with ML. And I don't think it's a good thing to do because when the rule applies, you cannot beat that. And you cannot replace with an approximation in ML the parts of OR you're taking out. It doesn't work. You really need to combine them so that if they will apply, apply it. And if it doesn't apply, use ML. And then there is a fourth way, which is actually constructing totally new algorithms. If you understand fundamentally what ML is doing and the weaknesses and the strength of ML, and if you understand what OR is doing fundamentally, and again their strengths and weaknesses because all fields have strengths and weaknesses, there are ways to combine both so that the weaknesses of one actually is worked out by the strength of the other. Let me give you an example. GNNs, so graph neural networks, I would put that in that
Starting point is 00:33:07 fourth combination because graph, graph theory is coming from OR. But I would say that's only the beginning. We barely scratched the surface here. There are lots of ways to combine the two in this fourth combination. And we try to convince people of what is possible to do with that combination. And we try to come up with some ideas and projects where if you only use ML, you couldn't do it. And if you only use OR, you couldn't do it either. And so we have two research projects to show that combination, that fourth combination. And the first one is to inject emotions into AI, into text. And the other one is even crazier, if I may say, because we discovered that by combining ML and OR that way, we might be able to reach an intelligent machine, a truly intelligent machine.
Starting point is 00:34:07 The way it works is that we are not trying to mimic the brain. The brain is far too complex to try to mimic it. We have a definition of a system, and we think that we could produce that system. So we think that we would be able with that combination to construct a system that is able to improve its perception of its surrounding in terms of concepts and sub-concepts and would be able to do so by learning very quickly and also what is very important to unlearn very quickly. That said, it looks like it's truly amazing and in a way it is. And again, maybe we are wrong and there is something we didn't see. But actually, if you look at what
Starting point is 00:34:55 we are trying to do, it's really dumb. But still, you would want to call this intelligence. And once that machine is in the open, it will improve itself. And who knows where it can get? Okay, so yeah, that's that definitely falls under the ambitious category. And you also touched upon it briefly earlier in the conversation. So instead of actually going into the specifics of that, which I don't, I'm not even sure we could potentially cover in the conversation. So instead of actually going into the specifics of that, which I'm not even sure we could potentially cover in the time that we have, I'm going to level up, zoom out a little bit and ask you, so how are you able to combine those research ambitions and research efforts with your day-to-day operation? Do they sort of
Starting point is 00:35:47 weave into each other in some way? Yes, completely. Every project we do is a research project, but it's not a research project for itself. It's really a research project to get some real solutions. But every time we do a project, there is some research involved in it. Basically, because we're doing something that is not very common. And also because we like it, of course. So yes, we do the two at the same time. Our research is immediately applied into real concrete cases. Okay, I see. And I think it's probably also a good point in time to ask you if you can share a few more details about some key facts and figures, let's say, about the company. So you already shared when it was founded, so 2017.
Starting point is 00:36:45 So can you share more information, like how many people are currently working in the company or any other metrics that you can share basically? Yes. So we created Funartech in 2017, so we are quite young. We have a core team of about four or five people. And we have something like 10 people around us in the world that are working on some specific projects
Starting point is 00:37:19 that they really like. So they're not working full time for us. And then we have an amazing scientific committee of four professors and basically that's it we're a very small team you have to know that when we first started no one but literally no one was interested in what we tried to do and it's only recently that we are gaining more and more traction. So we are still very small. Okay, I see. Trying to think, and to my knowledge at least, I can't think of any other organization,
Starting point is 00:38:01 and even more specifically any other company, that does things in a way similar to what you do. You obviously know the field much better than I do. So I wonder if you know of anyone who takes a similar approach. Well, I would say that the combination of OR and ML is becoming a really hot topic, especially the first combination where you use them as black boxes. R and ML is becoming a really hot topic, especially the first combination where you use them as black boxes. But the two other technical ways
Starting point is 00:38:33 are also gaining some tractions, but it's mainly some academic groups. So probably big corporation like Google, Facebook are probably doing things also. But there are some other groups doing things that are similar to what we do. We are not the only ones, but it's true that there are not so many as far as I know. There are some institutes that are set up to do precisely that. But I would say one of the biggest problem is that you need some really fundamental knowledge of different fields, and in particular OR and ML.
Starting point is 00:39:15 And the problem is the more you know, the more expertise you have, the more you tend to use that expertise to do it your way. So you need to have expertise, but you need also to have an open mind to be able to discuss and to be able to tell yourself, okay, I would do this this way, but actually my colleague here,
Starting point is 00:39:36 he would do this in another way. And maybe his way is as good as mine. And maybe if we combine our strengths, we could go further. I mean, it's really a problem. The more expertise you have, the more your vision is narrow. So I would say it's the beginning of that combination. Yeah.
Starting point is 00:39:57 I can totally relate to that problem that you described. And well, actually, I would even go further and say that it's actually not a single problem. It's a whole family of problems, if you want to call it that. I mean, you start from the fact that you need people who are highly skilled, basically, and not just highly skilled in one discipline, but at least two from the sounds of it. So immediately, you have like a high cutoff point there. And then obviously you get into the second problem that you just touched upon that,
Starting point is 00:40:32 well, if you have people that are, you know, so highly skilled, then there is a sort of natural tendency, let's say, to think that, well, their way is, well, if not the best way, it's, you know, a very good way of addressing problems. And so it's the usual too many experts in the room issue. So it sounds to me like it may be a promising avenue,
Starting point is 00:40:59 let's say, to explore, but it's probably at a very early stage. And again, I would bring forward the issue of, well, scaling up or, I don't know, lowering the bar for adoption or whatever it is that you want to call it. It sounds to me, you know, as an outsider that at the point where you are right now, this is probably like the most urgent, let's say, easy to address. Yes, but at the same time, I would say it's probably urgently needed, in the sense that we also have a mission at Frenetic, which is basically to do good on the planet with our technology.
Starting point is 00:41:41 Every project that we do, there must be something that we do that we really see a difference. For instance, to reduce the pollution or to improve the quality of the people working. And one thing that we claim, and we're not sure about that, but we think that actually with that combination of ML and OR, you could possibly reduce pollution by 20-40% worldwide. So yes, it's very complex, but at the same time, it might be needed in these times, because you don't get that efficiency without complexity. It seems that they are related together. At the same time, with very simple approaches, you get most of the efficiency. But then when you want to optimize a little bit further, you quickly become drawn in complexity.
Starting point is 00:42:36 But if you want to have that efficiency, probably it's needed. And as I said, if as humanity, we would like to do something with that optimization and do something to reduce pollution, probably that approach of combining ML and OR is a necessity. Okay.
Starting point is 00:42:58 So then wrapping up, what would you say, how do you see what you do going forward? So what are your immediate and sort of mid to long term goals? So we'll continue our journey. We are trying to advocate the combination of OR, ML and other fields. The idea is really to be inclusive and to use different fields. So that's
Starting point is 00:43:25 something we started at the beginning and we still need to do. In particular, we would like people to understand that there are other fields that are really interesting. And well, for the moment, we get more and more traction. So what we are trying to do is to work with those customers. There are big corporations and probably, hopefully, if we succeed with them, this would be use cases that will speak about the combination hybridization of ML and OR. Okay. And I would also presume that part of that would probably be attracting more people, which may be challenging, according to what you laid out previously regarding to what it takes to actually get into and apply this approach.
Starting point is 00:44:17 Yes, indeed. I hope you enjoyed the podcast. If you like my work, you can follow Link Data Orchestration on Twitter, LinkedIn and Facebook.
