Orchestrate All the Things - Decision intelligence wants to automate decision making - but how far can you take this? Featuring Aera Technology Founder / CTO Shariq Mansoor
Episode Date: June 6, 2022

Decision intelligence is one of those terms that sound vaguely familiar, even if you've never come across it before. Like many category-defining terms, it can mean different things to different people. This is a feature category-defining terms either have by design, or acquire through extensive use. In October 2021, Gartner identified DI as a 2022 Top Trend. A number of vendors have identified with that category, and Aera Technology is among them, claiming to have been doing DI before it was called DI. Today, Aera is announcing new capabilities for its Aera Decision Cloud at the Gartner Supply Chain Symposium/Xpo™ 2022. Aera founder and CTO Shariq Mansoor weighed in on DI, Aera's offering, and how it is relevant for supply chains and beyond. Article published on VentureBeat
Transcript
Welcome to the Orchestrate All the Things podcast.
I'm George Anadiotis and we'll be connecting the dots together.
Decision intelligence is one of those terms that sound vaguely familiar,
even if you've never come across it before.
Like many category-defining terms, it can mean different things to different people.
This is a feature category-defining terms either have by design or acquire through extensive use. In October 2021, Gartner identified decision intelligence as a 2022 top trend.
A number of vendors have identified with that category, including Aera Technology, which claims to
have been doing decision intelligence before it was even called that.
Today, Aera is announcing new capabilities for its Aera Decision Cloud at the Gartner
Supply Chain Symposium. Aera founder and CTO, Shariq Mansoor, weighed in on decision intelligence,
Aera's offering, and how it is relevant for supply chains and beyond.
I hope you will enjoy the podcast. If you like my work, you can follow Linked Data Orchestration
on Twitter, LinkedIn, and Facebook.

So my background is all around software, enterprise software.
So prior to launching Aera Technology in 2017,
Fred and I had a vision for the Aera Decision Cloud platform.
I mean, it was not called Decision Cloud at that time,
and we actually called it a self-driving enterprise.
And it was very clear in our mind what is needed in the market.
And I was working for several years to build the underlying data foundation
for what we call Decision Cloud now.
And when I met Fred in late 2016, it was great synergy.
I mean, he was looking to build the same platform.
So we went ahead, we raised 50 million,
and we started the journey to make our vision a reality.
And then after over four years of hard work,
we launched the entire DI platform.
And now, I mean, this platform is proven, running at scale at some of the biggest companies on the planet, and we have very happy customers.
And, you know, George, I mean, it's great to see how the vision became a reality.
Because at that time, no one in the industry was talking about transforming decision making.
And now there's a whole new category emerging around the world.
And, you know, it feels good when analysts and customers and partners tell us that we are transforming the future of work and have a unique platform.
I mean, for us these are very exciting times.
Great. Thank you for the introduction.
And yes, indeed, I have to say, even though I have a very superficial, let's say, familiarity with what you do,
I mean, in the time that I had, it was quite apparent that there is lots of breadth and depth in the platform.
And so I can tell that you have probably been working on this for a long time. But before we get to the actual platform, by means of introduction,
and since you also refer to category and category creation in a way, and I think this is probably
part of what you have been doing, let's talk a little bit about decision intelligence in general, because it's
also relevant in the context of the announcement you're going to make on Monday at the Gartner
Symposium. And so I think for many people, the term is not really familiar. And actually,
I should probably count myself among those people. I was just
vaguely familiar with it, but had not really familiarized myself in depth. After looking a bit into that,
what struck me was that it seems to be like a composite term, in a way, that sort of touches upon
and merges, let's say, different but related disciplines.
So things like data science and data engineering and analytics and machine learning and some
rule-based decision making as well.
So how would you define the category and what do you think makes it different and sort of
more than the sum of its parts?
Good question. I think what I'll do is build on your list. It's a good list which you just mentioned, the different technologies. I would add maybe four or five
more things to it to make this DI technology so unique.
And maybe it's good if I can give you a reference to how we were thinking about it.
And what's the thinking behind it.
So yes, you're right.
I mean, data engineering is one component, but data engineering alone is not enough.
So what we have done is not just provided
like a data workbench to the data engineers
to go and do the work,
but also added like a lot of pre-built data models
for all the major areas within the enterprise,
like sales and inventories and procurement
and promotions, logistics, and several other areas.
Okay.
And then one of the big things which we have done is develop
a patented technology to actually
not just give you the models, but actually
be able to crawl and automate
the entire process
of extraction, normalization and
harmonization of all
these models across multiple ERP
systems and, you know, bring all of this content into the platform, and
interconnect it. And we call it the CDL, which is the cognitive data layer.
And the idea is that with this content, with all these subject areas and all these dimensions and thousands of measures, it's all automatically built.
And then this part itself provides a huge value for the customer.
Because as you know, George, on the data side, what happens? I mean, most projects start and then people are trying to figure out what data to use and where to get it, and it becomes really hard. The enterprise data model is completely
fragmented across literally hundreds of systems. So being able to build that layer and
content and make it part of the platform, I think that's one big, big differentiation, if you look
at it, why it's different than just combining
different tools together. And the other thing, one thing I
would definitely add to your list is, you know, what we call an engine to digitize human decision logic,
you know, as part of the platform. So one other patent, which we have filed and built, we call it DFE, which is
depth-first execution graph. So take a complex human decision and its logic, and then be able
to digitize it and then execute it at scale, okay? So it's not like separate tools for machine
learning and a separate rules engine. What we have done, we have combined the probabilistic engines,
like, you know, the predictions, with the deterministic logic,
like, you know, the if-then-else logic there.
And this is very powerful because this is a core to automate
this entire decision process all the way from when you generate recommendations
and then when you write
back to the underlying systems there. Because if you have two separate systems like
this, I mean, it's not like you're moving data and then one system is providing recommendations.
Then what do you do with the recommendation? You know, it's just a prediction value; without
the deterministic logic combined with it, it becomes very difficult to automate
that. So that's the other thing.
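To make that combination concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption, not Aera's actual DFE: a stand-in model plays the probabilistic part, and plain if/then rules play the deterministic part, so the output is an actionable recommendation rather than a bare prediction.

```python
# Illustrative sketch only -- not Aera's actual DFE implementation.
# A probabilistic step (a stand-in model score) feeds directly into
# deterministic if/then logic, so the output is an actionable
# recommendation, not just a prediction value.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    quantity: int
    rationale: str

def predict_demand(history: list) -> float:
    """Probabilistic step: placeholder for a trained forecasting model."""
    return sum(history[-4:]) / 4.0  # naive moving average stand-in

def decide_replenishment(on_hand: int, history: list,
                         safety_stock: int = 100) -> Recommendation:
    """Deterministic step: if/then business rules applied to the prediction."""
    forecast = predict_demand(history)
    gap = int(forecast + safety_stock - on_hand)
    if gap > 0:
        return Recommendation("create_purchase_order", gap,
                              f"forecast {forecast:.0f} plus safety stock exceeds on-hand {on_hand}")
    return Recommendation("no_action", 0, "on-hand stock covers forecast demand")

print(decide_replenishment(on_hand=250, history=[300, 320, 280, 310]))
```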
And I think in your list, you also mentioned AI and machine learning. But I would put it a little
differently. I would say that one of the key components of decision intelligence is that it has to learn automatically.
Okay. So one of the things, which we call, like, Aera learning, I mean, it's
about how the users are dealing with the recommendations, how they are accepting them
and what actions they are taking; the system should learn from that, and learn from the outcomes of the recommendations. And that learning has to be automated.
So what we do, we have created what we call a new data model
for decisions and recommendations. And we use this as a permanent memory for each decision and its impact.
And no such model exists in the enterprise today, because there's no way to capture these manual decisions which are happening outside the underlying systems.
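As a rough sketch of what such a record could look like (invented field names, not Aera's actual data model): each recommendation is stored with the point-in-time data behind it, what the user did with it, and later its outcome.

```python
# Hypothetical "permanent memory" record for decisions; field names
# are invented for illustration and are not Aera's actual data model.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class DecisionRecord:
    recommendation_id: str
    generated_at: datetime
    recommendation: dict          # e.g. {"action": "increase_forecast", "delta": 500}
    snapshot: dict                # point-in-time inventory, sales, shipment data
    user_action: str = "pending"  # accepted / rejected / overridden
    outcome: Optional[float] = None  # filled in later, e.g. deviation vs. actuals

def evaluate(record: DecisionRecord, forecast: float, actual: float) -> None:
    """Once actuals arrive, store the deviation so the system can learn."""
    record.outcome = abs(actual - forecast) / max(abs(actual), 1.0)

rec = DecisionRecord("r-001", datetime(2022, 6, 1),
                     {"action": "increase_forecast", "delta": 500},
                     {"on_hand": 1200, "sales_last_week": 900})
rec.user_action = "accepted"
evaluate(rec, forecast=1500, actual=1400)  # actuals vs. the recommended forecast
print(rec.outcome)  # ~0.071 deviation
```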
So this is, again, a very important component, you know, for decision intelligence. And I think I would add to your list
the system of user engagement also,
because you cannot take the old way of, like,
you know, going into the UI,
to automate the decisions,
because this whole new engagement system is very different in
how people interact with it. So in our case, what we have done, we have built what we
call a cognitive workbench for the end users to interact with data, using a browser or voice
on their mobile phone. And some of the customers call it a UI for an AI. I mean,
think of this workbench as a single place where the end users can interact
with the systems and take actions. And this makes adoption very simple and non-disruptive. I
mean, we have very high adoption rates across customers with thousands of users
using Aera daily.
And they love the way they interact with the system
because this makes it much, much easier for them.
So, a critical part
of this whole decision intelligence is,
yes, you have all these different technologies,
but let's not forget about packaging these capabilities
into this DI platform and being able to deliver it.
I mean, there's a lot of work we have done around
how customers and partners can package these capabilities
into what we call skills.
And we also provide a whole slew of pre-built,
pre-packaged skills out of the box for different use cases
like demand forecasting, planning, inventory, finance.
So the point I'm trying to make here is, like, yes,
pieces of these technologies existed in the past,
but you need to add a lot more to them, number one.
And then it has to work together as a single platform
because you have seen in the past,
people were doing all this work,
you know, their data lake initiative,
their visibility things there,
but then what happens?
Like, most decisions are still happening manually.
You know, you have no shortage of analytics sitting in the company.
You know, there's no shortage of dashboards.
But still, you know, the value of the AI is not realized in most organizations.
So that's why I think it's different. The approach of taking these different tools
and pieces together
and trying to build it,
I think it's not going to happen,
because we believe that you need a single platform
for all of this to come together.
Okay, so I guess this is the premise
behind what you call the decision cloud,
which is pretty much what you just described.
So all of these components interconnected, plus, of course,
the necessary plumbing that you need to bring all the data in
and so on and so forth.
I think, however, well, it's a pretty elaborate platform.
That's quite obvious already.
I think, however, there are a couple of core conceptual, let's say fundamental, issues that stood out for me, at least, from this whole approach of automating decision making in the way that you described.
The issue you're trying to tackle is pretty real. It's basically decision fatigue.
So when you have an environment in which there's so much happening all the time, you need to make decisions all the time.
And so if you're somehow able to automate that, then that definitely is going to ease the burden on the people who need to make those decisions.
However, I think the key issues here are basically transparency and accountability.
Transparency, in that when you're suggesting an automated decision to someone, that someone needs to be able to tell how the system came
to that decision, to that recommendation, actually. Because I presume that the
end responsibility for accepting or rejecting the recommendation always lies with the person.
So the person needs to be able to audit how that decision was recommended. And then if the person decides to follow that recommendation
and implement that decision,
then accountability needs to happen in the sense of,
well, you already mentioned a sort of feedback loop
that you use to track the outcome of decisions
and then feed that back into the system to be able
to improve it. However, thinking from the point of view of the person who gets to
take that decision, it's also important to know who's accountable. So if I get a recommendation
and I decide to pursue that recommendation, if it turns out to be a good outcome, do I get the
credit for it? If it turns out to be a bad outcome, do I get the blame for it?
So how does that work in the setup that you have?
So, yeah, very good question.
I think there are two parts of this.
One is, you're talking about the trust in the decision, you know, and then providing it to users. So
if you look at it, if I put myself in a user's shoes there,
people will not trust it if they have a black box that just gives them a prediction number or
recommendation without the rationale behind it. Okay, they're not going to trust it.
And it's not just the explainability of the number, which some of the people
in AI are talking about. So from our side,
what we have done, we have taken a very different approach to this.
So the first, when we generate the recommendation,
we provide the entire context
around the recommendation and explain the reasons why they should make that decision,
supported by all the contextual analytics around the recommendation. And it has to be in easy-to-read
language, you know, for them to understand. Okay. And we also provide them a way to override or reject the recommendation,
based on the rationale which we are providing them. And then, what we do, as you
mentioned earlier, we store this entire context of recommendation and decision, along with
the point-in-time data used to generate the recommendation, into what we call, like,
a permanent memory. And this is the foundation of Aera's continuous learning, as you mentioned,
and because this helps us also to build the trust with the users. So maybe if I give you an example,
like, for example, if Aera is recommending to increase the forecast
because, let's say, we are detecting an oversell
of a product in a certain market, okay?
So what we do, then we store that entire inventory,
that stock information, the sales data, the shipments,
everything at that point in time,
along with the audit trail of the decision event, okay?
So now we provide that to the user,
the user either accepts or rejects it,
and we're also storing that information.
So let's say if the forecast is for July,
and then in July, when the actual sales happen,
what we do, we go back and re-evaluate the recommendation
and see how much deviation there was
from the recommended forecast
to the actual numbers.
And this is like reinforcement learning
from the event.
So it's not that the system
is just
providing the information;
using this information, the system is also learning and adjusting.
And we are not separating out what the users are doing.
We are holding Aera, which is the automation, you know,
the engine behind it, to the same standards.
So whether we recommended it and somebody accepted it, or it was automated,
it's the same evaluation: we are very clear about whether the recommendation was good or not.
So if you look at it, in a recent press release we have just introduced new
automated confidence scoring features for these recommendations. So if Aera is confident
that a recommendation will be accepted and we have the right data to predict it,
then we let the user know, you know, we are like 85% confident, and we explain
why. And if we are not confident, we also let the user know. Then we say, like, okay,
for these recommendations, you need
to do more investigation before you accept them. Okay. So that's why in Aera, there are
three modes of operation, okay, which we have seen across customers. So one,
we call it decision support, what we call human in the loop. So all the right
contextual analytics is available for the humans to make decisions,
but more work is needed from the human to make sure, you know, that the decisions are
correct, okay? The second one is what we call decision augmentation, which is what we
call human on the loop. It means that we are providing the recommendation with high
confidence, based on the past history and all
the data which we have available at that point in time, and we are asking the humans to accept
the recommendation. And then the third mode is what we call decision automation. So basically,
the human is out of the loop. What this means is recommendations with high confidence
where the customer has decided to automate without any user intervention.
Okay. Now, most of our customers are at the augmentation and automation modes, which we have seen there.
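To make the three modes concrete, here is a minimal, hypothetical routing sketch; the 85% threshold echoes the example above, but the logic and names are illustrative, not Aera's:

```python
# Hypothetical routing of recommendations into the three modes described;
# thresholds and names are illustrative, not Aera's actual logic.
from enum import Enum

class Mode(Enum):
    DECISION_SUPPORT = "human in the loop"   # human investigates and decides
    AUGMENTATION = "human on the loop"       # human accepts/rejects a confident recommendation
    AUTOMATION = "human out of the loop"     # executed automatically, no intervention

def route(confidence: float, auto_enabled: bool) -> Mode:
    if confidence >= 0.85 and auto_enabled:
        return Mode.AUTOMATION
    if confidence >= 0.85:
        return Mode.AUGMENTATION
    return Mode.DECISION_SUPPORT

print(route(0.92, auto_enabled=True))   # Mode.AUTOMATION
print(route(0.92, auto_enabled=False))  # Mode.AUGMENTATION
print(route(0.60, auto_enabled=True))   # Mode.DECISION_SUPPORT
```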
Okay. And one of the other things we're seeing is, it's amazing how a new vocabulary and KPIs are emerging around recommendations and decisions.
You know, like decision velocity, decision lead time, percentage of accepted recommendations, percentage automated.
And, you know, this is pretty good, because now they start thinking in terms of how they're automating the decision.
So if you look at it from the trust standpoint, I think,
with all the information, with all the audit trails,
providing the flexibility, you know, of which mode,
and being transparent to the users that, you know, look, I mean,
we can predict or we cannot predict some of these things, that's really helping. Okay. That's one thing there.
Now, the other part of your question, which you're saying, like, okay,
you mentioned, you know, getting blame or credit, you know, for this.
I mean, it's a great question.
And there are similar discussions around this topic happening, you know,
with self-driving cars.
You know, like, who to blame
if the car is in an accident, or, you know,
the car is over-speeding there. So, I mean,
this discussion doesn't have a simple answer, you know. So,
I mean, the question is, is the
skill developer in IT responsible, you know,
for that, or is it the business owner of that process?
So there's a lot of discussion on the decisions
systems are making today
which were not handled in the past.
Because that's the other thing which we are seeing.
We go to the customers and they're not even making decisions
because they don't even know that a problem exists.
Because as humans, I mean,
we cannot handle hundreds and thousands
of decision points in real time.
Okay.
So as a platform vendor,
I mean, what we do,
we provide tools and guardrails
for our customers to design these skills,
robust skills,
and provide them a way to add logic
to course correct in real time.
So for example, the example I was giving you on the forecast:
if the system has detected that there is an oversell,
now what happens if it's a wrong decision?
Then the system will also be detecting undersell in real time.
Okay. So now, if we change the forecast to a much higher number and the sales are not coming,
the system will then detect it as an undersell and will help to adjust
the forecast back, you know, there.
And this is a very important point, because people can make mistakes.
Things can happen.
You know, humans and machines can make them, but if you can catch it and correct it in real time, quickly,
then the impact of that is much, much lower. So, I mean, I don't know if I answered
your question there, but it is, I mean, it's an interesting topic, you know.
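As a toy illustration of that course-correction loop, with invented thresholds and function names rather than Aera's actual logic: the same monitoring that raised the forecast on an oversell signal can walk it back when an undersell shows up.

```python
# Invented sketch of the course-correction loop described above: if a
# raised forecast turns out wrong, an undersell detection adjusts it
# back. Thresholds and names are illustrative only.
def adjust_forecast(forecast: float, actual_sales_rate: float) -> float:
    ratio = actual_sales_rate / forecast
    if ratio > 1.2:   # selling well above forecast: oversell signal
        return forecast * 1.2
    if ratio < 0.8:   # selling well below forecast: undersell signal
        return forecast * 0.8
    return forecast

f = 1000.0
f = adjust_forecast(f, actual_sales_rate=1400)  # oversell -> raised to 1200
f = adjust_forecast(f, actual_sales_rate=900)   # sales didn't come -> walked back
print(f)  # 960.0
```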
I know, I know. I was just wondering, to be honest with you,
I was more wondering whether you have any examples from your clients.
How do they deal with that internally?
Because I guess it probably also is, well, organization-specific,
let's say, and even culture-dependent.
So how does each organization deal with that matter?
What we are seeing, again, across our customers, is that the business is taking ownership
of this. Why? Because they're the ones defining the rules, and the data is generated by the business.
And then
IT is providing the technology
to be able to use
that data and those rules,
you know, what they're defining,
and the constraints.
So my experience so far is that it's tilting more towards the business
taking the ownership, either good or bad. Because if something is good, I mean, definitely,
it helps the customers and they're really happy about it. And I would not say, like, blame or bad,
because if something is not working, if some of the outcomes are not there,
they now go and fine-tune
some of the logic and some of the
things on the recommendations, because they missed some
constraints or some of the things there, and then they go
and change it.
Another thing I was wondering about is
whether there is the option of
somehow accounting for external disruptive events,
whether it's pandemics or some tanker being stuck in Suez or, I don't know, any unforeseen elements that are beyond the scope of the normal predictions.
Is there the ability to somehow ingest them, let's say, in the decision-making process?
Yeah, so there are two parts. I think one is, like, the events you talk about; another is the
external data, for example. So external data, of course, we are using. You know, we have quite a
few different data sets which come out of the box, you know, on the external data side.
Like weather; I've seen it used a lot in forecasting and some of the other things there.
But there's also some paid data, you know, for the customers, like for example Nielsen
and IMS data. Or some of the customers are using, you know, competitor analysis
and things which can then influence, for example, what
promotions they're running, you know what I'm saying? Like, you may want
to run a buy-one-get-one-free, or you may not want to run it if the competitor is also running one
around that time there. So external data definitely has an impact on that. Now, one of the other things which we are doing now is
working with some of the other vendors who are providing, like, you know, external events,
you know, like supply chain disruptions and some of those things, and then being able to take that
and incorporate it into the decision-making process. And when I'm talking to these vendors, some of the vendors are really excited, because
one of the biggest challenges that they have is like, okay, there's a disruption happening,
there's a fire happening there, you know, in this city or this location, or there's
a storm, hurricane.
What is the impact of this on the customer?
Because they don't have the inside data, you see what I'm saying?
So now what happens there, they're saying it may be a three-day delay
on this because of the hurricane.
Now, maybe it's perfectly okay for the customer,
because the customer is holding seven days of inventory.
So three days will not make a big difference.
But in some customers, it may have a big impact there.
So this is the next step, which we are working on now,
to work with some of these other vendors who are providing this kind of
information out there.
But to answer your question, external data, definitely we're using it,
and it has an impact.
But we want to take it to the next level.
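The hurricane example boils down to comparing the delay against days of inventory cover. Here is a hypothetical sketch of that impact check; the names and logic are assumptions for illustration, not Aera's product:

```python
# Toy impact check for an external disruption event, following the
# hurricane example: a 3-day delay only matters if it exceeds the
# days of inventory cover. Names and logic are illustrative only.
def disruption_impact(delay_days: float, on_hand_units: float,
                      daily_demand: float) -> str:
    cover_days = on_hand_units / daily_demand
    if delay_days <= cover_days:
        return f"absorbed: {cover_days:.1f} days of cover vs {delay_days}-day delay"
    return f"at risk: stockout in ~{cover_days:.1f} days, delay is {delay_days} days"

print(disruption_impact(delay_days=3, on_hand_units=700, daily_demand=100))
```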
Okay.
Great.
Thanks.
And I think that's also relevant for the
specific event that you're going to be addressing in a few days, because, if I'm not mistaken,
it's centered around supply chain, and, well, there's been a lot of disruption in supply chains
happening lately. Another question I had was whether you could provide any reference from
your customers, an estimate. I understand it may vary wildly depending on, you know, the
scope of implementation, let's say, but an estimate for a typical, let's say, end-to-end
deployment of your system. So how long should someone expect it to take for the system to
be fully deployed and operational, and incorporate all the data sources and automate all the decisions
and so on? Yeah, so the way it works with our customers, which I have seen, is, because it's a SaaS platform,
so there's not much time needed to set up
and everything there.
So it's a SaaS offering.
For the implementation time, what they do,
they start with what we call skills.
So we have quite a few skills out of the box.
Partners are creating skills.
So skills, think of them as prepackaged applications.
So you can have a skill around demand forecasting, a skill around procurement, a skill around promotions, and other things there.
That's one of the things: if the skill is already available, we just have to connect it. And that's
what I was telling you very early on, that we have built the data model and the integrations for
all of it.
all of it. So if they have like some of all the major ERP systems, like, you know, SAP and Oracle and, you know,
Jerry Edward and, you know,
some of the Salesforce and all that,
then those implementations are, you know,
it can take between four to six weeks
for that skill to be operational, you know, running there.
And this is what we're doing with the customers.
I mean, some of the implementation is super fast,
you know, to be able to up and running, you know, there. Now, because it's a platform, so some of our customers, when we go, they go and say, I want to create some skill, which we have not built yet, for example to 10 weeks for that skill because it's not like
I have to go and boil the entire ocean there. You start from the areas where you want to automate
and then what the customers do, they create one of our customers. I saw the presentation recently in
their business quarterly review.
They have created a roadmap of like 40 skills,
you know, for a year,
which they want to implement around different areas,
like from logistics to procurements,
to inventory, to even to finance
in some of the area in there.
So it's all dependent on the skills.
So you go in, quickly deploy the system there, take one or two skills, roll them out in production, and then have a roadmap of the
journey of how you want to do that. So that's how we have seen implementations.
Right. So organizations do it in an incremental way.
That obviously makes sense.
Yeah.
Yeah.
Okay.
I guess we are close to wrapping up.
So in a way, we approach this a little bit in an unorthodox way because, well, the occasion is that you're actually announcing some new product features and we saved that for last. But I think it was important to just start from the beginning
and lay out the groundwork for what it is that you do
and the way that you approach it.
So now that we've done it,
you may as well refer to the new product updates
and what they bring to the table.
Yeah, so I think one of the things
that you see in our update for the new product is, there's a lot of work happening around what we call our Cortex framework, which is fully integrated into our system.
So, I mean, you have heard about AutoML,
and people are doing it as a different tool.
But what we have done is we have integrated it
into the entire platform, in a way that, for example,
if I take one skill and I'm running it for,
let's say, one customer, then I go to another
customer, the same skill will behave differently. Because once we start getting the data,
the whole process of automatically running AutoML, generating the model, deploying that
model based on their data, and then using that
model to generate recommendations is fully automated, touchless. So you don't have to do
this. So the whole MLOps process and everything there is all automated through this new AutoML
engine, which we have done there. So this is very good, because we don't have to go and modify
skills: the skills can adapt automatically, you know, from one customer to another, based on the data they're seeing.
So that's the one big thing which we have done.
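As a rough sketch of that pattern, here is a generic AutoML-style selection loop using scikit-learn; it illustrates per-dataset automated model selection, not the actual Cortex framework:

```python
# Illustrative AutoML-style selection loop -- not Aera's Cortex framework.
# The idea: the same "skill" retrains per customer on their own data,
# picking whichever candidate model performs best, with no manual steps.
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score
import numpy as np

def auto_fit(X: np.ndarray, y: np.ndarray):
    """Try candidate models, return the best by cross-validated error."""
    candidates = [LinearRegression(), GradientBoostingRegressor()]
    scored = [(-cross_val_score(m, X, y, cv=3,
                                scoring="neg_mean_absolute_error").mean(), m)
              for m in candidates]
    best_err, best_model = min(scored, key=lambda t: t[0])
    return best_model.fit(X, y)  # a deploy step would follow in a real pipeline

# Same skill, different customers: the selected model can differ per dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X @ [1.5, -2.0, 0.3, 0.0] + rng.normal(size=200)
model = auto_fit(X, y)
print(type(model).__name__)
```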
The other thing is, we believe in opening up the platform and making it much easier for the data scientists and others on the customer side.
So if they want to bring their own models, if they have a team and they have created something there,
no problem.
Use the Aera notebooks,
bring your own models,
you know, into Aera,
and then operationalize them in Aera.
Because one of the biggest challenges
that they're facing is,
even though they have a model,
you know, they have a data science team,
how are they going to operationalize it?
So we made that part much, much easier now,
you know, to do that in Aera.
So it's not just Aera models.
I can bring my own model, you know, into Aera
and then operationalize it.
And then all the things around
how we are going to make sure
that the model keeps performing:
model versioning, model drift,
automatic retraining of the models.
So all of that part is now automated.
And this is where, that's why it's very exciting about
that part of the release. And the second part is, like, you know, how we're making it easier and
easier for the users to look at the information in Aera. So like, for example, we are, you know,
releasing the Graph Explorer. Now, I mean, we had a graph internally there, but the users didn't have much access. But now what we have done, we have created a new explorer where the end users can now go and click and explore their data, which is sitting in our CDL, in the graph, and be able to see not just, you know, what the data is, but also what the relationships are, you know, between the data, and then find the information there.
So that's one big thing.
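To picture that kind of exploration, here is a toy example using networkx; the entities are invented, and this is not the CDL or Graph Explorer itself:

```python
# Toy illustration of exploring supply-chain relationships as a graph;
# entity names are invented, not Aera's CDL or Graph Explorer.
import networkx as nx

g = nx.DiGraph()
g.add_edge("Supplier A", "Component X", relation="supplies")
g.add_edge("Component X", "Product P", relation="part_of_bom")
g.add_edge("Product P", "Customer C", relation="sold_to")

# Follow relationships from a supplier all the way to customers.
for path in nx.all_simple_paths(g, "Supplier A", "Customer C"):
    print(" -> ".join(path))
```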
The confidence scores, I've already told you what we are doing there:
making it easier and easier for the users to make the decision,
where Aera will be able to tell them how confident it is.
And yeah, so mostly it's around that.
And that's why it was worth mentioning,
because this is automating,
and, you know, you're coming from an industry,
like, automating the entire MLOps operation
into it, where you don't have to even modify the application,
and the application, what we call our skills, will then adapt by itself.
So that's very powerful.
And then opening it up to a bigger audience,
the data science teams.
So look, you do your work. If you have spent years working on your own models,
we'll help you operationalize them in Aera.
Yeah, I think actually all three of those areas are important, for different reasons,
for the ones that you mentioned. I would personally pick out the graph visualization
out of that, not for any reason that the others aren't equally important, I would say. It's
just because I have a personal, let's say, fond tendency to be attracted to graphs.
And it's something I write about a lot.
So I think business users will appreciate the ability
to drill down in that way
and be able to manage complex recommendations that way.
You know, I'll tell you, George,
the most excitement is being created around that
from the end users.
Because when we're showing it, I mean, I remember doing a demo for a user.
We said, oh, let's just look at your BOM and look at your supply
and look at your customers.
Within a few seconds, we were able to show the entire supply chain map:
how their suppliers are connected to the products and the bill of materials, and then how the bill of materials is
connected, you know,
down to the product, and how the products are going to the customers.
And now they're saying, like, oh,
can I just see the lead times across my entire supply chain?
And those things, you know, it becomes so powerful.
So, from a demo standpoint,
I think you will see, maybe later on, when we're going
to start putting out some videos or something, that it is very exciting. I'm kind of assuming that
probably there is some sort of graph database or graph analytics running under the hood to enable
you to offer those visualizations and explore those relationships?
Yeah, so underlying it, yes, there is a graph database,
you know, a graph database running there.
And, you know, one option is, like, you know,
the commodity graph databases you can use,
you know, from a vendor.
But then on top of it, what we have done, we have built, like, a layer of cache, you know,
and combined the tabular data, you know, with the graph. Because one of the biggest challenges with
the graph in the past, and I think you must be aware of it, is that you cannot use the graph for
analytics. If you want to do, like, massive amounts of calculations
across, like, billions of, you know,
rows there, it takes a very long time, you know,
because you have to go and traverse all the vertices there.
But can you combine the power of, like, a columnar store with the graph?
So then what you can do, you can offload the calculations, you know,
to the engine and then have the graph do the relationships
and the linkage between the data. So there's a lot of work which we have done around bringing
those two together. So then I can scale the graph: you know, if you have 100,000 products with
millions and millions of combinations, you know, there, which can translate into billions of,
you know, relationships and entities, it
will be able to support it.
So we believe that's a big part of what we have done,
not just presenting a graph to the user.
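A toy sketch of that division of labor, with pandas standing in for the columnar engine (an illustrative assumption, not Aera's architecture): the graph holds the relationships, and the columnar side does the bulk aggregation.

```python
# Sketch of the hybrid idea: keep relationships in a graph, but offload
# heavy aggregations to a columnar engine (pandas here as a stand-in).
import networkx as nx
import pandas as pd

g = nx.DiGraph()
g.add_edge("Supplier A", "SKU-1")
g.add_edge("Supplier A", "SKU-2")

sales = pd.DataFrame({"sku": ["SKU-1", "SKU-2", "SKU-1"],
                      "units": [100, 250, 75]})

# Graph answers "which SKUs relate to this supplier?"; the columnar
# engine does the bulk math over (potentially billions of) rows.
skus = list(g.successors("Supplier A"))
total = sales.loc[sales["sku"].isin(skus), "units"].sum()
print(f"Units across Supplier A's SKUs: {total}")
```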
Right.
Well, it sounds like a use case in and of itself.
And well, you may want to publicize it further down the road.
And in case you have worked with, well, commercial graph database or analytics vendors,
I'm sure they will be very keen to work with you on that as well.
Yes, and we are already working with the vendors on the underlying layer.
Okay, great.
So, well, looking forward to learning more about what's happening under the hood there.
Yeah.
Thank you.
I hope you enjoyed the podcast.
If you like my work, you can follow Linked Data Orchestration on Twitter, LinkedIn, and Facebook.