Orchestrate all the Things - Cloud, microservices, and data mess? Graph, ontology, and application fabric to the rescue by EnterpriseWeb. Featuring CEO & Founder Dave Duggal
Episode Date: October 27, 2021. Knowledge graphs are probably the best technology we have for data integration. But what about application integration? Knowledge graphs can help there, too, argues EnterpriseWeb. Article published on ZDNet.
Transcript
Welcome to the Orchestrate All the Things podcast.
I'm George Anadiotis and we'll be connecting the dots together.
Knowledge graphs are probably the best technology we have for data integration.
But what about application integration?
Knowledge graphs can help there too, argues Enterprise Web.
I hope you will enjoy the podcast.
If you like my work, you can follow Linked Data Orchestration on Twitter, LinkedIn, and Facebook.

I'm Dave Duggal, the founder and CEO of EnterpriseWeb.
I've essentially spent my entire career starting, building, growing, and turning around companies.
This is my latest instantiation of myself. I'm a regular speaker at tech conferences,
including some of yours,
an occasional blogger, and the inventor of 15 awarded patents
on complex distributed systems.
Okay.
So that's me.
I don't know if you want me to go a little bit more broadly.
Company, I think you also said.
Yeah, yeah, yeah.
It's a good introduction of you, basically.
And I was wondering, yes, the next part would be if you'd like to describe the features of EnterpriseWeb's no-code platform, and I think what makes it relevant to today's conversation is that at the core of EnterpriseWeb, we're using graph modeling and graph processing.
So that's the key.
So I think this is very distinct from traditional uses of graphs for analytics and recommendations, as we might see in the Semantic Web. Different, you know, not versus, right? Maybe actually complementary in many ways.
But EnterpriseWeb is a no-code platform for rapidly composing complex distributed domains, so that you can model your operations across your increasingly distributed universe of endpoints, and connect them in end-to-end, event-driven processes.
It's a really big problem for organizations around the world. As they've disaggregated from monoliths to services to microservices and now serverless functions, they've had an explosion of endpoints, which they now have to connect and manage, and they struggle with that.
So as much as being modular and distributed and cloud-native are all good things, an enterprise still has to act as an enterprise.
It still has to have unified visibility, discovery.
It wants to have automation of its processes.
It wants to have consistency in policy.
It wants to have management.
So Enterprise Web is a no-code platform.
Very much for this moment in time,
people are struggling with complex distributed systems.
Enterprise Web uses graphs to model those systems,
to model the complex graph dependencies, and then to use graph processing to efficiently process
all those dependencies so that people get real-time intelligent services. Does that
make sense?
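The idea of modeling dependencies as a graph and then processing that graph can be sketched in a few lines. This is purely an illustration of the general technique, not EnterpriseWeb's implementation; the endpoint names are invented.

```python
# A minimal sketch of graph modeling plus graph processing: endpoints and
# their dependencies as a graph, processed into a valid activation order.
# Names are invented for illustration only.
from graphlib import TopologicalSorter

# Each key depends on the endpoints in its value set.
dependencies = {
    "billing-api": {"customer-db", "event-bus"},
    "notifier": {"event-bus"},
    "event-bus": set(),
    "customer-db": set(),
}

# Graph processing resolves all the dependencies into a workable order,
# so dependencies always come before their dependents.
order = list(TopologicalSorter(dependencies).static_order())
```

Here the graph processing step (topological sort) stands in for the richer dependency resolution discussed above.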
Yeah. And well, even though I was at least somewhat familiar with what you do, I have to say that the graph aspect that you chose to focus on, at least in this conversation and in the previous couple of conversations we've had, somehow did not occur to me. It makes lots of sense, actually, that if you are trying to solve the kind of problems that you are, then you would eventually stumble upon graph, let's say. But, well, maybe it's me, but I didn't think it was that pronounced, let's say, in your messaging, or such a central piece of how you approach things.
Under the hood, well, I'm sure you're going to have the chance to explain in more detail
the ways in which you use that.
But I think it's maybe a good idea if we take a step back before we actually go into all
the nitty gritty details of graphs and how you use them,
and to try to explain a little bit the actual domain that you're in.
And, well, I'd like to share how I first got to be acquainted with that.
And in some ways, I think it's kind of like an age-old problem in IT.
So, in the beginning it was in-house services and, you know, off-the-shelf solutions, and people were trying to build their own services; everything was running on their own data centers. And obviously, dealing with a specific, isolated, let's say, functionality is all fine and well, but the real value comes when you are able to integrate those islands of functionality across the enterprise to get bigger things done. And so this has always been something that people were pursuing in a number of different ways, with a number of different platforms that have come and gone in and out of fashion over the years: business process modeling, and data and application integration, and so on and so forth.
So, as you hinted in your introduction, over the years this has gotten to be even more complex and complicated, because there's a proliferation of services and endpoints. We've moved from mostly 100% on-premise to, well, if not 100%, then in some cases close to that in the cloud. And we've gone from monoliths, so single and uniformly architected services, to microservices, and from that, as you also mentioned, to serverless functions. So it's getting more and more granular, which is good in a way, but it also means that it's getting more and more complicated. So, since you have been in that space for quite a while, I was wondering, you know, what are your takeaways from everything you've seen throughout the years?
Yeah, so I think one of the points you made at the outset was interesting: how we would stumble on graph. And in a way, you know, I don't know if I'd use the word stumble. I think that the reason we came to graph was part of a very intentional design, right? Because the way that the traditional development that you're talking about happens, right, even to this day, is manual code and manual integration, primarily, right? You know, you code and recode, integrate and reintegrate. And of course, that does not scale for today's demands.
So we looked at graph structures, non-hierarchical structures, dynamic structures, right?
We were looking for real-time event-driven applications.
We wanted dynamic typing, right?
Prototypal inheritance, not hierarchical,
class-based inheritance.
So we were looking at a lot of properties
we wanted to achieve,
and those properties drove the design decisions.
And that led us to graph very quickly.
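For readers unfamiliar with the distinction, prototypal inheritance can be sketched as delegation up a chain of objects, rather than a fixed class hierarchy. This toy example is illustrative only; the `ProtoObject` type and the property names are invented, not EnterpriseWeb's model.

```python
# A toy sketch of prototypal (delegation-based) inheritance, as opposed to
# hierarchical, class-based inheritance. Purely illustrative.

class ProtoObject:
    """An object that delegates unknown properties to its prototype."""

    def __init__(self, prototype=None, **props):
        self.prototype = prototype
        self.props = props

    def get(self, name):
        # Look locally first, then walk up the prototype chain.
        if name in self.props:
            return self.props[name]
        if self.prototype is not None:
            return self.prototype.get(name)
        raise AttributeError(name)

base_endpoint = ProtoObject(protocol="https", timeout=30)
router = ProtoObject(prototype=base_endpoint, vendor="Cisco")

vendor = router.get("vendor")      # found on the object itself
protocol = router.get("protocol")  # delegated up the prototype chain
```

Unlike class inheritance, the chain can be rewired at runtime, which is the dynamic quality being described here.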
And actually, part of that was also our reading of the computer science
and the system engineering literature.
So to your point, at one point, everything was on a mainframe.
Actually, before that, it was just Turing.
It was this never-ending tape.
We had the zeros and ones on a tape.
But then it became a mainframe.
The mainframe was on-premise, and it was the big centralized monolith. Everything was in there. And now what's
interesting about the mainframe, one of the reasons they're still around is they're very powerful.
One of the reasons that mainframes are very powerful is that on the mainframe, data and code live together.
There's not this false divide of the data team,
the application team.
It didn't exist yet.
It was all together.
Data and code were just zeros and ones with addresses.
A mainframe has what are called unifying principles.
It essentially has an isomorphism.
It has common methods, common way of
representing things and common methods for handling those things that make the mainframe really
powerful. Then comes distributed computing. And of course, you know, that's great. I mean, so the mainframe had its heyday, right? When things were highly centralized, slow to change, right? That was fine. Command and control, right? So now we're distributed. We have a whole host of new capabilities, but we also have a whole host of challenges. The reason we have a whole host of challenges is that when we disaggregated from the mainframe, and then monolithic applications, which were just tightly coupled balls of mud, right?
To then more service-based applications,
to then microservices and now serverless functions,
we disaggregated without having a programming model
for composition and management.
In other words, we took everything apart,
humpty-dumpty broke, right?
All the pieces were on the floor,
but we failed to actually introduce a mechanism,
a means, a method for composing those things back together.
And so look at where we are today, right?
It's exactly the problem we have, right?
So, okay, you know, 10 years ago,
we had big bloated middleware stacks.
Those are starting to go away now.
What are they being replaced by? Cloud-native
tools. What are cloud-native tools? They're just disaggregated capabilities that came out of the
middleware stacks, right? So you used to have 12 bloated middleware components that did everything,
and that was complicated as it was, and it was pretty opaque, and it was tightly coupled together,
and things became pretty static. Now, go to the Cloud Native Computing Foundation's, you know, vendor landscape, and you'll see hundreds of cloud-native tools. And what do those tools represent? Very small, granular, discrete capabilities. Like, oh, Kafka: events. You know, you have various things doing very specific functionality.
The problem with that now is, who ties that all back together? Right?
Kafka is not your entire information system.
Kafka is not an application development platform.
Kafka is not really a data management platform.
So how do I get an information system out of Kafka and a dozen other tools?
They make it sound so easy when you read the industry articles, right?
You look at, it's the same old pitch from the middleware days.
Every middleware component looks fine in isolation.
Hey, look at this new component.
You need it for analytics.
Look at this new component. You need it for events. Look at this one thing, this component. But what people forget
is it's the N plus one problem. Every time you add another component, you're adding overhead
and complexity to your system architecture. You're not recognizing it. It's accidental complexity.
And it's the same darn problem with your cloud-native toolchains, which have now become unwieldy. And now, in the cloud, when you put those things together, who is actually caring about the consistency problem? Whose responsibility is immutability? Whose responsibility is idempotency? Whose responsibility is asynchrony and concurrency? Or let's talk about non-functional concerns, like security and compliance.
Whose responsibility is it? Oh my God, the systems engineer has to put that all together now, right?
EnterpriseWeb is a reaction to that kind of modular reductionism, right?
When I founded this company,
I had experience building things those ways, and seeing how static that left the operations of the companies I was running.
The things that were supposed to be driving my automation were becoming concrete and were stopping me from being agile.
So I was like, okay, I want to solve this problem fundamentally.
I've been around the block a couple of times.
I want to address this. I read hundreds of academic papers, hundreds of engineering papers and articles, tech articles and things like that. I just processed this entire universe and came up with a thesis. I said, okay, you know what, there's really a right way to do this: use graphs to create a no-code environment, so you hide all the technical complexity. Use graphs to model declarative relationships between all your solution elements. Then use graphs to enable declarative composition of objects into services, and then use graphs for chaining services into event-driven processes. So EnterpriseWeb essentially reintroduces unifying properties. You have one way of representing everything. Whether it's a Cisco router function, a cloud-native tool, a database, everything is represented as a graph object. It's an abstract data type in EnterpriseWeb.
All those objects are modeled up to an ontology, a graph knowledge base, which has higher-level enterprise and systems concepts, right? And so everything is mapped up to this higher level, an upper ontology, really, right? And that is designed to define the higher-level domain. And then you have the objects of your domain. The objects of your domain are now completely defined in metadata and relationships and state, and now I can use those things to drive my processes. My model is describing all of my objects. It presents an abstraction, a common language for describing all my heterogeneous and distributed solution elements, which are actually all snowflakes. But it's not useful for me to work with them as snowflakes. I'd like to have one common abstraction layer, one common consistency layer, where everything looks the same to me.
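The idea of lifting heterogeneous "snowflake" elements onto one common abstraction layer can be sketched with a simple mapping. All concept, field, and record names here are invented for illustration; this is the general upper-ontology technique, not EnterpriseWeb's actual model.

```python
# Hedged sketch: mapping heterogeneous source records onto a shared concept,
# the way an upper ontology provides one common vocabulary.

UPPER_ONTOLOGY = {"Endpoint": ["name", "protocol", "location"]}

# Each source describes the same kind of thing with its own vocabulary.
cisco_router = {"hostname": "r1", "proto": "netconf", "site": "nyc"}
database = {"db_name": "orders", "driver": "postgres", "region": "eu"}

MAPPINGS = {
    "cisco": {"hostname": "name", "proto": "protocol", "site": "location"},
    "db": {"db_name": "name", "driver": "protocol", "region": "location"},
}

def lift(record, mapping):
    """Translate a source-specific record into the shared Endpoint concept."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

endpoints = [lift(cisco_router, MAPPINGS["cisco"]), lift(database, MAPPINGS["db"])]
# Everything now "looks the same": one schema for heterogeneous elements.
```

After lifting, both records expose the same three fields, which is what makes uniform discovery and processing possible.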
You could never do this in a hierarchical approach, right?
Because it would be too static, too rigid, right?
It wouldn't support the real-world diversity, the real-world complexity, the real-world one-to-many relationships, the real-world change, right?
Graphs are inherently flexible,
extensible, and adaptable, right?
They have these attributes
that make them valuable for doing what we do.
So it's core to our IP,
those 15 patents, right?
Is our use of graphs
to solve these problems. I mean, so, you know, just one last segue on that: traditionally, of course, the use of graphs has been for analytics and recommendations, not for processing transactions and business processes, right? The reason for that is because of the complexity of these individual objects.
We're not just talking about the Semantic Web
with huge collections of facts.
What we're talking about is a domain that
could be equally complex, but with very rich objects.
They're individually rich.
They have lots of properties, behaviors,
dependencies, constraints, affinities. All of those things are modeled in our graphs
in Enterprise Web, right? They're all aggregated and addressable. They're made
so that they're hyper-efficient for processing. So EnterpriseWeb is making graphs practical: a practical implementation of graph technology for application modeling and development. Actually, really, design, deployment, and management. So it's a full platform. So as opposed to semantic or labeled-property graphs, which are really query engines, right, they're models that are accessed by queries, EnterpriseWeb is doing queries and commands, right? We're actually taking actions against these graphs.

Yeah, actually I was going to highlight
precisely this aspect.
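The queries-versus-commands point above can be sketched in a few lines: the same graph model serves both read-only queries and state-changing commands. Everything here is invented for illustration; it is not EnterpriseWeb's API.

```python
# Sketch of "queries and commands" against one graph model: queries read,
# commands take actions that mutate the modeled state. Illustrative only.

graph = {"router-1": {"type": "router", "state": "stopped", "links": ["db-1"]}}

def query(node_id):
    """Read-only access to the graph model (a copy, so callers can't mutate)."""
    return dict(graph[node_id])

def command(node_id, action):
    """Take an action *against* the graph: mutate the modeled state."""
    if action == "start" and graph[node_id]["state"] == "stopped":
        graph[node_id]["state"] = "running"
    return graph[node_id]["state"]

before = query("router-1")["state"]
after = command("router-1", "start")
```

The point is that the graph is not just a model to be queried; it is the live substrate that actions run against.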
So a lot of what you described should be familiar to people doing, well, what's today called enterprise knowledge graphs, what at some point used to be called data integration, or, for some other people in some contexts, what they call master data management, or what have you. But basically, the underlying idea is the same. So, fine, I have all these data islands, these data sources lying around in the enterprise. Each one of those has its own data model, which may be arbitrarily complex, and I want to actually have a unified view over all of those, without having to move them into a data warehouse and do all those nasty ETLs, which cost lots of processing time and lots of effort. So, federation, and a sort of common data model that enables me to look at the entire landscape, and hopefully that way have a better picture.
The difference is that in your case, you don't just do that, but you actually do that as
a means to an end, which is that you want to functionally integrate those systems to
which those data models belong. So I was wondering if you could tell us: first of all, how do you introspect all those systems to get the data models? And, you mentioned in your previous answer a sort of top-level ontology. So was that a kind of metaphor, or do you actually use something equivalent to an ontology? How do you populate this ontology of yours, and then how do you do the data alignment? Because that's another part which is common with people doing enterprise knowledge graph work: in addition to creating a data model for each of their data sources, they need to somehow align those. So how do you approach those challenges?
All right, so that's a lot, so I have to unpack all of that. But I completely agree. I mean, I would like to think that these things should be familiar. I think the most recent movement now is the data fabric, right? The data mesh, I'm sorry, is what people are saying now, right? And it's for these very same motivations, right? And it's actually still a to-be thing. There's nobody really offering, you know, data meshes today.
It's an idea that people would like because people would like all visibility, right?
People would like to have a common place to query.
People would like to be able to introspect
their diverse and distributed data sources.
Of course, we're extending that
to the application side as well, right?
We look at data and application being the same,
data and functionality being the same side,
two sides of the same coin, right?
You know, just as it was in the mainframe,
data and code ran together.
It's not data against code or code against data.
I think, unfortunately, we have some folks who idealize or reify data over everything, or code over everything. And both are silly, actually, really, because applications generate data and data is consumed by applications. This is one thing, really, right? So in our world, we call this approach not a data mesh; we call it an application fabric. Data sources are just one aspect, but we're also connecting systems: not just databases and traditional data sources, but also equipment, devices, callable functions. So we call that the application fabric.
And I think then you segue from that basis, from the foundational motivation there for visibility. And I think you were asking how we bring these elements in: how do we discover them, and then how does that connect to a model? I'll answer in reverse order.
So the model.
So Enterprise Web comes out of the box with a model that's ready to use. Remember, we're a no-code platform,
so the whole point of Enterprise Web is making it easy.
Improve IT productivity, accelerate service velocity,
provide business agility.
Traditionally, no-code has been used by business users and citizen developers for relatively simple applications.
Enterprise Web supports real-time, event-driven, distributed applications,
you know, web-scale processes. Actually, sort of the holy grail right now in the cloud is something called stateful cloud-native applications, the ability to use state
with cloud-native applications. That's a really big deal. We do that.
So Enterprise Web is a no-code platform that tackles the hard problems
that enterprise struggles with.
Because if you have a no-code solution
that's sort of just for simple things,
as soon as your application becomes mildly complex,
guess who you're calling again?
IT, right?
And now you're coding and integrating and reintegrating again.
And you're now sort of half pregnant. Right? So the idea of Enterprise Web was to create
a unified model, a unified programming model, a unified approach to help people connect
their distributed universe endpoints and run automation across those.
So to your point, though, about let's start with the model.
So Enterprise Web comes out of the box with a baseline model
because guess what?
We've been around.
We know what an enterprise looks like.
We know an enterprise is made out of units, locations, facilities, people.
It has all of these kinds of concepts.
And we can model those concepts in the abstract, right? As universal concepts, right? And then
same thing with system concepts, right? EnterpriseWeb understands the cloud, right? We understand distributed systems. We understand types, formats, schemas, right? We understand all those concepts. Now, those concepts apply to many, many, many implementations and might apply very differently in many, many different contexts. That's fine. That's the whole role of an upper ontology, right? It's to actually provide that sort of higher-level common place, right, that's useful.
And then by having that model, it makes it easy for us then to go to customers, and they can either start working from scratch, or use our baseline as their starting place, and they could just model manually if they want. And they could just start with a use case. They can start that way. So this is completely manual, and this would be the role of, let's say, a software architect, right? So a solution architect or systems architect would come in, and they would model the domain, whatever the scope of the domain is, right? So we work with customers, like in telecom, doing, you know, 5G edge, which is just the biggest distributed-systems domain that there really is out there right now that needs to be automated.
And Enterprise Web is on the cutting edge of that.
Or it could be the domain of a line of business. It doesn't really matter, but you could start there. You could manually go in there, and we give you a design environment that is accessible both by API and by user interface. And so you have a design environment that allows you to rapidly model your domain inside EnterpriseWeb, using the baseline as a jumpstart.
But then we also give you tools: well, we can import RDF, we can import XML, we can import pseudo-UML, even documents. You can give us a PDF document that has pseudo-UML in it, and we can do entity extraction, entity recognition, algorithmic extraction, right, and map those to the concepts in our system. And we've done this for companies that are in the domain of a hundred billion dollars plus, right? Where they're like, oh, we have a JSON file. Could you import that? Our model might be done in a less dynamic, more hierarchical, less useful way; can you import that model into EnterpriseWeb and make it actionable? The answer is yes, we do that in seconds. With any such import, you'd want to have somebody who knows that domain manually review it, right? Because that's just responsible.
Now, our algorithmic mapping tends to be 95, 98, 99% accurate. Of course, in a big domain, that remaining few percent can require quite a bit of massaging here and there.
But it's still a huge bootstrap, right?
And so that gives a customer the ability to implement a domain within EnterpriseWeb very quickly. So essentially, EnterpriseWeb is almost like a DSL for DSLs, right? They're creating a domain-specific language inside EnterpriseWeb, which provides the foundational metadata, relationships, and policies for them to sort of navigate that domain, and use that to model their domain objects.
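The import path described here, taking an external hierarchical model and making it actionable, can be sketched as walking an imported structure and typing each entity against known concepts. The JSON shape, the concept names, and the flagging behavior are all invented for illustration; this is not the actual import pipeline.

```python
# Illustrative sketch: walk an imported hierarchical (JSON) model and register
# each entity as a typed node, flagging anything that can't be mapped so a
# human who knows the domain can review it.
import json

known_concepts = {"service", "database", "function"}

imported = json.loads("""
{"type": "service", "name": "billing",
 "children": [{"type": "database", "name": "ledger"},
              {"type": "queue", "name": "jobs"}]}
""")

nodes, unmapped = [], []

def walk(entity, parent=None):
    target = entity["type"] if entity["type"] in known_concepts else None
    (nodes if target else unmapped).append(
        {"name": entity["name"], "concept": target, "parent": parent})
    for child in entity.get("children", []):
        walk(child, parent=entity["name"])

walk(imported)
# `unmapped` holds the small remainder that needs manual review.
```

This mirrors the workflow described: the algorithmic pass maps the bulk automatically, and the residue goes to a domain expert.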
So once that's in there, and that can happen in a day, two days, depending on the complexity of the domain.
We've already modeled some domains quite richly.
So in the life sciences or in telecom, just this project with SAP, you know.
So we've actually modeled some domains that are also like industry kits.
You can almost imagine starter kits where, you know, we have our baseline upper ontology,
plus then we have sort of the domain.
And now we even have catalogs in those domains. We have catalogs of, you know, widely adopted objects that are already used in those domains, so we can jumpstart things quite effectively. Then, what happens when we start onboarding your own solution elements, right, your own domain elements, whether they're applications and artifacts, systems and services, service endpoints, databases and devices, et cetera?
What you do is, EnterpriseWeb exposes, effectively, a wizard. That wizard, you can call it a dynamic UI or an interactive API, but effectively you're interacting with our type system.
You say, I want to onboard this, or I want to model this endpoint.
I want to onboard X, right?
And the system goes, you know, it wants you to type it, right?
So then you say, okay, well, what is X?
X is this.
Okay, well, we know a lot about these kind of things.
And then it asks you, like, the next 10 questions, right?
And then you feed those 10 questions, and it goes, okay, well, we know a lot more about these things now, right?
And then what the system is doing is auto-filling your properties and
auto-generating your interfaces as you go.
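The wizard interaction described above can be sketched as a type system that, once it knows what X is, knows which questions to ask next and which properties it can auto-fill. The types, questions, and defaults here are all invented examples, not EnterpriseWeb's actual type system.

```python
# Toy sketch of a typing "wizard": declaring the type tells the system which
# questions remain and which properties it can fill in itself.

TYPE_SYSTEM = {
    "router": {
        "questions": ["vendor", "mgmt_ip"],
        "defaults": {"protocol": "netconf", "port": 830},
    },
    "database": {
        "questions": ["engine", "host"],
        "defaults": {"port": 5432},
    },
}

def onboard(type_name, answers):
    spec = TYPE_SYSTEM[type_name]
    missing = [q for q in spec["questions"] if q not in answers]
    if missing:
        # The wizard asks the next questions it still needs.
        return {"status": "need_input", "next_questions": missing}
    # Auto-fill the properties the type already knows about.
    return {"status": "onboarded", **spec["defaults"], **answers}

step1 = onboard("router", {"vendor": "Cisco"})
step2 = onboard("router", {"vendor": "Cisco", "mgmt_ip": "10.0.0.1"})
```

Each answer narrows the type, and the system fills in everything it can infer, which is the auto-filling behavior being described.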
I mean, to give you an example, and again,
this might not be an example that many people are hands-on familiar with,
but I think they could guess at the complexity.
You know, in telecom, something like a Cisco router, a Juniper router, a Ciena router, those are very advanced technologies, in fact, with what they do at such high speeds, but they're actually fairly straightforward.
We can onboard something like that in like 30 minutes.
In fact, we have all three of those already in our catalog,
so they're available as models already. But if you want to take something more complex, you know, that might be more disaggregated, might be more cloud-native in its design, well, that might take an hour to onboard in EnterpriseWeb.
In telecom, traditionally, those kind of elements take four to six weeks and four to
five teams to onboard. So we're taking elements that in telecom are known to take weeks with multiple teams to onboard, and we reduce that effort to 30 to 60 minutes, right? We even do more complex things,
the 5G core and all these other things
that are really fundamental
to today's modern communication structures.
And we can onboard those in a couple of hours,
but these are vastly complex technologies.
So these are radical simplifications for that onboarding.
And here's the bottom line for that.
So you put that effort in, right?
So you have some architectural effort, right?
You do some modeling of the domains.
You've got some onboarding effort.
That's like your IT team.
There's some onboarding.
And essentially they're populating a catalog.
And we actually support things like BizDevOps and DevSecOps, right? So that you can actually go through a process of, you know, testing things as you onboard them: automated testing, and human oversight and approval, all those kinds of things. So you can put it through. It then completes those processes and gets registered in a catalog.
Now all those things in the catalog are completely described in metadata,
right?
I can discover them the same way, right?
I use that metadata to discover them.
If I'm a service designer now, I literally go to the catalog,
point and click it, I search it, point and click the things I want.
I could search by type.
I could search by name.
I could search by property.
I find the items I want.
I compose them onto a design palette.
I don't have to know anything about the technology about those objects.
I don't have to understand their properties, their formats.
I don't have to know their protocols.
I don't have to worry about manual integration.
The system will mediate between them for me, because the system understands each object independently.
Each object is a well-understood, isolated, immutable object in EnterpriseWeb.
It understands each one of them in isolation, and when you compose them,
it understands the distinctions between the schemas, the formats, the types,
and will mediate the relationships between those elements.
And it's all doing that based on its graph knowledge, based on its higher level knowledge of the domain,
which it translates then to low-level implementations.
And so, the runtime: the service designer composes that. All they really have to do is pick the elements they want, write essentially the service logic they want, maybe the SLAs they want, but that's all business logic, right? That's all policy. It's all fully declarative. They don't even have to worry about where it's going to run. The service could essentially run on any infrastructure, right? It could run on Amazon: we deploy the service to the Amazon cloud, to the Google cloud, to an enterprise cloud. The system doesn't care. That endpoint where it's going to run, that environment, that target host, will also be a modeled object in EnterpriseWeb; it's another type of object. And when you try to deploy that service, the service will look for either the specified target-host environment to run on, or it will translate the policies to identify an available and applicable target host and deploy the service onto that.
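Declarative placement of this kind can be sketched as matching a stated policy against modeled target-host objects, instead of hard-coding infrastructure. The hosts, clouds, and policy fields here are invented for illustration.

```python
# Hedged sketch of policy-driven placement: the designer declares what the
# service needs; the platform finds an applicable modeled target host.

hosts = [
    {"name": "aws-east", "cloud": "amazon", "region": "us", "gpu": False},
    {"name": "gcp-eu", "cloud": "google", "region": "eu", "gpu": True},
    {"name": "onprem-1", "cloud": "enterprise", "region": "eu", "gpu": False},
]

def place(service_policy):
    """Return the first modeled host satisfying every declared policy."""
    for host in hosts:
        if all(host.get(k) == v for k, v in service_policy.items()):
            return host["name"]
    return None  # no applicable target host

target = place({"region": "eu", "gpu": True})
```

The service declares only its requirements; which cloud actually satisfies them is resolved at deploy time, which is the infrastructure independence being described.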
And there's translation and transformation to do that, because, remember, running services on Amazon is similar to running them in Google or Azure, but it's not the same. They all have different service endpoints, right? Same thing with an internal target host; they'll have different service endpoints as well for their infrastructure services. But EnterpriseWeb is making those services protocol-agnostic and infrastructure-independent. So we've really reduced the application effort to a design function of composition, and then the system is handling all the implementation detail. It's providing a real-time trace of every single thing the system did to perform those tasks. If something fails along the way, EnterpriseWeb is transactional in the sense that it has retries, compensations, and rollbacks, so that is all done.
And that's all done by graph.
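The retries-compensations-rollbacks behavior mentioned here is, in general terms, a saga-style pattern: each step has a compensating action, failed steps are retried, and on exhaustion the completed steps are undone in reverse order. This is a minimal generic sketch of that pattern, with invented step names, not EnterpriseWeb's engine.

```python
# Minimal saga-style sketch: retries per step, plus reverse-order
# compensation of completed steps on failure, with a trace of every action.

def run_with_compensation(steps, retries=2):
    """steps: list of (name, action, compensate). Returns (ok, trace)."""
    trace, done = [], []
    for name, action, compensate in steps:
        for attempt in range(retries + 1):
            try:
                action()
                trace.append(("ok", name, attempt))
                done.append((name, compensate))
                break
            except Exception:
                trace.append(("retry", name, attempt))
        else:
            # Retries exhausted: roll back everything completed so far.
            for done_name, comp in reversed(done):
                comp()
                trace.append(("compensated", done_name, 0))
            return False, trace
    return True, trace

log = []
steps = [
    ("allocate", lambda: log.append("alloc"), lambda: log.append("free")),
    ("configure", lambda: 1 / 0, lambda: log.append("unconfigure")),
]
ok, trace = run_with_compensation(steps)
```

The trace corresponds to the real-time record of every action the system took, and the compensations restore a consistent state when a step cannot be completed.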
And what we've done here is now it's interesting.
In the application world, people are starting to talk about graph.
They tend to model in graph and then implement traditional, right?
Things like network topologies are understood to be graphs,
but they don't have ways to deploy them as graphs, right?
EnterpriseWeb, by keeping things unified, by doing graph modeling and graph processing, once again, we're stripping out overhead.
We're stripping out cruft.
We're stripping out as much as possible, right, to drive the most optimized decisions in real time.
And, you know, we're hiding this complexity so that you don't have to worry about those details.
But we can expose it: a person with the right permissions can see those details.
So, I'm sorry, I answered at length. You actually had like three or four questions in there, and I wanted to sort of address each one as it's due. So sorry for the monologue.
Well, I assure you that I listened carefully to all of it. And to prove it to you, one thing I wanted to ask was about the router example you used in your answer.
And for most of us, that's a little bit exotic.
So most of us don't really have to deal with this type of hardware.
They don't have to configure it.
They don't know its properties and so on and so forth.
What I guess probably everyone is familiar with is something like a customer.
So a customer class, let's say.
So, you know, your average organization probably has a CRM
and the CRM does have the notion of customer
and, you know, certain properties that the customer has.
You know, a customer data model in a CRM probably has an email address
or, I don't know, a number of campaigns that have reached out
and so on and so forth.
But then they also have a system for invoicing or finances and they definitely have a customer
there.
So they may have properties such as their bank accounts or previous payments and so
on and so forth.
And you get the picture: probably every single system that an average organization has must have some notion
of a customer. The reason I'm mentioning that is that I think it's
a good example to kind of figure out: how do you integrate all of those, and
how do you keep track of different, potentially conflicting properties?
Because, well, let's take email. You may have a property for a customer's email
in your CRM, and you may also have another property for the same customer's email in
your invoicing system, and they may be called different names. And obviously you're going
to have different operations that you can do in your CRM and in your invoicing system.
So how do you reconcile those?
And after you do, does that mean that, as an application builder,
you only have to interact with the Enterprise Web notion of a customer?
And if you do, are you then able to execute every potential method, let's say every
potential function, that every customer object across your different systems has?
Yeah, so that's actually a great example.
And thanks for bringing me back down to earth, maybe more human accessible use case here.
So people tables, customer tables, of course, those are like the common
canonical references, like in master data management or any kind of thing like that.
So part of it, I would say, is, well, there are some things that we're just going to have to do.
They're just functions of that effort, right? But it does make a difference how we're doing it, right?
Traditionally, master data management hasn't worked out too well, right?
Despite good intentions and spending lots and lots and lots of money on systems and
system integrators and all those things like that, because it tends to drive towards a
centralized and static representation of those concepts.
Right?
I think that's fair.
And even today, right, it's like the problem is as soon as things become static, you're dead.
Right?
Because the truth that you're unveiling is that nobody has one representation of customer. They have many representations of customers across systems, and the same thing with people and many, many other concepts. And it's completely normal. For us, when we start anything,
because we're an enterprise-grade solution, the first thing we always do is normalize all the
core ERP tables, right? You just have to do it. Because when we go to our customers, we
don't want to just be another people table or yet another customer representation, right? We're an integration system. What about interoperability and automation across these things? So it's
one of the things we do. We call it phase zero of any project: take the
core ERP concepts, because they're core for everyone, right? Everyone has a notion
of customer, right? Pretty much. And people and things like that. You can map things up to that concept,
and it supports a form of normalization, right?
And so in Enterprise Web, the way we do it,
very specifically, though, is we would onboard
each of the participating systems or databases or services, right?
We'd onboard them all discretely, in the process I defined just before,
right? Each one of them would be, okay, let's model this source.
It's a database; what's it exposing? Right. This one is, you know,
a service, an endpoint; it's a cloud service. Somebody is using a CRM in the cloud.
They have a separate notion of customer over here. Fine.
Well, let's onboard them all.
They're obviously different types of endpoints.
One's an on-premise database. One's a cloud-hosted service, but that's okay.
They're still providing data. In this case, they're not functions or applications; they're providing data to us, maybe in the first instance,
but they could be things that consume enterprise web ultimately as well.
We model all those discrete endpoints,
and then we'd essentially create an entity representing the union of all of them
for a concept, right?
And that would be a mapping, right?
So essentially, it's the minimum viable mapping of the common properties across them, right?
And for us, actually, generally, we're talking about data now.
But again, remember, you do this for everything.
So if we go back to the router example,
it would be the properties, behaviors,
the dependencies, the constraints,
the minimum viable mapping.
At what level can we say a customer is a customer
is a customer across all of them,
using a common set of metadata?
And what do the customers relate to?
Can we create a common set of relationships, where customers relate to things,
so we can understand semantically what a customer is?
Can we create that minimal metadata and domain semantics such that we could
use it to query and discover any of those sources,
create an aggregated resource, mapping and merging where possible.
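The "minimum viable mapping" idea can be sketched in a few lines: map each source's field names onto canonical properties, then union the normalized records into one aggregate entity. This is a hand-rolled illustration with invented field names, not Enterprise Web's model:

```python
# Hypothetical sketch of normalizing a "customer" concept across two
# systems (a CRM and an invoicing system) into one aggregate entity.
# All field names and values here are invented for illustration.

# Per-source field mappings: source field name -> canonical property name.
CRM_MAP = {"email_address": "email", "full_name": "name"}
INVOICING_MAP = {"contact_email": "email", "customer_name": "name"}

def normalize(record, mapping):
    """Rename a source record's fields to canonical property names,
    keeping any source-specific (idiosyncratic) fields as-is."""
    return {mapping.get(field, field): value for field, value in record.items()}

def aggregate(records):
    """Union the normalized records into one aggregate entity:
    the common core is merged, unique properties are kept."""
    entity = {}
    for rec in records:
        entity.update(rec)
    return entity

crm = {"email_address": "a@example.com", "full_name": "Ada", "campaigns": 3}
inv = {"contact_email": "a@example.com", "customer_name": "Ada", "iban": "DE00"}

customer = aggregate([normalize(crm, CRM_MAP), normalize(inv, INVOICING_MAP)])
# The aggregate carries the common core (email, name) plus the unique
# properties of each source (campaigns from the CRM, iban from invoicing).
```

The aggregate is "minimum viable" on the shared core (email, name) while still being rich enough to keep each source's unique properties, which is the point made just below about idiosyncratic fields.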
But also, this is, I think, an interesting aspect of Enterprise Web,
is to the extent that there are idiosyncratic or unique properties
for a certain source, because that's another thing that happens, right?
So obviously, we could say that these customer tables would have overlapping things, right?
Maybe they have different labels, but they overlap.
Maybe we have some things with contention.
That's also true.
But you also might have unique properties that you collect only for certain things.
And certain systems collect certain things and certain systems don't collect certain things.
That's also a true case, right?
So you normalize to the extent possible. That's normalizing up your mapping, right? And
then within the objects we're still going to capture any unique properties
because we want that new object to be very rich and complete, right? At least
complete enough to support your use cases, right? So again, minimum viable for
each individual one and then a normalization
of the whole. But in Enterprise Web, consider for a second: each one of those objects
is an immutable object. They all have version control and audit history, right? So we can
actually version every discrete source that's contributing to that aggregate. The aggregate
itself is yet another object in Enterprise Web.
It too is an immutable object that can be versioned, right? And it's part of the greater model. That greater model is also an object in Enterprise Web, made out of concepts in
Enterprise Web, which are also all objects. And that homo-iconicity, that kind of consistency,
that consistency in design top to bottom
where everything's just an object,
everything is an abstract data type,
everything is made up of references,
and all those references get computed in real time
to interpret what these objects are.
That's actually a very powerful capability
in Enterprise Web.
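The append-only, versioned immutable objects described here can be sketched with a chain of references, where every change produces a new version pointing back at its predecessor. This is my own minimal illustration, not Enterprise Web's implementation:

```python
# Hedged sketch: append-only versioning of immutable objects. Each update
# creates a new frozen version that references the previous one, so the
# audit history is a chain of references that can be walked back in time.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)  # frozen -> instances are immutable
class ObjectVersion:
    data: tuple                               # properties as (key, value) pairs
    previous: Optional["ObjectVersion"] = None
    version: int = 1

def update(obj, **changes):
    """Append-only update: build a new version capturing the differences,
    pointing back at the old one. The old version is never modified."""
    merged = dict(obj.data)
    merged.update(changes)
    return ObjectVersion(tuple(merged.items()), previous=obj,
                         version=obj.version + 1)

def history(obj):
    """Walk the chain of references: the object's timeline."""
    while obj is not None:
        yield obj
        obj = obj.previous

v1 = ObjectVersion((("email", "a@example.com"),))
v2 = update(v1, email="new@example.com")
versions = [v.version for v in history(v2)]  # newest first: [2, 1]
```

Note how the timeline really is "a set of relationships with changes," as described a bit further on: history is nothing more than following the `previous` references.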
So again, you're technical
and you're academic in nature,
and so we're going into details.
I do want to remind you every once in a while
that we're a no-code platform.
What we're talking about, we just made look easy.
You know, we jumpstart the modeling of your domain,
make it easy for you with a wizard to onboard your objects.
Even at the object level, if you have a file or anything else like that
that we could use, JavaScript, HTML, JSON, XML,
anything that we could take in, we import it.
Again, at the object level, we can import those schemas,
things like that to bootstrap those modeling efforts as well.
So we streamline everything where we can.
We make them all available in a catalog.
We make that catalog composable.
Every object in the catalog is version-controlled, with audit history.
So I could be working with an implementation that is
composing a certain set of objects together under a service definition,
let's say with the term customer, that might have a relationship
to five systems. One of those systems might get updated,
right? But the rest of the systems all stay the same.
I now want to update my object in my catalog.
Essentially, it's append-only, so we capture the differences, what's changed for that single source of customer.
If necessary, I might have to remap that to the higher-level concept, the entity of a customer,
to keep everything normalized.
But everything is version-controlled.
If something breaks, we know exactly where it broke, right?
Because it's a graph.
It's inherently traceable, right?
It's fantastic for debug.
Everything's version-controlled. So we have the history of the way things were.
You know which version is being used today.
And so you have sort of this graph introspection of everything in all directions, which is also very powerful.
So history itself is sort of part of a graph of a single object, right?
It's the timeline of a particular object is actually a set of relationships with changes.
So, you know, I'm going into the, we're diving off the deep end here, and I'm always wary of doing that, George.
My intention is to communicate, and I hope for a certain audience
this is making good sense.
Am I being clear?
Do the parts that are common to...
Master Data Management is an idea
about normalization.
And same thing with the data fabric,
data mesh is an idea
about aggregating at least
all of your endpoints into a catalog.
And we're bringing those both together in one idea,
which is to say, actually, I'd like to have a catalog
with normalized metadata and relationships.
I'd like my catalog, I'd like my mesh or my fabric
to be navigable across.
And actually not just navigable.
I want to use that same metadata and domain relationships as well as the actual state because we have history.
So we have the history of transformations on any object.
So state, I want to use metadata, relationships, and state to drive automation, intelligent orchestration, event-driven processes.
That's a big deal.
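The idea of using metadata and state to drive event-driven processes can be sketched as a tiny dispatch loop: events are matched against processes registered in a catalog, and each matching process runs against the current state. The names below are illustrative only, not Enterprise Web's API:

```python
# Hedged sketch of metadata-driven, event-driven automation: a catalog maps
# event types to registered processes; emitting an event dispatches the
# current state to every matching process. Invented names throughout.

catalog = {}  # event type -> list of registered process functions

def on(event_type):
    """Decorator: register a process for an event type in the catalog."""
    def register(fn):
        catalog.setdefault(event_type, []).append(fn)
        return fn
    return register

def emit(event_type, state):
    """Dispatch an event: every registered process sees the state."""
    return [handler(state) for handler in catalog.get(event_type, [])]

@on("order.created")
def fulfill(state):
    # In a real system this would orchestrate endpoints; here we just
    # record the state transition so the flow is visible.
    state["status"] = "fulfilling"
    return state["status"]

results = emit("order.created", {"order_id": 1})  # -> ["fulfilling"]
```

This mirrors the order-to-fulfillment flow discussed later in the conversation: creating an order is an event, and the event triggers the registered process.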
And we have this in production now for many years.
It doesn't break.
And it's scalable.
So we've taken these ideas.
Implementation itself is cloud native. So in Enterprise Web, this whole notion that we're using graph, too,
and everything's in a catalog, and catalogs are just references,
means that you can't really call us a monolith either.
We're virtually centralized, but everything is really captured
in a very decomposed way, right?
In a very loosely coupled way. In Enterprise Web,
what we're doing is providing sort of
the unified management layer over the top,
with control and transparency,
so that you could manage your enterprise, right?
We're enabling an enterprise to have one logical
representation of its solution elements,
so it can act as a unified enterprise.
And that's the whole thesis for Enterprise Web.
Well, actually, we're almost out of time.
So I think we can address maybe one last topic
before we have to wrap up.
And I was actually going to tie this up
to this whole customer class discussion, let's say, and ask you if you
have any kind of real-world use case that you could share
around that. Any kind of business process that you modeled around
customers, such as orders or fulfillment or
this kind of thing, that you think is a good example? Excellent. We actually just did a great
use case in collaboration with some contacts at SAP. Obviously, if you're in the integration
and automation space, given the widespread adoption of SAP, it's impossible not to have integrated an SAP customer somewhere in your life, right? They wanted to see if a graph knowledge base could sufficiently describe a system's domain such that you could
then use the metadata and relationships, like we're describing, to drive automation.
So they really were looking for something exactly what we do, right?
It's sort of really funny.
They came to us after your event last year, your Knowledge Connections in 2020.
And so that was great.
They heard about us, I guess, a couple of times.
They read one of my articles and heard about us from some of their other
contacts.
But then they saw our presentation last year and they reached out to us.
They had just hosted a hackathon.
SAP Labs had hosted a hackathon for its unified ontology challenge.
And we didn't do that exactly.
We got our own sort of special project from them,
which was order to fulfillment.
I mean, again, just like customer, order-to-fulfillment
is sort of the classical exemplar of a process.
And I mean, like everybody else,
SAP really just wants to see if they can make it easier for customers
to connect to and use their product portfolio, their services.
And that's what we set out to demonstrate for them.
So we used the assets that were already exposed in the hackathon.
They had models, files, and access to endpoints.
Again, we had a little bit of a special enterprise web-specific mission that SAP had given us.
And then we did a big presentation to SAP stakeholders worldwide for them.
But the use case, again, will be one that I would think a lot of your listeners would
appreciate: SAP order-to-fulfillment.
And we started actually almost in the exact flow that we discussed earlier,
sort of that development workflow or design workflow that I discussed was start with the domain.
Well, what was the domain?
The domain was SAP.
That's a pretty big domain.
Now, of course, in our generic enterprise concepts, we already
have notions of people, places, things, right? A lot of those kind of things, orders, locations.
So a lot of those things would already exist in Enterprise Web. But what we were able to do is
SAP has a really nice piece of work. It's called their one domain model. It's actually a graph.
So they call it their graph API.
Now, it's a graph in the sense you can navigate it
and you're navigating products.
It's really almost a graph of their products.
And then when you find products
and you have to discretely dig into each one,
there's no higher level above them, right?
There's no connecting meta model over the top
of it. It's just essentially a graph of their products, and then you could drill in and see
their objects and their schemas. So they gave us that as a foundation. We were able to access the SAP
Graph API, import that model, do the entity extraction, the algorithmic mapping, blah, blah, blah, in seconds.
We actually do this live.
There's a YouTube demo out there.
I'll share the link with you, and you can post this, hopefully.
And we do this live.
We import this model.
It then, essentially, we're wrapping the one domain model in Enterprise
Web's graph knowledge base.
We're wrapping it with our type system and our enterprise concepts, if that makes sense.
It's now part of a bigger connected model in the sense that we're taking their objects,
we're extracting the schemas in their interfaces, we're mapping all those details.
The system's doing that instantly, right?
It's extracting the types and the formats,
the protocols, all those things like that.
It's all just done.
Again, maybe some manual massaging, right?
So I'm not going to misrepresent that.
But it is essentially 95 plus percent automated.
Then once that is done,
you're left with really just a couple of steps.
Like I said before, now an IT person might come in and onboard some of the solution elements,
some of the runtime engines, in this case some SAP runtime engines that were necessary for the end-to-end processing of the order to fulfillment. So they onboarded those,
like I said, in 30, 60 minutes, whatever; we onboarded a couple of their components.
So it's part of a federated solution. So that's actually interesting too.
Enterprise Web doesn't have to do everything. Yes, we can do integration dynamically ourselves. Yes,
we do orchestration ourselves. Yes, we do automation, configuration, connection, networking.
We do all those things, can do those directly. But because we're an integration platform, we're inherently open.
We're working in a federated mode. So we imported their components so they could be part of this hybrid SAP Enterprise Web solution.
Then, let's say a service designer came in and wanted to build a process over the top
of these various endpoints, right?
These various SAP and non-SAP endpoints.
And they came, they modeled the process
in Enterprise Web.
They bootstrapped that again
by importing an existing artifact.
I should note that SAP's one domain model is in OData, so that's just yet another format that we can import.
OData is known to everybody in the SAP community. For the process,
they had a BPMN file because that's what they use in SAP. So they give us a BPMN file. Of course,
BPMN is not cloud native. BPMN is not event driven. BPMN is a lot of things. It's a traditional
application process modeling technology. We took their BPMN, that's fine, and we extracted the tasks,
the roles, the integration points.
Once again, we're taking a file to bootstrap the modeling of the process.
But in this case, now we're doing business transformation, right?
We extracted their BPMN details and brought them down into Enterprise Web,
where it's going to be reconstructed as a cloud-native, event-driven process.
That's done almost instantly.
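Since BPMN is XML, the "extract the tasks" step can be illustrated with a standard XML parser. The tiny document below is hand-written for the example; a real SAP BPMN export would be far larger, and this is not Enterprise Web's actual import code:

```python
# Hedged illustration of pulling the task elements out of a BPMN file.
# BPMN 2.0 is XML under the OMG model namespace, so Python's stdlib
# ElementTree is enough for a sketch. The sample process is invented.

import xml.etree.ElementTree as ET

BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"

doc = f"""<definitions xmlns="{BPMN_NS}">
  <process id="order_to_fulfillment">
    <task id="t1" name="Create Order"/>
    <task id="t2" name="Check Inventory"/>
    <task id="t3" name="Ship Order"/>
  </process>
</definitions>"""

def extract_tasks(xml_text):
    """Return the (id, name) of every task element in the BPMN document."""
    root = ET.fromstring(xml_text)
    return [(t.get("id"), t.get("name"))
            for t in root.iter(f"{{{BPMN_NS}}}task")]

tasks = extract_tasks(doc)
# [('t1', 'Create Order'), ('t2', 'Check Inventory'), ('t3', 'Ship Order')]
```

Extracting roles, gateways, and integration points would follow the same pattern over the other BPMN element types; the extracted tasks are then the raw material for rebuilding the process in an event-driven form.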
If you do watch the YouTube demo, the demo component is about 15, 20 minutes.
In 15, 20 minutes, we do all those three steps.
We import the one domain model.
We set up the runtime.
We onboard the components.
So a solution architect sets the domain, IT people set up
essentially the environment, and then components necessary for the environment, and a process
designer or service designer comes in, models the service or process, and all that was done
in 15-20 minutes, logging in and out as different roles, all starting with different artifacts. So
you could do everything manually in Enterprise Web from scratch.
Obviously, that would take longer because you're doing the initial
work from scratch.
But, again, you have to think about the mission of Enterprise Web.
The mission of Enterprise Web is to be a no-code platform for IT productivity,
rapid service delivery, and business agility.
And so we're trying to make it as easy as possible to do what would be hard,
what would be tedious and time-consuming.
And now in Enterprise Web, that process is: essentially, a customer can create an order.
That's going to be an event in our system.
It's going to trigger the order-to-fulfillment process.
It's going to run across those endpoints.
It's now a truly cloud native process
all running in enterprise web.
Well, I'm sorry, running in Enterprise Web,
which is the higher-level model,
providing a higher-level metadata consistency layer,
the domain semantics,
all those kinds of things like that.
The higher level coordination
across all of their endpoints, SAP endpoints and non-SAP endpoints, to deliver an end-to-end
solution.
This is a problem every single enterprise in the world faces, various shapes and forms.
It's the same problem everywhere.
And so we do this in life sciences, we do it in telecom, we've done it with SAP, we've done it in other places.
I think the idea here is you'd like to have a high-level abstraction and common tooling.
The absence of high-level abstraction and common tooling is why everything feels like it's spinning out of control right now in the development world,
the application development world.
And so by creating a unified abstraction, common tooling,
we're essentially recreating those common principles or unifying principles
that existed in the mainframe for now a new distributed environment.
And I would argue that without those unifying abstractions
and without common tooling, you'll never get there.
Again, you just have to look at the N-plus-one problem.
If you try to hard code this with middleware stacks
and cloud native tools,
your middleware stacks will become bloated,
your tool chains will become unwieldy,
you won't have a common abstraction across them, and when something breaks, you won't know
where it broke, and now you are a slave to your tools.
Your accidental complexity now throttles your domain problem, right?
Enterprise Web hides the systems complexity so that you can manage the inherent domain complexity
of a complex distributed system.
How'd I do?
I think we've come full circle.
When we started, you mentioned mainframes
and then now we've kind of talked about mainframes again.
So it's like a full circle.
I just have one really,
really quick question. Were the SAP people happy from what they saw?
Well, I can't make representations for them, you know. But I can say that our contacts were thrilled.
And I mean, the kind of responses we get are like, how do you do that? I can't believe you could do that.
That looks like magic, right? And it's not magic. I think we've now methodically sort of walked through on this call.
And of course, there's even more technical details that we didn't cover.
Right. There's how we how our graphs are assembled, how our graphs are processed.
Clearly, we could do that. We could do another two, three hours of conversation. It would actually be a great joy to do it with you. However, we did methodically sort of walk
through the logical architecture together, right? How we use graphs to make these things easier.
Great. Thanks. Great. Thank you, George.
I hope you enjoyed the podcast. If you like my work, you can follow Link Data
Orchestration on Twitter, LinkedIn, and Facebook.