Orchestrate all the Things - Trailblaizing end-to-end AI application development for the edge: Blaize releases AI Studio. Featuring Blaize CEO Dinakar Munagala and VP R&D Dmitry Zakharchenko
Episode Date: December 16, 2020. You might not know it by reading this news, but Blaize is an AI chip company. Blaize is now boldly going where none of its ilk has gone before, releasing a software development product. And that's not the only reason AI Studio is interesting. We discuss with Blaize CEO Dinakar Munagala and VP R&D Dmitry Zakharchenko to explore AI Studio, its potential and philosophy, and where it fits in Blaize's strategy. Article published on ZDNet
Transcript
Welcome to the Orchestrate All the Things podcast.
I'm George Anadiotis and we'll be connecting the dots together.
You might not know it by reading this news, but Blaize is an AI chip company.
Blaize is now boldly going where none of its ilk has gone before,
releasing a software development product.
And that's not the only reason AI Studio is interesting.
We discussed with Blaize CEO Dinakar Munagala
and VP R&D Dmitry Zakharchenko
to explore AI Studio, its potential and philosophy,
and where it fits in Blaize's strategy.
I hope you will enjoy the podcast.
If you like my work, you can follow Linked Data Orchestration
on Twitter, LinkedIn, and Facebook.
Yes, we may as well start with introductions.
Certainly. Well, thank you very much.
My name is Dmitry, Dmitry Zakharchenko.
I'm VP of Research and Product Development for Blaize.
And AI Studio is quite a bit of my brainchild.
So I'm very proud to have an opportunity to discuss that with you.
There is a team that made it a reality.
So obviously that's something
that I would be able to address
on behalf of myself and the team.
Any questions, anything that is related to AI Studio,
that's really what we've been doing
for the past several months when
we came from an idea to a product we announced yesterday.
Okay, sounds great. Actually, I think you mentioned in yesterday's presentation
that you've been working on this for the last couple of years, if I'm not mistaken.
Correct. We actually spent quite a bit of time up front researching, working with different customers in different verticals. We took time to actually research and validate ideas. We iterated, and development took a little bit less than a year,
but it's been almost a year in active development. I would consider that rapid prototyping and customer validation.
So, a total of about 19, maybe 20 months to be precise.
Okay, okay, it makes sense. So actually, to be honest with you, I sent some questions, some discussion topics, ahead of time. But since there are lots of people who are not very familiar with what you do in general, it would be good if we started with the big picture, let's say, and then drill down to the specifics of the AI Studio, if you don't mind.
Sure.
So, just to introduce myself as well, it's a great pleasure meeting you.
I'm Dinakar Munagala. I'm co-founder and CEO of Blaize.
I was previously building graphics processors at Intel, and now here.
Happy to meet you and looking forward to the conversation.
Great.
Great to meet you too.
Okay, okay. So, if we agree that it would be a good idea then to start from the big picture.
And obviously, since it's also the first time that we connect, it was a good opportunity for me as well to kind of take a step back and try to get what it is that you do, where you're coming from, what your vision as a company is, and so on.
And I have to say that the vision does indeed look quite ambitious
for a company which is relatively young, if I'm not mistaken.
So I would like to start the conversation by asking you
to share a little bit about the company's origins and
what brought you here, the story of the founders and some typical company background info,
like what kind of funding do you have, what's your headcount, this kind of thing.
Sure, sure.
I can take that. We started Blaize in pretty much my spare bedroom, my co-founders and I, almost nine years ago. We were basically building graphics processors prior to that at Intel. We left, and we wanted to build a new processor for the more emerging workloads,
and that's how we started up.
We have, of course, since progressed through multiple rounds:
we've raised venture capital initially from angel investors,
then through strategic investors, and then financial investors.
To date, we've raised about $87 million in equity financing.
And we are about 315 people worldwide.
And maybe talking through sort of the big picture, right,
the vision is that AI is pretty large, right?
The umbrella is huge.
There's no doubt that it's coming to every walk of our lives
and to every industry.
We chose to focus on the edge and the enterprise part of it.
If you look at it, right, there is the data center
and then there's outside the data center. And that's where we are focused, edge and enterprise.
And the major emphasis
on inference, and this ties back into
our architecture. We have a very novel architecture called the graph
streaming architecture. And it's very suitable
for edge and inference, primarily because the way we process any of these task graphs, be it AI or non-AI, is very unique. We can do it with low latency, low power, reduced memory bandwidth requirements, et cetera. It's all part of the key fundamental principle of our GSP architecture.
The second piece is full programmability.
Right from the beginning, this was one of our founding principles.
We said, hey, we're going to stay programmable
because the rate at which algorithms are changing is pretty rapid compared to the time it takes
to develop a new chip architecture.
So it's really difficult to bet on saying that,
hey, I'm going to go specifically build
this fixed function architecture for a certain AI workload
and before a chip comes out, things have changed.
So we're completely opposite of that.
We've chosen to remain programmable. And we said, look, to enable our customers, we will invest in software deeply, and that's what we've done. So, from the ground up, we have a processor which is rather unique and efficient compared to the existing incumbents
and the startups.
And at customers, we're actually winning.
Secondly, we focused on the form factors
that we deliver to our customers.
That's all public information.
We announced them in September.
Once our, you know, Rajesh Sharmal comes on board,
we'll kind of sift through those.
But today we finally, you know, we announced the AI Studio.
So I'd like to maybe spend more time on that.
Happy to answer questions in other areas,
but the Studio is something very unique.
You'll not find it anywhere.
And the big picture here is that
if you look at the $13 trillion impact to GDP
that they're talking about due to AI,
A large part of it is actually going to be outside of the data center,
be it a car, be it our factories, public safety surveillance kind of use cases.
Any of these, and many more: medical research, drug discovery, et cetera.
The common thing across all of these use cases is
that there are no armies of data scientists out here.
If you look at it, the large cloud players
have their armies of data scientists,
and they can build solutions.
But if you look at, hey, how will this work
actually come outside the data center
into the enterprise, that's very limited, right?
I mean, the problems are there.
The use cases are there.
The adoption barrier is the lack of data scientists,
the sheer number of data scientists out there.
So that is the gap we try to bridge with AI Studio.
And we made it extremely, extremely easy to develop
and deploy AI modules.
And then we invested in the complete end-to-end application, MLOps and
the works, right?
So the whole idea is, it's like the Intel, sort of the Windows plus Intel, the PC revolution, how it came about.
Affordable computing plus ease of use, office productivity tools.
Similar analogous to that, right?
Thirty or forty years after that, with AI coming up, we made affordable computing and easy-to-use tools to enable people to create applications
and deploy them.
So that's kind of the big picture.
Dima, I'll turn it over to you to maybe talk through.
Certainly, certainly.
And, George, we wanted to make sure that we kind of, I know you saw the materials.
Would it be helpful if we go to some of the questions on Studio to kind of be more precise
in addressing the questions that you specifically have been asking?
Yeah, correct.
We didn't have a chance to see the questions, but I'd like to maybe try to address them right now.
And as the rest of the hardware team, our architecture team, joins the call,
they will be able to discuss more of the hardware questions.
Would that be helpful?
Okay, yeah, we can do that.
Just a short comment before we go to discussing
the specifics of the AI Studio,
while we wait for the rest of the team to join;
then we can discuss the specifics of your chip architecture,
which is very interesting.
So I just wanted to mention that to me and I guess to other people as well,
the fact that you have just added this product, basically a software product, to your portfolio is quite atypical for a company in your space. The rationale, as explained by Dinakar, makes sense; however, it's still a bold and ambitious move, I would say. Even more so because what you seem to have delivered is something with quite a wide range of capabilities, as it seems. So, yeah, let's go to the specifics of the Studio then.
Yes, and that's good context for me to try to, I guess,
further improve on this message of why we are going there.
It is certainly ambitious, but it also has a fair amount of what we call specialization.
We've looked at the tooling in general,
and what we're seeing is that the industry has been pursuing
different types of tools for accelerating AI.
And because of how wide and how, I guess you can say,
how deep AI is, a lot of the tools that we see out there,
they've actually been,
I would consider them to be more generalized.
General problems, general models,
and those general models applied
to fairly standardized data,
irrespective of the hardware.
Really, I mean, it's been effective, because
when you look at the compute that's available in the cloud,
you really start to realize that it's almost an all-you-can-eat type of compute.
And for the longest period of time,
people didn't really think about the cost of that compute.
It's been available.
Now the workloads and the data are becoming,
beyond just being sizable, let's just say humongous.
When you see this type of workloads being created,
you also start to realize for the AI to really become a differentiator,
to become something that generates revenue
or reduces cost,
you really need to try to apply AI
in a very cohesive way,
meaning that the problem needs to be specific.
And we've actually gone to a very specific segment
that is very natural to us.
It's all about, we looked at what customers are doing
on the edge, what type of use cases
they're trying to implement there
to generate real revenue.
Kind of going from the labs, so to speak, to actual production systems.
That's a problem that we saw being underserved.
The majority of tools that are there basically take the model and data and call it: okay, we're done.
We just created a model that works on this data set.
We kind of solved the problem.
And let's try to see if that's going to run actually on the edge.
That's been a very common theme.
We saw a lot of that happening.
We actually took a very different approach.
We looked at what it would take for us to go far beyond just data and model.
Can we actually create a working application that actually delivers the specific use case in its completeness?
Meaning that not just activating the model and data, we really thought about what it would take to build the complete applications.
And what we discovered quickly, it takes quite a bit more.
It's actually adding specific pipelines. For example,
when you're preparing it for the computer vision cases that we've been looking at, for example,
in retail or in security, you really need to pre-process the data. So we needed the image
sensor processing tools added into the application. There are cases where you will need a
tracker, or you will need sensor fusion.
These types of pipelines need to be added to the application for it to become valuable.
Then comes the whole aspect of edge devices, which are not brute-force devices with an all-you-can-eat type of energy budget.
You really have to be very selective.
How do you actually prepare the applications to be deployed on that edge hardware? That's the optimization we do to reduce the size of the models without losing accuracy, and all the additional things that can be done for those edge devices. And of course, Blaize being the hardware manufacturer, we also do quite a bit on our side to make it extremely lean,
but very, very accurate.
And that's how the Studio came about: we continued researching with customers
and seeing that void.
Because instead of kind of going general, general, general,
we actually went very focused.
We looked at the completeness of the AI app development for the edge.
And that's why we feel like it's not an all-you-can-eat type of approach.
It's actually a very, very focused approach.
And this is underserved.
There is almost nobody playing in this space
because it's really been all about cloud.
And this market's been kind of untouched, underserved.
But I think the market is so huge in terms of the potential,
and you see the announcements coming almost daily now
about the importance of serving the edge,
the importance of serving the edge end-to-end.
But we're quite a bit ahead of everybody.
If I can interject for a moment and introduce Val Cook,
who's a key software architect and has just joined us.
Great.
And just to add, George, to that
comment from Dima, it is
true that nothing like this
exists. I mean, to your earlier point that
we are a young hardware
company and it's ambitious of us
to build software.
The truth is, we're seeing it at customers.
The barrier to adoption of an AI chip is huge because of software.
I mean, you can have the best chip in the world,
and not having software is a key deterrent, right?
And this is not about a specific company or a chip.
I'm talking about in general, right?
Because the whole AI is new to industries, especially I'm talking about the edge and
enterprise, right?
Industrial or whatever.
And the software that exists today
is pretty much in the data center
and that's where it is, right?
The ease of use software, whatever is there.
So in the edge and this segment,
nothing like this exists.
And so it was all very well thought through,
and, you know, that's how we built this.
Yeah, indeed, software is always key for success. And I would say that, as you also pointed out yourself, you may have decided to tackle an issue that is not specifically yours, basically. So in that sense, I find it quite ambitious.
And to that point, one of the things that piqued my curiosity
was whether you have possibly used some pre-existing IDE frameworks
to develop your solution or you built everything from scratch.
And I'm thinking basically open source frameworks for IDEs
such as the Eclipse framework or something similar.
Okay, I'll take that.
That's a good question.
What we've done is that when we started to prepare the Studio, the user experience, and thinking about what it would take for us to be effective, we certainly looked at probably several hundred different varieties of tools, from low-level development all the way up to no-code, low-code types of interfaces.
And the industry is not new.
There is a lot to be done and to be studied from many, many years back
on low-code, no-code tools, even for traditional development,
and what I'm talking about is not necessarily AI.
But to answer your question, we certainly learned a lot from the cutting edge newest tools
that our industry is using.
And a lot of the developers that use these tools,
have started to notice that there are further degrees of separation
and abstraction now available even through traditional IDE environments,
which all points to the fact that the tools are becoming more intelligent.
They have kind of grown into this role of helping even developers.
So we use those to really learn from them,
to see what it is that developers are doing.
We spoke to developers.
Developers pointed out the things they liked in this new generation of tools and the things they didn't like. For example, they were sometimes annoyed at these very linear, very rudimentary chatbots or helpers that constantly pop up with unnecessary suggestions. That was just one example, and you see them in some of the
tools, both proprietary and open source.
Primarily, we looked at those.
Those points were duly noted.
We started to realize that the chatbot is not the interface, for example.
We need to really rethink what would be helpful.
How do you actually contextually ground the tool
so that it becomes not just effective but useful.
Those are the things that we learned from those tools,
but we don't use the open source code.
We don't use the traditional IDEs anywhere in the Studio.
We were designing it to unify two personas.
One is, basically, there is the concept of AI teams
that are becoming very widely available across enterprises. This is
the team that includes data scientists and ML engineers. And then there is another,
often forgotten persona that's underserved: the persona of someone who actually knows what needs
to be done, what needs to be solved, the type of business problem. We consider them to be
the domain experts or subject matter experts.
These folks sit, so to speak, on different sides of the barricades.
They don't talk often, because one doesn't have the skills
and the other doesn't always have the capacity and availability.
And then when we looked at these two personas,
what we realized is that we won't be able to enable successful collaboration if we continue to present
raw code, so to speak, to those subject matter experts. But the engineers, on the other hand,
will not always be happy with just those, as some people would say,
dumbed-down helpers.
We started to bridge these two personas.
And the way we're looking at Studio now is that it has evolved into an interface
that actually unifies these two different teams,
the teams of subject matter experts
and development teams,
because it creates a very comprehensive workflow from the problem
statement all the way to the working application.
It is ambitious, but it works, because the problem is now well understood and the subject matter
experts actually participate in the process.
In fact, they can even experiment.
They can actually experiment with a product idea, and they can reuse the work
that other teams have done across, for example, the business unit or the company. That actually
empowers better collaboration. But it's not relying on Eclipse specifically, for example,
or any other open source IDEs; there are definitely learnings that we grabbed from those.
Okay. Yeah, thank you.
You mentioned some interesting points which I wanted to ask you about anyway.
One of them was things like the intelligent assistant that you mentioned,
or the visual workflow that you have, which I guess both aim primarily
at the second type of persona that you mentioned,
the domain expert, let's call it that.
And in the demo that you gave yesterday, this seemed quite useful, actually.
However, I wonder for the other type of persona that you want to serve,
people who are more traditionally close to the code, let's say, I suppose there is another
interface for them that they can use to tinker with the model. So to write code if they need to.
Certainly. And to answer this one, actually, we as a company, we've been pursuing software for a period of time before even the studio was there.
So there is the very robust Picasso tool that's available.
That's an SDK, more like a traditional SDK, and NetDeploy.
When combined, these two tools basically give a full package that traditional, down-into-the-code types of developers can use. It's very, very effective. It covers pretty much all the needs to go from the data all the way to the working application as well, but you do it all in code, and it gives you full control over that. And because it's been in development for quite some time,
it's very established.
Customers, the coding customers are actually using that as well.
And the studio, it's not just an interface that sits on top of that.
Obviously, it has many things that we're doing very differently,
some additional workflows.
But if somebody is down into the code, there is an SDK.
The studio does retain controls.
If somebody who is more experienced wants, for example, to review the model,
they have the ability to go and view the layers of the model.
Or, for example, if somebody wants to modify that,
studio actually has the elements where they can change the parameters,
even like the number of epochs, for example.
There is all these parameters that more traditional engineers can obviously control.
But SDK is what's for the people that like to write their own code.
And maybe Val wants to add a couple of things about it.
Yes, I agree. That's one of the wonderful aspects of it. Rather than a closed, sort of obfuscated or even hidden or encapsulated approach to things, the tools generate source code. And that source code is
embedded in the applications that they generate, but it's done in a modular way. For example,
I could take a model that has been built and constructed through Studio, and then I could choose to
pull that model, embed that in a much larger scale application at the source code level
by simply including the headers and the object module, and then proceed to build my project
with this new sort of AI-enabled application now. And so the integration is open and supportive of the lower-level developers,
I guess would be a good way to say that.
Okay.
Okay.
Speaking about models, that touches upon another, I would say,
both impressive and ambitious feature that I saw the Studio had.
I'm talking about marketplaces, and I guess, well, actually that's
part of the question: what do these marketplaces entail exactly? Is it
just datasets, or are models also available for purchase?
I guess if somebody develops, let's say, a model through a studio,
would they be able to put it up for sale in some of these marketplaces
and vice versa?
So that's one part of the question about the marketplaces.
And the second part would be: I think you mentioned you integrate with some existing marketplaces, and I guess that's a good move, because traditionally one of the hardest parts about building a marketplace, possibly the hardest, definitely the hardest actually, is not the actual infrastructure but how to populate it, the network effect.
Okay, thank you.
Yeah, I'll take this one.
The way we think about the marketplaces, we've actually done both. When I say both, we have the integration with an existing marketplace,
which helps us to take advantage of what's already out there.
And by marketplaces, we mean the data marketplaces and data storage,
like traditional data cloud storage.
And the marketplace for the models and model zoos and model repositories.
The reason why, because there is quite a bit that's already available
when it comes to models that have been tried, models that have been trained,
models that have actually been pre-optimized and posted.
But we also realized that integrating with those marketplaces
is a very quick way for people that have actually never really worked in AI
to get started. And the knowledge that's been accumulated in those marketplaces,
it varies in quality. What do we mean by that? There are models that people start
and kind of drop halfway, so they're really not functional models, for example.
And there are models that people consider so successful that, I've noticed, especially lately, people have a tendency to share them freely on the marketplaces, share them with open source licenses, and make those models
available. And many, many models over there are actually valuable and reusable. But the key
to the adoption of AI is reusability. And we've taken that important notion into consideration when we realized that enterprises and companies actually think about the marketplaces differently.
They may not be interested in posting their models into the public marketplaces, but they might be interested in reusing. And for that, we actually created a feature as a part of the studio
where, if company A uses the Studio, and they obviously have been in AI for a period of time,
they have business units that have done numerous workflows and numerous development of models
over the course of several years.
So how do you actually share that in a private way so that it's behind their firewall and, let's say,
a business unit X wants to share with business unit Y?
That really is the concept of what we call the internal marketplaces.
The idea is internal to the Studio.
If they use the Studio and if they have
the models, they can actually distribute those models quickly and scale the work that they've
done in one business unit across the company. That's the promise of our marketplaces that we
use internally. And because the Studio comprehensively looks at these data sources and these model sources,
we try to present everything that's available to the user.
And when user searches for a model, user of Studio searches for a model,
that user will be able to see what we safely found in the public marketplaces.
And we do look at the models,
the models that actually come,
for example, with a low rating,
the models that are actually incomplete,
the models that are missing artifacts,
those models will not necessarily be presented.
Those models we consider to be,
I'd say, unfit for any significant work.
We're also trying to make sure that we only go to the most established marketplaces, because there are now smaller marketplaces popping up and there is some potential for rogue models to appear: models that you don't necessarily consider to be safe, models that haven't been verified by the community, approved by the community, or rated by the community. So there is that piece
as well. If you do just a Google search, you'll immediately find that there is a plethora of
choices. And if you're not initiated in this space, you may pick up the wrong model.
But when we search through Studio, you will see all assets that are available for you to get
going. And that will include the internal models developed inside your company, as well as some of
the public models that fit the problem at stake. That's why I think the marketplace is very
powerful because it really facilitates reuse. It really enables people to go from the idea to an
immediate project that they can start right away.
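The filtering criteria described here (hiding models that are incomplete, missing artifacts, or low-rated from Studio search results) could be sketched as a simple predicate. This is purely illustrative: the field names, the required-artifact list, and the rating threshold are assumptions, not Studio's actual schema.

```python
# Illustrative sketch of marketplace model filtering: only models that are
# complete, carry all required artifacts, and meet a rating floor are
# presented to the user. All names and thresholds here are assumptions.
REQUIRED_ARTIFACTS = {"weights", "config", "labels"}
MIN_RATING = 3.0

def is_fit_for_work(model: dict) -> bool:
    """Return True if a marketplace entry should be presented in search results."""
    return (
        model.get("complete", False)
        and REQUIRED_ARTIFACTS <= set(model.get("artifacts", []))
        and model.get("rating", 0.0) >= MIN_RATING
    )

catalog = [
    {"name": "resnet-retail", "complete": True, "rating": 4.5,
     "artifacts": ["weights", "config", "labels"]},
    {"name": "abandoned-draft", "complete": False, "rating": 4.0,
     "artifacts": ["weights"]},          # dropped halfway, not functional
    {"name": "low-rated", "complete": True, "rating": 1.5,
     "artifacts": ["weights", "config", "labels"]},  # poorly rated
]

presented = [m["name"] for m in catalog if is_fit_for_work(m)]
print(presented)  # → ['resnet-retail']
```

Only the complete, well-rated entry survives; the half-finished and low-rated ones are considered "unfit for any significant work" and hidden, mirroring the curation described above.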
And because of the transfer learning, this is another important piece that we've mentioned in the announcement,
is that transfer learning is actually how we bring this all to fruition much, much sooner.
Traditionally, you would be kind of building the model from scratch.
It takes months, literally months.
And if we have the marketplaces, and if we have transfer learning, the idea behind it is that we have the data, we know the data, we know what the data is for, so we can actually find the model and optimize the model from the marketplaces very, very quickly, because retraining the last several layers takes probably 10% of the traditional time that it would otherwise take if you were to start from scratch.
So this actually comes together,
because we found that to be the most effective way
to get people from an idea to working on the model.
And that model will quickly become an application now
that's a part of the entire workflow.
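The transfer learning idea described above, reusing a pretrained model and retraining only its last layers, can be sketched in PyTorch. The backbone, layer sizes, and optimizer below are illustrative assumptions for the sketch, not the actual Studio pipeline.

```python
import torch
import torch.nn as nn

# Stand-in "pretrained" backbone; in practice this would come from a
# marketplace or model zoo (e.g. a ResNet trained on a large dataset).
backbone = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
)

# Freeze the pretrained layers so only the new head is retrained.
for p in backbone.parameters():
    p.requires_grad = False

# New task-specific head, e.g. a 5-class classifier for the target use case.
head = nn.Linear(32, 5)
model = nn.Sequential(backbone, head)

# Only the head's parameters go to the optimizer, so each training step
# updates a small fraction of the network.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)

# One illustrative fine-tuning step on random stand-in data.
x, y = torch.randn(16, 128), torch.randint(0, 5, (16,))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
optimizer.step()
print(len(trainable))  # → 2 (only the head's weight and bias are trainable)
```

Because gradients flow only into the head, each step touches a small parameter subset, which is the mechanism behind the "roughly 10% of the time of training from scratch" claim above.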
Okay, well, you did mention Google.
And actually, as I was listening to your reply
on how you combine results from different marketplaces
and evaluate them and so on,
it did kind of give the impression of doing a similar job
to what Google does for web results.
You do it for models, in a way.
So that's one hard problem that you seem to have taken upon yourselves to solve.
There's another one, actually, which I wanted to, I was wondering how you deal with,
which is so-called MLOps, or in other words the co-evolution of data sets and models throughout
the development cycle. And I wonder how you deal with it and if you can give like a brief answer
to that. I know it's a complicated issue, but if you can give a brief answer,
because I think we have to wrap up.
Certainly, certainly.
The way we kind of take this one,
we're especially focused on the MLOps
when it comes to deployment.
You obviously have to focus.
And deploying and running and managing models, especially on the edge.
That's one big problem that the industry in general has been neglecting to address.
And we feel that a lot of the folks that have been trying to solve the ML problems have been trying to look comprehensively at everything.
And that presents a challenge
that's almost too much for one company to solve.
When it comes to getting the models to properly deploy,
making sure that the models perform when they're deployed,
getting those models to get updated.
And by models, actually, I want to say AI apps.
We're saying models.
I know you asked about models, but I'm actually calling them AI applications, edge AI applications.
Getting those to go through all the stages, this is the important part that I feel like
we are focused more than anybody else.
And this is something that is actually doable. We've been learning what it takes to effectively build,
deploy, manage, and maintain and update
those AI apps on the edge.
That's our focus.
That's actually a big problem to solve as well.
It's very nuanced.
But because of the knowledge and expertise
and the way we actually develop applications,
like we've mentioned that in the release as well, we're actually big believers in Open.
And when we say Open, a lot of the binaries that we produce, for example, for our hardware,
they actually come out as OpenVX-formatted files that then get deployed into our hardware.
By keeping that philosophy in mind, we were actually able to continuously maintain them
like that when they're deployed on our hardware. And managing them is a very effective
mechanism that we've demonstrated for the past several years, that's all we've done. We made sure that
we know how to do it. Now we're taking this knowledge
and actually applying that as a part
of the studio workflow.
That's really focused on that
part of the MLOps. And that's very,
very powerful. We find that to be very helpful
and useful.
Okay, thank you.
Obviously, it's quite
an extensive product,
and even though we've been mostly talking about it,
I get the feeling that we didn't even get close
to covering everything that it entails.
And we actually didn't even mention at all
what's underneath, basically,
so your chip architecture,
but I guess this is a topic
for a separate discussion in the future
when we get the chance.
One thing I wanted to ask
as a final question
to wrap up this session would be,
I think you partially addressed it yesterday
when people were asking,
you know, if that means
that you're making the pivot
as a company and so on.
You were very clear in saying that you're still focused on basically promoting your hardware.
However, I imagine that you will be marketing this solution independently.
And precisely as you mentioned, it is actually possible for people to use it regardless of the underlying hardware architecture. So will you be marketing this independently?
And what are the initial signs that you have been getting since the release?
So far, okay, Dima, you want to get that or you want me to?
Oh, please go ahead.
Okay, I'll get it, but you can add that certainly.
So we focused on using it with our hardware
and the customers that we are engaged with
at the hardware level
are the ones who are kind of taking it first.
And it's optimized at the low level.
I think we should spend some time
on the hardware with you,
maybe at your convenience.
But there's a very tight coupling with,
I shouldn't say coupling,
the right word is optimization at the lower levels
when we talk through some of the key aspects
to our hardware.
The fact that the Studio can work with ONNX,
the intermediate representations,
does allow it to work with others,
but we're, of course, focused on the synergy aspect and how we build our market with our products.
I hope you enjoyed the podcast.
If you like my work, you can follow Linked Data Orchestration on Twitter, LinkedIn, and Facebook.