Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 2x16: Optimizing ML at the Edge for Industrial IoT with Sastry Malladi of FogHorn
Episode Date: April 20, 2021

Industrial cameras and sensors are generating more data than ever, and companies are increasingly moving machine learning to the edge to meet it. This is the market for FogHorn, so we invited Co-Founder Sastry Malladi to join Chris Grundemann and Stephen Foskett to discuss the implications of this challenge. Industrial IoT, also called operational technology, is the use of distributed connected sensors and devices in industrial environments, from factories to oil rigs to retail. Any solution to this problem must be oriented towards the staff and skills found in these environments and must reflect the data inputs and outputs found there. Another concern is cybersecurity, since these environments are increasingly being targeted by attackers. Machine learning can be brought in to control industrial processes and monitor sensors locally, with low latency and high accuracy, reducing risk and increasing profitability. These environments also benefit from transfer learning, periodic re-training, and closed-loop machine learning to keep them optimized and functional.

Three Questions:
Is machine learning a product or a feature?
When will we have video-focused ML in the home that operates like audio-based AI assistants such as Siri or Alexa?
Are there any jobs that will be completely eliminated by AI in the next five years?

Guests and Hosts:
Sastry Malladi, CTO and Co-Founder at FogHorn. Connect with Sastry on LinkedIn or on Twitter at @M_Sastry.
Chris Grundemann, Gigaom Analyst and Managing Director at Grundemann Technology Solutions. Connect with Chris at ChrisGrundemann.com or on Twitter at @ChrisGrundemann.
Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett.

Date: 4/20/2021
Tags: @SFoskett, @ChrisGrundemann, @M_Sastry, @FogHorn_IoT
Transcript
Welcome to Utilizing AI, the podcast about enterprise applications for machine learning,
deep learning, and other artificial intelligence topics. Each episode brings experts in enterprise
infrastructure together to discuss applications of AI in today's data center. Today, we're
discussing moving AI to the edge for industrial IoT applications.
First, let's meet our guest, Sastry Malladi.
Thank you, Stephen. I'm Sastry Malladi. Happy to be here. Nice to meet you, everyone. And I am
co-founder and chief technology officer for FogHorn, which we co-founded about five, six years ago,
headquartered here in Silicon Valley.
Myself, I've been in the technology space for 30 plus years.
You can find me on LinkedIn, my profile.
If you just type in Sastry Malladi,
you'll find my LinkedIn profile.
I'm on Twitter as well.
M underscore Sastry is how you'll find my Twitter handle.
Just see what I'm doing.
And I'm your co-host today, Chris Grundemann.
I'm a consultant, content creator,
coach, and mentor. You can find more about what I'm up to at chrisgrundemann.com. And as always,
I'm Stephen Foskett, publisher of Gestalt IT and organizer of Tech Field Day, including our
forthcoming AI Field Day. You can find me on Twitter at sfoskett. As we've been talking on
the Utilizing AI podcast, we keep coming back to edge applications for artificial intelligence.
It's funny, no matter what we talk about, no matter who we talk to, no matter what the topic, it always seems to come back to moving machine learning to the edge.
And one of the reasons for that is that's basically where the data is. As we've seen recently, increasingly we're dealing with more and more data from edge sources with more and more localized machine learning processing, helping to process that data and filter it and send only the important data back to the core.
And that seems to be the new architecture for machine learning, especially in industrial applications. When we talked to Sastry about Foghorn in
industrial applications, I knew that it would be interesting to our audience because this is
exactly what they're doing. So Sastry, why don't you talk a little bit about the challenges faced
by industrial applications of IoT devices like cameras and sensors? Absolutely, Stephen. So as
you summarized well, in the industrial application, especially if
you take a manufacturing plant or an oil and gas plant, or even a building automation system for that
matter, the amount of data that is generated from all the different sensors attached to the
equipment is humongous in nature, terabytes to petabytes of data, especially if you've got video
data. And not to mention, you've got different types of sensors as well, traditional
digital sensors, temperature, pressure, velocity, and so on, plus audio, video, vibration, acoustic.
And the ability to transport all of that information into a central location and then
process it and then send the results back to the factory floor has too many challenges. First of
all, the bandwidth, the amount of bandwidth you will need
to transport all of that,
potentially could be cost prohibitive.
And then the latencies involved.
By the time we process all of that
and send that information back to the factory,
it might be too late
to act on whatever failure condition was about to happen.
Third is cybersecurity.
A lot of these customers are afraid to connect
their highly expensive machinery to any
kind of internet, right?
Because you can attack them.
These are expensive machines.
But the most important of all is the real-time decision-making.
People want to know if something is going wrong.
They want to know in real time before the fact, not after the fact.
So as a result, all of the computations, analytics, and machine learning have to be moved
onto the edge, which is closer to where the data is produced.
But of course, as we will talk in the conversation,
the challenges there are,
you don't have a lot of compute power,
but I think that the crux of this conversation
is going to be a lot about what does it really take
to do data processing and machine learning at the edge,
so you can get these real-time insights
in a cost-effective fashion? Yeah, that makes a ton of sense. And as Stephen said,
I think this is a conversation that's being had in a lot of places, this idea of moving things to
the edge to avoid, just like you said, bandwidth constraints potentially or bandwidth costs. A lot
of times getting bandwidth back out of the cloud can be very expensive. That latency thing seems to be the key aspect. But before we dive in further, you know,
I kind of want to explore the idea of industrial IoT a bit more, right? Because I think a lot of
people when they hear IoT, they think about, you know, light switches or thermostats or cameras
around their house. Obviously, you listed out a lot of these examples. But I think, you know,
some of our audience may be unfamiliar with what I've heard referred to as operational technology, right? So can you maybe lay the stage
of the difference between IT and OT and where they overlap and where they maybe don't?
Yeah, absolutely. Take, for example, a manufacturing plant like a Stanley Black & Decker,
or any customer, right? So the machines, like a CNC machine or a compressor or a pump,
these are the kinds of assets we're talking about. Now they operate in a plant environment to produce certain parts or they manufacture certain parts.
The types of defects could be, like, when the machine is down, or the machine is producing a defective part,
or something else is anomalous behavior, where the machine is not operating at the highest level
of efficiency that they expect to see. All of these are problems that the operators,
the OT, operational technology folks,
these are like the reliability engineer
or mechanical engineer or an operator
who is working in a plant environment.
They are not the typical IT programmers
or machine learning data scientists.
They don't know how to write code.
All they can tell you is, look, my machine is down
or my machine is producing this defective part, and it's costing us a lot of scrap, right? So whatever solution we're talking about has to be oriented towards the operational technology folks out there.
Underneath, it might still be machine learning and data science and whatnot, but it has to be oriented towards them, right? And you can't require an IT person to be able to operate this equipment. This is
why it's extremely important. Any solution we come up with in this context has to be an OT-friendly
type of an environment. In other words, they work in terms of, oh, I've got a temperature sensor.
My temperature is too hot for this event, which is probably what is causing this issue. They're
going to talk that language. They won't talk to you in terms of,
hey, what machine learning model do I need to apply?
Yeah, that makes a lot of sense.
And thanks for laying that out.
I think that's a really interesting distinction
in just the skill sets there.
And then the other piece of this, right,
to touch on what you said about cybersecurity,
I assume that some of this information
is like SCADA control and automation,
or is this all new sensors
that are being
added to machinery?
What's the interplay there of that evolution from what used to be very, very manual?
SCADA charts used to be a literal pen hooked up to a rod that was moving on paper, and
you'd roll out the paper and look at what was going on.
Obviously, we've come a long way, but how is that evolution happening from SCADA to
IoT in the industrial world?
Right.
So the SCADA systems are still in place because some of these plants are decades old, if not
even further older, right?
So the evolution that has happened is that all of the sensors attached to the SCADA network,
you know, instead of somebody actually, just like you said, you know, going to PLC, doing
ladder logic programming, or putting it into the pen and drawing it, right? They've actually moved on. How IoT is changing is that they install
a small, what they call IoT gateways, manufactured by Dell, HP, Cisco, ADLINK, a number of those,
or even Raspberry Pis, like smaller ARM-based, you know, machines. They connect it to the same
network. And what they do is rather than actually disrupting the existing SCADA networks, they would plug into that existing sensor network and try to tap into the same data
stream and be able to process it. In other cases, for example, like some of the industrial folks
who are manufacturing these PLCs, they're actually adding extra IO cards into the PLC. So rather than
installing a new IoT gateway, they're adding an additional
compute power into the existing PLC, not disrupting what they've got in the SCADA system,
but then use this extra IO card to be able to process that. In other words, continue to leverage
what they've already installed. But what they're also now doing is wherever there's not adequate
sensors available, they're installing vibration sensors, acoustic sensors, or video
cameras. These are non-disruptive ways to add sensing without changing anything, right? You install a video camera or a
vibration sensor on the machine. Now you need to fuse these signals together, the existing
network sensors, video, and audio, to then identify what's actually going wrong with the
system. So that's kind of where the industry is moving. I find it really interesting that there's such a parallel between consumer IoT and industrial IoT,
because a lot of the things that you described would be familiar for someone who's tried to
dabble in home automation or the Raspberry Pi enthusiast who goes out there and maybe buys
some Arduino sensors for GPIO pins or something. But what we're talking about here really is the
same question, but asked in a different circumstance and with a different answer.
Because of course, the impact of, say, having your home lighting go off at the wrong time,
or, you know, accidentally forgetting to lock the door even is much less impactful in a home
environment than it might be on an oil rig or in a factory where you could literally be costing,
you know, millions of dollars a minute by, you know, shooting oil out the side of the thing or
having the machine break or something. I mean, the implications in industrial applications are so much greater.
And yet, it seems like in some cases, people don't think of the challenge as being greater,
and they kind of approach it with sort of a, I don't know, bits and bytes, home automation kind
of approach. That just won't work. That's exactly right, Stephen, in the sense that the impact of
this in the industrial sector is far greater.
Because, without naming some of our customers, for example, we have seen them lose millions of dollars, just like you mentioned, in the absence of any such solutions out there, right?
And it is even more critical for them to be able to identify failure conditions or anomalous conditions ahead of time in order for them to prevent these losses.
Sometimes the loss is not even a business loss. Sometimes it could be environmental.
For example, if you look at an oil and gas plant
where they're processing this, refining the gas,
what ends up happening is if there is a compressor problem
or something called a foaming issue,
they will end up actually flaring their gas,
releasing these toxic flames
into the atmosphere.
You see these flare stacks out there.
And the Environmental Protection Agency, the EPA, is closely monitoring all this stuff out
there, and they get penalized heavily when their carbon emissions
are higher than certain thresholds as well.
So there's so many different ways in which these customers actually can benefit by leveraging this processing
at the edge in real time, not just from a dollar standpoint, but also from an environmental
standpoint. So I think we now understand the problem here: we've got critical
applications that cost a lot of money or can cause a lot of problems. They need connection, they need connectivity, they need automation, and yet we have a skills shortage in many cases or a shortage of consistent skills in many of these environments.
So let's connect this with machine learning then.
How exactly can machine learning be used to help solve the problem of operational IoT or industrial IoT?
Yeah, I think the best way to explain that is take one example, right?
So one of our customers, this is public, so I can reference it like Stanley Black & Decker,
for example.
They manufacture so many parts.
One of them is the measuring tape, which is the household item that we all have.
So what ends up happening is, as they manufacture the tape, they paint it. Now, sometimes there could be problems. There
could be extra ink, extra paint, or some other marking, something else is wrong. It's a very
high-speed manufacturing machine. And sometimes when they manufacture this entire tape, it goes
to some manual quality inspection. Sometimes they catch it. Sometimes they don't even catch it. It
goes all the way to the consumer or distributors where it gets detected and they have to throw
it away.
It costs a lot of scrap.
Now, when you talk about real-time IoT and applying machine learning, then in this kind
of scenario, what you would apply is as the tape is being printed or being manufactured,
you actually install a video camera and you have your other existing sensors, feed all
of the data into a machine learning model that is built to specifically detect the types
of defects that are not acceptable in this process in real time.
You have to do this in real time.
Now, of course, the challenge there is really they don't have a lot of compute power in
these plants.
Now, how do you run such deep-learning, neural-net, computer-vision-based models out there, when
these operators don't have the skill set to be able to operate those models?
This is where you have to create these OT-centric tools for them: build the model, deploy it,
and simply raise an alert as the data is coming in. Look, here is the defect, right? It is here, and you
need to stop the machine, right? That is how you optimize that. I mean, in other words, one is building the machine learning models. Second is exposing those tools
in a way that are OT friendly to them. Ultimately, at the end of the day, you have to solve the
business problem, which is to stop the machine when it is producing the defective parts, right?
This is just one example, but I can go on and on and give you hundreds of examples in any of these
different verticals.
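The inspection pattern Sastry describes, scoring each camera frame against a defect model and stopping the line when a threshold is crossed, can be sketched roughly as follows. This is an illustrative sketch only: `score_frame`, `stop_machine`, and the threshold are hypothetical stand-ins, not FogHorn APIs.

```python
# Minimal sketch of the edge inspection loop described above (illustrative only).

DEFECT_THRESHOLD = 0.8  # tuned per line and per model in practice


def score_frame(frame):
    """Stand-in for a computer-vision model's defect score in [0, 1]."""
    return frame.get("ink_blob_area", 0) / frame.get("tape_area", 1)


def stop_machine():
    """Stand-in for the control action taken on the factory floor."""
    print("ALERT: defect detected -- stopping the line")


def inspect(frames):
    """Score each frame; return the indices of defective frames."""
    defects = []
    for i, frame in enumerate(frames):
        if score_frame(frame) > DEFECT_THRESHOLD:
            defects.append(i)
            stop_machine()
    return defects
```

The key point is that the whole loop runs next to the machine, so the alert arrives before the defective tape leaves the line.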
Yeah, I think another really interesting example might be related to safety, because we've talked a lot about potentially the cost impacts and the environmental impacts of industrial failure,
but there's actually, you know, lives and limbs at risk as well.
So maybe that might be an interesting use case to look at as well.
How does machine learning at the edge help protect people?
Actually, that's a perfect example. We, in fact, you know, we see that all the time, especially in plants, you know,
oil rigs and whatnot, right? Worker safety is very important to them. People have to wear,
one of the examples is people have to wear, for example, certain equipment, PPE, as we call them,
helmets, life vests, shoes, goggles, and whatever you call them. Now, if somebody is not wearing
them, obviously nobody's going to go manually check to say if somebody is actually wearing the equipment or not. Machine learning comes in
handy there again. You've got all these CCTVs installed all across the plant, whether they're
inside the plant environment or outside the plant environment. Maybe somebody's walking under a
crane. Maybe somebody's about to walk into an oil spill, right? These are all hazardous conditions
out there. So what we actually do, what they deploy, is install a camera,
take that camera feed, and use machine learning to detect if somebody is actually about to
walk into a hazardous environment, somebody is walking under a crane, somebody is not wearing
the appropriate PPE equipment and all of that. This is one way. This is the safety aspect of it.
This is actually augmented with the health aspect of it as well, right? I mean, especially thanks
to COVID these days, people actually also want to check the health of people walking in. Are they healthy enough to be able to walk in? Whether it's temperature, whether they're wearing a mask, whether they're coughing or exhibiting certain behaviors, things like that. So safety and health are absolutely important aspects, and almost every single vertical is deploying machine learning solutions to identify that. Yeah, that's interesting because that's applicable
almost everywhere. I mean, I could see offices, financial institutions, grocery stores,
all sorts of places needing to monitor that. I mean, you think about your local home improvement
store. Imagine if a camera could be watching to make sure that, you know,
something's not about to fall over on somebody or that, you know, somebody's not walking in front
of the forklift, you know. There are so many places, I think, that this kind of application
could be used. And it's a really simple application, too. That's the thing. We're not
talking about some kind of, you know, multi-year software development process necessarily.
We're talking about something which is basically watch this camera,
watch for that thing, and do this thing if that happens.
Is that right, Sastry?
That's exactly right, which is why we found this to be,
we have this as one of our solutions at FogHorn, for example.
This is a highly repeatable use case across many verticals,
across many customers, almost something that everyone wants that.
Now, there might be slight differences.
What's actually important for us to highlight here is that,
for example, how the environment or the PPE or certain things look in one
customer's environment might be different from another customer's.
So how do you allow them to slightly customize or reconfigure the solution
in a way that's able to detect it? As an example, walking under a crane is not the same thing as walking under,
say, a shelf with objects that could fall off, right? Now, how do you customize this in a way?
But once we do that, it's a highly repeatable solution across many, many verticals, many,
many use cases. And that brings me back a little bit to what we were talking about earlier with
the kind of operational technologists versus information technologists. And, and one of these common problems across
machine learning and deep learning and AI in general, which is that you're not going to have
data scientists on site, you know, at every turn. And so I can imagine, you know, whether we're
talking about safety or something else, you know, a factory floor or an oil and gas refinery, I mean,
that's gotta be a pretty dynamic space. And so I'm assuming that there's going to be things like that, right?
So not just the initial customization of making sure the model works,
but I assume there's got to be tuning and debugging and tweaking over time.
So I assume there's a challenge here of enabling folks that are actually there
in the field at that plant to be able to do things like that.
Is that right?
That's absolutely right.
You hit the nail right on the head,
which is even when you go build a machine learning model,
train the model with all the images
and everything else that's possible,
deploy the solution in a custom environment,
things do change.
For example, maybe the environment has changed.
Maybe the background has changed.
And a lot of the times,
maybe something else,
the machine calibration has changed.
There could be any number of reasons
why that exact same machine learning model that was initially
trained to run and produce the results may begin to drift, may no longer do that. So we fully
recognize that. So for that, what we have done is obviously at that point, you can't go back to the
customer and ask to say, look, let's go hire a data scientist to go help you retrain that program,
right? That's not going to fly. That's not going to happen.
So we built, for example, and this is very common practice,
this self-fine-tuning application.
So in other words, you build a UI to say, look, here is your application.
Here is a worker safety environment.
Here is a predictive maintenance use case for an asset,
but something has changed.
Bring up the UI for that application.
Allow the customer to say, look, here is my new video. Here is my new information that I'm feeding it. And here is what's different about it. Now, underneath what needs to happen is we use this
concept of transfer learning. So in other words, you previously built a machine learning model.
You have trained it based on certain data sets. The model has learned how things work. Now you're feeding that model this additional information. How does the model now learn not only what it has previously
learned but also augmented with what's being fed now? That's the transfer learning. We use the
technique underneath but again none of this is exposed to the end customer because they don't
know what you're talking about; when you start talking about transfer learning and machine learning, it goes over their
heads, right? So we only expose the UI to them to upload that information.
Underneath, update the model.
That's one technique that we use to be able to continue to predict with the same level of accuracy.
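A toy version of the transfer-learning idea described here, keeping the previously trained layers frozen and fine-tuning only a small output head on the newly uploaded data, might look like this. Everything below (the linear "feature extractor", the training loop) is an illustrative simplification; real deployments would use a deep-learning framework.

```python
# Toy illustration of transfer learning: the base weights learned during the
# original training are kept fixed, and only the small output head is updated
# on the customer's newly uploaded (x, y) samples.


def extract_features(x, base_weights):
    """'Frozen' feature layer from the original training (never updated)."""
    return [w * x for w in base_weights]


def fine_tune_head(samples, base_weights, head, lr=0.05, epochs=200):
    """Gradient-descent updates applied only to the head weights."""
    for _ in range(epochs):
        for x, y in samples:
            feats = extract_features(x, base_weights)
            pred = sum(h * f for h, f in zip(head, feats))
            err = pred - y
            head = [h - lr * err * f for h, f in zip(head, feats)]
    return head
```

Because only the head is retrained, the model keeps what it previously learned while adapting to the new data, which is exactly the property that makes this practical without a data scientist on site.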
I'll just say a couple more things before I hand it over to you.
So the second aspect is there is this notion of periodic training.
A lot of the times customers say, you know what, I'm not going to bother to come in once in a while, whenever there is a problem, and upload it.
But instead, I want you to periodically retrain it almost maybe once a week, maybe once a month.
And here's how we do that.
So we've also employed that mechanism in certain scenarios to say, look, you know what, let's not wait until the problem happens to go take a look at it.
Just periodically augment it and implement it. That's one. And there's also a third aspect. This is what we call the closed
loop machine learning. What happens is at the time that you detected a drift or the degradation in
the level of accuracy, you start sending the data into a central retraining module. The model gets
retrained and you push it back onto
the edge device as well. And you keep doing this iteratively until the accuracy comes back up as
well. So, suffice it to just summarize, there are a number of techniques that can be used in order to
update the models to continue to predict the highly accurate results, even when things do
change due to environmental reasons.
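The closed-loop pattern described here, detect accuracy drift at the edge, trigger central retraining, and redeploy, can be sketched as a simple monitor. The window size, accuracy floor, and trigger logic below are assumptions for illustration, not FogHorn's actual mechanism.

```python
from collections import deque

WINDOW = 5            # how many recent predictions to judge accuracy over
ACCURACY_FLOOR = 0.7  # retrain when rolling accuracy drops below this


def should_retrain(recent):
    """recent: deque of 1/0 correctness flags for the last WINDOW predictions."""
    return len(recent) == WINDOW and sum(recent) / WINDOW < ACCURACY_FLOOR


def monitor(outcomes):
    """Return the indices at which a retrain would be triggered."""
    recent = deque(maxlen=WINDOW)
    triggers = []
    for i, ok in enumerate(outcomes):
        recent.append(ok)
        if should_retrain(recent):
            triggers.append(i)
            recent.clear()  # assume the retrained model is redeployed here
    return triggers
```

In the real loop, the trigger would ship recent data to a central retraining module and push the updated model back to the edge device, iterating until accuracy recovers.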
And that's one of the things that we've talked about previously. I mean, we did a whole episode
on transfer learning with Alfrane. And it was the same kind of thought that you can't just kind of
set it and forget it. You have to think about how to keep these things fresh, how to keep them up
to date. And one of the things that struck me when you were describing the Foghorn solution was there's sort of another question here too that's maybe more of a philosophical one. Is it,
if anyone can program these systems using, you know, visual on-screen icons and clicks and drawing a
border around this and so on, is there a risk of sort of amateurish mistakes, basically by making the
system almost too easy to use? In other words, if I'm working at an oil rig and I'm able to get in
there and mess around with it, could I either make a foolish assumption or a foolish mistake
or cancel some feature that somebody else had set up?
Is there a way to kind of control how this application is used and perhaps abused on a
day-to-day basis? Yeah, you bring up a good point, Stephen, which is not everybody in the plant
actually will have access to be able to change things, right? So we protect this through what we call role-based access control. Typically, an administrator or supervisor,
whoever comes in, goes, changes, makes the changes, it's all set. Now, the rest of the folks are only
going to be able to monitor, observe, identify what issues are there. If and when somebody wants
to go make a change, then whoever has permission or access to it can only do that. So we give the ability to define the roles and ability to say who is actually allowed to do that.
And that absolutely must be done because inadvertently somebody might actually make
a mistake to undo whatever somebody else has done in the past. That's absolutely
not a good thing to happen. So it's controlled through access privileges.
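The role-based access control Sastry describes could be sketched like this; the role names and permission sets are hypothetical, chosen only to mirror the administrator/supervisor/operator split he mentions.

```python
# Illustrative role-based access control: only roles with the "configure"
# permission may change a deployed application; everyone else can monitor.

PERMISSIONS = {
    "administrator": {"monitor", "configure"},
    "supervisor":    {"monitor", "configure"},
    "operator":      {"monitor"},
}


def can(role, action):
    """True if the role is granted the given action."""
    return action in PERMISSIONS.get(role, set())


def change_setting(role, setting, value, config):
    """Apply a configuration change, refusing roles without permission."""
    if not can(role, "configure"):
        raise PermissionError(f"{role} may not change settings")
    config[setting] = value
    return config
```

This is what prevents one operator from inadvertently undoing another's configuration: the check runs before any change is applied.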
That makes sense. And definitely a crucial feature. One of the things that I'm curious about as far as deploying machine learning
or any artificial intelligence at the edge is what the environment, the compute environment
you're walking into is, right? So I'm assuming if you're going into a hospital or a factory or an
oil rig, that there's going to be a variance from each one and maybe even at a single site.
This is going to be a heterogeneous environment, right? You're not necessarily going to have all
of a single type of chip or a single type of server. How much of a challenge is that to move
and optimize AI for the edge? Yeah. So this is actually, we identified this from the get-go
that it is exactly as you're pointing out. Every single environment is different. Even within
the same rig, even within the same plant, they could have multiple different types of chipsets,
hardware, types of devices, and all of that. The one way we address this at FogHorn, for example,
is to containerize it. So we actually build our software and compile it for,
if you look at it broadly speaking, the two types of hardware chipsets out there right now:
either Intel-based x86 or ARM-based.
Every single chipset actually falls into one or the other.
And of course, then you've got GPUs like NVIDIAs
and TPUs and so on.
So what we try to do is to build the software,
compile them into each of these different architectures,
and then containerize them.
So by doing that, if the customer comes and says,
I've got a Raspberry Pi-like device, I've got this Intel device, I've got this control system, I've got this other thing,
no problem. We'll ship you the container for that hardware chipset, it's guaranteed to run.
That's how we've handled that. And obviously, it means that you have to have support for
containers, containerization or Docker on those systems. And for the most part, all of the flavors
of Linux, Windows, and some real-time operating systems
actually do support that.
So that hasn't been a challenge.
But we have run into issues,
especially with, for example,
with Honeywell and a few others
where they manufacture handheld devices
that are Android-based.
But Android, for example,
does not have container support.
So what we ended up doing was
to take that entire software stack
and build it as an app,
because Android
has the concept of these app stores,
and the entire capability runs as an app there as well.
So, but for the most part, that's how we've handled it
to account for differences and variations
in all these different hardware architectures.
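Selecting the right per-architecture build, as described above, might be sketched as follows. The image names are hypothetical; in practice a container runtime usually resolves the correct image automatically from a multi-arch manifest list, so this explicit mapping is only a sketch of the underlying decision.

```python
import platform

# Hypothetical mapping from detected CPU architecture to a container build.
ARCH_TO_IMAGE = {
    "x86_64":  "edge-runtime:amd64",
    "amd64":   "edge-runtime:amd64",
    "aarch64": "edge-runtime:arm64",
    "armv7l":  "edge-runtime:arm32",
}


def pick_image(machine=None):
    """Pick the container build for this device's CPU architecture."""
    machine = machine or platform.machine()
    try:
        return ARCH_TO_IMAGE[machine]
    except KeyError:
        raise RuntimeError(f"unsupported architecture: {machine}")
```

Android is the exception Sastry notes: with no container support, the same logic has to be packaged as an app instead.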
Yeah, this reminds me of our recent conversation
with OctoML on Apache TVM
in terms of optimizing machine learning
for various end user platforms, as
well as, of course, the recent Intel launch where they talked about the capabilities of
the new Xeon processors to do machine learning processing near the edge.
But you're talking about even smaller systems, Raspberry Pi class systems, and even mobile
devices, which is, again, I mean, it's a little
confusing because the average person might be thinking mobile devices, IoT, you know, is this
a Philips Hue competitor? Not at all. That's not what we're talking about here whatsoever. Yeah.
That's exactly right. In fact, I want to touch on one point here, which is the core concept of
what we call edgification or machine learning to the edge, right? So all of these things that you just talked about,
Intel, for example, released this new class of Xeon processors
and they have a framework called OpenVINO.
And then TVM, for example, is another library that optimizes that.
But in all of these things,
if you just really back up a second, level up, right?
What's the challenge?
The challenge is typically machine learning models,
when they're built, they are built to assume almost infinite amount of compute, elastic compute available to you, memory available
to you, resources available to you. But that's not the case in constrained compute environments,
right? So the process of taking machine learning models that are built to run in a cloud-like
environment, and then run the same models in an edge-like environment,
is what we call edgification.
How do you edgify those machine learning models?
That edification includes broadly two buckets, if you will.
One is a hardware-based acceleration.
If you have a faster CPU, like a Xeon processor
or any other Intel faster CPU or a GPU, that's one thing.
But a lot of the times, like you said,
there may be Raspberry Pi-class devices or maybe smaller devices.
There's not a whole lot of hardware that you can leverage.
This is where we also
use software-based acceleration.
There are a number of techniques in machine learning,
like quantization,
binarization, pruning, and things of that nature,
that condense and reduce the model size,
the model footprint, to be able to run effectively,
and still with the same, if not a higher, level of accuracy.
And these are all techniques that we leverage
to be able to run these constrained environments.
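To make the quantization idea concrete, here is a minimal sketch in Python with NumPy (an illustration, not FogHorn's actual implementation) of symmetric post-training int8 quantization, which shrinks a float32 weight tensor to a quarter of its footprint at the cost of a small round-off error:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map float32 weights
    onto the integer range [-127, 127] using a single scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 tensor."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)  # toy weight matrix
q, scale = quantize_int8(w)

print(w.nbytes // q.nbytes)  # → 4 (int8 is 4x smaller than float32)
print(float(np.abs(w - dequantize(q, scale)).max()))  # small round-off error
```

Production toolchains typically go further than this per-tensor sketch, using per-channel scales, calibration data, or quantization-aware training to hold accuracy at or near the original level.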
Excellent.
Well, I really appreciate this conversation.
I think it does fit in so well with so many of the things
that we've been talking about here on Utilizing AI.
And I appreciate you joining us.
Absolutely.
It's my pleasure.
Thank you.
As you know, we warned you that we
were going to have three fun questions here
at the end of each episode.
And the time has come for that.
So just a reminder to our audience,
we did not give Sastry a heads up on which questions
we were going to be asking.
Though I've got to admit, I do tend to pick
them based on the conversation in hopes that they'll provide a fun answer. So let's start off
here with the first question. In your mind, would you say that machine learning is a product or is
it a feature? Machine learning is not a product. It is more a capability of science or analytics.
It's not a product in itself.
You have to bake that into a product in order for somebody else to use it.
Great. Next question, and this is particularly apropos for FogHorn. Everyone's used to having
their personal assistants that answer when you call out in the middle of the night,
what time is it? What's the weather? That sort of thing.
When do you think that we'll have video-focused,
video-based machine learning assistants in the home
that operate like these personal audio assistants?
It's actually there now to some extent.
If you really think about it,
all of the security cameras,
all of these different monitoring cameras, right?
As soon as they identify, say, an intruder coming in, or an
animal walking by, or something else happening, you get an immediate notification as well. You can
actually fine-tune what they identify. The same thing is also happening in industrial settings too.
As I was giving an example, right? As an operator, you can clearly mark and say,
this is specifically what I'm looking for. As soon as that shows up in the video, I want to get alerted right away. So that's actually
happening now. It's there now. It's only going to be adopted more and more
over the next few weeks or the next few months. Yeah, and it's interesting because
that's roughly what you guys are already doing in industrial settings. That's right. All right, third question. You've talked quite a bit about how this technology can help in factories
and oil refineries and so on. Are there any jobs that are going to be completely eliminated by
AI technology in the next five years? I wouldn't say that they will eliminate
the jobs, but maybe the types of roles probably
will be different, right? So today, if you look at it, there are manual quality inspections. And
what happens today is you're manufacturing a product or you're doing some digging in an oil
rig, and there are human beings that are doing manual inspections, checks and all of
that. By automating this with machine learning, you may need less of those people,
but you will need other types of people to say,
look, now I've got a notification.
I got to go fix that.
Now, maybe it's a different skill set
that you're hiring for.
So the net-net, there may not be any reduction
in the workforce, but there may be evolution
in the types of skill set that you will need
in this new era that's going on.
So you no longer need to hire a guy to stand around and look at that hole and make sure
nothing comes out of it.
That's exactly right.
In fact, with the flare stack monitoring, previously, before we put our solution out there, somebody
was 24 by 7 just looking at the video camera to see if there is a flare, right?
You don't need somebody to just go look at that flare 24 by 7.
All right.
Well, thank you so much, Sastry. Where can people connect with you
to learn more about your thoughts
on enterprise AI
and maybe to reach out
if they want to reach you?
Absolutely.
They can actually look
at my LinkedIn profile.
All of my blogs
and articles
are linked from there.
If you search for
Sastry Malladi,
you can easily find me.
If they want to learn
more about FogHorn,
of course, they can send a note to info at foghorn.io.
I'm on Twitter as well, but those are the places that they can reach me on.
Thanks a lot.
And Chris, I know that you've got a little bit of news.
Where can we reach you?
Yeah, so I'm now full-time self-employed. You can find out all about that and the services I'm offering at chrisgrundemann.com.
You can also check me out on LinkedIn, where I post as often as I can, and at ChrisGrundemann on Twitter as well.
Thanks a lot. And you can connect with me on Twitter at SFoskett. You can also find us
online in many different places, including gestaltit.com, utilizing-ai.com, or utilizing
underscore AI.
So thanks for listening to the podcast today.
If you enjoyed this discussion,
please do subscribe, rate, and review the show.
And please do share it with other people that you think might enjoy our discussions.
Again, this podcast is brought to you by gestaltit.com,
your home for IT coverage from across the enterprise.
Thanks for listening, and we'll see you next Tuesday.