Orchestrate all the Things - Machine learning at the edge: A hardware and software ecosystem. Featuring Alif Semiconductor Sr. Marketing Manager Henrik Flodell, Arm Director of Ecosystem and Developer Relations for Machine Learning Philip Lewer, and Neuton CTO Blair Newman
Episode Date: September 21, 2021
Being able to deploy machine learning applications at the edge bears the promise of unlocking a multi-billion dollar market. For that to happen, hardware and software must work in tandem. Arm's partner ecosystem exemplifies this, with hardware and software vendors like Alif and Neuton working together. Article published on ZDNet
Transcript
Welcome to the Orchestrate All the Things podcast.
I'm George Anadiotis, and we'll be connecting the dots together.
Being able to deploy machine learning applications at the edge
bears the promise of unlocking a multi-billion dollar market.
For that to happen, hardware and software must work in tandem.
Arm's partner ecosystem exemplifies this,
with hardware and software vendors like Alif and Neuton working together.
I hope you will enjoy the podcast. If you like my work, you can follow Linked Data Orchestration on Twitter, LinkedIn, and Facebook.
It's a pleasure to see you again, George and Philip. It's good to see you as well, and it's a pleasure to meet you, Henrik. So my name is Blair Newman. I'm the CTO here at Neuton. I've been focused on developing a number of different
SaaS-based or zero-code solutions. These have included IoT solutions, augmented reality, and some network automation services. And specifically, what we're here to discuss today, or at least for myself, one of the areas that I'll be contributing to, is our zero-code SaaS-based solution, Neuton.
So at least for myself, prior to joining Bell, I spent a little over 11 years at Deutsche Telekom, where I had the opportunity to perform some similar tasks as it relates to transforming our portfolio.
And it's a pleasure to be here with all of you gentlemen today.
So looking forward to today's discussion.
Great. Thank you, Blair.
All right.
Go ahead.
You want to go, Philip?
Sure. I'll introduce myself.
Great to be here. So my name is Philip Lewer. I'm the Director of
Ecosystem and Developer Relations for Machine Learning at Arm. I've been at Arm for four and
a half years. So Arm is the world's leading semiconductor IP company. I think, due to our partnerships, we're now at over 190 billion chips that have been shipped with Arm technology inside. We're very thankful to those thousand-plus ecosystem partners
that have made that happen. Basically, Arm's technology touches about 70% of the world's
population across a wide variety of market segments. Prior to Arm,
I spent about 14 years in semiconductors. When I initially joined Arm, I was very much focused on
the software side of the business. And then prior to semiconductors, I worked in EDA and other
industries. So I've touched quite a bit in tech and it's great to be focused here
on the machine learning aspects of the business at ARM.
Okay, great.
Thank you, Philip.
And so, yes, there's no confusion anymore.
It's just you, Henrik.
Exactly.
Yeah, so my name is Henrik Flodell.
I'm the Senior Marketing Manager at Alif Semiconductor.
And we are a startup in the silicon space.
We were incorporated and founded in early 2019.
And we very recently, just September 1st, came out to the public with our first press release
announcing the products that we intend to introduce to the market.
A little bit about myself.
I've been in the embedded space for the past 20 years or so.
Started out more on the software side, working for enablement companies
that provide development tools for embedded systems.
And for the last 12
years or so I've been on the hardware side working for a
number of the semiconductor companies that are known and
used throughout the industry such as Atmel, Microchip,
Renesas, etc. And yeah.
Okay, great. Actually, Henrik, I'm going to,
to keep the ball in your court, let's say.
And the reason is because, to be honest with you, at least I kind of feel, you know, a little bit less bad about myself. I was not familiar with Alif up until recently. And so at least, you know, I'm excused, because, as you mentioned, you just had your first press release. So it's fine that I didn't know.
So just, you know, to satisfy my own curiosity in a way, let's say,
I'm going to ask you to expand a little bit on Ensemble and Crescendo.
Those are the two product lines that you released and were included in your
press release as well, if I'm not mistaken, right?
Correct. Correct. Those are the two product lines.
But let me just take a step back and kind of talk a little bit about why we decided to go at it and design these products in the first place. Alif was founded by Syed Ali, who previously founded and drove Cavium, as well as Reza Kazerounian, who has a history of managing microcontrollers for companies like Freescale, ST, etc. And Alif was founded because we looked at what is going on in the embedded industry, with all the consolidation that is happening, and at the platforms that the semiconductor companies use to spin their products out of. Designers have to do more complex designs with more parts in order to enable all of the functionality that is required for these types of applications. And because of all of the consolidation in the industry, there has been an inertia in updating these platforms to accommodate and make it possible for the vendors to actually add this functionality. So we saw a great opportunity for a company with a lot of experience in designing semiconductor
products to come in and develop a new platform from the ground up based on the very latest
technology, on which functionality like ubiquitous wireless connectivity, edge processing with AI and ML capabilities,
very strong security, and excellent low-power characteristics could be deployed.
So we decided to do exactly that.
And the way that we have gone about this, we have an architectural silicon foundation that allows us to release a very broad
range of microcontrollers and microprocessors. We call them fusion processors because they combine
both of these elements, that scale really well from single-core devices up to multi-core devices,
and that also have the possibility to include integrated
wireless communication. We are starting out with LTE cellular communication monolithically
integrated with these devices. In addition, there is strong security built in from the ground up, so that you can protect both the communication from the device and the data on the device itself. And things like planning for the future by adding accelerators for AI and ML workloads,
accelerated graphics operations, imaging, sensor input, and all of the functionality that is
essentially required for the IoT devices that people are designing today. So we want to go at
it from the perspective of if you come to Aleph, you will find a single chip solution for your
design. And we can remove a lot of the complexity and cost and software compatibility hassles that
you have to face when you look for other solutions in the market.
Thank you. Thank you for the introduction. And I guess we'll have the chance to switch to Philip now.
And well, Philip, in your case, I didn't have any issues.
Let's say I'm pretty familiar with ARM and what you do.
So in your case, I'd like to make my question a little bit more specific, let's say.
So I know that ARM has quite a wide product line,
and I was wondering if you'd like to share a few words
about the markets that you're targeting.
And among those markets,
how important do you find the segment of workloads
that are driven by machine learning?
And in addition to that,
I was wondering if you'd like to quickly share some words
on your AI platform, and which parts of it do you think
are well-suited for low-power demands?
Okay, great.
There's quite a lot to unpack there,
and I'm happy to do that.
If you look at the markets where ARM is today,
it's quite a wide variety.
I'll be doing a disservice here by just naming a few
because it's quite a few.
Everything from automotive
to various types of IoT-related sub-segments,
such as smart home, smart cities, again, to name just a couple within the IoT segment,
but across IoT. In the client space, you see ARM technology in wearables, laptops, gaming consoles.
And then in infrastructure, you see ARM in everything from communications infrastructure
to cloud-based hardware, so server-based hardware.
So when you look at ARM technology, it's being deployed in everything from cloud through to the endpoints that enable
tinyML to happen. So it's quite a wide spread that we cover. And we're seeing machine learning
go across that entire spectrum. I mean, the focus today is, I'd say, a bit more on, and we'll get into this a little bit later on as I carry on here, some of our microcontroller and application processor-based devices. But frankly, machine learning is happening everywhere, and this is where partnerships come in, with those technologies that enable AI to happen wherever possible. At the base, there's a set of CPUs like our Arm Cortex-A and Cortex-M families and the Arm Neoverse family, GPUs like our Arm Mali family, and then our neural network processing units, our NPUs, like the Ethos family.
That's sort of the foundational underlying hardware pieces.
Built upon that, there's work that we do to optimize the libraries for machine learning
mathematical applications and inference engines to make
machine learning workloads run faster on Arm, things like Arm NN, the Arm Compute Library, and CMSIS-NN.
And then at the top of the stack, you have the software frameworks as well as the developers
that use those software frameworks.
And then you have the partners that bring the software
that makes machine learning on ARM possible.
So all of this together comprises the Arm AI platform.
When you start getting into the edge
and you start looking at endpoints
that are at the closest point to the end user possible
and the processing that's occurring there, this is where you begin to intersect the world of TinyML.
And in those types of devices is where you do see like our Cortex-M family, which is the basis for
microcontroller-like functionality.
Henrik can talk a little bit more about the application of that in his case.
And then coupled with that, what's really exciting is the Ethos-U NPU line.
So now you have a processing unit that's really focused on graph-based computation.
And when you couple that with the CPU, you get this amazing performance per joule metric
that allows machine learning to happen in circumstances where the power budget is limited.
So that opens up a whole world of possibilities.
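To make that pairing concrete, here is a minimal sketch of what invoking a model on a Cortex-M-class device typically looks like with TensorFlow Lite for Microcontrollers, one of the inference engines used across the Arm ecosystem. The model array, arena size, and operator list are illustrative assumptions, and a model targeting an Ethos-U NPU would first be compiled with Arm's Vela tool, which folds supported operators into a custom op that the NPU executes.

```cpp
// Minimal TensorFlow Lite for Microcontrollers inference flow for a
// Cortex-M-class device. g_model_data is a placeholder for the byte
// array produced by converting (and, for Ethos-U, Vela-compiling) a model.
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model_data[];  // flatbuffer model (placeholder)

constexpr int kArenaSize = 32 * 1024;       // working memory, tuned per model
static uint8_t tensor_arena[kArenaSize];

int run_inference(const int8_t* features, int n_features) {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the ops the model needs. AddEthosU() routes the
  // Vela-generated custom op to the NPU driver (assuming a build with
  // Ethos-U support; on a plain Cortex-M it is simply omitted).
  static tflite::MicroMutableOpResolver<3> resolver;
  resolver.AddFullyConnected();
  resolver.AddSoftmax();
  resolver.AddEthosU();

  static tflite::MicroInterpreter interpreter(model, resolver,
                                              tensor_arena, kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  // Copy quantized input features and run the graph.
  TfLiteTensor* input = interpreter.input(0);
  for (int i = 0; i < n_features; ++i) input->data.int8[i] = features[i];
  if (interpreter.Invoke() != kTfLiteOk) return -1;

  // Return the index of the highest-scoring class.
  TfLiteTensor* output = interpreter.output(0);
  int best = 0;
  const int n_out = output->dims->data[output->dims->size - 1];
  for (int i = 1; i < n_out; ++i)
    if (output->data.int8[i] > output->data.int8[best]) best = i;
  return best;
}
```

The application code stays the same whether the graph runs on the CPU or is offloaded to the NPU, which is part of what makes the performance-per-joule pairing Philip describes transparent to the developer.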
Okay, thanks. And yeah, I understand that the partnerships aspect that you mentioned is obviously near and dear to you, because, well, this is what you're in charge of. But it's also very relevant to today's discussion, because in a way it's the common element between all three of you.
So we'll get to that part a little bit later. Right now, I would like to ask Blair, in his turn, to just elaborate a little bit on your own target audience for Neuton's offering, even though,
well, I'll be cheating here a little bit
because it's something that the two of us have discussed before,
but not necessarily everyone who may be listening is familiar with that.
So it's fair to give them a kind of onboarding, let's say.
And also, if you'd like to focus on the parts of it
that are specifically targeted, let's say, for machine
learning driven workloads at the edge.
All right.
So one of the things that I mentioned is that we kind of, let's say, embarked on this journey
of building these zero-code SaaS-based solutions roughly around six years ago. But actually, as an
organization, just to provide a little bit more context, and this will lead into our target
audience, we're an organization that's been around for a little over 17 years. And part of our background, which I always like to mention, has historically been systems integration.
So when you begin to talk about, at least for myself, tiny ML and let's say bringing intelligence to the edge,
you know, one of the things that I found is that it's one thing to be able to produce a model.
It's another thing when you actually implement this model into production, because without that, it's kind of just, let's just say a shiny object.
But a little bit more context as to how we got here, which kind of will lead into how
we identified our target audience.
So during this period, we've historically been leveraging all of the well-known traditional
frameworks that are out there working on point in time or managed service ML projects.
And one of the challenges or a number of the challenges that we ran into that I'm sure
everybody is familiar with, right, is the availability of resources, the availability
of capital, the ability to actually produce solutions, I would say, you know, in an effective timeframe, right?
So these were kind of like all of the challenges that, you know, we were running into,
but also we experienced that our customers were running into.
And one of the things that we challenged our team was,
how can we begin to eliminate all of these different challenges?
And we began to really develop this belief and mantra that we really wanted to truly bring machine
learning to everyone in a very, very, very real sense. And sometimes when you want to disrupt the
status quo, I would say you have to destroy the status quo. And that's one of the things that
we've done over time, and I believe that we've accomplished. And we've done that in a couple
of different areas. So if we truly wanted to make machine learning available, not only to the data
scientists, but to the non-data scientists, to the business user, we really had to change how
machine learning was approached. And that is what really kind of embarked us on this journey
as it relates to building our platform, Neuton. And so we accomplished this really in a couple of different ways.
From a technical perspective, we completely destroyed the approach as to how models were
built, where today models are built, let's just say, with pre-existing structures from
a top-down perspective.
In addition to that, we wanted to completely eliminate the need for our consumers, our customers, to have any technical skills in order to produce a model. Those were two very key attributes.
Number one, if you truly wanted to bring machine learning to the edge, if you truly wanted to bring machine learning to the fingertips of everyone, you had to eliminate one of those barriers along the way.
Of course, our platform is really designed so that we enable the non-data scientists, but we also empower the data scientists as well. And this matters because it really begins to unlock the ability for TinyML to truly grow
and really, I would say, the adoption for TinyML to take off. One of the huge challenges that
exists today when organizations like ARM continue to develop more and more effective
microcontrollers on the edge is how can you efficiently now integrate those models
into those microcontrollers. The approach that exists now is a fairly iterative and laborious process to produce a model and successfully integrate that model into said microcontrollers. We take a little bit of a different approach. We build all of our models
from the bottom up. So as soon as those models are produced, they can immediately, without any
interaction, then be integrated into those microcontrollers. So our customers are really
empowered to go through the entire life cycle of bringing machine learning to the edge, number one, without any technical skills,
and number two, without going through that laborious and iterative process of now integrating
that model into the microcontrollers that you may find with ARM or that you may find with Alif as
well. So those are, let's say, two of the areas that we've really been focused on. So number one,
I would say that we're in a very unique
position when it comes to our target audience. We're enabling the data scientists, non-data
scientists. But when you look at the tiny ML space, it really covers a number of different
disciplines. Sometimes you're talking to a machine learning engineer. Sometimes you're talking to an
embedded engineer. Sometimes you're talking to someone who has a tremendous amount of experience
from a hardware perspective. And sometimes you're talking to a business user. And one of the things that
we're extremely proud of is that our solution is able to address all of their needs across the
board. So we find ourselves in a unique position that we can really enable all of those different
disciplines along the way. And I mentioned the technology component because when you begin to think about
how machine learning is really growing today,
excuse me, tiny ML is really growing today,
we believe our solution is really enabling
that growth from an edge perspective.
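To illustrate what skipping that integration step can look like in practice, here is a hedged sketch of calling a compact, ready-made model that has been emitted as portable C source from microcontroller firmware. The header and the model_* functions are hypothetical placeholders for illustration, not Neuton's actual API.

```cpp
// Hypothetical integration of a generated, ready-to-run model into MCU
// firmware. The header and the model_* functions below are illustrative
// placeholders, not any vendor's actual API.
#include <cstdint>
#include "generated_model.h"  // hypothetical header emitted by the AutoML tool

void on_sensor_sample(const float* sample, uint32_t count) {
  // Feed the latest sensor reading straight into the generated model.
  // Because the model is emitted in its final, compact, on-device form,
  // there is no separate conversion or quantization pass in between.
  if (model_set_inputs(sample, count) != 0) {
    return;  // e.g. wrong feature count
  }

  const float* scores = model_run_inference();  // hypothetical call
  if (scores != nullptr && scores[0] > 0.5f) {
    // Act on the prediction: raise an anomaly flag, wake a radio, etc.
  }
}
```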
Thank you.
Thank you, Blair.
Philip, I'd like to return to your side
because, well, in a way,
as I mentioned previously as well,
ARM is sort of the glue, let's say,
for today's conversation
and also the occasion
because, well, you just announced
a partnership with Neuton.
And I think in order to put that in context,
and I believe you also have some relationship with Alif.
So in order to put that into perspective,
I think it would be helpful if you would like to speak a little bit in general about your partnership strategy
and the role that it has in growing Arm in general.
And then specifically, the relationships you have with Neuton and Alif.
And where do these relationships fit in your overall strategy?
Okay.
Well, ecosystem is incredibly important to Arm.
Arm lives, breathes, eats, and sleeps ecosystem. This is what we do.
Our partners are just incredibly important to us.
And if you look at Arm's history,
Arm has been all about partnerships. I mean, it's our partners that bring the technology to life
and get the technology into the hands of the end customers through all the things that they do.
So we, you know, we enable that and we work with them very closely to make that happen. So
it is all about the ecosystem. It is all about the partners. And that makes it quite easy at Arm. So if you look at the role that my team plays, it's really focused on building out the ecosystem around machine learning. Machine learning workloads are going everywhere, from the cloud through the edge into the tiniest of endpoints.
And so it's really important, especially for developers that come from, let's say, conventional programming backgrounds, to bridge that gap
into the world of machine learning.
And so the purpose of the Arm AI Partner Program is to pull together, in an ecosystem,
a set of companies that are coming at the machine learning problem from different angles. You know, we have,
you know, hardware companies, and then in the world of software, you have companies that are
focused on, you know, algorithms to solve a specific problem. You have companies like Neuton
doing something very disruptive in terms of model creation. And this is all to just make machine learning easier. You know, the purpose
of this is to bring together these companies so they can work together. Proof of that is
the conversation we're having today, with all of us being brought together, and specifically the combination of Neuton and Alif. And if you look at our relationship with Alif, it's very close.
You know, Alif just recently announced the Ensemble fusion processor line, and in that there is, you know, Arm technology.
My focus has been and my team's focus has been with Henrik on the machine learning aspects, which we talked a little
bit about. There's the Cortex-M55, the Ethos-U55, and the innovative way that they've designed those into
their device. And so what's fascinating, and this is where ecosystem comes together, is somebody
looks at that chip from Alif and says, okay, I need to obviously apply it to a given application. How do I do that? One of the most common things they would most likely do with that chip is
get a model running on it, right? And it's like, well, how do I do that? And then someone like Neuton comes in and says, well, that's where Neuton fits in. So when you combine Neuton and Alif, you have this hardware-software combination that, coming together, allows customers to roll out their devices to market.
And what we're trying to do is just be a conduit to make that happen.
And so how we measure success is, can we make these relationships as frictionless as possible?
And at the end of the process, is everybody satisfied?
You've got customers that are satisfied.
You have partners that are satisfied.
If they're happy, we're happy.
That's how we measure success.
Okay.
That's a good way to measure success, if you ask me.
And with that, I'd like to
return to Henrik.
And I think it's a good
opportunity because, well, you kind of
promised to talk a bit
in more detail
about Ensemble and Crescendo
and having
Philip
already mentioned the fact
that you're leveraging Arm technology for those.
And I was wondering if you could elaborate a bit on exactly how you do that.
I saw in your press release that you built on ARM's Cortex.
And I was wondering if you'd like to explain the specifics.
Sorry, I had to unmute myself there.
I said, I can definitely do that.
It's going to take quite a long time if I'm going to go through all the specifics.
But if you kind of look at the way
to how we have set up
the underlying technology platform
that we have created the Crescendo
and the Ensemble family from.
We have partnered very strongly with Arm in many areas of these devices because,
just as Phil says, ecosystem is really key, even for companies like us that design the silicon.
When we bring out a chip, we want developers to be able to be productive with the silicon that we make as soon as possible. When there is an ecosystem of tools and software that is already up and running and that can support the type of innovations that come from the core IP,
you know, the processing cores and things like that, it gives them a great head start. And in
many cases, they can leverage, you know, the tools and the workflows that they have already
established over previous projects. So we have designed our technology platform for both the Crescendo and the Ensemble family of devices around a reference that ARM has called CoreStone, which is very powerful and provides a very scalable interconnect between all of the cores that are present in the device, making software portability kind of up and down a performance spectrum, very seamless.
And we are creating distinct devices that scale from single core MCUs, and in those MCUs we use
Arm's latest Cortex-M55 core with optional integration of the Ethos-U55 AI/ML accelerator as well, up to a dual-core solution with two M55s
and two Ethos units,
a triple core device that also features a microprocessor,
a Cortex-A32,
and finally a quad core processor
where you have two A32 application cores
as well as two real-time M55 cores.
So we have quite a broad spectrum of performance in these devices.
This is all designed on top of a security architecture that we have integrated,
which controls all of these cores and manages resource allocations between the cores and the peripherals
and the memories, the shared memories that we've integrated into silicon fabric. And then in addition to that,
we have an optional wireless connectivity subsystem and this is the distinction between
the Crescendo and the Ensemble families. The Ensemble family is targeting more embedded processing; it does not have the wireless communication subsystem. But the Crescendo integrates that, and it contains a full, complete IoT LTE-type modem supporting LTE Cat-M and NB-IoT connectivity, as well as a GNSS receiver with full global constellation support for applications that require that. We're also
integrating a separate security enclave in this communication subsystem that is able to host an
integrated SIM so that you can essentially eliminate the external SIM card from your
design as well and have everything integrated into the chip. And then in addition to that,
we have our aiPM power management system
that manages all of the aspects of the device
in terms of what is turned on at any given time
in a very granular and intelligent way
to allow developers to really benefit from, number one, the technology nodes that we are on and the fantastic power characteristics that we get from that, and also from the way that we
have architected the device itself in terms of just low power system architecture functionality
so that you can create very battery efficient devices.
And when we designed this platform, we really wanted to be at the very forefront of technology and make sure that we had the most up-to-date core technology, the most power-efficient
systems.
So Arm was a natural partner for that.
They have great IP and with their eye towards ecosystem enablement, we know that
people are going to be able to be productive with these devices as soon as they get their hands on
them. Okay, just a quick follow-up question. You mentioned that the key difference between the two
product lines is that one is more integrated, let's say, than the other in the sense that it also
contains connectivity aspects.
So I was wondering if you have any idea as to the power characteristics, the power consumption characteristics, of one versus the other. And also, well, in the case of the chip that doesn't have the embedded connectivity features, I guess you have to cover those needs using some external hardware. So what's the comparison versus a solution like that?
Well, the Ensemble family and the Crescendo family are kind of based around the same system architecture. So if you look at it holistically, we have the same excellent power characteristics. When you do a power analysis on a system like that, that is cellularly connected,
meaning that it is essentially always talking to towers because you have to maintain that network.
When you look at the duty cycle of an application, it tends to be the background power draw that goes to maintaining the cellular connection, usually called the eDRX floor, that eventually overshadows all other power draw in the system. Embedded systems, as you know, tend to come up and perform their task, and you want that task finished as soon as possible so that you can go back to a low-power state and conserve battery power.
But when you're communicating on a network, you kind of need to be available for that network the entire time.
So it usually comes down to, well, how much power are you actually consuming just to participate in the network?
And that's where we have put the majority of our efforts in optimizing the Crescendo family specifically, to keep that
power draw as low as possible. And when we benchmark ourselves against competing solutions
that have the same type of cellular support that we do, we find that we are usually about two to three times lower than they are in terms of that floor current draw, meaning that your application will likely last for a lot longer on the same battery with an Alif chip compared to something else.
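Henrik's point about the floor current dominating is easy to check with back-of-the-envelope arithmetic. The figures below are illustrative assumptions only, not measured numbers for any of the parts discussed: a device that wakes for one second every five minutes spends about 99.7% of its time at the eDRX floor, so cutting that floor moves battery life almost proportionally.

```cpp
// Back-of-the-envelope battery-life estimate for a duty-cycled,
// cellular-connected device. All values are illustrative assumptions,
// not measured figures for any specific chip.
#include <cstdio>

int main() {
  const double floor_ma    = 0.300;   // assumed eDRX floor current, mA
  const double active_ma   = 20.0;    // assumed current while awake, mA
  const double active_s    = 1.0;     // seconds awake per wake-up
  const double period_s    = 300.0;   // one wake-up every five minutes
  const double battery_mah = 1000.0;  // assumed battery capacity

  // Time-weighted average current: the short active burst plus the
  // always-on network-maintenance floor.
  const double duty   = active_s / period_s;
  const double avg_ma = duty * active_ma + (1.0 - duty) * floor_ma;

  const double days = battery_mah / avg_ma / 24.0;
  std::printf("average current %.3f mA -> about %.0f days\n", avg_ma, days);

  // With these assumptions: ~0.366 mA, ~114 days. Dropping the floor 3x
  // to 0.100 mA gives ~0.166 mA, ~250 days -- the floor, not the burst,
  // sets the battery life.
  return 0;
}
```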
Okay, great, thank you. And in that same line of thought, a question for you Blair. So,
and again I think I know at least partially the answer, but that's not true for everyone necessarily.
So I would like to ask you to highlight the specific features in Newton's architecture that make it particularly suitable for applications at the edge that need to be frugal in terms of power consumption?
Yeah, certainly.
So I think one of the key takeaways, if you really listened to Henrik just giving an overview of their solution,
is that especially when you begin to talk about the edge or tiny ML,
basically, you know, battery consumption is your currency more often than not, right?
Once you reach production, that becomes really one of the key areas
that you really need to focus on.
Now, as it relates to Newton, one of the things that I began to highlight is that one of the areas that we've
really began to change and began to look at differently is how models are actually built,
right? So of course, our platform is a zero-code automated solution, but it's our approach to building models which can have a significant impact on how much consumption you have. As we build models, we're validating the accuracy of the models along the way.
So as you go through that initial build out of a model, it's immediately ready for implementation and is extremely compact.
One of the things you may notice on my background is we say, you know,
build fast, build once, and never compromise. And what that means is that, of course, we're
accelerating the time to market for our customers as it relates to our automated solution.
You only have to build your models once, and they come out extremely compact without compromising any accuracy. And we have seen that oftentimes our models can be up to a thousand times smaller than those from other frameworks.
We see that our inference time is also extremely fast or faster than some of the other frameworks that are out there. And more importantly, when you go through that process of integrating your
model into your controller, you know, you don't sacrifice any accuracy along the way. And when
you're talking about tiny ML, or when you talk about the edge, and as Henrik, I think perfectly
stated, that battery consumption becomes your currency as it relates to your
production implementation. So that's from, let's say, from a technology perspective,
that's one of the things that we really, really differentiate ourselves as it relates to being
able to produce a model fairly quickly without compromising any accuracy and in one particular iteration.
And then lastly, especially once you reach production, now one of the things that we
also offer is a complete lifecycle management tool that our customers can use as a part
of our platform to really understand a couple of different areas. I like to call them the four W's,
starting first with what I call the what, you know, what's in your data. So we provide what we
call an EDA tool, which gives our customers complete transparency into what their data looks
like prior to building a model, which can be equally as important. We've heard it all our lives,
junk in, junk out. So we provide that EDA tool that gives our customers insight into their data prior to training. And then after that, we provide what we call a feature importance matrix,
which enlightens our customer to understand what are the top 10 features that are influencing their predictions and what are the bottom 10 features. And that'll give you some insight as to whether or not your data is bloated
and more importantly, whether or not those features that you originally thought might be influencing your predictions truly are. And then number three, which is by far our most
popular tool, is a tool that we provide our customers which is called the model interpreter. Internally, we affectionately call it the what-if tool. We enable our customers, once the model is produced and once they're making predictions, to adjust any feature that's associated with that prediction so that they can see how it's actually influencing that particular prediction. So they have complete transparency
into their model as well as what's driving the predictive power of that model. And then lastly,
we provide a lifecycle management tool that proactively identifies or informs our customer when their model may be beginning to degrade and they may need to retrain their model.
So when you begin to talk about the complete lifecycle of TinyML or implementing machine learning on the edge, number one, we're enabling the non-data scientists, the data scientists, the business users.
We're automating the process completely. And then once you get into production, because that's where
we all want to be, Philip already mentioned it, right? Happy customer, right? We want to be in
production, but it doesn't stop there, right? Once you're in production, then you need to have
those lifecycle management tools so that you don't find yourself with erroneous predictions, finding out that, man, maybe your model isn't performing as you would like.
So we then also provide services for our customers once the model is built, so that
that lifecycle management component continues to stay in place.
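As a generic illustration of the idea behind a what-if tool, the sketch below perturbs a single feature and watches how the prediction moves. The predict function is a toy stand-in for any trained model; none of this is Neuton's actual implementation.

```cpp
// Generic "what-if" sensitivity check: sweep one feature of a single
// input and observe how the model's prediction responds. predict() is
// a toy stand-in for a trained model, not any vendor's implementation.
#include <cmath>
#include <cstdio>
#include <vector>

// Toy model: a logistic score over two features.
double predict(const std::vector<double>& f) {
  const double z = 0.8 * f[0] - 0.5 * f[1] + 0.1;
  return 1.0 / (1.0 + std::exp(-z));
}

// Hold every feature fixed except `feature`, sweep it across [lo, hi],
// and report the change in prediction relative to the baseline.
void what_if(std::vector<double> x, size_t feature, double lo, double hi) {
  const double baseline = predict(x);
  std::printf("baseline prediction: %.3f\n", baseline);
  for (int step = 0; step <= 10; ++step) {
    x[feature] = lo + (hi - lo) * step / 10.0;
    const double p = predict(x);
    std::printf("feature[%zu] = %6.2f -> %.3f (delta %+.3f)\n",
                feature, x[feature], p, p - baseline);
  }
}

int main() {
  what_if({1.0, 2.0}, 0, -2.0, 2.0);  // probe the first feature
  return 0;
}
```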
Okay, thanks.
Thanks for the elaboration. And well, actually, I'm going to ask you to zoom out a little bit now and to comment basically on today's occasion, the partnership that you have entered with Arm. And I was wondering, where does that fit in your overall strategy, basically? And I also honestly don't know if you have other partnerships in place as well.
Yeah.
So I think one of the things I've mentioned, we've been around for some time, a little over 17 years.
I kind of like to say we're a little bit of a hidden gem, you might say, in the market.
And we also, to a large degree, especially when it comes to the TinyML space, I like to say we kind of had our coming-out party this past June, where we had the opportunity to participate and began to introduce the market, the world, actually, to Neuton and some of the capabilities that our platform has. Now, when I began to think
about our partnership with Arm, just quite frankly, I see it as extremely symbiotic.
And I think the other thing is that when you begin to look at partnerships, number one, you know, having partners is, for me, vital, and it's symbiotic in a very, very real sense, in that we, of course, have the opportunity to interface with some of our customers
to interface with some of our customers
who are looking to implement Arm products.
And then subsequently,
as our relationship continues to mature,
I'm sure that Arm will have the opportunity as well
to introduce us to some of their potential customers who may be looking to, as Philip mentioned, implement machine learning on the edge.
And if our solution can help provide them value, then, of course, we'll be certainly looking forward to do that.
Now, for me, I do find having partnerships with an organization like ARM extremely essential, but I think we're
going to more than likely continue to be very selective as to who we partner with, at least
initially coming out. I think we want to really focus on the quality over quantity at this point
when it comes to partnerships. We haven't quite yet reached the state of the number of partners that ARM has
today, but I think we're going to continue to be very, very strategic in regards to who we're
engaging, when we're engaging them, etc. And we kind of see our partnership with ARM as extremely
strategic. And we're certainly looking to really invest more time, more resources and expand it on the relationship that we have today.
Great, thanks.
And so, Philip, let's have a last round with everyone
and in that I would like to ask everyone to just lay out
a little bit of their future plans, basically.
So, in your case, Philip, I guess it's all about partnerships.
And well, to narrow it down a little bit,
I was wondering if you have a specific strategy
for growing partnerships in the tiny ML space,
let's say in the AI on the edge space?
Yeah. Yeah.
So we continue to look for partners to help fill the solution gaps that will make TinyML, and frankly machine learning in general, from the cloud to small devices, ubiquitous.
So we're constantly working with our partners.
We're constantly working with internal stakeholders, like the lines of business within Arm, who look at the various end segments and look at what's required in those various market segments and taking the lens of machine learning,
we work with them to understand, well, where are the gaps? You know, what can we do, where can we find companies to, again, make machine learning easy? So that's where we'll continue to focus. We'll continue to work to find partners. This isn't really a numbers game. Even though, I mean, Arm is a big company, so we have a lot of partners, it's not about a numbers game. It's really about making those partnerships effective, and that means, you know, leaning in. So we spend a considerable amount of time with each partner trying to understand where they're trying to go, so we can find the common ground to drive mutual success. So we will continue to do that. You will continue to see
more messaging with our partners, more projects with our partners, more media for giving our
partners a voice. We recently launched the AI Partner Catalog, which is a way to showcase
technologies from our partners just to make it easier for customers and even other partners to
find partnerships based on ARM technology. We'll continue to do that. We'll continue to work on
things like the tech talks. We're doing podcasts, we're doing events, you know, there's
just, we will continue to, like I said, just kind of curate that ecosystem. So if we can get to the
point where, you know, anyone can do machine learning, then, you know, it's like mission
accomplished. And if we can take away the fear of what it would take to kind of build a product using machine learning, that's mission accomplished plus plus.
Okay, great. And Henrik, let's say, compared at least to the other two parties, you probably have more on your table, both in terms of product development and going to market, I guess.
Yeah, we certainly have no shortage of things that we want to do at the moment.
I mean, now that we've made our announcement,
it gets a lot easier for us to have meaningful in-depth
conversations with people that are interested in the technology
that we are integrating.
I want to kind of just echo a little bit about what
Blair said earlier about kind of the challenges around edge AI.
And this is also building on what Philip has said about how important it is with partnerships,
because if you kind of look at it from our perspective, where we're coming from and the
way that we are approaching the market, we definitely see a big interest from the customer base that we intend to serve for,
you know, edge-enabled AI solutions.
They're very interested in the idea of being able to leverage that type of technology in
small, constrained devices.
And I think that's really the key is that when you talk about an edge AI device, the distinction there is really about constraints.
Because where AI comes from, kind of the pedigree of it, is that it's something that is run in vast data centers with essentially never-ending resources of CPU time and memory and things like that. When you want to scale that down and efficiently run it in something that is configured as a microcontroller,
sometimes with less than even a megabyte of memory,
that in itself becomes a challenge.
But then add to that also the challenge
that AI is in some ways very different
from the traditional development that embedded systems designers go through. It's still more in the domain of data scientists. The challenge is to make the data scientists' and the embedded developers' expertise merge, and to make the models and the technology fit a constrained system.
That is really the challenge that if we can overcome it, it's going to open
the floodgates for adoption of this technology in this space.
Because we definitely see a lot of existing use cases that will benefit greatly from being able to take an AI approach to solving those problems.
And there are some problems that we're not able to address today unless we can get machine learning into the picture and leverage that on the edge devices.
And the final comment I want to make on that is, and this is something that I'm very passionate about myself.
If we can't have efficient edge AI processing on the embedded devices, we're never going to be able to seamlessly integrate
this type of technology in our everyday life because it usually takes way too long
to send the commands somewhere else and have them processed and then return the result.
It makes the technology noticeable. So if you really want to have this as something that you
never think about, that it's a machine that you're talking to or interacting with, and that everything happens in the background, more of the workload needs to be able to be done directly on the device.
So thank you. And last but not least, Blair, let's wrap up with you. And I was wondering, yeah, I mean, after, well, after coming out in a way,
because you talked about how you have intensified, well, your outreach efforts in the last year or so.
I was wondering what your plans are going forward in terms both of outreach,
which you continue to do, obviously, but perhaps
more importantly, product development?
Yeah, thanks, George.
It's interesting.
And I'm sure that the individuals that have an opportunity to listen to this podcast probably make up a broad demographic.
But, you know, sometimes when I have an opportunity
to talk to some of my colleagues
and when they hear me say that our objective is to bring machine learning to everyone.
And sometimes I get the feedback,
well, Blair, you can't be everything to everyone.
And sometimes I use some harsh language and I say, that's unacceptable.
And I remind them, it wasn't too long ago, when virtual servers first started to become prominent, that no one thought or really realized that it would become commonplace.
It would become democratized. Even from a cloud computing perspective, not too long ago,
everyone was saying that, no, you cannot use the public cloud for production purposes.
Now, what do we see today? We see that, you know, my son who's 12, 13 years old,
he can go and he can build a virtual machine in the cloud with no technical skills. We didn't
realize it was possible. So for us, we don't accept that machine learning cannot be available to everyone. And so our next step is to continue to push the democratization of machine learning. Our next step is to continue to push making machine learning and TinyML truly available to everyone. Because I truly believe, and even if you heard what Philip mentioned
and obviously the success with ARM,
that we're right there on the cusp.
Maybe we're right at the cusp
and you can't see over the hill just yet,
but we believe that we're right on the cusp
of truly making machine learning available to everyone
and realizing some of the goals
and breaking down some of the barriers
that Henrik has mentioned.
And I think a testament to this is our partnership with Arm. I think, you know, Philip was absolutely true to his word about partnering with Arm, whether it's the tech talks or the ability to participate in today's panel discussion. I believe that we're certainly well on our way. So that's going to continue to be our focus: to really continue to democratize machine learning, make the service as it relates
to tiny ML truly available to everyone, and bring this intelligence to the fingertips of our customers.
So that's what we're going to dig into, and we're going to continue to push that agenda, so that when we come back, you know, in our next panel discussion, maybe a year from now, maybe sooner, we'll be able to realize some of the results that we've all been discussing today.
So that's what we're going to continue to do.
I hope you enjoyed the podcast. If you like my work, you can follow Linked Data Orchestration on Twitter, LinkedIn, and Facebook.