Embedded - 404: Uppercase A, Lowercase R M
Episode Date: March 4, 2022

Reinhard Keil joined us to talk about creating the Keil compiler, the 8051 processor, Arm's CMSIS, and the new cloud-based Keil Studio IDE.

MDK-Community is a new free-for-non-commercial-use, not code-size restricted version of the Keil compiler (plus everything else). CMSIS is a set of open source components for use with Arm processors. The signal processing and neural net components are optimized for speed. The SVD and DAP components are used by tool vendors, so there may be components you care about more than others. Keil Studio is Arm's new cloud-based IDE with a debugger that connects to boards on your desk: keil.arm.com. Reinhard talks more about the advantages of cloud-based development in this white paper. Arm Virtual Hardware has multiple integrations; the official product page is www.arm.com/virtual-hardware. The MDK integration and nifty examples are described in the press release. Reinhard mentioned the Ethos-U65 processor for neural networks. The Dragon Book about compilers.
Transcript
Welcome to Embedded.
I'm Elecia White, alongside Christopher White.
Our guest is Reinhard Keil.
If that name sounds familiar, have you ever heard of the Keil compiler?
Hi, Reinhard. Welcome to the show.
Hello, Christopher. Hello, Elecia.
Thanks for giving me the time to talk to you.
Could you tell us about yourself as if we had met after you spoke at an ARM conference?
Yeah, as you said already, my name is Reinhard Keil.
Today I'm working for ARM on the technology for embedded.
That includes, of course, also IoT and machine learning.
But I started my professional career a long time ago, actually at Siemens in the semiconductor
software. Today, that would be called Embedded Tools Division, but at the time, embedded was
not invented. And as most of you know, together with my brother, I created a startup company where we were focusing on tools for embedded.
We are well known for the C51 compiler.
You mentioned it already.
But that's by far not the only product we made.
When we sold the company to ARM, we had distributors in 40 countries and a large user base.
During my time in Arm, I headed the Keil MDK team and initiated CMSIS,
the software standard for microcontrollers.
Nowadays, I'm part of a team that defines the next generation software and tooling,
and we believe that cloud-native development will become important.
We have so much to talk about with all of that.
Do you mind if we do lightning round first?
No, go ahead, please.
How do you spell ARM? Which letters are capitalized?
Oh, today we actually have all lowercase in the logo, and in writing we
uppercase the A
and then lowercase R and M.
I've been doing it wrong for a long time
before
it was actually all capitalized
so four years ago
we changed the logo
and
a long time ago it was called
Acorn RISC Machines or Advanced RISC Machines.
But going to the stock market, advanced risk is not a good thing.
That's the reason why ARM is basically just ARM.
As someone whose name is misspelled often, I have some sympathy with your name.
What is the worst misspelling you have ever seen?
Oh, well, of course, when you talk to English people, then they don't get Keil right.
They say all kinds of fancy things.
Kiel is the most prominent one.
I'm used to it, so I can cope with it.
When it comes to Asia, of course, they have all the problems with my name,
but it is what it is.
What's your favorite C keyword?
C keyword?
That's a really hard one.
I don't have a favorite C keyword in reality.
So include, of course, is the keyword where you can include a lot of predefined stuff.
So that's maybe the most powerful one if you see it this way, but it's a pre-processor keyword.
And do you have a favorite processor of all time?
I would say my favorite one is a Cortex-M4-based microcontroller,
maybe an ST one; an STM32F4 is really a cool microcontroller
from my perspective.
I can see that. That was a good one.
I agree with that.
You started your compiler company in 1980.
What was it?
Yeah, actually, we started in 82.
But at that time, it was more a hobby than a company.
So I was a student at that time,
and I was a nerd, electronics nerd.
Perhaps I am still a nerd these days.
Who knows?
And at the time, we focused on electronics,
and as I said, the company was somewhat a hobby.
Actually, we didn't start with first compilers.
Our first product was a telephone switchboard.
And complete, solid state, no microcontroller included.
When did you first start selling a compiler?
A compiler, we brought it to market in 1988.
So this was six years later.
Did you just sit down one day and say,
enough with the telephones, I want to write a compiler?
It was a journey.
So we knew that we needed a microcontroller in a feature phone system.
And the problem was an Intel development system at the time was the price of a sports car.
So I got a job at Siemens, in their tools division.
That was luck.
I had, of course, some friends that actually helped me to get there.
And it was a part-time job.
I learned a lot there.
And first, I had the idea: we create an operating system that could run the Intel development
software on CP/M computers.
The best computer at the time was an 8080-based system, quite low power compared with today's
standards. And the operating system was then our first commercial software
product that allowed us to work with professional
tools.
At that time, writing a compiler for that
sort of computer meant a high-end compiler
for a high-end processor.
But now your compilers are more often used for embedded systems,
which are the smaller computers, the resource-constrained computers.
When did you make the decision, or how did you make the decision,
to stop focusing on the high-end and go towards the low-end?
Actually, high-end was x86 at the time.
The low-end was 8051.
And to some extent, Intel claimed that the 8051 would be replaced with a 16-bit microcontroller.
Somehow, I did not buy into that
because the 8051 was a cool chip.
It was pretty cool for the telephone switchboards
that we built.
And so we realized that actually many of our customers
were using our operating system
to develop 8051-based applications.
And before we went into building a compiler,
we developed an assembler and a debugger with integrated simulator.
And we started partnerships with emulator companies
so that the combination of technology and strategic alliances
let us expand the business. And we also tried to build an assembler for the Intel 8096,
for the 16-bit controller.
But soon we realized we need to focus on one target
because we were small at the time.
And this was the reason why we focused on the 8051.
And initially, I didn't have the plan to create a compiler.
I wanted a partnership with a company that had already one.
And I called up a company in Germany that had a commercial C compiler
and suggested that we improve the product together.
But the guy didn't see the value of this kind of partnership
and did not believe in the improvements that I was proposing.
And then in 1986, we decided to write our own compiler.
And to be fair, most of the idea came from a friend, Peter,
who then was working with me until he retired.
And his idea was to create a compiler from scratch. I met him during my time at Siemens.
And I was focusing on the go-to-market plans, as I would call it today. This was basically:
what kind of partnerships do we need once we have the product? And Peter was focusing on getting the compiler done. But the project was very complex
and intense. It was, I would say, five to ten times more complex than what we did before.
And so I helped him then in the last year. So in total, it took us two years to get to this
compiler, and I was then focusing on the code generation part of the compiler.
And we brought it to market at the Electronica exhibition in 1988.
This was basically the time to market window that we had.
And this exhibition was every two years.
So we were under time pressure to hit that time slot.
And we partnered with four emulator vendors to showcase our compiler.
And it was an instant success.
We grew from there.
When you wrote the compiler, that was early days.
So did you have to build your own parser?
Let me back up.
I was involved in a compiler project about 10 or 15 years ago,
and the parser we built with lex and yacc and things that were freely available.
There probably wasn't such a wide range of available tools back then, right?
Yeah, we used yacc at the time. So actually, Peter was very innovative also at that time.
But the whole thing is written from scratch.
There is no base software that we have used to create it.
There was a book that taught compilers,
but the material was pretty bad.
We also used the Dragon Book.
It was a famous book.
But the Dragon Book had all these algorithms
in a way where
resources were not a problem.
But at the time,
resources in a DOS
computer was a problem.
The complete computer had
640 kilobytes,
not megabytes, kilobytes.
And
in this 640
kilobytes, there was the operating system there was the compiler and the program to
compile so you can imagine that we had the first version was actually not very optimized but later
on to actually grow the the compiler performance we had to write overlays and actually work with overlays to manage the memory
constraints.
I remember having to switch floppy disks when the linker was needed versus the
assembler and the compiler.
Yeah,
exactly.
It was,
it was basically for people that start today,
it is hard to believe that a computer can run with 4 MHz clock speed.
But 6 MHz was the 80286 of the time.
The PC AT was the computer of the time,
the fastest system that you could get.
And it was a 6 MHz, later an 8 MHz variant.
But you're compiling for the 8051, and I'm still kind of boggled by the idea of putting an RTOS or an operating system of any sort on an 8051.
Those are like... maybe I misunderstood: were you putting
operating systems on the 8051 or bigger
systems?
No, the 8051 runs bare metal code.
So basically the design pattern is an endless loop.
However, we also created an RTOS then, in the early 90s, because, to be fair,
the biggest support nightmare were the people that tried to create an operating system for the 8051. The 8051 is not a stack-based machine. So actually, operating
systems usually work very well
on stack-based machines where you have stack addressing. The 8051 doesn't have
that. And this makes it so tough to create a compiler for it. And actually,
to work around that,
we had to invent
what is called a compile-time stack:
a stack
layout that is actually
arranged
at link time. And I think
this was the innovation that made our
8051 compiler so
different from others.
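The compile-time stack deserves a quick illustration. Because the 8051 cannot address locals relative to a stack pointer, the linker analyzes the call tree and statically overlays the locals of functions that can never be active at the same time. Here is a toy sketch of that overlaying idea in plain C; the function names and the by-hand "linker analysis" are invented for illustration, and this is not how the Keil toolchain is implemented internally.

```c
#include <string.h>

/* Toy model of the "compile-time stack" idea: the linker (here: us,
 * by hand) has analyzed the call tree and proven that format_packet()
 * and checksum() are never active at the same time, so their "locals"
 * can be overlaid onto one static block instead of a runtime stack. */
static union {
    unsigned char bytes[32];   /* format_packet's "locals" */
    unsigned      sum;         /* checksum's "local"       */
} overlay;

static int format_packet(const char *payload, unsigned char *out) {
    unsigned char *buf = overlay.bytes;  /* locals live in the overlay */
    size_t n = strlen(payload);
    buf[0] = 0xA5;                       /* start-of-frame marker */
    memcpy(&buf[1], payload, n);
    memcpy(out, buf, n + 1);
    return (int)(n + 1);
}

static unsigned checksum(const unsigned char *data, int len) {
    overlay.sum = 0;                     /* reuses the very same bytes */
    for (int i = 0; i < len; i++)
        overlay.sum += data[i];          /* simple sum stands in for a CRC */
    return overlay.sum;
}
```

Because the shared storage is laid out statically, no stack-relative addressing is ever needed, which is exactly what the 8051 lacks.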
I didn't know that. That's amazing.
Huh.
8051 actually is a challenging
thing when it comes to C compilers.
I've used the
8051 extensively
and the Keil compiler for it.
I don't know that I
knew that. He probably did his job
very well.
Yeah, because we had
all this complexity,
but it was kind of
irrelevant for somebody that
writes vanilla C code.
But inside,
and actually also
when it came to operating systems,
you need to know the call tree
at compile time.
And this was the reason
why in 1990
I wrote an operating system.
It was more a demo operating
system than a product, but later on we
decided to make a product out of it
and the reason was the
support cases that we had.
Yeah, how do you switch threads without a stack?
It's just...
Okay, I'm not even going to try to follow that.
So you had the Keil compiler for the 8051, and you mentioned yacc, so you were aware of Yet Another Compiler Compiler, even though you ended up doing it from scratch.
No, no, we used yacc.
Oh, okay.
GCC was first released in 1987.
What did you think of those crazy folks giving away their work?
To be fair, GCC was at that time not on our radar.
We had six or seven commercial compiler companies that all had 8051 compilers on the market as our real competitors.
And we were basically fighting with our commercial compiler competitors at the time.
Of course, GCC is today a very good compiler.
We are using it a lot in ARM, and it's definitely good.
But for the 8051, because of the challenges I mentioned, it had no competitor for a long time.
I think SDCC, the small device C compiler, came to life in the early 90s or beginning of the 2000s. And at that time, we were so far ahead and established
that we didn't realize that there is a free compiler.
We were so far ahead, we didn't even see them.
No, really, yeah, okay, for x86, for sure, GCC was there, but we used Microsoft tooling to develop our tools.
And, you know, GCC was at that time not important.
And to be fair, GCC wasn't even doing that great for Cortex-M ARM until, in recent memory at least.
Until we brought it in-house and fixed
all the problems. ARM does
contribute a lot to GCC
and makes sure that it is a decent
compiler for the processor.
So we do a lot
of work on open source tooling these days.
I remember
the Arm
EABI versus non-EABI
debacle where you had to choose that sort of thing,
and explaining it to people made no sense
until finally we all agreed on the same EABI.
So you use GCC
and you charge for Keil.
Yeah.
And I understand because you need to make a living, I get that.
But it is a very different model of the world.
And both coexist.
Okay.
How do they coexist?
Because they do seem so different.
Yeah, well, today, as I mentioned, I initiated CMSIS when I was in Aachen.
It was actually in 2008.
And in CMSIS at the lowest level, in CMSIS-Core,
we have a lot of compiler macros that actually make it irrelevant
if you work with GCC or with the Arm compiler or with the IAR compiler.
To mention even our competitors, we call them partners actually.
So what we do is we have a software layer where the compiler at the end is not relevant.
Of course, it becomes relevant when code size matters or where performance matters
or where aspects like certification matter, or where you actually buy a service from Arm,
not a compiler, because we actually sell more than a compiler.
It's not just a compiler that we provide.
And therefore, GCC and the commercial Arm compiler coexist very well.
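The macro layer Reinhard describes can be sketched like this. CMSIS-Core's real header is cmsis_compiler.h, with names like __STATIC_INLINE and __PACKED; the MY_ names below are stand-ins so the sketch stays self-contained, and the toolchain branches are simplified.

```c
/* Sketch of the toolchain-abstraction pattern in CMSIS-Core's
 * cmsis_compiler.h (simplified; the macro names here are stand-ins).
 * The same source then builds with IAR, GCC, or Arm Compiler. */
#if defined(__ICCARM__)                    /* IAR */
  #define MY_STATIC_INLINE  static inline
  #define MY_PACKED         __packed
#elif defined(__GNUC__)                    /* GCC; Arm Compiler 6 defines this too */
  #define MY_STATIC_INLINE  static inline
  #define MY_PACKED         __attribute__((packed))
#else                                      /* fallback */
  #define MY_STATIC_INLINE  static
  #define MY_PACKED
#endif

/* Application code only ever uses the neutral names: */
typedef struct MY_PACKED {
    unsigned char  flags;   /* 1 byte                         */
    unsigned short value;   /* 2 bytes, unaligned when packed */
} StatusReg;

MY_STATIC_INLINE unsigned read_value(const StatusReg *r) {
    return r->value;
}
```

Code written against the neutral macros compiles unchanged under each toolchain, which is why the compiler choice becomes, as he says, irrelevant at this layer.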
You've mentioned CMSIS a couple of times, and I think it's really, really cool, but
I don't think we've described what it is.
Could you describe what CMSIS is briefly?
Yeah, CMSIS stands for
Common Microcontroller Software Interface Standard.
And it has today, I think, nine components,
if I remember correctly.
So it has a lot of aspects.
It starts with CMSIS-Core,
which is the base framework
for a processor, for a microcontroller.
But then we have also a DSP library.
We have an RTOS abstraction layer.
We have a neural network library.
We have then on the tooling side a thing that we call CMSIS-Pack.
I will come to that in a minute.
We have CMSIS-Driver, which provides API interfaces in a consistent way.
Then we have a delivery standard called SVD, the System View Description,
that gives the debugger awareness of the peripheral registers.
And we have CMSIS-DAP. And our latest components are CMSIS-Zone, which is here to partition multiprocessor systems or the TrustZone secure and non-secure areas,
and the CMSIS build system, which actually uses the CMSIS-Packs and makes them CMake compliant. So there are a lot of components today.
There is actually a blog which explains this very well.
The blog has the title,
Which CMSIS Components Should I Care About?
It's written by one of our support engineers.
I can recommend this blog because it gives you basically an insight
of what is relevant to a developer.
And not everything is relevant to a developer.
Several of the components are there to help the silicon industry.
Yes, and some of those are really interesting.
You mentioned SVD and DAP.
The SVD is the part where other compilers, other systems, other debuggers
use that, because it describes all of the boards or all of the processors. So if you have
a Cortex-M0 and a Cortex-M4,
the description of what SPI is available on the ST versus the NXP part,
all of those are in the SVD files.
Yeah, this describes basically the user-facing peripheral registers of the device.
This is what the SVD file provides.
And we have today more than 9,000 different microcontrollers described with SVD files and with device family packs,
which then basically collect the SVD files, add to it some header files,
drivers, debug configurations, and the likes.
So the ecosystem for CMSIS is immense.
We have 60 different vendors that produce these deliverables,
the CMSIS-Packs, and they are consumable by many IDEs directly,
or are an integral part of the IDEs. Or, for some companies: Lauterbach, for example,
because they are focusing on the debugger, they just use the SVD file that
is part of the distribution. So you are right, SVD is one of the important components.
Then there is the debug access firmware.
Debug Access Port is what that stands for.
And this gives basically a consistent way to talk to the CoreSight registers.
And it's a lightweight firmware.
It can be adapted in many different flavors,
and is configurable.
It also supports SWO trace, for example.
And this is basically what we recommend to put on eval kits.
But we use it also ourselves in the ULINK series of debug adapters.
They also use the DAP component.
And the debug access port, the DAP
software, it makes it
so that any cortex
can debug any other cortex.
And that's why...
That's too simplistic.
Okay.
The CMSIS-DAP is just a firmware
that basically translates
USB to
DAP commands.
And the commands that go via USB are very simple.
They are primitives.
Read memory, read registers, start executing.
Even setting a breakpoint is a write-register operation. So the debugger that then runs on the host computer,
Windows typically, or Mac or Linux,
translates the debug front-end commands,
where you have symbols, into these primitives
and sends them via USB to the CMSIS-DAP firmware.
And the CMSIS-DAP firmware can indeed run on a Cortex-M.
And in this way, yes, we use a Cortex-M to debug a Cortex-M,
but it is a lot more complicated than it appears.
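The primitive commands Reinhard lists can be pictured with a toy dispatcher. To be clear, the opcodes and packet layout below are invented; the real CMSIS-DAP protocol defines its own command encoding (DAP_Transfer and friends), and this sketch only illustrates the "few simple primitives" idea.

```c
#include <stdint.h>
#include <string.h>

/* Toy illustration of the primitive-command idea behind CMSIS-DAP.
 * The opcodes and packet layout here are invented; the real protocol
 * has its own command encoding. */
enum { CMD_READ_MEM = 1, CMD_WRITE_REG = 2, CMD_READ_REG = 3 };

static uint8_t  target_mem[256];   /* stand-in for target memory  */
static uint32_t target_regs[16];   /* stand-in for core registers */

/* Dispatch one host packet: [opcode][addr or index][value bytes...] */
static uint32_t dap_dispatch(const uint8_t *pkt) {
    switch (pkt[0]) {
    case CMD_READ_MEM:               /* pkt[1] = byte address */
        return target_mem[pkt[1]];
    case CMD_WRITE_REG: {            /* pkt[1] = register index */
        uint32_t v;
        memcpy(&v, &pkt[2], 4);      /* little-endian host assumed */
        target_regs[pkt[1]] = v;     /* setting a breakpoint is just such a write */
        return v;
    }
    case CMD_READ_REG:
        return target_regs[pkt[1]];
    }
    return 0;
}
```

The host-side debugger keeps all the symbol and source-level knowledge; only these small primitives cross the USB link, which is why the firmware fits on a modest Cortex-M.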
Good.
Those are two sections that most developers never look at
because they are about tooling.
Yeah.
And some of the ones that I've used include the DSP,
which has Fourier and fake floating point numbers, Q numbers, I guess is what they're really called.
Yeah, fake.
You know what I mean.
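The "fake floating point" Q numbers work like this: a Q15 value stores a fraction in [-1.0, 1.0) as a 16-bit integer scaled by 2^15, which is the format behind CMSIS-DSP's q15_t type. A minimal sketch of the conversion and multiply, without the saturation a production library would add:

```c
#include <stdint.h>

/* Q15 fixed point: a 16-bit integer n represents n / 32768, covering
 * [-1.0, 1.0). This is the format behind CMSIS-DSP's q15_t.
 * Minimal sketch, no saturation handling. */
static int16_t q15_from_float(float f) { return (int16_t)(f * 32768.0f); }
static float   q15_to_float(int16_t q) { return (float)q / 32768.0f; }

/* Multiply: the 16x16 -> 32-bit product has 30 fractional bits, so
 * shift back by 15 (with rounding) to return to Q15. */
static int16_t q15_mul(int16_t a, int16_t b) {
    int32_t p = (int32_t)a * (int32_t)b;
    return (int16_t)((p + (1 << 14)) >> 15);
}
```

The appeal on a small core is that all of this is integer arithmetic, so it runs fast even without a floating-point unit.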
And I heard that Edge Impulse, the tiny machine learning folks, also use your NN, your neural network pack.
How do you decide what pack you're going to do next?
First of all, it's an evolution.
So we consistently improve our components.
Today we release, I would say, every nine months a new CMSIS pack
where we improve what CMSIS does.
And when it comes to DSP, to pick on what you just said,
we have new processor technology coming along.
The latest processor is the Cortex-M55,
and it has actually a vector instruction set.
We call it Helion.
It's optimized for microcontrollers,
and this vector instruction set makes parallel execution.
It operates on vectors.
And a lot of the DSP functions benefit from it, but also neural networks.
So for neural networks, our CMSIS-NN library, the primitives for machine learning,
maps to this instruction set.
And of course, we make it transparent because the same operations you can also perform on Cortex-M0.
Then, of course, without the vector instruction set, you have then only the standard thumb
instruction set.
The M4 has SIMD instructions, which already improve
DSP performance quite a bit. But with Cortex-M55,
we really focus on DSP and machine learning performance.
And this can then also be extended further
with the ETHOS-U processors, where we have
done really high-performance machine learning.
It's a neural network processor.
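For a feel of what these vector units accelerate, consider a Q15 dot product, one of the basic DSP and neural network primitives. CMSIS-DSP ships an optimized arm_dot_prod_q15 that can use SIMD on Cortex-M4 and Helium on Cortex-M55; the plain scalar reference below just shows the arithmetic involved.

```c
#include <stdint.h>

/* Scalar reference for a Q15 dot product, the kind of kernel that
 * Cortex-M4 SIMD and Cortex-M55 Helium accelerate. CMSIS-DSP's
 * optimized version of this is arm_dot_prod_q15; this sketch shows
 * only the arithmetic. */
static int64_t dot_prod_q15(const int16_t *a, const int16_t *b, int n) {
    int64_t acc = 0;                  /* wide accumulator avoids overflow */
    for (int i = 0; i < n; i++)
        acc += (int32_t)a[i] * b[i];  /* each Q15 x Q15 product is Q30 */
    return acc;                       /* accumulated result in Q30 */
}
```

A vector unit processes several of these multiply-accumulates per instruction, which is why the same loop runs so much faster on an M55 than on an M0.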
What was the processor name?
ETHOS-U, ETHOS-U65 and ETHOS-U55.
They can be combined into a microcontroller,
and actually the Alif device is the first deployment of such a system.
So when you take a look at Alif, they have a Cortex-M55, a Cortex-A32,
if I remember correctly, and an Ethos-U65,
basically designed for high-end machine learning on edge devices,
on end nodes.
Sophisticated technology these days.
Yeah, it's certainly very popular.
So you also have the drivers, which I don't use as much,
although I always want to.
You work so hard at making the DSP and NN packs so optimized,
but it's the drivers,
since you're supporting 9,000 different processors,
the drivers are not as easy to get optimized.
How do you balance between flexibility and optimization?
Yeah, in fact, we partner with our ecosystem.
And when you take a look at the support of these 9,000 devices, actually the support
is developed by the silicon
vendor. We teach them
how to do this support. And
with drivers, we didn't make
as many inroads as we
hoped. And therefore, we are
bringing CMSIS-Pack at
the moment into open governance.
Actually, we did this last summer.
And today we are working with ST and NXP to bring CMSIS-Pack to the next level.
And something similar we may do with drivers going forward so that we actually get drivers
that are consistent across the industry.
And CMSIS-Pack is our first project that we do in this fashion, so in
this collaborative way. We envision that actually the tooling from ST, NXP, and Arm will
have the same base functionality, and we will base our tooling on VS Code concepts. So in Keil Studio we used Theia, but Theia is derived from VS Code.
This is Eclipse:
Theia is under the Eclipse Foundation.
We envision
actually that VS Code
is one of the platforms of the
future. That's good to hear.
I've been using VS Code recently
and finding it quite
nice in terms of extensibility
and stuff.
Yeah.
And in terms of modern development environments,
I've also been using STM32CubeIDE a lot.
Cube, buddy.
Cube, cube.
Cube tooling, yeah.
And I have to say VS Code is better.
And that's an understatement.
Yeah, that's what we frequently hear. Eclipse-based
is not
what is modern, state-of-the-art.
And that was also the reason why we
started with Keil Studio.
We had Arm DS
as an Eclipse-based IDE,
but it wasn't
so popular in the microcontroller industry.
And Cube is, to my knowledge, Eclipse-based.
It is.
But µVision is pay-for.
Yes, there are also free-to-use variants.
It's not all paid for.
So we did get a question from a listener
about when and why to pay for compilers
versus donating to open source ones.
Do you have advice for when folks should pay for compilers versus not?
Yeah.
As I said before, you actually don't pay for a compiler.
The compiler is part of a complete offering where you get a debugger,
where you get device support, where you get pretty much everything out of the box.
And actually, my partner in the US
called me up over 20 years ago.
And it was for him late in the night
and he called me and said,
hey, you know, we are selling a feeling.
I said, John, no, no, we don't sell a feeling.
How many margaritas did you have tonight?
We are selling compilers.
And actually, he is right.
We are selling a feeling.
We sell the feeling or the service to get your job done with our tools.
And this means when you run into a problem, you can call up our support and we
will help you.
And our partners are also here to help.
So if there is a problem somewhere, then actually we help.
Of course, today, partners also use open source compilers in their offering. You mentioned
the ST Cube, and there are many other free-to-use tools. But at the end of the day, it's a slightly
different business model. It is part of
the device offering, so the price tag is basically baked into the complete offering of ST, NXP, and so
on. So it is not so that the compiler is free. And when it comes to donating, you can of course donate
to open source communities. Please do so, to actually help these communities.
As I said, in Arm we are leveraging open source a lot and we contribute.
I think we are one of the most active contributors in many open source projects, LLVM to name one.
We contribute also to Theia.
We actually help the open source industry wherever it makes sense to us.
Including open source CMSIS.
Yeah, of course.
CMSIS is also open source, but the reason why it is open source is not that it is free
software.
The reason why it's open source is we remove all barriers of adoption.
And when you want to combine it with another open source project, you can do so.
When you want to use it commercially, you can do so.
We don't put any roadblocks in using this foundation software.
And so we come to the free as in beer and free as in liberty; the general open
source definitions of freedom in English don't always make sense.
And there's free as in puppies.
Yes, many open source projects are free as in puppies.
But it is free as in you can use it.
It's free as in liberty.
You can use it and change it and do whatever you want with it.
But as you mentioned, it's not really free as in beer.
You are paying for it via purchasing ARM processors.
For example, yeah.
Exactly.
It is. So we have
many engineers that contribute to
open source projects, I think in total about
maybe more than a thousand.
And of course, at the end
of each month, they get the paycheck.
I think two of the things you mentioned are
probably the key things for me when
I'm considering paying for something. Paying for
a tool set is, like you said,
device support is huge, because that can often be really tricky,
especially if you're not using one of the completely mainline,
you know, high-volume parts.
So it's nice to get up and running quickly if you've got a weird chip.
And secondly, to put it less diplomatically,
it's nice to have somebody to yell at.
I think that's what he said, yes.
Yeah, it's true.
And you should also keep in mind
many of our customers are in industrial space,
to pick an example.
And industrial is long living.
We have users that use
a 10-year-old compiler version
because they started the project with it
and they only make tiny modifications
so the risk to introduce a newer version is too high.
And we still maintain and help them
to use these older versions.
And this is basically the service that we provide,
part of the service that we provide.
That software configuration management staticness
that is found in embedded systems more than I think any other
is important.
Being able to rebuild code that is time critical
or FDA approved.
Or very old.
Or just very old because you don't make a lot of changes.
Many
modern compilers
or modern systems
or heck, let's just call Python
a thing,
don't have that. It's harder
to
lock everything down. Lock the versions
down.
Not when you are using our tools.
I mean, we have in MDK, you can say which tools I'm using,
and it gives you a list of the tools and the software components
that you are using in your application.
And you can basically file these tools and use it five years later.
So I don't see the problem.
It's basically product lifecycle management.
This is the term that is used in the industry,
and it's part of the product lifecycle management flow.
But that is one reason to pay for compilers,
is so that you get that,
because it is harder to maintain all of that yourself.
Yeah, correct.
This is, as I said, we offer not just compilers.
We offer the service.
And as Chris pointed out,
there's the service of getting started more quickly
without having to go out and search things,
especially when evaluating new platforms.
I used Mbed
for a while for that,
but things have changed there.
What's going on with Mbed?
Yeah, Mbed
was actually, or Mbed is a term
that stands for many things.
And Mbed stands also for the
cloud compiler that we
had 10 years ago.
We released more than 10 years ago the first cloud-based compiler.
It stands also for the operating system, and the operating system
is evolving. We are actually reworking quite a bit
here to make it
more fit for purpose in the cloud services
that are relevant today,
such as Amazon, Alibaba, Google, to name a few.
So there are cloud service providers,
and we need to connect to those cloud service providers these days.
And when it comes to tooling,
the new name for what was previously called Mbed Studio is Keil Studio. And as the name implies,
we want it out of the box to be not only Mbed-OS-centric,
we want to make it generic for the whole range of
diverse operating systems for all kinds
of debug tasks, modernize it so that it actually
can support CI flows and machine learning
and the likes.
So we have actually put the first implementation of Keil Studio into the cloud.
It's a cloud service and it's today free to use and the cloud service will always have
a free tier to use.
So there is always a way to get started with Arm-based microcontrollers
without any price tag.
And what it does, it gives you access today to 400 different evaluation boards.
And actually, in the meantime, we have a debugger in the browser. So you can connect an eval board from ST, for example, to the browser.
And you can debug.
And you will not realize that it's actually a browser-based debugger.
It performs very well.
And what it does is you have no installation whatsoever to get started.
Everything is there.
You just start with an example project and can compile, modify the codes and evaluate the performance of the system and so on.
It's amazing.
It's really cool technology.
And nowadays we all talk about home office.
Actually, I'm working still in my
home office in Germany these days. We are obliged to work at home when possible. And what we
envisioned in the future, we will have a lot of hybrid workers. So people that work a few days in
the office and other days at home. And with a cloud-based solution, you can actually work
everywhere you want. You just need a browser to get started and you have your development environment right
with you. And this is the beauty of that aspect, but there are many other aspects of cloud native.
Okay, tell me a few of those. I can only think of disadvantages.
So the ability to get to
it from multiple places and
different kinds of computers,
I can see that as an advantage.
Well, and there's the no installation.
That's often it.
Yes, the no installation. And the free
is always beautiful.
What else do you have for the
pluses column?
Okay, let me give you a few.
You are aware of Git, I think.
Git repositories can be hosted in the cloud these days,
and that integrates with Keil Studio.
You can actually host your repositories in the cloud.
Then we have virtual machines in the cloud these days.
We call this Arm Virtual Hardware.
This is actually a server that you can connect to get CI up and running.
And together with GitHub runners, for example, with GitHub Actions,
you can run a CI validation test whenever you commit something to your repository and it tests whether there is a side effect.
Of course, it requires quite a bit of investment.
We try to lower this investment, the upfront investment, to set up a CI system.
But once you have it, it improves your productivity dramatically.
Actually, it's used quite heavily where safety critical comes into play,
automotive industry, for example,
but we envision that it helps also the standard embedded developers going forward.
The next aspect is when you have a cloud-based system, an IoT device. When you program an IoT device, then you are anyway
connected to the cloud service provider to see what your device is doing. And when you want to
deploy a firmware update to your device, then actually you can do this via over-the-air
programming. And this gives you other benefits also.
So you can actually bring a product earlier to market
when the complete functionality is not implemented.
And over time, you can actually extend the functionality
of your distributed device in the field.
So are you doing distributed management?
Not we ourselves, but
for example, AWS has such a service
and we work with AWS to use
the OTA services that they offer
and to integrate
them better. Do you do
any of the management or log
collection?
No. As I said, ARM is about
partnerships. We leverage the ecosystem. Actually,
you can, to a certain extent, say the ecosystem is our product. And therefore, we work with
partners to actually offer this type of services. And we try to integrate them into our tooling,
for example, so that they get easier to use and are actually manageable and give you the benefits that I mentioned.
One of the things I liked about Mbed was how many peripherals were supported
and how much code there was that was easy to use.
But it felt like there was a lot of weight to it.
At some point, it wasn't easy to use because there was too much.
That seems like it can be a problem with any cloud thing that also supports many, many things.
Many, many boards, many, many processors, many, many libraries and all the things. How do you
avoid the weight?
First, you are right. Mbed had a few obstacles,
let's call it this way. And what we are doing
differently in the Keil tool offering is we leverage
actually the work that silicon providers do
to optimize their SDKs.
When you use an NXP or an ST
or an Infineon device, all of these silicon vendors
today provide an SDK. And this SDK has highly
optimized drivers in it.
They are actually optimizable in many different aspects.
You can configure the DMA interrupt behavior.
You can configure buffer sizes, and, and, and.
And these are also written in a way where the code overhead is quite tiny.
Of course, you can go bare metal and directly call the register interface if you wish to,
but frequently you will not beat the implementation from a silicon partner.
Have you seen the STM HAL?
Yes.
Because I've spent a lot of time with it, and I'm not sure optimal is the word.
No, it is. They have two different flavors of the HAL. There is a so-called LL, low-level driver,
which is really quite optimal. And now, of course, you need to compare it fairly.
It is configurable and it has a lot of features. And when you compare it to Mbed, you will see it
is more optimal. And I mean more optimal also in terms of flexibility. It has DMA control in it, and so on. And therefore, the overhead is actually not dramatic
for the features that it provides.
I would put it this way.
I would certainly not use it.
Wait.
I would definitely use it.
It can be tricky to use.
And there are definitely corners that you can get lost in.
But I did find that the low level was more what I'm used to. It was. There was still a lot of checking, which is great if you're doing
things the first time. But if you're running millions of DMAs in an hour, I don't need you
to check things again and again. Yeah, but this is an assert macro. When you compile without a debug switch, then it is compiled away.
That wasn't always assert.
Yeah, that doesn't matter.
That's totally not a point for you anyway.
I should ask STM32 folks on.
I'm sure ST has room for improvements.
I'm not here to defend ST.
No, no.
Like I said, I don't recommend not using it.
Right, right. Definitely should use it if you're on
STM.
Optimize the code if you have to, but only if you
have to. Exactly.
Back on the
cloud stuff, because that interests me,
I'm of two minds about it.
I have my old man mind, which is
what are you doing putting things in somebody else's computer?
And then I have the future mind, which is, okay, this sounds interesting. And they're
fighting. So one of the questions that comes to mind is, is this something you can switch
back and forth? Can you have some people on the cloud service and some people on their desktops
doing the same project?
Yeah, you can. What we have in the first incarnation
is you can actually export projects into MDK
and use the desktop tooling.
MDK has about 250 man-years of investment in this IDE.
And it will take us a few more months,
let's call it this way,
to get Keil Studio on par with the MDK feature set.
Once we have that, we can deploy Keil Studio also on desktop, in Linux, Mac and Windows versions.
And this means that you can pick and choose whatever you want.
You can decide, view it like Microsoft Office 365.
In Office 365, I can decide to use an online version of Word
or an offline version of Word,
and actually switching back and forth is kind of seamless.
And the cloud has this benefit that I mentioned.
It supports hybrid working quite well.
It helps you avoid having to reinstall computers over and over.
And therefore, I think the flexibility that you get with it is where in the long term the benefit will be.
You still will have desktop computers in years to come with a setup that is for your project. But to get started with a new project, to evaluate a new system,
you wouldn't spend days to set up a development environment
on your local computer.
Instead, you would use a software-as-a-service system like Keil Studio
and start right away.
This is how we see the future when it comes to embedded programming.
And the other aspect, I hear a lot, yeah, we are concerned about data security.
But if you think about it, your computer is connected to a network.
And the security that the big cloud service providers offer you is far better than
on your local computer. So they have a whole team that actually checks
whether there is an attempt to attack their systems,
because if this would happen, they would be in trouble.
Yes.
Let me ask one more question that goes back to something
Alicia talked about with you a few minutes ago,
which is the locking down various versions of things.
Is that something that's easy to do,
will be easy to do with the cloud service?
Like, okay, I need this compiler from two years ago
and these libraries, and that's documentable and traceable?
Or how's that going to work?
In our virtual hardware system,
we work with so-called amazon machine images and the amazon
machine images are versioned and you can run an old version of a machine image for the latest
version of a machine image and the old version gives you the environment that you had two years
ago and you are not forced to use the latest and greatest.
In Keil Studio, we are not at this level yet.
In MDK, in classic MDK, you can actually say, I want to use the compiler from two years ago
in my project.
And it picks the compiler from two years ago.
We envision that we will offer something similar for professional users of Keil Studio.
I'm really glad to hear you emphasize the evaluating new platforms.
Because one of my uncertainties about using the cloud compiler was the download time.
I mean, I have to flash it. That always, it takes an
irritatingly long time, even if it's only two seconds.
Two seconds? Why are you flashing? It takes two seconds.
Small processors. But downloading, that just adds more time. And I didn't, but knowing
that I could then put it on my computer would be helpful in that regard.
Once I was finished with evaluating and wanted to get down to solid development, I really think that's important.
Yeah, and I think we have to offer this type of installations for a couple of more years until nobody thinks about cloud anymore.
Because to be fair, when I use Visual Studio Code on desktop or I use Keil Studio in the cloud,
I sometimes forget that I use a browser version of the editor.
The performance is almost identical, and therefore it will become a habit to use cloud.
I'm pretty sure in ARM we leverage cloud services quite a lot.
And I know that German Automotive also uses cloud services quite a bit.
I do too, and yet maybe it's having used Mbed that wasn't a pleasant part of the process.
Yeah, it was early days though, first off.
It's true.
Yeah.
But we do have really good network.
I imagine Reinhard also has really good network connectivity, which is not true of everyone in the world.
That's true.
Today, I wouldn't sign that anymore.
I think that in many countries,
you have decent network connectivity.
And therefore, I wouldn't worry too much about it.
And actually, these days,
pretty much everyone does video conferencing.
And for a cloud-based IDE, you don't need more bandwidth than for video conferencing.
Actually, I think you need less.
Yeah.
I mean, I still have international friends who definitely, we don't video conference, we voice conference because that's the level of.
Yeah, but keep in mind, the compilation is actually done on a cloud server.
And the cloud server that Amazon provides is four or five times faster than my notebook that I'm using.
It's a normal notebook, maybe not the fastest in the world,
but even if I would get a very fast notebook,
the cloud server would beat compilation time.
When it comes to editing, yeah, bandwidth plays a role,
but to be fair, what you download and upload is just source files, a few hundred kilobytes.
It's not much bandwidth
that you need. It does level the
playing field in some interesting ways.
Yeah, it definitely does
make our computers consoles again.
Why did I spend so much money
on it?
You mentioned virtual hardware
much earlier, I think at the very beginning.
And now that you've explained the cloud, I suspect the virtual hardware, is it simulated?
It's basically a simulated embedded device in the cloud.
Are you familiar with Wokwi?
No.
He simulates the processors for, like, the Raspberry Pi
Pico,
the RP2040,
4080?
2040.
And also ESP32
and ATmega.
But he actually simulates the processors and has peripherals and runs code,
and you can actually GDB your Arduino code in the web.
That's also in the cloud.
Fully simulated, yes, that's also in the cloud.
Yeah.
Are you going to do something like that? Or something more like Mbed, where it was a simulator,
but it didn't simulate the hardware so much as it kind of simulated the hardware?
Yeah, what we do is we simulate basically a processor system with some peripherals.
This is basically our offering.
You can pick and choose which ARM processor you want to simulate.
So you can actually simulate a Cortex-M4 or an M7 or an M55 with an Ethos-U.
And you can test drive your algorithms on this simulator
and make performance comparisons and the like.
So it is, to a certain extent,
there to help evaluate the different processors from Arm.
It helps you also in the software design cycle
when it comes to unit and integration tests,
because this type of testing you can do on this type of simulators.
And we offer them as a cloud service. When it comes to complex CI, you can start multiple
instances. And when you run unit tests at scale, then you have typically many hundred tests that you perform.
And alone, the flash download time on real hardware is much longer.
But what also cannot happen is the real hardware,
when you flash it 10,000 times, this is about the life cycle, it's dead.
A virtual machine cannot die.
Wow.
Not in the same way.
That is the beauty of it.
And we position it today with CI, but going
forward also with
MLOps. MLOps is the
development flow that you use for machine
learning, where you need to
optimize the algorithm
that you deploy to your target system.
And machine learning optimization will happen anyway in the cloud
because of the compute-intense machines that you,
or the compute-intense algorithms that you need.
So the machine learning, the training of many end-node devices
will happen naturally in the cloud.
Therefore, we think the validation is also better in the cloud.
And once you have validated it in an MLOps workflow,
you can actually deploy it to your target system.
Yeah, I can see that.
To a certain extent, we look a little further ahead
than the normal embedded developer needs today to get his job done.
But we think about what is the need in two or three years in this industry.
Yeah.
I mean, that's how you stay in business because it's going to take you a couple of years.
To do anything.
To do anything.
And then we're finally catching up and saying, oh, we need that.
I still feel like embedded is in many ways lost in the past.
So it's good to see some forward thinking stuff happening.
Yeah.
Let's see.
I think we still have a couple of questions.
Tom asked about yacc, but I think we've covered that.
Andre from the Great White North asked about who influenced you.
Dennis Ritchie, Aho, Niklaus Wirth, Sethi and Ullman, someone else? I only recognized one of
those names. Yeah, Niklaus Wirth is, to my knowledge, the author of Pascal. And actually,
when I started university, this was 1980. Pascal was the high-level language
of the day. So for academics in the universities, the first
language that we learned was Pascal. And popular compilers
were Turbo Pascal at the time. But to be
fair, I have written tools in PL/M until
we started with the C51 compiler, so 1986.
And PL/M is actually very close to Pascal.
PL/M was the Intel flavor of a high-level language.
I would call it these days, it's actually an intelligent assembler,
because at the end of the day, the compiler wasn't that clever,
but it was really a productivity gain compared to assembler that was
dramatic. Therefore, I like Pascal a lot.
Yeah. Then I think you mentioned Dennis Ritchie,
the inventor of the C language. And of course I know him,
not personally, of course, but I read his book inside out.
It was the Bible. And you have to keep in mind, when we started with
the C51 compiler, there was no ANSI standard.
This didn't exist at the time. We had no
access to it. I think it was in design in '86.
It was not released officially. And therefore,
Kernighan and Ritchie was the go-to book when it came to how the language should behave.
And the other colleagues, Aho and so on, have written the Dragon Book. We called it the Dragon Book because this was on the title cover of this book.
It had a very good collection of clever algorithms for compiler design.
With the caveat that they have not considered resources as constrained.
So basically infinite resources were available for the algorithms that they
described.
And the challenge was to map the algorithms to what was available at that time on compute power.
It's been a pleasure having you, Reinhard.
Do you have any thoughts you'd like to leave us with?
Take a look at Keil Studio.
Take a look also at what we are up to. We have on the landing page actually quite a bit of outlook on what we will do in the future. And I encourage you
to take a look at these tools and explore them. Our guest has been Reinhard Keil, Senior Director of
Embedded Technology at ARM and founder of Keil Software.
Thanks, Reinhard. This was a fun discussion.
Thank you for your time. Really a pleasure to meet you.
Thank you to Christopher for producing and co-hosting. Thank you to our Patreon
listener Slack group for questions. And thank you for listening.
You can always contact us at show at embedded.fm or at the contact link on embedded.fm.
When I say always, I mean sometimes because it's been down for a little while.
If you didn't get a response and you thought you should, please, please do resend it.
It's been down since November.
And now a quote to leave you with from Grace Hopper.
I had a running compiler and nobody would touch it.
They told me computers could only do arithmetic.