Embedded - 237: Break All the Laws of Physics
Episode Date: March 9, 2018. Jan Jongboom (@janjongboom) of Mbed (@ArmMbed) joined us to talk about compilers, online hardware simulators, and inference on embedded devices. Find out more about Mbed on mbed.com. The board simulator is at labs.mbed.com (Mbed OS Simulator). The code for the simulator is on Jan's GitHub. Mbed Labs also has the uTensor inference framework for using TensorFlow models on devices. You can see some of Jan's talks and his blog on janjongboom.com. Jan will be running a workshop at SXSW called Changing the World with Open, Long-Range IoT on March 10 in Austin, TX. Additionally, he will be hosting an IoT Deep Dive Workshop on LoRa on March 14 (also in Austin, TX). For background on LoRa, check out the recent Amp Hour episode with Richard Ginus.
Transcript
Hello and welcome to Embedded.
I am Elecia White alongside Christopher White.
I've talked some here about the online compiler from Mbed.
We're going to learn more about that, but do you know that there's a simulator for it?
Jan Jongboom from ARM is here to tell us more.
Hi Jan,
thanks for being with us today.
Could you tell us a little bit about yourself? Yeah, my name is Jan Jongboom.
I am a developer evangelist at
Arm, and part of
that is developing all kinds of
new and interesting features that I think our community
loves. And indeed, I've been
working on a simulator which allows you to
run emulated Cortex-M devices
running Arm Mbed OS straight in your browser.
Cool.
So
I am going to start lightning round
where we ask you short questions and we want
you to have short answers. And if we're
all behaving yourself...
Behaving yourself? Ourselves?
It's all us that this goes badly with.
Then it will go very quickly.
Are you ready?
Yep.
Favorite processor?
Cortex-M3.
That's a core.
All right.
We won't push it.
I know you probably can't be.
You probably can't have favorites.
Do you think we are in a simulated reality, and how would we know?
I think there's a...
Yeah, there's definitely a chance of that.
How would we know?
Try to find a glitch.
Embedded development or web development?
Very tough question.
Traditionally web, but I've started to like embedded a lot lately.
IoT. More internet or more things?
More things.
Favorite Internet of Things protocol?
Do you mean like BLE and Wi-Fi?
Yeah.
Okay.
LoRaWAN.
All right.
What is the best board for someone wanting to move away from the, I guess I have to say Arduino, I can't hint around Arduino?
Tough question. I mean, I started out with the Nordic Semiconductor nRF51-DK, and that was an absolutely great choice.
But obviously, I can't pick sides.
Yeah, BLE is a nice place to start because it's got so much functionality. Absolutely.
And it works as like a personal area network.
So you can easily get something on your phone running, and it's really easy to demonstrate.
What's a tip that you think everyone should know?
Programming is not hard.
It's just logical thinking and writing it down.
All right.
All right.
So now for longer answers.
The first one basically could be the rest of the show if we do this wrong.
What is Mbed?
Mbed is Arm's Internet of Things device platform.
So it spans a variety of products from an operating system for microcontrollers to device management cloud and everything in between.
So there is the compiler aspect to it.
Yeah.
That's all online.
Why did you...
Why?
I mean, there are lots of existing compilers out there.
Why have an online one?
Well, I mean, the first thing is to stress
that it's not just the online compiler.
We integrate with IAR and
Keil MDK and all the other IDEs.
And we have some local tooling as well with
Mbed CLI. Historically,
the online compiler
started because
in late 2009, I think
a lot of listeners still remember that,
all your tools for building
embedded devices were
paid for and only ran on Windows.
And we wanted to have something that people can get started really quickly.
And what's easier than opening your browser and going to a web page and typing in some code and then hitting compile?
And that's where it kind of started for me.
That was where I came into Mbed: I got a board that said go here and there's a compiler, and then
I didn't have an IDE, I didn't need a JTAG unit for this board. How do you do that?
Yeah, so, yeah, I mean, first of all, I really recognize that. I mean, for me, I was
running code for an Atmel microcontroller with AVR Studio, or Atmel Studio.
It was a relatively painstaking process, and then someone handed me
the Nordic Semiconductor nRF51-DK,
which was an Mbed-enabled board, at a hackathon.
And I was like,
this is amazing. This is how I want to write my code.
Not in a Windows environment
with a proprietary IDE.
But yeah, coming back to your question,
how do you... You don't need an external JTAG.
So we have a project
called DAPLink.
And DAPLink is some software that runs on a
secondary MCU on almost every
Mbed developer board.
And that
connects to the debug
interface on the Cortex-M processor.
So we
fake a file system, we fake that we're
a USB drive, and then whenever you drag
your firmware onto the fake USB
drive, we use the debug access
protocol to flash
the actual target MCU, which I think is a really clever
trick, because it doesn't require you to have any drivers
or anything on your computer.
But that means
I can't step through my code, right?
Yeah, through that link, you can also step through your code.
So we have the pyOCD project
or the OpenOCD project on some boards from ST.
And those will actually give you full debugging capabilities.
So you don't even need the JTAG to step through your code.
You can just use GDB or Visual Studio Code or Keil MDK
or anything like that
to debug your code.
Do hardware developers pay any price
for having that extra MCU?
Is there a power kind of cost
for having that on the board?
Or is it something that you drop
when you go to production?
It's something you drop
when you go to production.
So typically you add a normal DAP interface,
like the 10-pin or 8-pin one, and then you program through normal JTAG if you go to production.
Okay, cool.
So these are intermediate boards, they're prototyping boards?
Yeah, absolutely.
But yeah, absolutely prototyping, development boards.
So, not comparable to what you would put on actual products.
But the MCUs that are on the board,
those are the same ones.
You've mentioned the Nordic nRF51,
and I know they have a 52.
Actually, Nordic supports the Mbed platform quite a lot.
What other chip vendors are there?
Oh, that's a lot. Is this everybody who makes an ARM Cortex?
Not everyone, but a very large chunk.
ST uses it for a lot of their boards.
We have NXP boards, Renesas.
Yeah, it's virtually, I wouldn't say everyone,
we have 140 different boards from, I think, 60 different partners.
So you should be able to find something that matches your requirements.
And if I make a board, how hard is it to get my board on there?
So we have an Mbed Enabled program.
If you pick an MCU from a vendor that already supports Mbed,
it's absolutely trivial.
If not, then you need to port the hardware abstraction layer that we have.
But because we're building on top of CMSIS-RTOS,
which, chances are 98% that your MCU already supports that,
porting is relatively straightforward.
If I'm already comfortable with my desktop compiler, my IAR or my Keil, whatever,
why would I even try Mbed?
So Mbed is a lot more than just a compiler.
I think it's a very sane set of middleware that every IoT developer needs.
Every IoT developer needs a network stack, IPv4, IPv6.
Every IoT developer needs
TLS libraries.
You probably need a bootloader if you want
to add updates. You need an update client
in that case.
File system drivers, we have
both FAT file system and our own
very resilient
file system specifically made for embedded.
If you choose Mbed, you don't
just get the community or the online tools.
You also get all this middleware that is well-tested,
maintained by Arm and maintained by our engineers,
to actually ramp up your IoT product a lot quicker.
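For reference, here is a minimal sketch of what using that bundled middleware looks like from application code, taking the file-system piece as the example. It assumes a recent Mbed OS 5 that ships LittleFS, and uses a RAM-backed HeapBlockDevice as a stand-in for real SPI flash or SD storage so the sketch stays self-contained:

```cpp
// Illustrative boot counter using the bundled file-system middleware.
// Assumes Mbed OS 5.7 or later (which ships LittleFS); HeapBlockDevice stands in
// for a real storage block device so the example is self-contained.
#include "mbed.h"
#include "HeapBlockDevice.h"
#include "LittleFileSystem.h"

HeapBlockDevice bd(16 * 512, 512);   // 8 KB of RAM pretending to be storage
LittleFileSystem fs("fs");           // files appear under /fs/

int main() {
    if (fs.mount(&bd) != 0) {        // first boot: no valid file system yet
        fs.reformat(&bd);
    }

    int count = 0;
    FILE *f = fopen("/fs/boot_count.txt", "r+");
    if (f == NULL) {
        f = fopen("/fs/boot_count.txt", "w+");
    } else {
        fscanf(f, "%d", &count);
    }

    count++;
    rewind(f);
    fprintf(f, "%d\n", count);
    fclose(f);

    printf("This device has booted %d times\r\n", count);
}
```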
So if I make a product
and I use an Mbed board for development and I use your compiler and your RTOS and networking stack
and then I make my own board and I have a JTAG,
what is my path here?
What is the transition path to more traditional tools?
So you can just start developing in your IDE
while you're already using our development boards.
After that, you take actually just a normal path
of productizing something that you hooked up on a breadboard.
So you design a new PCB with the same MCU on it
or maybe a different variant in the same family.
You hook up everything.
And then the same binary that we created,
even in the online compiler, or the binary you created in IAR,
you can just flash on the boards through JTAG.
There's nothing special in the binary.
Is the RTOS and everything, can I
download it and compile it with IAR?
Is it binary? What is the licensing?
So almost everything that
we have is licensed under the
Apache 2 license, so a very permissive
license.
And it compiles with
GCC 6, Arm Compiler 5
and Arm Compiler 6, and IAR.
I think IAR 8, the latest release.
Some parts,
and it's mostly like networking, drivers,
etc. that come from partners, are available
under a permissive binary license.
So not the Apache license,
but you can just embed it in your product
without having to pay anyone.
So one small elephant in the room
that we haven't mentioned,
and I think I'm right, is that most of these frameworks
are written in C++, and your code will be in C++ as well.
It doesn't have to be.
No, so the hardware abstraction layer is basically in two flavors.
One is C, that's what the vendors need to port.
So we have a C HAL for the vendors, and then a C++ HAL for developers.
Yeah, if you really want,
you can use the C variants.
In general, I think the C++ API,
especially for stuff where you do
RTOS things or using network stacks
is a lot more friendly.
But you have some freedom in that.
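For a concrete picture of the two flavors, here is a rough sketch that turns on an LED once through the C HAL the vendors implement and once through the C++ API most application code uses. The names are Mbed OS 5 names; treat the sketch as illustrative rather than canonical:

```cpp
// Illustrative only: the same "turn an LED on" operation through both layers.
#include "mbed.h"
#include "hal/gpio_api.h"   // the C HAL that silicon vendors port

// C HAL flavor: what a vendor port implements and what the C++ classes sit on top of.
static void led_on_with_c_hal() {
    gpio_t led;
    gpio_init_out(&led, LED1);   // configure the pin as an output
    gpio_write(&led, 1);         // drive it high
}

// C++ flavor: what most Mbed application code uses day to day.
static void led_on_with_cpp_api() {
    DigitalOut led(LED1);        // constructor does the init
    led = 1;                     // operator= wraps gpio_write()
}

int main() {
    led_on_with_c_hal();
    led_on_with_cpp_api();
}
```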
Do you find you get any pushback from
having C++ so
prominent in an embedded? I know a lot of developers
who are not
big fans of C++ in embedded. I'm not one
of them, but has it been a challenge?
Less than I anticipated.
I think the main concern
with people is that they're afraid that if they
start using C++ on their thing,
that it actually will blow up in code size.
I've written, I think, five or six blog posts on reducing the code size for Mbed projects.
And after you link them to those, they often start to realize that it's not that big of a problem.
But yeah, not that much pushback, actually.
And there's a lot of existing code,
not just developed by the folks
at Mbed, but there's
a huge community
and all the licenses are very permissive.
Yeah.
I think, yeah, so the default license
if you publish anything on Mbed is Apache 2,
but you can license it
with something else. But yeah, I think our online community at mbed.com is amazing and has been an absolute
catalyst behind the success of Mbed. I think we have 20,000 different libraries published there.
Literally every component that you buy... you know, in Tokyo you have
the electronic stores with all these random components. The chance is 98% that you pick a random component,
and someone somewhere has already written an Mbed library for it,
which I think is pretty amazing.
It is pretty amazing, especially when I go to try to make something for a new flash chip
or an ADC I've never used.
And I can go to Mbed, and there'll be a library there with Apache licensing.
And I don't have to make my own header file.
I just have to tweak it to use my I2C or whatever.
Yeah, that's brilliant.
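To illustrate the "tweak it to use my I2C" point, here is a minimal sketch of the shape such a community driver usually takes: it accepts your board's I2C pins in the constructor, so adapting it is one line. The class is written in the style of an Si7021 temperature sensor driver purely as an example, not a reference to any specific published library, and the command byte and conversion formula are quoted from the Si7021 datasheet from memory:

```cpp
// Hypothetical driver in the style of the libraries on the Mbed code site.
// Register 0xE3 and the conversion formula follow the Si7021 datasheet as recalled;
// double-check against the real part before relying on this.
#include "mbed.h"

class Si7021Sensor {
public:
    Si7021Sensor(PinName sda, PinName scl) : _i2c(sda, scl) {}

    float read_temperature_c() {
        const int addr = 0x40 << 1;   // Mbed's I2C API takes the 8-bit (shifted) address
        char cmd = 0xE3;              // "measure temperature, hold master mode"
        char data[2] = {0, 0};
        _i2c.write(addr, &cmd, 1, true);   // repeated start before the read
        _i2c.read(addr, data, 2);
        uint16_t raw = (data[0] << 8) | data[1];
        return (175.72f * raw / 65536.0f) - 46.85f;
    }

private:
    I2C _i2c;
};

int main() {
    // Swap in whatever SDA/SCL pins your board uses; I2C_SDA/I2C_SCL are aliases
    // that many, but not all, Mbed targets define.
    Si7021Sensor sensor(I2C_SDA, I2C_SCL);
    printf("Temperature: %.2f C\r\n", sensor.read_temperature_c());
}
```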
Why did you go the C++ route?
Was it partially influenced by Arduino?
I was not there at that point.
So I can't comment on that. I like that about it. I do like that the Mbed compiler and environment feels like a stepping stone from
Arduino. Do other people say that, or do you just not want to talk about Arduino at all?
So actually, Arm is an Arduino partner as well.
We have, like, my boss's boss is on the board of Arduino.
A good friend of mine is on the board, is the CIO of Arduino.
I have a really good working relationship with them.
I think what we try to do a little bit more,
so Arduino really focuses on developer experience there
and we want to look at all the aspects
that you're looking at
if you want to build a secure IoT product
and then offer a solution everywhere on that route.
So, yeah, I think Mbed and Arduino
are actually pretty complementary.
Arduino has made it really easy to get started
building IoT products.
I actually started my first project was in Arduino on Embedded.
Yeah, they both have their place.
They both have their place.
We try to do a little bit more.
Arduino has a huge library.
You have a huge library.
They don't exactly overlap.
So they overlap in spots, but not entirely.
Yeah, so I think one of the,
maybe not one of the issues,
but what Arduino is missing,
without trying to sound negative,
is Arduino doesn't have an RTOS.
So also a lot of the RTOS primitives,
locking, mutexes,
are not implemented in libraries.
At least all the
libraries we put out are thread-safe
and can be used in an RTOS environment.
So, I mean,
I've contemplated the idea of having
some sort of Arduino compatibility
layer for Mbed, or the other way around.
So having an ecosystem
where you can use both.
But it's just
they're both
catering to a different public, so
they have to be a little bit different.
And you mentioned the security
part. That's always something
that's a little daunting.
Like, I know
I need good security,
but I also know that I shouldn't be developing it myself.
Yeah, so, yeah, I completely agree.
I think security is fundamental to any Internet of Things product,
and we've seen how easy it is to get all your devices hacked
if you try to roll something yourself.
I think ARM is in a position to both do the research
and invest the money in
building a common set of middleware
items from
a TLS library to integration
with TrustZone or with your
memory protection unit
to having a secure
firmware update service
on the device.
I think that it would be better if we would all channel
our efforts and standardize on whatever we're building.
So that's why ARM is investing a lot of money in these security features.
Do you ever worry that your code is being read?
I mean, it's open.
Do you think about, as you write the code,
do you think about who's going to read it
and how legible it needs to be
and all of these, oh my God, it's open source issues?
Or is that just me?
I mean, so before I joined ARM, I worked at a large telco
and they donated my time to the Mozilla Foundation.
So I'm very aware that all my code is out in the open, because all the code we wrote for Mozilla was out in the open.
And it had to be reviewed by two or three people.
For me, it's not really different.
I like it, actually.
I think the more people see the code,
the easier it's going to be to spot any issues with it.
I like being able to read the code
because it gives me good ideas for my own code.
A lot of the Mbed things are written in good C++,
not overly complicated, usually aware of memory situations.
And so it is nice to see embedded code that other people write.
Yeah, I completely agree.
I moved from being a web developer to being an embedded developer
three or four years ago, and being able to
tap into this vast ecosystem of code that I know works and actually powers
millions of devices...
yeah, I mean, for me,
I can totally see where you're coming from.
One of the difficulties with having a class, with teaching a class about embedded systems, especially one that's online or distributed, is not knowing if everyone can get the same hardware.
And if they can, if they can get it working on their computer.
Yeah.
So I was very excited to see you have a simulator.
Could you tell us about it?
Oh, my.
Yeah.
One of the things that I kind of hate about embedded development
is that the feedback loop is very long.
You need to compile your application, which takes a bit.
You need to flash your application,
and then you need to get your application back
into the state that you're testing.
And so when I saw the
micro:bit project, which has been worked on by a couple of
my colleagues and a whole bunch of other companies, they have an online editor, it's actually a
block-based editor, but they also have the simulator
directly on the screen on the right.
And that is such a powerful tool.
I flew to Trondheim for Maker Faire Trondheim
a year and a half ago,
and I was teaching kids how to do micro:bit
in Norwegian, which I don't speak.
And just the mere idea that whatever they click together
they can test directly in the same browser window before even flashing it on the device, which takes,
you know, even in an optimistic case, 20 or 30 seconds... that is so cool. And from the moment that I saw that,
I realized I want that, but a bit more open and a bit less focused on just a single
development board. So that's why I started the simulator project.
And what stage is it in right now?
I'm still a bit wondering where it fits in our developer flow.
So right now the simulator just sits on a website.
On the left you have an editor, on the right you have the simulator.
But that's not a product, and it doesn't integrate with anything.
So I'm looking at... I think it would be really cool if we could test our cloud services directly
through it. So you could say, spin me up an instance, and then it will just run a simulator and you see
an actual board appear in our device management cloud. Or, if you go to one of our documentation
pages and you see some example code,
you can just press the run button
and it will run in the simulator.
So it's pretty stable.
It works quite well.
But I want to integrate it in a different way
than it currently is, as a completely standalone product.
I want to integrate it in our compilers,
in our examples, et cetera.
So that's what we're going to focus on for next year.
So is it a simulator or an emulator?
Is it actually doing the ARM processing?
So the cutoff point,
so if you look at an Mbed board,
you have your application,
then our C++ HAL,
then the C HAL,
which the vendors implement,
and then the actual board,
and you need to toggle registers to make LEDs blink.
So I implemented the simulator
at the C HAL level.
So according to Mbed OS,
it's just another target.
And the target underneath
draws a board on an HTML page instead of
toggling some registers, and that's where it is.
But the nice thing there is that I can
reuse almost everything that we built
already in Mbed.
Let me describe what happens
when I load the simulator because
Jan was very focused on all the things
he wants to have happen but
you should at least hear what I'm seeing.
So over on the left side, there's include mbed.h, DigitalOut, led, LED1.
I assume that's defined somewhere.
Usually in mbed.h, it goes down and it finds the board you're on.
There's the main, the while(1).
led equals not led.
That blinks it.
That is all you need to do to blink the LED.
And that would be true on any Mbed-supported board.
And then there's a printf.
LED is now, led.read().
And then it waits 500 milliseconds.
And that's all.
So that's the code.
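For reference, the blinky being described looks roughly like this (standard Mbed OS 5 names; LED1 resolves to whatever pin the target wires to its first LED):

```cpp
// Roughly the simulator's default program, reconstructed from the description above
// (Mbed OS 5 names; wait_ms() was later deprecated in favor of ThisThread::sleep_for()).
#include "mbed.h"

DigitalOut led(LED1);   // LED1 is defined per target in the board support files

int main() {
    while (1) {
        led = !led;                                       // toggle the LED
        printf("Blink! LED is now %d\r\n", led.read());   // shows up on the serial console
        wait_ms(500);                                     // change to 1000 to blink slower
    }
}
```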
It's short. This is just blinky, and there are a couple of other demos, but let's continue with blinky.
On my right-hand half of the screen, there's a little blue board that looks kind of like the NXP Xpresso's got cut in half.
It's not very big, but there's a button, a virtual button, but it's clearly a button. And there are some labeled pins, fake pins, SPI, I2C. And then there's an LED that is blinking and a serial output that says blink, LED is now one, blink, LED is now zero. And it goes on and on and on.
So if I change something, if I change the wait milliseconds to a thousand instead of 500, it would blink slower. And I can add components that will have more LEDs or different places.
And then there are a whole bunch of different demos. I mean, this is blinky. This is just hello world. There's one about interrupts that has to do with using that button, that fake button there. PWM out, so we could make the LED go soft. You can add a fake simulated LCD display, a touchscreen, a temperature-humidity sensor.
And then there are the networking ones: TCP socket, HTTP, HTTPS, CoAP. And that means you can simulate all this stuff with this website and have it go out and do all the things you want to try.
And there's all this, and none of these are hideously long programs. They're all things you can read and try. I mean, the HTTPS demo has a certificate in it, so, you know, there's some non-cuteness here, but overall it's like a page of code. And I am happy about that. It lets me learn about these things easily.
So what did I forget?
No, that's pretty spot on.
I mean, for me, it's like the things that I want to make sure of, that one, the code that you write on the left-hand side in the editor needs to be 100% Mbed valid.
If you run it on a board, it should run exactly the same.
And I also didn't want to make it like dumbed down.
So that's why you see, in the HTTPS example, you actually have
to specify the certificates that you trust,
and that's because we're using
the Mbed networking libraries
directly. So it's not just
faking it with an XMLHttpRequest
to the HTTPS service. It actually
opens a socket, and then
does a DNS request, and then
actually, you know,
does a TLS handshake, and all in the browser.
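As a sketch of that same idea from the application side, here is roughly what pinning a trusted CA and letting the stack do the DNS lookup and TLS handshake looks like. It assumes a recent Mbed OS that ships the built-in TLSSocket and an Ethernet-capable target; the simulator demo Jan describes uses his HTTP library on top of the same stack:

```cpp
// Minimal sketch: pin a trusted CA and let the stack do DNS + TCP + TLS for real.
// Assumes a recent Mbed OS with the built-in TLSSocket and an Ethernet-capable target.
#include "mbed.h"
#include "EthernetInterface.h"
#include "TLSSocket.h"

// Placeholder: paste the PEM of the CA you actually trust, as in the HTTPS demo.
// With this placeholder the handshake will fail; substitute a real root CA.
static const char CA_PEM[] =
    "-----BEGIN CERTIFICATE-----\n"
    "...\n"
    "-----END CERTIFICATE-----\n";

int main() {
    EthernetInterface net;
    net.connect();                          // bring up the interface via DHCP

    TLSSocket socket;
    socket.open(&net);
    socket.set_root_ca_cert(CA_PEM);        // the certificate you chose to trust
    socket.connect("os.mbed.com", 443);     // DNS lookup, TCP connect, TLS handshake

    const char request[] =
        "GET / HTTP/1.1\r\nHost: os.mbed.com\r\nConnection: close\r\n\r\n";
    socket.send(request, sizeof(request) - 1);

    char buffer[256];
    int n = socket.recv(buffer, sizeof(buffer) - 1);
    if (n > 0) {
        buffer[n] = '\0';
        printf("%s\r\n", buffer);
    }
    socket.close();
    net.disconnect();
}
```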
So it's not a dumbed down version.
I think that's what makes it really powerful
because I was working on the...
I'm maintaining the HTTP library for Mbed,
and I was on a plane from Italy back to the UK
and I just wanted to hack on the HTTP library,
but I had to, like, you know, whip out the development board
and then have a local server running.
It's all pretty terrible.
So I wrote my new feature,
which was about content streaming over HTTP,
in the simulator.
And I tested it two hours later when the plane landed.
I tested it on a real development board
and it just worked exactly like it did in the simulator.
I think that is super cool.
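Here is a hedged sketch of that streaming feature using the mbed-http library Jan maintains. The constructor and callback signatures below are recalled from that library's README and may differ from the current release, so treat them, and the example URL, as assumptions to verify:

```cpp
// Hedged sketch of chunk-by-chunk body streaming with the mbed-http library
// (github.com/ARMmbed/mbed-http). Signatures recalled from its README; verify
// against the release you actually pull in.
#include "mbed.h"
#include "EthernetInterface.h"
#include "http_request.h"

// Called for each chunk of the response body as it arrives, so the whole body
// never has to fit in RAM at once.
static void body_chunk(const char *at, uint32_t length) {
    printf("Received a chunk of %lu bytes\r\n", (unsigned long)length);
}

int main() {
    EthernetInterface net;
    net.connect();

    // Example URL only; substitute whatever resource you are streaming.
    HttpRequest req(&net, HTTP_GET,
                    "http://os.mbed.com/media/uploads/mbed_official/hello.txt",
                    body_chunk);
    HttpResponse *res = req.send();
    if (res) {
        printf("Status: %d\r\n", res->get_status_code());
    }

    net.disconnect();
}
```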
Well, I think your intuition is right, too,
about the need for these kinds of things
because embedded development is very slow.
And putting real hardware in things
always probably halves or quarters
the speed we can actually go.
Yeah, I think so.
We're kind of in a renaissance
for embedded development.
Like, you know, 2009 or even
2012 when I shipped
my first embedded product, still
a lot of stuff was on Windows.
You needed external JTAGs.
The development cycle was so incredibly
slow.
And now, first of all,
almost all the tools are cross-platform.
So that's already
a great step up.
And I think we're going to now see an actual renaissance in development tools
and how you develop things.
And I'm pretty excited about that.
But this board, this simulated board,
is it real?
I mean, could I...
You say I could load this code
to a physical board
and I would expect it to work the same.
Yeah.
But what is this board that I'm seeing as it blinks its light at me?
So the hardware abstraction layer, so all the pin names,
they're derived from the LPC1768 development board,
which is the original Mbed board.
But if you change
the pins to whatever your development board
has, then all the
examples, including all the peripheral ones...
so I've noted the name of the
peripheral
in the simulator, so if you get the same peripheral,
hook it up to the same pins as the simulator
tells you, it will run exactly the same on any Mbed board.
That's very cool.
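In code, that pin-name point looks something like this sketch: p21 below is an LPC1768 PWM-capable pin like the simulator assumes, D3 is the Arduino-header alias that many (not all) other Mbed targets define, and everything else stays the same:

```cpp
// Same program, different pin name: the only board-specific part is the PinName.
#include "mbed.h"

// On the simulator / original LPC1768 naming it would be:
//   PwmOut dimmable_led(p21);      // p21 is one of the LPC1768 PWM pins
// On a board with Arduino-style headers (alias availability varies per target):
PwmOut dimmable_led(D3);

int main() {
    dimmable_led.period_ms(10);          // 100 Hz PWM period
    float duty = 0.0f;
    while (1) {
        dimmable_led.write(duty);        // duty cycle from 0.0 to 1.0
        duty = (duty >= 1.0f) ? 0.0f : duty + 0.1f;
        wait_ms(100);
    }
}
```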
What if I wanted to
actually support
my board in the simulator?
I think for now
I don't want to go really on that route
because I don't want to have any
board-specific
code in there.
So the board looks like an NXP
board, but it's not an NXP board.
And probably if we're going to integrate this into a product,
we're going to make it look less like a real board
even. So I want this to be
a generic target rather than
simulating a specific board.
Is the simulator open source too?
Absolutely, also Apache 2 licensed.
Okay, so if somebody really was ambitious, they could
put their own
board in there.
Yeah, absolutely.
And I would love to see something like that.
You know, maybe, like, where I would like to have some support,
if people are hearing this,
if you have some peripherals, maybe on the board,
or peripherals you want to have,
like, write a little abstraction layer for it.
You can just write it in a little bit of C
and then most of it is just JavaScript and HTML
and hooking up an extra component is pretty trivial.
There's plenty of examples already in there.
So I would love to get some support for that.
Yeah, I think it is less about the boards
because given a Cortex-M,
you have a lot of similarities between boards, but if you are doing a specific memory device or peripheral I2C or SPI or whatever, that's where things start to get complicated.
Yeah, absolutely. What about for specific boards? What about DMA and all of these other tweaky features, sleep?
And do you deal with those or do you just leave them in the hardware abstraction layer?
Currently, we just leave them in the hardware abstraction layer.
One of the things I like regarding sleep,
one of the things we're adding now to development boards
is the ability to track energy usage
of all your peripherals, et cetera,
and then graph that out.
So it's currently still in the planning phase,
but that's hopefully coming to development boards
very quickly.
Something like that would be super cool to simulate
in a simulator because you kind of know
what peripherals are being used.
You know, when you properly go to sleep and testing stuff like that, making sure that
your sleep cycles are correct.
That would be something that would be great in a simulator and easy to graph out.
And you said this is a project that is gaining traction over the next year.
Is that right?
Do you have deadlines or is this sort of something you do as you can
So we don't really have any deadlines. I mean, it's done when it's done, which means it's never
done. Well, so yeah, I mean, I have some ambitious plans. They're working, but they're not integrated in our product yet.
So for me, it's only done when we actually integrate this
in some of our getting started flows.
On a technical level, the stuff that I need for that, I've done.
It's not open source yet, but it will be.
And now it's just working with the product teams
to actually integrate it in our flows.
I'm pretty confident we're going to be able to do that relatively quickly.
And this is available off of labs.mbed.com.
And when I went there, there were some other things.
What is uTensor?
Yeah, uTensor.
Yeah, uTensor is...
Yeah, I get really excited about it.
So uTensor is a deep learning AI inference framework that runs on a microcontroller.
There are a lot of hard words in a row, but what it means is that you can take a TensorFlow-trained model, TensorFlow being Google's deep learning tooling. You can train it somewhere on a cluster of GPUs,
and then you can push the pre-trained model to a microcontroller
and then do the classification on there.
And that's awesome because you can do proper machine learning
on a microcontroller and push real intelligence to the edge.
And because it's the same model that you can also run on TensorFlow,
you can actually dynamically shift workloads from the cloud to the edge
or the other way around, depending on network conditions
or what type of connectivity your device has.
So, yeah, uTensor is super awesome.
That's crazy.
I always think of machine learning as requiring a GPU or something like that,
but that the idea of doing at least small classification on a microcontroller
opens up a whole bunch of applications.
Do you have some examples of what you kind of foresee that being used for?
So, yeah, one of the demos that we have is just a simple handwritten digit recognition.
There's like a big training set available.
It's like the 101 in machine learning.
The MNIST set.
Yeah, correct.
So we've implemented that.
That runs with under 256K of RAM, which I think is already pretty cool.
And now we're going to build a bit larger and more interesting models.
So one of the things that I
could be thinking about is stuff like computer vision,
especially for constrained
devices, or devices that have
a constrained network connection.
Ones that are connected over low-power wide-area networks
where your data rate is in bits per second
rather than megabytes.
You cannot send out images
to do classification somewhere on a cluster of GPUs.
You want to do that on the edge.
And I think that uTensor
is going to give us that opportunity.
And this is the inference framework.
This is running the thing
that you trained on the GPU.
Yeah, correct.
So you still need that big AWS
or GPU system
in order to train your neural network.
But this is running that neural network and having it be the same code, which is really important.
Because I know a lot of times implementing somebody else's machine learning system, little changes can change things, like how the FFT library was implemented.
If that makes a difference, I've always thought that means you're not trained right.
But sometimes it does make a difference.
So this lets it be direct-ish, more direct.
Yeah, and I think that the compatibility with TensorFlow is really important
because a lot of the innovation right now in machine learning space uses TensorFlow. Just
last week, a friend of mine released Propel,
which is
basically
machine learning for JavaScript.
So it's written by two guys who used to work
on Node.js, and now they have a
set of libraries where you, as a
JavaScript developer, can start doing machine
learning, and they handle all the hard parts on it.
And then after that, you've got a trained TensorFlow model. You
compile it with the uTensor tools
and now you can run it on your microcontroller.
It's terrific.
Very happy. I've either
got to retire or go back to school.
It's very exciting, but
it is kind of...
I mean, this is all very new and
I mean, you could do embedded the same way for the last
20 or 30 years,
and now that's not true.
Well, I mean, most of the concepts are the same.
Machine learning is definitely
a completely new way of thinking.
Yeah, but I think that Internet of Things
or embedded development
is going to be crucial for the adoption
of machine learning.
Because for machine learning, you need vast sets of data.
But the only way to get all these vast sets of data is using cheap microcontrollers.
So I think we are in an excellent spot.
And whatever is taking the data in order to create these sets then has the ability to act on them.
And so that is where the inference comes from.
That is where you start having to run this stuff, make some decisions.
Yep.
It's not just data collection.
Yeah, yeah, correct.
So we're moving from just data devices to now we can actually do the classification there.
And I think it's cool.
I am teaching, I taught last year at a summer school in Africa,
and I'm returning this year.
So we're going to be in Kenya, and it's called Data Science Africa.
And in there, we're getting students,
so mostly mathematics PhDs and mathematics master's students
who know how to do machine learning,
but they don't know enough to actually build IoT devices,
to actually do the data collection
and maybe data collection on the edge.
So we're just going to go over there for eight days
and run workshops with these students
and actually transform them
into not just mathematicians and data scientists,
but also IoT developers.
So I think the fields are coming together.
You also have a JavaScript on Mbed,
which, you know, I'm not really sure about that,
but I'm sure somebody is jumping up and down in excitement on that.
Could you say a bit about that?
I think it's been called an abomination by every EE
to run JavaScript on a microcontroller.
I'm contractually prohibited from commenting on that.
But I think it's in the same ballpark as the simulator.
It's a way of making it easier and faster
to test out stuff on micros.
It's not that I can't do it in C,
but what I'm really missing from my days as a web developer
is to quickly hack something together on the web console
in the browser and see if it works.
So one of the first projects I did
with the JavaScript runtime on Mbed
is to make a read-eval-print loop that runs over serial.
So you connect over serial to the development board, you can just start
writing some JavaScript and it will actually change stuff
on the development board or
start toying with the peripherals.
And that's a really cool way
of demonstrating how
this Internet of Things
or embedded thing works to people
who've never done that.
Are we going to see it in
production systems?
Maybe.
But I think it's a really interesting research project.
And the first simulator we built was actually for the JavaScript runtime.
So, you know, you always build on top
of what you've been doing before.
And all of these are free?
Yeah, Apache 2 license, all of them.
But Arm also sells the Keil compiler.
Yeah, correct.
Yeah, I mean, Arm as a company, like, our main bread and butter is selling architecture licenses or, you know, selling chipset designs.
That's the Cortex-M3 or the Cortex-M0+. The actual core comes from ARM and then is put into a chip developed by ST or Nordic or any of the other names we've mentioned.
Yeah, so we're fabless, but yeah, that's pretty much correct.
Yeah, plus, of course, all the processors go into mobile phones.
Like 99% of all mobile phones use an ARM core inside.
But yeah, and then on top of that,
there's some other things like
some money is being made
by using the
Keil MDK and the
ARM compiler. We don't tie
all the projects that we do in with that. You can also compile
with GCC, although often your
code size is a bit higher, your memory size is a bit higher,
so it still
pays off to actually buy a compiler.
But yeah, we're also in the software business,
so Mbed is a vehicle for selling more chips,
but we also have some services around Mbed.
So we have Mbed Cloud,
the device management platform,
and that's a paid subscription
that works really well
with the rest of our tools in our ecosystem,
but you're not forced to use it.
If you just want to use Mbed OS or
our
labs projects, it's completely free and
completely open source.
So the cloud part would be like
they would do your device
management and data
storage and that kind of thing for you?
Is that what that does?
Not even data storage, so just device management.
It's all the stuff that you're going to run into
if you start shipping 100,000 devices.
All of a sudden, you need integration with the factory tooling in the factory,
and you need to have device identities,
and these device identities can't be burned into the device
because maybe you don't even know who your customer is
or the device may be resold.
You need to reprovision them with new keys
and you need to do firmware updates over them.
And it all needs to be in a really secure manner,
both on the device,
integrating with your silicon features,
and in the cloud.
And yeah, that's Mbed Cloud.
And then can I also send data through Mbed Cloud
to get to wherever I needed it?
I mean, because the device management is very complicated.
But then I also want to be able to talk to it
for whatever reason I had to build this thing.
Yeah, so I mean, there's two ways of doing it.
So we don't store your data.
So we can use it as like a data plane.
And then we proxy your data to wherever you want to with an API to get it out.
Or what I think is also a very viable option is to say,
well, I use Embed Cloud for my device management,
and then I just open a socket directly to whatever other cloud provider
to actually dump my data straight into the database.
Okay.
So I could use AWS and have it talk to the Mbed
Cloud in order to
get all the data from my sensors.
Yeah, absolutely. Cool.
Let's see. Changing
topics rapidly and without
warning, how was Embedded World?
I think trade shows in general,
the first day is really exciting. You see all the products
and you have a lot of energy and you see all these people.
Then the second day, it starts going a bit slower.
And then on the third day, you just want to go home.
It's one of the larger embedded conferences.
Is that right?
Yeah, correct.
I think one of the, yeah, I mean, I spoke there.
I think it's really cool. One of the great things about having a conference that big and so niche, basically,
is that you have all these super specialized tracks, and then there's multiple people talking about it.
So there was an energy harvesting track, like a half day, just people talking about energy harvesting.
That is insane. That is pretty cool.
So I spoke at the LPWAN track
about firmware updates over low-power wide-area
networks, which is one of my other
pet projects.
That's pretty cool, pretty inspiring.
The downside of a conference track like that is
that because all of these things are so
niche, they start spreading it out.
They have 10 separate tracks at the same
time running. Maybe you have 60 people in your audience, which is not that much.
It's hard to balance where you put your time, especially with in-person things like that, because you can get really
detailed, but then you have a smaller audience. So that makes sense.
Yeah, yeah. It depends also.
Like, Embedded World is important not just because of the conference
and not just because of the trade show, but because everyone is there.
So it's great.
Everyone that you ever want to speak to, every company you want to speak to,
is going to be there.
So I think for me it was pretty useful to be there.
But conference-wise, I was speaking at The Things Conference,
a conference in Amsterdam just focused on LoRa, LoRaWAN.
We had the same talk, or a very similar talk, and then we had 650 people in the audience.
So even very niche topics can all of a sudden draw an amazing audience.
Do you have any place else you are looking forward to speaking in the near future?
I am flying to South by Southwest on Wednesday.
The large interactive festival.
So if you're going to be
there, I'm not sure when this is going to be
republished, but on 10 March
I'm
holding a workshop as part of the official South by Southwest
Interactive program.
So it's going to be two hours of building,
of exploring, long-range Internet
of Things. So we're going to build some LoRa networks using Mbed, of course.
And then on the 14th of March, we have a meetup.
No badge required, in the Arm offices.
You can search for IoT Deep Dive Austin on Meetup,
and you'll find it.
And we're going to talk about long-range IoT.
We're going to talk about LoRa, do some demos,
and then eat some pizza and grab some beer. So if you're
around in Austin, come by.
What drives you to LoRa?
It's
in general
it enables me to do things that I could
never do before.
It's always this balance between
range and power consumption.
So I love BLE because it is very low power consumption and it just magically
works.
But the downside is it only works,
I don't know,
30 meters away.
Or I can do cellular,
which is great because the range
is fantastic and it works everywhere.
But then I drain my battery within a week.
And I was really looking for something that gave me like BLE type of power consumption,
but then with a really long range.
And I found that in LoRa about three and a half years ago.
It's just brilliant.
Like the first time we did a transmission,
we put up a LoRa gateway on top of the building where I worked.
And we could see on the other side of the Oslo Fjord,
it was eight kilometers away,
we could actually reach the gateway with our little small device,
knowing that it was still going to run for a year on a pack of AA batteries.
That was really cool.
Wow.
Okay, we have to do a show on LoRa at some point.
The Amp Hour did one recently.
It is really cool.
Yeah. I guess I'll link in the Amp Hour show too.
There are more
technologies in the same space.
You have Sigfox, which is like the father
of the technology, and
the telcos are catching up on it
by doing narrowband IoT.
It's in general just a really
good place to be in
connectivity-wise at the moment.
Everyone is waking up to this and realizing that IoT connectivity,
machine-to-machine connectivity should be done without a SIM card,
and it should have long range and low power.
So I'm excited about that.
It should definitely break all the laws of physics.
Yeah, if we could do that, then we could also have really high data rates.
That would be even more amazing, but not yet.
Well, Jan, we've kept you for long enough.
It was great to talk to you.
Do you have any thoughts you'd like to leave us with?
I think the embedded space at the moment is so energetic and so full of fun.
We're seeing a renaissance in development tools. We see a lot of the things that desktop developers were used to,
simulating devices, just stepping through code very easily,
actually making it back to embedded.
We see new networking technologies that all of a sudden give us things
that we could never dream about popping up.
So I think embedded is a fantastic place to be in.
And yeah, I can't wait to see what the future brings.
I completely agree.
It's an exciting time to live.
Our guest has been Jan Jongboom, developer evangelist at Arm.
You can check out the simulator and many of the other things we talked about on mbed.com or labs.mbed.com.
Jan's website is janjongboom.com, where you can see some of his talks and his writing.
Check those out. It's pretty cool.
And again, link will be on the website in the show notes.
Thank you for being with us, Jan.
Thank you.
Thank you to Christopher for producing and co-hosting.
And of course, thank you for listening.
You can always contact us at show at embedded.fm
or hit that contact link on Embedded FM.
And now a quote to leave you with from Carl Sagan.
Somewhere, something incredible is waiting to be known.
Embedded is an independently produced radio show that focuses on the many aspects of engineering.
It is a production of
Logical Elegance, an embedded software consulting company in California. If there are advertisements
in the show, we did not put them there and do not receive money from them.
At this time, our sponsors are Logical Elegance and listeners like you.