Embedded - 254: Murdering Thousands of Gnomes
Episode Date: August 3, 2018. Gabriel Jacobo (@gabrieljacobo) spoke with us about embedded graphics, contributing to SDL on Linux, using MQTT, and working far from his employers. Gabriel's blog and resume are available on his site, mdqinc.com. His GitHub repo is under gabomdq. SDL is Simple DirectMedia Layer (wiki). It is not so simple. For MQTT-based home automation, he uses the Raspberry Pi Home Assistant build and many NodeMCUs (ESP8266s running Lua, MicroPython, or the Arduino framework).
Transcript
Hello, this is Embedded.
I am Elecia White, here with Christopher White.
When I say graphics, do you assume I mean a big game computer?
What about in embedded systems? This week we'll be talking to Gabriel Jacobo about embedded graphics
and MQTT home automation. Hi Gabriel, thanks for joining us.
Hi Chris and Elecia, how's it going?
Good, I know you have worked at some company we're not going to talk about with my husband,
but how about you give us an introduction of yourself as though you were
meeting us for the first time at a technical conference.
All right.
My name is Gabriel or Gabriel in Spanish.
I'm an electronics engineer and I typically work remotely with US companies, usually in
software or firmware development.
And lately I've been working in firmware graphics and systems engineering.
I thought he would go on for longer.
I have three kids.
Well, let's do lightning round and we will dig into more because despite your short introduction, you have done many interesting things.
So lightning round, we ask you short questions.
We want short answers.
And if we're behaving ourselves, we won't ask you why and how and all of that.
Christopher, are you ready?
Yeah.
Gabriel?
Sure. Let's go.
Star Wars or Star Trek?
Star Wars. What's a Star Trek?
What's your favorite X-Window manager?
Wayland.
Gnome or KDE?
What's Gnome?
KDE.
I've been using KDE constantly for the last 10 years.
I wouldn't exchange it for anything.
Dinosaur movies or shark movies?
Dinosaur.
Is the Amazon a river, a bookstore, or a tribe of fierce women?
For me, it's a bookstore or online shopping destination.
See, I had a sales platform in that question,
but you just skipped it.
I was tightening.
And I'm a little uncertain about your answer
because when I asked you about Amazon earlier in the week,
you said it was a river.
Yeah, because Amazon
doesn't really have any presence in South
America, but anytime
I go to the US,
Amazon is probably the first
site I visit.
What color should
a llama be?
Say again? What color
should a llama be?
Oh, what color?
So the thing is, llama means flame in Spanish.
So that's why we're talking about it being of different colors.
Blue is the color of a good flame.
So I'll go with that.
Do you like to complete one project or start a dozen?
I'd like to complete one project.
What I actually do is start a dozen.
All right, let's go and let's dive into things.
Graphics.
When you say graphics, and I know you've done a lot of different work in graphics.
What does that word really mean
when you're talking about embedded systems and graphics?
So graphics means different things to different people.
For end users, it means probably what they see on screen.
For designers, it means the way they communicate with users.
And for us firmware engineers,
it's the whole pipeline that goes from assets
to some processing, the frame buffer,
and eventually pixels on the display.
It's a little odd because I would never say
I've worked in graphics,
but I have put bitmaps into Flash
and then gotten them out and put them on the screen.
Yeah.
Should I be saying, oh yeah, graphics is on my resume?
So when you did that, did you constantly write to the screen?
Did you worry about doing the minimal amount of work possible to preserve battery, for example?
All of that is, I think, graphics work in a way.
Yeah, I did all that.
And I actually drew a graph too.
So it wasn't just splatting things on the screen.
I had to worry about pixels.
Yeah, I think you can put that on your resume.
Okay, graphics expert.
Graphics is usually expensive for batteries.
So for embedded work,
anything that saves you on drawing time,
on redrawing things that you don't need to draw,
even if you're just moving assets from the flash to the display,
that's a fairly complicated process
and definitely worth putting in your resume.
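The redraw-only-what-changed idea can be sketched with a toy framebuffer that tracks a dirty bounding box; all names here (`FrameBuffer`, `set_pixel`, `flush`) are illustrative, not from any real graphics library.

```python
# Toy dirty-rectangle tracking: only the changed region is re-copied
# to the "display" on flush. All names here are illustrative.

class FrameBuffer:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.pixels = [[0] * width for _ in range(height)]
        self.dirty = None  # (x0, y0, x1, y1) bounding box, or None

    def set_pixel(self, x, y, color):
        self.pixels[y][x] = color
        if self.dirty is None:
            self.dirty = (x, y, x, y)
        else:
            x0, y0, x1, y1 = self.dirty
            self.dirty = (min(x0, x), min(y0, y), max(x1, x), max(y1, y))

    def flush(self, display):
        """Copy only the dirty bounding box to the display; return pixels moved."""
        if self.dirty is None:
            return 0
        x0, y0, x1, y1 = self.dirty
        for y in range(y0, y1 + 1):
            for x in range(x0, x1 + 1):
                display[y][x] = self.pixels[y][x]
        self.dirty = None
        return (x1 - x0 + 1) * (y1 - y0 + 1)

fb = FrameBuffer(320, 240)
display = [[0] * 320 for _ in range(240)]
fb.set_pixel(10, 10, 0xFFFF)
fb.set_pixel(12, 11, 0xFFFF)
print(fb.flush(display))  # 3x2 bounding box -> 6 pixels, not 320*240
```

On a battery-powered device, the win is exactly what the transcript describes: the bus moves 6 pixels instead of 76,800.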
Could you go through the process for us?
So the center of the graphics process
is what's called the frame buffer,
which is a complicated name
for a buffer, an array in memory,
where the pixel information is going to be,
and it's going to be sent to the display through some mechanism.
The assets typically come from flash storage,
an SD card; depending on the resolution,
that's the type of storage that you're going to have.
There's going to be some sort of DMA transfer
from the storage to RAM memory or even directly to the frame buffer.
And then there's going to be some processing done either by the CPU in what's called software rendering or by a GPU in what's typically called hardware accelerated rendering.
And then that frame buffer is composed into a frame and that gets sent to the display at a certain time.
The timing of sending images from the frame buffer to the display
has a little bit of complexity,
because the display is doing its own thing,
and the CPU is doing its own thing,
and if you don't coordinate that properly,
you end up seeing strange artifacts on screen
which you don't want to.
That depends on how much
complexity you have in the
thing you're displaying. If you're doing something very static
then usually you don't have to worry
about that, but if you have a lot of animation
or changing data or
numbers, then you get into the complexity
of, oh, I don't want to update the
screen while I'm still updating
the data that I want
to put on it for the next update.
Yeah, exactly.
And that assumes you have a frame buffer on
your LCD and that your micro isn't
acting as frame buffer and having
to send things every
update.
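The coordination problem above is usually solved with double buffering: draw into a back buffer, and only swap at vsync so the display never scans out a half-updated frame. A minimal sketch, with made-up class and method names:

```python
# Toy double buffering: draw into a back buffer, swap at "vsync" so the
# display never sees a half-updated frame. Names are illustrative.

class DoubleBuffered:
    def __init__(self, size):
        self.front = [0] * size   # what the display scans out
        self.back = [0] * size    # what the CPU draws into

    def draw(self, index, value):
        self.back[index] = value  # partial updates never touch `front`

    def vsync_swap(self):
        # Swap references at the vertical blank: O(1), no copying.
        self.front, self.back = self.back, self.front

buf = DoubleBuffered(4)
buf.draw(0, 1)
buf.draw(1, 2)
print(buf.front)  # still [0, 0, 0, 0]: the display never sees a partial draw
buf.vsync_swap()
print(buf.front)  # [1, 2, 0, 0]
```

The cost is a second buffer's worth of RAM, which is exactly the trade-off that makes this hard on small micros.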
Yeah, so displays
typically work in one of two configurations. The simplest one
for the display is basically a mode that emulates what all CRT displays did, which is you have a V-sync signal and a horizontal sync signal and pixels, and you send stuff to the screen and that gets reflected immediately into pixel information. An evolution of that is displays with graphics RAM on them, where the microcontroller can send information at a given time, and then that gets displayed as pixels after a certain refresh period.
Yes, if your datasheet says you have to use Hsync and Vsync,
run away, run away very quickly.
Well, I mean, you have to often because it's cheaper,
but it is a lot simpler in the code
to have something that has onboard RAM.
But you pay for it in your processor
because now your processor is doing all of that.
Everyone knows that firmware is free. So that's why they're typically used, to exchange
hardware complexity for firmware complexity.
So that's how you would deal with a bare-metal system. But you worked on this thing, what's it called? SDL? Simple DirectMedia Layer? Which of course
has the word simple in it, so
it should be simple, right?
It's anything but.
It's simple
for the user. It's quite simple for the
user, actually. It's not
simple in its
implementation.
What is it?
First, if
somebody hasn't heard of it,
what is the general description?
So, let's say,
let's start with the simplest case,
right? You have a frame
buffer on an RTOS
and you create your
rendering engine, right? So,
So you tailor everything for that RTOS.
Your whole rendering pipeline is oriented towards having
just one framebuffer that you can directly access,
and everything works nice.
Your input comes from interrupts or whatever.
Then imagine that you want to port that
with a minimal effort to, I don't know,
a desktop simulator on Windows or on Mac.
Then all your rendering functions basically stop working
because Windows uses its own API for rendering
and for window creation and for input,
for events, etc. And Mac uses a different scheme, Linux uses a different scheme, Android doesn't even have a main loop that you can use.
So what SDL does is abstract all of that into a single API slash application philosophy.
So instead of having to worry about
how you create a window on Windows or Mac or Linux,
you call SDL_CreateWindow,
and everything is magically solved internally.
You don't need to worry about
how the main loops of the OS work.
You don't need to worry about how to read the keyboard
or the gamepad or the mouse.
Everything comes as events in the same way across all platforms.
So it's basically a hardware and OS abstraction layer.
Like Qt?
To some extent, but Qt is much more;
it's more of a total application
framework so it has networking
and all kinds of stuff
language extensions whereas this is
this is in C it's more
directed toward
input and graphics
do I have that right?
yeah so I don't know how Qt abstracts platforms internally,
but you could have, for example,
a Qt port that works on top of SDL if you want it.
SDL provides the bare minimum layer
to abstract window creation, input,
OpenGL context creation, and that's it.
Qt provides a whole bunch of other stuff on top of that.
Okay, how do these relate to OpenGL and Direct3D?
So after you create a window,
if you want to render using GPU,
you need something that's called an OpenGL context,
which is basically a way to reference what you're doing
for the hardware to know that you're the one issuing instructions
and which texture you own, etc.
So the way you do that is different on each platform.
Creating an OpenGL context is not really part of the OpenGL specification,
but part of the window manager slash OpenGL integration.
So SDL abstracts all of this,
and it manages the Windows way of doing it,
the EGL way of doing it, the Mac way of doing it.
And you just have to call SDL_GL_CreateContext or whatever it's called,
and you get a context and don't have to worry
about any of the internal details.
And it also has a neat way to,
in OpenGL, you need to load the function pointers
from the library if you don't want to link statically,
and it helps you with that as well.
It's not the only library that does this,
but it's a nice utility that it provides.
How does all of this...
So I know you're an expert in SDL,
because I know you've contributed to it.
It's an open source project.
How does this all relate to embedded systems?
It doesn't.
So SDL doesn't really have support for an RTOS, right?
So if we equate embedded systems to having an RTOS or not having an RTOS at all,
SDL doesn't really apply here.
But there are a growing number
of small wearables
that use Linux.
And if you want to, for example,
develop an app and not have to
download it constantly to the
device to try it, you can
develop an SDL app.
You try it on the desktop.
You use the OpenGL ES graphics stack.
And what you see on the desktop
is going to be exactly or very close
to what you see on the device.
And so that cuts down development time by a lot.
And there are many devices doing Linux embedded.
Yes. If I wanted to have something that was doing Linux and was making graphics... I mean, I guess, you know, when I do embedded Linux,
I just plug in an HDMI screen and it all just kind of works.
But that's not what you're talking about.
Let's say,
can we consider the Raspberry Pi to be an embedded platform?
Yes.
Yeah, okay.
So let's say you want to develop for a Raspberry Pi and you don't want to compile everything on device
because it takes forever.
Yeah.
Yeah.
So you cross-compile.
And then once you cross-compile,
you have to download it to the Raspberry Pi,
try it, run it,
try it with whatever input you've managed
to connect to the Raspberry Pi
and see what it looks like.
Or what you can do is
have what's basically
that same app on your desktop,
build it, try it there,
mess around with the UI
as much as you want, and once you're
settled with that, you know
that SDL is going to abstract
most of the differences
in hardware, because your desktop has
a different GPU, and it probably has a different resolution, etc.
And so you try using that,
which is basically pretty much the same code,
build it on top of SDL and OpenGL ES,
and that should run on the Raspberry Pi,
mostly unmodified.
So this doesn't mean that you cannot develop on the Raspberry Pi directly or that you shouldn't.
It saves time, especially if you're doing a complex project.
It's also nice to be able to give a simulator to your designers
and say, well, this is what it's going to look like on the thing.
And then you don't care if the designers are using Mac
or Windows or Linux,
and you still get a reasonable facsimile
of what their version of graphics is going to look like
on the embedded version of graphics on the device.
Yes.
And if, for example, tomorrow a competitor to the Raspberry Pi shows up
and the software stack that they provide doesn't have a window manager,
but it has direct hardware access.
For example, most of the Vivante GPUs have that.
So SDL supports Vivante's GPUs directly,
and it can create windows on the hardware directly
without having to go through a window manager.
So your code runs on that.
It's a different GPU, a whole different system,
and you don't have to change anything for that to work.
And of course, if you're doing things like games,
you support multiple platforms out
of the box with minimal effort.
Yeah, when I looked into SDL, one of the things that people were saying about it was that it's a game engine. I'm like, I don't think that's right.
But it's not a game engine. It's a platform and OS abstraction layer. You still have to
make your own engine on top of that.
But SDL makes it simpler because you can make a game engine that is mostly cross-platform because you don't have to worry about the display systems.
The display and the input and many other things. Yeah, sound as well.
In addition to that, you did a database thing with SDL. Let me see if I've got this right: community-sourced database of game controller mappings to be used with SDL2 game controller functionality. I don't think all those words go together.
So, one of the things that SDL added quite recently, or maybe in the last four years, is a way to abstract game controllers.
So if you look at different game controllers, the most popular ones are the Xbox and PlayStation ones.
And then there's a whole myriad of buttons and things that if you're a game developer, for example,
and you want to map all of that variety
to something consistent on your game,
you're basically facing a titanic task
because you need to know what possible things
the user may have in their home
that they'll attempt to use on your game
and map that somehow to the game.
One of the most common solutions
is to have some kind of mapping functionality in the game
where it says, okay, press the button to jump
and you press something and then that gets recorded.
The database that you're referring to
is a way to formalize all that.
So SDL supports mapping of many controllers
to an Xbox controller format.
And people around the world have been contributing
those mappings that they do with the controllers
that they get their hands on.
And we aggregate all of that into this database.
So when anyone wants to ship a game
and support a whole bunch of controllers,
besides the ones that SDL supports out of the box,
which is a fairly minimal amount,
they load up this database and they can support,
I don't know, the last time I checked,
it was like 300 controllers.
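Entries in that community database are comma-separated lines: a GUID, a human-readable name, then key:value bindings from abstract buttons to physical inputs. A rough parser sketch, with a made-up sample line in the database's general shape:

```python
# Rough parser for a controller-mapping line in the general shape of the
# community database: GUID, human name, then key:value button bindings.
# The sample line below is made up for illustration.

def parse_mapping(line):
    fields = [f for f in line.strip().split(",") if f]
    guid, name = fields[0], fields[1]
    bindings = {}
    for field in fields[2:]:
        key, _, value = field.partition(":")
        bindings[key] = value  # e.g. "a" -> "b0" (physical button 0)
    return guid, name, bindings

line = "03000000aabbccdd00000000,Example Pad,a:b0,b:b1,leftx:a0,platform:Linux,"
guid, name, bindings = parse_mapping(line)
print(name, bindings["a"], bindings["platform"])
```

A game then asks for the abstract "a" button and the library translates it to whatever this particular pad reports, which is the aggregation the guest describes.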
Mentally, I have this picture as though I was using Code Composer
and switched out the key bindings to match my Visual Studio Code.
Is that a good mental model, or is that just totally...
Yes, that's exactly what it is, yes.
A mental model of somebody who doesn't play enough games.
That's true.
Or plays with keyboard and mouse.
No, I have the VR system.
I play with my feet.
So this database thing and the SDL things you do,
and it's all open source,
and how do you get paid?
You don't.
It's an investment, basically.
So the story is that I started, with some friends, trying to make a game. To make a game, of course, the first thing that they tell you is, if you're making a game, don't make your own engine. So what did we do? We started making our own engine, of course.
The engine was based on SDL, and SDL had a number of shortcomings that were angering me,
so I started contributing there.
And one thing led to another,
and I ended up working on a company in the US
that basically found me through my open source contributions.
The reason why I did those contributions is to learn and also to get some
recognition.
It's a way to build up your resume
when you don't have specific
expertise in the area that you want to work in
probably
there's a lot of paid open source
development but I haven't been
involved in that
everything's been voluntary for me
cool
I have some listener questions some of which you've seen
because you're in our Slack channel
let's see
how about
The secret is out.
Many people are on our Slack channel.
And if you want to be on our Slack channel and you're a listener,
all you have to do is contribute a buck or two on Patreon,
and then you get the link, and then you can say hello.
Okay, so Nick, Nick the Exploding Lemur, asked if you have any tips for hunting down processors with stable graphics drivers
and or easily acquired graphics engine specs and open drivers?
So the state of GPU drivers is very bad.
The only ones I know of that are open source and quite good actually
are the Raspberry Pi one, which is, I think, a work in progress.
And then on desktop, you have the Intel ones.
Besides that, everything is behind three layers of DRM and NDAs.
And Chris knows about this.
You talk to one vendor, and they talk to the other vendor,
and you finally get to the one that sells the GPU IP
and nobody gives you any internal details
unless they're probably talking from VP to VP level.
It's very complicated.
And of course, they're doing super secret stuff.
So I can understand them not sharing.
No, it's all the same.
Everyone's doing the same stuff.
What about below that? I mean, that's GPU stuff, but in the microcontroller world, for hobbyists, or people, you know, thinking about a Kickstarter project, there's not a lot of devices that help out much with graphics in kind of the 2D space.
Yeah, I've seen a couple of ST chips that have a 2D accelerator, which is essentially a DMA engine that can do 2D transfers, where you can specify an area and copy it to another area. A bit above that is an engine that can do scaling and maybe rotation in 2D, but that's pretty much it.
The other issue with these kinds of implementations is that they're very ad hoc. So each manufacturer does their own thing, and there's absolutely no standards. And
maybe many of those implementations
are not very distributed
so expect plenty of
bugs if you're doing that
and it's a
huge security risk
in a way because you're moving data
around
behind the
CPU protection mechanisms,
so you can easily shoot yourself in the foot if you're not careful.
When you talk about moving data into frame buffers and being able to scale it,
some of that has to do with fonts,
and having a font in flash and then moving it to the screen, or having a letter
in flash and moving it to the screen, because the font is a series of letters.
How do you deal with fonts? Do you have a favorite font? Do you prefer to do
vectors and take the computing overhead? Do you usually use bitmap fonts?
What do you want to do?
So, yeah.
Vector fonts are
ideal. They usually
cost you
just a little bit above
what you can spend on embedded systems
in terms of RAM, mostly.
Because they hover around
100k, and 100k is probably just outside of your budget
in many embedded systems.
It's basically a compression mechanism, right?
You have the vectors in a file,
and when you want to render something,
you rasterize these vectors into a bitmap
and cache that, probably, somewhere, and use that.
So if you are using bitmap fonts, what you're doing is most of the same process, but offline.
You start from a font and you render the bitmaps, and you store only the bitmaps in your flash memory.
the issue with that is that you cannot really change your text size,
and you need to have space to store all of the glyphs
that you want to render,
which, if you're supporting things like Chinese, for example,
is not a trivial amount of space.
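The offline bitmap-font approach can be sketched with tiny hand-drawn glyphs blitted into a framebuffer; the 3x5 bitmaps and function names below are illustrative only, not any real font format.

```python
# Toy bitmap font: glyphs "pre-rendered offline" (here, hand-drawn 3x5
# bitmaps) and blitted at runtime. A real font stores many more glyphs,
# and this is exactly why glyph storage adds up for large scripts.

GLYPHS = {
    "H": ["1.1",
          "1.1",
          "111",
          "1.1",
          "1.1"],
    "I": ["111",
          ".1.",
          ".1.",
          ".1.",
          "111"],
}

def draw_text(text, fb, x0=0, y0=0):
    """Blit each glyph bitmap into fb; advance by glyph width + 1."""
    x = x0
    for ch in text:
        glyph = GLYPHS[ch]
        for row, bits in enumerate(glyph):
            for col, bit in enumerate(bits):
                if bit == "1":
                    fb[y0 + row][x + col] = 1
        x += len(glyph[0]) + 1
    return fb

fb = [[0] * 16 for _ in range(5)]
draw_text("HI", fb)
for row in fb:
    print("".join("#" if p else "." for p in row))
```

Note the limitation the guest mentions: there is no way to change the text size here short of storing a second set of bitmaps.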
I think most of the Cortex-M
processors can render
TrueType fonts correctly
if you have the RAM.
So if you have above, I'd say,
256k of memory,
you should probably give
TrueType fonts a go.
They're easy to work with.
See, I always heard that they were really hard.
I kind of avoided them.
You need a library.
Yeah, you have two open source
libraries, FreeType
and stb_truetype, which is actually
one of those single-header
libraries that Chris likes so much.
So you just use that and
it gives you a bit of whatever you want.
Okay, good question from Ben, which was initially phrased, why do the displays at the gas station take five years to update for every key press? And then it was changed to, what are some common mistakes made during embedded graphics programming?
How much time do we have?
About 35, 40 more minutes?
We have MQTT, too.
Oh, right. Okay. Less time.
I think the thing that people don't understand,
the most common misunderstanding about graphics is that graphics is a bandwidth bound operation, right?
You do have some CPU processing and GPU processing,
but the first limit that you're going to hit is going to be the bandwidth limit
because it's mostly about transferring information.
So in the case of very slow displays in gas stations or ATMs or whatever, what usually happens
is that they choose a processor for price reasons or business reasons or whatever. And then they make
a totally separate decision about the resolution based on what displays they can get or what resolution, what visual impact
they want to have on the user.
And then those things are completely separate and they usually don't match up.
And then that's why you end up with those ugly low frame rates that you typically see.
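That mismatch can be put in numbers: refresh bandwidth scales with resolution, color depth, and frame rate, independent of which processor got picked. A back-of-the-envelope sketch (the function name is made up):

```python
# Back-of-the-envelope display bandwidth: resolution, color depth, and
# frame rate set a floor on bytes/second the bus and memory must sustain,
# before any actual drawing happens.

def display_bandwidth(width, height, bits_per_pixel, fps):
    """Bytes per second needed just to refresh the framebuffer."""
    return width * height * (bits_per_pixel // 8) * fps

# A common 480x272 panel at 16 bpp and 60 fps:
bw = display_bandwidth(480, 272, 16, 60)
print(bw)             # bytes per second
print(bw / 1_000_000) # roughly 15.7 MB/s sustained
```

If the chosen micro's bus can't sustain that, the only knob left is frame rate, which is the sluggish gas-pump display.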
And this is one of the reasons that we talk about DMA when we talk about graphics, because it is all about moving things from here to there as fast as possible and still being able to do other stuff.
Yep.
Are there other common mistakes?
The other common mistake, which is related to probably one of the other questions that I saw on the Slack channel, is across products.
The resolution is evaluated as a new thing all the time.
So you change products and you change completely the resolution because, again, business reasons or availability or whatever. And then you start on your second product,
you start maintaining two sets of assets
which don't really mesh with each other.
And on the third iteration,
you start handling three types of aspect ratios
and it all gets complicated.
So if I could make a choice
across an entire set of products
I'd say stick to the same resolution
or at least stick to the same aspect ratio
on your screen
that's a hard problem even when you're not doing
a small embedded system
even if you're doing something with Linux
or Apple's had this problem, right?
Oh, we came out with a new phone and now all the developers have to go and regenerate everything.
And there's no good way to make this automatable because things don't look good when they're
scaled or when they're automatically laid out. So yeah, it's a big problem every time.
When you look at movies, I mean, you have different scaling on whether you're watching
it on TV with the black bars or whether you're in the movie theater or whether you're watching an old movie or a new movie.
The aspect ratio and resolution, that's just a never-ending problem.
Yep.
And trying to decouple it in your code is a great idea, but as Chris said,
it looks different.
So you can't just say,
oh, well, we're going to #define that out.
It's not, it doesn't work like that.
Yeah.
So there's a few techniques
that came mostly from the web development world
because web pages are subject
to a bunch of resolutions, and they need to look, from basically the same set of assets and code, the same or roughly the same.
So there are techniques to minimize that, but they cost in terms of firmware development and
CPU processing power, so they're not always feasible on embedded systems.
Well, let's go on and talk about MQTT, because there were some questions about that,
and I heard you have been doing some home automation.
Yes. So I must warn you that MQTT is so great that I'm willfully ignorant of all its internal implementation details. I enjoy it as an end user, mostly.
Well, what is it?
It's a communications protocol
typically associated with IoT devices.
It runs on TCP/IP
and it follows publish/subscribe
principles.
Okay, so you've already said TCP/IP, and I'm like,
oh, well, I'll just take out all my micros,
because that's heavyweight to start with, and then this is on top of it?
It's the Internet of Things.
If you don't have TCP/IP, you're just a thing.
I'm okay with that.
There's a variant that I'm not really familiar with
that doesn't run on top of TCP/IP.
But, I mean, we're talking something that can do at least Wi-Fi or Ethernet and have a TCP/IP stack.
Okay, so it's on top of TCP/IP.
So guaranteed transmissions as long as you have the path between them.
And publish, subscribe.
So your widget, your IoT thing probably publishes the data
and then some server-like thing subscribes.
How does it work?
So you need to have one broker.
I use Mosquitto; there's a bunch of them. It's like the hub, the central point of the MQTT network.
and then everything else is a client and each client can subscribe or publish information. The process is you publish something with a topic and a payload.
And the other clients can subscribe to a topic.
And when you publish to that topic,
they get the payload that you transmitted.
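Topics are hierarchical (slash-separated), and subscription filters can use the wildcards `+` (exactly one level) and `#` (the rest of the topic, last level only). A sketch of the matching rule, written from the protocol's behavior rather than taken from any broker's code:

```python
# MQTT-style topic matching: '+' matches exactly one level, '#' matches
# the remainder of the topic (and must be the last level in the filter).

def topic_matches(filter_, topic):
    f_parts = filter_.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True          # multi-level wildcard swallows the rest
        if i >= len(t_parts):
            return False         # filter is longer than the topic
        if f != "+" and f != t_parts[i]:
            return False         # literal level must match exactly
    return len(f_parts) == len(t_parts)

print(topic_matches("home/+/temperature", "home/greenhouse/temperature"))  # True
print(topic_matches("home/#", "home/door/front/state"))                    # True
print(topic_matches("home/+/temperature", "home/greenhouse/humidity"))     # False
```

This is what lets one Home Assistant subscription like `home/#` pick up every sensor in the house.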
It's geared towards IoT devices.
So the whole my-device-is-offline case is baked in; you have messages for availability.
So whenever you come online, you say I'm available,
and whenever the broker sees that you've gone offline
because you don't respond,
it sends automatically a message saying this device is offline.
The broker can retain messages.
So, for example, if you want to talk to a thermostat
and have a high-temperature setting,
you publish, I don't know, 80F.
I want the setting to be 80F, and retain that,
so whenever the thermostat comes back online,
the broker communicates that information automatically.
And the other piece of technology that it has is a QoS setting, a quality of service,
which says if you want the message to be transferred once or if you really want to make sure that the message is received by other clients.
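The retained-message behavior can be sketched with a toy in-memory broker (exact-match topics only, no wildcards or QoS); `TinyBroker` is illustrative of the behavior, not of how Mosquitto is implemented.

```python
# Toy in-memory broker showing retained messages: a subscriber that
# arrives after a retained publish still receives the last value,
# which is how the offline thermostat gets its setpoint on reconnect.

class TinyBroker:
    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks
        self.retained = {}     # topic -> last retained payload

    def publish(self, topic, payload, retain=False):
        if retain:
            self.retained[topic] = payload
        for cb in self.subscribers.get(topic, []):
            cb(topic, payload)

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)
        if topic in self.retained:  # late joiners get the retained value
            callback(topic, self.retained[topic])

broker = TinyBroker()
broker.publish("thermostat/setpoint", "80F", retain=True)

received = []
broker.subscribe("thermostat/setpoint", lambda t, p: received.append(p))
print(received)  # ['80F'] -- delivered even though we subscribed late
```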
Okay, so Mosquitto is the broker and it is sort of the hub.
Yes.
What do you run that on?
Do you have like a home server
or are you running that on like a Raspberry Pi sort of thing?
So at home, I have a Hass.io distribution,
which is the Home Assistant distro that runs on the Raspberry Pi. It's
centered around containers, and
on one of the containers, you run Mosquitto, and everything
talks to that. And Home Assistant talks to that as well, so everything connects
through the Raspberry Pi. And what do you have for
the devices?
Oh, devices, it's mostly ESP8266 NodeMCUs for the doors and lights,
that kind of thing.
Did you say Node?
Yeah, it's called NodeMCU.
Like Node.js?
No, no.
You don't have to install 10,000 dependencies for it to work.
It's called NodeMCU, and out of the box it runs Lua,
so nothing makes sense.
Okay, what are you building with this?
When I first heard of this, you were going to do one little thing.
Yeah, I had a greenhouse temperature control.
Now I have two doors, lights, the home alarm,
and the heating system of the house connected there.
It's pretty neat because you program everything in MicroPython,
or if you're doing something a bit more
intensive, you use PlatformIO and
the Arduino
framework, and it's just
plug and play mostly. It's a
very satisfying way of
working. I don't even
know how you install the tool chains
for this. Everything is
click, click, click, click here and everything
works magically
It's amazing.
What about power? I mean, if you're alarming doors, if you have door alarms, don't those just run on batteries? The ESPs are okay, but they're not like long-term battery devices.
The ESP for the alarm is connected to the alarm battery,
which is like a huge brick that can last a few days.
Yeah, that's no issue.
And also I've been worrying about battery life for four years,
so I don't care anymore.
MQTT comes in flavors.
Do you have to worry about any of that, or does it all just kind of poof works?
It magically works, yeah.
I don't think there's versioning, at least.
So since I'm building everything
basically from the same source code,
everything works.
I don't know if I bring a binary from somewhere else
into my home automation,
which doesn't seem like the best idea in the world,
probably won't work.
I don't know how it manages versioning, to be honest.
You make this sound very easy.
It's surprisingly easy, yeah.
On Python, you just create a client
and issue mqtt.subscribe
to some topic
and you start getting callbacks
and that's all you have to do, it's three lines
It's funny, as you talk about the architecture
of it, with the publish
and subscribe and the broker,
it sounds very much like the Robot
Operating System.
okay, but going on
to the next subject in my long list of things to ask Gabriel about.
Where are you based?
I'm in Argentina in a city called Mar del Plata, 400 kilometers south of Buenos Aires on the coast.
And I have met you in person near San Francisco.
How many hours did you have to travel to get to San Francisco?
So there's no direct flights to San Francisco from Argentina.
And there's definitely no direct flights to anywhere from my city.
So I had to take a bus to Buenos Aires
and then you typically go to LA
if you're lucky and then from there to SF or San Diego.
So it's about 20 hours, 22 hours
if you're lucky. So you don't commute regularly?
No, my commute is from my bed to my desktop.
How do you handle that? How is working remotely for you, and how do you make it work and not be Distractionville? I mean, it's a nice deal.
It's not easy to get started doing this. It has all the challenges of working at home, of focus, of doing the things. And especially working in hardware, being in Argentina, it needs to go through customs; that's a whole ordeal. And then
the companies that you work with
need to be remote
oriented in a way
they need to be prepared to have
remote workers and have the
philosophy required to do that
but that's probably
not different
from the work that you do there
being remote 100 km or 1,000 or 10,000
is the same in terms of human interaction.
The things that get more complicated
is when you jump across countries
and you're doing hardware projects.
I started doing software projects,
basically web development initially,
because it's much easier to work like that
against the server on the Amazon cloud,
not in Brazil, but in Seattle.
It's harder if you depend on hardware that maybe breaks
and you need to wait a week or two to get a replacement back.
Do you have any advice for people who want to work remotely in general?
And then do you have any advice for people who want to work remotely
internationally like you're doing?
I think the first thing, if you want to work, for example,
from Argentina for U.S. companies, is just get started.
Just make an account on Freelancer or Elance or oDesk
or whatever it's called now,
and just get gigs doing PHP or JavaScript work,
and it's going to pay very poorly.
Just do that for some time to get used to working remotely.
You're not going to get
the most amazing jobs doing that
but you're going to create
a resume and work that
you can point to maybe more
interesting employers
saying
to them basically I'm responsible
I can manage my own time
and I can get things done
and then after that is the same as for any work.
You need to establish relationships with people,
and people need to know you and know what you can do.
And if you cannot get into an area that you want,
then look for open source projects that you can contribute to
as a way to prove that you can get the work done.
Are there any other pitfalls that you've come across
just in terms of mechanics of things like shipping?
I know that sometimes shipping hardware can be a pain
because it gets either confiscated or stuck in customs.
Are there ways to suggest to the companies you work for
to smooth those things over?
Or is that just kind of the cost of doing business?
Yeah, in Argentina in particular, FedEx is the...
Can we advertise companies?
Oh, yeah.
FedEx, please send us a check.
Yeah, exactly.
Give us a couple of free shipments.
It's the best way to send things.
They typically take a week to get stuff here,
and it usually doesn't get bothered by customs
if you stay within the courier weight and price limits.
And then if you go beyond that,
then you start getting issues, right?
But that's like a logistics problem
that's not really in your hands to solve or to do anything about.
I'd say that the most difficult thing working remotely,
probably for you it's the same,
is getting people to acknowledge you and keep you in the loop on things.
When you're not physically present and talking face-to-face with people,
the relationship changes.
Especially when there's a time zone change.
Yeah.
So if you can visit in person every now and then,
that smooths things a whole bunch too.
And I found that working remotely has helped with some jobs.
My lab is pretty good compared to many of the startups I work with.
My office is extraordinarily quiet compared to many of the offices I visit.
And when I work remotely, we get better documentation.
Part of that's just because I document things, and I like to,
and I like to have them documented because I forget things very quickly,
and the documents help me.
But also the documents help them, whether they realize that or not.
I realize it when somebody
emails me from a client I worked with
two years ago and says, I thought
this would be impossible, and then I read your document and realized
I only had to uncomment
one line, and it was all good.
I mean, that sort of thing
is awesome. Also, you cannot
nap if you're working in an office, right?
That is not true.
Yeah.
It's just less easy.
Yeah.
So if you're not feeling well,
if you feel like you're blocked by a problem,
it's easier to just get up and take a walk or whatever,
clean your head and come back,
concentrate on the problem, really think
it through.
Especially the open office plans that have become typical in the US are very distracting.
And at least for me personally, mostly because I'm not used to them, it cuts my productivity
a lot when I'm there.
It's hard to concentrate.
I totally agree.
And working as a consultant, being able to clock out and go for a walk is just so valuable.
Yeah, because it also benefits the company, because your brain really doesn't stop processing what you're doing.
But you can't really bill for taking a walk.
At least I don't. Maybe I should.
But when you're doing that, you're really
thinking about the problem that you're facing
in the background.
Yes.
Very much, yes.
Going back to graphics,
Chris, you had another couple of questions.
There was one in the list here that I think we skipped over.
It was kind of the crux of the difficulty of graphics on embedded systems.
If you're doing something moderately complicated, there's the dance between updating the frame buffer and syncing with updating the screen. I'd just like to talk about that a little bit,
to give people a sense for, okay, if you really want to do something that maybe has animation
that's fluid and doesn't have tearing artifacts, how do you go about architecting your system?
First you have to describe what tearing means.
Oh, tearing would be, so, if you're updating your framebuffer out of sync with how
the display is perhaps
reading it out, you might
have half of your screen updated
in the framebuffer and half old data
and then the display might read it out and put it
on the display and then you'd have this
half-updated screen which
since oftentimes
you're moving something, it would look like
you've got a big tear mark,
a line across the screen.
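Tearing is easy to see in a toy model. Here's a small Python sketch (the sizes and names are invented for illustration, not from any real display API) where the scanout happens while the CPU is only halfway through drawing the new frame, so the captured image mixes new and old rows:

```python
# Toy single-buffered display: 8 rows, one value per row.
# The old frame is all 0s; the new frame should be all 1s.
ROWS = 8
framebuffer = [0] * ROWS

# The CPU gets halfway through drawing the new frame...
for row in range(ROWS // 2):
    framebuffer[row] = 1

# ...when the display controller scans the buffer out, top to bottom.
scanned_out = list(framebuffer)

# Half new rows, half old rows: the boundary is the visible "tear" line.
print(scanned_out)  # [1, 1, 1, 1, 0, 0, 0, 0]
```

In a real system the scanout timing is driven by the display controller's pixel clock, not by your code, which is why the fix is to never draw into the buffer that is being scanned.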
So how do you solve that, right?
It's a, I don't know if we can reveal this, Chris,
but it's an ancient secret passed down
from generation to generation of firmware engineers,
and it's called double buffering.
Yes.
Sorry, I gave it up.
I'm sorry.
I'm going to return my firmware engineer badge.
So what you do typically is you have,
instead of only one frame buffer, if you can afford it,
you have two frame buffers of the display resolution.
And while you're building your frame with the CPU or the GPU on one,
you are displaying the other one on the screen.
And to switch between the frame buffers,
what you do is you wait for a signal from the display,
which is most likely the vsync signal,
which indicates that the display has started scanning the frame buffer.
So once it has started scanning the frame buffer,
it will read all the pixels from the active frame buffer,
and the pixels that it has read
will no longer be in use.
So that's the moment when you can switch frame buffers.
And the active one becomes the back buffer,
and you can start drawing your new frame there
while the display scans out the front buffer.
And so this is a ping-pong buffer system
where when ping is active, in use by the LCD,
you write to the pong buffer,
and then you wait for the signal that says to swap.
And then the pong buffer is being used by the LCD, and you write to the ping buffer.
Yeah. Yes.
And if you've only got an embedded frame buffer to play with, usually they only have enough on there for one frame,
right? They don't have double buffering. So you might have to add at least another buffer in your RAM.
Well, how does that work?
Sometimes you can't always read out the LCD's frame buffer.
That's usually kind of exciting too.
And so you have to keep a shadow of what they have.
Right, right.
It's usually very slow to read the frame buffer in the display itself.
So when the vSync comes,
you send the active frame buffer in a one-way
mode and then start
working with the information that you
had in that framebuffer.
Then there are tricks if your
performance is not enough
or you don't have enough
memory space for two
framebuffers, you do partial
updates of the framebuffer. So
you try to minimize the amount of
pixels that you change
and then maybe you can get it
fast enough to
draw at the frame rate that
you want to draw at. Maybe not 60Hz
but maybe 30Hz
without seeing the
disturbances on the screen.
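The ping-pong scheme described above can be sketched in a few lines of Python (a toy model; the class and method names are mine, not any real display or SDL API): draw only into the back buffer, and swap which buffer the display scans out only when the vsync event arrives.

```python
ROWS = 4

class DoubleBuffered:
    """Toy ping-pong double buffer: draw to back, swap on vsync."""

    def __init__(self):
        self.buffers = [[0] * ROWS, [0] * ROWS]
        self.front = 0  # index of the buffer the display scans out

    @property
    def back(self):
        return 1 - self.front

    def draw_frame(self, value):
        # Build the entire new frame in the back buffer; the display
        # never sees a half-drawn frame, so there is no tearing.
        for row in range(ROWS):
            self.buffers[self.back][row] = value

    def on_vsync(self):
        # The display has started scanning; it's now safe to swap roles.
        self.front = self.back

    def scanout(self):
        return list(self.buffers[self.front])

db = DoubleBuffered()
db.draw_frame(1)
print(db.scanout())  # still the old frame while we draw: [0, 0, 0, 0]
db.on_vsync()        # swap at vsync
print(db.scanout())  # the complete new frame: [1, 1, 1, 1]
```

The key invariant is that `draw_frame` only ever touches the back buffer, and the front/back roles only change inside the vsync handler.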
Do you have any test methodologies?
When I had to do an LCD
graphics system,
I found later that I wished I had
better unit tests
to start with involving lines
and squares.
Do you have a good way to figure out
if you have
written your graphics system correctly?
It's very hard to test.
I wish I had a solution for that.
Usually graphics have some form of user input involvement.
So, yeah, there are painfully involved ways of testing things, but
I couldn't volunteer a solution
for that.
Usually you accumulate a series of visual tests
that you can run through.
They're basically recordings
of all of the errors that you have had to deal with
because each test...
You start out with, this is what I think
this test should cover, this functionality,
and it looks good,
but you forget some corner case.
Or usually it's an actual corner.
Exactly. And so you add that, you
update the test, and so you end up with a suite of things that you can do. But like Gabriel said,
it's very hard to automate that because it's a visual thing. So, I mean, you can kind of take screenshots
and compare them against known good results,
but that's all very painful.
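The screenshot-comparison approach can be sketched as a tiny harness (hypothetical names; real framebuffers would be much larger and the golden frame would be loaded from disk). A small tolerance keeps one-pixel rendering differences from failing the whole suite:

```python
def frames_match(actual, golden, max_diff_pixels=0):
    """Compare two flat pixel lists; pass if at most max_diff_pixels differ."""
    if len(actual) != len(golden):
        return False
    diffs = sum(1 for a, g in zip(actual, golden) if a != g)
    return diffs <= max_diff_pixels

# Hypothetical golden frame recorded from a known-good build.
golden = [0, 0, 255, 255]

assert frames_match([0, 0, 255, 255], golden)                      # identical: pass
assert not frames_match([0, 255, 255, 255], golden)                # regression: fail
assert frames_match([0, 1, 255, 255], golden, max_diff_pixels=1)   # within tolerance
```

Each time a visual bug is found and fixed, its golden frame joins the suite, which is exactly the "recordings of all the errors" pattern described above.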
Yeah, it's a tough problem.
I remember writing the word red and green and blue in each color
and then realizing I hadn't written them in the correct colors
because I wasn't sure which was R and G and B on my display.
That's a classic problem, mixing up the component order.
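The component-order mix-up is easy to demonstrate with 16-bit 5-6-5 pixel packing, a common embedded LCD format. If the panel actually wants BGR order and you pack RGB, pure red comes out pure blue (the helper names here are mine, for illustration):

```python
def pack_rgb565(r, g, b):
    """Pack 8-bit channels into 16-bit 5-6-5, red in the high bits."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def pack_bgr565(r, g, b):
    """Same 5-6-5 layout, but blue in the high bits (some panels want this)."""
    return ((b >> 3) << 11) | ((g >> 2) << 5) | (r >> 3)

red = (255, 0, 0)
# On a BGR panel, an RGB-packed red lands in the blue bits:
assert pack_rgb565(*red) == pack_bgr565(0, 0, 255)
print(hex(pack_rgb565(*red)))  # 0xf800
```

Green survives the swap unchanged because it sits in the middle bits either way, which is why the bug only shows up once you draw red or blue.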
Yeah, when you have black and white,
which a lot of times you start out with black and white,
it doesn't matter.
But then when you start doing animations for your team,
suddenly it's important that red is red and not
green.
Well, here's a question for you that the folks on the Slack might have asked, but they
didn't. There's a lot of different display technologies out right now, and if
you're building a hobby project or starting some small product, it might not be clear what the best thing to choose is. There's OLED, there's
LCD, there's other things, matrix lights. LEDs are actually pretty popular right now.
And there's drawbacks to all of these, right? Do I need to list the drawbacks?
Yeah, tell us first the drawbacks of OLED,
because that may be important to me soon.
Oh, that's a pet peeve of mine.
So OLEDs die over time.
They're like little gnomes that emit light,
and they get older and older each time you make them emit photons.
And the more you make them emit photons,
the more quickly they die.
So, yeah, that's a fun fact that I wasn't aware of
when I started working with OLED displays.
And people like to choose them because they're super bright,
and even the monochrome ones look really, really engaging.
But really, we're just killing little gnomes inside.
Just murdering thousands of gnomes.
Yes, that's scientifically proven.
Yeah, look at them under a microscope.
Just enable kamikaze mode and you'll see them in the little airplanes.
Very bright.
But the nice thing about OLED is your power draw is
proportional to how many pixels you're actually lighting.
Right. Yeah, so each pixel
is like an individual light, instead of the LCD system, where you have a backlight and
you're basically filtering the backlight to produce
colors. So the contrast appearance is much nicer, because the black
is really black. There's no light coming where the black pixels are. And then color...
I've heard the gnomes die at different rates.
Yeah, blue dies especially fast.
Not sure why.
They're probably much larger
because we perceive blue with a lower intensity.
So to produce the same perceived brightness for blue and green,
for example,
the blue one needs to make much more effort.
My story is that we haven't had blue
LEDs for that long, so we're just not really good at making
them. Yeah, that's
probably true as well.
Or maybe the other
pixels don't like blue.
Yeah, those gnomes hate the other gnomes.
Yeah.
So,
they give you these curves
and they say if you're at a thousand nits,
this display can live for a hundred hours.
And then what you get is a pixel
that isn't able to emit at its fullest.
And depending on what you're showing on screen,
you can get a burn-in effect, which is an image that seems to be constantly there.
And that's produced by the inability of the pixels to emit light at the same level that the neighboring pixels can.
Well, I mean, that sounds fine.
Not really. Not at all fine.
Okay, so OLED is OK?
If you manage to get all the gnomes to die at the same rate, then it's fine.
Because you won't notice it for a very long time.
The problem is when they start dying at different rates, forming a pattern. Then you really notice it.
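These two OLED characteristics, power that scales with lit pixels and uneven aging that becomes burn-in, can both be captured in a deliberately crude toy model (the linear wear law and the numbers are my invention for illustration, not datasheet physics):

```python
def frame_power(pixels, mw_per_full_pixel=0.1):
    """OLED draw scales with how brightly each pixel is lit (0.0 to 1.0)."""
    return sum(pixels) * mw_per_full_pixel

def age_pixels(wear, pixels, hours):
    """Accumulate wear in proportion to drive level and on-time."""
    return [w + p * hours for w, p in zip(wear, pixels)]

# A status bar that is always fully lit next to a usually-dark region:
frame = [1.0, 1.0, 0.0, 0.0]
print(frame_power(frame))  # 0.2, only the lit pixels cost power

wear = [0.0] * 4
wear = age_pixels(wear, frame, hours=100)
# The always-on pixels have accumulated 100 "hours" more wear than their
# neighbors; that differential is what shows up as a burned-in ghost image.
print(wear)  # [100.0, 100.0, 0.0, 0.0]
```

If every pixel aged identically (the "all gnomes die at the same rate" case), the display would just dim uniformly and the ghost image would never form.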
Okay, so
LEDs
might be an option,
but those are very power inefficient.
So I guess LCDs?
Those are fine, they've been around for a long time.
They must work, right?
Yeah, they do.
the problem is if you
if you want to make a really nice-looking device
with no borders around it,
it's very hard to do with LCD
because they require a bonding area somewhere.
So you get that.
If you look at wearable devices with LCDs,
you see that somewhere they maybe advertise their brand or whatever
on a region because they cannot avoid having that section there.
And it has the backlight issue.
You have to...
There are very few LCDs that you can see without one.
They have transflective ones,
where light coming into it is reflected off a substrate on the back, and then you can see it in daylight.
But usually you have to have some sort of light-emitting thing on the back, and that's usually some white LEDs or some phosphorescent thing, fluorescent thing, which draws a lot of power.
So I should use E-ink?
You should use beeps, Morse code.
I do feel like we're going to get down to, like,
you just shouldn't have a display at all.
It's really better not to have graphics.
If you can avoid it, it's a big part of the power budget.
If you want the display, just watch the TV.
Yeah.
All right, do we have more graphics questions?
That was the main one I wanted to go back to.
I don't think we missed any in the list.
Well, I mean, the list is just there because I have to do something for the show other than talk.
Well, I have lots of questions for Gabriel, but he's not going to answer them.
I think I've already gone over the line with the double buffer and the gnomes thing. I don't think I should have revealed that.
Okay, I have one more question for you then.
When you were young, when you were the age that your sons are now...
Yes?
What did you want to be when you grew up?
Yeah, I wanted to be an engineer.
Really? From that young?
Yeah. Yes. So before I wanted to be an engineer, I actually wanted to be
a garbage collection man, or so my mom says. But right after that, I wanted to be an engineer.
My stepdad was a garbage collection person.
And I, you know, in high school, I was like, oh God, don't tell anybody.
Just say he drives trucks.
And I found out a few years ago that I was the coolest person in the neighborhood
to the five-year-old who lived across the street
when he found out that my stepdad drove garbage trucks.
He was just like, that was the coolest thing.
Yeah, I was probably the same, yeah.
But you knew what an engineer was and you wanted to build things at a young age.
Did you take things apart?
My uncle is an
electronics engineer, so I guess
that's where I picked that up from.
Yeah, absolutely. I was hated
by my brothers. I constantly
took apart toys
and never put them back together.
Well, no. Once you know how they work,
who cares?
Yeah, my mom says that I went to the university
so I could learn how to put the toys back together.
That's actually true. Yeah.
Well, I think we've kept you for long enough. Gabriel, do you have
any thoughts you'd like to leave us with?
Yes, I'd like to read you a quote, one of my favorites.
It's from Douglas Adams, from Mostly Harmless, and it says, a common mistake that people make
when trying to design something completely foolproof
is to underestimate the ingenuity of complete fools.
Yes.
A lesson to live by.
Yeah.
Our guest has been Gabriel Jacobo,
software, electronics, and systems engineer at large.
You can contact him via Twitter, via his webpage,
which will, of course, be in the show notes,
or you can email us and we will forward it along.
Thank you so much for being with us, Gabriel.
Thank you. It's been very nice being here.
Thank you to Christopher for producing and co-hosting.
And, of course, thank you for listening.
You can always contact us at show@embedded.fm
or hit the contact link on the embedded.fm website
where you can also find the blog.
And now a quote to leave you with from Jorge Luis Borges.
Plant your own gardens and decorate your own soul
instead of waiting for someone to bring you flowers.
Embedded is an independently produced radio show that focuses on the many aspects of engineering.
It is a production of Logical Elegance, an embedded software consulting company in California.
If there are advertisements in the show, we did not put them there and do not
receive money from them. At this time, our sponsors are Logical Elegance and listeners like you.