Embedded - 159: Flying Rainbow Children
Episode Date: July 5, 2016

Chris and Elecia talk to each other about compiler optimizations, bit banging I2C, listener emails, and small-town parades. Games to learn/play with assembly languages include Human Resource Machine by Tomorrow Corporation and TIS-100 by Zachtronics. We've been enjoying the Embedded Thoughts blog. And Chris is reading Practical Electronics for Inventors and liking it. We talked a little about interviewing.io's adventure in voice changing. Shirts are gone for a while. New logo stickers are available at Sticker Mule if you'd like to support and share the show.
Transcript
This is Embedded.
I'm Elecia White with Christopher White, and it is just us this week.
I'm hoping we talk about embarrassment and resilience, but we'll call it work parades
and or listener emails.
Okay.
Sounds like a good plan.
I'm already embarrassed. You may have noticed, listeners, that we look different on your podcast player. Isn't it cool? Isn't the icon just awesome?
Sarah Petkus did the art. She was the woman we spoke to about the robot army and flamingos and the leg-licking robot called Noodle and the Gravity Road comics. She did our art and we're pleased, very pleased. She also did the t-shirt art.
And for those of you emailing and asking for certain sizes and colors: no, I'm sorry. We ordered ours and now it's gone.
What does that mean, we ordered ours? Well, I mean, I ordered some for you and I
ordered some for me and I didn't order extra for guests or listeners.
We will do it again someday.
Yeah. I'm thinking January-ish, maybe.
Or perhaps before January.
No, that makes the tax situation difficult, but we can talk about that part later.
Tax situation.
In the meantime, we have stickers.
They are for sale at Sticker Mule.
Though when work calms down, I will start asking you if you want them and handing them out for free. It's just been crazy lately, and work will probably calm down in September, early September, I hope. And if anyone has neat projects they want me to work on after that, let me know.
I've got plenty. Yeah, great, lots of stuff to do around the house.
Yeah, exactly.
Let's see, in other Elecia news,
the book, the new book about toys and taking them apart is stalled.
Possibly permanently, we'll see.
Jerry emailed about it,
and then he went off to read the first four chapters,
which are on Embedded FM on the blog section. And I do have a couple of more chapters
roughed out talking about analog signals and digital processing and scope
setup. So I'm kind of excited about those but I just haven't gotten to it yet.
I have been working a lot. And one of the things that Jerry said was that it was too bad
an expert is needed to give permission for this today.
For what? What does that mean?
For the whole taking apart things and disassembling toys.
And you really don't need permission.
You never needed permission.
No one needed permission, except that I would have needed permission.
If I was 16, it wouldn't have occurred to me to take apart my toys.
Really?
Well, to some extent, I had toys that weren't amusing to take apart.
That's part of the fun of taking apart toys when you're a young kid.
You might get in trouble.
Especially taking apart things that didn't belong to you,
like your dad's tape player.
Yes, great. Now your dad knows how complicated those things were.
Uh, yeah, I do now, but I didn't then.
I think there are people who need permission, and not only permission but also an "it's okay to break things as long as they aren't too expensive and they're yours," and a little bit of hand-holding. Just follow along, you know: here's what a multimeter is, and you just take apart all the screws and suddenly you get to see things. Some people are just into that; they're just born that way.
And I was a reader.
I wasn't like that.
So if anybody needs permission to do it,
forget my book, just go out and do it.
It's really worth it.
If you're worried about breaking something, buy two.
Yeah, yeah, exactly.
Okay, so we do have technical questions from listeners.
Are they technically questions or are they questions about technical things?
I think we'll do the former later and the latter now.
Okay.
From Austin: he had a project he was working on, and he wondered if it's still bit banging if he does it through an FPGA. He had a project in his final year of his bachelor's degree that required him to write an I squared C driver peripheral master thingy in AHDL. So that's Advanced Hardware Design Language.
Something, something.
And he wanted to know if it still constitutes bit banging,
or if its being realized in hardware means it's no longer bit banging.
That made me decide we really should define that term at some point.
Terrible term.
It is a terrible term.
Altera Hardware Description Language.
Oh.
All right.
Because...
Thank you, Christopher and Google.
Why not have your own?
Bit banging.
That is where you don't have hardware to support a peripheral,
and all you've got is a GPIO, and you...
And maybe a timer or two if you're lucky.
Or not.
And you write software to set the level of the GPIO up and down
to be the peripheral, the I2C or the SPI or the UART
or serial communications of all kinds.
And it's really a pain in the neck.
And so if you wanted to do SPI communication
and your processor didn't have an onboard SPI port or you'd use them all in other ways,
you would have a GPIO that was the SPI clock
and you would toggle it up and down.
And you would have a MISO and a MOSI and you would
toggle them up and down in accordance with the clock.
And the spec.
And you'd have to time everything appropriately using delays or hardware timers or something else so that you actually met the spec of the device you're talking to.
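(For the curious, a rough sketch of what bit-banged SPI can look like in C. The gpio_write, gpio_read, and delay_us helpers and the pin numbers are made up for illustration, not something from the show.)

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical GPIO and delay helpers; the real ones depend on your part. */
extern void gpio_write(int pin, bool level);
extern bool gpio_read(int pin);
extern void delay_us(uint32_t us);

#define PIN_SCK  1
#define PIN_MOSI 2
#define PIN_MISO 3

/* Bit-bang one byte, SPI mode 0 (clock idles low, data sampled on the
   rising edge), MSB first. The delays set the bus speed, and the CPU is
   busy the whole time, which is exactly the complaint above. */
static uint8_t spi_bitbang_byte(uint8_t out)
{
    uint8_t in = 0;

    for (int bit = 7; bit >= 0; bit--) {
        gpio_write(PIN_MOSI, (out >> bit) & 1);           /* set data while clock is low */
        delay_us(1);
        gpio_write(PIN_SCK, true);                        /* rising edge: peer samples MOSI */
        in = (uint8_t)((in << 1) | gpio_read(PIN_MISO));  /* we sample MISO */
        delay_us(1);
        gpio_write(PIN_SCK, false);                       /* back to idle */
    }
    return in;
}
```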
Yes.
Yes, it's a huge pain. And you know, if you're doing all that timing and fiddling with these bits, it's taking a huge amount of your processor to do something that...
You probably don't have a big processor with a SPI port, a dedicated peripheral like that.
That wasn't the circumstance.
The circumstance I was in is I came into a place that had an embedded system.
There were air quotes there.
You didn't see them, but they were there.
It was a PC and it was running VxWorks. But it was standard PC architecture, and they had a National Instruments digital I/O board.
That was the only way to get...
So what you would normally do with, you know,
today a few dollars' worth of microcontrollers,
they had thousands of dollars' worth of expensive
digital in-and-out hardware and National Instruments
special drivers for VxWorks.
It was, in retrospect,
I should have questioned what was going on to start with.
But anyway, I needed to talk to OneWire devices.
And of course, there's no PC interface for OneWire devices,
or at least there wasn't then.
And OneWire is sort of like I squared C, but without the clock.
I think they have USB stuff
now. So they did it
over the NI boards and they had
lots of
I helped write the device driver
for that and yeah it was terrible
because OneWire has really strict timing.
It has no clock.
The clock is part of the data. So if you
screw that up in any way things
get really weird.
And we had capacitance issues and all sorts of other things, which, yes.
So doing that was not a really good idea.
But it worked, sort of.
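(A sketch of why bit-banged 1-Wire timing is so unforgiving; the slot timings here are the standard-speed numbers from memory and the open-drain GPIO helpers are hypothetical, so treat it as illustration rather than a reference.)

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical open-drain GPIO helpers and pin. */
extern void gpio_drive_low(int pin);
extern void gpio_release(int pin);   /* high impedance; the pull-up raises the line */
extern void delay_us(uint32_t us);

#define PIN_OW 4

/* Write one 1-Wire bit. There is no clock line: a short low pulse is a 1,
   a long low pulse is a 0, and the whole slot is roughly 70 microseconds.
   Get interrupted in the middle of this and the bit is garbage, which is
   why spinning in tight loops was the only way to make it work. */
static void onewire_write_bit(bool bit)
{
    gpio_drive_low(PIN_OW);
    delay_us(bit ? 6 : 60);      /* ~6 us low = 1, ~60 us low = 0 */
    gpio_release(PIN_OW);
    delay_us(bit ? 64 : 10);     /* recovery time to round out the slot */
}
```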
In the second generation,
we still had PC architecture,
but we did what Austin is doing.
We put controllers in an FPGA.
So we could access those through PCI,
and they went off and did the SPI and I2C
and one-wire stuff, and we didn't have to
spin in tight loops for
20 microseconds to make sure
that things were working correctly.
Well, and I've had situations where
I started out with a
GPIO that was just supposed to indicate
timing things, and then
or debugging things.
And then it kept getting a little more complicated.
And suddenly it was sending Morse code.
And then right after that, it was like,
oh, let's just send out a serial stream
and we'll catch it on the other side at a baud rate we know.
And it was just one step too far.
So anyway, that's what it is.
Right.
And if you can find your way out of doing it,
you probably should.
It's a horrible waste of resources.
It's a waste of time.
You're reinventing the wheel.
Yeah.
You're doing something that's not the core
of whatever you're working on.
Unless your job is to write a better SPI driver
and your company writes SPI drivers,
you probably shouldn't be doing this.
But to Austin's question,
is doing it in an FPGA bit banging? I would say no.
I would think not.
That's just implementing a hardware controller.
Yes, because he's probably doing it efficiently,
and he doesn't have a CPU sitting there going,
well, I'm going to have to no-op for the next 20 instructions
because I have to wait some period of time.
That's just more built into the FPGA and its clocking system.
Unless you're dropping in a soft CPU core and doing it on that.
And then, yes, it is still.
But now it's getting really ridiculous.
Yeah.
So dedicated hardware is awesome, and you should use it if you can.
Even though reading the I2C manual of the STM32 devices is an exercise in narcolepsy?
Have you ever come across a situation where your device had a weird enough version of I2C or SPI that you couldn't use your onboard peripheral and you had to bit bang to do some timing thing?
I have vague memories of that happening.
Well, it's because I squared C was a proprietary protocol that everybody else implemented slightly differently so they didn't have to pay for it. So, yeah. But most of the processors now will do all the little variants.
Okay.
But yes, absolutely, I do remember having to do that, or having an I2C hardware
controller set up
but having the I2C clock
be something else
Well, there were other times, like: oh, you must hold down for 50 milliseconds after this particular register operation to allow the chip to do something. And sometimes those were difficult if you were three steps removed from the hardware driver.
Yeah, or if you had multiple peripherals
who all had different...
And you can do that by having a GPIO routed around
to slam it down or something, but yes.
Yeah, yeah.
Okay, so Austin, I hope we've answered that.
The next one is... Wait! Would building an I2C controller out of discrete logic be bit banging?
As long as my processor is not involved, you can do whatever you want.
All right, so software. Bit banging is a software thing.
Yeah, if I have to do it.
I just saw that question in the notes and thought we should,
you know,
I didn't hear you mention that.
Discrete logic.
Somebody should do that.
Well,
you saw the processor built
out of discrete logic.
6502?
Yeah.
Yeah.
That was pretty awesome.
Very awesome.
We should put a link to that.
We already talked about that.
Did we?
I don't know.
It was after Maker Faire.
Yeah.
Okay, that's going on.
Bob, in hearkening back to Christopher's wanting to set up systems from scratch,
he wanted us to spend some time discussing...
Wait, can you remind me what I wanted?
Oh, I was talking about feeling like I hadn't really done that
and didn't have much practice starting from very, very scratch.
Yes, because usually you just get the example code from the vendor.
And the startup.
Okay, I'm on board now.
Okay.
And actually that Ada thing that I read from Inspiral
had great how to start up stuff,
but that's a separate conversation.
Anyway, Bob wanted to know about compiler optimizations.
What about them?
Well, how do we choose compiler optimizations?
That's the first one.
And there are lots of ways, and I'm loading IAR right now
so we can talk about specifics.
So generally in most compilers, you have a choice of levels.
And usually it's low, medium, high,
and maybe you have one or two others.
GCC is dash, capital O, one, two, three, whatever.
IAR has none, low, medium, and high.
And high has options for size, speed, or balanced.
Right, and within all of those,
you have smaller flags for turning off various specific,
turning on or off very specific
optimizations.
But I think
you have to have a pretty good reason to go in and
tweak individual
ones on and off, and then those reasons
can exist. But for
most developers, choosing one of those levels is sufficient.
And as you go higher,
it adds on more.
So it'll no longer be optimizing things for size.
It'll be optimizing things for size and speed and stuff like that.
And there's a trade-off too.
So if you choose size and speed,
you probably won't go as fast as if you choose turn on all the speed optimizations.
Right.
Because certain optimizations for speed depend on expanding your code out,
doing things like unrolling loops
so it's not doing as many tests,
stuff like that, so that it trades off.
I'm going to take your code and generate tons of assembly
that's more inlined than I would normally.
Right, so if you have i from 0 to 10, and it's fixed as a constant 10, then it can just copy those instructions 10 times and not have to do a branch check at all. It just takes your whole loop and unrolls it, basically, and says, well, I'll just do everything inside that 10 times
and put the assembly language
instructions for each
loop iteration one after another.
So that's one example
of that kind of thing.
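(A made-up example of the loop-unrolling trade-off: same result, more code, fewer branches.)

```c
#include <stdint.h>

/* What you write: a short loop with a fixed trip count. */
uint32_t sum_loop(const uint32_t buf[4])
{
    uint32_t sum = 0;
    for (int i = 0; i < 4; i++) {
        sum += buf[i];
    }
    return sum;
}

/* Roughly what an unrolling optimizer turns it into: no loop counter and
   no compare-and-branch each time around, just straight-line code. Bigger,
   but faster, and stepping through it in a debugger looks nothing like
   the original source. */
uint32_t sum_unrolled(const uint32_t buf[4])
{
    return buf[0] + buf[1] + buf[2] + buf[3];
}
```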
Function inlining is another
way to get faster and get
more code. And that is
where it takes your small function, your
accessors or getters or even your larger
functions and it just puts them in where they are and there's no...
It doesn't call them.
It doesn't store stuff, push stuff on the stack and restore it.
It's just like, oh, this is no longer a function.
I'm just going to put this in the middle of the calling function.
And there's no reason why that doesn't work.
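(And a similarly made-up sketch of function inlining; the names are invented.)

```c
#include <stdint.h>

/* A small accessor, the kind of thing the optimizer loves to inline. */
static inline uint16_t adc_to_millivolts(uint16_t raw)
{
    return (uint16_t)((raw * 3300UL) / 4095UL);
}

uint16_t read_battery_mv(uint16_t raw)
{
    /* At higher optimization levels the compiler typically pastes the
       multiply and divide straight in here: no call, no stack push, no
       return, and nothing separate for the debugger to step into. */
    return adc_to_millivolts(raw);
}
```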
Although one of the reasons we don't always optimize in the beginning
is because it's difficult to debug
when it eliminates variables that it doesn't need
or it unrolls these loops
and so you're pushing step and it rolls all the way through that loop
because it's just one assembly block now.
So turning on optimizations too high while you're doing development
can be extremely confusing.
And it's even confusing for me and I forget about it even though I know this and I do this all the time.
I still find situations where I'm running a debugger and I'm like, well, why am I over here?
Because your program counter, it looks like it jumps all over the place.
Right.
Because it's doing whatever it found most optimal, which is not the most legible.
One of the things that can happen as well is if you're debugging and you have, say, a common exit point,
like a fatal assertion function,
it can optimize that such that you end up at one
that you didn't expect, that you're not really on.
So you're debugging your code, you get a fatal assert,
you say, okay, I'll fire up the debugger and I'll see where that is.
You go through and it puts it on some line,
and you think, oh, that's weird, that's not where I expected.
It turns out it's just diverting all of the fatal asserts
for several situations into one particular place,
because they all look the same to the compiler.
And so it's optimizing and saying,
oh, well, why do I need to repeat this?
For size, I'm just going to point to this location.
So you have to turn that off to actually see
where your real assertion was.
Yeah, and that's another,
we're talking about size versus speed.
That is definitely a size thing.
Yeah.
I would say that size is the thing that generates the most debug confusion, probably.
Speed will, too, because it puts things not necessarily in the order you think, because
if it's got any sort of pipelining or caching, it may do things in an entirely different
order so that it can be most efficient.
And so your variables don't mean anything
because they aren't doing it in order.
That happens a lot too.
You go in and you look at your local,
on IAR you have your window that says local variables or whatever.
You can look at that and you can enter your function
and things will be just random in a couple of them.
Or you walk all the way through and it will say things that don't make any sense whatsoever.
And you can go look at the registers sometimes and make sense of that and say, oh, okay, this thing I'm looking for is actually stashed over here.
But IAR, I think IAR gets a little confused about its own optimizations.
I think the people who are writing the debugger and the people who write the compiler
maybe should talk a little closer.
No, it happens in Keil and GCC too.
Yeah, but I don't remember it as bad in GCC.
Oh, my latest version of IAR often will tell me
it just doesn't have that variable.
Yeah, that happens a lot.
So I don't see as many random values as I used to,
but I do see more unavailable ones.
That's the thing that tends to point me to turn off optimizations
because I need to look at something and I come into a function
and it's like, oh, all these are unavailable.
They had to be somewhere.
And you can get bugs when you turn on optimization.
That never happens.
The most common one is volatile, or failure to
put volatile.
Because the function will
or the compiler will notice that you're
using this variable, you're checking it over and over again
and it never changes.
So it can just optimize this out and put you
in an infinite loop.
When actually you were checking some hardware register
but you hadn't marked it with the keyword volatile in C,
and so the compiler doesn't know
that it's going to change somewhere else.
So it gets rid of it later on
when it's actually needed.
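(The classic shape of that bug, sketched with a made-up register address.)

```c
#include <stdint.h>

#define TX_DONE_BIT  (1u << 5)

void wait_for_tx_broken(void)
{
    /* BUG: no volatile. The code looks like it re-reads the status
       register every pass, but the optimizer is free to load it once,
       decide it never changes, and spin here forever. */
    uint32_t *status = (uint32_t *)0x40001000u;   /* made-up address */

    while ((*status & TX_DONE_BIT) == 0) {
        /* wait */
    }
}

void wait_for_tx_fixed(void)
{
    /* volatile tells the compiler the value can change behind its back,
       so every pass really reads the hardware register again. */
    volatile uint32_t *status = (volatile uint32_t *)0x40001000u;

    while ((*status & TX_DONE_BIT) == 0) {
        /* wait */
    }
}
```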
You said...
Bugs.
Bugs.
You can write bugs that go away and come back
when you turn optimizations on or off.
So you have a bug.
It's not that optimization is creating a new bug or
turning them off is creating a new bug. You have an existing bug. But the way it reorganizes the
code or the way it changes the timing alters how the bug appears or if the bug appears.
Yes.
And this is especially painful with things like stack overflows or any kind of memory thing where things get shuffled around. So if you optimize for size, you may have things in different memory
locations that mask an issue, an overflow, or move an overflow to another place that doesn't
matter anymore. As soon as you turn on some optimization, all that stuff changes around,
and suddenly you have a complicated bug.
And so people go, ah, the optimization did it, which is...
It isn't the optimization that did it.
It's the optimization that hides it.
Yeah. And of course, any timing issue is changed when you optimize for speed,
because your timing just changed a lot, hopefully.
And if you wrote a bit-banging I squared C driver, it's probably wise
to be aware that something might happen if you change the optimizations.
So if you were building a program
from scratch, how would you put in optimizations? When would you go from
none to low? And when would you climb up that ladder?
I'm so rarely doing it from scratch now.
Most of the projects I work on are often resource constrained from the get-go.
Which is tough, because it means your debugging situation is more difficult.
Right, so I end up turning off optimizations rather than turning them on.
Oh, I have to debug this.
I have to go in and turn this off.
Maybe on a particular file
because I don't want to affect the whole system.
Well, yes, you may not be able to build the whole system
if you turn off optimizations everywhere.
If I turn off optimizing for size,
I may run out of space.
So that's an issue.
That's always exciting.
But also, if you do it on a file-by-file basis,
and you kind of know where you need to be looking in the debugger,
you're kind of guaranteed to not be changing,
like we said about timing and stuff,
not be changing too much of the system
so that perhaps something is unhappy.
Because it may not be a bug.
It may be that you desperately need to meet timing with optimizations.
You have to turn it on to meet your timing.
So you may not want to turn it off everywhere.
So turn it off on the files that you're examining.
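(For what it's worth, most toolchains let you do this per function as well as per file; the syntax below is from memory, so check your compiler's documentation.)

```c
/* One function left unoptimized for debugging while the rest of the
   build keeps its optimization level. Syntax differs by toolchain. */
#if defined(__GNUC__)
__attribute__((optimize("O0")))          /* GCC: per-function override */
#elif defined(__ICCARM__)
#pragma optimize=none                    /* IAR: applies to the next function */
#endif
void tricky_state_machine(void)
{
    /* ... the code you actually want to single-step through ... */
}
```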
But generally, if I had my druthers,
I would leave it off or on low in initial development
and then get that free bonus,
either space or speed,
toward the end of the project.
It's a concern because it changes so much,
and if you have any of these sneaky bugs,
you get them all at the last minute,
and they're hard to figure out.
But the sneaky bugs aren't one way or the other, right?
It's not that...
Well, things like volatile,
and it compiling that out.
Or even timing bugs where you've gotten the timing down with your optimizations off.
I do tend to start with them on none or low.
I like low because it reminds you about things like volatile right away,
but it doesn't hurt debugging too much.
And then we'll go up to medium, you know,
towards alpha stage when you're starting to get into users.
And then I only go high if I actually need it for speed or size.
Some people are very skeptical of the optimizer,
especially as you get to the higher levels.
I'm not skeptical about it.
I've talked to developers who are like,
well, you turn it on and who knows what's going on.
It might be making mistakes.
Yeah, it's more likely I am.
There's probably once in a while that the compiler overextends its capabilities
because you're doing something funny, right?
You have something planned in the way you write the code,
and it doesn't really see the grand plan and moves things around.
I've heard of that happening in certain cases with really high optimizations,
but generally I don't think it's a big deal.
No, although I have had optimizations turned on very late and just had code go.
I mean, it might as well be a ball of fire
at that point.
Probably the best thing to do is to turn it on
and off occasionally
and just verify that your system
is doing what you expect.
Yeah, and unit tests.
Unit tests with optimization on,
yes, yes, if they
behave differently, then you have a big clue.
That's fine if you have on-device unit tests.
I do. I have been lately. I like both.
That's pretty hard.
Well, the unit test project is separate, and so it compiles the same code,
but then it has its own set of code that runs just the tests instead of the system.
And it's nice because it is on the processor.
But then you can't
automate it, so it can't be part of your
builds, it can't be part of regression testing.
You can automate it.
It's so much
clunkier.
You have to write
expect scripts or something to talk to the device
and
have a device
connected to some sort of automated server.
Oh, yeah, I guess so. That's true.
It's a lot more work.
It is. And so we run the unit tests when we change things
and before releases, but that's about it.
And I do a lot of my development there in the unit test land,
but not everybody on my team does.
I encourage it.
I bet we've got a lot more unit tests because I'm like, so your system seems broken and
I tried to run your unit test and your unit test was broken too.
And it could totally be my fault, but could you walk me through this?
And then they wander off and fix it for me.
Oh yeah.
Okay.
What else on optimizations?
I mean, we could talk about some of the more tweaky ones, but I don't.
We need to find a compiler writer, don't we?
A compiler writer?
Yes.
Do we know any such people?
Maybe. I'll put it on my list of people
to look for. Or maybe one of the listeners will write in and say, hey, I write compilers.
I can tell you more. Would a compiler writer be a compilerer?
Or a compiler tricks?
Is the compiler used to compile
the compiler? A compile the compiler?
A compiler-inator?
I believe it is compiler-compiler.
Because there was that Unix command, yacc.
Oh, yet another compiler-compiler.
Yeah.
Right. Okay.
But what did they compile yacc in?
Is that a compiler-compiler-compiler?
This went on for too long about 10 minutes ago.
It's like the parade this morning.
Yes, Christopher and I went to our local parade.
It's 4th of July.
It's the world's shortest parade.
That is what they say.
Apparently by distance because it was like...
Two hours long.
And it was ridiculous.
God, I don't even know where to start with the ridiculousness, other than, wow.
There was a pig on a spit.
Rotisserie style.
And then there was a pig not on a spit.
In a wagon.
Wilbur, Charlotte's Web style.
So that was an odd juxtaposition.
Yeah.
Yeah. And then there was the
extremely low rent
Olympics float
from the Boy Scouts
yes
there were many floats
that were like
here's 20 bucks
in a plywood box
I'm not sure that one
was 20 bucks
I think that might have
been more like 12
They had a couple of barrels.
Yeah.
Yeah. It was very local. Very
local. Colorful.
In that...
Very Parks and Rec. Very Parks and Rec.
Yes. Although if I'd seen it on Parks and Rec
I would have thought it was...
Too dorky to be real?
A little bit.
Yeah.
Yeah.
It was.
And we left our house and met up with our whole neighborhood walking down there.
Yeah.
You couldn't...
And when there was only one person from my household, it was because their family was
already down there saving them seats.
It was weird to walk down with so many people.
Yeah.
I'm just trying to think of all of them. I'm still overwhelmed by
the strangeness of it all.
I kept making fun of little Sebastian
as if there would be a tiny,
tiny horse, and then one came by.
Well, and then I think we
missed a couple more because...
There were people who just kind of were in the parade.
Like it was,
they decided a week ago,
I'm going to be in the parade.
There was the guy in the little scooter,
you know,
those scooters,
like the Walmart scooter.
And he had the trailer with some dogs in it.
That was his whole thing.
Those weren't dogs.
Those were cabbage patch kids.
They weren't alive.
Right. The Cabbage Patch Kids, not the dogs.
That was a different one.
Right.
An older gentleman driving a Walmart scooter with a Red Rider.
Flyer.
Radio Flyer.
Radio Flyer wagon with two Cabbage Patch dolls.
And then there was a lady who had the dog driving itself in the little car.
That was pretty cool.
I mean, she was actually controlling it with an RC, but it looked like the dog was driving, and that was awesome.
Yeah.
Yeah.
They closed our whole city for this.
We could not leave our house for many hours other than to go there.
And at the end, one of the last things was the
head lice treatment center banner.
Yeah. You seem traumatized by that. It's a little disturbing.
First, that there's a treatment center for head lice. I thought you generally tried to kill them. Second, that this
somehow merited being in a parade. They had a banner.
They had two people carrying that banner.
Third, that they were preceded immediately
by something unrelated,
but who had chosen to put lots of balloons
in kids' hair.
So they had spiky, weird balloon hair
right before the...
Anyway, this has nothing to do with computers,
electronics, embedded systems.
Nothing, sorry.
Nothing at all.
Okay, wait a minute.
Well, we talked about unit tests.
I had a question from Aditya.
He wanted to know if we had any statistics,
solid numbers and references,
or even ideas, the less referenceable information, more word of mouth,
of how widely used unit testing is in the field of embedded systems.
His company is transitioning to unit testing, and he thinks it's great to go through the process,
but where are the industry standards?
And I thought about waving my hands frantically and saying, you should do it,
you should do it. But then I thought maybe I would ask if any listeners actually had
any statistics.
All I've got are anecdotes. Not as widely as you'd expect, I think.
Oh, no, definitely not as widely as you'd hope.
Yeah.
Even the concept seemed new to me a few years back.
Yeah, yeah. I remember at a startup probably 16 or 17 years ago now,
which is still late in the software engineering as a discipline, you
know, history, we were encouraged to write what weren't called unit tests then, but were
called verification tests.
Because they were more of a chip company.
They were more of a chip company.
And we thought we were stealing this grand idea from the digital designers who do formal design verification on everything.
And it's a lot like unit tests in some sense, in that you test each block that you design extensively, and you have a specific test harness for it.
So they had the grand idea, well, we can use this for software.
And we invented unit tests, even though they had existed long before, I'm sure. But my point is that it hadn't seeped into developers' minds enough to say, oh, we should do unit testing. It was like, oh, this is a new and novel idea.
It was pre-2000?
Right. Yeah. Okay.
Yeah, we did. They were just unit tests for a lot of low-level library stuff.
But that was the first time I had ever encountered anything like that,
even at really, really big companies.
Yeah, even at really, really big companies.
I don't think we did them at HP when I was there in the 90s.
I remember setting up this whole, what now
I would call a unit test system
to rerun large amounts of algorithm data through the actual
processor it would be running on and to rerun it also through a PC test
side. And that was
shocking to the company I was working at and sort of miraculous
that they could run this code and it would tell them
how it would do if they actually went out and ran it on the ground.
And I didn't think, I thought that was
special, but it seemed like the right thing to do. And I definitely didn't call it
unit tests, I think I called it sandboxes. Right. Well, it was always system tests. I mean, that was the QA thing
when I first started was, okay, you write your code, you put it on the device, you ship it over
to another department, and that department has huge automated test facilities that just run the
device and put it through its paces as a black box.
And that's sort of the distinction
that I learned was black box
versus white box testing.
Where black box testing was
the person executing the test,
the system executing the test,
doesn't know about the internals
of the thing it's testing.
It's just a router or whatever,
your embedded system. And all it can router or whatever. Your embedded system.
And all it can do is affect the inputs
and outputs.
It can set up
situations. It has a spec and it has to
meet the spec. It doesn't matter.
It doesn't care how it meets the spec.
Whereas white box testing, which is what unit testing sort of is,
is being able to
understand some of the internals and write your tests
in a way that you have greater knowledge
about how to exploit it, what coverage you have of the code,
and that sort of thing.
But yeah, everything was system test back then,
and regression testing was all based on automated...
We had big labs full of equipment,
and they would run overnight in simulated scenarios.
And that's still the way
a lot of testing is done and should be done because
unit tests can't test.
They're not good at integration testing
between two systems. They're not good at
functional
testing at a high level
because writing those kinds of tests
gets really...
At a certain point, you're re-implementing the system, right?
Well, and I didn't have good testing until I got to LeapFrog,
where we had an actual QA department who would play with the toy.
God, what a job.
Some of them were the happiest, nicest people,
and some of them were really cranky.
It was kind of a split thing.
But yeah, and it was fantastic to have people tell me my state machine was broken.
I mean, it was weirdly fantastic.
It was the first time that I'd had that fine-grained testing.
But between the system sandbox testing before that and then that,
at that point I was like, no, everything needs to have a test.
I break it down into more now,
but it was around 2000 to 2004 that I started hearing unit tests.
And it was around 2004 that I really started believing in unit tests.
So here's my thing on unit tests.
I think it's very difficult to bolt on a unit test culture in an environment or a team that hasn't been doing it from the very beginning,
because it's very hard to write tests for legacy code. It's quite painful.
Sometimes it requires exposing things
that you didn't really want to architecturally expose.
And it's very hard to maintain that momentum
if it's not part of your development ethos, I guess.
You can say, okay, we're going to start doing unit tests now.
And I've seen this happen.
You write a few unit tests, you incorporate them in the build,
and then we get swept away in development,
and we forget we were doing that.
No, that I've definitely seen happen a lot of times.
And then it's like, oh, we should go back
and write some tests for all this code.
And by then there's either no time or no inclination or, you know, it's working well enough.
Why do we need to do this?
Yeah.
And, you know, one of my clients is in the state where the unit tests don't build.
And I'm just like, this is a problem.
We should stop and fix this.
And they're like, you have
another project to work on. Go work on that over there.
You don't have to see if the unit tests don't build.
And I'm like,
no!
There's no free lunch, right? Unit tests cost time and money.
But I think they're worth it.
I love the fact that they catch bugs for me.
It's very hard to demonstrate to the people
who are looking at the schedule.
Yes.
So, and we've talked about this before, but people making the decisions about when things need to be done,
they see this long stretch and what are you doing with this?
Well, we're writing unit tests.
Prove that that's going to actually make things better in some measurable way.
And you really can't.
See, I don't think it's a permissions thing like that.
I think sometimes it is.
Sometimes it is. But I think that as an engineer, if I say, well, this module is going to cost this many agile points or this many weeks, whatever, however I'm communicating how long I think this is going to take: if I assume that includes unit tests, I'm so much better off.
Sure.
And I just write them myself.
And if somebody says, why did you do that?
I say, I don't know how to develop without it.
Why wouldn't you do that?
But I think also not all things need unit tests.
I squared C drivers.
Controversial position.
What?
You don't think an I squared C driver needs a unit test?
I don't know how to unit test an I squared C driver
unless I'm talking to something on the other side.
And then I'm unit testing the thing I'm talking to's driver.
Well, you can test parts of it.
There's another thing I have about unit tests.
I never understand how exhaustive they should be or need to be.
But they don't need to be necessarily a complete code coverage sort of thing.
Going back to the Ada conversation,
they may just need to be testing your input checking.
Okay, every function in the API:
What happens if I send it something out of range or garbage?
Do I get the errors I expect?
That at least will tell you that you're not proceeding
down a path where your code does mysterious things and you don't catch it. So I think there can be
levels in unit tests where it's like, okay, this is too hard to exhaustively test because it's
really a driver and it talks to hardware and it's hard to mock all that up. But we can at least go
through and say, well, if I call this function with, you know, device address out of bounds or something, what happens?
Does it do what I expect?
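(A sketch of that minimal, inputs-only kind of test, using a plain assert and a made-up driver function; it doesn't prove the driver works, only that garbage in gets rejected.)

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Made-up driver API under test: 0 on success, negative error code on failure. */
#define I2C_ERR_PARAM (-1)
int i2c_read_reg(uint8_t dev_addr, uint8_t reg, uint8_t *out);

/* Bad inputs should fail loudly instead of wandering off and touching
   the bus with garbage. */
static void test_i2c_read_rejects_bad_inputs(void)
{
    uint8_t value;

    assert(i2c_read_reg(0x80, 0x00, &value) == I2C_ERR_PARAM); /* 7-bit address out of range */
    assert(i2c_read_reg(0x50, 0x00, NULL) == I2C_ERR_PARAM);   /* null output pointer */
}
```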
And then things like algorithms, those are good for unit testing,
because if you can break them out of the system,
you can run them somewhere else.
And assuming the compiler generates similar code,
you should be able to test the functionality.
Well, and you can run them offline on your computer
and then also run them online on your processor and compare
and be reasonably confident that later offline commit time unit tests
will work as long as you've done the A-B comparison.
But then you reach things like, as you say, the device driver, where
on the other side, when you get to very high levels,
things like graphics, things like
UI, that gets
trickier to automate unless
you have certain frameworks
that help you with that. And in embedded systems,
you often don't have those sorts of things unless
you're running a really heavyweight embedded system
like, I don't know,
QT on
or Qt, excuse me,
on a Raspberry Pi or something.
Those have frameworks where you can go through and it'll touch buttons and you can script
all that stuff.
So it's possible.
But it's really hard on smaller embedded systems with displays to say, oh, is that displaying
the right thing?
I don't know.
I don't have a robot to look at that.
There's also when you need to look at an analog signal or some n number of analog signals.
I remember at one company,
we set up this pretty complicated system
where we had data that had come in and been recorded,
and then we could play it through using a DAC.
But it became clear pretty quickly that the DAC's granularity was not as good as our ADC.
And so now you're testing this other piece.
It was just...
But you can do things with tolerances, right?
Yeah.
I don't want people to think that if you have something that's producing an answer that
might not be the same every time, that you can't still write a unit test.
Oh, no, you can definitely have it if it's between
X and Y.
I wrote a unit test for the image processing
pipeline, or at least part of it,
at one of the medical device companies.
And, you know, it had an
FFT in it, and other
things, and basically
I put in a synthetic signal
and had expectations about what I should get out of it.
You can bounds check that.
You can say, okay, this is plus or minus 2% or 3%.
That's within where I expect this to come out.
You can do stuff like that.
It doesn't have to be, okay, I put in this thing and I got exactly this thing out every time.
Because when you're working on complicated systems,
you get complicated responses.
Yeah.
And it doesn't have to be that you put in a real signal.
For the FFT thing, it was like, okay, I'm going to put in a sine wave and I'm going to change its frequency.
And I'm going to make sure that I get a frequency spike
at the locations I expect and check for those.
That's pretty doable.
And that's not going to catch every problem,
but it'll catch a lot of problems
if somebody goes in and makes a change
where something's scaled wildly weird.
Yeah.
They change your FFT window
and now you can't get half the signals you could before.
Yeah.
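(Roughly what that kind of tolerance-based check looks like; the compute_spectrum function and the numbers are made up. The point is checking a peak location within some slop rather than demanding bit-exact output.)

```c
#include <assert.h>
#include <math.h>
#include <stdlib.h>

#define N      256
#define FS_HZ  1000.0
#define PI     3.14159265358979

/* Stand-in for whatever FFT pipeline is really under test: fills
   mag[0..N/2] with spectrum magnitudes for N input samples. */
void compute_spectrum(const float *samples, float *mag);

/* Feed in a synthetic 100 Hz sine and check that the largest spectral
   peak lands where it should, give or take a couple of bins. */
static void test_spectrum_peak_location(void)
{
    float samples[N];
    float mag[N / 2 + 1];
    const double freq = 100.0;

    for (int i = 0; i < N; i++)
        samples[i] = (float)sin(2.0 * PI * freq * i / FS_HZ);

    compute_spectrum(samples, mag);

    int peak = 0;
    for (int i = 1; i <= N / 2; i++)
        if (mag[i] > mag[peak])
            peak = i;

    int expected = (int)(freq * N / FS_HZ + 0.5);   /* about bin 26 */
    assert(abs(peak - expected) <= 2);              /* tolerance, not exact match */
}
```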
Oh, there isn't a good answer to the actual question,
which is,
where do we get
the industry statistics
for this?
Well,
nobody's going to
fill out a survey.
Most companies
aren't revealing
internal information
like that,
so it's going to all
be anecdotal.
Well,
I think,
like,
I know
some of the UBM publications
used to send those out,
although,
now they're not UBM publications anymore.
Now they're Aero publications because they got bought.
But whatever.
And Michael Barr sometimes sends out surveys.
I don't know. I think the incidence is quite low.
It's quite low, but I would like to believe it's getting better.
I don't think it's getting better.
I would like to believe that it is getting better.
And I think it is not.
Quit bursting my bubbles.
I would like to believe in unicorns.
Flying rainbow children.
I don't know what that means
I'm pretty sure they were at the parade. Maybe that's what's wrong.
Okay, more listener write-ins. Kevin wrote in to suggest Human Resource Machine by Tomorrow Corporation, which secretly teaches you the basics of assembly programming by making you solve puzzles. It's cheap and fun, and it may kill an hour or a day of your amusement time.
Yes, I was playing another assembly language game from
Zachtronics. We might have mentioned it on the show.
Briefly, but I don't think we got the name.
It was the TIS-100
Tessellated Intelligence System.
And it's
a very strange little
processor. Well, I don't even want to describe it.
But it's a bunch of puzzles based around a processor that doesn't really exist, that is partially parallel and has its own machine language microcode. And you go through and you have to either fix the microcode or write the microcode to solve the puzzles, when you don't have very many instructions to start with. So it's extremely small, I think maybe six or seven instructions. But you have to pay attention to things like,
well, people can go look at it.
It's very cool.
And it does things like part of your score is how compact your code is.
So, oh, we did this in 10 instructions
or it only took three cycles and you did it in five.
So you can do stuff like that.
If you're really tired of doing your actual job.
You can do a fake version of your real job and not get paid for it for a computer that doesn't actually do anything.
Bursting bubbles wherever you go.
No, those are fun, but they do feel like work often.
Let's see.
Joey wrote to us about his blog
embeddedthoughts.com
and I don't think I shared that with you Christopher
but it's been
he does a lot of projects
and it's been pretty interesting
oh okay
I think you did actually
I seem to remember looking at that
but he's been working on FPGAs
and little processors
and it's just
I don't know, been really cool.
My show notes exploded, so I no longer know what we're talking about.
Oh, I'm sorry.
Did I push save again?
Yes, and I clicked the Get Elecia's Changes button,
and then it just closed Word.
You work with only the best tools.
No, you're driving now, not that you worked before.
Why do you bother with the show notes?
I don't know.
Well, did you have any comments from the past few shows for yourself?
I mean, we've talked about Ada and CastAR and Valve.
Although we're going to be talking about Valve again soon.
Late July.
Late July.
No, I thought they'd been a good run of shows.
This is, you know,
this one will... We like to
stop it sometimes so the bar gets low again.
Yeah, yeah. Make sure that people's expectations
don't continually ratchet up
because it gets harder and harder to meet them.
So I would bring it back down, a show like
today, where we talk about pigs and
lice.
Pigs and lice, nice.
Flying rainbow children.
No, I don't know.
You know, I keep coming back to that thing
where I have too many things to do
and I keep getting more of them.
After the Ada show, I wanted to go look at Ada
and I didn't.
I still haven't yet.
Next week's show is about?
I'm actually reading an electronics book, so I feel somewhat
virtuous.
Yeah, you've been enjoying that. I thought you were going to pick it up
and then put it down and then go buy that new
book that I was hoping you'd buy,
but I can't buy full-priced books because I just can't
bring myself to do it. So could you please stop reading the
electronics book and instead go
buy that one
that I wanted?
I don't know what book you're talking about. Anyway, the electronics book. I've been on internet hiatus for the last almost 48 hours now, which is longer than I have been in years. So far I haven't gone insane.
But what's the book?
It's Practical Electronics for Inventors.
Yes. And one of the co-authors is Simon Monk, who's been on the show.
And I apologize, I don't remember the primary author currently.
I figured I should go back and review some electronics,
because I always complain about not knowing it.
And the only way to solve complaints like that is to learn things.
So it's been pretty good so far.
I'm surprised at how much physics is in it.
He does a good job.
They do a good job of introducing kind of the fundamental physics
to understand what's going on with electronics
without making it seem like you have to know it necessarily.
So if you have a physics background, it's kind of like,
oh, here's what's really going on here if you're interested.
Although I do think it could be a little bit intimidating
to see some of that without realizing you can skip this.
So I would advise people if you read it and the physics gets too heavy
to say move on to the next section.
Because they talk about semiconductors and band gaps
and electron band gaps and things very early on,
which could be intimidating because that's a difficult topic.
But you don't really need to know that.
It's more of a, if you're interested,
here's what's going on at a deeper level.
But I'm enjoying it because so far I've reviewed a few concepts that I just didn't get before,
and maybe it's that I've changed
or maybe I just didn't care enough to get them
back in college
but even simple stuff like
analyzing resistor networks
and breaking circuits down into sub-circuits
and using the various
Kirchhoff laws and
the other rules to be able to solve voltages and currents at various parts in a circuit.
It makes more sense now, so I'm feeling better about proceeding through it.
That's cool. I bought that and Electricity Demystified at the same time.
Unfortunately, Electricity Demystified was the one that I chose to read,
and it was really about electricity and not about electronics. And so while it was interesting, it was not what I wanted, and I didn't figure it out until about 75 percent of the way through. I couldn't understand why we kept talking about wall power and why, with resistors, we always had to talk about AC. And I'm like, this is fine, sure, but can we skip to the good parts?
So, yeah, I haven't read that one yet.
I do occasionally buy books that I don't read.
Not often.
So let's see, what is this book that I want you to get? It's the new Jim Butcher book, the steampunk one.
Okay.
You're just going to make me buy it myself.
You know, if I buy it, it comes out of the same money as if you buy it.
So it doesn't make really any difference.
I know.
Okay.
Okay. Okay.
There was a news thing, and you and I talked about it briefly yesterday, but I'll put it
in the show notes.
It's from Interviewing.io, and they're a place you can go to practice interviewing, which
seemed kind of cool, and they put out a study.
I don't even know if I want to call it a study.
They put out some anecdotal.
A small collection of anecdotes.
Small collection of anecdotes.
It was a couple hundred anecdotes.
They wanted to look at gender in interviewing for technical positions.
And so they made a voice changer.
And they looked at how people did in different voice configurations.
There was the no change one.
There was the modulated but not gender changed.
And then there was Darth Vader.
Then there was the modulated and pitch-changed.
So it's only the last one where you actually can have gender differences.
They wanted to do the modulated one
so that it wasn't just the computer artifacts affecting people's opinions.
Okay.
So modulated and unmodulated had the same effect with regard to gender. But with the ones where they shifted the pitch, they had some interesting results. The men who were switched to women's voices actually did as well or better on the interviews as they probably would have done without their voices shifted. And for the women, the voice didn't matter either way.
Switching the pitch didn't seem to really matter
that much.
Do they have samples of
the switched audio?
Because you cannot just switch
the pitch.
No, because I do, in fact,
with the up-inflection.
It's not just the up-inflection.
All the phonemes are said differently
between genders.
So if you just switch the pitch,
they modulated it too.
You sound like an effeminate man
or a
manly woman.
So you have to
be very careful to, you have to
have something that will go in and
work that out as well.
But it turned out it didn't matter.
Okay.
I mean, okay, yes, maybe you could see through it if you tried, but they were a little mystified.
You could still be subconscious, is what I'm saying.
But what they found, after looking through the data and trying to figure out what the heck was going on in this pile of things that didn't look right, was that after women had a bad interview, they left the system.
Hmm.
And the men didn't.
How many?
It was a couple hundred.
Okay.
And they did point to other studies
that show this
has been done before
and there has been
a noted phenomenon.
And
so
my takeaway here
is that
resilience is important.
Oh, no,
I'd buy that.
I mean,
I've had bad interviews
and they certainly
stick with me for a long time and they don't necessarily make me feel bad about myself, but they make me feel irritated with the system. And the system's a big word. But the way that interviews are conducted and the way that corporations hire people and the adversarial nature of it sometimes.
Yes. And it's something that, you know, I can still remember all those bad interviews and they
still make me feel like this industry has problems that I don't like.
And, you know, it's easy to say, well, you know, I don't need to deal with that.
But they don't make you leave.
Not yet.
But yeah.
But they didn't make me leave because usually those were interviews where I was seeking another job while I had one.
So I could say, well, that didn't go well,
but where I'm at isn't that bad, so I'll stay there.
If I had been searching for a first job and had two or three in a row,
or if I had been laid off and had been searching,
I don't know.
If I had a couple bad ones,
I would probably have seriously considered
finding some other thing to do.
But they're finding that...
I'm sure, yeah.
Gender seems to play a role in giving up a little more quickly,
probably because we hear that there are no women in tech
and so we're like, oh, well, everything's against me.
And then you go into the interview and here they aren't.
And yeah, and so everything's against me.
I'm just going to give up.
No, sometimes it's just a bad interview.
Sometimes it's them.
So yeah, keep at it.
That's my encouragement.
It's sort of a backhanded form of encouragement
but you're not alone
everybody goes through this
at some point
maybe
no
Vincent Bednars
wrote to assure us that at 63
his list is just as long
as Christopher's
I guess that's good
His list of projects to do, yes.
And Adam, on the other hand, sent us links to go look at other interesting projects. Thanks a lot, Adam.
Yes, that's always helpful. The ocean floater does sound really cool. A marine buoy.
Yeah, that sounds cool. And all of that ham stuff?
Because we need more stuff to do.
Yeah. Thanks, Adam.
Thanks.
Let me get you a little Sebastian
pony and a pig.
What? I don't know.
This is...
Yeah, okay, so I'm pretty much out of stuff.
Yeah, that's too bad.
Did you have anything else you want to talk about?
Things to talk about.
The whale at the parade was cool.
It was large.
That's not really something to talk about.
No.
Next week, I think we're going to talk about computer vision.
So I have to start cramming on that.
Luckily, I've already done a little bit of cramming.
But I'm excited to talk about computer vision.
That'll be fun.
Yeah.
I can do some studying.
Oh, wait, that's not next week.
That's week after next.
Oh, well, I guess I have plenty of time.
Well, next week, we're talking about who knows what.
Right now, today, yes, that is true.
All right. All right.
All right.
I mean, that's a solid hour.
Solid B-minus.
Solid B-minus.
Look, people.
Thank you for buying the shirts.
It is pretty cool.
And we did not expect so many of you to buy them.
And you paid for the art from Sarah,
which is fantastic.
And I am thrilled to be able to support her.
And so, yeah, thank you.
And send us pictures of your shirts in strange places.
That might have come out weird.
It really kind of did.
But now I want to put one of my shirts on the pig.
I wonder if we got the pig's phone number.
What?
I don't know.
I don't think my shirt will fit on little Sebastian.
Yeah, thank you for listening.
Thank you to Christopher for producing and co-hosting.
Yeah, I'm so good at it.
And hit the contact link
if you would like to say hello.
We did give out all of the Make With Ada boards.
That was pretty fast.
You can also...
And you'd better do something with the people
who get them
or nothing will happen.
Yeah.
You can also email us at show
at embedded.fm.
And if you'd like to say hello or if you'd like to just complain about this show, that's fine too.
So now some Winnie the Pooh.
Gosh, I don't know where I stopped last time.
I should have marked it.
One day when he was out walking, he came to an open place in the middle of the forest.
And in the middle of this place was a large oak tree.
And from the top of the tree there came a loud buzzing noise.
Winnie the Pooh sat down at the foot of the tree, put his head between his paws, and began to think.
First of all, he said to himself, that buzzing noise means something.
You don't get a buzzing noise like that, just
buzzing and buzzing without it meaning something. If there's a buzzing noise, somebody's making a
buzzing noise, and the only reason for making a buzzing noise that I know of is because you're a
bee. Then he thought for another long time and said, the only reason for being a bee that I know
of is making honey. And then he got up and said, the only reason for making honey
is so that I can eat it. So he began to climb the tree. He climbed and he climbed and he climbed
and he climbed. And as he climbed, he sang a little song to himself. It went like this.
Isn't it funny how a bear likes honey? Buzz, buzz, buzz. I wonder why he does.
Then he climbed a little further and a little further, and a little further, and just a little further. By that time, he'd thought of another song. It's a very funny
thought. If bears were bees, they'd build their nest at the bottom of trees. That being so, if
bees were bears, we shouldn't have to climb all these stairs. He was getting rather tired by this
time, so that is why he sang a complaining song. He was nearly there now, and if he just stood on
that branch, crack! Embedded FM is an independently produced radio show that focuses on the many
aspects of engineering. It is a production of Logical Elegance, an embedded software consulting
company in California. If there are advertisements in the show, we did not put them there and do not
receive any revenue from them. At this time, our sole sponsor remains Logical Elegance.