Embedded - 30: Eventually Lightning Strikes
Episode Date: December 11, 2013
James Grenning (@jwgrenning) joined Elecia to talk about how to be a good programmer using Test Driven Development (TDD).
James' excellent book on how to use TDD: Test-Driven Development for Embedded C
Take a class from Renaissance Software
Manual test is not sustainable blog post, from James' blog
Legacy code challenge from GitHub
SOLID design principles
Iterative and Incremental Development article by Craig Larman
Untappd: the beer drinker's Twitter
To get the signed copy of James' book, email (show@embedded.fm), tweet (@logicalelegance), or hit the contact link on embedded.fm with your number between 0-99. First one with the correct number wins the book (if no one is correct, the closest number will be selected 12/25/13).
Transcript
This is Making Embedded Systems, the show for people who love gadgets.
I am Elecia White, and today I will be speaking with James Grenning, author of Test-Driven
Development for Embedded C. Hi, James.
Welcome to the show.
Hi, Elecia.
How are you doing?
Great.
A little tired, but great.
Me too. James has let me audit a few days of his course
as he teaches the benefits and methodologies of test-driven development. So if I mentioned
the class, that's what we're talking about. And if we mentioned being tired, that's what we're
talking about. And I'm a big believer, I have been for a long time, in design for testability.
Our philosophies are a little different, though.
I'm going to assume that the listeners of the show
already have some sort of "testing is good" idea.
But before we get to where we agree and disagree,
let's get your baseline.
If we met at the Embedded Systems Conference,
how would you describe yourself and what you teach?
Well, let's see. How would I describe myself and what do I teach? Well, I teach
a few things around agile development, test-driven development, engineering practices of test-driven
of agile, as well as some of the management practices. So I teach those things. I've been an engineer since the late 70s
doing computer software and related type work,
management, et cetera.
And so I would describe myself like that, I guess.
Maybe not.
It's weird to be talking into a microphone, but
I'll probably get over that in a little
while. Yeah, you'll get used
to it.
And if you listening are
interested in a class, James
is very eloquent when he doesn't have a microphone directly
in front of him. And they're big microphones.
Check out his company, Renaissance Software.
We're also going to be giving away a signed copy of his book. We're going to try the whole contest route here. Clues for how to get the book will be noted as often as I remember them. But first, James is going to choose a number between zero and 99, because we are zero-based here, and you need to be the first one to guess the number. Send it via Twitter or the contact link on embedded.fm. It's one entry per person, although in the event of a tie, whoever has the better story about test-driven development wins. You know, a stupid thing that happened that got fixed. Eh, I don't know.
Whoever has a better story always wins.
Okay.
So you mentioned Agile.
Are test-driven development and Agile linked?
They are connected,
although test-driven development doesn't have to be used with Agile.
But test-driven development, when I learned it, came from extreme programming. Extreme programming predates Agile, as extreme programming and Scrum were things that were known, you know, and being developed during the 90s. I discovered extreme programming in 1999, and Agile didn't happen, didn't get named, I think, until 2001.
We had a show with Curtis Cole that was very "Agile is great,"
but he wasn't interested in test-driven development.
I mean, he was interested, but he wasn't an evangelist for it.
I am still unclear as to how they can be separate.
Do you always teach TDD with Agile, or are they separate?
Well, if an organization comes to me and they want help with Agile,
what I'll advise them is that what they need is sound technical practices
to go along with the iterative management style.
And what I would do is encourage them to first let me teach their engineers what TDD is
and what the engineering practices are.
And then we would begin shortly thereafter with trying to learn how to get the iterative cycle started.
So we do a workshop to help people get stories and their backlog in order.
And you mentioned extreme programming.
Is that the same as pair programming, or are they different?
Extreme programming is based on Kent Beck's work.
His book published in 1999, I believe.
And extreme programming describes 12 practices, some of
which are management practices and some of which are technical practices, although Kent
is a programmer and so his emphasis is on the technical practices.
When you hear about someone doing Scrum, Scrum is mostly the management practices. Extreme programming is mostly the engineering practices. And bringing those two things together is a good fit.
And Agile is something that came a little later that definitely has Scrum.
Well, so Agile is kind of a unifying idea
of what some of these different techniques were that were evolving in
the 90s. So extreme programming involves these 12 practices and iterative development. Scrum
involves iterative development and kind of continuous improvement. The developers of Scrum
expected that as people would iterate, they would discover, gee, this retesting
is not working. We need to do something different. We're going to have to solve the technical
problems of incremental delivery. And that means automation of tests and other type things is what
Ken Schwaber and company thought would happen. Although most Scrum adoptions and Agile adoptions really
are light on the engineering practices.
And companies are getting into trouble because of this.
Yeah, I hear about Agile.
You get a story, you solve some bugs, there's some points, and you don't always hear about
the testing.
And you said retesting. We talked a little bit in class about that, that a lot of
the benefit for TDD comes from this retesting concept. It's certainly one of the benefits.
When we do test-driven development, there's some misconceptions about what it is.
Some people think that it's, let's go write all the tests for the system and then go develop the system. That's going to result in a lot of problems, because we'll be making mistakes in the upfront work on the tests. Test-driven development is an incremental approach. Write a small test, make it work; write a small test, make it work. So we're doing small cycles to try to get the code to do what we want it to do, one behavior at a time.
Yeah. In class, that was the hardest part for me is that
when you say one behavior at a time, you really mean it. Write down the function.
I'm in my test.
I'm starting from scratch, and we did a circular buffer,
and you wanted me to have init,
and I would type in my initialize function name,
and then the goal was to compile and to have it fail compiling.
And then to go to a header file and put in my initialization API,
just that function, nothing more, not the whole API,
even though I might have it in mind,
just one line at a time,
let it fail a link and then go and actually implement,
initialize as a blank function.
And then that would pass a test.
It was very, very slow to me.
Although as the class went on, it got faster.
Is that what you usually find?
It feels slow because it's so foreign,
and people are used to what they're used to,
and when they're doing something that's foreign, it feels very odd.
One of the things that I've realized from doing test-driven development is that I make mistakes quite often, and if I'm going to do these small cycles, I can discover the mistakes right away,
and if I've only made small changes, then I know exactly where the mistake is. So you mentioned
that compile step and the linker failure.
If I make code that's incompatible, I get unexpected errors,
and I can fix those right away.
If I've only got one problem that I'm solving at a time,
it's much easier to solve it.
So the problem of doing test-driven development is to come up with a series of these tests
that when you're finished with all your tests,
that you're finished with your production code.
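The micro-cycle described here, fail to compile, fail to link, then pass, can be sketched in plain C. The names (CircularBuffer_Init and friends) and the bare assert harness are illustrative assumptions; the class uses a real unit test framework, not this hand-rolled version.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical circular buffer; names are illustrative, not from the episode. */
enum { CB_CAPACITY = 8 };

typedef struct {
    int items[CB_CAPACITY];
    size_t count;
} CircularBuffer;

/* The smallest implementation that makes the first test pass. */
void CircularBuffer_Init(CircularBuffer *cb)
{
    cb->count = 0;
}

bool CircularBuffer_IsEmpty(const CircularBuffer *cb)
{
    return cb->count == 0;
}

/* The first test: written before Init existed, so it first failed to
 * compile, then failed to link, then passed once Init was written. */
void test_buffer_is_empty_after_init(void)
{
    CircularBuffer cb;
    CircularBuffer_Init(&cb);
    assert(CircularBuffer_IsEmpty(&cb));
}
```

Only after this passes would the next small test (add, remove, wraparound) be written, one behavior at a time.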
And then the question you asked me a second ago: you get the cost of retest low as a side effect of getting the system working the first time with the tests. You also leave behind a trail of automated tests that you can continually run in the future as you do other maintenance activities on the code that you've built.
Yeah, you showed a graph in class where if you have, you know,
you have a certain number of hours a week and you make a certain amount of features. So every week,
let's say you do three small features, three large features, whatever it is, but a certain amount of stuff gets done.
Maybe you measure it in lines of code or files written or some other metric, but we're not
going to care about the metric.
It's some constant amount of good stuff gets done.
And then there's got to be some testing that is associated with the good stuff.
And week one, you know, maybe it's 75% code and 25% test. Week two, well,
now you want to have that same ratio, except when you check in week two's progress, week two's new
features, another thousand lines of code or whatever, you still have to go back and test at least partially week one
because you might have broken stuff.
This is the nature of regression tests.
Right.
And therefore, by week three, now you have to test week one and two
or you have to live with there being untested code.
And so there was this constant amount of work you were doing
and this ever-growing linear, I thought exponential, but ever-growing amount of stuff you have to test as you add more code.
And you said retesting.
Yeah.
And that's where TDD becomes one of those things you can actually convince people to do is automated retesting.
Yeah, so even if you didn't do test-driven, if you did test second,
you could automate tests and make the cost of retest come down somewhat.
One of the driving factors is if what I rely on is manual testing and my intuition on what I should test, I'm going
to build up a large backlog of hidden defects and things that aren't going to work in the future.
And I might not, I might not discover that until much later in the development effort. And then
I've got a big bug fix, um, activity that's going to have to happen. Uh, this seems to happen in a
lot of, uh, areas where we build up these bugs
and they all appear at the end of the project by surprise.
Everyone's surprised.
We prefer to be proactive.
So it feels like doing TDD is slow because you're working on a test
and then you're working on production code, you're working on a test,
you're working on production code.
And it feels slow because it's not what you're used to.
It turns out if we think about
the time we spend not just writing the production code, but the time we spend writing the production
code and testing it and fixing the bugs, we're probably even with the time to test drive that code the first time in a proactive way,
maybe even ahead of the game.
So if those two costs are similar,
if we go look at the future,
the next time we have to touch that code,
the retest is virtually zero because we have the automated tests that we can run.
And who hasn't had to go back and make a change
after you thought you were done?
I think probably none of our listeners tonight.
Yeah.
I mean, even the Mars rovers are firmware updatable.
So you have to go back and test.
But test-driven development is something I have a few questions about
because I prefer test-driven design.
And maybe this was something I picked up working with the hardware engineers,
where it's design for test. It was a big acronym, like design for manufacturability. You have to design a board so that later you can verify that it works, and you can then point your finger at the software engineer and say, no, my part works. Not that that ever happens. Hi, Phil. But that was my introduction to it: this idea that every piece of the system, mechanical, software, hardware, they all have to be designed so that at some point you can point to it and say, that works.
And I can show you, here is why I think it works.
And, you know, it's never irrefutable.
Maybe it works only when the sun is shining sort of problem.
But it's a design ideal.
You don't go into a problem without thinking how you're going to prove that you've solved it.
I think it's kind of a scientific method sort of thing.
You have to have an end.
You have to have a hypothesis, and then you either get there or you don't.
You mean TDD is a scientific method thing?
Or design for test is a scientific method thing?
Design for test more.
And test-driven development helps me get there
because I do have to design the tests.
But it's a very programming sort of exercise.
It's not a systems architecture sort of exercise.
Does that make sense?
Or do you disagree?
I can understand your point.
So design for test, testability is a nice thing,
but if it's not tested, who cares?
It's true.
Okay, so with test-driven, it will be tested.
And it will be testable.
How can I prove it?
I've got the test, okay?
I'm kind of stealing this from Ron Jeffries
because there was a conversation over Twitter a few months ago
which is around this topic of testable, prove it.
You know, testable doesn't matter.
Tested is what matters.
And test-driven, now if you say test-driven design or test-driven development, you'll find that TDD, that term, is used both ways.
Oh.
In 1999, when you would have picked up the Extreme Programming book, it would have said Test First Programming. Over time we've evolved the name and understood better the impact that test driving your code has on design. For instance, if you were just manually testing your code only in the target system later, modularity might not matter so much.
But if my goal is to be able to write tests for each module of my code, if I get a module that's
out of control, too much complexity in it, I won't be able to test it,
and it'll be too late. That thing will only be manually testable.
If I'm test driving, I'll discover that early.
I'll get an early warning that there's a problem with my design
because I can't get to the code that I would like to add.
I can't think of a way to test it.
It means the module is too big and needs to be split apart.
So test-driven development
incorporates test-driven design. I did notice in class that the modules we worked on were very
well modularized. And that was because if you wanted to replace the IO driver for the flash
interface, you had to separate out the IO driver.
I mean, if it was going to be SPI or whatever,
there was a hard line there
because that's where you were sneaking out the code
and putting in a mock or a fake or a pineapple
or whatever you called it.
A spy, a test double.
A test double.
A pineapple.
Oh, I had a taste test.
You must have been thinking about lunch.
No, no, that was the exploding fakes you mentioned. Oh, if you put them in, then if you ever get there, they explode. And it made me think about grenades, which made me think about pineapples. And yeah, maybe it was about three o'clock. Okay, time for a cookie. But the test doubles force you to think about where you can cut things apart.
And that helps with the modularity.
And also if you are trying to design tests for something,
as well as the code, it kind of decreases the amount of brain space you have,
which means that you try to minimize the problem into the right chunk.
And I liked that. I liked that a lot. It did. I didn't expect test driven development to drive
modularity to that extent. And I hope that wasn't just an artifact of having good, good examples.
Well, um, I also have other schooling in my background, which is the Bob Martin school of SOLID design.
Bob Martin, some of the listeners might know of Bob.
He's influential with agile test-driven development, object-oriented design,
and the person that coined the phrase SOLID design principles,
which is five different principles, one for each
letter. And these principles are guiding: when we make a design decision, we want to use
these principles. For instance, one of them is single responsibility, the S in SOLID.
It says a module should have a single responsibility. It seems kind of like an
obvious thing, but you know what? All the code I see in the world, almost no one does this.
They've got lots of mixed up and confused responsibilities.
So I try to, with my code, show what could be
and how we could make modules that actually obey these principles.
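As a rough illustration of the single responsibility idea in C (the module and function names here are hypothetical, not from the course): a module that both decides when to take a sample and stores the samples has two reasons to change, so it gets split into two small modules.

```c
#include <stdint.h>

/* Responsibility 1: scheduling policy. Knows nothing about storage. */
typedef struct {
    uint32_t period_ms;
    uint32_t last_sample_ms;
} SampleScheduler;

int Scheduler_IsDue(SampleScheduler *s, uint32_t now_ms)
{
    if (now_ms - s->last_sample_ms >= s->period_ms) {
        s->last_sample_ms = now_ms;
        return 1;
    }
    return 0;
}

/* Responsibility 2: storage. Knows nothing about timing. */
enum { LOG_CAPACITY = 4 };

typedef struct {
    int16_t readings[LOG_CAPACITY];
    int count;
} SampleLog;

void Log_Add(SampleLog *slog, int16_t reading)
{
    if (slog->count < LOG_CAPACITY)
        slog->readings[slog->count++] = reading;
}
```

Because each piece does one thing, each can be unit tested on its own: the scheduler with fake timestamps, the log without any clock at all.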
And so that goes back into, you teach more than the test-driven development class. You teach agile classes and whole system design courses.
Coaching in that more, but I used to teach design. I don't do much of it now, though.
All right. Is your number 42?
Is my number 42? It is not.
All right. Okay. Am I supposed to tell you if it is?
You're not supposed to lie to me. I thought I was going to pick the number afterwards.
No, no, you're supposed to pick the number. You're supposed to pick it when I gave the spiel at the beginning.
All right. I'll work on it.
Okay. Maybe now that we know it's not 42, we should choose a number. There'll be a few more questions.
Okay, all right. As Christopher pointed out
earlier before we started recording,
I'm tricking people into
listening to the whole podcast.
Okay, James is going to write down the number.
That will help me.
You don't get to see it, though.
You shouldn't have used a Sharpie then.
Did you see it?
No.
Okay, so sorry.
Sorry for those of you I'm tricking into listening to the whole podcast
so you can get a signed copy of Test Driven Development for Embedded C.
But, you know, keep listening.
Let's see.
What do you prefer to teach? Everything or just TDD?
Um, so, I prefer the engineering practices, and TDD is essentially the center of the engineering practices. There's more to it than just the TDD part. So that's kind of
my mission is to help engineers to learn these techniques because in 1999 when I was sitting
through the extreme programming immersion that I was invited to, taught by Kent Beck and Ron Jeffries and Martin Fowler and Bob Martin.
Test-driven development was demonstrated to me in Java.
So I know a number of languages, including Java and C++, as well as my embedded C.
And they were demonstrating test-driven development, and they had dependencies on things that they wanted to
fake out. And as this was demonstrated to me, it became obvious that this is a solution to a
problem embedded engineers have. And the problem embedded engineers have is they're often doing
their work without anything to run their code on. The target system is not going to be ready. They're doing concurrent hardware software development. Many times in my career, we had
to wait, and late in the development cycle, we'd get hardware and then find out none of our code
works. When I saw a test-driven development demonstrated, the idea to me was, oh, for
embedded code, we could do this and get good confidence built in our code
before we ever have the hardware.
When the hardware comes, we can find out if it integrates,
and hopefully we'll have less problems,
and we can move to the market quicker,
rather than having all the unknown of all this unworking code
when we first get our first board.
And that was really important.
But when I'm running on a small processor,
you want me to run my code on a PC?
Well, you're writing in C, okay?
Well, as close to C as I can get with my...
With your...
1980s-based development environment,
which I'm telling you,
Christopher is saving up a rant on that one.
You might want to update the old Intel Blue Box.
It's not that. It's not that.
It's just that embedded compilers are a little hokey sometimes.
And they definitely, there are tip-to-tip differences.
Yeah, so let me say a couple things about that.
So C is supposed to be portable, right?
It is.
Supposed to be, quote-unquote, portable language.
But then compiler manufacturers start putting in special help for you.
Like they make global variables that are their registers.
They make special keywords to say this function is actually hooked up to interrupt number one.
They extend C, right?
Exactly. That's what I mean.
Yeah. Okay. Well, that's too bad they're doing that.
This causes a big problem because now this code can only be run with their compiler and their processor.
It kind of makes me wonder.
Now, I assume first good
intent. They're trying to help you build a better system. But I also wonder if they're not trying to
handcuff you to their tools and their silicon also. Because pretty much if you use any of those
special things, you're locked into their stuff without having to do major changes.
I have a little snippet of code up on my GitHub account, jwgrenning; it's called the Legacy Code Challenge. And if you want to go grab that and see if you can convince that code to get into a unit test harness, it would be interesting, off of whatever target that might be. I forget what it's for. But I do encourage people to get their code off the target
because that's a huge bottleneck. If I write code for something I don't have hardware for,
how can I get my work done? Well, I can wait. Okay. If I can't fit any tests into my constrained
memory, how do I test it? I can manually test it.
Maybe that's okay for a small application, but for anything bigger, it's not.
So I encourage people to get their code portable.
Try to isolate as much of their code from the hardware
so it can be tested in a friendlier environment.
There's kind of a nice side effect of this too,
is when that hardware goes obsolete, which it's going to, right?
How do I know this?
It always does.
There's Moore's Law, right?
It's going to keep changing.
Then maybe the bulk of their intellectual property can move without major change if it's not tied to this silicon vendor's specific code.
And voila, we are back to modularity.
Absolutely. Managing of dependencies. This is one
of the lessons I learned from Uncle Bob. That's Bob Martin, right? Yeah, that's right.
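One common way to keep vendor extensions from locking in the rest of the code is a small seam like the sketch below. The names, the register address, and the TARGET_BUILD switch are assumptions for illustration, not any specific vendor's API; the point is that the extended keywords live in one small file while everything else stays portable.

```c
#include <stdint.h>

/* Application code includes only this portable interface. */
void LedDriver_TurnOn(void);
uint8_t LedDriver_IsOn(void);

#ifdef TARGET_BUILD
/* Target version (sketch): touches a memory-mapped register, and this
 * is where any compiler-specific keywords would be confined. */
static volatile uint8_t *const LED_REGISTER = (uint8_t *)0x40001000u;

void LedDriver_TurnOn(void)  { *LED_REGISTER |= 0x01u; }
uint8_t LedDriver_IsOn(void) { return (uint8_t)(*LED_REGISTER & 0x01u); }
#else
/* Host/test version: a plain variable stands in for the register,
 * so the driver logic can run in a unit test on a PC. */
static uint8_t fake_led_register;

void LedDriver_TurnOn(void)  { fake_led_register |= 0x01u; }
uint8_t LedDriver_IsOn(void) { return (uint8_t)(fake_led_register & 0x01u); }
#endif
```

When the silicon goes obsolete, only the small TARGET_BUILD section has to be rewritten; the bulk of the intellectual property moves unchanged.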
So are we going to end up with a processor simulator? How are we going to get away from recreating the whole hardware and software?
Well, we're going to write portable code, okay, for starters,
and we're going to fake out our dependencies on the hardware.
So like we did in the class today, we wrote a flash driver that interacted with a memory map device.
We had to intercept
the calls to two functions, ioWrite and ioRead, and we could put this thing we
call a mock object behind it. And the mock object, I would call it a simulator,
but it's not really a simulator. It's much simpler than a simulator because if
I'm going to simulate a flash device, it's probably going to be more complicated
than the code I need to write to drive the flash. So I'm not going to write a simulator,
but I can simulate a sequence of interactions with the flash device. And so this mock version,
this fake version of IO read and IO write, enables me to surround the code I'm working on
and thoroughly test it. I can go right from the design spec for the part, the flow charts,
and write code that meets those flow charts to the best of my ability, get that to work. And
then when I get the hardware, I marry these two things together and find out where I messed up.
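A minimal sketch of that shape in C, assuming hypothetical IO_Write/IO_Read names, command values, and register addresses (the class uses a real mocking framework; this hand-rolled test double just records the interaction):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t ioAddress;
typedef uint16_t ioData;

/* The driver only calls these two functions; at test time a fake
 * version is linked in instead of the memory-mapped accessors. */
void IO_Write(ioAddress addr, ioData data);
ioData IO_Read(ioAddress addr);

enum { PROGRAM_COMMAND = 0x40, STATUS_REGISTER = 0x0, READY_BIT = 0x80 };

/* A sliver of a flash driver: issue the program command, write the
 * word, then poll status until the device reports ready. */
void Flash_Write(ioAddress addr, ioData data)
{
    IO_Write(addr, PROGRAM_COMMAND);
    IO_Write(addr, data);
    while ((IO_Read(STATUS_REGISTER) & READY_BIT) == 0)
        ;
}

/* A hand-rolled fake: records every write, scripts every read. */
#define MAX_CALLS 16
static ioAddress written_addr[MAX_CALLS];
static ioData written_data[MAX_CALLS];
static int write_count;

void IO_Write(ioAddress addr, ioData data)
{
    written_addr[write_count] = addr;
    written_data[write_count] = data;
    write_count++;
}

ioData IO_Read(ioAddress addr)
{
    (void)addr;
    return READY_BIT; /* script the device as immediately ready */
}
```

A test can then call Flash_Write and assert on the recorded sequence, which is the "simulate a sequence of interactions" idea rather than a full device simulator.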
Okay. And now you have 10% of the bugs you would have had if you hadn't done any of this. If you'd just written the code, you'd have had a whole bunch of bugs.
But now we're at the oddity, the I misread the data sheet sort of bugs.
Yeah.
And that's a much smaller subset of all the possible ones.
Yeah.
And whether it's 10% or 10 or 2, I don't know what that is, but it's much less than the number of mistakes that you would normally make.
I mean, people are really good at making mistakes.
I used to think I was good at programming until I started doing test-driven development.
Then I discovered that I make about, I don't know, you were measuring me this week, about 10 per hour?
Or am I being kind to myself now?
I know some of them I made on purpose, but most of them were real. Well, the thing about test-driven
development is that the mistakes are different. If I let a bug out into the world,
that's a bug and it annoys me. It annoys me if a customer calls and says, this is broken.
I have one client where the customer is vocal and ends up on boards,
like forums and stuff, and complains about how the device doesn't do this special nifty
feature that was all promised. And I spent time looking at the bug and I realized it was a server
problem and it kills me that six months later, the server problem hasn't been fixed and the
customers are still saying the device is broken. And so that is a bug. That's a real bug. And even
the bug that makes it to QA, or even the bug that makes it to my colleagues and slows them down.
Those are bugs.
But when it's just me tippy typing, if it's just me in the compiler, I don't really call those bugs.
I'm not even sure I call them mistakes.
They're just, this is what I do all day.
I type, you know, I pound on my keyboard back and forth.
Well, actually, most of the time you're thinking.
Yes.
So most programming is thinking.
And it's not about typing.
Having good typing skills is certainly helpful.
But thinking, you have to have good, clear thinking skills.
And TDD helps us express what we want.
So the tests express what we want,
and then we make the production code do it. If you write code and you make a mistake, you know,
where, I don't know, for me, conditional logic, actually for a lot of people, conditional logic
is quite difficult to get right the first time. And the tests just help me get it done quicker because I can get feedback
immediately while I'm thinking about it. You were asking me a couple of minutes ago though about
abstraction layers and test doubles and processors. I'm thinking of something else. I'm just kind of
gathering my thoughts here. I do get stories back from people that have taken my class, and one gal who's an embedded engineer down in Texas, she sent me an email telling her story.
She said, well, I thought it was all kind of crazy, this whole test driven thing, but I thought
I'll give it a try because I don't have the hardware anyway. All I've got are the specs. So she worked for, you know, a couple months on her application and
her interface to the hardware, et cetera, and then finally got a board. And in her story,
she told me that there were three changes she had to make, that she had misunderstood what the spec
did, and then she was done. And she said, it's the first time I've ever been finished early.
She was two months early in her deliverable. Time that would have been spent grueling, going through
all the code line by line, trying to find out what's working, what isn't working.
She spent several days in integration and then everything worked and she was done early. It was
a really nice email to get.
Yeah, that's excellent.
I get these from time to time.
Yeah, that's got to make teaching nice.
Yeah.
Well, when you hear that someone has success with these crazy ideas, it's really helpful.
So how did you get into teaching?
You were an engineer.
Yeah.
I still, if someone asks me what I am, I'm an engineer.
I can't stop being an engineer.
I think I was one before I ever went to school, too.
Engineering is about problem solving.
I've always done that.
That's kind of been a theme on the show
is me trying to figure out how people
identify themselves.
Because
you and I have both written books, but I bet... have you ever introduced yourself as an author?
No, I'd introduce myself as an engineer that happened to write a book.
Yeah, or that happens to teach people. But I've met people who introduce themselves as authors
who have never had a published book. And yet that's how they see themselves.
They are authors, writers.
That's really cool, but I'm an engineer.
I agree with you.
Yeah.
So you don't introduce yourself as a teacher?
No.
Or instructor?
No, I say I teach, but I'm an engineer.
Okay, so you were an engineer.
Yeah.
And you wrote a book.
Yep.
And then you started teaching?
No.
You were an engineer and then a consultant
and then a teacher and then a book writer?
I'm not sure.
Really?
This is your career.
I'm not sure I can fill this in.
What I'm not sure about is that it's a long day
and you just did about a five combination thing
and I'm not sure what you asked me.
No, no, I'm sorry. Tell me about your career.
All right. So, in high school, in the seventies, a friend of mine was programming a computer at our high school.
I said that part already. And he showed me his computer listings and his punch cards.
And I thought, that looks horrible.
I'm going to stay away from that.
And then he said, no, come and check out the computer lab.
And he showed me in and there's a big scary guy in there and there's no windows.
And it's like, no, thanks.
I'm not going in there.
Definitely no girls.
There were no girls.
Well, I actually went to Loyola Academy, which is an all-boys Jesuit high school,
so there were no girls around anyway, so it would have been okay to go into the computer lab.
But so in college, I'm taking engineering classes, and in calculus,
the professor gets this bright idea that we're all going to program Newton's,
simulate Newton's method of
calculating the area underneath the curve. And we had these teletype machines for doing
some basic programming. And he showed us how to do a little bit of programming. And then we had
to go figure that out. And so my reluctance to get anywhere near a computer was ending because I had to do this.
And it actually wasn't so horrible.
Actually, it was quite fun to get to trick the computer into doing something for me.
And I liked it.
Get the computer, yes, trick the computer.
Trick the computer into doing something for me.
And so I stuck with this math teacher and got more experience at that. And then I picked up an
elective and then there was a required programming class. And then the next thing I was thinking is,
people will pay me to do this?
Yes, exactly. This is a scam.
People will pay me to do this? Yeah. Oh, okay. I know what I'm... yeah, this is my major now.
So, you know, it's part of the master plan. No, it just kind of happened.
My first job out of college was with a company that did contracts for the FAA.
And the day I walk in, my boss gives me a data book with the 8251 in it and said, go figure out how to work this thing.
I didn't know what that was.
I didn't know what a data book was.
And then we built a color weather radar display system with 16K of RAM and 16K of PROM. We did have to upgrade it from 12K. It was the first ever color weather radar for the FAA. It's kind
of a cool thing to work on as a young guy. And then I went into, I changed to a different job where we started doing C and embedded
processors like 8085s. And you can fill an 8085 with C code really fast. So we had to do crazy
things like bank switching of memory. And then we got a bigger processor in 86. And then,
you know, so my career was involved in building some cool stuff in the 80s and getting into some leadership and management roles.
And then getting kind of fed up with corporate life and then joining my friend Bob Martin in his consulting business in the mid-90s.
I had worked with Bob throughout the 80s at my second job.
And so we consulted together and that's when I started teaching.
Bob kind of taught me how to teach.
Cool.
Yeah.
But that's what you do full-time now?
I do teaching and coaching and consulting.
And then?
And then I do some writing.
I do projects for me.
Yeah.
Yeah.
Cool.
How much of what you teach is test-driven development and how much is just good design practices?
Are they separate?
No, they're not separate.
To me, they're not separate.
I've seen a lot of people write tests
that don't know how to design.
I've seen more people that can sling out code that don't know how to design either.
I find that the feedback you get from writing tests is very valuable to critiquing your work.
If you write code that's not modular, your tests will be a mess.
It starts to tell you there's a problem.
If you're willing to listen to what the code is saying and the tests are saying, you can come up with better designs.
So in my three-day class, basically the way it works is on the first two days, we spend a lot of time going through test-driven development examples and exercises and growing in complexity.
And then on day three, we get into design, solid design principles,
refactoring techniques, how do we keep a design clean for a long period of time.
And then the thing that everybody has been worried about the whole time,
in the afternoon, we get into legacy code, what to do about the existing code that we have.
And I do it this way, legacy code at the end, because I want to paint a picture of what could be, how nice code could be and how modular and independent it can be with these tests.
And then we get into the real world of, well, what's your real code?
Okay, so in the last day, we're talking about the techniques for dealing with legacy code.
And then the end of the class, either one or two days, we spend in the attendees code.
They bring their own code because it's hard to go from the simpler projects
and simpler ideas in the classroom to
their big mess, which is really what it turns out to be. Their problem code, I shouldn't call it a big mess,
their successful products doing great things in our world, but usually the code has suffered
and we spend time trying to get their code disentangled from their execution environment
and into the test harness so we can start to evolve that code
in a safer way with tests.
Because few people start a project from scratch.
I mean, usually you are continuing someone's project.
Right.
Very few of us get to write the first line of the code.
So legacy, I think that's important
because that's where the rubber meets the road.
That's where theory becomes practice.
You're going to write new code too,
but you're going to have to make it work with existing code.
So you need to know... So the rubber can meet the road in a number of different places.
So in the new functionality, totally new functionality can be test driven
but existing functionality that's going to be evolved needs to be treated carefully.
Treated carefully first by adding some tests to it.
We can't afford to add tests to everything.
So we're kind of strategic about how we add the tests. And then, you know, being careful, just getting really
good at being careful and not introducing unintended consequences. And then for legacy code,
do you treat it like third-party code? Or how do you treat third-party code, like APIs and libraries, RTOSes, graphics libraries, board support packages?
Okay, so if I'm starting from scratch,
what I would like to do is somehow abstract away a little bit from the thing I'm using.
So let's say I'm buying a third-party package to do something.
Usually a third-party package, be it an RTOS or a database
or some kind of file system or something, usually what we need is something focused,
meeting some need of our product, but what the vendor is providing is a general solution that
does way more than we need. If I let my code use
this third-party code directly, any piece we need, I'm going to get bound to that thing and it's
going to make my life difficult in the future if I want to change. It's also going to make testing
difficult because that thing is probably going to be tied to my hardware or something in my
execution environment, my operating system.
So what I would prefer would be to write an interface that I own.
So if I'm using a third-party operating system, I might slightly abstract the operating system.
I might say, how do I get time from the operating system?
How do I get a periodic wake-up call from the operating system? I want to have an API that
I own that does that, and I'll write a small adapter behind that thing that can interact with
the actual third-party operating system to give me those features. That way, if I change my mind
when I use a different one, I can limit the amount of work. But the tie-in to this with test-driven
development is if I own that interface,
I can create fakes for it, and it's more convenient.
I can build this interface.
This is one of the design principles.
This is the interface segregation principle from Solid,
which would tell me to design my code to a focused interface that my code needs.
Design the interface for the client's
need, not for what the server provides. Okay. Then you write an adapter in between the two.
And this gives you a lot of flexibility as you evolve your system.
That makes sense. And it kind of, well, okay. I believe in test-driven development. But one of the things that came up in class
today, and it was a little hard to answer, was: okay, I'm in, you've convinced me.
How do I convince my boss to let me have the initial ramp-up time?
How do I convince my engineers that this isn't going to slow them down? How do you convince
people this is a good idea? Well, you know, I can't convince them really. But what
I'll try to do is, well, what I want to know first when I'm working with an organization is,
do they recognize that there's problems? And the problems that they usually recognize is that we're late, and part of why we're late is
that we're chasing bugs, and life is not as good as it could be. This is one of the problems.
Now, we talked earlier about the cost of retest. One thing I'd like managers to be aware of is how we can't afford
not to automate our tests. If we don't automate our tests, that means after the second week,
I have to retest the first batch and then test the second for the first time. After the third week,
I get to retest two batches. After the fourth week, I'm retesting four batches.
And I don't really do all that retesting. I stop testing things
I should, and I start to accumulate risk. I think of it as tinder under
the forest; eventually there'll be a lightning strike, and it's all going to go
up. So I want them to be aware of that problem.
It's a very simple model. I've got an article on my blog about that, and it's called Manual Test is Not Sustainable.
So if you're interested in looking at that, take a look at that.
It's a very simple model, easy to communicate to people that aren't techies.
So I want people to be aware of that.
And the other thing, in just trying to get people to start, is that when we put off testing till the end,
we're going to have a lot of problems at the worst possible time. So we want to try to find a way to
get features to be written and work and stay working throughout the life of the product.
And that means the cost of retest must be low. That means some automation. So there's those things. And you're asking me how do I convince people? I make a point of telling
people that I am not really there to convince them. I will try to convince them that I'm convinced,
okay, and that I know that this helps me, and try to get them to see what's in it for them, so that they will try it
and see if they can convince themselves one way or the other. So for instance, at the beginning,
you know, I asked you and the others, what do you like about programming
and about development? And I hear about design and about, I like to program, I like to solve
problems, things like that. And then I ask, well, what don't you like?
And people tell me they don't like doing repetitive tasks.
They don't like testing.
That's usually somebody whose boss made them come to the class.
That's my third question.
Why are you here?
I don't like testing.
They think they're going to get at me because I don't like testing either.
Because testing, what they're thinking of, manual, repetitive, boring testing is
not fun. Test-driven development, on the other hand, happens to be problem solving and coding
things that people like. So I appeal to them, try to move them into doing more things they like.
Other things they don't like are writing documents and chasing bugs. And so I challenge people and
say, what if we could do less of the things you don't like and more of the things you do like?
Would that be interesting to you?
You know, will you try it?
And by the way, I'm here to make you try it for three days.
You know, so you're going to try it.
Three days is not that long in an engineering career.
Yeah, it's not that long.
I also encourage, you know, everybody in the class is usually younger than me. So I say,
hey, you guys have a lot of your career left. And I learned this after 20 years of experience. I
wished I had learned it earlier because it would have saved me a lot of headaches in the 80s and
90s if I had known these things, you know, rather than just at the turn of the century.
So it's really not up to me to convince anybody. It's up to me to
try to connect the problem that they see, which is I've got defects, it's making me late, it's
making me chase bugs, and to a potential solution to that problem. So as an engineer, I like to
approach it as a problem and a solution. What's the problem? I've got defects, I've got a bad
design, I've got such and such. What's the solution? Well, if we had a feedback loop around
our coding that showed us when we made mistakes and we could get them out, if we could discover
when we had design problems sooner, we could change those before it's too hard. Do you see
how these things connect and how there might be an improvement in your situation?
And so I try to appeal to the logic of that problem solving. And then I want them to experience it so
they can actually make a decision based on their own experience and not on something I'm saying.
So they'll experience it and see if they think it would help them.
That makes sense.
It's hard to be an evangelist for something that you barely understand.
Sure.
Yeah, and we had a couple of guys in the class today who felt their role was to go back and
convince their colleagues.
And I kind of warned them that that's not their job.
And by the way, they won't be able to, because I've actually tried to do that. Now, you
know, after having taught test-driven development for probably over a decade in one form or
another, Java or C++ or C, I realized I can't convince anybody to do it. I can't convince
them that it's a good idea, but I can get them to try it.
And so they can try and so they have an opportunity to convince themselves.
That was when we talked about how to convince somebody to use it.
My personal idea was a little on the attempted-sneaky side, which is...
I always appreciate that approach.
If you, if someone is giving you a hard time because they just hate the idea of it,
tell them to learn about it,
to understand it,
so that they can tell you why it sucks.
Yeah.
And when they come back,
either they will say,
it sucks and here's why,
and then that's a problem
and you can talk about it.
Or they will say,
it may not be the best thing ever,
it may not be the solution
to all of the world's problems, but it's better than what I had. And better than what I had is
a step. And that is a fantastic step. If you are actually moving forward instead of just to the
side, all right, let's keep going that direction. So I kind of liked the, well, you know,
tell them they have to learn it so they can tell you it sucks.
Yeah.
Well, you know, the worst time to make a decision about something
is when you don't know anything about it.
What? Really?
I know a number of people that that's just not true for.
Oh, well, you know, they're using System 1, I guess, right?
Oh, yes.
Ignorance-based decision making.
Okay.
Yeah.
So is test-driven development and Agile the one true path?
Is this the only way to do it?
Oh, I doubt that.
Is it the best that I know about right now?
Yeah.
People work better on smaller things and with feedback.
So we could look at lots of examples of monumental systems built with a lot of upfront work.
We're just going to plan it out and then execute, and they don't.
It doesn't work.
Incremental development was how software
development started. A lot of successful systems were built with incremental development.
Agile and TDD are forms of incremental development. We find that people are more successful at that.
There's some interesting writings. Craig Larman wrote an interesting article about
the history of incremental development,
which kind of goes through some of the older systems like Polaris submarines and Atlas rockets
and space shuttle that were developed incrementally with a lot of the techniques that we now would call agile.
And then there's other things that were developed later under the 2167A banner of planned and tracked that are disasters.
FBI wasting hundreds of millions of dollars on a record system.
I don't know if I, you know, there's something going on in the health care industry right now.
A big $600 million project that's not working so good. There's these monumental big efforts
that don't tend to work out
so well. Incremental development is about trying
things, getting feedback, seeing if it's what your customer wanted.
That's what we're... That's what I
like about it. Does it, does it fit everything?
I don't know.
Cause I don't know everything.
No one's ever accused me of knowing everything either, by the way.
So back to your number. You mentioned System 1, and I'm going to go off the
air with a little tidbit from Thinking, Fast and Slow.
And I'll explain a little bit of that. But one of the things that this book about psychology, which you and I have been talking
about, gets into is anchoring and some human factors. It says that if you ask someone to choose a
number between one and ten, due to just how our brains work, the number is usually seven.
Is your number seven?
It is not.
Okay.
But you asked me to pick a number
between one and 100, I think, right?
No, no.
I don't think it was one and 100.
No?
Zero and 100.
Zero and 99.
We're zero-based here.
Oh, okay.
Zero-based.
It's an embedded systems show.
Yeah, that would be better.
Why wouldn't you go up to something more sensible like...
255?
100 hex.
I don't know.
All right.
Can my answer be in hex?
Yes. Yes. New rule. Your answer can be in hex? Yes.
Yes. New rule.
Your answer can be in hex.
I'm going to have to change my number then.
But it's still not.
No, I'll leave the number.
Okay.
Never mind.
All right.
Well, I actually have more questions for you, but I think I'm about out of time.
I have all these questions about, well, what about this kind of test?
Does it count?
Does that count? Manufacturing test? Hardware-in-the-loop tests, sandboxes, online, offline.
But I'm tired and I think maybe it's time to set you free and we'll gather some questions and do
this again in a couple months. Does that sound okay to you? Sure. Do you have any last thoughts
you'd like to leave us with? Gee, let's see. Any last thoughts?
I had some a while ago,
but being what time it is now of the night
and the long day.
And the beer.
Well, there's only one beer here,
but we could probably solve that problem.
I also like to,
if you know about the Untappd website.
The Untappd website. No.
The Untappd app.
U-N-T-A-P-P-D.
It's the beer drinker's Twitter.
So my handle on that is JWGrenning2,
and I have some beer drinkers from around the world
that will toast me from time to time.
That's kind of a fun one.
So if you're listening to this podcast,
shoot me a message and we could be
Untappd friends. My guest has been James Grenning. Thank you so much for joining me.
Thanks for having me. Wonderful to be with you tonight, Al.
Links will be in the show notes if you want to contact James via Twitter or via Untappd,
the beer app.
If you have comments for James and can limit yourself to 140 characters,
Twitter's great.
It's @jwgrenning, and that will be in the show notes.
Otherwise send an email via the contact link on embedded.fm and I'll make sure it gets to the right place.
That is also how you will be getting your signed copy of Test-Driven
Development for Embedded C.
Send me a number, and I'm sure James is going to leave me his number,
and we will take whoever's first. Hopefully I will announce it next week.
This week's last little tidbit is from the book Thinking Fast and Slow, since James and I have been talking about it both offline and online.
It has to do with what System 1 is and how that all works.
It is a great, if huge, book all about how to manipulate your brain and other people's brains.
Please use what you learn for good.
Here's one of the things I learned. When faced with a difficult question,
we often answer an easier one instead, usually without noticing the substitution.
What does that say about your code?