Two's Complement - Testing in C++
Episode Date: January 7, 2021. Matt and Ben talk about the eXtreme Programming engineering practices, such as Test Driven Development, and how to apply them in C++. Matt tests a widget and some grommets. Ben complains about slow build times.
Transcript
I'm Matt Godbolt and I'm Ben Rady and this is Two's Complement, a programming podcast.
Hey Ben, last episode you and I were discussing how we started out with the same goal,
that is to be in the games industry, and I spent a decade or so doing so,
whereas you, through no fault of your own, ended up not working in games.
Yes.
Sort of like 20-odd years into our careers, we've met up,
and we've been discussing kind of the things that went well,
the things that didn't go so well, the things that differ.
And last time we specifically were talking about testing, and how I hadn't really learned how to do testing as a first-category
thing. It was something that I learned much later in my career, whereas you've
kind of made a thing about testing.
Yeah, well, yeah, it was sort of all of the XP practices. Because,
like you said, you know, I didn't get to fulfill my dream of becoming a game
developer, you know, falling like Icarus covered in feathers and wax from the sky into the
reality of the cold, harsh world.
But what I did do is I got really into sort of like extreme programming and extreme programming
engineering practices, because that's really what that whole thing was about.
When you say extreme programming, I hear people like bandy it around.
But what do you what do you mean when you say extreme programming?
So the capital letters. Yeah. Yeah. XP.
I don't know why they capitalize it like that. But, you know, maybe it sounds cool.
Yeah, it sounds cooler that way. But so XP was this process, I mean, some people still use it, but it's like this
process for building software that, you know, was invented or developed as part of
various projects in sort of the late 90s and early 2000s. And it was an engineering-focused
approach to building software, as opposed to a lot of the other approaches at that time, which were
much more sort of, I don't know, business-oriented, I guess, is the way that you'd describe it.
Not programming.
Waterfall type stuff or not even that.
I mean, I'm not really talking about that.
The contrast that I have in my career was... the thing that I first started doing when I graduated from school was working for this company in Houston doing data
visualization. And they used a process called RUP, the Rational Unified Process.
And I tried that for like... Rational as in Rational Rose?
Yes, as in Rational Rose. The actual, yeah, Grady Booch.
Yeah, exactly. And so, you know, I tried that for a year and I was like, this is gross.
And found XP and got way into it from there. So yeah, so my career, instead of
going into the games industry, I got way into agile engineering practices, like, um, Test Driven Development,
pair programming, continuous integration, and all that stuff. And I've been doing that basically
since... you know, for almost 20 years now. Which is very, very different from the grind-oriented programming that I experienced
in North London: living a mile away from where my office was, making it home after closing time from
the pub, staggering into bed, getting back up again, going into work, and working long hours and
grinding, and essentially applying human effort of will, as opposed to actual sound design practices,
to making software work,
which, you know, I paint this picture, but
towards the end, in fairness,
we were starting to pick up on some practices.
We did have some tests,
but I think, like I said last time,
most of them were assert-based.
And being in the games industry,
we were working mainly in C and C++,
you know, and some assembly and some other weird things that we used.
But the mainstay of that was developing for consoles that were not standard.
They weren't something you could just run and deploy to.
There were steps involved in actually getting it to run.
You know, in some cases, there was a serial cable and you had to, like, squirt your software down into it to make it work.
They had different architectures from the host machine that you were running on,
so you couldn't just build and run them locally.
And so testing was very much something that we did on a subset of important code
that we bothered to get ported to run on x86.
Or if we were working on a PC game,
then obviously we could do some amount of it. But it
was basically assert-based testing. But even then, even accounting for all those things, right, now
fast forward 20-odd years, and we're talking about server software for the finance industry,
and we have all the capabilities open to us, and yet testing is still hard in C and C++.
And so I thought one thing we could talk about this time specifically is: why is that the case, and what can we do?
Yeah, yeah. I think that's a fantastic idea.
So you have described it to me as: good design can often yield testability, or perhaps the other way around, right?
If you design for testability, that is a good design.
So maybe there's something wrong with our design.
So let's start from like what makes something easy to test,
what makes something testable,
and then we can sort of, I'll try and riff on
why that maybe doesn't work for me,
or maybe it does.
Right.
Maybe it should.
I mean, I think for me,
and I mean, this is my own personal belief,
but code isn't well-designed unless it's testable.
That's just a criterion of design.
It's like... and it's a much more concrete and useful criterion than I think a lot of people use.
There's sort of, you know, there's a lot of, like, you know, winning by style points that happens in
some kinds of software design, where it's like, oh yeah, well, this has this certain shape to it,
or, I don't even know. "Can I write tests that run quickly?" is a criterion that
is right. If you can't do that, it's not well designed.
And specifically, as well, you mentioned
"quickly" as, like, another thing you've kind of grafted on there. We talked about it
last time, but that's an important part of it.
No, no, that's a good point. The
kind of tests that I'm talking about, they are very focused, they can run very quickly, they're
decoupled from things like the network, the file system, other services,
you know, hardware, even in certain situations.
Like, if you can't write those kinds of tests,
your code is not well-designed,
by my personal definition.
Yeah, no, I can see an argument for it.
I think, you know, we might politely disagree
about where the line is under some circumstances
because of pragmatism, which is...
Yeah, I mean, I was just going to say,
I think one of the places where that gets difficult
is when you have constraints on your code
that are related to hardware, related to performance,
related to multi-threading,
is another dimension we talked about last time.
And so, you know, one thing that I would kind of love
to get into in this episode
is talking about, like, in C++ specifically,
some of the hurdles to doing things
to increase testability
that I would think of as good design,
but you would maybe turn your nose up at
and be like, yeah, okay, I get why you're doing that.
But if you do that, then it's going to cost you this, right?
And do you really want to make that trade off?
Right. I think that's it.
I think that you've kind of cut to it there. Usually you only pick C++ if performance is one of your primary goals. I don't
think many of us rub our hands with glee and say, I'm going to write a mainly string-processing
application that's not performance-critical, in C++. That's not the first thing we'd go for. You
know, you'd pick Python, you'd pick Perl, you'd pick Ruby, any of those things.
JavaScript. They work beautifully.
So they have a great user experience out of the box.
C++ takes a lot of hard work to even get up to the point where you've got a build,
let alone an executable that does something useful.
So if you're picking performance, then one of the things that C++ leverages is that
if the compiler knows more about the way your software
is put together and how your code fits together, it can do a much better job of optimizing it.
So you put direct calls in your code, where the source code is visible to the compiler. For
example, if I have a class, and I have methods in that class, and some of those class methods are actually even
in the header file (and there are compiler technologies that make that less important), but, like,
you know, the implementation is there for everyone to see in your header file. Then the
compiler can often inline it, do a lot of great optimizations, and give you some super fast code,
which is wonderful. But you have tightly bound and coupled the code that calls
that function to the specific implementation of that function.
And that means that there is nowhere to put a little break in between, to say, well, I'd like to
observe the interaction between your component and the thing that you're calling. There is nowhere to
put a test in there. Now, obviously, in other languages, and you can do this in C++ as well,
you would use a mock often to say, well, I don't care about what you are doing.
Moreover, I just care about the interactions you have with another object.
So you mentioned file system.
There's a great one.
In C++, you probably would just use a std::filesystem object.
And that's great.
It comes with the
modern versions of C++, and you get access to files and paths and all those good things, right? But
it's a concrete object. There's no seam for me to mock it out, unless I go out of my way to add one
in. So even though file access is going to be slow, and we know that it's not a highly
performant area of the code, by default we can't interrupt
that communication and say, well, I'm not going to give you a real file here, I'm going to give you a
pretend file.
Yeah. And I mean, that sort of thing, dependency injection is what people generally refer to it as,
is a common technique for testing, right? Um, but I kind of wonder, it's like, you know, you kind of
just said it: like, well, if we're going to be talking to a file system anyway,
performance is kind of out the window, right?
Like, at least in some respects.
So at least in that area, I would imagine, like, yeah,
you might have to roll that abstraction yourself.
You might want to roll that abstraction yourself,
depending on what you...
It might be a useful thing for you to do just because of...
Yeah, and I mean, you know, I actually think that this is one of the great
benefits of testing in general: when you are forced to decouple
things, it makes you consider what parts of the dependency you actually need, and which ones are just kind of coming along for the
ride. And although it seems convenient, oftentimes, to have the full array of possible ways to interact
with an object or a system or anything else, it is often quite nice to focus those interactions
to be only what they have to be. Because, A, it reduces the number of
possible code paths through your code, right? Like, if you're only using one thing, then you just have
to make sure that one thing works; you don't have to make sure that everything works. But it also
sometimes makes it a little easier to understand, especially if you're coming into a
new code base. You're like, okay, well, I know how this function call works, and they're using it
all over the place, as opposed to: there's 12 of them,
and I don't know what they do.
So giving some thought to that, I think it has value.
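The kind of seam being described here can be sketched like this. All the names (FileReader, FakeFileReader, ConfigLoader) are invented for illustration, not from any real codebase: the component only asks for the one thing it needs, reading a named file's contents, and the test hands it an in-memory fake instead of the disk.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <utility>

// Hypothetical seam: the component only needs "give me the contents of a
// named file", so that is all the interface exposes. No seek, no listing.
struct FileReader {
  virtual ~FileReader() = default;
  virtual std::string read(const std::string& path) = 0;
};

// Test double: serves canned contents from memory, no disk involved.
// A production implementation would wrap real file I/O (omitted here).
class FakeFileReader : public FileReader {
 public:
  void add(const std::string& path, std::string contents) {
    files_[path] = std::move(contents);
  }
  std::string read(const std::string& path) override { return files_[path]; }

 private:
  std::unordered_map<std::string, std::string> files_;
};

// The component under test depends only on the narrow interface, so tests
// never touch the real file system and stay fast.
class ConfigLoader {
 public:
  explicit ConfigLoader(FileReader& reader) : reader_(reader) {}
  std::string load(const std::string& path) { return reader_.read(path); }

 private:
  FileReader& reader_;
};
```

A test then constructs a FakeFileReader, registers the contents it wants, and asserts on what the ConfigLoader produced, with no disk access anywhere.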
So the kind of things you're talking about, perhaps,
are, you know... obviously there's the...
essentially, the global space is an object,
if you want to think of it in that way,
or a scope.
It's a scope, you know. Like, calling just
a std::filesystem function on a thing
to get a file is in the global scope.
But if I deliberately
limit myself and say, well, no, this component I'm talking to only wants to load
cache files, and it only really cares about the contents of those cache files, why don't I just
hand it something for which that is the interface? And now I've kind of documented the interactions,
and I've limited the interactions in a useful way. So the abstraction might actually have power above and beyond the testability that might be the initial
reason I put it in place. I might put it there, and I do this all the time. You know, obviously
we've worked together, we've done some of these things together before. But this is the
kind of thing that I do. And I've had discussions with other C++ programmers who have come
down on the side of saying, I wouldn't necessarily add an abstraction purely
for testability. And I've kind of gone back and forth on that, because I have done it, I've added
stuff, you know. Like, exactly, the file system is usually the arena where this stuff crops up. It's like,
yeah, okay, I'm going to add something which provides a file, or provides a config loader, or something
like that. And then somewhere there's a thing that can do files, but most of the time it's my
fake one for testing. But it was interesting finding that there were people who wouldn't do
it just to add testability. But very often, when I've done it, I have discovered that it's a useful
thing. And perhaps that's what you're alluding to here. You've mentioned something
about abstractions before, that they are... what, they're discovered?
Yeah, abstractions are discovered,
not created, right? You need to sort of see how your system's used
and then pull the abstractions out.
Right, and I've worked with a good friend of mine before now
who has also referred to bad abstraction layers
as obstruction layers,
which I think we've all been there where you're like,
well, I just need the thing on the other side of this wall.
So there's always a danger if you're introducing these things
where maybe you are decoupled from the file system in your config processing system.
And you're like, well, I just want to see if the file exists.
And now I can't. I'd love to reach out to the disk.
And now you're like, well, now I have to actually add this to the interface.
But to your point, if I add it to the interface, then maybe I'm documenting more about the things that I actually need to do.
And I mean, I think you want to make those kinds of changes easy,
but make them intentional, right?
Like if you're going to sort of break the abstraction a little bit and say,
okay, well, I need to see if this file exists,
design your code such that it's easy to add that,
but then add it only when you need it.
And I mean, I think one of the other great benefits
that you get out of doing this is you sort of get a very low cost design for reuse.
Like, in the file example, if you have an abstraction that says, okay, I can read the
contents of this file, I can stream it in, then
it doesn't really matter if I'm reading from a file or from a socket; the code will work
basically the same either way.
But the second that I try to seek,
then that abstraction now has been completely destroyed.
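The file-versus-socket point can be made concrete with a sketch. The names here (ByteSource, StringSource, slurp) are invented for illustration: because the interface only promises sequential reads and deliberately omits seek, the consumer genuinely cannot tell a file from a socket from an in-memory test source.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <utility>

// Illustrative abstraction: sequential reads only. Adding seek() here
// would rule out sockets and destroy the file/socket interchangeability.
struct ByteSource {
  virtual ~ByteSource() = default;
  // Returns up to `max` bytes; an empty string means end of stream.
  virtual std::string read(std::size_t max) = 0;
};

// In-memory implementation, handy as a test double. A FileSource or
// SocketSource would implement the same two-line contract.
class StringSource : public ByteSource {
 public:
  explicit StringSource(std::string data) : data_(std::move(data)) {}
  std::string read(std::size_t max) override {
    std::string chunk = data_.substr(pos_, max);
    pos_ += chunk.size();
    return chunk;
  }

 private:
  std::string data_;
  std::size_t pos_ = 0;
};

// Consumer code written against ByteSource works identically whatever
// the bytes came from: file, socket, or this in-memory source.
std::string slurp(ByteSource& src) {
  std::string all, chunk;
  while (!(chunk = src.read(64)).empty()) all += chunk;
  return all;
}
```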
And sometimes we make decisions about how to implement an algorithm.
We're almost like looking for a reason
to choose one thing over another, right?
Like there's eight different ways I could do this.
Which one do I do?
If you have the whole array of possible file system,
you know, tools at your disposal, you might choose one arbitrarily. I do? If you have the whole array of possible file system tools
at your disposal, you might choose one
arbitrarily. If you've focused
it to be only a certain set of
things, the choice is maybe,
okay, well, I could arbitrarily pick one of
these eights. Well, I'm going to choose this one because it's the one
that I've already abstracted. And that naturally
leads you toward designs that are more
reusable because it's like, oh, well, you could also
do this with a socket and you'd never know the difference.
So, one of the... we're going to go off track from C++ pretty quickly, but I would like to get back
to it; there are some things I'd like to talk about. But one of the things I've seen in other programming languages,
in their mocking setups, is their dynamism. So I'm thinking Python specifically here.
Their dynamism allows for the kind of thing where you can monkey-patch out the global file system routines.
And just say, hey, I want to test this thing.
I know, nudge, nudge, wink, wink, that it's going to open a file using this particular API.
You essentially hack the runtime to say, no, no, if you call file.open, do this instead.
And you temporarily install
that patch. Then you can write your test, and then you can say, yeah, okay, when I call the config system,
I know it loaded the file, and I was able to provide it the file, and I could assert that that
was the case. But that's so brittle. It's so brittle because I could reasonably refactor...
I don't know if refactor is the right word, let's not talk about that, but I could
easily change, without too many external obvious changes,
the implementation of my config loader to use a different API.
There are hundreds.
As you say, there's a dozen ways you could have loaded a file.
I could mmap the thing.
I could open it.
I could use Path.open.
I could use whatever.
And now suddenly I've broken a perfectly reasonable test of that system,
because I just chose not to use the API.
But if I force it through an interface that I designed ahead of time,
to say, well, these are the kinds of things you can do to the files
in this context, then necessarily I'm not open to that breakage.
Now, obviously, I don't get the choice of doing that in C++,
short of some absolute heroics with the preprocessor,
hash-defining a whole bunch of
gunk to be things that they aren't, which is so dreadful I'm not even going to go further down
this line of thought. But actually, that does lead me to the other thing I wanted to talk about, which
is: when I do testing in C++, I typically do think of adding a virtual layer. So this is a class with virtual methods,
which has a performance, a runtime performance, cost.
And I've long argued back and forth, in various places,
about what the costs of that really are.
And certainly back in the days of the games industry,
the cost really was the indirect call instruction
that it necessitated.
Nowadays, that's less the problem. It's more that
it's a barrier to the optimizer. The optimizer can't see across it, doesn't know what it's doing.
But even that's coming down. But let's hold that to one side. It means that you are definitely
designing your software ahead of time with a very specific, strict contract. And that's the
interface definition. There's another school of thought that lets you use template parameters
into a function or a class, to hand it essentially a policy object
that says, this is the thing:
when you want to use a file, use this object, this type of object.
And that's a compile-time choice.
The compiler can see which implementation you actually handed it,
and it can therefore generate,
again, perfect, fast code paths through it. The virtual is a runtime decision that the compiler
doesn't know you've switched out on. So there is a way of having a lower-overhead,
runtime-overhead-wise, version of essentially dependency injection. But it's a sort of insidious one, because
every type you pass around your program
carries with it the parameters
that were given to that type.
And so in your, like, real program,
you might have,
oh, I'm using my config loader,
and its type is ConfigLoader,
angle brackets, RealFileLoader.
So that's, like, the policy that it's going to use.
But when you're using it in test, it's a ConfigLoader, angle brackets, FakeFileLoader. But then that's all well and
good until you want to embed that in another object, and now that other object carries with
it the same dependency. So transitively you pick up all these things, and it becomes harder to
deal with. It's an interesting thing that I don't think many other languages have an analog to, and
I don't know quite where I'm going with it, other than I tend to not use it, because I hate
the compile times that come along with it. And there are so many tricks you can do in the code
that rely on the actual concrete types, because you can interrogate them. And it's a commonly done
thing to say, well, what was my template parameter? Does it have a ::this? Does
it have a... what is the type of its ::size_type? I can dispatch on that.
Now the interface is not opaque to the implementation that's using it. You can actually
inspect the implementation you were handed and make decisions. Which means that you can actually
have a different... you know, I could actually use a trick to detect if it was really a fake
one that I'd been instantiated with, and do something different in my implementation. That
could mean that you test one thing, and in your non-test case it does something different,
right? And I don't like that. You know, I like the idea that, almost, essentially, the library that I
build, that I link against the test executable, is the same library I then link against
to create the thing that I'm going to ship off to production,
as opposed to completely recompiling it
with essentially a set of preprocessor,
well, compile-time-processed, parameters
that could change the behavior.
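The compile-time flavour of dependency injection being described might look roughly like this. Again, the names (ConfigLoader, FakeFileLoader, Application) are illustrative only; the point is both the zero-indirection benefit and the transitive leakage cost.

```cpp
#include <cassert>
#include <string>
#include <utility>

// Illustrative policy: a test loader that serves fixed contents from
// memory. A hypothetical RealFileLoader with the same load() signature
// would do actual file I/O in production.
struct FakeFileLoader {
  std::string contents;
  std::string load(const std::string& /*path*/) const { return contents; }
};

// Compile-time dependency injection: the policy is a template parameter,
// so the compiler sees the concrete type and can inline straight through
// it. No virtual dispatch, no optimization barrier.
template <typename FileLoaderPolicy>
class ConfigLoader {
 public:
  explicit ConfigLoader(FileLoaderPolicy loader) : loader_(std::move(loader)) {}
  std::string load(const std::string& path) { return loader_.load(path); }

 private:
  FileLoaderPolicy loader_;
};

// The cost discussed above: anything that embeds a ConfigLoader must now
// also name (or be templated on) the policy, so the test/production
// distinction leaks transitively through every enclosing type.
template <typename FileLoaderPolicy>
struct Application {
  ConfigLoader<FileLoaderPolicy> config;  // carries the dependency along
};
```

In production code this would be instantiated as something like `ConfigLoader<RealFileLoader>`, while tests use the fake; the two are different types, which is exactly the "not the same library I link against production" objection.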
I just realized what I've just been doing is ranting
for the last two minutes about this particular topic,
as I feel so strongly about it. But it's a valid way of achieving this, and I've seen it used
to great success. And in fairness, I have used it sparingly, in contexts where I feel it doesn't
leak outside of a small component. Which I guess brings us right back to design, and maybe that's
ultimately what makes me feel so uncomfortable about this: it's an abstraction that
leaks throughout the entirety of my design, transitively, and I don't really want
it to.
I want to hide stuff away.
Without appealing to that kind of virtual-method or interaction-based testing, typically what
I find myself doing, when I am writing testable C++ code, is trying to chew off the
smallest possible thing and test it as a component, and then test it in aggregation, and then test the
aggregation, and then test the aggregation of that. So it's a very sort of build-up approach, and I'm not testing
the specific interactions between components, because I can't see into them. But I know
that my grommet works, and I know that my screw works,
and I know that my nut and bolt work.
And then I kind of sort
of blindly trust, a bit, that
putting them all together into a widget
works. But I do test the
widget. It's not like I'm not testing it.
But I know that I'm leaving
something on the table there by not being able to test
all of the interactions between the two.
But I still think it's not an invalid way of testing in general.
No, I think that's a perfectly valid way of testing. And certainly dependency injection
is not the only way to write testable code. Far from it. If you're working in a functional
language, dependency injection is not even a thing. Right.
Right. So, and I mean, I think the pattern that you're describing of building small components,
whether they're classes or whether
they're functions, and then assembling a system out of those, and then writing tests for the
assembled pieces can work. And I have certainly done testing that way. The thing that you can
run into when you do that is you sort of run into the problem of when you make changes, you wind up with a whole bunch of
failing tests. So in an ideal world, and not even in an ideal world, that's a stupid phrase. People
say that all the time. How about the real world? In the real world, when you do things in a certain
way, what you get is when you introduce one bug into your system, you get one failing test. And I have
built systems that have that property. It's not impossible. It's not even really that hard.
But obviously, working in different environments, different languages, different domains can make
it a little harder. And if you are working in a language where, for other reasons, you wind up
composing smaller components
into larger components,
and then testing through the larger components,
in addition to the tests
that you might write for the smaller components,
then if you're trying to test those interactions,
the downside of that will be
that, if you go to change a component,
you might have dozens or hundreds of tests failing.
If I break my widget,
then obviously everything that depends on it
essentially is fair game.
Or rather, if I break my grommet
that's in the widget,
then all the widget tests are likely to fail as well,
as are all the things further up the chain.
Right, right.
And some of those things may be valid, right?
Yeah, yeah.
But not all of them will.
And being able to...
this kind of gets back a little bit to what we were talking about with the file system, where if you have those abstractions, you can make a clear distinction between what's relevant and what's not, and sort of prevent this.
When you're testing that way, you almost want to... I don't know, I kind of feel like you almost want to aim for a flatter structure.
Like, the ratio of leaves to nodes in your graph: you want more leaves,
you know, as opposed to the total number of nodes. I suppose... this just literally now dawns on
me, and the thought is that in something like a C++ application, the higher up the
abstraction, or even the composition tree, you get, necessarily the fewer the interactions
between those larger components that you've built are. To the point where you probably will get to a
pragmatic point where you're like, well, this is my GPU. The interactions with the GPU are: turn the
screen on, draw triangles, here's a giant list of triangles, clear the screen, and flip the page
buffer. Those are, like, the top-level interactions I'm going to do. And at that point I'll gladly
take the virtual overhead, whatever it is, because I've only got a dozen different interactions,
and they're infrequent, both in terms of the code
and also the runtime.
I don't have to do them very much.
So the classic example... sorry, the reason I went to the GPU is that one of the examples
I pull out when I talk about this,
how to decide when something should be
a virtual interface or not, specifically in C++,
is an interface to a texture or a screen.
So you can get: how wide are you, how tall are you.
You probably don't want to have a plot-pixel and a get-pixel,
because that's too tiny a piece of work.
It's a single machine instruction usually, or two machine instructions.
So you're going to necessarily hamstring the compiler if you implement it as a virtual method. So you think, well,
okay, what will I do? I'll allow access to, maybe, locking: getting a pointer to a contiguous
region of that area, and then I'm on my own. I can monkey with it, I can read and
write pixels directly, or I just get a pointer to the whole buffer, or whatever. So that's a great sort of
dividing point: the number of interactions you're likely to have is so high you
probably wouldn't abstract it. But once you get to the, yeah, flip-the-page, draw-a-triangle... maybe not draw-a-triangle, because
nowadays you want to draw so many triangles, it's not a big deal. But once you get
further up the hierarchy, that's the point where you say, okay, I'm going to put a seam in here, and I'm going to divorce this part of the code from anywhere else.
And anything else that wants to interact and test with this part of the code, I can always put a fake one of these in.
Here's my fake GPU.
Here's my fake screen.
And then I can say, sure, you can paint all these pixels.
Yeah, go for it.
Do your thing.
And then I can look at it afterwards.
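Here's a sketch of where that line might fall, with all names (Screen, FakeScreen, fill) invented for the example: width, height, and the coarse "give me the pixel buffer" call are virtual, but individual pixel reads and writes go straight through a raw pointer, so the hot loop pays no indirect-call cost. A test paints through the interface and then inspects the fake's buffer afterwards.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Virtual only at coarse granularity: no virtual plot_pixel/get_pixel,
// which would be an indirect call per pixel.
struct Screen {
  virtual ~Screen() = default;
  virtual int width() const = 0;
  virtual int height() const = 0;
  // One coarse call hands back a contiguous buffer; after that, pixel
  // access is direct and the compiler is free to optimize the loop.
  virtual std::uint32_t* lock() = 0;
  virtual void unlock() = 0;
};

// Fake GPU/screen for tests: just a vector of pixels in memory.
class FakeScreen : public Screen {
 public:
  FakeScreen(int w, int h) : w_(w), h_(h), pixels_(w * h, 0) {}
  int width() const override { return w_; }
  int height() const override { return h_; }
  std::uint32_t* lock() override { return pixels_.data(); }
  void unlock() override {}

 private:
  int w_, h_;
  std::vector<std::uint32_t> pixels_;
};

// Code under test draws through the interface; tests can then look at
// the fake's buffer to see what was painted.
void fill(Screen& screen, std::uint32_t colour) {
  std::uint32_t* p = screen.lock();
  for (int i = 0; i < screen.width() * screen.height(); ++i) p[i] = colour;
  screen.unlock();
}
```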
And there's no coupling there between the actual implementation and the thing. Whereas the GPU
itself may be built out of components that are more tightly coupled, and are tested more in that
way that we said before. It's sort of incremental and transitive: you kind of
build up this idea that the GPU component is okay, because transitively all of its components
tested okay, and in aggregate they tested okay. So maybe that hybrid approach is the right approach anyway, or the pragmatic approach,
under these circumstances.
Yeah, absolutely. And in fact, you know, when I was working at that
data visualization company in Houston that I mentioned, I did exactly that with 2D graphics.
You know, we built a layer of abstraction on top of the graphics library. We were actually doing
this in Java at the time, which was a little weird, but it worked.
And we built a layer of abstraction on top of the 2D graphics system for testing purposes.
But it also let us do a lot of other very interesting things.
It made it a lot easier when we needed to add functionality like, oh, can you make a screen capture?
It's like, well, yeah, I can just turn this graphics object
that I have into a buffer,
and now you've got your screen capture.
Of course.
We actually even wound up doing a thing,
this was also for testing purposes,
although a very different kind of testing,
where we would test some of our graphics code
by rendering it into a buffer
and then running it through a vectorization program and seeing if
we approximately got the same vectors out
that we put in. That's marvelous.
Yeah, and I mean, it was a little clever.
A miracle that worked. It was a little
clever, but it sort of told us some interesting
things about how it worked. When you say clever, are you meaning
that in the slightly pejorative sense of the word clever?
Only slightly pejorative.
It was
a technique that was
interesting that told us some things about our code.
I wouldn't recommend it as a general purpose
technique for testing graphics, but it was
possible. My point here is that it was possible
because of this abstraction that we had
created. And so
those kinds of things, I think to your
point, that is a perfectly valid technique.
And I think, thinking about where you want to add
those virtual calls in C++, or, you know,
the sort of layers of abstraction in whatever language you're working in,
it's obviously not just testability that's the only
thing you should be thinking about there.
It's also, you know, what are going to be the
other design impacts, and the other, you know,
performance impacts, of doing this.
But I really like your point about, sort of, the higher up you get, the fewer interactions there really seem to be. I think
that's quite insightful. From a C++ point of view as well, from a build point of view and whatever,
the less coupling you have between components,
it's certainly possible to leverage that to make your builds faster, because
you can carve the world up into parts that are dependent or not dependent on the interface
changing as opposed to some typo in a comment in the the implementation of one of those functions
or whatever you're at and that sort of comes back round to the the the conversation about fast changes to testing so obviously the big um problem
the elephant in the room uh with c++ well it's not really an elephant because we all know it's
there and we can all see it and it's like the build time and your sort of worldview of testing
is and i've seen you do this,
you have a watch command running in one window
and you've got VI in another window
and you're literally saving.
And as you say, your save is a micro commit to you
and the watch immediately notices that you save
and it runs all the tests.
And it's essentially a live dashboard
of where you are in your process of developing.
And that's super important and then
it the rules of eights which we talked about last time if it's 800 milliseconds that's instant and
you're like i saved yeah that's cool or whoops no i made a mistake or no that doesn't work well
there's a cool feedback loop there. C++ out of the box tends not to have that. And in my experience (at least part of my career I spent
writing my own C++ parsers, to try and make a whole new way of even compiling C++, to try and solve
this very problem), you have to think about that from the get-go in C++. It's not something
you graft on at the end. A bit like, as I say, right? You know, it's harder to
come in afterwards and write tests. Right.
You start from the ground up and say: okay, my build,
in the limiting case of having just
whatever test library I've
picked (you know, I like Catch2, people like
Google Test, there's a few of them),
linking with a simple test
that just fails, or just succeeds,
or one of each, just to prove to yourself
that you're actually capturing it: can I even get that building and running in a somewhat interactive amount of time?
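That minimal starting point might look something like the sketch below. The harness is hand-rolled purely so the example is self-contained here; in a real project you would link Catch2 or Google Test instead, and the framework would supply the assertion macros and `main()`.

```cpp
#include <cstdio>

// Tiny stand-in for a test framework, only so this sketch compiles on
// its own. It counts failures and prints one line per check.
static int failures = 0;

static void check(bool ok, const char* what) {
    std::printf("%s: %s\n", ok ? "PASS" : "FAIL", what);
    if (!ok) ++failures;
}

// The "simple test that just succeeds" described above: one trivially
// true check. If this builds, links, runs, and prints, the
// edit-compile-test loop itself works. (While calibrating, you might
// also add one deliberately failing check to see the failure path.)
static int runSmokeTests() {
    check(1 + 1 == 2, "arithmetic sanity");
    return failures;  // 0 means everything passed
}
```

Wire `runSmokeTests()` into a two-line `main()` that returns non-zero on failure, then time the save-to-result round trip; that number is the budget the rest of the build has to fit into.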
And then you grow out from there, and you take a deep breath every time you cross some threshold,
an eight-second threshold. And again, we're never going to be at 800 milliseconds in our world, I don't
think. Right? You know, single-digit seconds is fine. And then there are ways and means
of both breaking up your code and making it compile faster (which we could perhaps talk about
at length another time), but also making it so that your components are split away from each other, so
that you can run tests on just that component, be it a single file or a single clump of files that
go together. And obviously your build process heavily interacts with this,
so that you can just say: look, I only changed this file.
I know that loads of my code, in theory, is affected by this change,
but I just want you to build the library and the test for the library
and ignore everything else.
Because right now I'm in that mode where I want to do that quick loop back,
and all the other compiles can wait until I've proven that I've gotten this right.
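One common way to get that interface/implementation split in C++ is the pImpl idiom. The sketch below uses invented names for illustration: client code compiles against a small, stable header, so churn in the implementation, comment typos included, recompiles only a single translation unit. Header and source are shown together here to keep the example self-contained.

```cpp
#include <memory>
#include <string>

// --- widget.h: the stable interface that dependents compile against.
// Only changes to this file force clients to recompile.
class Widget {
public:
    Widget();
    ~Widget();
    std::string describe() const;
private:
    struct Impl;                  // defined only in widget.cpp
    std::unique_ptr<Impl> impl_;  // pImpl: clients never see the layout
};

// --- widget.cpp: implementation detail, free to churn. Edits here
// (including fixing a typo in this comment) rebuild just this one file.
struct Widget::Impl {
    std::string name = "grommet";
};

Widget::Widget() : impl_(std::make_unique<Impl>()) {}
Widget::~Widget() = default;  // defined here, where Impl is complete

std::string Widget::describe() const {
    return "widget holding a " + impl_->name;
}
```

The trade-off is an extra pointer indirection and a heap allocation per `Widget`, which is exactly the kind of design-versus-performance balance mentioned earlier in the conversation.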
Mm-hmm. Mm-hmm.
Yeah, I feel like there's a whole category of things like that where whatever group of people you're working with, you all have to commit.
You have to make a shared commitment to doing something a certain way.
Otherwise, it's really not going to work out super well for you.
One of them is testing.
One of them, I think, is a commitment to these
fast build times and structuring the code so
that you can build things incrementally, you can build
things quickly. Another one of those is
deployment, like being able to deploy
safely a running system. You have to decide
to do that at
the start. It's very difficult
to sort of,
both from an organizational standpoint and from a
technical standpoint, add that in later. Right. Continuous builds, continuous deployment, all
these things kind of fit together. Well, they go together really well if you
start them all together, but anyone that's trying to put it in afterwards, it's a pain. Yeah.
It's much more difficult to do. It's not that it can't be done, it just takes
hard work, and it takes commitment from everybody involved. If you have half your team that's
invested in fast builds and the other half is like, yeah, it's fine, I like coffee, you know,
then you're just not going to get to where you want to go. That's it. I mean, like,
there's the whole sword-fighting xkcd of, you know, what are you doing? Right. Yeah. If you see that
as an essential part of your job, it's going to be really hard to come along and have someone
convince you that you should do a whole bunch of work, yeah, to take your build time down from, you
know, a minute to eight seconds so that you can, you know, work in this fashion. Right. Because, A, you've
probably never experienced it before, so you don't really know what it's like, and you don't really
know why it's good.
And B, it's kind of a huge risk.
It's sort of like, well, how do you know
this is really going to turn out like this?
That's a really good point.
Certainly, I think there's a certain amount
of institutionalization that I've seen
with other companies I've worked at
with the C++ developers where it is just taken as read
that, oh, well, it's a 20-minute build
every time you do anything significant.
Nobody touch error.h, because if you touch
error.h, everyone's cursed:
next time they do a pull, they'll
have a 30-minute build. So off you go.
And of course, projects do get
big, and so there are necessary
things that take a while, but
it doesn't
have to be that way.
I remember having the same thing about, and we should perhaps
even talk about this another time,
like IDEs.
I like IDEs perhaps more than VI, although I use both.
And I know you're a VI user and you've seen IDEs
and you appreciate them and everything.
I like IDEs.
It's just that VI is an IDE to you.
I was a huge Eclipse fan.
Well, no, that's Emacs.
You're thinking of Emacs.
But the point I want to make is that I remember seeing
somebody using IntelliJ, as it was then, at a previous job, and they were just like a maestro,
a virtuoso, you know, a virtuoso playing an amazing instrument: the speed that the ideas were
forming in their head to code appearing on the screen, fully formed. Followed by: well, maybe I
shouldn't rename it
that, you know, call the variable that. Oh, hang on a second, this is an interface, pull out an
interface. It was wonderful to watch. And it totally changed the way that I thought about
how you could program. And it moved me much further down the explorative, tap on the keyboard,
look around, maybe this, maybe not this end, versus the, you know, sagely stroke your beard and think
about it
approach that we discussed a little bit last time. So there's definitely something there.
And I think, obviously, as part of that suite of tools, having a fast build, having fast tests,
all factor into that sort of local minimum in the space of things that you can do to be
productive, where you and I currently are, which is fast turnaround on builds, deploys, tests, all those things.
Yeah, I mean, that's a good spot.
Yeah, so I mean, I think this is a really deep topic, right?
Like we're not going to cover this, unfortunately, in 45 minutes.
And I think the aspects, you know, the style of working that we're talking about
with fast builds and fast tests, the unfortunate truth is that
you can't really see the full picture, I feel like, until most of those elements sort of come
into play. And that makes this a complex topic to talk about. But I definitely think,
maybe on the next episode, or an episode after that, getting more into the details of,
you know, practically, how do you do these
things. It sounds like you've had, through
your attempts, you've had some experience with it.
But there are also lots of other people that I know
do this and have done
it quite extensively. So maybe we
could mine the
brain trust of the internet to figure this out.
But it's a very deep topic. I'm
looking forward to talking about it more. Awesome. We'll do another
one of these. We should. Cool. All right. You've been listening to Two's Complement, a programming
podcast by Ben Rady and Matt Godbolt. Find the show transcript and notes at twoscomplement.org.
Contact us on Twitter @twoscp. That's @T-W-O-S-C-P.
Theme music by Inverse Phase.