Embedded - 395: I Can No Longer Play Ping Pong
Episode Date: December 10, 2021

Tyler Hoffman joined us to talk about developing developer tools and how to drag your organization out of the stone age. You can use GDB and Python together? Yes, yes you can. And it will change your... debugging habits. (You can find many other great posts from Memfault's Interrupt blog, including one about Unit Testing Basics.) Tyler is a co-founder at Memfault (memfault.com), a company that works on IoT dashboards and embedded tools. On Twitter, Tyler is @ty_hoff and Memfault is @Memfault. Control-R is a history search in shell commands (magical!). The fuzzy search tool discussed is FZF (probably even more magical!). XKCD comic referenced: xkcd.com/1319. Fitbit's Tower of Terror Bug.
Transcript
I'm Alicia White, and you are listening to Why I Hate Tools.
Your host is Christopher White.
Wait a minute.
Our guest is Tyler Hoffman of Memfault, who has returned to talk with us about tools.
Hey, Tyler. I feel like I've been hoodwinked here, but how are you doing? Welcome back.
It's maybe the closest guest return that we've had, so welcome.
Well, I'm not sure he's going to stay on the line after that, so we'll see.
Yeah, happy to be back. We ran far too long the last time and didn't even cover a second topic.
And the second topic was tools. I had some things, just kind of vaguely, that I wanted to talk about, and then you came up with a whole bunch of other ideas. But I guess the place to start for me is asking you:
I've worked at a lot of places, probably too many, and at many of those places the tools were just kind of an afterthought.
Engineers came up with a collection of things to use as we developed the software.
There wasn't necessarily a well-defined process for things.
Some people even used different stuff.
And then at Fitbit, I remember when the Pebble acquisition came in, you and some other folks really took a more methodical approach to that. That's partly what I want to talk about: how to get a more professional tools organization within software development.
I get to hear this from your perspective now, because we only got it from our perspective, as the company being acquired, and trying to move things in a direction that we thought was right. But it'd be fun to explore whether you thought it was the right direction.
Oh, absolutely. And it really was eye-opening. Like I said, maybe it's a black mark on my career, but I hadn't really seen that kind of seriousness around tools before.
Except maybe at Cisco, where, you know, it's a gigantic company and we had a huge developer infrastructure thing that had to be somewhat well-defined.
But even there, it was like people just kind of did random stuff.
So one of the first things was just getting everybody to be on the same page with tools. Going back to the Fitbit example, we all had IAR, and that was kind of the commonality. Everything else was random, and getting the right versions of things was very difficult, especially when there were more dependencies.
So what were some of the things that you guys came up with? Did that come from experience at Pebble, or did it just seem like the right thing to do?
So here's the brief story of Pebble, and why I felt like we started off on good footing with, you know, process and tools.
The original firmware and hardware and generally software engineers at Pebble were young.
They were from the University of Waterloo.
They were all friends of the CEO generally.
And we were all, I think even from their mouths, we were all pretty naive about how to develop a hardware product.
It was like, can't be so hard.
You write some firmware code, it mostly works
and yes, there are some bricked units here and there
but I think we were taking generally a software approach.
It's like we've all written some form of like a Java application in school,
we've probably written an iOS or an Android app. And when you're writing those pieces of software,
you have a debugger, you have easy logs, you have already tools that capture crashes and
capture all of this information for you, and kind of tell you exactly where it went wrong. And so when the engineers were joining, and even when I was joining, it was still toward this path.
It's like, coding firmware shouldn't be impossible, and shouldn't be as hard as most people make it out to be. And honestly, we didn't really talk to too many other firmware engineers outside of Pebble, I would say. We mostly kept our heads down, working hard almost all the time.
And so when we were like, oh, we now have units in the field.
We need logs.
Let's build a system that automatically does this for us,
and we'll capture logs, and we'll do circular buffers on logs.
Anyways, it was just, we didn't know any better, and we already had a software mentality, and we knew how important those processes and tools were.
And so we just built them not knowing that, like, I think many engineers from 10 years ago and before just never even thought to do it.
And so that's kind of still the status quo a lot of times today in hardware and firmware organizations.
I mean, I've built many circular logging systems.
For sure.
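As an aside for readers: the circular-buffer logging idea Tyler describes (keep only the most recent N entries so field logs fit a fixed memory budget) can be sketched in a few lines. Real firmware implementations are C ring buffers over raw memory; this Python sketch, with illustrative names, only shows the retention behavior.

```python
from collections import deque

class CircularLog:
    """Retain only the most recent `max_lines` entries, like a firmware
    circular log buffer: old entries are overwritten, never the new ones."""

    def __init__(self, max_lines):
        self._lines = deque(maxlen=max_lines)  # deque drops the oldest entry automatically

    def log(self, message):
        self._lines.append(message)

    def dump(self):
        # Oldest surviving entry first, the usual order for a crash report
        return list(self._lines)
```

With `max_lines=3`, logging five events keeps only the last three.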
And I think that the compilers are important and I think the tools are critical.
But I want to spend all my time playing with the technology or with the application or getting things done.
So I have a hard time with tools unless I need them right this second.
But didn't you get irritated when something went wrong with tools and got in the way of
your development?
Yes. And then I would crankily storm out of the building and hope that somebody else would
solve it. That's not entirely true, but yes. I mean, I find working on tools to be an order of magnitude more frustrating than working on hard technical problems, just because I'm working on tools.
We discussed this a little bit last time as well.
It's like, what is the hardest part of our jobs?
Or like, I think you asked me, what is the hardest part of being a firmware engineer?
And I would honestly say it's not working on the actual hardware product.
It's supporting the hardware product before, during, and after it's shipped.
I still feel like that holds true.
And yeah, and I think I have a different approach on it.
I think it sounds like you are very motivated by working on the cool tech,
which is fantastic.
Every company needs those people.
I myself am motivated by getting other engineers
to be better at their jobs.
And I love the compounding effect
that if we have two engineers
and I can write a couple tools or a couple processes
and clean up a few things, that's okay.
We sped it up by maybe 20 to 30%.
But if you have an organization of like 20 or 50 people, and you speed everything up by 10%, you've made an order of magnitude more improvement in how quickly things can actually be done. And that's what I get excited about. That's the role that I carved out for myself at Fitbit, ultimately.
Yeah. And in that experience, I mean, you say 10%, but there were things that were sped up there by much more than 10%. Like 80%. Yeah. We can talk about all the things that we did, which is always fun.
Going back to your question. You asked, how did we, the Pebble engineers, come into Fitbit knowing that this was the right way to do it? It's because we had just done things entirely differently. And, you know, I'm going to say things from a different perspective here: I feel like we had accomplished just as much at Pebble, compared to Fitbit, in terms of the firmware, with maybe 10% to 20% of the number of people working on it.
And that's not to say that we shipped the same number of products, or that we were at the 10-to-50-million-device scale. But in terms of the robustness of the firmware, how often we got Bluetooth bug reports or battery bug reports, comparing those two companies, we did pretty well with the 10 or 15 people that we had working on the project.
To be fair, I agree with you that that's a common thing I experienced at startups, when going from a large company to a smaller company that did the same thing: finding, wow, there's a huge speedup here for some reason.
But to be fair, Fitbit for a long time had a couple of firmware engineers.
I mean, it was really tiny until not that long before Pebble was acquired.
Right, right.
It was fewer than five people for a long time.
And that's including contractors.
Yeah, that's including me.
It's including contractors.
So it was quite small.
So I think part of it was just a really small, small team trying desperately to get stuff out.
On multiple lines.
Yeah.
And that's it.
I think that's when the tools and processes go completely to the side
and never get done is when you have a very small team
and you are just, quote unquote, trying to get things done.
And you're already underwater. There's nobody with time to do it. I mean, it feels like there's nobody with time to do it.
Not when everybody's focused on the ship date. And you feel like it can't change, because Christmas doesn't change.
Correct. Oh my gosh, yes. The number of times that I realize that I'm no longer working for a hardware company, and realize that an Apple announcement or the holiday season doesn't have to stress me out, is amazing.
That was one of the horrible things about working on consumer products: getting, forever, a sense that December is more like the start of school than Christmas vacation.
Exactly.
When you say tools, okay, you know, there's diff and grep, the compiler. What tools are we talking about here?
Great question. So, and this was a contentious point as well,
I actually don't love the word tools.
I think I just like the word non-firmware.
It's a terrible word,
but I think that's actually what I mean when I say tools.
So just a brief history of why the word actually
is a little wrong in my mind.
I did developer productivity at Fitbit. That was generally my role, and I had a few people working with me on that. And I think what a lot of people decided it was, was the tools team.
If it wasn't firmware, and it was written in Python or a web application, it was our responsibility to fix anything and everything, and build anything and everything, that was tools.
And I got frustrated by that, because there are things that probably won't even help a firmware team at large, like a random Python script or tool that somebody needs. And then there are things that are, like we were talking about before, compounding benefits: let's entirely rewrite this process, or entirely change the way that we're doing something. And that's what I had carved a team out to do, those types of things. But you asked, what is a tool?
Yeah, just non-firmware.
If it's not C code, if it's not running on a device,
and it's not directly in line with what the customer will actually see,
I feel like that is kind of what we're trying to talk about today.
It's like, what are the things that support developers, processes, and teams that are not directly going to be customer-facing?
So what are the top three not tools?
I don't know, but I can tell you the three quickest things that I found, at least since we were talking about the Pebble acquisition and Fitbit. What were the three things that we focused on first? We can try that, and then we can see what we uncover.
So the first thing we came to was the slow build time. Or, actually, no, the first one was the developer environments. I think they were just very hard to get right. I remember joining Pebble, and I think I submitted a pull request on the morning of my second day at work, which I thought was really cool.
Like I had the firmware working, I fixed a bug.
I verified it in the UI on my little developer device, and that just worked pretty much out of the box on my Mac. It was nice. It was easy. I loved it.
The comparison is, at Fitbit, I don't think I committed or submitted anything until two or three months in. And by then I was finally getting around to understanding how to get firmware on the device, get my machine working, get IAR working. Everyone had to get a license, which took a while. There were just a lot of problems.
Yeah. So, just to give some context, I remember around that time most developers were using Macs, and we were using IAR, so most developers were running virtual machines.
In Fusion, there was a standard image that everybody installed. So you did all your development in Fusion, on IAR, in Windows, on a Mac.
It was slow, and people didn't like the editor, so some people had various hacks to use their favorite editor on their Mac and share the files. Anyway, just context: it was turtles on top of turtles.
Yeah, and again, it's not bad.
Like, if everyone did this, and this is something, yeah,
so I don't want to keep, I'll come back to this.
So developer environments, the second thing we sped up
or just like knew was going to be an issue
was the compiler and the compiler speed.
And it wasn't the compiler. I'm sure IAR is a fine compiler. I've seen fast builds. I've talked to a bunch of the engineers from the early days at Fitbit, and they were saying the IAR builds were very fast.
That's because there was no code, sorry.
Well, it's because there was no code. And they were probably, maybe, I don't actually know, running on actual Windows machines.
Yes, we were.
But also, IAR itself can be parallelized. You can run it in multiple threads, concurrent builds. But when it's a bunch of Python scripts and external scripts and XML parsing, all placed in there without order,
The build will run slow.
It'll be sequential.
It'll be blocking on a lot of things.
And that's kind of what it came to.
Like, I don't think IAR itself was the problem. It was the build system orchestration.
And those Python scripts did things like, uh, the assets, for...
Exactly, images and fonts and things. So they had to be run. Although they didn't always have to be run, and they really weren't things IAR should be running.
It needed to be part of the build process. So it had to be at least orchestrated by IAR, but it didn't have to be, you know...
Yes, it didn't have to be part of that actual process.
Well, the problem was, IAR, you know, forced you to use their editor. And the CLI version of it wasn't exposed easily enough, and the Linux version was behind, you know, guarded in a safe somewhere that they wouldn't let anyone touch.
And when we were traveling on a plane and trying to do work, you couldn't have the license and you couldn't compile, so then you had to take the one offline license. I don't know, it wasn't great. And I don't think some developers could work for like a week because they didn't have a license.
So the second thing we tried to do was fix the compiler.
And the easiest way to fix the compiler for us at Pebble, well, we didn't like to use Windows, and we just knew that Windows couldn't be fixed in a way that would be fast for firmware development. And so we switched to GCC, and we slowly moved the entire build system to using GCC on Mac or Linux.
We kind of had Windows working along
for the factory builds.
I think those needed to be run
in Windows for some reason.
And this is Chris Coleman, the CTO of Memfault, and like his greatest gift to Fitbit. I don't even know if you guys know this. The way we got the GCC build to work and to stay synced with the IAR build, forever, basically, was he had built an incredible amount of XML IAR-build-file parsing that converted it into a Makefile.
He parsed all the project files for all the different builds, parsed the build commands themselves and, basically, three of the metadata things, and then converted those all into Makefile commands. And that was the first part of the build step, running these Python scripts. And then it would compile everything in Make with a bunch of parallel threads. And then the build would go down from, like, 18 minutes to two minutes. And sequential builds were like 30 seconds instead of like 10 minutes.
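The converter itself isn't public, so as a rough illustration of the approach, walk an XML project file, collect the source entries, and emit Makefile rules, here is a hedged Python sketch. The `<file><name>` layout, function name, and `$(CC)`/`$(CFLAGS)` variables are all assumptions for illustration, not IAR's actual project schema.

```python
import xml.etree.ElementTree as ET

def iar_project_to_make_rules(ewp_xml):
    """Hypothetical sketch: collect C source paths from an IAR-style XML
    project file and emit one Makefile object rule per source. The
    <file><name>...</name></file> layout is an assumption, not IAR's
    real .ewp schema."""
    root = ET.fromstring(ewp_xml)
    sources = [
        node.text for node in root.iter("name")
        if node.text and node.text.endswith(".c")
    ]
    rules = []
    for src in sources:
        obj = src.rsplit(".", 1)[0] + ".o"
        # One rule per object file; Make can then build them with -j in parallel
        rules.append(f"{obj}: {src}\n\t$(CC) $(CFLAGS) -c {src} -o {obj}")
    return "\n\n".join(rules)
```

The payoff of a translation like this is that Make, unlike a sequential script pipeline, can schedule independent object-file rules across all cores.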
Yeah, I do remember him doing that.
And that was, well,
don't need to get into specifics of IAR,
but the way they manage projects is
very painful if you have to merge
things. Anyway.
But it isn't as bad as Eclipse.
I don't know.
They're all bad.
They're all great, and they're all cumbersome.
Yeah.
I mean, it's great if you can click a play button.
Like, that's wonderful.
It's just, it tries to do too much, hide too much, and then you can't optimize anything, basically, at that point.
They're very good for single developers.
They make a lot of stuff super easy.
And then once you start to scale them to bigger teams,
those things get in the way.
I would argue you could get up to five developers.
But yes, it's a small team thing.
But what you're saying is, one of the revelations I had was when we switched to GCC, and it's like, okay, my build from totally clean is two minutes, instead of 18 minutes, go get a coffee and, you know, try to figure out something to do.
It was amazing.
It was like, well, this is ridiculous.
That was the biggest complaint. I think I had a couple of people message me on the side, half joking, being like, I don't love this. I can no longer play ping pong when my builds are running. I don't have enough time anymore.
That's all right. Well, I'm glad. I'm too efficient, you have to stop this. You expect me to work in this time?
Yeah.
So developer environments,
getting them set up was like the first painful thing.
You asked me for my top three.
So I'm going to go to the third one now.
The second one was just speed of the build itself.
Like we knew that was so important.
We had it at Pebble. Every time the build would get slower than like two minutes,
it was kind of like all hands on deck, like come fix the build and get it back down to a minute. And then the third
thing is just process. You had alluded to this earlier, Chris, where you were saying every developer has their own scripts, or everyone has their own process, or how did you get to use your favorite code editor outside of IAR. Everyone had their own way. It's both a good and a bad thing. The one thing that I wanted to do, and this is what I spent most of my time on early on, was getting people to do the same thing. And if everyone has one single way to do things
like packaging assets or loading assets on a device,
if you have one way to take a screenshot of the device
or to run the build or to debug the build,
if there's only one way to access all of that, then
if some people know a better way to improve that system,
they can apply the changes
and then everyone will benefit from it.
And so one of the things that I did very early on was basically wrap all of our tools, you may call them, or processes, in something called Invoke, which is just a Python CLI wrapper. So to build, you didn't have to run three scripts in various different places, you just ran invoke build. Or if you wanted to debug, you would just run invoke debug. And you didn't have to set up your environment in a certain way and get it all figured out.
And then when we radically switched things under the hood, people didn't even know. Like, oh, we figured out a newer way to build, or a newer way to debug, or a better way to load assets. That happened all the time. We kept improving the speed at which assets were loaded onto the device, but people didn't have to change their ways. They just continued to run invoke asset.load, and it just magically sped up by, you know, two or three x, here and there, every now and then.
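The real wrapper used the Invoke package (pyinvoke); since its tasks ultimately just map a name to a function with sane defaults baked in, here is a stdlib-only sketch of that pattern. The task bodies, command strings, and paths are illustrative assumptions, and commands are returned rather than executed.

```python
import os

# Minimal task registry mimicking the "one entry point for everything"
# pattern of `invoke build` / `invoke debug` described above.
TASKS = {}

def task(fn):
    """Register a function as a named task."""
    TASKS[fn.__name__] = fn
    return fn

@task
def build():
    # Sane default baked in: use every core, no "make -j8" folklore required
    jobs = os.cpu_count() or 1
    return f"make -j{jobs}"

@task
def debug():
    return "arm-none-eabi-gdb build/firmware.elf"  # hypothetical path

def run(name):
    # A real CLI would execute the command; here we just return it
    return TASKS[name]()
```

The point is the indirection: when the team later changes how building or debugging works, only the task body changes, and everyone's `invoke build` silently gets faster.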
A fourth thing I will add that kind of all those things depend on is getting things installed in a consistent way
on every developer's machine, which was another revelation.
And I think we used Conda, right?
Yes.
To deploy all of this stuff in a way that every developer
had the same install.
And so you didn't, wow, I was about to say you didn't hand somebody a floppy disk.
So I'm going to just leave.
I'm just going to head out.
I'll talk to you guys later.
Are you sure you ate lunch?
Yeah, I did.
But you didn't say, okay, go to this internal website, grab the zip file, run the installer, and make sure you get this version. Or, you know, go to these eight places and get the latest version of this, this, this, this, and this, with a big install document.
It was: type this command, okay, all your stuff's installed. Now if you want to, like you said, do a build or all that other stuff, look at this invoke command. It's a top-level thing that drives all of those, without having you type make -j8.
Yes. That was my most frustrating thing. Oh my goodness, the number of times that I was frustrated by the fact that it wasn't just -j8 by default. Everyone had an eight-core machine at the time, and people were, like, two years into developing and never knew about -j8, or just the -j parameter.
Well, that was the thing with IAR. Everybody had IAR, except me, because I go and look for the options on everything. Anytime I look at a tool, I want to see all the options. But nobody had IAR's thing turned up either, and it was always set to one core by default. And so it's like, oh, my builds take 10 minutes. Well, have you tried this? Oh, I didn't know it was there. It was all just random knowledge you had to find on a wiki page.
And that was the point of invoke. You know, we detected the number of cores in your machine, we detected a lot of things about the environment. And you could, of course, turn these things off in an options file, but all of the defaults were sane and wonderful. That was the beauty of it.
And yeah, to go back, we did use Conda. Conda, for the listeners, is... you could kind of equate it to Dockerfiles, or running things in Docker, but it's more like running a Python virtual environment that also has a lot of system packages. So, like, ImageMagick and all these other system-level packages, you could install those through Conda too. And so you'd basically end up installing one or two things using brew or apt, or on Windows, and then the rest of it was just automatically pushed onto your machine by running conda env update.
And that was great. And if it ever got messed up, which it did like once a month, you could delete the entire folder, just run it again, and walk away. And five minutes later, your machine's back up and building, which is great.
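For listeners who haven't used Conda, the workflow Tyler describes is driven by a single environment file checked into the repo. This is a hypothetical `environment.yml` sketch, not Fitbit's actual file; package names and versions are illustrative.

```yaml
# environment.yml -- hypothetical sketch of a shared Conda environment.
# Every developer runs `conda env update -f environment.yml` and gets the
# same toolchain; if it ever breaks, delete the env folder and re-run.
name: firmware
channels:
  - conda-forge
dependencies:
  - python=3.9            # interpreter for the build scripts and invoke tasks
  - make
  - gcc-arm-none-eabi     # illustrative: cross toolchain as a Conda package
  - imagemagick           # system-level packages install alongside Python ones
  - pip
  - pip:
      - invoke            # the CLI wrapper discussed above
```

The design point is that the environment definition is versioned alongside the firmware, so "get the right versions of things" stops being tribal knowledge.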
Would you still use conda or would you go for Docker now?
I would still use Conda.
I think it's both.
I think if everyone is on Linux, you can use Docker
because file system performance on Linux through Docker
is like 100%.
There's no file system degradation.
If it's a Mac shop or if it's a Windows shop,
the file system performance between Docker
and then Mac and Windows is not great,
or at least it wasn't at the time.
And so you're trying to edit your files
in Mac or Windows on that file system
because you want your IDE to be able to process
all the metadata very quickly,
but it would slow the builds down or vice versa. The builds would be fast, but the editor would be
super slow. And in those cases, I would probably just use Conda again because there are packages
for GCC, GDB, everything you could need. And it's pretty simple otherwise.
So you normalized the build environment. Did you normalize the editing environment?
We... it's a great question. There were pushes to do that, but I think that is something
that I don't believe in. I don't think everyone needs to use the same editing environment. There are people that will take
Emacs or Vim or Eclipse to the grave with them. And I think that's actually fair.
And so what I spent a good amount of time on is, for the people who wanted a graphical editor, who missed the IDE... or, sorry, IAR had a bunch of views, like viewing registers and threads, and graphical debugging, and point and click. For the engineers who wanted that, I got Eclipse working. So you could click the play button and the debug button, drop down and select what build you were running, and you could graphically debug. And so for the people who wanted an IDE, we did settle on Eclipse, and that was easily working. Otherwise, everyone used their own editor. People worked hard to get VS Code working well, too.
It was early days for VS Code, though.
Yeah, early days for VS Code. There were the embedded plugins.
There were two of them, I think, at the time, and they were getting there.
But I don't think I actually worked on anything VS Code for Fitbit.
Would you do VS Code now, or would you do Eclipse?
VS Code, most likely.
I really like VS Code. I'm surprised.
And I understand not trying to push a single editor on everyone. I think you're right there. Because there are people who are so ingrained in what they have. As a consultant, I will use any editor they ask me to, except VI. Sorry, people. Just can't.
I mean, I do some of my development in nano because that's the easiest thing to do.
And so my goal is basically to be able to walk up to whoever's computer I'm trying to talk to and be able to use it.
And that's a different goal than what a company needs.
I'm trying to be flexible because that's what I should be for my
clients. But if you're working full-time for a company, trying to figure out how much leeway to
give in IDEs, it's a tough problem, because you do want to be able to work with them.
At least in those cases, yes, I liked to get the processes to be the same. But, I mean, my terrible analogy: you don't tell an artist to switch from a crayon to a pen to a paintbrush to charcoal. You just, you know, tell them, have your artwork done in this way, by this time, and that sort of thing. And whenever I was coding with somebody, I very rarely used their keyboard. I mostly was like, you should do it this way, click this button, type these commands. And then I kind of helped people get into the flow of how things worked.
Just really quickly, a fun fact: I interviewed with a company for my first job.
And they did pair programming.
It was like their thing.
You pair programmed most of your development.
And yeah, they had like a, this is your environment.
This is the keyboards that you use generally.
You need to use VI or Vim.
You need to use these plugins.
And we expect that it'll take you like a month or two to get ramped up on all these tools.
But you will use the same tools as everyone else.
Thought that was kind of cool.
It's a different choice, but it's a very viable one.
It's a viable choice that everybody use a different IDE, but the same build tools.
It is not really viable to use different IDEs and different build tools.
At that point, what you've got is just a disaster.
A mess.
Yeah.
Okay.
So I think we covered kind of what you were talking about
in the notes for ground rules for developer productivity,
just accidentally by talking about Fitbit and stuff.
Did you want to talk a little bit about, you have some notes about evolution of tools and I found that pretty interesting
because I've thought about this stuff, but only in the sense that I've done each of these
steps accidentally. But I think walking people through how you go from the need for a script or tool or something
to taking it to something that actually increases
developer productivity is an interesting path.
This came up as, so now you get a little bit into the brains of Memfault, this came up in an exercise. There is something called a maturity model, and I didn't really know what they were. I'd never heard of one. But what it is, basically, is you give yourself or your process or your company a rating on a scale from one to ten, where one is the early days of your maturity model, like, we can barely do something right, whereas ten is fully automated, you know, there's probably some ML and AI at this point in time. That's where maturity models are going. And I think of
tools and processes kind of the same way now. And so the early days of a tool is like, it doesn't
exist. And I did give an example, and a little bit of a script that I wrote for this, which was like: you have firmware, and it crashes, and it spits out a couple of addresses, you know, 0x8000, that each map to a line of code. And you need to take those addresses, use a symbol file, symbolicate them, and then you can kind of get a backtrace on what happened.
Sure, you just search through the memory map file, and then you find the address that's closest,
but not, I mean, you have to go to the next greater, not next less than. And then you type
in that command, you search for that function, and then you find where it's instantiated.
And then you look around there to make sure there aren't any other functions
just in case you don't have it quite right.
And then you look in that function and you try to find which line,
but that's kind of hard because now you have the list file.
It's 445 and it's kind of time to go home.
Exactly. Yes.
And you realize you don't even have the right symbol file
Yes. And then you realize you're out of code space, so some developer turns on LTO, and then you realize that all of this is a mess anyways, and it doesn't make sense.
Yeah, I mean, this is exactly how you do it. And we've all been there too. So that's like a one. You know, you're very early days. You may not even know how to do it, but somebody on a wiki page has written: get out the map file and do a binary search in this map file for, like, maybe a function.
And then you realize there's this tool
called addr2line. You can use that.
And what does that come with? I forget. Is that just part of the GNU tools?
Yeah, binutils or the GNU tools. You install the Arm embedded toolchain, at least, and it comes with it.
Okay.
But you run the tool, you give it a symbol file, hopefully it's the right one, and then you give it an address, and it just gives you the file and the line number.
And so the aspiring engineer is like, oh, this is annoying. I can write a little script that wraps addr2line, and maybe it can take in multiple addresses and give me multiple functions and line numbers back at the same time.
And that's like, you're moving along.
And now you share that with your coworkers, and now it's revision controlled, and everyone's kind of using the same tool, but it's still very annoying to use.
But only like three people are using it. The other eight
know about it, but they're
just not really willing to
try that. They're still good with the map file.
And there's one person who doesn't know how to use
addr2line at all. Oh yeah, no,
that person is
just going through the object file on their
own. Oh man,
I'm letting you guys say all these
things. This is not me.
Well, I've been that one person, so I'm okay.
Fair enough. And this keeps getting better and keeps evolving.
like maybe your script now automatically detects the symbol file that needs to be downloaded from
a remote server or Artifactory, which is what we were using at Fitbit.
And then eventually like all the engineers actually start using it and then
it's going to become better.
And then what I found in evolving these tools further, which I don't think
many firmware engineers do today,
is to build a web application.
Like that was one of the bigger
realizations from the CEO of Memfault at Pebble, Francois. He spun up a server that hosted a bunch
of these small Python applications that we would use. And one of them was kind of exactly this.
It would take a bunch of addresses, go download the symbol file and spit them back out
kind of in a UI. And so without even having an environment, without even having Python in your
computer, you could log into the website, use your Pebble login, and you could debug your firmware
that way. And that allowed pretty much anyone, you know, it became part of the process that if
you're in QA or if you're, you know, if you're a web developer or if you're the CTO that like has
no hand in firmware, but like it's technical, you can take the things that spit out in your email,
these addresses, throw them into this tool and you can kind of know where it was. And you can
kind of know if it's the same issue that you've been seeing all the time that everyone's talking
about, like, oh, the circular buffer crash, same one. And then these things keep evolving and evolving, and sure enough,
the final piece of it is you actually never see these addresses anymore.
They're automatically symbolicated in your CLI. It's actually cool, the ESP-IDF,
this, I mean, this is the problem, because there are so many tools around it. The ESP-IDF, if you use
their terminal wrapper, when the firmware boots, it spits out a build ID or the firmware version,
and then it also takes a symbol file. And whenever it sees those addresses,
it will automatically symbolicate the function and line numbers. And you will actually never
see those addresses ever; it will just automatically print out the file and line number,
which I think is super cool.
And that's just tooling improving, slowly but surely.
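The monitor behavior Tyler describes can be approximated with a small log filter: scan each line for code addresses and rewrite them from a lookup table. In this sketch the `SYMBOLS` table is invented for illustration; a real tool would build it from the ELF via addr2line or pyelftools.

```python
import re

# Hypothetical symbol map: (start_address, size, "file:line (function)").
# A real version would be generated from the symbol file, not hand-written.
SYMBOLS = [
    (0x08000100, 0x40, "uart.c:88 (uart_write)"),
    (0x08000200, 0x80, "heap.c:140 (heap_alloc)"),
]

# Match addresses in this (assumed) flash region.
ADDR_RE = re.compile(r"0x0800[0-9a-fA-F]{4}")

def lookup(addr):
    """Map an address to file:line if it falls inside a known symbol."""
    for start, size, location in SYMBOLS:
        if start <= addr < start + size:
            return location
    return None

def symbolicate_line(line):
    """Replace raw code addresses in a log line with file:line info."""
    def repl(match):
        location = lookup(int(match.group(0), 16))
        return location if location else match.group(0)
    return ADDR_RE.sub(repl, line)
```

Wrapping a serial console with a filter like this is what makes the addresses disappear: you never see them, only file and line.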
Isn't some of this just doing what my IDE does when I run?
What do you mean?
I mean, if I have JTAG on and it crashes,
it will give me a backtrace,
and then I can just click on the stack trace or the backtrace,
and it will go to wherever I'm supposed to be.
Ooh, for sure.
But what happens when your device is no longer connected to a debugger?
Most of your time is spent debugging bugs outside of the debugger, I would argue.
Oh, yeah.
I mean, if you're in the debugger, your life is so much better than when you're trying to debug bugs that are not...
You can't see them.
You can't touch them.
You can't reproduce them.
From Fitbit experience, though, it is extremely difficult to go for a run with your debugger attached.
Yes, exactly.
Yes.
A laptop is not compatible with jogging.
And those are the times when you need to actually, because that's how your customers are going to use their products, too, right?
Like, your customers are not going to use a product sitting on a desk, flat, not using it.
They're going to be...
Shaking it whenever they want the step to go.
Yeah. They're going to be shaking it. They're going to be, uh, I have to make a call out to
my favorite blog post, the Tower of Terror Fitbit blog post.
The Fitbit would crash in zero gravity, in free fall, on a theme park ride, which,
again, great blog post.
But those are the places you need to test your device
out in the real field.
And we'll continue to talk about tools,
but the only way you can debug products in the field
is by using tools and processes that you've created.
There are no out-of-the-box ways to debug a device without a debugger.
Again, slight pitch for Memfault.
That is the way you do it now.
But yeah, no embedded systems, no firmware.
They don't have these things built in.
So companies have to build them.
And if you don't build them, you're just not going to be able to debug these products without a debugger. And this is where Memfault comes in because you've
built them and you're willing to help people use your tools to work on their products.
Exactly. The vision for us is you check out a new project using the Nordic SDK or Zephyr or the ESP-IDF, and the first thing
that you do is install Memfault, which is, you know, getting closer to just a checkbox.
And then from here on out, all of your logs, all of your crashes, core dumps,
metrics, everything is automatically collected and sent up and is in a dashboard
waiting for you to find the issues. Rather than doing exactly what you said was really hard
before: we just have to get stuff done, there's no time to build tools, there's no time
to capture logs automatically, there's no time to capture metrics and build a whole system
to do that, we just need to get the product working.
But addr2line, I didn't even know it existed until now. I'm doing that thing where I'm not supposed to be surprised.
I'm a little surprised. I mean, I know how to do it manually, so I've always done it manually.
Yeah.
And, you know, I do develop on Linux some, but it's not my native environment.
And the GNU tools, I've used them for a long time.
I'm happy with Makefile.
But I started out with the expensive compilers, and they didn't have that sort of thing.
They didn't do it.
Oh, right.
They just didn't have that tool.
Yeah.
Objdump is still something that I look at and go, that's really useful.
But you're talking about more tools.
How does it not become just a wall of tools that I don't know how to use and feel like I should, but I don't.
And it doesn't, that wall doesn't help me be productive because it's just too many.
It's a great question.
I believe I spent at least 30 to 40%.
So again, developer productivity engineer; I was mostly hired
on as an engineer, that was my job. Most people thought I was coding most of the time.
I feel like I spent at least 30 or 40% marketing. I was doing internal marketing and
training for what these tools we were building are, why you should learn how to use Makefiles,
why you should learn about this thing that we wrote,
or the wrapper, the Invoke wrapper,
how you debug efficiently in GDB without using a GUI,
how you write GDB Python scripts
to automate the things that you do every single day that don't
need to take, you know, 30 minutes; it could just take a couple seconds. So that's what I did.
Half of my day, I probably walked around the office asking people: tell
me what you're working on, what are you frustrated with? And a lot of the times it was like, oh, did
you know about this script? Did you know about this tool that we have? Do you know that you can do this automatically in your debugger? That sort
of thing. And, tell me all the blogs that you read that tell you all this stuff. There's
just not many. I wouldn't say there are many blogs explaining and dumbing everything down, that
you should use these tools for these reasons, this
is how you use them. But there are a ton of those for software, in other worlds, I guess.
I think that's one of the things I was kind of alluding to at the beginning.
And I think this is where you're kind of heading with your question, Alicia:
it's not enough to have the tools and the processes.
There have been a lot of places I've been at
where there's just piles of stuff sitting around
that could be useful.
Oh, many wiki pages I've written have never been read.
But it takes a commitment on the part of the company
to have people whose job it is to be the educators
and to make sure that these things are actually getting some uptake
and that people are understanding them, and then measure that there have been improvements.
And that has to be a dedicated person or people, at least an engineer who's, you know,
50% time doing this stuff.
Oh, that's pretty rare, from my perspective, in the embedded world.
Yeah. And their title may or may not be firmware engineer. That's the thing: my title
was firmware engineer, but I didn't write much firmware. Very little that I wrote
was actually shipped to customers. It was a lot of internal tooling, a lot of unit tests, and a lot of web applications.
How do you convince people to adopt it, especially the cranky curmudgeons?
Mm-hmm.
Well, they didn't exist.
I know people at Fitbit.
I knew them.
I was leaving at the time, but there were some,
um, I, I probably traveled between all of those offices more than anyone,
except for like the directors and the managers, I would say in terms of like,
you know, just a normal engineer. I,
I spent time next to everyone's desk, um,
just watching them work and helping them out. And it was,
they always had frustrations, and I always tried to have answers,
and that was just my role. That's exactly what I wanted to do at the time. And it did
help. And so. Yeah. After the second or third time you helped me answer a question, I would definitely start coming to you with questions.
Which wasn't always great either, because I wrote some pretty awesome wiki pages that I mostly pointed people to.
I'm like, oh, you want to do this?
Like, here's the wiki page that is so up to date
it hurts.
Yeah. The curmudgeon people, at the end of the day, just
want to get their job done too. And if you convince them that this is the best way to do it, they're
going to use the new tool or the new way.
What about some of the things that they do
that are more common in the software world
than firmware, like Jenkins?
Go on, elaborate with the question.
Like unit tests and-
Continuous integration.
Continuous integration and automatic builds
and all of that.
That, oh man.
That's another show. That's a whole different type of tool, right?
I guess the tools we've been talking about, generally, are ones where
you do something as an individual and you try to convince people to
use it, and eventually everyone does use
it, and then it becomes part of the process. CI, and probably not unit tests, but CI specifically,
becomes part of the process like day seven. Somebody builds it, it's immediately
useful, and that is the way that you make sure the build doesn't fail. And hopefully that's
the way that you mark a release for deployment, and build the final release that you actually
ship to customers. And so with that, I actually like CI the best, because you don't have to convince
anyone to use it. It just becomes the only way to do things. You have no choice. What's hard is
convincing people to do things on their computer differently.
But if you just change the way that you actually release the firmware, and you disallow any firmware
being built on your own computer from being shipped, then it's a pretty easy sell.
Unit tests are incredibly difficult; it's hard to convince people that they're worth their time.
And the only way that I could get anyone to use them was literally
sitting next to their desk and pair programming. It's like,
hey, it looks like you're struggling with this. Or, you've been working on this thing that I thought was kind of simple for two or three days.
Let's take another approach.
Let's wrap this function in a unit test
and see if we can quickly iterate.
And it was even worse
when the build times were 10 to 15 minutes,
because their iteration cycle is 10 to 15 minutes
on testing something.
And the iteration cycle on a unit test is on the
order of, you know, 500 milliseconds to five seconds, depending on how you've set them up.
People just want to get their job done. And if you convince them that that is the quickest
way they'll get that job done, they will generally favor that. Whether they will be able to do it
without you sitting next to their desk coaching them along, or whether they take the initiative to learn how
everything works, is a different story.
Do you have a preferred framework for unit testing?
We all love to talk about our favorites of things, and I don't have a favorite. What I generally turn to is CppUTest. There's a blog post on that as well on Interrupt, Memfault's Interrupt blog.
It just doesn't matter. At the end of the day, it is a C or C++ framework that creates binaries, which run a bunch of functions and give
you a status report at the end. I've seen people also successfully build robust and perfect software
using one very large main function that runs on the device or on host, and every single
assert will basically stop the program. There's no status
report; it's just, this assert failed. It's the idea of doing it at all. The other side of it is,
the bare minimum of a unit test framework, in my opinion, is being able to build tests separately. So,
I'm testing this module, this needs to be a separate binary. I'm building this other module,
I want to switch in a couple of modules, like a fake and a stub or a mock, depending on which
piece I'm testing, and that needs to be another binary. And they have to be debuggable. I have to
be able to throw this binary into a debugger, which is why I almost always advocate for running unit tests on the host machine, like on your Mac or on your Linux box, because getting it into a debugger is so much easier.
So much easier.
And being able to quickly iterate on something, to be able to build and run a single unit test. You know, the suite, for
better or worse, at previous companies, you run test and that would take two, three,
four minutes to run the whole suite of tests, and sometimes five or six minutes, depending on
how many tests were being committed or if one was really long. You just need to be able to
run one very quickly. And those are just the bare minimums:
if you don't have these bare minimums of a unit test framework, or main
file that you're trying to test with, you should work on that. But I generally don't mind which framework.
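As a rough sketch of that bare minimum, here's the same pattern in Python rather than C: the module under test takes its hardware dependency as a parameter, so a test can swap in a fake and run entirely on the host. All the names here (`FakeTempSensor`, `average_temperature`) are hypothetical stand-ins for your own modules.

```python
class FakeTempSensor:
    """Fake standing in for a real I2C temperature sensor driver."""
    def __init__(self, readings):
        self._readings = iter(readings)

    def read_celsius(self):
        # Replays canned readings instead of touching hardware.
        return next(self._readings)

def average_temperature(sensor, samples):
    """Module under test: average N readings from whatever sensor it is given."""
    return sum(sensor.read_celsius() for _ in range(samples)) / samples

def test_average_temperature():
    fake = FakeTempSensor([20.0, 22.0, 24.0])
    assert average_temperature(fake, 3) == 22.0

if __name__ == "__main__":
    test_average_temperature()
    print("ok")
```

Because the fake is just another object with the same interface, this test builds and runs in milliseconds and drops straight into a host debugger.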
You had the levels of maturity for talking about the tools, and I think that's true for unit
testing as well. Sometimes, depending on the
environment or the client or what's going on, I will take whatever module I'm having trouble with
or building, and I will write some wrapper around it so that I can
develop just it. And I might write something underneath it that
simulates what the hardware would do. But usually that's in memory or with text files,
like a text file that will read in ADC values so that I can pretend to do my
algorithm. And then I look at the output, or maybe I graph the output, really,
and then once that works, I'm done. It's a test to develop. It's a development test,
but it's not going to be useful in the long run. I mean, I might run it again later if I think I'm
changing this module, but it doesn't feel like it's useful to look at all of those graphs every time I commit. Is that a different kind of test? And I'm still doing it the unit test way,
where I'm, you know, writing a little bit of code and testing it and making sure it breaks.
Yeah, I would still, I don't think there's a better word for it, at least that I know of. I would still call it a unit test.
But there's nothing stopping you from committing that into the repo
and at least verifying that the results of the algorithm or the module don't change. Like, instead of
viewing a graph, like actually viewing a graph to verify the results is really great. But you can
also just take the data points,
write that into an assertion,
and just verify that the data points don't ever change either.
Because inevitably, everything around that module,
and maybe even in the module,
will be refactored at some point.
And somebody will love you for having committed this test
to verify that this works.
And I actually saw a bunch of remnants of that
in the various repos that I worked on at Fitbit,
where somebody had written a main function
that runs on the device that tests this one specific module.
And I'd find it two years later
and be like, oh, there is no way in many moons
that I will actually get this running again.
And so it was always nice when somebody committed a unit test to, you know, the overarching system, and it ran on CI,
and you just verified that it never broke.
And with my graphs example, usually the next step
to committing it, or to make it part of CI, is maybe not,
does the value not change at all?
Because sometimes, you know, you have things,
floating point numbers being what they are, you know, within an epsilon.
And that was a step that I don't see people take. They're like, okay, well, it doesn't matter because this works
once and that was all I needed. But making it work over and over again really does help.
Just to make sure things don't get broken, so you don't have to do that:
okay, it worked two months ago, so now I'm going to go a month back into version control.
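That next step might look like this: capture the known-good output once as golden data committed to the repo, then assert against it within an epsilon so floating-point noise doesn't cause false failures. The `iir_filter` here is a toy stand-in for whatever algorithm produced the graphs.

```python
import math

def iir_filter(samples, alpha=0.25):
    """Toy algorithm under test: a one-pole low-pass filter."""
    out, state = [], 0.0
    for s in samples:
        state += alpha * (s - state)
        out.append(state)
    return out

# Golden output captured once from a known-good run.
GOLDEN = [0.25, 0.4375, 0.578125]

def test_against_golden():
    result = iir_filter([1.0, 1.0, 1.0])
    assert len(result) == len(GOLDEN)
    for got, want in zip(result, GOLDEN):
        # Compare within an epsilon rather than exactly, because float
        # results can shift slightly across compilers and refactors.
        assert math.isclose(got, want, rel_tol=1e-9, abs_tol=1e-12)
```

The graph you eyeballed once becomes an assertion that runs on every commit, and the tolerance is what keeps it from becoming a flaky test.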
And then somebody on Slack says, hey, have you tried git bisect?
That was also a lot of training I did: how to use git bisect to find where
faults happened. Yeah. And, I mean, sorry to touch on that, but flaky tests are terrible.
It's the unit test that fails one out of
every 10 times, or one out of every 100 times, in CI that is really frustrating. And it kind of is
a floating point number, or a race condition that you accidentally added, or some
global state not being cleaned up between tests, that will cause a lot of those issues. And so
that's always an impediment to merging your unit test that you just wrote, but don't really want to commit to the repo.
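The global-state flake Tyler mentions is easy to reproduce in miniature. In this invented example, a module-level cache leaks between tests; the reset step in each test is what makes them order-independent.

```python
# Invented module with hidden global state (a cache): the kind of thing
# that makes a test pass when run alone but fail after another test.
_cache = {}

def lookup_config(key):
    if key not in _cache:
        _cache[key] = "default"   # pretend this was an expensive load
    return _cache[key]

def set_config(key, value):
    _cache[key] = value

def reset_cache():
    """The fix: give tests a way to restore a known-clean state."""
    _cache.clear()

def test_lookup_returns_default():
    # Without this reset, running test_set_overrides first would leave
    # "custom" in the cache and this test would fail intermittently
    # depending on test order.
    reset_cache()
    assert lookup_config("mode") == "default"

def test_set_overrides():
    reset_cache()
    set_config("mode", "custom")
    assert lookup_config("mode") == "custom"
```

Race conditions and float comparisons need their own fixes, but the cleanup-between-tests discipline shown here removes a whole class of one-in-ten failures.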
Yeah.
I'm working on a system now that has unit tests, which is great, except it's a machine learning system and it fails at least one unit test for no particular reason pretty much every time.
Yeah.
I want to go back a little bit to the web apps
because I found that really interesting.
Not something I would have considered.
Yeah.
I mean, we've been doing ML stuff for a while now,
so we're living in Python a lot,
and we use Jupyter Notebooks, which are web-based.
That's a quick way of getting stuff running
without having somebody need to install
a bunch of Python dependencies and what have you,
which is always a nightmare. But it seems like a really powerful way to get things that
developers shouldn't have to worry about installing and keeping up to date in a place where
it's always right and simple things run there. Is there a limit to that? I know VS
Code now works in the browser. At some point it feels like everything should just move to the
browser, and then you don't have to install anything ever. Where's the appropriate place
for those, and where do you see that there are maybe limits, where it shouldn't be?
How do you tell which should be...
Yeah, yeah. Which should be a web application versus which should be run locally.
Yeah.
Got it.
I think if Google has their way,
there's likely going to be no limits
because you now, I mean, it's so cool.
Chrome now has web Bluetooth
and web serial, web USB,
where basically you plug in your development device
and you just use the chrome browser and you can send in in our case we always send assets over the
the serial port but you can send assets to it you can is this like like like hyperterm or teraterm
now lives in my browser no the drivers live in the browser.
There's a user-level stack for all of that stuff.
It doesn't go to the kernel.
It's all just part of the browser.
So the browser itself can connect to Bluetooth devices or to USB devices.
But if I want to connect it to a UART, am I looking at HyperTerm or PuTTY or something like that?
It gets exposed as a raw API in the Chrome browser. So, the Chrome
JavaScript API: on my web page I can use JavaScript to basically say, and I'm faking
the API here, but something like webSerial.queryDevices, and then you see your device in that
list. You say connect to device, and then the device connects, and then you say send bytes or
receive bytes, in JavaScript,
in the browser itself.
You're not coding the browser.
You can literally go to, like, tylerhoffmanswebdebugger.com,
and it'll just send you a JavaScript file,
which will then connect locally to those devices connected to your
computer over serial.
That would be magical for
manufacturing, and for education, and then for, well, deployment of tools.
Yeah. So the possibilities there are vast. We had an engineer at Fitbit, more of a manager, a high-level engineer, who wrote a lot
of these. He basically took what I had tried to do with Invoke, which was a lot of local
scripts that people ran locally to orchestrate their Mac or their machine to do things, and he wrote
a lot of this in the browser. And so if you wanted to send assets to a device,
where I had written invoke.assets.load or something like that,
you would drag and drop the assets file into the browser,
and then it would connect to the Fitbit device
and send that over using the Chrome browser.
And you would query for logs from a device.
And basically all of the serial CLI was wrapped
so that all of the requests and responses were all kind of orchestrated with buttons.
It was really cool and it worked really well. And the hidden benefit of the web applications
that I didn't realize until I started just using them more and more
at Pebble was, yes, firmware engineers can probably figure out both how to use a web
application and also how to use local tools. But if you're on any other team in your company,
you will not have the firmware developer environment that you need to run those local tools.
And you will probably never have it.
The salesperson will not go spend three months downloading IAR and a
Windows VM or whatever to load the assets or debug the device. But they will very easily open a web browser and plug their device in
over USB. And if it can, you know, talk to this device without them actually having any knowledge,
that's amazing. Now your whole company can kind of do debugging, or other things you need to
do on your device, without an environment. And we built a lot of those
tools that anyone could use. And that's kind of where we
always focus on web applications. It was like, great, you wrote a local script that only you
can use or only we can use, like, this is only so useful. So go take a week and like create a
web application for it. And everyone can use it then. That was kind of what we did at Pebble.
It was like, you prove that this is valuable go do
it for everyone.
Now, there's an xkcd cartoon that seems slightly relevant, where there's a graph of how long it would take you to automate something versus just doing it.
Yes, yes.
How do you decide you shouldn't make web apps, you
shouldn't have a script for everybody, you should in fact keep your little script to
yourself?
I like to follow, as a rough guideline, the rule of three. If I've done it once, or it's
one engineer that needs it, it's fine having it be the one-off terrible way to
do it. If it happens again, it's fine if that person teaches, trains, you know, sends it up somewhere that somebody else can download it;
it's fine for two. But as soon as it's three people, or it's the third time I've done something that's
really annoying, I really strive to then clean it up, and I'll quote-unquote productize it.
It now becomes a product, and it now becomes something that we will support and that we will value, and it will hopefully never, you know, become stale unless we don't need it anymore.
But also like all these things take time too.
Supporting tools takes a lot of time because there's always somebody who's changed something and they still want it to work, but that wasn't what you meant for it to do.
And yet it's not a bad idea.
But I had plans this weekend.
Another hidden benefit of the web applications is that you run your
web application in a Docker image on a web server that very probably never
Like you can spin up a web application from 10 years ago and it will probably still work
pretty well.
And so if you build a tool that does addr2line and you deploy it on a web server, and you never take that web server down,
it will work forever. No matter if your local machine or your brew update goes bad,
or Mac OS updates. And now you have a bunch of kernel errors or permission errors. Like that
web application will continue to work. And that was one of the hidden benefits
that I didn't realize for a while.
It was like, oh, I made this four years ago
and it's still working
and I've had to put no effort into fixing it ever,
but I've had to clean up the local version
a bunch of times
due to the Python 3 upgrades and everything.
That was always really nice.
I just want to do everything in the browser now.
GDB and Python.
I've never really played with them together.
I mean, sure, I can imagine sometimes scripting things in GDB
because I get bored, but GDB has some scripting.
Correct.
In GDB, you can write Python. I think you had a podcast about this a while
ago. I think I listened to that one.
Don't expect us to remember.
Yeah, fair enough. Anyway, I think hopefully a lot of people by now know that GDB has a Python interface,
and you can orchestrate mostly everything in GDB. And whenever you can't do
something directly in Python, you can always kind of shell out to GDB's normal interface with
gdb.execute and just orchestrate GDB itself. My favorite Easter
egg of GDB is that everything that is not exposed, or that you feel like you should be able to do,
there is likely a maintenance command for it.
There is a whole wealth of commands
that basically power a lot of this stuff in GDB,
but it's all under the maintenance namespace.
A very cool place, anyway.
But you can do all this through Python.
And so, to give the grand vision, one of
my favorite things that we had built, both at Pebble and at Fitbit (an engineer did it there),
is we ran a doctor command in GDB using Python. What the doctor command would do,
for any device that is connected over the debugger: you run doctor,
and it will look at all of the global variables, or a lot of them. It'll look at the
state, it'll look at Bluetooth connection status, it'll look at Wi-Fi connection status, threads,
memory, heap, block pools. And then it'll just guess as to what could
be wrong. So it'll be like, hey, it looks like your heap is almost out of space, it looks like
there's a deadlock here, or it looks like you have a stack overflow, your watermark here
is very high or very low. It was just a one-stop shop for quickly diagnosing what could be the issue.
And if anyone found something that they could add to that command, they would add it,
and then anyone else could use and discover it if they were just running the same doctor command as well.
That was super cool.
And that's the easiest and most applicable example for, I think, a lot of teams.
Oh, wait a minute. Are you in GDB and executing Python scripts? Or are you in Python and executing GDB commands?
You can do both, but this is mostly in Python. So you type... Great question. I probably should have figured that out first. You load up
GDB, you connect the device. I highly advocate for this; this was one of the more important commands
that I would wrap in Invoke. So you would basically run invoke debug, which would grab
the symbol file, spin up GDB, and load all of the common scripts that I or other team members had written.
And everyone would have access to all those. The opposite of that is everyone has their own GDB
Python scripts that they've written that they don't share, or they don't put in the repo,
or they're in the repo but no one knows about them, so they don't source them.
And so that's why I like to wrap the command. So you're in GDB,
and it loads all of these scripts that are basically written in Python but can be loaded
into GDB, and now you have access to them. So you can type just the command name, and it's actually
executing Python code in that case. But it has access to the memory space, so you can, you know, print a variable, you can
print the type of a variable, you can see all the fields of a struct, you can run backtrace and
get all the threads and iterate over everything.
And this is something where, if you also had
a piece of semi-custom hardware, or not even custom hardware, something with peripheral registers and things
that weren't exposed in your IDE,
you could decode them using a script like that, right?
Exactly, yes.
There's a GDB Python package for SVD files.
I don't exactly know what it is,
but you can load SVD files into GDB using Python as well.
And you can kind of get at all the peripheral registers that way as well.
And the coolest part about the doctor command and the Python thing is,
at Pebble and then at Fitbit,
we both built a way to basically capture core dumps
from devices that were remote.
And so your device would crash.
It would capture a core dump,
which is essentially the memory space of a device
and the registers that were running at that exact time.
You can actually load these into GDB.
With a decent amount of work.
Yes. You can convert that into a file that you can then load into GDB. And just as you can run
doctor on GDB connected to a device over JTAG, you can run this same doctor command on a core dump. And by that point,
you can do so much. How cool is that, that you can basically receive a report from a device,
you know, halfway across the world, it sends you a core dump, you run doctor on it, and you
immediately know that, oh, this weird behavior was exhibited because we were out of heap. That's
the issue; let's go fix our memory leak.
Okay, but doctor is not just something I can run. This is, you have to make it.
You have to make it. And there's nothing that
GDB is going to give you out of the box that will fix all your problems, especially for embedded. But there are, again,
scattered all over the place and all over GitHub and not super shared,
various GDB scripts that will print out all the threads in FreeRTOS, or all of
the heap information, in FreeRTOS or in Zephyr or, you know, whatever RTOS.
And you can find those on GitHub
and you can load them up and whatnot
and build your own doctor command.
But it's just an API that allows you to very quickly
and easily access all of the peripheral state
and memory space of a device
that is connected to a debugger.
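[Editor's note: for readers who want to see what Tyler is describing, a minimal "doctor"-style GDB command written against GDB's Python API might look like the sketch below. This is illustrative, not the actual Pebble or Fitbit tooling: the FreeRTOS heap symbol `xFreeBytesRemaining` and the Cortex-M fault register address are stated assumptions, and a real doctor command does far more.]

```python
# Sketch of a custom "doctor" GDB command using GDB's Python API.
# Load inside GDB with `source doctor.py`, then run `doctor`.
# Symbol names below are assumptions about the target firmware.
try:
    import gdb
    RUNNING_IN_GDB = True
except ImportError:
    RUNNING_IN_GDB = False  # the gdb module only exists inside GDB itself

if RUNNING_IN_GDB:
    class Doctor(gdb.Command):
        """Print a quick health summary of the attached target."""

        def __init__(self):
            super().__init__("doctor", gdb.COMMAND_USER)

        def invoke(self, arg, from_tty):
            # Read a global via the symbol table. This works identically
            # whether GDB is attached over JTAG or loaded from a core dump.
            free_heap = gdb.parse_and_eval("xFreeBytesRemaining")
            print("free heap: {} bytes".format(int(free_heap)))

            # Peek a memory-mapped register directly: 0xE000ED28 is the
            # Cortex-M Configurable Fault Status Register (CFSR).
            cfsr = gdb.selected_inferior().read_memory(0xE000ED28, 4)
            print("CFSR: 0x{:08x}".format(
                int.from_bytes(cfsr.tobytes(), "little")))

    Doctor()  # instantiating the class registers the command with GDB
```

The same script file works against a live JTAG session or a converted core dump, which is what makes the remote-crash-report workflow Tyler describes possible.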
And I'll make sure to put a link in;
you have a blog post about it.
So I will.
We have a bunch of blog posts about GDB with Python
because it's so underutilized.
Yeah, it really is.
And it's something that's in the back of my mind
when I use GDB all the time
and then I just forget about it
because either other people have done the legwork
to make cool scripts or I just forget about it.
Well, I mean, I'm still like excited about Python having a debugger. So,
you know, it's confusing now that you're going to put...
Your Python in your debugger.
Yeah, exactly. My brain only holds so much each week.
Tyler, before we wrap up, because we were getting close to the end,
there's one thing I
do want you to teach me about, and it's in your list here: fuzzy search for terminal history.
I don't know what that is, but it sounds like something I need.
Um, and so, I mentioned before, I love to watch people...
That sounded weird. I enjoy watching people work.
I enjoy helping them and improving processes
for the individual and the team at large.
And so I have watched so many people work
that I realize deeply that just finding that command
that they ran in the terminal at one point in time,
trying to find that again,
has proven to be very difficult for a lot of people.
Or they just don't know it exists.
Like they, you know,
I at one point also didn't know that Control-R
searched through my terminal history.
What?
And so like, and I, you know,
one of the other things is I had to watch people
type the same command over and over again.
And that command is something like make -j8, pass this makefile, pass these build arguments.
The up arrow, push the up arrow.
Well, the up arrow only gets you so far.
History pipe grep.
I mean, I never use the cat-history-grep. I don't do that; I have an alias that's just
two letters.
So I'm sorry, I'm not paying any attention anymore, because I just type Control-R.
I'm going to give you a bronze star for that effort. The way that I suggest, and the way that
I do it as well: there is a tool called fzf, and there are probably many other tools as well that you can use,
but it just takes your history file
and runs a very quick fuzzy search on it.
I think it's written in Go now.
So it's actually like insanely quick to run.
But if your history file is like 100,000 commands,
you can pretty much find that command
you ran three years ago at your company during onboarding, and you're like,
oh, I wonder how I did that.
It's incredibly quick to run.
And yeah, you can just run grep on it too.
But it allows you to within just like a couple seconds,
like find the command, click enter.
It's now pasted in your terminal,
and you can run it again.
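[Editor's note: for anyone following along at home, the setup being described is roughly the shell-config fragment below. The Homebrew install path is the usual default, but check your own installation; fzf's bundled install script wires the key bindings into your shell rc file.]

```shell
# Install fzf (macOS via Homebrew; Linux package managers carry it too).
brew install fzf

# One-time setup: wires up fzf's key bindings, including the fuzzy
# Ctrl-R history search, by appending to ~/.bashrc or ~/.zshrc.
$(brew --prefix)/opt/fzf/install

# Without fzf, plain history search still works, e.g.:
#   history | grep 'make -j8'
# ...or the shell's built-in (exact-match) Ctrl-R search.
```

After restarting the shell, Ctrl-R opens fzf's interactive fuzzy finder over the whole history file instead of the exact-match readline search.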
I'm still stuck on control R.
But it's my favorite thing. And it's the first thing I tell people to install when I watch them, if they don't know about Control-R or they click
the up arrow 20 times. I'm like, no, no: brew install fzf.
I'm sophisticated. I use grep. Yeah. That's number one.
Exactly.
Okay.
Well, I need to go install FZF.
FZF.
FZF.
Frank Zulu Frank.
And Christopher apparently is stuck in a control R loop,
which I don't even know what that means, but that's cool.
Should I come over to your computer and look? Just hit control R and then you start typing
and then it just gives you a command
that matches what you're typing.
Yeah, but it's a very crude search.
I don't think it's case insensitive,
and it has to be an exact match.
And so it's like...
I don't care.
But it's a lot faster
because I have like really long Python lines
that I run a bunch of times
and that will be a lot faster for me to find stuff.
Well, if they're really long lines that you write a bunch of times,
you should probably wrap them in something that makes it easy to...
Yes, I know.
Yes, we know.
I mean, yeah.
But if I can't see all the parameters,
then I'm not sure I'm doing the right thing.
But I have automated a lot of the command line options
to sensible default, so I don't have to type in...
Yes, that's the important one.
As long as there's a constant improvement,
then I'm okay with it.
That's actually a really good thing to end on, sort of.
As long as there's a constant improvement,
Tyler is good with it.
One of my core values internally is baby steps.
Like as long, yeah, constant improvement.
Tyler, do you have any thoughts you'd like to leave us with?
I talked a lot here about improving tools
and building tools and building web applications.
And honestly, it takes a lot of time and energy and effort.
I have to pitch at this point that a lot of it
is now built into Memfault, so at least
check it out. And if not that, read our blog. A lot of this has been spit out on the Interrupt
blog as well. But also, reach out. I love talking about this stuff.
Our guest has been Tyler Hoffman, co-founder of Memfault. You can find the Memfault blog by typing Memfault interrupt
or interrupt Python GDB or interrupt unit tests.
And I'm sure you'll find it.
Thanks, Tyler.
This was chock full of information.
I think I'm going to have to go listen to this.
Well, I have to listen to it anyway because I have to edit it.
But I'm going to do it with a notepad.
What will our third podcast be?
That's the question.
What topic?
But we'll figure that out.
Something non-tech related.
Thank you to Christopher for producing and hosting.
Thank you to our Patreon listener
Slack group for questions, in particular Philip Johnston. And thank you for listening. You can
always contact us at show@embedded.fm, or hit the contact link on the top of the embedded.fm
page, unless you're on a mobile device, in which case it's in the hamburger. I didn't even know it was called a hamburger, did you? Now a quote to leave you with, from William Kamkwamba. I didn't have a drill,
so I had to make my own. First, I heated a long nail in the fire, then drove it through half a
maize cob, creating a handle. I placed the nail back in the coals until it became red hot, then used it
to bore holes into both sides of plastic blades. That's from the book The Boy Who Harnessed the
Wind: Creating Currents of Electricity and Hope.